I've been using GitHub Copilot for a while, which suggests the right thing 9 times out of 10, maybe more, with some unusual quirks (I started writing an example JSON file with a key of "city" and it suggested "Abilene"), but I hadn't got round to trying ChatGPT in earnest.
Super impressive when asking in the abstract but haven't tried it for anything I'm actually trying to do yet. Going all in on it next week
― Tow Law City (cherry blossom), Sunday, 21 May 2023 07:49 (two years ago)
I've been using it too, I'd say more like 50% but I guess it depends on what you're trying to do, also copilot x is allegedly much better as it learns more of your own codebase. even when it does get it right you have to read it carefully to be extra sure it's not doing something ever so slightly inappropriate. but when it does work it is satisfying.
― ledge, Sunday, 21 May 2023 08:40 (two years ago)
even 50% is an overestimate, most of the time i finish typing before it has time to make a suggestion. some colleagues say it's made them much more productive though.
― ledge, Sunday, 21 May 2023 08:52 (two years ago)
I've probably been writing fairly generic stuff in the period I've been using it which might explain the accuracy.
Where it really came in useful recently was I was wanting to add some comments to a bunch of bash functions in a file accumulated over time
As soon as I typed the # it suggested the perfect comment for every single function, except one which it guessed wildly incorrectly
― Tow Law City (cherry blossom), Sunday, 21 May 2023 09:22 (two years ago)
Asked ChatGPT to write me an Ansible playbook to install Docker on a Raspberry Pi. I didn't actually mention Ansible, it just inferred it from "playbook". The playbook worked first time, out of the box
But on the other hand it missed the mark with persistently changing dns servers. Took 5 attempts, each option looked right, but didn't work. I didn't know why, and neither did chatGPT. Got there in the end but only after it told me to edit a file that didn't exist but was similar in name to one that did
Did some more ansible stuff where it gave me some unsupported parameters. I said "chatGpt, you have given me some unsupported parameters!" and it said sorry about that, and gave me a better version without the unsupported parameters
― Tow Law City (cherry blossom), Monday, 22 May 2023 04:43 (two years ago)
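For readers curious what such a playbook looks like, here's a hedged sketch of the kind of thing ChatGPT plausibly generated. This is not the poster's actual playbook: the host group name is a placeholder, and details like the apt repository architecture differ between 32-bit and 64-bit Raspberry Pi OS.

```yaml
# Hypothetical playbook: install Docker on a Raspberry Pi (Debian-based OS).
# "raspberrypi" is a placeholder inventory group.
- name: Install Docker on a Raspberry Pi
  hosts: raspberrypi
  become: true
  tasks:
    - name: Install prerequisites
      ansible.builtin.apt:
        name:
          - ca-certificates
          - curl
          - gnupg
        state: present
        update_cache: true

    - name: Add Docker's GPG key
      ansible.builtin.apt_key:
        url: https://download.docker.com/linux/debian/gpg
        state: present

    - name: Add the Docker apt repository
      ansible.builtin.apt_repository:
        repo: "deb https://download.docker.com/linux/debian {{ ansible_distribution_release }} stable"
        state: present

    - name: Install Docker Engine
      ansible.builtin.apt:
        name: docker-ce
        state: present
        update_cache: true

    - name: Ensure the Docker service is running and enabled
      ansible.builtin.service:
        name: docker
        state: started
        enabled: true
```

All modules here (`ansible.builtin.apt`, `apt_key`, `apt_repository`, `service`) are standard Ansible built-ins, which is presumably why ChatGPT handles this kind of task well.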
Some of the time I feel like "this isn't that different to googling" but mostly a sense of unreality. I started to write some code and couldn't shake the feeling of being at the desk with a quill, people peering in and pointing and exclaiming "what is that?"
― Tow Law City (cherry blossom), Monday, 22 May 2023 04:49 (two years ago)
https://architizer.com/blog/inspiration/stories/ai-will-destroy-creativity-if-we-let-it/
my official word on this topic
― treeship., Tuesday, 23 May 2023 17:30 (two years ago)
it looks like Nvidia will be the beneficiary of the largest single day market cap increase ever, driven by its AI-related business
― Muad'Doob (Moodles), Thursday, 25 May 2023 14:52 (two years ago)
Not to sound like an Adobe shill, but the new AI-driven generative fill in the new Photoshop Beta is amazing. Went down a rabbit hole yesterday extending the width of photos just for the fun of it.
― bookmarkflaglink (Darin), Thursday, 25 May 2023 15:46 (two years ago)
I was pretty impressed that I was able to plug in an LSAT logical reasoning question and it not only answered correctly but properly explained why the answer was correct when asked. It's hard for me to wrap my mind around the fact that it can do that via some kind of largely probabilistic word association, like how?
― longtime caller, first time listener (man alive), Thursday, 25 May 2023 15:58 (two years ago)
was it a novel question or one that can already be found on the internet? it can get standard winograd sentences correct but it's easy to make new ones that it gets wrong.
― ledge, Thursday, 25 May 2023 18:26 (two years ago)
this was a good read and it's clear that you are much more knowledgeable about the ethics and philosophy of art than i am. but i'm curious how much time you've spent actually using and exploring these tools? talking specifically about the stable diffusion image models, anyone who has spent a non-trivial amount of time working with these tools can see that they are capable of producing true emergent beauty from that problematic sewer of data. though like some others in the thread, my creative curiosity lies in finding the seams and ripping them open
― butch wig (diamonddave85), Thursday, 25 May 2023 18:47 (two years ago)
― 龜, Thursday, 25 May 2023 22:21 (two years ago)
Everyone is so worried about people dying because AI becomes sentient and plots to destroy humanity, that we're not focused enough on the more likely reason AI will kill people: That we outsource basic human kindness, care, and dignity to machines. pic.twitter.com/ht393mYQRe— Joel S. (@jh_swanson) May 25, 2023
― xyzzzz__, Friday, 26 May 2023 07:31 (two years ago)
my work just announced that using chat GPT and other generative AI is restricted, which is interesting considering I was just told by a developer a few weeks ago that I should use chat GPT to answer questions I had about building a data model
― Muad'Doob (Moodles), Friday, 26 May 2023 13:18 (two years ago)
we got the same type of email message
on the other hand, a couple coworkers authorized to use the Azure ChatGPT data actually came up with a useful way to use it to classify some user text ("is this comment on the nature of the project or a specific measurement?" type of thing) and I was surprised
― mh, Friday, 26 May 2023 15:45 (two years ago)
I tried the latest GPT-4 version and was surprised it struggled so much with the following prompt:
The following is a pytest unit test that tests whether a Python function, `rotation_matrix_to_rotation_vector`, is equivalent to the implementation provided by SciPy (`scipy.spatial.transform.Rotation`):

```Python
from scipy.spatial.transform import Rotation
import numpy
import torch.testing

def test_rotation_matrix_to_rotation_vector():
    x = torch.from_numpy(Rotation.random(32).as_matrix())

    expected = torch.from_numpy(
        Rotation.from_matrix(x).as_rotvec(),
    )

    torch.testing.assert_close(
        rotation_matrix_to_rotation_vector(x),
        expected,
    )

test_rotation_matrix_to_rotation_vector()
```

Implement the following `rotation_matrix_to_rotation_vector` function so it passes the provided unit test:

```Python
from torch import Tensor

def rotation_matrix_to_rotation_vector(
    rotation_matrix: Tensor,
) -> Tensor:
    raise NotImplementedError
```

In the implementation, ensure epsilon values are defined by `torch.finfo` of the input type.
It recalled the appropriate technique and even appropriate issues to mitigate (e.g., singularities around 0 and pi) but it eventually conceded after 20 attempts.
― Allen (etaeoe), Friday, 26 May 2023 16:18 (two years ago)
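For what it's worth, here is one sketch of the conversion the prompt asks for: the standard SO(3) log map, with epsilon taken from `torch.finfo` as the prompt requires. This is my own attempt, not the solution GPT-4 produced, and it deliberately does not add a dedicated branch for rotations at exactly π, where the skew-symmetric part vanishes and the axis must be recovered another way.

```python
import torch
from torch import Tensor

def rotation_matrix_to_rotation_vector(rotation_matrix: Tensor) -> Tensor:
    """Batched SO(3) log map: (..., 3, 3) rotation matrices -> (..., 3) rotation vectors."""
    eps = torch.finfo(rotation_matrix.dtype).eps

    # Rotation angle from the trace: cos(theta) = (tr(R) - 1) / 2.
    # Clamping away from +/-1 keeps acos (and the sin below) well behaved.
    trace = rotation_matrix.diagonal(dim1=-2, dim2=-1).sum(-1)
    theta = torch.acos(((trace - 1.0) / 2.0).clamp(-1.0 + eps, 1.0 - eps))

    # Unnormalized axis from the skew-symmetric part: R - R^T = 2 sin(theta) [u]_x.
    axis = torch.stack(
        (
            rotation_matrix[..., 2, 1] - rotation_matrix[..., 1, 2],
            rotation_matrix[..., 0, 2] - rotation_matrix[..., 2, 0],
            rotation_matrix[..., 1, 0] - rotation_matrix[..., 0, 1],
        ),
        dim=-1,
    )

    # rotvec = theta * u = (theta / (2 sin(theta))) * (unnormalized axis);
    # sin(theta) > 0 is guaranteed by the clamp above.
    scale = theta / (2.0 * torch.sin(theta))
    return scale.unsqueeze(-1) * axis
```

Against the test in the prompt this should agree with SciPy's `as_rotvec()` for random rotations, though a production implementation would need special handling for angles at or very near π, where the formula above degenerates.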
https://www.theguardian.com/technology/2023/may/26/rishi-sunak-races-to-tighten-rules-for-ai-amid-fears-of-existential-risk
When you know chat GPT just isn't a big deal.
― xyzzzz__, Friday, 26 May 2023 21:45 (two years ago)
What shocks me most about AI is how rapidly many people are eager to trust it with important tasks despite not understanding what the product fundamentally is. It's very good at predicting the next word in a sentence—a hyper-advanced autocomplete. It doesn't *think creatively.* https://t.co/E3XHtypOGY— poorly hidden account (@poorly_hidden) May 27, 2023
― xyzzzz__, Sunday, 28 May 2023 16:27 (two years ago)
ok this is a lot more interesting than the "extend classic paintings" thing that's going around
― frogbs, Wednesday, 31 May 2023 02:51 (two years ago)
referring to this:
Lots of people are using Photoshop's new AI generation to expand classic paintings like the Mona Lisa, but the true test is to crop it smaller, then expand it to areas where we already know what it looks like to see how accurately it re-generates it. Here's my test. Nailed it! pic.twitter.com/a6ZIKC3fhj— 🏴☠️ Maddox 🏴☠️ (@maddoxrules) May 31, 2023
― frogbs, Wednesday, 31 May 2023 02:52 (two years ago)
https://www.cnbc.com/2023/05/31/ai-poses-human-extinction-risk-sam-altman-and-other-tech-leaders-warn.html
If this is true why not stop development at least until there is an international consensus on alignment? (which might be never, but so what?)
― treeship., Wednesday, 31 May 2023 13:51 (two years ago)
Non-proliferation treaties for these chatbots.
― treeship., Wednesday, 31 May 2023 13:52 (two years ago)
I do find it hard to believe that, per the tweet above, a hyper-advanced autocomplete poses an existential threat to our species.
― ledge, Wednesday, 31 May 2023 13:53 (two years ago)
I’m very skeptical of the ai ceos who say this stuff. If i was sam altman and i believed i was the ceo of a company producing a technology that would kill everyone, i would stop being the ceo of that company and go to bartending school or something.
― treeship., Wednesday, 31 May 2023 13:55 (two years ago)
seems to leave out an important bit about how, exactly, this is going to kill us all
― frogbs, Wednesday, 31 May 2023 13:58 (two years ago)
https://www.lesswrong.com/posts/eaDCgdkbsfGqpWazi/the-basic-reasons-i-expect-agi-ruin
Here is one explanation
― treeship., Wednesday, 31 May 2023 14:01 (two years ago)
Dwarkesh is thinking about the end of humanity as a causal chain with many links and if any of them are broken it means humans will continue on, while Eliezer thinks of the continuity of humanity (in the face of AGI) as a causal chain with many links and if any of them are broken it means humanity ends. Or perhaps more discretely, Eliezer thinks there are a few very hard things which humanity could do to continue in the face of AI, and absent one of those occurring, the end is a matter of when, not if, and the when is much closer than most other people think.

Anyway, I think each of Dwarkesh and Eliezer believe the other one falls on the side of extraordinary claims require extraordinary evidence - Dwarkesh thinking the end of humanity is "wild" and Eliezer believing humanity's viability in the face of AGI is "wild" (though not in the negative sense).
Basically the theory is we cannot predict how these things will behave because we don’t even know how gpt-4 works. And I guess the idea is that we will give it the ability to manufacture things—like machines or even viruses I guess—and not just the ability to generate plans that humans will either green light or not?
― treeship., Wednesday, 31 May 2023 14:04 (two years ago)
5. STEM-level AGI timelines don't look that long
I won't try to argue for this proposition
― ledge, Wednesday, 31 May 2023 14:09 (two years ago)
to some extent yes we don't know how these things behave and they can produce strange results but ... they produce images and text.
i mean if we were looking at wiring one of these things up to a nuclear button then sure i'd be worried.
― ledge, Wednesday, 31 May 2023 14:11 (two years ago)
To be fair I am surprised that it can write coherent essays that put forward relatively complex arguments. It seems that scientific thinking might not lag too far behind that.
― treeship., Wednesday, 31 May 2023 14:13 (two years ago)
if by 'put forward' you mean 'rehash existing' then yeah...
― ledge, Wednesday, 31 May 2023 14:14 (two years ago)
It’s still a pretty complex process.
― treeship., Wednesday, 31 May 2023 14:16 (two years ago)
Like teaching a bright 9th grader to write as clearly as gpt-4 is difficult
― treeship., Wednesday, 31 May 2023 14:18 (two years ago)
‘Remarkable’ AI tool designs mRNA vaccines that are more potent and stable
probably just rehashing existing vaccines, though. lame!
― budo jeru, Wednesday, 31 May 2023 14:20 (two years ago)
I’m sort of playing devil’s advocate here. I am on record saying that gpt and midjourney are not really “creative” and that this element of them has been overhyped. But i also just want to understand what exactly this technology is and how it is likely to progress.
― treeship., Wednesday, 31 May 2023 14:21 (two years ago)
idk this whole thing seems to hinge on the idea that some AI chatbot is going to figure out how to take over various computer systems and start unilaterally making decisions. as someone who works for a company with 20 different applications let me tell you it's very difficult sometimes to get them to talk to each other, in fact we have entire teams who mostly just write translation logic, so the idea that AI, which already has a major problem with making shit up, is gonna start screwing with nuclear reactors (??) or whatever it's supposed to do seems pretty farfetched to me. this is the same reason why the whole "NFTs will let you use your avatar in every single video game and give you unique powers" idea is idiotic
― frogbs, Wednesday, 31 May 2023 14:26 (two years ago)
Hope so.
My biggest hope right now is that ai will cause us to stop measuring human value in terms of “productivity,” and somehow lead to socialism in a way that is very different from what marx predicted. (The proletariat probably will not have much leverage in a world with even more—radically more—automation. Or perhaps they will… but as consumers not producers…)
My biggest fear is that mass job displacement will make people feel lost and useless and also render them unable to care for themselves and their families. Perhaps the issue will be patched by some inadequate ubi system but in general there will be no cultural shift that would allow people to lead meaningful lives in this “post-work” future.
― treeship., Wednesday, 31 May 2023 14:32 (two years ago)
where's the energy going to come from to power all these "AGI" applications?
― rob, Wednesday, 31 May 2023 14:36 (two years ago)
this is impressive! but it still looks like a highly efficient limited application tool, not anywhere near AGI or likely to pose an existential threat.
― ledge, Wednesday, 31 May 2023 14:38 (two years ago)
where's the energy going to come from to power all these "AGI" applications?― rob, Wednesday, May 31, 2023 10:36 AM (five minutes ago) bookmarkflaglink
― rob, Wednesday, May 31, 2023 10:36 AM (five minutes ago) bookmarkflaglink
Isn’t part of the dream that these bad boys will discover new cheap, clean ways of powering everything? Like salt pellets or something.
― treeship., Wednesday, 31 May 2023 14:42 (two years ago)
every time the AI guys say we should be afraid of AI, it's marketing. don't be sucked in.
― 龜, Wednesday, 31 May 2023 14:43 (two years ago)
AGI? (To me it means Adjusted Gross Income)
― Every post of mine is an expression of eternity (Boring, Maryland), Wednesday, 31 May 2023 14:55 (two years ago)
artificial general intelligence, i.e. one that can do anything, not just play chess or write essays or make jodorowsky's tron. coined because we can't stop everyone calling the current narrow scope systems AI even though they're clearly not intelligent.
― ledge, Wednesday, 31 May 2023 14:59 (two years ago)
― treeship., Wednesday, 31 May 2023
Surely it's a skill that needs to be grown over time in a human being. They aren't comparing like with like.
I wonder if teachers -- seen one or two saying this -- are actually saying they are bad at their jobs.
― xyzzzz__, Wednesday, 31 May 2023 15:05 (two years ago)
nice, thank you
― treeship., Wednesday, 31 May 2023 15:14 (two years ago)
Lol I didn't know. You're welcome.
― xyzzzz__, Wednesday, 31 May 2023 15:16 (two years ago)
I read this and haven't got much further from the feeling that AI is just about rattling cages.
I get about 12 emails a week from random companies where the entire pitch can be broken down to "for some reason, we involve AI in something a sensor could do"
the computing-greediness of AI aside, things like cell seal checking don't need anything generative or semi-intelligent
— Hazel Southwell (@HSouthwellFE) May 31, 2023
― xyzzzz__, Wednesday, 31 May 2023 15:26 (two years ago)
just read this, not because I found it particularly interesting or compelling, but because this guy used to be a famous Magic: the Gathering player who I actually met when I was 13
https://thezvi.substack.com/p/response-to-tyler-cowens-existential
maybe I'm just a smooth brained imbecile but I still have trouble getting the big picture here. "computers will be smarter than humans" is not exactly terrifying to me because I think in a lot of ways they already are. all the hardest things to do are already being done with the assistance of computers. a lot of these doomsday scenarios hinge on two things - one, that it becomes super smart and therefore somewhat infallible, which I already think is pretty far-fetched because it's trained on human data, and we are extremely fallible. plus I don't know if making these things more powerful is going to necessarily deal with the problem that it fundamentally can't separate good data from bad.
secondly, and this is the one I really have trouble wrapping my head around, but aren't computers just a form of data input/output? this idea they're gonna "take over" is missing one important step, they don't really have a physical manifestation, and as far as I know we're not planning to build an army of millions of AI-powered humanoid robots. yes, there are powerful text and image generation tools and potentially much more coming very soon, but this is still all I/O stuff, all these doomsday scenarios hinge on it somehow generating physical capabilities, or at least the ability to generate them and take control of the bulldozers or whatever. like this whole argument that Zvi is making here that human intelligence is going to "compete" with artificial intelligence...isn't us having bodies kind of a big difference there?
(thirdly, as rob alludes to, all these scenarios seem to rely on Moore's law just continuing onto infinity, and there also being massive sources of power available to make this all run)
― frogbs, Wednesday, 31 May 2023 15:31 (two years ago)
pic.twitter.com/bPlLaoDmtu— William Friedkin Truths (@LazlosGhost) May 31, 2023
― 𝔠𝔞𝔢𝔨 (caek), Wednesday, 31 May 2023 15:51 (two years ago)