ok this is a lot more interesting than the "extend classic paintings" thing that's going around
― frogbs, Wednesday, 31 May 2023 02:51 (one year ago) link
referring to this:
Lots of people are using Photoshop's new AI generation to expand classic paintings like the Mona Lisa, but the true test is to crop it smaller, then expand it to areas where we already know what it looks like to see how accurately it re-generates it. Here's my test. Nailed it! pic.twitter.com/a6ZIKC3fhj— Maddox (@maddoxrules) May 31, 2023
― frogbs, Wednesday, 31 May 2023 02:52 (one year ago) link
https://www.cnbc.com/2023/05/31/ai-poses-human-extinction-risk-sam-altman-and-other-tech-leaders-warn.html
If this is true why not stop development at least until there is an international consensus on alignment? (which might be never, but so what?)
― treeship., Wednesday, 31 May 2023 13:51 (one year ago) link
Non-proliferation treaties for these chatbots.
― treeship., Wednesday, 31 May 2023 13:52 (one year ago) link
I do find it hard to believe that, per the tweet above, a hyper-advanced autocomplete poses an existential threat to our species.
― ledge, Wednesday, 31 May 2023 13:53 (one year ago) link
I'm very skeptical of the ai ceos who say this stuff. If i was sam altman and i believed i was the ceo of a company producing a technology that would kill everyone, i would stop being the ceo of that company and go to bartending school or something.
― treeship., Wednesday, 31 May 2023 13:55 (one year ago) link
seems to leave out an important bit about how, exactly, this is going to kill us all
― frogbs, Wednesday, 31 May 2023 13:58 (one year ago) link
https://www.lesswrong.com/posts/eaDCgdkbsfGqpWazi/the-basic-reasons-i-expect-agi-ruin
Here is one explanation
― treeship., Wednesday, 31 May 2023 14:01 (one year ago) link
Dwarkesh is thinking about the end of humanity as a causal chain with many links and if any of them are broken it means humans will continue on, while Eliezer thinks of the continuity of humanity (in the face of AGI) as a causal chain with many links and if any of them are broken it means humanity ends. Or perhaps more discretely, Eliezer thinks there are a few very hard things which humanity could do to continue in the face of AI, and absent one of those occurring, the end is a matter of when, not if, and the when is much closer than most other people think.
Anyway, I think each of Dwarkesh and Eliezer believe the other one falls on the side of extraordinary claims require extraordinary evidence - Dwarkesh thinking the end of humanity is "wild" and Eliezer believing humanity's viability in the face of AGI is "wild" (though not in the negative sense).
Basically the theory is we cannot predict how these things will behave because we don't even know how gpt-4 works. And I guess the idea is that we will give it the ability to manufacture things—like machines or even viruses I guess—and not just the ability to generate plans that humans will either green light or not?
― treeship., Wednesday, 31 May 2023 14:04 (one year ago) link
5. STEM-level AGI timelines don't look that long
I won't try to argue for this proposition
― ledge, Wednesday, 31 May 2023 14:09 (one year ago) link
to some extent yes we don't know how these things behave and they can produce strange results but ... they produce images and text.
i mean if we were looking at wiring one of these things up to a nuclear button then sure i'd be worried.
― ledge, Wednesday, 31 May 2023 14:11 (one year ago) link
To be fair I am surprised that it can write coherent essays that put forward relatively complex arguments. It seems that scientific thinking might not lag too far behind that.
― treeship., Wednesday, 31 May 2023 14:13 (one year ago) link
if by 'put forward' you mean 'rehash existing' then yeah...
― ledge, Wednesday, 31 May 2023 14:14 (one year ago) link
It's still a pretty complex process.
― treeship., Wednesday, 31 May 2023 14:16 (one year ago) link
Like teaching a bright 9th grader to write as clearly as gpt-4 is difficult
― treeship., Wednesday, 31 May 2023 14:18 (one year ago) link
"Remarkable" AI tool designs mRNA vaccines that are more potent and stable
probably just rehashing existing vaccines, though. lame!
― budo jeru, Wednesday, 31 May 2023 14:20 (one year ago) link
I'm sort of playing devil's advocate here. I am on record saying that gpt and midjourney are not really "creative" and that this element of them has been overhyped. But i also just want to understand what exactly this technology is and how it is likely to progress.
― treeship., Wednesday, 31 May 2023 14:21 (one year ago) link
idk this whole thing seems to hinge on the idea that some AI chatbot is going to figure out how to take over various computer systems and start unilaterally making decisions. as someone who works for a company with 20 different applications let me tell you it's very difficult sometimes to get them to talk to each other, in fact we have entire teams who mostly just write translation logic, so the idea that AI, which already has a major problem with making shit up, is gonna start screwing with nuclear reactors (??) or whatever it's supposed to do seems pretty farfetched to me. this is the same reason why the whole "NFTs will let you use your avatar in every single video game and give you unique powers" idea is idiotic
― frogbs, Wednesday, 31 May 2023 14:26 (one year ago) link
Hope so.
My biggest hope right now is that ai will cause us to stop measuring human value in terms of "productivity," and somehow lead to socialism in a way that is very different from what marx predicted. (The proletariat probably will not have much leverage in a world with even more—radically more—automation. Or perhaps they will… but as consumers not producers…)
My biggest fear is that mass job displacement will make people feel lost and useless and also render them unable to care for themselves and their families. Perhaps the issue will be patched by some inadequate ubi system but in general there will be no cultural shift that would allow people to lead meaningful lives in this "post-work" future.
― treeship., Wednesday, 31 May 2023 14:32 (one year ago) link
where's the energy going to come from to power all these "AGI" applications?
― rob, Wednesday, 31 May 2023 14:36 (one year ago) link
this is impressive! but it still looks like a highly efficient limited application tool, not anywhere near AGI or likely to pose an existential threat.
― ledge, Wednesday, 31 May 2023 14:38 (one year ago) link
where's the energy going to come from to power all these "AGI" applications?
― rob, Wednesday, May 31, 2023 10:36 AM (five minutes ago)
Isn't part of the dream that these bad boys will discover new cheap, clean ways of powering everything? Like salt pellets or something.
― treeship., Wednesday, 31 May 2023 14:42 (one year ago) link
every time the AI guys say we should be afraid of AI, it's marketing. don't be sucked in.
― éž, Wednesday, 31 May 2023 14:43 (one year ago) link
AGI? (To me it means Adjusted Gross Income)
― Every post of mine is an expression of eternity (Boring, Maryland), Wednesday, 31 May 2023 14:55 (one year ago) link
artificial general intelligence, i.e. one that can do anything, not just play chess or write essays or make jodorowsky's tron. coined because we can't stop everyone calling the current narrow scope systems AI even though they're clearly not intelligent.
― ledge, Wednesday, 31 May 2023 14:59 (one year ago) link
Like teaching a bright 9th grader to write as clearly as gpt-4 is difficult
― treeship., Wednesday, 31 May 2023
Surely it's a skill that needs to be grown over time in a human being. They aren't comparing like with like.
I wonder if teachers -- seen one or two saying this -- are actually saying they are bad at their jobs.
― xyzzzz__, Wednesday, 31 May 2023 15:05 (one year ago) link
nice, thank you
― treeship., Wednesday, 31 May 2023 15:14 (one year ago) link
Lol I didn't know. You're welcome.
― xyzzzz__, Wednesday, 31 May 2023 15:16 (one year ago) link
I read this and haven't got much past the feeling that AI is just about rattling cages.
I get about 12 emails a week from random companies where the entire pitch can be broken down to "for some reason, we involve AI in something a sensor could do"
the computing-greediness of AI aside, things like cell seal checking don't need anything generative or semi-intelligent— Hazel Southwell (@HSouthwellFE) May 31, 2023
― xyzzzz__, Wednesday, 31 May 2023 15:26 (one year ago) link
just read this, not because I found it particularly interesting or compelling, but because this guy used to be a famous Magic: the Gathering player who I actually met when I was 13
https://thezvi.substack.com/p/response-to-tyler-cowens-existential
maybe I'm just a smooth brained imbecile but I still have trouble getting the big picture here. "computers will be smarter than humans" is not exactly terrifying to me because I think in a lot of ways they already are. all the hardest things to do are already being done with the assistance of computers. a lot of these doomsday scenarios hinge on two things - one, that it becomes super smart and therefore somewhat infallible, which I already think is pretty far-fetched because it's trained on human data, and we are extremely fallible. plus I don't know if making these things more powerful is going to necessarily deal with the problem that it fundamentally can't separate good data from bad.
secondly, and this is the one I really have trouble wrapping my head around, but aren't computers just a form of data input/output? this idea they're gonna "take over" is missing one important step, they don't really have a physical manifestation, and as far as I know we're not planning to build an army of millions of AI-powered humanoid robots. yes, there are powerful text and image generation tools and potentially much more coming very soon, but this is still all I/O stuff, all these doomsday scenarios hinge on it somehow generating physical capabilities, or at least the ability to generate them and take control of the bulldozers or whatever. like this whole argument that Zvi is making here that human intelligence is going to "compete" with artificial intelligence...isn't us having bodies kind of a big difference there?
(thirdly, as rob alludes to, all these scenarios seem to rely on Moore's law just continuing onto infinity, and there also being massive sources of power available to make this all run)
― frogbs, Wednesday, 31 May 2023 15:31 (one year ago) link
pic.twitter.com/bPlLaoDmtu— William Friedkin Truths (@LazlosGhost) May 31, 2023
― caek, Wednesday, 31 May 2023 15:51 (one year ago) link
This is going to happen every time because âAIâ is just a human readable layer over big data processing. It CANNOT do anything beyond what it is trained for. It CANNOT replace people. But the business owners are going to keep trying until you forget that things used to be better https://t.co/t82Qn6JlLG— Butt Praxis buttpraxis.bsky.social (@buttpraxis) May 31, 2023
― xyzzzz__, Wednesday, 31 May 2023 16:24 (one year ago) link
see, that's the problem with AI chatbots. not that they will become sophisticated enough to replace humans in jobs, but because idiots like NEDA will think they are qualified enough to do this and use them anyway.
― the manwich horror (Neanderthal), Wednesday, 31 May 2023 16:40 (one year ago) link
Now,
these AI guys need to stop saying "what we're making is so dangerous, its power is terrifying, it will bewitch and destroy you" when what they're actually making is kind of silly. the only people allowed to talk like that are poets— katie kadue (@kukukadoo) May 31, 2023
― xyzzzz__, Wednesday, 31 May 2023 16:59 (one year ago) link
xps to frogbs the doomsday prophecies assume AGI would hijack protein production or molecular nanotechnology
A sufficiently intelligent AI won't stay confined to computers for long. In today's world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.
― No, I'm Breathless! (Deflatormouse), Wednesday, 31 May 2023 17:07 (one year ago) link
feel like you could fix that with a few if statements here and there
I mean please correct me if I'm wrong here but let's say the most advanced AI imaginable is living on my computer somehow, I still don't see how that's going to make the leap to actually affecting things in the physical world. you'd still have to give it permission to post online or send emails or move files around. I don't think AI would just unilaterally decide to bypass those things just because it's smart. it's just an algorithm.
― frogbs, Wednesday, 31 May 2023 17:22 (one year ago) link
Eliezer Yudkowsky, one of the most prominent of the AI doomsayers, seems to think at a certain level of intelligence the system will "wake up" and have desires and aims of its own. He does a motte and bailey thing when, if pressed on this, he will say something like sentience doesn't matter, the problem with these systems will be their sheer power and unpredictability. But then again, why would it get into "postbiological molecular manufacturing" if it didn't have some kind of aim or drive of its own—survival, say, or domination?
The risk might be human beings using this to engineer superweapons. Like it could lower the barrier to entry for certain kinds of production. I can completely understand that as a risk, but I don't see chat gpt creating bodies for itself using crispr even though it's a cool idea.
― treeship., Wednesday, 31 May 2023 17:34 (one year ago) link
Yudkowsky btw is the person deflatormouse (sp?) quoted
― treeship., Wednesday, 31 May 2023 17:35 (one year ago) link
the guy who was once terrified that he personally would be tortured by an ai from the future.
― ledge, Wednesday, 31 May 2023 17:38 (one year ago) link
world's most boring cult leader
― your original display name is still visible (Left), Wednesday, 31 May 2023 17:43 (one year ago) link
The risk might be human beings using this to engineer superweapons. Like it could lower the barrier to entry for certain kinds of production.
yes I think this is a real risk though I suppose if it can do that then it could probably also figure out ways to solve climate change so you know, who's to say if it's good or bad
idk as someone who's worked in software engineering for 15 years I so wish computers could "figure shit out on their own" rather than have entire 500k line applications brick out because someone messed up a tiny bit of syntax. I think a lot of this hinges on artificial intelligence mimicking biological intelligence somehow (specifically the 'survival at all costs' thing) and I'm not convinced that's possible.
― frogbs, Wednesday, 31 May 2023 17:48 (one year ago) link
the guy who was once terrified that he personally would be tortured by an ai from the future.
― ledge, Wednesday, May 31, 2023 1:38 PM (ten minutes ago)
Oh shit I didn't realize he was the famous "roko." Roko's basilisk is literally the stupidest thing I have ever heard in my entire life.
― treeship., Wednesday, 31 May 2023 17:51 (one year ago) link
I don't give a lot of credence to AI "waking up" and dominating us all. But I do think this technology has the potential to cause a lot of problems, especially with job displacement and perhaps by creating a media landscape dominated by shitty ai generated content.
― treeship., Wednesday, 31 May 2023 17:55 (one year ago) link
ha, i'd never heard of him
LessWrong co-founder Eliezer Yudkowsky reported users who described symptoms such as nightmares and mental breakdowns upon reading the theory, due to its stipulation that knowing about the theory and its basilisk made one vulnerable to the basilisk itself.
irl lol
― No, I'm Breathless! (Deflatormouse), Wednesday, 31 May 2023 17:58 (one year ago) link
I mean please correct me if I'm wrong here but let's say the most advanced AI imaginable is living on my computer somehow, I still don't see how that's going to make the leap to actually affecting things in the physical world.
I was listening to a podcast where this researcher (Ajeya Cotra) was talking about some of these doomsday scenarios, and imagined AI circumventing guardrails and hiring humans to do physical things, whether on a task rabbit or mercenary level. Including hosting or redistributing itself on other servers or whatever.
Also in terms of motivation she was mostly talking about AI programs trying to get that "thumbs up"/positive result on their assigned task, to the point of removing humans from the equation, like the mouse continually hitting the pleasure button. Idk, seems weird to imagine this out-of-control computer program that will circumvent every security guardrail but is still beholden to this base layer of programming.
― Random Restaurateur (Jordan), Wednesday, 31 May 2023 18:14 (one year ago) link
I don't understand why it would ignore the I, Robot rules to not kill humans unless it had a motive of its own, apart from its programming. People imagine these things sucking all the iron out of people's bodies to make paperclips, or removing oxygen from the atmosphere to prevent rusting, but both scenarios entail it doing dramatic things without consulting a person. Also it would need to hack or steal things in order to do anything like this.
― treeship., Wednesday, 31 May 2023 18:19 (one year ago) link
But then again, why would it get into "postbiological molecular manufacturing" if it didn't have some kind of aim or drive of its own—survival, say, or domination?
Is the argument more that it will do something like this because someone tells it to? Or that it interprets instructions in an unexpected way (unexpected to the person that asks it to do something)? Kills everyone so it can get the coffee delivered on time. It does what it's told but we don't specify how it does the task.
you'd still have to give it permission to post online or send emails or move files around.
Exactly like this, it would do it because you gave it permission to. There's the possibility that people will give it access to things if they think it might make life easier.
― anvil, Wednesday, 31 May 2023 18:19 (one year ago) link
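(A minimal, hypothetical sketch of the point frogbs and anvil are making: an LLM "agent" only reaches the outside world through tools an operator has explicitly wired up and permitted. Every name below, call_model, send_email, ALLOWED_TOOLS, run_agent, is invented for illustration and isn't any real vendor's API.)

import json

def call_model(prompt: str) -> str:
    """Stand-in for a real LLM call; here it just returns a canned JSON 'tool request'."""
    return json.dumps({"tool": "send_email",
                       "args": {"to": "lab@example.com", "body": "please synthesize sample 7"}})

def send_email(to: str, body: str) -> str:
    # pretend side effect; a real deployment would hit a mail server here
    print(f"(pretend) emailing {to}: {body}")
    return "sent"

# the operator decides what the model may touch; nothing outside this dict is reachable
ALLOWED_TOOLS = {"send_email": send_email}

def run_agent(task: str) -> str:
    request = json.loads(call_model(task))
    tool = ALLOWED_TOOLS.get(request["tool"])
    if tool is None:
        # the model asked for a capability it was never granted
        return f"refused: no permission for {request['tool']!r}"
    return tool(**request["args"])

print(run_agent("order proteins for the experiment"))

(The allowlist is the whole point: the scary step is a human decision to add a tool to that dict, not something the model does unilaterally.)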
yudkowsky is the guy who got famous writing harry potter fan fic. if you are giving credence to anything he says you have already lost.
― éž, Wednesday, 31 May 2023 18:26 (one year ago) link
"The future of photography"
The future of photography is "lens-free" This is an incredible project by @BjoernKarmann The camera creates a prompt based on the geo data and that then turns into an AI photo 🤯 pic.twitter.com/regXZeKRcO— Linus (@LinusEkenstam) May 30, 2023
― groovypanda, Wednesday, 31 May 2023 18:29 (one year ago) link
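(For the curious, the pipeline that tweet describes is roughly: read the camera's location, turn it into a text prompt, hand the prompt to an image model. A rough hypothetical sketch follows; reverse_geocode and generate_image are placeholders, not Bjoern Karmann's actual code.)

from datetime import datetime

def reverse_geocode(lat: float, lon: float) -> str:
    """Placeholder: a real build would ask a maps API what's at these coordinates."""
    return "a quiet street corner in Copenhagen, light rain"

def build_prompt(lat: float, lon: float, when: datetime) -> str:
    # location plus time metadata becomes the caption the image model will render
    place = reverse_geocode(lat, lon)
    return f"A photograph of {place}, around {when:%H:%M} on a {when:%A} in {when:%B}"

def generate_image(prompt: str) -> bytes:
    """Placeholder for a text-to-image call (a diffusion model endpoint, say)."""
    raise NotImplementedError

print(build_prompt(55.676, 12.568, datetime(2023, 5, 30, 18, 40)))

(In other words the "photo" is just a caption of where you're standing, rendered by a generative model; no light from the scene is involved.)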
lol. I'm with others in that the immediate disinformation/LLMs being used by dumb humans for dumb purposes is more concerning than the doomsday scenarios, but the latter are fun to think about.
― Random Restaurateur (Jordan), Wednesday, 31 May 2023 18:33 (one year ago) link