Artificial intelligence still has some way to go


yudkowsky is the guy who got famous writing harry potter fan fic. if you are giving credence to anything he says you have already lost.

, Wednesday, 31 May 2023 18:26 (two years ago)

"The future of photography"

The future of photography is “lens-free”

This is an incredible project by @BjoernKarmann

The camera creates a prompt based on the geo data and that then turns into an AI photo 🤯

pic.twitter.com/regXZeKRcO

— Linus (●ᴗ●) (@LinusEkenstam) May 30, 2023

groovypanda, Wednesday, 31 May 2023 18:29 (two years ago)

lol. I'm with others in that the immediate disinformation/LLMs being used by dumb humans for dumb purposes is more concerning than the doomsday scenarios, but the latter are fun to think about.

Random Restaurateur (Jordan), Wednesday, 31 May 2023 18:33 (two years ago)

there's def a strong element of wishful thinking / triumph of the nerds fantasizing in the ai-pocalypse world

rob, Wednesday, 31 May 2023 19:52 (two years ago)

also quite a bit of boneheaded atheist eschatology

rob, Wednesday, 31 May 2023 19:55 (two years ago)

Good book about the latter - https://www.penguinrandomhouse.com/books/567075/god-human-animal-machine-by-meghan-ogieblyn/

Random Restaurateur (Jordan), Wednesday, 31 May 2023 19:57 (two years ago)

wonder what the overlap is between that crowd and the "humans will be living on Mars in 100 years" folks

frogbs, Wednesday, 31 May 2023 19:59 (two years ago)

ok so from that article I linked above:

Have we already forgotten March of 2020? How many times in history has life undergone that rapid and huge a transformation? According to GPT-4, the answer is zero. It names The Black Death, Industrial Revolution and World War II, while admitting they fall short. Yes, those had larger long-term impacts, by far (or so we think for now, I agree but note it is too soon to tell), yet they impacted things relatively slowly.

the fact that he's using ChatGPT to answer very broad and open-ended questions like this says more about the author than it does about AI, especially since one can think of a lot of obvious reasons it might think Covid was a more transformative event than anything that's happened in human history

frogbs, Wednesday, 31 May 2023 20:10 (two years ago)

https://www.vice.com/en/article/qjvk97/eating-disorder-helpline-disables-chatbot-for-harmful-responses-after-firing-human-staff?utm_source=reddit.com

This took like one day to happen, lol

longtime caller, first time listener (man alive), Wednesday, 31 May 2023 20:58 (two years ago)

they're already union busters!

Andy the Grasshopper, Wednesday, 31 May 2023 21:06 (two years ago)

but the AI refused to scab

longtime caller, first time listener (man alive), Wednesday, 31 May 2023 21:09 (two years ago)

Christ, I hope whoever approved that deeply and obviously stupid change got fired for life and replaced by a chatbot.

Beautiful Bean Footage Fetishist (Old Lunch), Wednesday, 31 May 2023 21:37 (two years ago)

re: AI access to resources - it would be a pretty plausible scenario for an AI to have sufficient knowledge of hacking to be able to take over a power plant or a transit hub or the 50 billion internet of things devices. and considering we are pretty bad at exercising the principle of least privilege and having proper authentication/authorization mechanisms (and the fact that an AI would be able to find vulnerabilities in a much more efficient manner), the only solution would be to physically contain AI within an environment, which isn’t likely to be enforced.

scanner darkly, Thursday, 1 June 2023 01:56 (two years ago)

tbh i think there is more danger not in the AI itself but getting it to the point where it can make some big scientific discoveries at a pace exceeding humanity’s ability to adapt to them. a lot of scientific breakthroughs were the result of connecting seemingly disconnected points across multiple fields - something that AIs are very, very good at.

scanner darkly, Thursday, 1 June 2023 02:03 (two years ago)

The thing is AI won't destroy the world. We--humans--are already doing that in pretty much every meaningful way. So-called AI will just add unpleasant extra static and bullshit to the quality of life in the meantime as it plummets towards zero.

Tsar Bombadil (James Morrison), Thursday, 1 June 2023 02:20 (two years ago)

utm_source=reddit.com /look of disapproval

recovering internet addict/shitposter (viborg), Thursday, 1 June 2023 02:22 (two years ago)

xp - yes the planet might be killed. That sounds bad, then again I won't be reading draft variations on the Terminator script.

xyzzzz__, Thursday, 1 June 2023 06:39 (two years ago)

https://static.fusionmovies.to/images/character/UJk4Taw6yQG93RMHNp3Qf3MpdtQS-VNtt8ZtXD5O41Xj6p1pVmRU4GnCqXuhlFuau_a7pqWHIucNBauCyI43kn2YM92t-bxQmUc8yF-6FsM.jpg?1&resize_w=320

"I'd piss on the spark plug if I thought it'd do any good!"

Tracer Hand, Thursday, 1 June 2023 09:10 (two years ago)

Tsar B otm. Danger isn't ai escaping from human control, it's ai remaining securely under the control of... these humans

Toploader on the road, unite and take over (Bananaman Begins), Thursday, 1 June 2023 09:46 (two years ago)

Look I'm not gonna lie, my friends and I are going to require an absolute truckload of grant money to mitigate the literal species-level existential threats associated with this thing we claim to be making; this is how you know we are deeply serious people btw

— Kieran Healy (@kjhealy) May 30, 2023

rob, Thursday, 1 June 2023 13:02 (two years ago)

I have no idea what artificial intelligence is, and at this point I’m too afraid to ask.

Allen (etaeoe), Thursday, 1 June 2023 14:32 (two years ago)

uhhh yikes

The US Air Force tested an AI enabled drone that was tasked to destroy specific targets. A human operator had the power to override the drone—and so the drone decided that the human operator was an obstacle to its mission—and attacked him. 🤯 pic.twitter.com/HUSGxnunIb

— Armand Domalewski (@ArmandDoma) June 1, 2023

frogbs, Thursday, 1 June 2023 18:51 (two years ago)

Wow literally terminator

Its big ball chunky time (Jimmy The Mod Awaits The Return Of His Beloved), Thursday, 1 June 2023 18:53 (two years ago)

Said Hamilton:

https://en.m.wikipedia.org/wiki/Linda_Hamilton

xyzzzz__, Thursday, 1 June 2023 18:54 (two years ago)

I think they should shut it down, the way they have with human cloning and things like that. We do not need technologies like this, especially not when they are being engineered by corporations with interests at times radically at odds with the public.

treeship., Thursday, 1 June 2023 19:10 (two years ago)

This technology has potential for medicine and climate but it seems like it will come at the cost of mass social disruption. Doesn’t seem worth it. Under socialism, sure.

treeship., Thursday, 1 June 2023 19:14 (two years ago)

How would it be shut down, at this point? Cloning a human seems to be a higher barrier to entry than cloning an ai model from a couple years ago

z_tbd, Thursday, 1 June 2023 19:17 (two years ago)

I don't see the issue with that quoted bit. It's why things are tested. It fails the test, it isn't used.

xyzzzz__, Thursday, 1 June 2023 19:23 (two years ago)

I'd say two things: 1) it demonstrates a recurring problem with systems that use machine learning + optimization--there's even a term for it that I'm blanking on right now, but it's a potential hazard of any similar AI and 2) these systems display emergent behavior, meaning they behave unpredictably, so it's entirely feasible that a product could get through testing and then begin behaving this way in the real world.

That said, it says this was a "simulated test" so I'm not sure how much of a genuine threat this particular tech is. OTOH various military & defence industry people have long been drooling over the prospect of automated warfare and that will obvs be bad

rob, Thursday, 1 June 2023 20:18 (two years ago)

I'm being stalked by some chatbot on instagram.. it's a weird one, because she's pretending to be a harbor pilot living in Sweden (with photos of her and colleagues on the boat), and we had a couple friends in common so I replied to her initial message. All her subsequent replies were really weird and came way too fast, and this morning she messaged me "Good Morning, dear.. how did you sleep?"

I think it's time to end it right now, hope I don't break her artificial heart

Andy the Grasshopper, Thursday, 1 June 2023 20:34 (two years ago)

OTOH various military & defence industry people have long been drooling over the prospect of automated warfare and that will obvs be bad

― rob, Thursday, 1 June 2023 bookmarkflaglink

Someone mentioned on twitter that AI companies could cut their ties to defense but won't as those contracts are lucrative.

Thing is even if they did the military would set something up in-house anyway.

xyzzzz__, Thursday, 1 June 2023 21:59 (two years ago)

So another one for the "Terminator draft" bin

I deleted this tweet because the “AI powered drone turns on its operator story” was total nonsense—the Colonel who described it as a simulation now says it was just “a thought experiment.”

😑 pic.twitter.com/IMIguxKuuY

— Armand Domalewski (@ArmandDoma) June 2, 2023

xyzzzz__, Friday, 2 June 2023 12:48 (two years ago)

That's pretty fuckin dumb.

I agree that it's a plausible scenario (or at least an illustration of the general type of scenario we might be concerned about) but why the need to completely misrepresent things?

The general idea is that AI doesn't need to be "awake" or "sentient" or "conscious" to do something harmful, it just needs to have a sufficiently open-ended directive, be automated in pursuing that directive, and some leeway to make "decisions" in furtherance of that directive. That's what the paperclip maximizer idea is supposed to illustrate as well.

In a way I actually find an unconscious AI scarier than a conscious one in this regard. Consciousness at least seems to entail competing drives, desires and restraints. A very few humans do kind of behave like paper clip maximizers, but most don't, and even the ones that do are often restrained by other humans.

longtime caller, first time listener (man alive), Friday, 2 June 2023 13:55 (two years ago)
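The paperclip-maximizer point above can be made concrete with a toy optimization loop (a hypothetical sketch, not any real agent design): a single open-ended objective with no competing drive consumes every shared resource, while adding even one competing term restrains it.

```python
def run_agent(resources, objective_weight=1.0, preservation_weight=0.0, steps=10):
    """Toy 'maximizer': each step it may convert one unit of shared
    resources into paperclips. The weights are made-up illustration only."""
    paperclips = 0
    for _ in range(steps):
        if resources <= 0:
            break
        # Score of converting one more unit vs. leaving the resource alone.
        convert_score = objective_weight * (paperclips + 1)
        preserve_score = preservation_weight * resources
        if convert_score > preserve_score:
            resources -= 1
            paperclips += 1
    return paperclips, resources

# Single-minded agent consumes everything it can reach.
print(run_agent(resources=5))                          # → (5, 0)
# A competing "preserve resources" drive restrains it completely here.
print(run_agent(resources=5, preservation_weight=10))  # → (0, 5)
```

The point of the toy: nothing in the loop is "conscious", it just pursues an open-ended directive with some leeway, which is exactly the failure mode described above.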

like if we're worried about self-aware AI killing us, I could point us to the myriad of real, observed things that are likely to kill us all first

the manwich horror (Neanderthal), Friday, 2 June 2023 13:59 (two years ago)

like the police, or poverty, or no access to basic preventative healthcare

hey guys i have a startup, it's called mr choppy ltd

Tracer Hand, Friday, 2 June 2023 14:21 (two years ago)

So why can’t LLMs just learn what stop words mean? Ultimately, because “meaning” is something orthogonal to how these models work. Negations matter to us because we’re equipped to grasp what those words do. But models learn “meaning” from mathematical weights: “Rose” appears often with “flower,” “red” with “smell.” And it’s impossible to learn what “not” is this way.

hey but just wait until they learn what “not” means, nothing will be the same

Tracer Hand, Friday, 2 June 2023 16:16 (two years ago)

that's such nonsense though

"not" would appear in a network of words alongside other words of negation, like "no" and "never"

and also, along a different axis, with other function words that can be used in grammatically similar ways

i personally feel there's something fundamentally true about word-meaning being largely associative. there's another piece too, for many words, but a lot of poetry and literature function along that associative line

sean gramophone, Friday, 2 June 2023 18:22 (two years ago)
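The associative claim above — that "not" would sit in a cluster alongside "no" and "never" the same way "rose" sits near "flower" — can be sketched with toy vectors and cosine similarity. The numbers below are invented purely for illustration; real models learn such vectors from co-occurrence statistics at scale.

```python
import math

# Made-up "embedding" vectors, purely illustrative (not from any real model).
vectors = {
    "not":    [0.90, 0.80, 0.10],
    "never":  [0.80, 0.90, 0.20],
    "no":     [0.85, 0.70, 0.15],
    "rose":   [0.10, 0.20, 0.90],
    "flower": [0.15, 0.10, 0.95],
}

def cosine(a, b):
    """Cosine similarity: 1.0 for parallel vectors, near 0 for unrelated ones."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Negation words cluster together; "rose" clusters with "flower" instead.
assert cosine(vectors["not"], vectors["never"]) > cosine(vectors["not"], vectors["rose"])
assert cosine(vectors["rose"], vectors["flower"]) > cosine(vectors["rose"], vectors["no"])
```

Whether proximity in such a space amounts to grasping what "not" *does* to a sentence is the actual point of dispute upthread; the geometry itself is uncontroversial.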

Finally a researcher who is good at not saying that much about AI. But he is v good on those letters.

https://venturebeat.com/ai/top-ai-researcher-dismisses-ai-extinction-fears-challenges-hero-scientist-narrative/

xyzzzz__, Friday, 2 June 2023 20:07 (two years ago)

"What I say to AI researchers — not the more senior ones, they know better — but to my students, or more junior researchers, I just try my best to show them what I work on, what I think we should work on to give us small but tangible benefits. That’s the reason why I work on AI for healthcare and science. That’s why I’m spending 50% of my time at [biotechnology company] Genentech, part of the Prescient Design team to do computational antibody and drug design. I just think that’s the best I can do. I’m not going to write a grand letter. I’m very bad at that."

Amen to this.

xyzzzz__, Friday, 2 June 2023 20:08 (two years ago)

Finally a researcher who is good at not saying that much about AI. But he is v good on those letters.

This is my pal (and now coworker) KC! He’s the best and he’s 100% correct.

Allen (etaeoe), Saturday, 3 June 2023 20:18 (two years ago)

Someone recommended I listen to the Holly Herndon podcast. I took them up on their recommendation, and as a fan of her music, I _really_ wish I hadn't. Herndon, like many artists and cultural critics discussing AI, comes across as entirely unaware that the distance, from a science or engineering perspective, between new and old methods is far smaller than, for example, the distance between a world without frame buffers and the world with frame buffers. I'd love to ask her, what are the cultural changes when your preferred interpolation method goes from "pretty good" to "very good?"

Allen (etaeoe), Saturday, 3 June 2023 20:21 (two years ago)

In fact, after I write that, I think it’s extremely fun that presently the most ballyhooed auto-regressive method is so simple that it could be reasonably reproduced by an excited primary school student over the weekend. It’s sad that we mythologized this rather than make it a neat example of understandable science.

Allen (etaeoe), Saturday, 3 June 2023 20:26 (two years ago)
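The autoregressive idea really is that simple at its core: sample the next token conditioned on what came before. A character-level bigram sampler (a deliberately naive stand-in for illustration, nothing like the actual GPT architecture or scale) fits in a few lines:

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Count, for each character, every character observed to follow it."""
    counts = defaultdict(list)
    for a, b in zip(text, text[1:]):
        counts[a].append(b)
    return counts

def generate(counts, start, length=20, seed=0):
    """Autoregressive sampling: each next character depends only on the last."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return "".join(out)

model = train_bigram("the theory then thereby the then")
print(generate(model, "t"))  # plausible-looking gibberish built from seen bigrams
```

Everything past this toy is conditioning on longer context with a learned model instead of raw counts, which is where the decades of work mentioned below come in.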

xp yikes, that’s too bad.

One of my PhD-having coworkers joked while presenting how he's running one of the current protein folding systems that its release made the work he did to get that PhD obsolete. Which may be true in a way, but the accessibility of ChatGPT, etc., has just presented a public face to just the current step in a long series of efforts. We wouldn't be here without the work.

The way a lot of articles have been written, you’d think computers just got sufficiently powerful and someone threw a bunch of text at one until a chatbot popped out of it like it’s Zeus’s forehead, fully formed

mh, Sunday, 4 June 2023 16:27 (two years ago)

Huh, I'd been wondering whether we should have a "Who is Eliezer Yudkowsky and can we eat him / freeze him for later eating?" thread. An acquaintance I occasionally read the twitter of is in with that crowd, so I occasionally go read Yudkowsky's - though this is a bad habit that I should try to break.

There's some self-interest in the recent announcement (though, these are not generally people who clamour for more government regulation), but I think there's also a bunch of pareidolia, like with Blake Lemoine, some real "the beguiling voices you only hear when you stare at the flames for 200+ hours".

I'm not sure if I misread above, but Yudkowsky isn't Roko, he's just the guy who set up the whole LessWrong community - David Gerard has a good article on its effects on Effective Altruism, which reminds me to link to Elizabeth Sandifer on Yudkowsky, which contains the crucial context for him - he is first and foremost a crank, albeit one who has a lot of reach at present.

He's an interesting writer (in that he's not as terrible as you'd naturally assume) - I found this memorial (including the update right at the end of the comments) after the death of his brother to be moving and powerful, while also revealing a very broken sense of humanity. I genuinely think his anguish is real, even when the sources are (elsewhere) silly.

(No, I will not be reading the Harry Potter work, though I understand that it's more highjacking a popular franchise as a framing for his thoughts, than anything that can really be called 'fanfic')

The fuel behind the explosion of capability they expect is the idea of 'intelligence' as a linear, number-goes-up, value: We are intelligent enough to make computers that will be more intelligent than us, which will make computers more intelligent than them, and so on and on infinitely, IQ one billion! They're generally big believers in the idea that intelligence is a real thing measured by IQ tests rather than a function of it and, as usual, when you get to that you're only 10 minutes from the word 'heritable' and then, as they say, you're off to the races.

There's a Strangelove vibe that creeps in as well - "If this is the battle for the end of the world, then surely the traitors to humanity are those that insist there are words we can't say / thoughts we can't think" - AFAIK Yudkowsky's not that far, he just has the usual Libertarian free speech brainworms - I wasn't surprised to see him linking to a "how I was cancelled" post by Kathleen Stock on Unherd earlier this week.

(There may also be a bit of "these things will be even further above me as I am above the masses, and will hold me (me!) in even more contempt than I hold them"... but that might just be the guy I know)

There are of course people in the space that are worth listening to, though they define themselves more as AI ethics than safety - I understand Timnit Gebru is a good follow there. The angle is one that's been mentioned a lot above, that we should consider the actual effects of this on people right now, and the intersections with already-existing injustices. Though for a lot of the doomers, that's just what they don't want - there's already a choice of apocalypses available, but none of those on offer centre these guys as much as they'd want.

Andrew Farrell, Sunday, 4 June 2023 22:32 (two years ago)

Timnit Gebru is more credible than Eliezer Yudkowsky but she’s still very much a crank.

Allen (etaeoe), Sunday, 4 June 2023 23:06 (two years ago)

We are intelligent enough to make computers that will be more intelligent than us, which will make computers more intelligent than them, and so on and on infinitely, IQ one billion!

If it’s gonna do this on Windows I give it maybe 2 iterations before it crashes

The “more intelligent than us” thing I don't quite get; computers already are in a lot of ways and have been for a long time. If people are fiddling around with ChatGPT or Midjourney and can't see the difference, I dunno what to tell them. Hopefully this is just really good at fooling people.

frogbs, Sunday, 4 June 2023 23:49 (two years ago)

There does seem to be something about negatives that doesn't sit well with ChatGPT. I can't get it to write a Perec-style story without an 'e' in it, despite trying various prompts. I'm surprised because I've got it to do things that would seem way more tricky than that. It didn't bat an eyelid when I asked it to write an acrostic poem about my trip to the supermarket to buy ingredients for chicken cacciatore for instance!

Zelda Zonk, Sunday, 4 June 2023 23:53 (two years ago)

it doesnt do any oulipo or rules based composition well or at all, frankly

, Monday, 5 June 2023 03:53 (two years ago)

Some rules-based things it doesn't have a problem with - ask it to do a rhyming acrostic using the alphabet sequentially from A to Z, and it will comply. But no it doesn't do lipograms - or it will start to do one and then forget the rule after the first couple of sentences. Similarly, I asked it to compose a story using only sentences with exactly seven letters, but it couldn't maintain that past the first sentence or two.

Zelda Zonk, Monday, 5 June 2023 04:55 (two years ago)
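Part of what makes the lipogram failure striking is that the constraint is trivial to check mechanically, while a model predicting subword tokens has no direct handle on individual letters. A checker sketch (a hypothetical helper, just for illustration):

```python
def violates_lipogram(text, banned="e"):
    """Return the words that break a Perec-style 'no e' constraint."""
    return [word for word in text.split() if banned in word.lower()]

# A compliant fragment passes; an ordinary sentence fails on most words.
print(violates_lipogram("A void, a gap, a blank"))       # → []
print(violates_lipogram("The story forgets the rule"))   # → ['The', 'forgets', 'the', 'rule']
```

A verifier like this could in principle sit outside the model, rejecting and resampling outputs, which is a very different thing from the model itself maintaining the rule across a generation.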

"I'm surprised because I've got it to do things that would seem way more tricky than that."

You mean like programming tasks?

xyzzzz__, Monday, 5 June 2023 06:40 (two years ago)
