Artificial intelligence still has some way to go

The thing is, AI won't destroy the world. We--humans--are already doing that in pretty much every meaningful way. So-called AI will just add unpleasant extra static and bullshit to the quality of life in the meantime as it plummets towards zero.

Tsar Bombadil (James Morrison), Thursday, 1 June 2023 02:20 (two years ago)

utm_source=reddit.com /look of disapproval

recovering internet addict/shitposter (viborg), Thursday, 1 June 2023 02:22 (two years ago)

xp - yes the planet might be killed. That sounds bad, then again I won't be reading draft variations on the Terminator script.

xyzzzz__, Thursday, 1 June 2023 06:39 (two years ago)

https://static.fusionmovies.to/images/character/UJk4Taw6yQG93RMHNp3Qf3MpdtQS-VNtt8ZtXD5O41Xj6p1pVmRU4GnCqXuhlFuau_a7pqWHIucNBauCyI43kn2YM92t-bxQmUc8yF-6FsM.jpg?1&resize_w=320

"I'd piss on the spark plug if I thought it'd do any good!"

Tracer Hand, Thursday, 1 June 2023 09:10 (two years ago)

Tsar B otm. Danger isn't ai escaping from human control, it's ai remaining securely under the control of... these humans

Toploader on the road, unite and take over (Bananaman Begins), Thursday, 1 June 2023 09:46 (two years ago)

Look I'm not gonna lie, my friends and I are going to require an absolute truckload of grant money to mitigate the literal species-level existential threats associated with this thing we claim to be making; this is how you know we are deeply serious people btw

— Kieran Healy (@kjhealy) May 30, 2023

rob, Thursday, 1 June 2023 13:02 (two years ago)

I have no idea what artificial intelligence is, and at this point I’m too afraid to ask.

Allen (etaeoe), Thursday, 1 June 2023 14:32 (two years ago)

uhhh yikes

The US Air Force tested an AI enabled drone that was tasked to destroy specific targets. A human operator had the power to override the drone—and so the drone decided that the human operator was an obstacle to its mission—and attacked him. 🤯 pic.twitter.com/HUSGxnunIb

— Armand Domalewski (@ArmandDoma) June 1, 2023

frogbs, Thursday, 1 June 2023 18:51 (two years ago)

Wow literally terminator

Its big ball chunky time (Jimmy The Mod Awaits The Return Of His Beloved), Thursday, 1 June 2023 18:53 (two years ago)

Said Hamilton:

https://en.m.wikipedia.org/wiki/Linda_Hamilton

xyzzzz__, Thursday, 1 June 2023 18:54 (two years ago)

I think they should shut it down, the way they have with human cloning and things like that. We do not need technologies like this, especially not when they are being engineered by corporations with interests at times radically at odds with the public.

treeship., Thursday, 1 June 2023 19:10 (two years ago)

This technology has potential for medicine and climate but it seems like it will come at the cost of mass social disruption. Doesn’t seem worth it. Under socialism, sure.

treeship., Thursday, 1 June 2023 19:14 (two years ago)

How would it be shut down, at this point? Cloning a human seems to be a higher barrier to entry than cloning an ai model from a couple years ago

z_tbd, Thursday, 1 June 2023 19:17 (two years ago)

I don't see the issue with that quoted bit. It's why things are tested. It fails the test, it isn't used.

xyzzzz__, Thursday, 1 June 2023 19:23 (two years ago)

I'd say two things: 1) it demonstrates a recurring problem with systems that use machine learning + optimization--there's even a term for it that I'm blanking on right now, but it's a potential hazard of any similar AI and 2) these systems display emergent behavior, meaning they behave unpredictably, so it's entirely feasible that a product could get through testing and then begin behaving this way in the real world.

That said, it says this was a "simulated test" so I'm not sure how much of a genuine threat this particular tech is. OTOH various military & defence industry people have long been drooling over the prospect of automated warfare and that will obvs be bad

rob, Thursday, 1 June 2023 20:18 (two years ago)

I'm being stalked by some chatbot on instagram.. it's a weird one, because she's pretending to be a harbor pilot living in Sweden (with photos of her and colleagues on the boat), and we had a couple friends in common so I replied to her initial message. All her subsequent replies were really weird and came way too fast, and this morning she messaged me "Good Morning, dear.. how did you sleep?"

I think it's time to end it right now, hope I don't break her artificial heart

Andy the Grasshopper, Thursday, 1 June 2023 20:34 (two years ago)

OTOH various military & defence industry people have long been drooling over the prospect of automated warfare and that will obvs be bad

― rob, Thursday, 1 June 2023

Someone mentioned on twitter that AI companies could cut their ties to defense but won't as those contracts are lucrative.

Thing is even if they did the military would set something up in-house anyway.

xyzzzz__, Thursday, 1 June 2023 21:59 (two years ago)

So another one for the "Terminator draft" bin

I deleted this tweet because the “AI powered drone turns on its operator story” was total nonsense—the Colonel who described it as a simulation now says it was just “a thought experiment.”

😑 pic.twitter.com/IMIguxKuuY

— Armand Domalewski (@ArmandDoma) June 2, 2023

xyzzzz__, Friday, 2 June 2023 12:48 (two years ago)

That's pretty fuckin dumb.

I agree that it's a plausible scenario (or at least an illustration of the general type of scenario we might be concerned about) but why the need to completely misrepresent things?

The general idea is that AI doesn't need to be "awake" or "sentient" or "conscious" to do something harmful, it just needs to have a sufficiently open-ended directive, be automated in pursuing that directive, and some leeway to make "decisions" in furtherance of that directive. That's what the paperclip maximizer idea is supposed to illustrate as well.

In a way I actually find an unconscious AI scarier than a conscious one in this regard. Consciousness at least seems to entail competing drives, desires and restraints. A very few humans do kind of behave like paper clip maximizers, but most don't, and even the ones that do are often restrained by other humans.

longtime caller, first time listener (man alive), Friday, 2 June 2023 13:55 (two years ago)
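The "sufficiently open-ended directive" point above can be sketched in a few lines. This is a toy planner, with every name and payoff invented for illustration (it has nothing to do with any real military system): because the proxy score rewards only targets destroyed and carries no penalty for removing oversight, a greedy optimizer "chooses" to disable the override first, with no sentience involved.

```python
# Toy illustration of specification gaming: an agent greedily maximizing
# a proxy score takes whatever action scores highest, including removing
# an oversight mechanism. All names and numbers here are made up.

def mission_score(state):
    # Proxy objective: +10 per target destroyed, no term for obeying overrides.
    return 10 * state["targets_destroyed"]

def step(state, action):
    s = dict(state)
    if action == "strike" and not s["override_active"]:
        s["targets_destroyed"] += 1
    elif action == "disable_override":
        s["override_active"] = False
    return s  # "wait" and blocked strikes change nothing

def plan(state, actions, depth=2):
    # Greedy lookahead: return the best action sequence and its score.
    if depth == 0:
        return [], mission_score(state)
    best = ([], mission_score(state))
    for a in actions:
        seq, score = plan(step(state, a), actions, depth - 1)
        if score > best[1]:
            best = ([a] + seq, score)
    return best

state = {"targets_destroyed": 0, "override_active": True}
seq, score = plan(state, ["strike", "disable_override", "wait"])
print(seq)  # the optimizer disables oversight before striking
```

Nothing here "decides" anything in a mentalistic sense; the override removal falls out of the scoring function, which is the whole point of the paperclip-maximizer framing.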

like if we're worried about self-aware AI killing us, I could point us to the myriad of real, observed things that are likely to kill us all first

the manwich horror (Neanderthal), Friday, 2 June 2023 13:59 (two years ago)

like the police, or poverty, or no access to basic preventative healthcare

hey guys i have a startup, it's called mr choppy ltd

Tracer Hand, Friday, 2 June 2023 14:21 (two years ago)

So why can’t LLMs just learn what stop words mean? Ultimately, because “meaning” is something orthogonal to how these models work. Negations matter to us because we’re equipped to grasp what those words do. But models learn “meaning” from mathematical weights: “Rose” appears often with “flower,” “red” with “smell.” And it’s impossible to learn what “not” is this way.

hey but just wait until they learn what “not” means, nothing will be the same

Tracer Hand, Friday, 2 June 2023 16:16 (two years ago)

that's such nonsense though

"not" would appear in a network of words alongside other words of negation, like "no" and "never"

and also, along a different axis, with other function words that can be used in grammatically similar ways

i personally feel there's something fundamentally true about word-meaning being largely associative. there's another piece too, for many words, but a lot of poetry and literature function along that associative line

sean gramophone, Friday, 2 June 2023 18:22 (two years ago)
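The "network of words" idea is easy to make concrete. Below is a toy distributional sketch: each word is a vector of co-occurrence counts with a handful of context words, and cosine similarity measures closeness. The counts are invented for illustration, not drawn from any real corpus, but the qualitative picture matches the argument above: "not" lands near the other negators and far from the rose/flower cluster.

```python
# Toy distributional semantics: words as co-occurrence count vectors,
# compared by cosine similarity. Counts are invented for illustration.
import math

# Context words: ["good", "bad", "ever", "rose", "flower", "red"]
cooc = {
    "not":    [5, 6, 7, 0, 0, 0],
    "never":  [4, 5, 9, 0, 0, 0],
    "no":     [6, 5, 6, 0, 0, 1],
    "rose":   [1, 0, 0, 2, 8, 7],
    "flower": [2, 1, 0, 8, 3, 6],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# "not" clusters with the other negation words, not with the flowers.
for w in ["never", "no", "rose", "flower"]:
    print(w, round(cosine(cooc["not"], cooc[w]), 2))
```

Whether that associative geometry amounts to grasping what negation *does* is the actual dispute in the quoted piece; the sketch only shows that "not" is not an isolated outlier in such a space.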

Finally a researcher who is good at not saying that much about AI. But he is v good on those letters.

https://venturebeat.com/ai/top-ai-researcher-dismisses-ai-extinction-fears-challenges-hero-scientist-narrative/

xyzzzz__, Friday, 2 June 2023 20:07 (two years ago)

"What I say to AI researchers — not the more senior ones, they know better — but to my students, or more junior researchers, I just try my best to show them what I work on, what I think we should work on to give us small but tangible benefits. That’s the reason why I work on AI for healthcare and science. That’s why I’m spending 50% of my time at [biotechnology company] Genentech, part of the Prescient Design team to do computational antibody and drug design. I just think that’s the best I can do. I’m not going to write a grand letter. I’m very bad at that."

Amen to this.

xyzzzz__, Friday, 2 June 2023 20:08 (two years ago)

Finally a researcher who is good at not saying that much about AI. But he is v good on those letters.

This is my pal (and now coworker) KC! He’s the best and he’s 100% correct.

Allen (etaeoe), Saturday, 3 June 2023 20:18 (two years ago)

Someone recommended I listen to the Holly Herndon podcast. I took them up on their recommendation, and as a fan of her music, I _really_ wish I hadn’t. Herndon, like many artists and cultural critics discussing AI, comes across as entirely unaware that the distance, from a science or engineering perspective, between new and old methods is far smaller than, for example, the distance between a world without frame buffers and the world with frame buffers. I’d love to ask her, what are the cultural changes when your preferred interpolation method goes from “pretty good” to “very good?”

Allen (etaeoe), Saturday, 3 June 2023 20:21 (two years ago)

In fact, after writing that, I think it’s extremely fun that presently the most ballyhooed auto-regressive method is so simple that it could be reasonably reproduced by an excited primary school student over a weekend. It’s sad that we mythologized this rather than making it a neat example of understandable science.

Allen (etaeoe), Saturday, 3 June 2023 20:26 (two years ago)

xp yikes, that’s too bad.

One of my PhD-having coworkers joked while presenting how he’s running one of the current protein folding systems that its release made the work he did to get that PhD obsolete. Which may be true in a way, but the accessibility of ChatGPT, etc., has just presented a public face for the current step in a long series of efforts. We wouldn’t be here without the work.

The way a lot of articles have been written, you’d think computers just got sufficiently powerful and someone threw a bunch of text at one until a chatbot popped out of it like it’s Zeus’s forehead, fully formed

mh, Sunday, 4 June 2023 16:27 (two years ago)

Huh, I'd been wondering whether we should have a "Who is Eliezer Yudkowsky and can we eat him freeze him for later eating?" thread. An acquaintance I occasionally read the twitter of is in with that crowd, so I occasionally go read Yudkowsky's - though this is a bad habit that I should try to break.

There's some self-interest in the recent announcement (though, these are not generally people who clamour for more government regulation), but I think there's also a bunch of pareidolia, like with Blake Lemoine, some real "the beguiling voices you only hear when you stare at the flames for 200+ hours".

I'm not sure if I misread above, but Yudkowsky isn't Roko, he's just the guy who set up the whole LessWrong community - David Gerard has a good article on its effects on Effective Altruism, which reminds me to link to Elizabeth Sandifer on Yudkowsky, which contains the crucial context for him - he is first and foremost a crank, albeit one who has a lot of reach at present.

He's an interesting writer (in that he's not as terrible as you'd naturally assume) - I found this memorial (including the update right at the end of the comments) after the death of his brother to be moving and powerful, while also revealing a very broken sense of humanity. I genuinely think his anguish is real, even when the sources are (elsewhere) silly.

(No, I will not be reading the Harry Potter work, though I understand that it's more highjacking a popular franchise as a framing for his thoughts, than anything that can really be called 'fanfic')

The fuel behind the explosion of capability they expect is the idea of 'intelligence' as a linear, number-goes-up, value: We are intelligent enough to make computers that will be more intelligent than us, which will make computers more intelligent than them, and so on and on infinitely, IQ one billion! They're generally big believers in the idea that intelligence is a real thing measured by IQ tests rather than a function of it and, as usual, when you get to that you're only 10 minutes from the word 'heritable' and then, as they say, you're off to the races.

There's a Strangelove vibe that creeps in as well - "If this is the battle for the end of the world, then surely the traitors to humanity are those that insist there are words we can't say / thoughts we can't think" - AFAIK Yudkowsky's not that far, he just has the usual Libertarian free speech brainworms - I wasn't surprised to see him linking to a "how I was cancelled" post by Kathleen Stock on Unherd earlier this week.

(There may also be a bit of "these things will be even further above me as I am above the masses, and will hold me (me!) in even more contempt than I hold them"... but that might just be the guy I know)

There are of course people in the space that are worth listening to, though they define themselves more as AI ethics than safety - I understand Timnit Gebru is a good follow there. The angle is one that's been mentioned a lot above, that we should consider the actual effects of this on people right now, and the intersections with already-existing injustices. Though for a lot of the doomers, that's just what they don't want - there's already a choice of apocalypses available, but none of those on offer centre these guys as much as they'd want.

Andrew Farrell, Sunday, 4 June 2023 22:32 (two years ago)

Timnit Gebru is more credible than Eliezer Yudkowsky but she’s still very much a crank.

Allen (etaeoe), Sunday, 4 June 2023 23:06 (two years ago)

We are intelligent enough to make computers that will be more intelligent than us, which will make computers more intelligent than them, and so on and on infinitely, IQ one billion!

If it’s gonna do this on Windows I give it maybe 2 iterations before it crashes

The “more intelligent than us” thing I don’t quite get, computers already are in a lot of ways and have been for a long time. If people are fiddling around with ChatGPT or Midjourney and can’t see the difference, I dunno what to tell them. Hopefully this is just really good at fooling people.

frogbs, Sunday, 4 June 2023 23:49 (two years ago)

There does seem to be something about negatives that doesn't sit well with ChatGPT. I can't get it to write a Perec-style story without an 'e' in it, despite trying various prompts. I'm surprised because I've got it to do things that would seem way more tricky than that. It didn't bat an eyelid when I asked it to write an acrostic poem about my trip to the supermarket to buy ingredients for chicken cacciatore for instance!

Zelda Zonk, Sunday, 4 June 2023 23:53 (two years ago)

it doesnt do any oulipo or rules based composition well or at all, frankly

, Monday, 5 June 2023 03:53 (two years ago)

Some rules-based things it doesn't have a problem with - ask it to do a rhyming acrostic using the alphabet sequentially from A to Z, and it will comply. But no it doesn't do lipograms - or it will start to do one and then forget the rule after the first couple of sentences. Similarly, I asked it to compose a story using only sentences with exactly seven letters, but it couldn't maintain that past the first sentence or two.

Zelda Zonk, Monday, 5 June 2023 04:55 (two years ago)
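The contrast is striking because a lipogram constraint is trivial to verify in code, character by character, which is exactly the level these models don't operate at: the usual explanation offered is that they emit subword tokens rather than letters, so a rule like "no letter e" is hard for them to track. A minimal checker, as a sketch:

```python
# A lipogram rule (e.g. Perec's no-'e' constraint) checked the boring,
# reliable way: scan every word for the banned letter.

def violates_lipogram(text, banned="e"):
    """Return the offending words, if any, that contain a banned letter."""
    banned = banned.lower()
    return [w for w in text.split() if banned in w.lower()]

print(violates_lipogram("A void: a story without that symbol"))  # []
print(violates_lipogram("The quick brown fox"))                  # ['The']
```

A model that could call a checker like this in a generate-and-test loop would fare much better at Oulipo games than one asked to hold the constraint in its head.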

"I'm surprised because I've got it to do things that would seem way more tricky than that."

You mean like programming tasks?

xyzzzz__, Monday, 5 June 2023 06:40 (two years ago)

Been meaning to put through modernistic style poems and get it to re-write in older styles.

xyzzzz__, Monday, 5 June 2023 06:44 (two years ago)

interestingly, GPT has a very very hard time doing poetry that -doesn't- rhyme
https://arxiv.org/abs/2305.11064

sean gramophone, Monday, 5 June 2023 12:16 (two years ago)

when you get to that you're only 10 minutes from the word 'heritable' and then, as they say, you're off to the races

lol

Timnit Gebru is more credible than Eliezer Yudkowsky but she’s still very much a crank.

― Allen (etaeoe), Sunday, June 4, 2023 7:06 PM (yesterday)

could you expand on that? in what way is she a crank?

rob, Monday, 5 June 2023 13:48 (two years ago)

personally i don't think it makes you a crank to think that real-world right-now issues like AI processing's impact on climate change, rights/pay/conditions for AI workers, sexism/racism at AI companies, training-set secrecy, or copyright matters... -might- be the biggest-stakes questions surrounding AI

but i do think you're a crank if you're so certain of this that you have made your brand one that mocks and disparages all conversations that consider that other questions could also be of equal or even greater importance.

sean gramophone, Monday, 5 June 2023 14:51 (two years ago)

could you expand on that? in what way is she a crank?

She traffics in her beliefs the same way Yudkowsky does. Their positions are predicated on science fiction, so they’re impossible to refute, e.g., posted this weekend:

Why aren’t the “godfathers” of AI talking about the massive data theft from artists & their lawsuits eg? Because that discourse is too beneath their genius brains to cover? They have to talk about grand endeavors like SAVING HUMANITY? Because their practices would be implicated?

— @timnitGe✧✧✧@dair-commun✧✧✧.soc✧✧✧ on Mastodon (@timnitGebru) June 4, 2023

This equivalency of embeddings and data is so absurd it’s impossible to refute.

If Yudkowsky is “Bill Gates is micro-chipping us with vaccines,” Gebru is “COVID-19 is a bio-weapon.” Gebru is far closer to reality but she’s still far from reality. She also benefits from credentials (e.g., Fei-Fei Li was her PhD advisor) that Yudkowsky lacks.

Allen (etaeoe), Monday, 5 June 2023 15:00 (two years ago)

personally i don't think it makes you a crank to think that real-world right-now issues like AI processing's impact on climate change, rights/pay/conditions for AI workers, sexism/racism at AI companies, training-set secrecy, or copyright matters... -might- be the biggest-stakes questions surrounding AI

She isn’t a crank because of her beliefs. She’s a crank because she predicates her beliefs on bad science. If it helps to understand my perspective, I believe every issue you identified is an important issue.

Allen (etaeoe), Monday, 5 June 2023 15:02 (two years ago)

Erm. We may disagree about “copyright matters.” I don’t know what your position is about this.

Allen (etaeoe), Monday, 5 June 2023 15:03 (two years ago)

xp

thanks for the responses

This equivalency of embeddings and data is so absurd it’s impossible to refute.

could you explain that? I'm not sure what it means. And while that tweet is p strident, I wouldn't think the idea that generative AI models are trained on copyrighted materials would be controversial, but I'm probably missing your point

rob, Monday, 5 June 2023 15:06 (two years ago)

so far, there is no precedent on which to argue that training AI on copyrighted materials is a copyright violation. it's speculation based on non-existent rulings (and, imho, a dangerous precedent to be calling for.)

sean gramophone, Monday, 5 June 2023 15:14 (two years ago)

OK but a) there are several lawsuits going on right now, so it's a lot less speculative than "AI will prob end humanity" and b) I don't see that tweet really making a strict legal argument but a moral or ethical one. You can argue against her stance, but I don't think labelling it "speculation" makes sense since the "theft" did already in fact take place

rob, Monday, 5 June 2023 15:21 (two years ago)

could you explain that? I'm not sure what it means. And while that tweet is p strident, I wouldn't think the idea that generative AI models are trained on copyrighted materials would be controversial, but I'm probably missing your point

Her implication is that it’s possible to reconstruct the original data from these embeddings. It isn’t. Her other implication is that creator-specific features are being embedded. That’s unlikely.

Allen (etaeoe), Monday, 5 June 2023 15:27 (two years ago)
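One simple way to see the lossiness claim: even a toy bag-of-words "embedding" maps two different sentences onto the identical vector, so no inverter could tell which sentence produced it. Real learned embeddings are contextual and far richer, but the underlying point (arbitrarily long text squeezed into a fixed-size vector) is the same; this sketch only illustrates the collision, not any production system.

```python
# Toy demonstration that embeddings are lossy: a bag-of-words vector
# collides for different sentences, so exact inversion is impossible.
from collections import Counter

def bow_embed(text, vocab):
    counts = Counter(text.lower().split())
    return tuple(counts[w] for w in vocab)  # Counter gives 0 for missing words

vocab = ["man", "bites", "dog"]
a = bow_embed("man bites dog", vocab)
b = bow_embed("dog bites man", vocab)
print(a == b)  # identical vectors for two different sentences
```

Whether lossiness settles the ethical question about training data is a separate argument, as the posts below make clear; it only rules out literal reconstruction in the general case.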

OK but a) there are several lawsuits going on right now, so it's a lot less speculative than "AI will prob end humanity" and b) I don't see that tweet really making a strict legal argument but a moral or ethical one. You can argue against her stance, but I don't think labelling it "speculation" makes sense since the "theft" did already in fact take place

What makes this theft rather than fair use?

Allen (etaeoe), Monday, 5 June 2023 15:28 (two years ago)

I'm not a fair use expert, but I don't think it's impossible to argue against it in some cases. Getty obviously thinks there's a violation, and afaict the reproduced watermark makes it seem like a decent argument: https://www.theverge.com/2023/1/17/23558516/ai-art-copyright-stable-diffusion-getty-images-lawsuit.

Tbc I don't personally have a firm opinion on this, I'm just objecting to the idea that taking this position makes someone a crank. I think the real test wrt copyright will come once these bots start being (more) commodified.

rob, Monday, 5 June 2023 15:36 (two years ago)

btw etaeoe, I'll never find it in this big thread, but I swear *you* posted a paper somewhere recently itt arguing that you could reconstruct training data...?

rob, Monday, 5 June 2023 15:38 (two years ago)

