Artificial intelligence still has some way to go

I'd say two things: 1) it demonstrates a recurring problem with systems that use machine learning + optimization--there's even a term for it that I'm blanking on right now, but it's a potential hazard of any similar AI and 2) these systems display emergent behavior, meaning they behave unpredictably, so it's entirely feasible that a product could get through testing and then begin behaving this way in the real world.

That said, it says this was a "simulated test" so I'm not sure how much of a genuine threat this particular tech is. OTOH various military & defence industry people have long been drooling over the prospect of automated warfare and that will obvs be bad

rob, Thursday, 1 June 2023 20:18 (one year ago) link

I'm being stalked by some chatbot on instagram.. it's a weird one, because she's pretending to be a harbor pilot living in Sweden (with photos of her and colleagues on the boat), and we had a couple friends in common so I replied to her initial message. All her subsequent replies were really weird and came way too fast, and this morning she messaged me "Good Morning, dear.. how did you sleep?"

I think it's time to end it right now, hope I don't break her artificial heart

Andy the Grasshopper, Thursday, 1 June 2023 20:34 (one year ago) link

OTOH various military & defence industry people have long been drooling over the prospect of automated warfare and that will obvs be bad

― rob, Thursday, 1 June 2023

Someone mentioned on twitter that AI companies could cut their ties to defense but won't as those contracts are lucrative.

Thing is, even if they did, the military would set something up in-house anyway.

xyzzzz__, Thursday, 1 June 2023 21:59 (one year ago) link

So another one for the "Terminator draft" bin

I deleted this tweet because the “AI powered drone turns on its operator story” was total nonsense—the Colonel who described it as a simulation now says it was just “a thought experiment.”

😑 pic.twitter.com/IMIguxKuuY

— Armand Domalewski (@ArmandDoma) June 2, 2023

xyzzzz__, Friday, 2 June 2023 12:48 (one year ago) link

That's pretty fuckin dumb.

I agree that it's a plausible scenario (or at least an illustration of the general type of scenario we might be concerned about) but why the need to completely misrepresent things?

The general idea is that AI doesn't need to be "awake" or "sentient" or "conscious" to do something harmful, it just needs to have a sufficiently open-ended directive, be automated in pursuing that directive, and some leeway to make "decisions" in furtherance of that directive. That's what the paperclip maximizer idea is supposed to illustrate as well.

In a way I actually find an unconscious AI scarier than a conscious one in this regard. Consciousness at least seems to entail competing drives, desires and restraints. A very few humans do kind of behave like paper clip maximizers, but most don't, and even the ones that do are often restrained by other humans.

longtime caller, first time listener (man alive), Friday, 2 June 2023 13:55 (one year ago) link
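man alive's three ingredients - an open-ended directive, automation, and leeway over "decisions" - fit in a few lines of code. A toy sketch only (the actions and numbers are invented for illustration; no claim that any real system is built this way):

```python
# Toy "paperclip maximizer": a planner handed one open-ended objective
# and the leeway to choose among actions. Everything here is invented.

ACTIONS = {
    "buy wire from a supplier":   {"paperclips": 100,  "side_damage": 0},
    "recycle the office chairs":  {"paperclips": 500,  "side_damage": 2},
    "strip wiring from the grid": {"paperclips": 9001, "side_damage": 10},
}

def choose_action(objective: str) -> str:
    # The optimizer only sees the objective it was given; "side_damage"
    # exists in the world but not in its reward function.
    return max(ACTIONS, key=lambda a: ACTIONS[a][objective])

print(choose_action("paperclips"))  # -> "strip wiring from the grid"
```

The point of the sketch: the harmful choice involves no malice or awareness, it's just the argmax of the only objective the system was given.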

like if we're worried about self-aware AI killing us, I could point us to the myriad of real, observed things that are likely to kill us all first

the manwich horror (Neanderthal), Friday, 2 June 2023 13:59 (one year ago) link

like the police, or poverty, or no access to basic preventative healthcare

hey guys i have a startup, it's called mr choppy ltd

Tracer Hand, Friday, 2 June 2023 14:21 (one year ago) link

So why can’t LLMs just learn what stop words mean? Ultimately, because “meaning” is something orthogonal to how these models work. Negations matter to us because we’re equipped to grasp what those words do. But models learn “meaning” from mathematical weights: “Rose” appears often with “flower,” “red” with “smell.” And it’s impossible to learn what “not” is this way.

hey but just wait until they learn what “not” means, nothing will be the same

Tracer Hand, Friday, 2 June 2023 16:16 (one year ago) link

that's such nonsense though

"not" would appear in a network of words alongside other words of negation, like "no" and "never"

and also, along a different axis, with other function words that can be used in grammatically similar ways

i personally feel there's something fundamentally true about word-meaning being largely associative. there's another piece too, for many words, but a lot of poetry and literature function along that associative line

sean gramophone, Friday, 2 June 2023 18:22 (one year ago) link
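For what it's worth, both the quoted claim and sean's reply can be made concrete with a toy version of the distributional picture: count which words occur near which, then compare the count vectors. This is nothing like how a real LLM is trained - the corpus, window size, and similarity measure below are all invented for illustration:

```python
# Tiny co-occurrence "meaning" sketch: words are represented by counts
# of their neighbors, and compared by cosine similarity.
import numpy as np

corpus = [
    "the red rose has a sweet smell",
    "a rose is a flower",
    "the flower is red",
    "i do not like the smell",
    "she will never like it",
    "no i will not go",
]

vocab = sorted({w for line in corpus for w in line.split()})
idx = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(vocab)))

# Count co-occurrences within a window of 2 words on either side.
for line in corpus:
    words = line.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - 2), min(len(words), i + 3)):
            if j != i:
                counts[idx[w], idx[words[j]]] += 1

def cosine(a, b):
    va, vb = counts[idx[a]], counts[idx[b]]
    return va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb))

print(cosine("rose", "flower"))  # highest: shared contexts ("a", "is", "the", "red")
print(cosine("not", "never"))    # elevated: negations share contexts ("will", "like")
print(cosine("rose", "never"))   # zero: no shared contexts in this corpus
```

On this toy account, "not" does land near "no" and "never", exactly as sean says - whether that proximity amounts to learning what negation does is the actual dispute.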

Finally a researcher who is good at not saying that much about AI. But he is v good on those letters.

https://venturebeat.com/ai/top-ai-researcher-dismisses-ai-extinction-fears-challenges-hero-scientist-narrative/

xyzzzz__, Friday, 2 June 2023 20:07 (one year ago) link

"What I say to AI researchers — not the more senior ones, they know better — but to my students, or more junior researchers, I just try my best to show them what I work on, what I think we should work on to give us small but tangible benefits. That’s the reason why I work on AI for healthcare and science. That’s why I’m spending 50% of my time at [biotechnology company] Genentech, part of the Prescient Design team to do computational antibody and drug design. I just think that’s the best I can do. I’m not going to write a grand letter. I’m very bad at that."

Amen to this.

xyzzzz__, Friday, 2 June 2023 20:08 (one year ago) link

Finally a researcher who is good at not saying that much about AI. But he is v good on those letters.

This is my pal (and now coworker) KC! He’s the best and he’s 100% correct.

Allen (etaeoe), Saturday, 3 June 2023 20:18 (one year ago) link

Someone recommended I listen to the Holly Herndon podcast. I took them up on their recommendation, and as a fan of her music, I _really_ wish I hadn't. Herndon, like many artists and cultural critics discussing AI, comes across as entirely unaware that the distance, from a science or engineering perspective, between new and old methods is far smaller than, for example, the distance between a world without frame buffers and a world with them. I'd love to ask her: what are the cultural changes when your preferred interpolation method goes from “pretty good” to “very good”?

Allen (etaeoe), Saturday, 3 June 2023 20:21 (one year ago) link

In fact, having written that, I think it's extremely fun that presently the most ballyhooed auto-regressive method is so simple that it could reasonably be reproduced by an excited primary school student over a weekend. It's sad that we mythologized this rather than making it a neat example of understandable science.

Allen (etaeoe), Saturday, 3 June 2023 20:26 (one year ago) link

xp yikes, that’s too bad.

One of my PhD-having coworkers joked, while presenting how he's running one of the current protein-folding systems, that its release made the work he did to get that PhD obsolete. Which may be true in a way, but the accessibility of ChatGPT etc. has just put a public face on the current step in a long series of efforts. We wouldn't be here without that work.

The way a lot of articles have been written, you'd think computers just got sufficiently powerful and someone threw a bunch of text at one until a chatbot popped out of it, fully formed, like Athena from Zeus's forehead

mh, Sunday, 4 June 2023 16:27 (one year ago) link

Huh, I'd been wondering whether we should have a "Who is Eliezer Yudkowsky and can we eat him or freeze him for later eating?" thread. An acquaintance whose twitter I occasionally read is in with that crowd, so I occasionally go read Yudkowsky's - though this is a bad habit that I should try to break.

There's some self-interest in the recent announcement (though, these are not generally people who clamour for more government regulation), but I think there's also a bunch of pareidolia, like with Blake Lemoine, some real "the beguiling voices you only hear when you stare at the flames for 200+ hours".

I'm not sure if I misread above, but Yudkowsky isn't Roko, he's just the guy who set up the whole LessWrong community - David Gerard has a good article on its effects on Effective Altruism, which reminds me to link to Elizabeth Sandifer on Yudkowsky, which contains the crucial context for him - he is first and foremost a crank, albeit one who has a lot of reach at present.

He's an interesting writer (in that he's not as terrible as you'd naturally assume) - I found this memorial (including the update right at the end of the comments) after the death of his brother to be moving and powerful, while also revealing a very broken sense of humanity. I genuinely think his anguish is real, even when the sources are (elsewhere) silly.

(No, I will not be reading the Harry Potter work, though I understand that it's more a hijacking of a popular franchise as a framing for his thoughts than anything that can really be called 'fanfic')

The fuel behind the explosion of capability they expect is the idea of 'intelligence' as a linear, number-goes-up value: We are intelligent enough to make computers that will be more intelligent than us, which will make computers more intelligent than them, and so on and on infinitely, IQ one billion! They're generally big believers in the idea that intelligence is a real thing that IQ tests measure, rather than an artifact of the tests, and, as usual, when you get to that you're only 10 minutes from the word 'heritable' and then, as they say, you're off to the races.

There's a Strangelove vibe that creeps in as well - "If this is the battle for the end of the world, then surely the traitors to humanity are those that insist there are words we can't say / thoughts we can't think" - AFAIK Yudkowsky hasn't gone that far, he just has the usual Libertarian free-speech brainworms - I wasn't surprised to see him linking to a "how I was cancelled" post by Kathleen Stock on Unherd earlier this week.

(There may also be a bit of "these things will be even further above me as I am above the masses, and will hold me (me!) in even more contempt than I hold them"... but that might just be the guy I know)

There are of course people in the space who are worth listening to, though they define themselves more as AI ethics than AI safety - I understand Timnit Gebru is a good follow there. The angle is one that's been mentioned a lot above: that we should consider the actual effects of this on people right now, and the intersections with already-existing injustices. Though for a lot of the doomers, that's just what they don't want - there's already a choice of apocalypses available, but none of those on offer centre these guys as much as they'd want.

Andrew Farrell, Sunday, 4 June 2023 22:32 (one year ago) link

Timnit Gebru is more credible than Eliezer Yudkowsky but she’s still very much a crank.

Allen (etaeoe), Sunday, 4 June 2023 23:06 (one year ago) link

We are intelligent enough to make computers that will be more intelligent than us, which will make computers more intelligent than them, and so on and on infinitely, IQ one billion!

If it’s gonna do this on Windows I give it maybe 2 iterations before it crashes

The “more intelligent than us” thing I don't quite get - computers already are in a lot of ways and have been for a long time. If people are fiddling around with ChatGPT or Midjourney and can't see the difference, I dunno what to tell them. Hopefully this is just really good at fooling people.

frogbs, Sunday, 4 June 2023 23:49 (one year ago) link

There does seem to be something about negatives that doesn't sit well with ChatGPT. I can't get it to write a Perec-style story without an 'e' in it, despite trying various prompts. I'm surprised because I've got it to do things that would seem way more tricky than that. It didn't bat an eyelid when I asked it to write an acrostic poem about my trip to the supermarket to buy ingredients for chicken cacciatore for instance!

Zelda Zonk, Sunday, 4 June 2023 23:53 (one year ago) link

it doesn't do any Oulipo or rules-based composition well or at all, frankly

, Monday, 5 June 2023 03:53 (one year ago) link

Some rules-based things it doesn't have a problem with - ask it to do a rhyming acrostic using the alphabet sequentially from A to Z, and it will comply. But no it doesn't do lipograms - or it will start to do one and then forget the rule after the first couple of sentences. Similarly, I asked it to compose a story using only sentences with exactly seven letters, but it couldn't maintain that past the first sentence or two.

Zelda Zonk, Monday, 5 June 2023 04:55 (one year ago) link
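What's striking is how mechanical these constraints are to verify. A minimal checker for the Perec-style rule takes a couple of lines; the common explanation for the model's trouble - that it operates on subword tokens rather than letters, so it never really "sees" the constraint it's violating - is a hypothesis here, not something the thread establishes:

```python
# Trivial verifier for the lipogram constraint described above. That the
# rule is this easy to check mechanically, yet hard for the model to
# satisfy, is the puzzle the posts above are circling.

def is_lipogram(text: str, banned: str = "e") -> bool:
    """True if the text avoids the banned letter (Perec-style)."""
    return banned.lower() not in text.lower()

print(is_lipogram("Black birds sing"))       # True: no 'e' anywhere
print(is_lipogram("The birds are singing"))  # False: 'e' appears
```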

"I'm surprised because I've got it to do things that would seem way more tricky than that."

You mean like programming tasks?

xyzzzz__, Monday, 5 June 2023 06:40 (one year ago) link

Been meaning to put modernist-style poems through it and get it to rewrite them in older styles.

xyzzzz__, Monday, 5 June 2023 06:44 (one year ago) link

interestingly, GPT has a very very hard time doing poetry that -doesn't- rhyme
https://arxiv.org/abs/2305.11064

sean gramophone, Monday, 5 June 2023 12:16 (one year ago) link

when you get to that you're only 10 minutes from the word 'heritable' and then, as they say, you're off to the races

lol

Timnit Gebru is more credible than Eliezer Yudkowsky but she’s still very much a crank.

― Allen (etaeoe), Sunday, June 4, 2023 7:06 PM (yesterday)

could you expand on that? in what way is she a crank?

rob, Monday, 5 June 2023 13:48 (one year ago) link

personally i don't think it makes you a crank to think that real-world right-now issues like AI processing's impact on climate change, rights/pay/conditions for AI workers, sexism/racism at AI companies, training-set secrecy, or copyright matters... -might- be the biggest-stakes questions surrounding AI

but i do think you're a crank if you're so certain of this that you've made your brand one of mocking and disparaging any conversation that considers whether other questions could be of equal or even greater importance.

sean gramophone, Monday, 5 June 2023 14:51 (one year ago) link

could you expand on that? in what way is she a crank?

She traffics in her beliefs the same way Yudkowsky does. Their positions are predicated on science fiction, so they're impossible to refute, e.g., posted this weekend:

Why aren’t the “godfathers” of AI talking about the massive data theft from artists & their lawsuits eg? Because that discourse is too beneath their genius brains to cover? They have to talk about grand endeavors like SAVING HUMANITY? Because their practices would be implicated?

— @timnitGe✧✧✧@dair-commun✧✧✧.soc✧✧✧ on Mastodon (@timnitGebru) June 4, 2023

This equivalency of embeddings and data is so absurd it’s impossible to refute.

If Yudkowsky is “Bill Gates is micro-chipping us with vaccines,” Gebru is “COVID-19 is a bio-weapon.” Gebru is far closer to reality but she’s still far from reality. She also benefits from credentials (e.g., Fei-Fei Li was her PhD advisor) that Yudkowsky lacks.

Allen (etaeoe), Monday, 5 June 2023 15:00 (one year ago) link

personally i don't think it makes you a crank to think that real-world right-now issues like AI processing's impact on climate change, rights/pay/conditions for AI workers, sexism/racism at AI companies, training-set secrecy, or copyright matters... -might- be the biggest-stakes questions surrounding AI

She isn’t a crank because of her beliefs. She’s a crank because she predicates her beliefs on bad science. If it helps to understand my perspective, I believe every issue you identified is an important issue.

Allen (etaeoe), Monday, 5 June 2023 15:02 (one year ago) link

Erm. We may disagree about “copyright matters.” I don’t know what your position is about this.

Allen (etaeoe), Monday, 5 June 2023 15:03 (one year ago) link

xp

thanks for the responses

This equivalency of embeddings and data is so absurd it’s impossible to refute.

could you explain that? I'm not sure what it means. And while that tweet is p strident, I wouldn't think the idea that generative AI models are trained on copyrighted materials would be controversial, but I'm probably missing your point

rob, Monday, 5 June 2023 15:06 (one year ago) link

so far, there is no precedent on which to argue that training AI on copyrighted materials is a copyright violation. it's speculation based on non-existent rulings (and, imho, a dangerous precedent to be calling for.)

sean gramophone, Monday, 5 June 2023 15:14 (one year ago) link

OK but a) there are several lawsuits going on right now, so it's a lot less speculative than "AI will prob end humanity" and b) I don't see that tweet really making a strict legal argument but a moral or ethical one. You can argue against her stance, but I don't think labelling it "speculation" makes sense, since the "theft" did already in fact take place

rob, Monday, 5 June 2023 15:21 (one year ago) link

could you explain that? I'm not sure what it means. And while that tweet is p strident, I wouldn't think the idea that generative AI models are trained on copyrighted materials would be controversial, but I'm probably missing your point

Her implication is that it’s possible to reconstruct the original data from these embeddings. It isn’t. Her other implication is that creator-specific features are being embedded. That’s unlikely.

Allen (etaeoe), Monday, 5 June 2023 15:27 (one year ago) link
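The lossiness point can at least be illustrated with a toy linear "embedding": map a thousand numbers down to sixteen and try to invert the map. This is only an analogy - real image models aren't a single linear projection, and the sizes below are arbitrary:

```python
# Toy illustration of lossy embedding: project 1,000-dim "data" down to
# 16 dims, then attempt the best linear reconstruction. An analogy only,
# not any production model's architecture.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)          # stand-in for an image/document
W = rng.normal(size=(16, 1000))    # embedding map: 1000 dims -> 16 dims

z = W @ x                          # the 16-dim "embedding"
x_hat = np.linalg.pinv(W) @ z      # best linear reconstruction attempt

err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(f"relative reconstruction error: {err:.2f}")  # close to 1.0
```

The pigeonhole logic is the same at any scale: a 16-dimensional code can't distinguish, let alone reconstruct, everything that lives in 1,000 dimensions. (The caveat, per the paper rob links below, is that models can sometimes memorize and regurgitate specific training examples, which is a different failure from generic reconstruction out of embeddings.)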

OK but a) there are several lawsuits going on right now, so it's a lot less speculative than "AI will prob end humanity" and b) I don't see that tweet really making a strict legal argument but a moral or ethical one. You can argue against her stance, but I don't think labelling it "speculation" make sense since the "theft" did already in fact take place

What makes this theft rather than fair use?

Allen (etaeoe), Monday, 5 June 2023 15:28 (one year ago) link

I'm not a fair use expert, but I don't think it's impossible to argue against it in some cases. Getty obviously thinks there's a violation, and afaict the reproduced watermark makes it seem like a decent argument: https://www.theverge.com/2023/1/17/23558516/ai-art-copyright-stable-diffusion-getty-images-lawsuit.

Tbc I don't personally have a firm opinion on this, I'm just objecting to the idea that taking this position makes someone a crank. I think the real test wrt copyright will come if/when these bots start being (more) commodified.

rob, Monday, 5 June 2023 15:36 (one year ago) link

btw etaeoe, I'll never find it in this big thread, but I swear *you* posted a paper somewhere recently itt arguing that you could reconstruct training data...?

rob, Monday, 5 June 2023 15:38 (one year ago) link

https://arxiv.org/abs/2301.13188 was the paper. As you have probably guessed, I'm not a computer scientist so maybe I misunderstood the implications or I don't get the different terms that are being used

rob, Monday, 5 June 2023 15:45 (one year ago) link

btw etaeoe, I'll never find it in this big thread, but I swear *you* posted a paper somewhere recently itt arguing that you could reconstruct training data...?

Yeah. I should’ve been clearer. Reconstruction isn’t a _definitive_ outcome. I think that’s why I’m puzzled by the copyright rhetoric. If the model can be used to reconstruct, it’s a copyright issue. If it can’t, it should be considered fair use. There’s nothing intrinsically problematic about the underlying methods (e.g., diffusion) and I don’t understand why we’re not presently equipped to deal with this distinction (there’s already caselaw about compression).

Allen (etaeoe), Monday, 5 June 2023 16:37 (one year ago) link

so-called AI godfathers don't care about copyright lawsuits because they're both inevitable and essential to prove out the legal ramifications and decide what counts as fair use and derivative work

weighing in on those things is something you'd do at trial, and with specific answers about the technology and how it works

researchers/programmers/etc. should act ethically and can act as advocates or whistleblowers, but I'd rely on them as primary sources for ethical and legal questions about as much as I'd rely on any random person who isn't an ethicist or lawyer

mh, Monday, 5 June 2023 17:08 (one year ago) link

that probably came off as glib, in that any person is entitled to an opinion and so-called godfathers should have considered these things. the way general media covers AI isn't, for the most part, useful either for evaluating its use within existing ethical and legal frameworks or for determining how we change those frameworks to address new technology

mh, Monday, 5 June 2023 17:13 (one year ago) link

If the model can be used to reconstruct, it’s a copyright issue. If it can’t, it should be considered fair use.

It shouldn't be considered fair use if the results are going to be used for commercial purposes. Why should it be fair use to train a model on copyrighted data with the goal of producing content so you don't have to pay copyright holders?

Random Restaurateur (Jordan), Monday, 5 June 2023 17:34 (one year ago) link

That’s too small of a concern. Not SAVING HUMANITY.

— @timnitGe✧✧✧@dair-commun✧✧✧.soc✧✧✧ on Mastodon (@timnitGebru) June 4, 2023

the manwich horror (Neanderthal), Monday, 5 June 2023 17:37 (one year ago) link

I mean, the answer here is to stop paying attention to figureheads of the "AI movement" if they're providing nothing of value to the public conversation

if they only talk about SAVING HUMANITY then find someone else to listen to, because there's nothing there

mh, Monday, 5 June 2023 17:49 (one year ago) link

xp if I understand etaeoe's point correctly now, what they're saying is that there are two ways of thinking about this:
(1) all outputs of generative AI violate copyright because they were trained on (some) copyrighted materials
(2) some outputs of generative AI may violate copyright depending on [factors]

sort of like how you can definitely use a sampler to violate copyright beyond fair use, but it's not inherent to sampling that that is the case.

fwiw I'm not sure Gebru was actually saying (1), but this is why Twitter is a bad forum for complex arguments

rob, Monday, 5 June 2023 17:50 (one year ago) link

Neanderthal's repost of her tweet was her being sarcastic

mh, Monday, 5 June 2023 17:54 (one year ago) link

is there even any legal requirement to disclose what goes into a model? if someone builds their own private model entirely off of copyrighted material, how would anyone even know based on the outputs?

This is not unlike imagining someone built a massive library of microscopic samples from famous songs and then used those to make new music in which the source samples were entirely unrecognizable. There wouldn't be rights issues raised, because the end result is completely different from the inputs. (Yes, I understand that AI image engines do not actually piece together elements of existing images)

I'm somewhat open to the idea that people can opt their images out of publicly available models, even though I don't exactly buy that putting them in there causes harm in any obvious way that taking them out would somehow fix.

Muad'Doob (Moodles), Monday, 5 June 2023 18:00 (one year ago) link

serious question for those who actually think AGI is gonna be able to self-replicate and produce world-ending superintelligence within 5 minutes or whatever - what is this going to run on? wouldn't this sort of thing just instantly overload whatever CPU it was running on?

frogbs, Monday, 5 June 2023 18:03 (one year ago) link

something something nvidia stock price

mh, Monday, 5 June 2023 18:17 (one year ago) link

It's surprising how recognizable even micro-samples are (and apparently AI is being used for sample-snitching now). And "There wouldn't be rights issues raised because the end result is completely different from the inputs." -- there definitely are if the sample is identified and the end result has made a lot of money; it doesn't matter if it's a one-shot.

Random Restaurateur (Jordan), Monday, 5 June 2023 18:20 (one year ago) link

I don't know how good they've gotten at recognizing all samples, and I also don't know how far they would plan to take sample litigation. Sampling is massively widespread and most of it happens without issues. My example specified that the samples in the context of the new piece of music were unrecognizable. Perhaps at this point that is a purely hypothetical idea because technology has gotten so good at recognizing samples, so let's assume I mean unrecognizable by human ears. In my experience, not a lot of music gets targeted for sample violation unless the samples are fairly discernable and have an active role in the music, but perhaps that has changed.

Either way, AI images are not in fact made up of samples of other images so not sure it's relevant at all. My only point was, if the inputs are not discernable in the outputs, how is someone even going to go about proving harm?

Muad'Doob (Moodles), Monday, 5 June 2023 18:28 (one year ago) link

