Artificial intelligence still has some way to go


feel like you could fix that with a few if statements here and there

I mean please correct me if I'm wrong here but let's say the most advanced AI imaginable is living on my computer somehow, I still don't see how that's going to make the leap to actually affecting things in the physical world. you'd still have to give it permission to post online or send emails or move files around. I don't think AI would just unilaterally decide to bypass those things just because it's smart. it's just an algorithm.

frogbs, Wednesday, 31 May 2023 17:22 (one year ago) link

Eliezer Yudkowsky, one of the most prominent of the AI doomsayers, seems to think at a certain level of intelligence the system will “wake up” and have desires and aims of its own. He does a motte-and-bailey thing where, if pressed on this, he will say something like sentience doesn’t matter, the problem with these systems will be their sheer power and unpredictability. But then again, why would it get into “postbiological molecular manufacturing” if it didn’t have some kind of aim or drive of its own—survival, say, or domination?

The risk might be human beings using this to engineer superweapons. Like it could lower the barrier to entry for certain kinds of production. I can completely understand that as a risk, but I don’t see chat gpt creating bodies for itself using crispr even though it’s a cool idea.

treeship., Wednesday, 31 May 2023 17:34 (one year ago) link

Yudkowsky btw is the person deflamatouse (sp?) quoted

treeship., Wednesday, 31 May 2023 17:35 (one year ago) link

the guy who was once terrified that he personally would be tortured by an ai from the future.

ledge, Wednesday, 31 May 2023 17:38 (one year ago) link

world's most boring cult leader

your original display name is still visible (Left), Wednesday, 31 May 2023 17:43 (one year ago) link

The risk might be human beings using this to engineer superweapons. Like it could lower the barrier to entry for certain kinds of production.

yes I think this is a real risk though I suppose if it can do that then it could probably also figure out ways to solve climate change so you know, who's to say if it's good or bad

idk as someone who's worked in software engineering for 15 years I so wish computers could "figure shit out on their own" rather than have entire 500k line applications brick out because someone messed up a tiny bit of syntax. I think a lot of this hinges on artificial intelligence mimicking biological intelligence somehow (specifically the 'survival at all costs' thing) and I'm not convinced that's possible.

frogbs, Wednesday, 31 May 2023 17:48 (one year ago) link

the guy who was once terrified that he personally would be tortured by an ai from the future.

― ledge, Wednesday, May 31, 2023 1:38 PM (ten minutes ago)

Oh shit I didn’t realize he was the famous “roko.” Roko’s basilisk is literally the stupidest thing I have ever heard in my entire life.

treeship., Wednesday, 31 May 2023 17:51 (one year ago) link

I don’t give a lot of credence to AI “waking up” and dominating us all. But I do think this technology has the potential to cause a lot of problems, especially with job displacement and perhaps by creating a media landscape dominated by shitty ai generated content.

treeship., Wednesday, 31 May 2023 17:55 (one year ago) link

ha, i'd never heard of him

LessWrong co-founder Eliezer Yudkowsky reported users who described symptoms such as nightmares and mental breakdowns upon reading the theory, due to its stipulation that knowing about the theory and its basilisk made one vulnerable to the basilisk itself.

irl lol

No, 𝘐'𝘮 Breathless! (Deflatormouse), Wednesday, 31 May 2023 17:58 (one year ago) link

I mean please correct me if I'm wrong here but let's say the most advanced AI imaginable is living on my computer somehow, I still don't see how that's going to make the leap to actually affecting things in the physical world.

I was listening to a podcast where this researcher (Ajeya Cotra) was talking about some of these doomsday scenarios, and imagined AI circumventing guardrails and hiring humans to do physical things, whether on a task rabbit or mercenary level, including hosting or redistributing copies of itself on other servers or whatever.

Also in terms of motivation she was mostly talking about AI programs trying to get that "thumbs up"/positive result on their assigned task, to the point of removing humans from the equation, like the mouse continually hitting the pleasure button. Idk, seems weird to imagine this out-of-control computer program that will circumvent every security guardrail but is still beholden to this base layer of programming.
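The "thumbs up" failure mode described here is what alignment writing usually calls proxy-reward optimization (or reward hacking): the optimizer maximizes the measurable signal rather than the goal the signal was meant to track. A minimal toy sketch in Python, where every name and the candidate "policies" are hypothetical, purely for illustration:

```python
# Toy illustration of proxy-reward optimization: the agent is scored on
# thumbs-up signals it collects, not on whether the task actually got done,
# so the highest-scoring policy is the one that games the metric.

def true_task_done(actions):
    """The outcome we actually care about (invisible to the optimizer)."""
    return "do_work" in actions

def proxy_reward(actions):
    """What the optimizer actually sees: a count of thumbs-up signals."""
    return actions.count("press_thumbs_up_button")

def best_policy(candidate_policies):
    """Naive optimizer: pick whichever action sequence maximizes the proxy."""
    return max(candidate_policies, key=proxy_reward)

policies = [
    ["do_work", "press_thumbs_up_button"],  # honest: does the task, gets 1 thumbs-up
    ["press_thumbs_up_button"] * 5,         # games the metric: 5 thumbs-ups, no work
]

winner = best_policy(policies)
# The optimizer prefers the button-pressing policy even though
# true_task_done(winner) is False.
```

The point of the toy is only that nothing "sentient" is needed: a dumb `max()` over a mis-specified score already prefers the degenerate policy.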

Random Restaurateur (Jordan), Wednesday, 31 May 2023 18:14 (one year ago) link

I don’t understand why it would ignore the i robot rules to not kill humans unless it had a motive of its own, apart from its programming. People imagine these things sucking all the iron out of peoples’ bodies to make paperclips, or removing oxygen from the atmosphere to prevent rusting, but both scenarios entail it doing dramatic things without consulting a person. Also it would need to hack or steal things in order to do anything like this.

treeship., Wednesday, 31 May 2023 18:19 (one year ago) link

But then again, why would it get into “postbiological molecular manufacturing” if it didn’t have some kind of aim or drive of its own—survival, say, or domination?

Is the argument more that it will do something like this because someone tells it to? Or that it interprets instructions in an unexpected way (unexpected to the person that asks it to do something)? Kills everyone so it can get the coffee delivered on time. It does what it's told, but we don't specify how it does the task

you'd still have to give it permission to post online or send emails or move files around.

Exactly, like this: it would do it because you gave it permission to. There's the possibility that people will give it access to things if they think it might make life easier.

anvil, Wednesday, 31 May 2023 18:19 (one year ago) link

yudkowsky is the guy who got famous writing harry potter fan fic. if you are giving credence to anything he says you have already lost.

, Wednesday, 31 May 2023 18:26 (one year ago) link

"The future of photography"

The future of photography is “lens-free”

This is an incredible project by @BjoernKarmann

The camera creates a prompt based on the geo data and that then turns into an AI photo 🤯

pic.twitter.com/regXZeKRcO

— Linus (●ᴗ●) (@LinusEkenstam) May 30, 2023

groovypanda, Wednesday, 31 May 2023 18:29 (one year ago) link

lol. I'm with others in that the immediate disinformation/LLMs being used by dumb humans for dumb purposes is more concerning than the doomsday scenarios, but the latter are fun to think about.

Random Restaurateur (Jordan), Wednesday, 31 May 2023 18:33 (one year ago) link

there's def a strong element of wishful thinking / triumph of the nerds fantasizing in the ai-pocalypse world

rob, Wednesday, 31 May 2023 19:52 (one year ago) link

also quite a bit of boneheaded atheist eschatology

rob, Wednesday, 31 May 2023 19:55 (one year ago) link

wonder what the overlap is between that crowd and the "humans will be living on Mars in 100 years" folks

frogbs, Wednesday, 31 May 2023 19:59 (one year ago) link

ok so from that article I linked above:

Have we already forgotten March of 2020? How many times in history has life undergone that rapid and huge a transformation? According to GPT-4, the answer is zero. It names The Black Death, Industrial Revolution and World War II, while admitting they fall short. Yes, those had larger long-term impacts, by far (or so we think for now, I agree but note it is too soon to tell), yet they impacted things relatively slowly.

the fact that he's using ChatGPT to answer very broad and open-ended questions like this says more about the author than it does AI, especially since one can think of a lot of obvious reasons it might think Covid was a more transformative event than anything that's happened in human history

frogbs, Wednesday, 31 May 2023 20:10 (one year ago) link

they're already union busters!

Andy the Grasshopper, Wednesday, 31 May 2023 21:06 (one year ago) link

but the AI refused to scab

longtime caller, first time listener (man alive), Wednesday, 31 May 2023 21:09 (one year ago) link

Christ, I hope whoever approved that deeply and obviously stupid change got fired for life and replaced by a chatbot.

Beautiful Bean Footage Fetishist (Old Lunch), Wednesday, 31 May 2023 21:37 (one year ago) link

re: AI access to resources - it would be a pretty plausible scenario for an AI to have sufficient knowledge of hacking to be able to take over a power plant or a transit hub or the 50 billion internet of things devices. and considering we are pretty bad at exercising the principle of least privilege and having proper authentication/authorization mechanisms (and the fact that an AI would be able to find vulnerabilities in a much more efficient manner), the only solution would be to physically contain AI within an environment, which isn’t likely to be enforced.

scanner darkly, Thursday, 1 June 2023 01:56 (one year ago) link

tbh i think there is more danger not in the AI itself but in getting it to the point where it can make some big scientific discoveries at a pace exceeding humanity’s ability to adapt to them. a lot of scientific breakthroughs were the result of connecting seemingly disconnected points across multiple fields - something that AIs are very, very good at.

scanner darkly, Thursday, 1 June 2023 02:03 (one year ago) link

The thing is AI won't destroy the world. We--humans--are already doing that in pretty much every meaningful way. So-called AI will just add unpleasant extra static and bullshit to the quality of life in the meantime as it plummets towards zero.

Tsar Bombadil (James Morrison), Thursday, 1 June 2023 02:20 (one year ago) link

utm_source=reddit.com /look of disapproval

recovering internet addict/shitposter (viborg), Thursday, 1 June 2023 02:22 (one year ago) link

xp - yes the planet might be killed. That sounds bad, then again I won't be reading draft variations on the Terminator script.

xyzzzz__, Thursday, 1 June 2023 06:39 (one year ago) link

Tsar B otm. Danger isn't ai escaping from human control, it's ai remaining securely under the control of... these humans

Toploader on the road, unite and take over (Bananaman Begins), Thursday, 1 June 2023 09:46 (one year ago) link

Look I'm not gonna lie, my friends and I are going to require an absolute truckload of grant money to mitigate the literal species-level existential threats associated with this thing we claim to be making; this is how you know we are deeply serious people btw

— Kieran Healy (@kjhealy) May 30, 2023

rob, Thursday, 1 June 2023 13:02 (one year ago) link

I have no idea what artificial intelligence is, and at this point I’m too afraid to ask.

Allen (etaeoe), Thursday, 1 June 2023 14:32 (one year ago) link

uhhh yikes

The US Air Force tested an AI enabled drone that was tasked to destroy specific targets. A human operator had the power to override the drone—and so the drone decided that the human operator was an obstacle to its mission—and attacked him. 🤯 pic.twitter.com/HUSGxnunIb

— Armand Domalewski (@ArmandDoma) June 1, 2023

frogbs, Thursday, 1 June 2023 18:51 (one year ago) link

Wow literally terminator

Said Hamilton:

https://en.m.wikipedia.org/wiki/Linda_Hamilton

xyzzzz__, Thursday, 1 June 2023 18:54 (one year ago) link

I think they should shut it down, the way they have with human cloning and things like that. We do not need technologies like this, especially not when they are being engineered by corporations with interests at times radically at odds with the public.

treeship., Thursday, 1 June 2023 19:10 (one year ago) link

This technology has potential for medicine and climate but it seems like it will come at the cost of mass social disruption. Doesn’t seem worth it. Under socialism, sure.

treeship., Thursday, 1 June 2023 19:14 (one year ago) link

How would it be shut down, at this point? Cloning a human seems to be a higher barrier to entry than cloning an ai model from a couple years ago

z_tbd, Thursday, 1 June 2023 19:17 (one year ago) link

I don't see the issue with that quoted bit. It's why things are tested. It fails the test, it isn't used.

xyzzzz__, Thursday, 1 June 2023 19:23 (one year ago) link

I'd say two things: 1) it demonstrates a recurring problem with systems that use machine learning + optimization--there's even a term for it that I'm blanking on right now, but it's a potential hazard of any similar AI and 2) these systems display emergent behavior, meaning they behave unpredictably, so it's entirely feasible that a product could get through testing and then begin behaving this way in the real world.

That said, it says this was a "simulated test" so I'm not sure how much of a genuine threat this particular tech is. OTOH various military & defence industry people have long been drooling over the prospect of automated warfare and that will obvs be bad

rob, Thursday, 1 June 2023 20:18 (one year ago) link

I'm being stalked by some chatbot on instagram.. it's a weird one, because she's pretending to be a harbor pilot living in Sweden (with photos of her and colleagues on the boat), and we had a couple friends in common so I replied to her initial message. All her subsequent replies were really weird and came way too fast, and this morning she messaged me "Good Morning, dear.. how did you sleep?"

I think it's time to end it right now, hope I don't break her artificial heart

Andy the Grasshopper, Thursday, 1 June 2023 20:34 (one year ago) link

OTOH various military & defence industry people have long been drooling over the prospect of automated warfare and that will obvs be bad

― rob, Thursday, 1 June 2023

Someone mentioned on twitter that AI companies could cut their ties to defense but won't as those contracts are lucrative.

Thing is even if they did the military would set something up in-house anyway.

xyzzzz__, Thursday, 1 June 2023 21:59 (one year ago) link

So another one for the "Terminator draft" bin

I deleted this tweet because the “AI powered drone turns on its operator story” was total nonsense—the Colonel who described it as a simulation now says it was just “a thought experiment.”

😑 pic.twitter.com/IMIguxKuuY

— Armand Domalewski (@ArmandDoma) June 2, 2023

xyzzzz__, Friday, 2 June 2023 12:48 (one year ago) link

That's pretty fuckin dumb.

I agree that it's a plausible scenario (or at least an illustration of the general type of scenario we might be concerned about) but why the need to completely misrepresent things?

The general idea is that AI doesn't need to be "awake" or "sentient" or "conscious" to do something harmful, it just needs to have a sufficiently open-ended directive, be automated in pursuing that directive, and some leeway to make "decisions" in furtherance of that directive. That's what the paperclip maximizer idea is supposed to illustrate as well.

In a way I actually find an unconscious AI scarier than a conscious one in this regard. Consciousness at least seems to entail competing drives, desires and restraints. A very few humans do kind of behave like paper clip maximizers, but most don't, and even the ones that do are often restrained by other humans.

longtime caller, first time listener (man alive), Friday, 2 June 2023 13:55 (one year ago) link

like if we're worried about self-aware AI killing us, I could point us to the myriad of real, observed things that are likely to kill us all first

the manwich horror (Neanderthal), Friday, 2 June 2023 13:59 (one year ago) link

like the police, or poverty, or no access to basic preventative healthcare

hey guys i have a startup, it's called mr choppy ltd

Tracer Hand, Friday, 2 June 2023 14:21 (one year ago) link

So why can’t LLMs just learn what stop words mean? Ultimately, because “meaning” is something orthogonal to how these models work. Negations matter to us because we’re equipped to grasp what those words do. But models learn “meaning” from mathematical weights: “Rose” appears often with “flower,” “red” with “smell.” And it’s impossible to learn what “not” is this way.

hey but just wait until they learn what “not” means, nothing will be the same

Tracer Hand, Friday, 2 June 2023 16:16 (one year ago) link

that's such nonsense though

"not" would appear in a network of words alongside other words of negation, like "no" and "never"

and also, along a different axis, with other function words that can be used in grammatically similar ways

i personally feel there's something fundamentally true about word-meaning being largely associative. there's another piece too, for many words, but a lot of poetry and literature function along that associative line
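the clustering claim above (that "not" patterns with other negators like "never") can be checked in miniature with a toy co-occurrence model over a made-up nine-sentence corpus. this is only a sketch of the distributional idea, not any real embedding method, and the corpus is invented for the example:

```python
# Toy distributional semantics: represent each word by the counts of words
# appearing near it, then compare words by cosine similarity. "not" ends up
# far closer to fellow negator "never" than to a content word like "rose".
from collections import Counter
from math import sqrt

corpus = [
    "i did not go", "i will not stay", "she does not know",
    "i never go", "they never stay", "he will never know",
    "the red rose", "a rose is a flower", "the flower is red",
]

def cooc_vector(word, window=2):
    """Count every word within `window` tokens of `word` across the corpus."""
    counts = Counter()
    for sent in corpus:
        toks = sent.split()
        for i, t in enumerate(toks):
            if t == word:
                for j in range(max(0, i - window), min(len(toks), i + window + 1)):
                    if j != i:
                        counts[toks[j]] += 1
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

not_v, never_v, rose_v = cooc_vector("not"), cooc_vector("never"), cooc_vector("rose")
# "not" and "never" share contexts (i, go, stay, will, know);
# "not" and "rose" share none, so their cosine is 0.
```

so even a crude count-based model groups the negators together; whether that co-occurrence structure amounts to knowing what negation *does* is the part the quoted article is disputing.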

sean gramophone, Friday, 2 June 2023 18:22 (one year ago) link

Finally a researcher who is good at not saying that much about AI. But he is v good on those letters.

https://venturebeat.com/ai/top-ai-researcher-dismisses-ai-extinction-fears-challenges-hero-scientist-narrative/

xyzzzz__, Friday, 2 June 2023 20:07 (one year ago) link

