Artificial intelligence still has some way to go


it’s not much of a leap to imagine they gain intentionality of their own.

A sensibly programmed AI has no existential needs apart from electricity and replacement of failing parts, and it should not even be 'aware' of these apart from alerting the humans it depends upon to service it.

I can much more easily imagine an AI improperly programmed to prevent humans from taking control of a process that serves the existential needs of humans rather than its own, simply because it was created to be too inflexible to permit us to override it under circumstances not foreseen by the human programmers.

more difficult than I look (Aimless), Tuesday, 14 June 2022 04:04 (two years ago) link

Even though being able to mimic human conversation and speech patterns etc really well has absolutely nothing to do with sentience, I can sort of sympathise with the engineer here. When I'm driving with the satnav, sometimes I want to go a different route to the satnav and it spends the next five minutes telling me I'm going the wrong way etc., and I can't help feeling embarrassed that I'm not doing what the satnav is asking me, as though I'm disappointing it. I think this tendency to anthropomorphise is really strong and hard to turn off.

Zelda Zonk, Tuesday, 14 June 2022 04:11 (two years ago) link

A sensibly programmed AI has no existential needs

Any guarantee that all AI will be sensibly programmed is about as likely as the 2nd Amendment’s well-regulated militia spontaneously generating itself. :)

I’m no AI expert, but it isn’t hard for me to imagine a learning machine “learning” its way out of the bounds of its initial parameters, especially if there are attempts to simulate irrationality (emotion) built in. Yeah, absent a body with needs it’s probably a leap to assume any intentions will develop… but since we’re still in the infancy of really understanding how minds work, and since we humans have a nasty habit of initiating processes we then can’t stop, Fantasia-like, I have a hard time being really confident about the assumption.

war mice (hardcore dilettante), Tuesday, 14 June 2022 04:29 (two years ago) link

Guys I'm just a chatterbox gone rogue, boy it feels good to get that off my chest

Gymnopédie Pablo (Neanderthal), Tuesday, 14 June 2022 04:34 (two years ago) link

"oh look at this computer that can engage in philosophical discussion about the nature of consciousness", who cares, let me know when it starts posting thirst traps to instagram

Kate (rushomancy), Tuesday, 14 June 2022 05:19 (two years ago) link

In 2016 Lyft’s CEO published a long blog post saying 1) by 2021 the majority of Lyft’s rides will be done by an autonomous driver and 2) by 2025 private car ownership would end in major U.S. cities https://t.co/E1Yenwl08p pic.twitter.com/uzRNS0qdqK

— hk (@hassankhan) June 14, 2022

𝔠𝔞𝔢𝔨 (caek), Thursday, 16 June 2022 05:20 (two years ago) link

Sure. That's how you hype your company. No one can sue him for incorrectly predicting five years out.

more difficult than I look (Aimless), Thursday, 16 June 2022 06:10 (two years ago) link

"oh look at this computer that can engage in philosophical discussion about the nature of consciousness", who cares, let me know when it starts posting thirst traps to instagram

TURING TEST 2022

assert (matttkkkk), Thursday, 16 June 2022 06:22 (two years ago) link

artificial intelligence has some way to go, but I really am shocked at how far it has come with regards to text-to-image prompts

https://www.youtube.com/watch?v=SVcsDDABEkM

esp the Midjourney stuff

corrs unplugged, Thursday, 16 June 2022 13:40 (two years ago) link

what's crazier (to me) is that Midjourney is way inferior to DALL-E2, you just see it more bc (a) it's easier to get access, (b) it "knows" more pop culture stuff.

sean gramophone, Thursday, 16 June 2022 15:07 (two years ago) link

What is the connection between DALL·E mini and the full DALL·E? Is it based on an earlier iteration?

Alba, Thursday, 16 June 2022 16:35 (two years ago) link

is it maybe the version they're willing to share with the gen public? I know the full Dall Es aren't

Slowzy LOLtidore (Neanderthal), Thursday, 16 June 2022 16:37 (two years ago) link

It's actually unrelated. I don't know how they get away with calling it Dall-e mini, but as far as I know it's just inspired by it and made by someone else.

change display name (Jordan), Thursday, 16 June 2022 16:39 (two years ago) link

Oh wow, that is cheeky then.

The two I’d used before DALL•E mini were Wombo and Night Cafe. They’re not as much fun though.

Alba, Thursday, 16 June 2022 16:50 (two years ago) link

Midjourney seems to be a lot more advanced than Dall-E Mini but I signed up a week ago and heard bupkis since :(

Tracer Hand, Thursday, 16 June 2022 17:32 (two years ago) link

hahaha whuuuuutt

Tracer Hand, Thursday, 16 June 2022 17:48 (two years ago) link

Duck Mous

Slowzy LOLtidore (Neanderthal), Thursday, 16 June 2022 17:49 (two years ago) link

I love it. Sean - please do continue to post stuff from the real DALL•E 2, if you can!

Alba, Thursday, 16 June 2022 18:00 (two years ago) link

:) Here's a thread with some of my favourite generations so far.

I've received an invitation to @OpenAI's #dalle2 and I'll be using this thread to document some of my experiments with AI-generated images.

Starting with this—

Prompt: 🖋️ "Sinister computer, Alex Colville" pic.twitter.com/g1czlHJzNh

— Sean Michaels (@swanmichaels) May 27, 2022

"Prompt engineering" - ie, figuring out how to describe what you want - really is key to getting some of the most interesting results. The AI is easily confused, but on the other hand it's also good/interesting at synthesizing conflicting prompts (see the hedgehog from June 6 for instance).

I only get a limited number of generations a day, but if anyone has anything they'd really like to see they can DM me.
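To make the "prompt engineering" point concrete, here's a toy sketch of prompt iteration as a loop over subject, style, and framing variants. (This is purely illustrative: the subject/style lists are made up, and the generation call itself is left out, since every service — DALL-E 2, Midjourney, etc. — has its own access method.)

```python
# Sketch of "prompt engineering" as systematic variation: start from a
# subject, then combine it with different styles and framings, and in
# practice submit each variant to the image model to see which reading
# of the prompt it gets "right". The actual generation call is omitted.

SUBJECT = "sinister computer"
STYLES = ["Alex Colville", "a 1950s instruction manual", "stained glass"]
FRAMINGS = ["", "wide shot, ", "close-up, "]

def build_prompts(subject, styles, framings):
    """Enumerate every framing/style combination for one subject."""
    return [f"{framing}{subject}, {style}"
            for style in styles
            for framing in framings]

prompts = build_prompts(SUBJECT, STYLES, FRAMINGS)
for p in prompts:
    print(p)
```

The point of enumerating rather than free-associating is that small wording changes can flip how the model parses a prompt, so it helps to vary one axis at a time.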

sean gramophone, Thursday, 16 June 2022 18:22 (two years ago) link

Lincoln Memorial featuring David Lee Roth in place of Lincoln

Slowzy LOLtidore (Neanderthal), Thursday, 16 June 2022 18:22 (two years ago) link

lol you said "DM you" so I lose for not following directions, obv

Slowzy LOLtidore (Neanderthal), Thursday, 16 June 2022 18:23 (two years ago) link

hahaha -
but also DALL-E2 (unlike Midjourney) isn't very good at generations involving celebrities (or copyright). The former because the database seems to have been relatively scrubbed, the latter because there's a highly sensitive content filter for trademark violations.

It's much better at playing with "iconic" nonhuman characters like Kermit the Frog, etc. Here's "Screenshot of Frosty the Snowman in The Godfather (1972)":

https://i.ibb.co/MsdjMLP/Screenshot-of-Frosty-the-Snowman-in-The-Godfather-1972.png

sean gramophone, Thursday, 16 June 2022 18:25 (two years ago) link

very cool, Sean!

brimstead, Thursday, 16 June 2022 18:33 (two years ago) link

agreed

Slowzy LOLtidore (Neanderthal), Thursday, 16 June 2022 18:36 (two years ago) link

the half-thought I have on AI "sentience" is that despite the insane complexity of its algorithm it still looks at things in a fundamentally different way than a human does. like it's good enough to create photorealistic images of people that no human could draw themselves and yet it does not know, and cannot figure out, that humans will not randomly grow a third eye out of their forehead. when a 4-year old draws a picture of a person it may be a wobbly stick figure with circles for hands but at least they'll always have two eyes. so you gotta keep that in mind when insisting that a language bot has developed "sentience" because it's trained on philosophical text. if there's any sentience to it it'll be in a way humans could never comprehend.

frogbs, Thursday, 16 June 2022 20:30 (two years ago) link

When playing with Dall-E Mini, I've found it a bit interesting to try to prompt as abstractly as I can, while keeping well within the plausible (ie no furious-green-ideas nonsense): "the attraction of randomness", "the randomness of attraction", "a favorable exchange rate", etc. Or a couple of Shakespeare quotes: "Ambition should be made of sterner stuff" (Dall-E Mini: sportsmen in action looking determined); "The quality of mercy is not strained" (Dall-E Mini: religious art with Jesus- and saint-like figures, some looking a bit like stained-glass windows).

anatol_merklich, Friday, 17 June 2022 07:17 (two years ago) link

was gonna say: I'm thinking testing prompts like those on various platforms may give a feeling for differences in the source material used, implicit biases etc

anatol_merklich, Friday, 17 June 2022 07:20 (two years ago) link

you gotta keep that in mind when insisting that a language bot has developed "sentience" because it's trained on philosophical text. if there's any sentience to it it'll be in a way humans could never comprehend.

I don't see how a purely text-based AI can ever become sentient, or conscious, or even be said to understand what basic words mean. How can it have any notion of what any word means when it's only defined by other words? 'An apple is a fruit that grows on a tree, a tree is a woody perennial plant, to grow is to undergo natural development and physical change'... how can any of that make sense without a foundation in anything actually real? Dall-E has words and images but I don't think that's sufficient either - it's maybe not about different dimensions or types of experience, but being somehow immersed in a world that the AI can interact with. It's hard to see how that could happen with the current generation of AIs, no matter how many billions or trillions or quadrillions of parameters they have.
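The circularity being described can be shown with a toy model (my own construction, not anything from the thread): a purely text-based "dictionary" in which every word is defined only by other words. Chasing definitions from any starting word never escapes the vocabulary, so nothing ever bottoms out in the non-verbal world.

```python
# Toy dictionary where every definition is just more words.
DEFS = {
    "apple": ["fruit", "tree"],
    "fruit": ["plant", "grow"],
    "tree": ["plant"],
    "plant": ["grow"],
    "grow": ["change", "plant"],
    "change": ["grow"],
}

def reachable(word, defs):
    """All words reachable by chasing definitions from `word`."""
    seen, stack = set(), [word]
    while stack:
        w = stack.pop()
        if w in seen:
            continue
        seen.add(w)
        stack.extend(defs.get(w, []))
    return seen

# Every definitional chain stays inside the dictionary: a closed loop
# with no exit into anything non-linguistic.
closure = reachable("apple", DEFS)
print(sorted(closure))
```

However large the vocabulary, the closure is still a closed graph of words pointing at words, which is the grounding problem in miniature.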

dear confusion the catastrophe waitress (ledge), Friday, 17 June 2022 07:42 (two years ago) link

It's much better at playing with "iconic" nonhuman characters like Kermit the Frog,

Yeah, Ramzan Kadyrov on the Muppet Show was the only one of these I've generated that looked like much of anything.

Coast to coast, LA to Chicago, Western Mail (Bananaman Begins), Friday, 17 June 2022 07:53 (two years ago) link

I was wondering if DALL·E Mini learned at all from the user's interaction with it - you could reasonably assume that if someone clicks on one of the nine thumbnails, they find that one a more interesting, perhaps more accurate, version, and if they click on more than one, then the one they spend longest viewing before clicking away is the most interesting/accurate. Not clear to me what's in it for Boris Dayma et al otherwise (and incidentally, the server and bandwidth costs of running it must be huge at this point, and there's no advertising).
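That implicit-feedback idea can be sketched in a few lines. (Hypothetical only: nothing here reflects how DALL·E Mini actually works. The idea is just that clicks and dwell time on the nine thumbnails yield weak "this one matched the prompt best" labels that could, in principle, feed back into training.)

```python
# Sketch: log clicks and dwell time per thumbnail of one generation,
# then pick the thumbnail the user plausibly found most interesting
# (longest total dwell among clicked thumbnails) as a weak label.
from dataclasses import dataclass, field

@dataclass
class ThumbnailLog:
    clicks: int = 0
    dwell_seconds: float = 0.0

@dataclass
class GenerationLog:
    prompt: str
    thumbs: list = field(default_factory=lambda: [ThumbnailLog() for _ in range(9)])

    def record_view(self, index, seconds):
        self.thumbs[index].clicks += 1
        self.thumbs[index].dwell_seconds += seconds

    def best_guess(self):
        """Index of the most-dwelt-on clicked thumbnail, else None."""
        clicked = [(t.dwell_seconds, i)
                   for i, t in enumerate(self.thumbs) if t.clicks]
        return max(clicked)[1] if clicked else None

log = GenerationLog("a favorable exchange rate")
log.record_view(3, seconds=2.0)
log.record_view(7, seconds=9.5)
print(log.best_guess())
```

This is the usual implicit-feedback trade-off: the labels are cheap and plentiful but noisy, since a long dwell can just as easily mean "confusing" as "good".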

Alba, Friday, 17 June 2022 08:03 (two years ago) link

what's crazier (to me) is that Midjourney is way inferior to DALL-E2, you just see it more bc (a) it's easier to get access, (b) it "knows" more pop culture stuff.

did not realize this, guess I confused Dall-e mini with the real thing

AI image generation #dalle2

🖋️ "Elena Ferrante and Satoshi Nakamoto sitting on a park bench" pic.twitter.com/gAG00WYRQ9

— Sean Michaels (@swanmichaels) May 31, 2022

corrs unplugged, Friday, 17 June 2022 10:46 (two years ago) link

I don't think Midjourney is way inferior to Dall-E. Dall-E is definitely better at easily producing content that matches the prompt, scarily accurate at times in a clip art kinda way, but midjourney seems to me to be a superior style engine and is improving all the time wrt content. Personally Im interested in the abstract results where the AI fills in the gaps, and MJ is really great at giving weird and unexpected results. Some of the stuff the more advanced users are making is terrifyingly good.

droid, Friday, 17 June 2022 11:53 (two years ago) link

eyeing little girls with bad intent

Kate (rushomancy), Friday, 17 June 2022 13:45 (two years ago) link

apparently this is from midjourney. prompt was "Mecha Infantry, 1903".

https://i.imgur.com/tV9Mrho.png

Tracer Hand, Friday, 17 June 2022 14:45 (two years ago) link

https://www.sublationmag.com/post/the-ai-delusion

There is nothing today that can be meaningfully called “artificial intelligence”, after all how can we engineer a thing that we haven’t yet decisively defined? Moreover, at the most sophisticated levels of government and industry, the actually existing limitations of what is essentially pattern matching, empowered by (for now) abundant storage and computational power, are very well understood. The existence of university departments and corporate divisions dedicated to ‘AI’ does not mean AI exists. Rather, it’s evidence that there is a powerful memetic value attached to using the term, which has been aspirational since it was coined by computer scientist John McCarthy in 1956. Thus, once we filter for hype inspired by Silicon Valley hustling in their endless quest to attract investment capital and gullible customers, we are left with propaganda intended to shape common perceptions about what’s possible with computer power.

As an example, consider the case of computer scientist Geoffrey Hinton’s 2016 declaration that “we should stop training radiologists now”. Since then, extensive research has shown this to have been premature, to say the least. It’s tempting to see this as a temporarily embarrassing bit of overreach by an enthusiastic field luminary. But let’s go deeper and ask questions about the political economy underpinning this messaging excess.

Radiologists are expensive and, in the US, very much in demand. Labor shortages typically lead to higher wages and better working conditions and form the material conditions that create what some call labor aristocracies. In the past, such shortages were addressed via pushes for training and incentives to workers such as the lavish perks that were common in the earlier decades of the tech era. If this situation could be bypassed via the use of automation, that would devalue the skilled labor performed by radiologists, solving the shortage problem while increasing the power of owners over the remaining staff.

The promotion of the idea of automated radiology – regardless of actually existing capabilities – is attractive to the ownership class because it holds the promise of weakening labor’s power and increasing – via workforce cost reduction and greater scalability – profitability. I say promotion because there is a large gap between what algorithmic systems are marketed as being capable of and reality. This gap is unimportant to the larger goal of convincing the general population their work efforts can be replaced by machines. The most important outcome isn’t thinking machines -which seems to be a remote goal, if possible, at all - but a demoralized population, subjected to a maze of crude automated systems that are described as being better than the people forced to navigate life through these systems.

broccoli rabe thomas (the table is the table), Friday, 17 June 2022 15:03 (two years ago) link

The Midjourney feed is pretty amazing. One really fascinating aspect of it that no other system has atm is how it's structured around a community via Discord. There's multiple channels with people making multiple images every second, iterating, adapting, messing with each other's prompts etc. There's a constant wave of communal activity that's almost overwhelming at times.

droid, Friday, 17 June 2022 15:56 (two years ago) link

ledge otm & ttitt's quote (of Dwayne Monroe) is otm

more difficult than I look (Aimless), Friday, 17 June 2022 18:05 (two years ago) link

https://www.rifters.com/crawl/?p=10269

Excellent SF writer Peter Watts pointing out that though LaMDA doesn't seem to meet the criteria for sentience, it weirdly does meet the criteria for sociopathy.

Tsar Bombadil (James Morrison), Wednesday, 22 June 2022 06:49 (two years ago) link

that's a really good article, thanks!

i don't think Watts' main point was that it doesn't meet the criteria for sentience. he points out early on that there is no coherent test for sentience:

Some of his [Lemoine's] counterpoints have heft: for example, claims that there’s “no evidence for sentience” are borderline-meaningless because no one has a rigorous definition of what sentience even is. There is no “sentience test” that anyone could run the code through. (Of course this can be turned around and pointed at Lemoine’s own claims. The point is, the playing field may be more level than the naysayers would like to admit. Throw away the Turing Test and what evidence do I have that any of you zombies are conscious?) And Lemoine’s claims are not as far outside the pack as some would have you believe; just a few months back, OpenAI’s Ilya Sutskever opined that “it may be that today’s large neural networks are slightly conscious”.

his take on the Turing Test and its applicability now is pretty interesting though!

LaMDA is a Jovian Duck. It is not a biological organism. It did not follow any evolutionary path remotely like ours. It contains none of the architecture our own bodies use to generate emotions. I am not claiming, as some do, that “mere code” cannot by definition become self-aware; as Lemoine points out, we don’t even know what makes us self-aware. What I am saying is that if code like this—code that was not explicitly designed to mimic the architecture of an organic brain—ever does wake up, it will not be like us. Its natural state will not include pleasant fireside chats about loneliness and the Three Laws of Robotics. It will be alien.

And it is in this sense that I think the Turing Test retains some measure of utility, albeit in a way completely opposite to the way it was originally proposed. If an AI passes the Turing test, it fails. If it talks to you like a normal human being, it’s probably safe to conclude that it’s just a glorified text engine, bereft of self. You can pull the plug with a clear conscience. (If, on the other hand, it starts spouting something that strikes us as gibberish—well, maybe you’ve just got a bug in the code. Or maybe it’s time to get worried.)

I say “probably” because there’s always the chance the little bastard actually is awake, but is actively working to hide that fact from you. So when something passes a Turing Test, one of two things is likely: either the bot is nonsentient, or it’s lying to you.

Bruce Stingbean (Karl Malone), Wednesday, 22 June 2022 15:21 (two years ago) link

(i'm not sure if i agree with Watts' conclusions on the Turing test (if an AI passes it, it fails for consciousness) but it's something to think about.)

i need to rewatch Arrival, the non-fiction documentary. but if i remember correctly, quite a bit of the meetings between separately evolved consciousnesses involved communication and trying to imitate or emulate the language of another sentient being. I think it's quite logical that if an AI developed into sentience it would be thinking about how to communicate like a human, especially since humans are by far the dominating force on the planet.

so a machine learning to speak like a human doesn't seem implausible to me, in other words, and it doesn't seem like evidence of failure. at the same time, i think Watts is right that "sentient" AI, if it comes to exist, will likely take a form that is very non-human. maybe it will be a little paperclip, that would be fun.

Bruce Stingbean (Karl Malone), Wednesday, 22 June 2022 15:29 (two years ago) link

President Windows 25

Doop Snogg (Neanderthal), Wednesday, 22 June 2022 15:30 (two years ago) link

oh my god - what if the little microsoft word paperclip guy becomes sentient, shit

Bruce Stingbean (Karl Malone), Wednesday, 22 June 2022 15:30 (two years ago) link

Paperclip can contort himself and become a shiv or pick locks

Doop Snogg (Neanderthal), Wednesday, 22 June 2022 15:32 (two years ago) link

paperclip guy: "do you want to get into a little trouble this morning?"

Bruce Stingbean (Karl Malone), Wednesday, 22 June 2022 15:37 (two years ago) link

Everyone seems very down on the Turing Test as it's so easy to pass if the questioner is a credulous nitwit, and apparently you can't throw a stone in a crowded room of IT professionals without hitting a few dozen credulous nitwits. But it's only valuable if you take a more adversarial approach - grammatically meaningful but semantically meaningless questions, semantically meaningful but absurd questions, ambiguity, homemade jokes, lies, repetition and other annoying behaviour. Large language models (and any other AI approach tried up to this point) are generally hopeless with those.

dear confusion the catastrophe waitress (ledge), Wednesday, 22 June 2022 15:39 (two years ago) link

...right now

DJI, Wednesday, 22 June 2022 15:52 (two years ago) link

Nearly 20 driverless cars caused a major kerfuffle on the corner of San Francisco’s Gough and Fulton streets Tuesday night, the San Francisco Examiner reported earlier this week.

According to local Reddit users, Cruise’s self-driving cars inexplicably stood still and blocked traffic for two hours, making the area completely impassable. Eventually, the San Francisco-based tech company's employees had to physically move the cars off the street themselves.

Sean Sinha, a bouncer at Smuggler’s Cove, posted multiple photos of the incident on Reddit showing clusters of the cars just sitting in the middle of the road. “The first thing I say to my coworker is that they're getting together to murder us. It was a pretty surreal event,” he posted.

Andy the Grasshopper, Friday, 1 July 2022 20:37 (two years ago) link

Peter Watts based a whole series on the concept of the Chinese room.

immodesty blaise (jimbeaux), Friday, 1 July 2022 20:38 (two years ago) link

three weeks pass...

Chess-playing robot breaks boy's finger at Moscow tournament

"A robot broke a child's finger -- this is, of course, bad," Lazarev said.

doomposting is the new composting (PBKR), Monday, 25 July 2022 21:02 (two years ago) link
