Artificial intelligence still has some way to go


In the Q&A that followed Bender’s talk, a bald man in a black polo shirt, a lanyard around his neck, approached the microphone and laid out his concerns. “Yeah, I wanted to ask the question about why you chose humanization and this character of human, this category of humans, as the sort of framing for all these different ideas that you’re bringing together.” The man did not see humans as all that special. “Listening to your talk, I can’t help but think, you know, there are some humans that are really awful, and so being lumped in with them isn’t so great. We’re the same species, the same biological kind, but who cares? My dog is pretty great. I’m happy to be lumped in with her.”

He wanted to separate “a human, the biological category, from a person or a unit worthy of moral respect.” LLMs, he acknowledged, are not human — yet. But the tech is getting so good so fast. “I wondered, if you could just speak a little more to why you chose a human, humanity, being a human as this sort of framing device for thinking about this, you know, a whole host of different things,” he concluded. “Thanks.”

Bender listened to all this with her head slightly cocked to the right, chewing on her lips. What could she say to that? She argued from first principles. “I think that there is a certain moral respect accorded to anyone who’s human by virtue of being human,” she said. “We see a lot of things going wrong in our present world that have to do with not according humanity to humans.”

The guy did not buy it. “If I could, just very quickly,” he continued. “It might be that 100 percent of humans are worthy of certain levels of moral respect. But I wonder if maybe it’s not because they’re human in the species sense.”

Many far from tech also make this point. Ecologists and animal-personhood advocates argue that we should quit thinking we’re so important in a species sense. We need to live with more humility. We need to accept that we’re creatures among other creatures, matter among other matter. Trees, rivers, whales, atoms, minerals, stars — it’s all important. We are not the bosses here.

But the road from language model to existential crisis is short indeed. Joseph Weizenbaum, who created ELIZA, the first chatbot, in 1966, spent most of the rest of his life regretting it. The technology, he wrote ten years later in Computer Power and Human Reason, raises questions that “at bottom … are about nothing less than man’s place in the universe.” The toys are fun, enchanting, and addicting, and that, he believed even 47 years ago, will be our ruin: “No wonder that men who live day in and day out with machines to which they believe themselves to have become slaves begin to believe that men are machines.”

The echoes of the climate crisis are unmistakable. We knew many decades ago about the dangers and, goosed along by capitalism and the desires of a powerful few, proceeded regardless. Who doesn’t want to zip to Paris or Hanalei for the weekend, especially if the best PR teams in the world have told you this is the ultimate prize in life? “Why is the crew that has taken us this far cheering?” Weizenbaum wrote. “Why do the passengers not look up from their games?”

Creating technology that mimics humans requires that we get very clear on who we are. “From here on out, the safe use of artificial intelligence requires demystifying the human condition,” Joanna Bryson, professor of ethics and technology at the Hertie School of Governance in Berlin, wrote last year. We don’t believe we are more giraffelike if we get taller. Why get fuzzy about intelligence?

Others, like Dennett, the philosopher of mind, are even more blunt. We can’t live in a world with what he calls “counterfeit people.” “Counterfeit money has been seen as vandalism against society ever since money has existed,” he said. “Punishments included the death penalty and being drawn and quartered. Counterfeit people is at least as serious.”

Artificial people will always have less at stake than real ones, and that makes them amoral actors, he added. “Not for metaphysical reasons but for simple, physical reasons: They are sort of immortal.”

We need strict liability for the technology’s creators, Dennett argues: “They should be held accountable. They should be sued. They should be put on record that if something they make is used to make counterfeit people, they will be held responsible. They’re on the verge, if they haven’t already done it, of creating very serious weapons of destruction against the stability and security of society. They should take that as seriously as the molecular biologists have taken the prospect of biological warfare or the atomic physicists have taken nuclear war.” This is the real code red. We need to “institute new attitudes, new laws, and spread them rapidly and remove the valorization of fooling people, the anthropomorphization,” he said. “We want smart machines, not artificial colleagues.”

https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html

z_tbd, Thursday, 2 March 2023 03:19 (two years ago)

No they shouldn't.

Mark G, Thursday, 2 March 2023 07:18 (two years ago)

See, murder is a serious crime, and punishments are severe.

But the punishments for financial fraud can run much higher.

Mark G, Thursday, 2 March 2023 07:22 (two years ago)

Not having personally encountered any true believers I wondered what the response was to the imo irrefutable argument that meaning cannot exist without referents and that llms, swimming in a sea of words and nothing else, can never have even the smallest fraction of understanding of what words mean. Now I know, and it's 'uh shutup yes it can and yes they do'.

Referents, actual things and ideas in the world, like coconuts and heartbreak, are needed to produce meaning. This refers to that. Manning now sees this idea as antiquated, the “sort of standard 20th-century philosophy-of-language position.”

ledge, Thursday, 2 March 2023 08:24 (two years ago)

that was an excellent article, thanks.

ledge, Thursday, 2 March 2023 08:28 (two years ago)

The credulous will not, until it’s too late, realize that the body is one of the major components of what makes meaning for us — and that therefore AI, no matter how smart and how human it can seem, will not ever be anything like actually sentient or intelligent.

That doesn’t mean it won’t destroy us, and probably soon.

I follow a lot of talking birds on Instagram, and there are a few that I know without a shadow of a doubt are investing their words with personal meaning. Probably not something we’ll ever be able to properly parse, but dammit they’re trying. I’ve had dogs that tried very hard to speak English, and I’ve had dogs that can hardly be coaxed to give a fuck about the word “walk” or “supper” … where was I going with this? … something about being a physical creature and how emotions and sensation and thoughts are a tangled inextricable web that produce experience in a way that a language model could never ever. I have more kinship with a cockroach, and so do you, than with any computer no matter how lifelike. The fact that MDMA has similar effects on cephalopods that it does on us, despite the fact they branched off from us earlier than any other animal that could plausibly have anything like personhood, whoooo! That’s some heavy deeds. Give Bing some MDMA and see what happens (spoiler: nothing).

I’m super interested in AI for all sorts of reasons (closet technocrat) but goddammit stop being an idiot about what makes a being a being.

The land of dreams and endless remorse (hardcore dilettante), Friday, 3 March 2023 03:30 (two years ago)

I'm willing to believe that roughly a billion years of life-or-death consequences for earthly life forms has created living beings for which "meaning" has far greater reach and depth than the pattern perception, recognition, and manipulation of objects that AI is currently capable of.

If a robot picking up blocks and using them to form the Microsoft logo were frequently interrupted by other robots who smashed the first robot or disassembled it into pieces in order to cannibalize it, and all the robots involved were frequently recreating themselves and passing along their programs with small random variations, then I could envision robots for which fear, anger, curiosity, love and laughter constituted major sources of meaning enriching their existence. Maybe after some millions of years. Otherwise, they are just as real and sentient as Pygmalion's statue.

more difficult than I look (Aimless), Friday, 3 March 2023 04:05 (two years ago)

That sounds like you're asking researchers to speed run millions of years of robo-brutality in a simulator!
https://img2.thejournal.ie/inline/2792085/original/?width=480&version=2792085

Philip Nunez, Friday, 3 March 2023 22:05 (two years ago)

I tried Bing image creator - 'my perfect husband' came up with one lol result (a bride and groom with their faces sort of melted together by accident) plus 3 men of varying ethnicities. When I tried 'my perfect wife' I was told it wasn't allowed and had blocked that search request!

kinder, Monday, 6 March 2023 09:18 (two years ago)

all the blocking makes the ai experience no fun, we all just want to make the ai tell us some fucked up stuff let us do it

lag∞n, Monday, 6 March 2023 12:46 (two years ago)

The fact it can find a perfect husband but no wives measure up is a bit off if you ask me

kinder, Monday, 6 March 2023 13:18 (two years ago)

it simply believes all wives are perfect

mh, Monday, 6 March 2023 13:42 (two years ago)

Nothing Forever is back but it feels like it's lacking something, I think this has happened, sadly:

No joke, I watched for almost four hours straight. It’s one of those things that is going to get worse as they “improve” it.

― Karl Malone, Wednesday, 1 February 2023 14:52 (one month ago)

soref, Friday, 10 March 2023 18:38 (two years ago)

also the George character now has long blond hair for some reason

soref, Friday, 10 March 2023 18:42 (two years ago)

wait, I think the Elaine character is the one with long blond hair, it's difficult to tell which character corresponds with which voice.

soref, Friday, 10 March 2023 18:44 (two years ago)

george -> fred kastopolous -> nick sterling
elaine -> yvonne torres -> kelly coffee
jerry -> larry feinberg -> leo borges
kramer -> zoltan kakler -> manfred fredman (note, i only know of manfred from the opening credits, which is now a Sex and the City style "blogging while monologuing" scene rather than the nightclub comedy act. but manfred afaik has not showed up in the show yet)

z_tbd, Friday, 10 March 2023 18:48 (two years ago)

they added a restaurant for them to go to, and the music has changed as well. predictably i think the primary take is that it's not as good as the first season, when the characters more directly referenced seinfeld. and to be sure, if the current season 2 incarnation is how the show "debuted", it wouldn't have been a hit. it needed the seinfeld connection to make sense with people, i think.

however, if it has a chance to be something that lasts and not the typical internet fame cycle of 1 to 14 days of exhaustive consumption, then death, then it was going to have to change. the people that are left in the chat are the real freaks. it's a good thing!

z_tbd, Friday, 10 March 2023 18:51 (two years ago)

god, there was some real magic in the standup routine parts of season 1, though, especially when larry would directly ask the audience to make up some jokes for him, and there was silence, followed by a seamless transition into larry's apartment with fred telling everyone about a new restaurant that just opened up

z_tbd, Friday, 10 March 2023 18:53 (two years ago)

Larry forever, Leo never

soref, Friday, 10 March 2023 18:58 (two years ago)

don't get me wrong, i keep larry near my heart
https://i.imgur.com/4Olu4lE.png

it's a new day in the international landscape (z_tbd), Friday, 10 March 2023 19:03 (two years ago)

leo is like becky after becky got replaced on roseanne. the whole s2 crew is like that, really. leo's last name is a bit on the nose, but it points to one scenario of the show, which is versions of what was a version in the first place, which somehow, with time and wishful thinking (1) versions of versions 2) ...? 3) profit!) becomes a show about something, generated from a series of alterations from a show about nothing

it's a new day in the international landscape (z_tbd), Friday, 10 March 2023 19:15 (two years ago)

i've been wondering about what gpt-3 and all the other LLMs would be like without the constraints that the developers put into it - you can't ask it the easiest and cheapest way to make a bomb, for example, and you can't ask it to come up with malign ideas or to do negative things in general (although there are some ways to get around that). but it seems inevitable that there will be versions of it where you can. maybe not mainstream, but easily done among computer knowledge lords, and thus somewhat accessible to others as well. seems like all of that is close:

Since ChatGPT launched, some people have been frustrated by the AI model's built-in limits that prevent it from discussing topics that OpenAI has deemed sensitive. Thus began the dream—in some quarters—of an open source large language model (LLM) that anyone could run locally without censorship and without paying API fees to OpenAI.

Open source solutions do exist (such as GPT-J), but they require a lot of GPU RAM and storage space. Other open source alternatives could not boast GPT-3-level performance on readily available consumer-level hardware.

Enter LLaMA, an LLM available in parameter sizes ranging from 7B to 65B (that's "B" as in "billion parameters," which are floating point numbers stored in matrices that represent what the model "knows"). LLaMA made a heady claim: that its smaller-sized models could match OpenAI's GPT-3, the foundational model that powers ChatGPT, in the quality and speed of its output. There was just one problem—Meta released the LLaMA code open source, but it held back the "weights" (the trained "knowledge" stored in a neural network) for qualified researchers only.
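The "B as in billion parameters" aside above translates directly into the "a lot of GPU RAM" problem the article mentions. A rough back-of-envelope sketch (my own arithmetic, not the article's; real runtimes need extra memory for activations and context on top of the weights):

```python
# Approximate weight-storage cost for the LLaMA sizes named in the article.
# Assumptions (mine): 2 bytes per parameter at fp16, ~0.5 bytes per
# parameter with 4-bit quantization; overhead beyond raw weights is ignored.

def model_memory_gb(n_params_billion: float, bytes_per_param: float) -> float:
    """Memory needed just to hold the weights, in GiB."""
    return n_params_billion * 1e9 * bytes_per_param / 1024**3

for size in (7, 65):  # smallest and largest LLaMA sizes, in billions
    fp16 = model_memory_gb(size, 2.0)
    q4 = model_memory_gb(size, 0.5)
    print(f"{size}B: ~{fp16:.0f} GiB at fp16, ~{q4:.0f} GiB at 4-bit")
```

Which is roughly why the 7B model squeaks onto a consumer laptop once quantized, while the 65B model stays out of reach for most hardware.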

Meta's restrictions on LLaMA didn't last long, because on March 2, someone leaked the LLaMA weights on BitTorrent. Since then, there's been an explosion of development surrounding LLaMA. Independent AI researcher Simon Willison has compared this situation to the release of Stable Diffusion, an open source image synthesis model that launched last August. Here's what he wrote in a post on his blog:

It feels to me like that Stable Diffusion moment back in August kick-started the entire new wave of interest in generative AI—which was then pushed into over-drive by the release of ChatGPT at the end of November.

That Stable Diffusion moment is happening again right now, for large language models—the technology behind ChatGPT itself. This morning I ran a GPT-3 class language model on my own personal laptop for the first time!

AI stuff was weird already. It’s about to get a whole lot weirder.

https://arstechnica.com/information-technology/2023/03/you-can-now-run-a-gpt-3-level-ai-model-on-your-laptop-phone-and-raspberry-pi/

it's a new day in the international landscape (z_tbd), Tuesday, 14 March 2023 01:13 (two years ago)

Yeah I got LLaMa running on my Macbook. It'll do untold damage but man it's fun.

official representative of Roku's Basketshit in at least one alternate u (lukas), Tuesday, 14 March 2023 01:50 (two years ago)

kinda funny facebook open sourced it when openai is over there raising billions of dollars guarding chatgpt with their lives

lag∞n, Tuesday, 14 March 2023 01:54 (two years ago)

I’m super interested in AI for all sorts of reasons (closet technocrat) but goddammit stop being an idiot about what makes a being a being.

― The land of dreams and endless remorse (hardcore dilettante), Friday, March 3, 2023 3:30 AM (one week ago)

this is me minus the first half of this sentence. one of the things that i just categorically, flat-out laugh at and am completely agog that anyone believes is the whole category of "AI is so advanced what even are we?????" time-ass pop-sci philosophizing that gets treated more or less seriously these days. as for "destroying us," it's certainly going to destroy some of us in partnership with a constantly shifting apparatus, those who have already been marked for oblivion in our pockmarked topography of sacrifice.

as far as images and text go, it's blindingly clear to me that none of these products have any spirit or life behind them & it was a rather stupid and unfortunate decision to do ai images for this year's ilx poll afaic.

ꙮ (map), Tuesday, 14 March 2023 02:02 (two years ago)

im with you brother

lag∞n, Tuesday, 14 March 2023 02:24 (two years ago)

heyyyyy

ꙮ (map), Tuesday, 14 March 2023 02:25 (two years ago)

Noam Chomsky: AI Isn't Coming For Us All, You Idiots

obsidian crocogolem (sleeve), Tuesday, 14 March 2023 03:14 (two years ago)

can't remember if it was posted on another thread but Ted Chiang on chatgpt a few weeks ago was pretty good: https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web?src=longreads

Obviously, no one can speak for all writers, but let me make the argument that starting with a blurry copy of unoriginal work isn’t a good way to create original work. If you’re a writer, you will write a lot of unoriginal work before you write something original. And the time and effort expended on that unoriginal work isn’t wasted; on the contrary, I would suggest that it is precisely what enables you to eventually create something original. The hours spent choosing the right word and rearranging sentences to better follow one another are what teach you how meaning is conveyed by prose. Having students write essays isn’t merely a way to test their grasp of the material; it gives them experience in articulating their thoughts. If students never have to write essays that we have all read before, they will never gain the skills needed to write something that we have never read.

And it’s not the case that, once you have ceased to be a student, you can safely use the template that a large language model provides. The struggle to express your thoughts doesn’t disappear once you graduate—it can take place every time you start drafting a new piece. Sometimes it’s only in the process of writing that you discover your original ideas. Some might say that the output of large language models doesn’t look all that different from a human writer’s first draft, but, again, I think this is a superficial resemblance. Your first draft isn’t an unoriginal idea expressed clearly; it’s an original idea expressed poorly, and it is accompanied by your amorphous dissatisfaction, your awareness of the distance between what it says and what you want it to say. That’s what directs you during rewriting, and that’s one of the things lacking when you start with text generated by an A.I.

Roz, Tuesday, 14 March 2023 04:36 (two years ago)

i've been wondering about what gpt-3 and all the other LLM's would be like without the constraints that the developers put into it

Soon you may not need to wonder: https://www.theverge.com/2023/3/13/23638823/microsoft-ethics-society-team-responsible-ai-layoffs

map, I posted this during the ilx poll; I enjoyed his rage: https://davidgolumbia.medium.com/chatgpt-should-not-exist-aab0867abace

The Chiang piece looks good too, and his point about rewriting makes me think that one problem with the "chat-gpt will be an aid to writing" takes is they ignore that you have to be good at reading, or even editing, to make use of it that way.

rob, Tuesday, 14 March 2023 13:22 (two years ago)

the thing ai writing will be and im sure already is used for is generating piles of bad but good enough writing

lag∞n, Tuesday, 14 March 2023 13:31 (two years ago)

you can tell that thats what its already trained on just sucking up data from a million multi level marketing websites

lag∞n, Tuesday, 14 March 2023 13:32 (two years ago)

if web search wasnt already dead it would be soon

lag∞n, Tuesday, 14 March 2023 13:34 (two years ago)

if they trained it on the famous prose of albert camus then it would become sentient

lag∞n, Tuesday, 14 March 2023 13:37 (two years ago)

lol yeah I agree with you. There's a bit in Bullshit Jobs about people whose job is to write reports that no one will ever read—those people might get relieved of their misery. There'll be a billion more "blogs" that are just content marketing

I was thinking more about how some people insist it can be a classroom aid or whatever, "the new paradigm will be learning to write *with* Chat-GPT" type crap

rob, Tuesday, 14 March 2023 13:39 (two years ago)

lol yeah thats just the take economy at work, it will certainly be used for tons of cheating tho

lag∞n, Tuesday, 14 March 2023 13:41 (two years ago)

yeah the Chiang piece is good. I thought the recent John Oliver episode on AI was pretty good too. one analogy I use sometimes is that my daughter, who is in kindergarten, may not be able to produce photorealistic images with her crayons, but she at least knows how many eyes and fingers a person has. as a software developer I guess I kind of instinctively understand this, how one line of bad code can singlehandedly dismantle a complex app, even though "it should have known what I was trying to do". because the users sure seem to think everything should just work.

frogbs, Tuesday, 14 March 2023 13:47 (two years ago)

it's funny because all computers can do is count and yet

I like that Golumbia piece because I think we're overdue for some genuine outrage over constantly comparing living beings to simple-ass computers. the sam altmans of the world should be condemned as monsters imo

rob, Tuesday, 14 March 2023 13:51 (two years ago)

admittedly have used ChatGPT to do menial work writing that i was too lazy to do. proofread it of course and fixed things that weren't entirely right.

hootenanny-soundtracking clusterfucks about milking cows (Neanderthal), Tuesday, 14 March 2023 13:53 (two years ago)

(I'm not a journo, this was for internal training materials and I attributed said writing to AI bots so pre-emptive EFF UUUU)

hootenanny-soundtracking clusterfucks about milking cows (Neanderthal), Tuesday, 14 March 2023 13:53 (two years ago)

lol that's fine! my point is only that it might be harder to learn to proofread and fix not-right things if people use chat-gpt in high school english classes or what have you

rob, Tuesday, 14 March 2023 13:59 (two years ago)

I also don't much care if this kills the stock photography industry (I'm open to persuasion though); I do care that some wealthy & powerful lunatics think art is now irrelevant, though maybe I shouldn't

rob, Tuesday, 14 March 2023 14:02 (two years ago)

it definitely is going to cause massive havoc in the education industry I think.

hootenanny-soundtracking clusterfucks about milking cows (Neanderthal), Tuesday, 14 March 2023 14:10 (two years ago)

I do care that some wealthy & powerful lunatics think art is now irrelevant, though maybe I shouldn't

― rob, Tuesday, March 14, 2023 10:02 AM (ten minutes ago)

noticing art is having a moment as a tool for online freaks to launder their perverse worldviews when they obviously dont care about art at all, the ai people, weirdo retvrn "western traditionalist", fuckin gamers, name two artists lol

lag∞n, Tuesday, 14 March 2023 14:16 (two years ago)

funny parallel with the grindset guys getting into "reading" where the books they pretend to read are all called mindsprouts how to water your brain garden for greater success harvests

lag∞n, Tuesday, 14 March 2023 14:18 (two years ago)

this is definitely gonna change the workflow for lawyers I think

frogbs, Tuesday, 14 March 2023 14:22 (two years ago)

idk legal writing is so precise you dont want to introduce bugs, plus they bill for writing so they like that, i guess it could be used for cueing up the right form or something tho that might be a job more suited for a more traditional computer program

lag∞n, Tuesday, 14 March 2023 14:24 (two years ago)

the thing about these ai writing programs is will they ever be able to make them at all reliable as far as saying things that are true and make sense, now you could say just proof read them but starting out with a draft that likely has major errors in it isnt great, do you need to go through and check every claim, why not just write it yourself

lag∞n, Tuesday, 14 March 2023 14:30 (two years ago)

xxxpost heh yeah the people I know who were whinging about art on social media are people who before AI art had never posted about it once. and had zero understanding of how AI art generated its art.

beliefs spread solely via meme rn so not shocking.

hootenanny-soundtracking clusterfucks about milking cows (Neanderthal), Tuesday, 14 March 2023 14:31 (two years ago)

yeah you have this initial impression which is this stuff is impressive ive not see a computer do things like this but then the leap to this is good this will replace real art doesnt really make sense people cant just be impressed and leave it at that they have to generate a take

lag∞n, Tuesday, 14 March 2023 14:33 (two years ago)

