Artificial intelligence still has some way to go

its very funny that these people have produced nothing that resembles artificial intelligence and their tech certainly has no hope of ever getting there and yet they feel like theyre just on the edge of some world changing event, it is kind of a religious situation

lag∞n, Wednesday, 22 November 2023 16:11 (one year ago)

saw a chart a while ago saying that american participation in organized religion had dropped 15% in just the last ten years, between this ai shit and taylor swift fans i think maybe people should just go back to church, its all set up already go once a week and get it out of your system

lag∞n, Wednesday, 22 November 2023 16:14 (one year ago)

xp lol this story is fun all the way

and yeah as mh and lag∞n say I think it wouldn't be that hard to take a model and retrain it on a big scrape or dump of ilx. But you might need a lot of computing power, money or patience?

assume everyone knows GPT is definitely trained on ilx already. But it's such a tiny part of the data - if you try to get it to mimic ilx style it can describe the tone it is aiming for but the results are not good (I won't clog up the thread with more chats).

woof, Wednesday, 22 November 2023 16:14 (one year ago)

i realise this is creepy but i’m hoping we get to that black mirror / house of usher tech soon where i can feed in someone’s social media history, emails, etc and get a reasonable facsimile of their vibe so that i can converse with them when they’re gone. sure it’s not them, but if it’s incorporated all their text it kinda is them, for certain purposes, and yknow if we can fool ourselves into thinking dogs have souls it can’t be too much of a jump to feeling a sense of a connection with a bot tuned to the frequencies of a loved one

Humanitarian Pause (Tracer Hand), Wednesday, 22 November 2023 16:24 (one year ago)

youre saying you want that

lag∞n, Wednesday, 22 November 2023 16:25 (one year ago)

yes!!

Humanitarian Pause (Tracer Hand), Wednesday, 22 November 2023 16:25 (one year ago)

maybe i could tune it to myself, just to see how annoying it is to talk to me

Humanitarian Pause (Tracer Hand), Wednesday, 22 November 2023 16:26 (one year ago)

lol they do have it already but its pretty low quality

lag∞n, Wednesday, 22 November 2023 16:31 (one year ago)

the whole AI/automation panic makes me wonder what jobs there are out there where you learn a stock set of skills and never have to learn new tools, techniques, etc

Fair point for a lot of jobs.

But if you, um, write poetry or play classical violin or do oil changes or bake baguettes or carve wooden figurines or coach gymnastics or make ceramic vases or trim shrubbery or teach tap-dancing, you might be forgiven for thinking that you would prefer to do the job without robotic help.

Oh I believe in Yetis' Day (Ye Mad Puffin), Wednesday, 22 November 2023 16:33 (one year ago)

https://i.imgur.com/Ofmdt5r.jpg

Boris Yitsbin (wins), Wednesday, 22 November 2023 16:44 (one year ago)

the pundit class worries about job losses because it’s their jobs that are the most replaceable

Humanitarian Pause (Tracer Hand), Wednesday, 22 November 2023 16:45 (one year ago)

So much of politics and ethics in america is built on what you described kate. The idea that if people were willing to work they would at least prosper moderately. If chat gpt destroys, say, 80% of lawyers’ jobs all of that kind of thinking is going to seem laughable, even to the formerly comfortable middle and lower bourgeoisie.

It could be a revolutionary moment. More likely America will just become colder and angrier.

― treeship

the "boot stamping on a human face forever" model is really tempting to buy into, particularly when there's no apparent path for us as individuals to, like. stop being stamped on.

you're right in that american values, which were never _good_ values, serve fewer and fewer people as time goes on. technology plays a role in it, but my opinion is that the role of technology in the changing economy is frequently vastly _overestimated_ and the role that capitalists play in those changes is frequently vastly _underestimated_.

because it's not just the jobs that are changing. if you look at how amazon warehouses are being run, the nature of the job is itself _changing_. it's changing into a job that's less suitable for humans to do and more suitable for machines to do. whether or not that's anybody's active intent doesn't matter. at some point warehouse jobs will be done by machines and not by humans.

what am i going to do? mourn the loss of those jobs? there are people out in kentucky lamenting the fact that there aren't coal mining jobs anymore. at the same time, at least in the '90s there were a significant number of people alive receiving financial compensation due to the long-term health conditions caused by the coal mining. to me, the issue isn't a question of coal mining, it's that people out in eastern kentucky have no opportunity, no prospects, no hope. the only thing they can even think of helping them is, well, bringing back the coal mining.

-

so they're voting for the leopards eating people's faces party, because that's what the leopards eating people's faces party is promising them. i see a lot of people just, like, mocking and laughing at people who are suffering, because those people are voting for their own oppressors. personally, i'm not laughing. i'm not morally condemning them.

what i see is people who... have chosen to try and eat my face rather than work together with people like me. they're suffering. and i guess a lot of them believe that if my face gets eaten they'll stop suffering. these are not people who deserve to have their faces eaten, but as long as they keep insisting on trying to eat my face, i'll stand back and let it happen.

i mean i'm implicated. for 40 years of my life i more or less acted in ways that were normal for a cishet white man to behave. i was ignorant, and my ignorance hurt me, and it hurt people who weren't like me. people who have been getting hurt for at least as long as america's been around.

back in the day, during vietnam, some people who were up for being drafted had a pretty cynical saying: "i'd rather switch than fight". i mean, in a sense that's what the "transmaxxers" are saying, isn't it? that stupid manifesto about how "the male gender is broken". what they mean is that they have no place in it. they have no place in that world.

me? i think they're lying to themselves. the male gender is _fine_. what's broken, what's _always_ been broken, is the values i was taught, they were taught. decent american values. that brokenness is becoming more and more apparent. the "safeguards" that were supposed to protect against america's breakdown, like the electoral college, have only wound up accelerating the breakdown.

-

there's an episode of twin peaks where david lynch shows up in a cameo role, and he's talking about denise bryson, the trans woman who was working as a DEA special agent. and he shows up and he says "Fix your hearts or die." Literal fucking author tract. However I feel about the character of Denise Bryson...

Well, you know what it reminds me of? W.H. Auden, September 1, 1939. "We must love one another or die." He came to hate that poem, to be ashamed of it. I read that one time, it was reprinted, and Auden had changed the line. It said "We must love one another and die." You know what? I think it's better that way. We're all going to die. I'm going to die, and my death may well be meaningless. My life isn't, hasn't been, won't be. I'm saying that as fact. Axiomatic. Not up for dispute. it used to be important to me that I died for something, and it took me a long time to get over that belief. i'm getting over it, though.

we must love one another. not a command, but a statement of fact. it's an essential part of who we are, of what we do. i spent decades trying to fight that truth, trying to, well, die rather than love, and i lost that fight, and i'm going to die anyway.

america can be as cold and angry as it wants, kill like it's always been killing, exterminate like it's always been exterminating. its supporters can practice hate and call it love. blame us for their deeds, or else say it's ai, say it's computers. all of that stuff. it's been working for centuries.

it's not working like it used to. a lot of what keeps america going right now is fear of something worse. that fear isn't unfounded. maybe whatever comes next is openly cruel and vicious and winds up killing more people than the ones america's killing. maybe it kills me. there are a certain number of people who think my existence is a problem and would put a lot of effort into looking the other way if someone decided to fix that alleged problem.

i'm not afraid of death like i used to be. because that's what we're talking about here. it's not about ai taking lives. it's about people suffering and dying, and about how we choose to react to that. for a long time i pretended it wasn't happening, and i can't pretend anymore.

i mean i do think abraham lincoln is right. you can't fool all of the people all of the time. people stop pretending everything is fine. i don't know what happens after that.

every day i see people fixing their hearts. every day i see people growing up whose hearts aren't broken the way mine was, when i was their age. my life is one of hope and joy, and when i see those things, well, that's a lot of where that hope and joy comes from.

-

idk. i took the meds that allow me to focus on my job and i don't really feel like focusing on my job. so i wrote this instead.

Kate (rushomancy), Wednesday, 22 November 2023 17:28 (one year ago)

i realise this is creepy but i’m hoping we get to that black mirror / house of usher tech

^^^lanchester novel incoming :|

mark s, Wednesday, 22 November 2023 17:29 (one year ago)

the pundit class worries about job losses because it’s their jobs that are the most replaceable

― Humanitarian Pause (Tracer Hand)

their jobs don't _need_ to be replaced. i don't see the point of paying people to have opinions. i think we should pay people to _not_ have opinions. that would help.

Kate (rushomancy), Wednesday, 22 November 2023 17:30 (one year ago)

yeah there will always be work for people willing to apologize for genocide and so forth

lag∞n, Wednesday, 22 November 2023 17:31 (one year ago)

Opinion-having is one of my income streams, not the main one. My unbiased belief is that my opinions should be valued and respected more. Way more.

treeship., Wednesday, 22 November 2023 18:13 (one year ago)

honestly it’s all like that

Humanitarian Pause (Tracer Hand), Wednesday, 22 November 2023 18:37 (one year ago)

If someone paid me, say, $150,000 a year to shut up and not have opinions, I would take that deal in a heartbeat. You would not hear from me again. And I (and my mostly corny stupid opinions) would not be missed.

Where do I sign up for that?

It's like that line in Ghost World where Thora Birch says "find someone who shares your interests" and Steve Buscemi says, "I hate my interests."

Oh I believe in Yetis' Day (Ye Mad Puffin), Wednesday, 22 November 2023 18:58 (one year ago)

you don’t get paid for not having the opinion, you get paid for not voicing the opinion

and by the opinion I mean speaking up when someone is like “we should do this even though there’s a high probability it’ll dump toxic sludge into the river” and you’re like, I have no opinion on this toxic sludge thing

ɥɯ ︵ (°□°) (mh), Wednesday, 22 November 2023 19:44 (one year ago)

the value i add as a human being is to say 'how about dump toxic sludge into the river 8k high resolution dramatic lighting high contrast'

i really like that!! (z_tbd), Wednesday, 22 November 2023 19:56 (one year ago)

If someone paid me, say, $150,000 a year to shut up and not have opinions, I would take that deal in a heartbeat. You would not hear from me again. And I (and my mostly corny stupid opinions) would not be missed.

― Oh I believe in Yetis' Day (Ye Mad Puffin)

i would like to be paid $150,000 a year to, when a cis person starts in on some ignorant bullshit about trans people, smile and politely explain to them why they are wrong, while going out of my way to help them to not feel insecure about being wrong

seriously that is a hard fucking job and i deserve to be paid a lot of money for doing it

Kate (rushomancy), Wednesday, 22 November 2023 20:59 (one year ago)

on the drama, maybe a reasonable summary from available sources?

woof, Wednesday, 22 November 2023 23:17 (one year ago)

I didn't know Sam Altman had been fired from Y Combinator in 2019. Not that Garry Tan is better

what you say is true but by no means (lukas), Wednesday, 22 November 2023 23:41 (one year ago)

Kate’s booming long post OTM. Re: the future of jobs and AI (or the future without coal),

The techno-utopians have always dreamed that the massive surpluses gained by automation would lead to a world where people are freed from the drudgery of labour and we’d all be able to achieve our full potential doing flower arrangements or hosting philosophical salons. Until I dunno fairly recently, certainly within my lifetime, it was inconceivable that no entity besides a state could marshal the kind of resources required to achieve these advances. And if corporations did, then they would be regulated heavily for the public good. And so everyone would reap the benefits of this race toward the future; we would all be Jetsons (altho as I remember George had some kind of soul-sucking Man in the Grey Flannel Spacesuit job, but youknowwhatimean).

But of course we’ve now seen that corporations are unwilling to be governed or regulated (and emboldened to take every measure to avoid it), and that capital has bludgeoned states into complete submission re: anything to do with the public good, and that for some people there is no such thing as enough, ever. If Musk owned the world he would also need to own the solar system, and if he achieved that he would need to own the galaxy, and everything in his purview would necessarily be bent towards more, more more. I really do remember a time when such behaviour would have been widely considered obscene; now it’s just the way things are. 19th C commie cartoons depicted capital as a ravening lion or the hopper of a big machine, but it’s really a supermassive black hole that will literally devour everything in its ambit.

Ehrm, anyway, there’s a whole class & generation of kids who’ve given up completely on the Dream and who just spend their money on whatever makes them happy for the moment, and most of their dough trickles up to its natural home in the pockets of the Bezoses & the like, because we’re all gonna be dead in 30 years anyway, this kind of fatalistic surrender to the Black Hole.

And I just kind of see it all heading toward this situation where there's the 1%, and below them a tiny upper-middle class with McMansions & Teslas, and then everyone else just scrambling & eking out whatever kind of existence we can & chucking our money into the hole of Expedia and Bed Bath and Beyond in almost a sad parody of what used to be the middle class. And then there are the Proles, who vote for the interests of the rich because they'll keep the cartoon villains at bay.

I guess this is AI-related because, like, first they came for the factory workers, then they came for the truck drivers, now they’re coming for the knowledge workers kinda vibe, ya know?

Just some dark thoughts that have been rattling around in my skull for the past while. Thanks for letting me vent. As you were.

lethbridge-pfunkboy (hardcore dilettante), Thursday, 23 November 2023 04:30 (one year ago)

It’s a paradox though because while automation causes individual firms to save money and increase profits, in the end it causes the economy to shrink.

I think. Wouldn’t it cause a crisis if unemployment drastically rose? Who would be spending money on goods and services?

https://en.wikipedia.org/wiki/Tendency_of_the_rate_of_profit_to_fall

― treeship., Tuesday, November 21, 2023 8:52 PM (two days ago)

the resolution of the paradox is that marx was wrong

flopson, Thursday, 23 November 2023 08:05 (one year ago)

He was wrong because capitalism adapted to find jobs for people. And monopolies broke up due to crises, allowing the capital accumulation game to start over again and fuel growth.

treeship., Thursday, 23 November 2023 13:00 (one year ago)

This time though, who knows?

treeship., Thursday, 23 November 2023 13:00 (one year ago)

Marx is playing the long game of history

xyzzzz__, Thursday, 23 November 2023 13:17 (one year ago)

I don’t think a crisis caused by capitalism failing inevitably would lead to socialism though. I don’t even feel that is likely. Victories for the working class historically are achieved at boom times—crises are used to push through austerity. Marx was wrong about this, I think.

treeship., Thursday, 23 November 2023 13:19 (one year ago)

The search for the artificial general hype machine edges one step closer

https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/

Alba, Thursday, 23 November 2023 13:31 (one year ago)

"Q*" – you've got to hand it to them

Alba, Thursday, 23 November 2023 13:32 (one year ago)

Victories for the working class historically are achieved at boom times—crises are used to push through austerity. Marx was wrong about this, I think.

― treeship., Thursday, 23 November 2023

At the end of WWII the UK elected a pretty socialist government. Much of Europe adopted socialist leaning policy.

What about depression-era America?

xyzzzz__, Thursday, 23 November 2023 13:36 (one year ago)

I mean sure it's hardly communism and yet the pressure of these movements led to a lot of breaks. It's more to do with organisation than Marx, who I am sure was wrong about many things (my initial reply was a joke)

xyzzzz__, Thursday, 23 November 2023 13:40 (one year ago)

True, depression and postwar era led to the creation of welfare states. Subsequent crises usually have served as pretexts for their dismantling though. And in germany wasn’t the spd most influential during the boom years before wwi?

treeship., Thursday, 23 November 2023 13:46 (one year ago)

Don't know enough but I don't think there is a set pattern to the outcome of an economic crisis.

xyzzzz__, Thursday, 23 November 2023 13:55 (one year ago)

ha my limited 20th century brain figures technological breakthroughs mainly provide more efficient opportunities for scale, and more effective systemization. they aren’t outcome determinant.

if revolutionary tech is controlled by musks you will end up as low income workers on a mars cargo.

if controlled by, i dunno, some 1920s technocrat, you may end up in the jetsons.

if by like, a lenin, you end up working on a mars cargo again with better poster art.

the tech is more efficient at hyperspeeding extinction events i guess, so it would be nice to try to save earth before betting on the mars cargo track.

digital chirping and whirring (Hunt3r), Thursday, 23 November 2023 15:10 (one year ago)

that’s what happens when boom times in technological progress have been linked directly to large scale government spending on major wars

it’s not that the AI wants us to kill each other, it’s that the people who want to maximize their profit want to be the next Raytheon

ɥɯ ︵ (°□°) (mh), Thursday, 23 November 2023 15:35 (one year ago)

I try to be blasé about the stuff AI can do but sometimes it does slap me

David Attenborough is now narrating my life

Here's a GPT-4-vision + @elevenlabsio python script so you can star in your own Planet Earth: pic.twitter.com/desTwTM7RS

— Charlie Holtz (@charliebholtz) November 15, 2023

woof, Thursday, 23 November 2023 16:13 (one year ago)

Another day, another frontier broken.

https://www.theguardian.com/business/2023/nov/23/openai-was-working-on-advanced-model-so-powerful-it-alarmed-staff

xyzzzz__, Thursday, 23 November 2023 18:43 (one year ago)

When the ai gets control of mobile robot forces like those things that can run like dogs and flip like gymnasts, that’ll be wild.

digital chirping and whirring (Hunt3r), Thursday, 23 November 2023 19:22 (one year ago)

The model, called Q* – and pronounced as “Q-Star” – was able to solve basic maths problems it had not seen before

This is the only new capability mentioned and so presumably it was the singular cause for the alarm. But "basic" mathematics are totally rules bound and "basic" binary math underlies every capability of computing. It would help to know what sorts of problems it was solving beyond the vague and unhelpful descriptor of "basic" and exactly what it was that triggered this alarm among the staff.

more difficult than I look (Aimless), Thursday, 23 November 2023 19:22 (one year ago)

yeah that article is a bit confusing when they state that "the ability to solve maths problems would be a huge advancement" or whatever..like, I assumed these things could solve most known math problems by this point.

I? not I! He! He! HIM! (akm), Thursday, 23 November 2023 19:47 (one year ago)

Not really. What's interesting about language models is that they're (seemingly) good at the fuzzy kind of intuitive reasoning that humans spend most of their time doing, not the exact logical manipulation computers normally do. If you ask them simple logic problems that aren't in their training data they often fail.

So if this report is right and you're an OpenAI person who's an AGI true believer, maybe you're thinking "we have intuitive reasoning, now we have logical reasoning" = AGI!

Assuming the report is right (and that this generalizes beyond math to other kinds of logical reasoning ... a couple big ifs) one thing I wonder is if the model "knows" when to shift between the different kinds of reasoning.

what you say is true but by no means (lukas), Thursday, 23 November 2023 20:10 (one year ago)

but LLMs don't reason at all?

rob, Thursday, 23 November 2023 20:21 (one year ago)

The biggest problem with this stuff is that the little that has come out is pure comedy. What is meant to make people "scared" is in these reports that nobody sees.

And it's like...just let us see it. Otherwise I can't even begin to believe it.

xyzzzz__, Thursday, 23 November 2023 20:35 (one year ago)

xp sometimes they kind of do. someone had an LLM play Go on an 8x8 board and found part of the neural network that corresponded to the state of the game board.

like in one sense they're just predicting text, yeah, but it turns out when you predict text with an absolutely massive network, inside that network weird things start to happen that sometimes behave like reasoning. but ... not all the time. it's fundamentally different than what our brains are doing, so it has different strengths and weaknesses.

NB I am not an AGI guy, don't think we should anthropomorphize them, blah blah.

what you say is true but by no means (lukas), Thursday, 23 November 2023 20:36 (one year ago)
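[For the curious: the "found part of the neural network that corresponded to the state of the game board" result lukas mentions is what researchers call a linear probe. The sketch below is a deliberately synthetic toy, not the actual experiment: the "hidden states" are fabricated so that a binary feature is linearly decodable by construction, and a least-squares probe recovers it. Real probing work fits probes on actual transformer activations.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake "hidden states": 200 samples of a 16-dim activation vector that,
# by construction, linearly encodes a binary feature (a stand-in for
# something like "this board cell is occupied").
W = rng.normal(size=16)            # the hidden encoding direction
X = rng.normal(size=(200, 16))     # the activations
y = (X @ W > 0).astype(float)      # the feature we hope to probe out

# Fit a linear probe by least squares on the centered labels.
# (Real probing work typically uses logistic regression; this is the
# cheapest thing that illustrates the idea.)
probe, *_ = np.linalg.lstsq(X, y - 0.5, rcond=None)

# If the feature really is linearly encoded, the probe's predictions
# should agree with the true feature almost all the time.
pred = (X @ probe > 0).astype(float)
accuracy = (pred == y).mean()
```

The point of the real experiments is the converse direction: nobody builds the encoding in, yet a probe trained on a game-playing LLM's activations still reads off the board state, which is the evidence that "just predicting text" can produce internal world models.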

According to my admittedly loose understanding, LLMs perform calculations to derive statistical probabilities captured in their data sets and apply rules to select among those probabilities: first to identify the key components of a prompt that contain the core of the prompt's intent, and then using a similar statistical process to build sentences designed to respond to those prompts in ways humans will accept as meaningful. They would be equally capable of creating other kinds of word salad based on the same prompts, but that wasn't the task the human programmers were trying to solve.

Long ago, when I fiddled with extremely crude natural language programs I decided that most, if not all, of the "intelligence" in an AI program resided in the data set. Now that computers are capable of rapidly handling truly vast data sets created by human intelligence, they have gotten more intelligent-looking. But it is still a kind of smoke and mirrors. Our generalized intelligence developed over the course of millions of years of evolution and untold quadrillions of prototypes.

more difficult than I look (Aimless), Thursday, 23 November 2023 20:39 (one year ago)
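[The "probabilities derived from the data set, then used to pick the next word" picture Aimless describes can be shown with a deliberately tiny sketch: a word-level bigram counter. This is nothing like a real transformer — no neural network, no attention, a nine-word corpus — but it is the same "predict the next token from statistics of the training data" idea at its absolute smallest.]

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count how often each word follows each other word,
    then normalize the counts into conditional probabilities
    P(next | prev)."""
    counts = defaultdict(Counter)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return {prev: {t: c / sum(ctr.values()) for t, c in ctr.items()}
            for prev, ctr in counts.items()}

def predict_next(model, token):
    """Greedy decoding: return the most probable next word,
    or None if the word never appeared mid-corpus."""
    dist = model.get(token, {})
    return max(dist, key=dist.get) if dist else None

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
```

Here `predict_next(model, "the")` returns `"cat"` because "the" is followed by "cat" twice and "mat" once in the corpus. An LLM does conceptually the same thing with billions of parameters standing in for the count table, which is why the "intelligence lives in the data set" intuition holds up reasonably well.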

xp
ok I get what you're saying. I feel a little more strongly that we should resist calling it "reasoning" but nbd.

an LLM that could consistently solve logic problems not included in their training data would def be a milestone, but my personal stance right now is that the OpenAI shenanigans have a big flag that says "fuckery and/or corruption" stuck on them, so I'm extremely skeptical

rob, Thursday, 23 November 2023 20:43 (one year ago)

yeah totally. the bigger issue to me is that by definition the people who work at OpenAI are going to be the people who *want* to believe in AGI, so I don't trust their judgment on this stuff. it's exciting for them if they're working on something that could destroy humanity, it makes them players in a grand drama.

or as a more level-headed AI researcher put it "I'm old enough to remember when GPT-2 was too dangerous to release"

what you say is true but by no means (lukas), Thursday, 23 November 2023 21:00 (one year ago)
