Generative AI summer 2025: where do you stand?

Poll Closing Date: Monday, 1 September 2025 00:00 (in 1 week)

We've got the main thread for articles and more general discussion but I'm curious.

I started trying to split these out into reasons - hatred on environmental grounds or social consequences or aesthetics... - but thought it would get messy and anyway they're all kind of tangled up.

Respect to the committed haters and lovers but I'm most curious about that middle space where people are simultaneously 'I hate AI' and 'Let me just ask ChatGPT'. I do meet those people and tbh I'm somewhere similar: this thing is amazing and horrifying; it's sickeningly hyped but annoyingly useful; maybe deadly poisonous but afaict it's rapidly got into the fabric of people's lives. I think it might be genuinely apocalyptic, but I will ask it (Claude in my case) questions about greek grammar or electronics or to knock up some code, or I'll just see if it can do something.

So I'm curious about how people use it casually, in particular. It feels like there's a big spread of folk uses of the tech.

(Terminology: I think this is intuitive but I'm using 'gen AI' to cover the kind of thing the big public models do - ChatGPT, Claude, Gemini, Grok, whatever shit Facebook are doing. Not just LLM chat & sentence generation but images, video, music etc. Protein folding & surveillance-state facial recognition are less what I'm getting at)

I viscerally hate and/or fear gen AI. I will never use it
I hate/fear gen AI but it draws me in
I dislike or hate or fear gen AI but I use it because work makes me
I do not like gen AI in general but I use it because I sometimes find it useful or fun or interesting
I like gen AI in general and find it fun/useful/interesting
Shut up - maybe it's useful, maybe it's not, just shut up about it tbh
Gen AI is all fake bullshit
I worship the robot god, all hail gen AI.


woof, Thursday, 14 August 2025 10:38 (one week ago)

option 1 extremist. only way I can handle all this

imago, Thursday, 14 August 2025 10:40 (one week ago)

Personally somewhere between
I hate/fear gen AI but it draws me in
and
I do not like gen AI in general but I use it because I sometimes find it useful or fun or interesting

woof, Thursday, 14 August 2025 10:40 (one week ago)

sorry, having read your post properly now I realise my stance is the uninteresting one. alright I yield the floor to the slopper brigade ;)

imago, Thursday, 14 August 2025 10:43 (one week ago)

nah, the 'never use it' is where I'd be if I were pure of heart, and it is interesting in itself. Like when I say 'it might be genuinely apocalyptic' I'm serious for a range of apocalypses - accelerated environmental collapse, financial meltdown (not so bad tbh but the wrong ppl will suffer), billionaire bro feudalism, democratic breakdown, psychosis epidemic, slop greshaming language, art and then base reality, robots rearranging all human atoms into maximally efficient dyson sphere etc etc.

woof, Thursday, 14 August 2025 10:52 (one week ago)

Strongly dislike + fear it, and it’s made me not only want to not use it but also start leading a slightly less technologically dependent life, but mainly want people to shut up tbh

ed.b, Thursday, 14 August 2025 11:00 (one week ago)

We're all just waiting for the Palantir drones to zap us through our windows tbh

imago, Thursday, 14 August 2025 11:03 (one week ago)

I viscerally hate and/or fear gen AI. I often use it because I am a hypocrite.

Proust Ian Rush (Camaraderie at Arms Length), Thursday, 14 August 2025 11:08 (one week ago)

once you put it like that I see the missing option that I need

woof, Thursday, 14 August 2025 11:10 (one week ago)

I am required to use it by work (or at least required to look like I am using it) but otherwise wouldn’t by choice. I don’t find it useful for anything other than summarising action points from meetings.

I do think it ‘works’ in a broader sense, though - and well enough to put tens of millions of people out of jobs over the next few years.

ShariVari, Thursday, 14 August 2025 11:16 (one week ago)

I voted "I dislike or hate or fear gen AI but I use it because work makes me."

I have a paralytic certainty that gen AI will take my job. I haven't tried to use it frequently in my work, but last year, there was a top-down recommendation to attempt to use it as much as possible. I attempted to, and still found major drawbacks (i.e., hallucinatory answers), so I put it on the back burner. But my new boss makes regular use of it. The other day, he mentioned he's learning to create "custom GPTs," whatever that means, so I'll need to learn how to use it soon.

In the late 1990s, when I was trying to find my way through college, my dad was trying to convince me to go into computer science. At the time, I viewed the tech boom as an unwelcome fad, one that would surely dissipate. I feel the same basic emotion now - "it will all be over soon and everything will get back to normal" - except with 30 years of experience confirming how poor my foresight is.

On the other hand, I'm all in favor of scientific uses of A.I. It is exciting to think about how, in the right hands, it could aid in solving major societal problems: medical breakthroughs, improving energy efficiency, etc. But my excitement is balanced by terror, because of the "right hands" part.

peace, man, Thursday, 14 August 2025 11:17 (one week ago)

I don’t get option 7. AI is fake bullshit but there’s no reason to hate or fear it?

rob, Thursday, 14 August 2025 11:26 (one week ago)

About 2 years ago I guessed the timeline for it taking my job (producing government guidance) was 2-5 years. Still doesn't feel off - I'm now in the shorter end of that range and it's everywhere, from AI guidance solutions pitched by suppliers to random civil servants knocking up a fake unreleasable service in replit or whatever. Market for my trade also feels a little slow. It'll probably stretch to the longer end of that range just through government risk-aversion, and I've got a couple of advantages (paying attention to AI well enough to smell and shift the bullshit, knowing people and structures and systems in government), but it does not feel at all safe being seen as a words person rn.

woof, Thursday, 14 August 2025 11:31 (one week ago)

xp
that was meant to cover 'it doesn't even work, it's not useful' with a bit of 'it'll just go away', so yes, I suppose it implies no hate or fear (but I can see 'it is fake bullshit that I hate/fear' would make sense and I def should have added '() None of the above; let me explain')

woof, Thursday, 14 August 2025 11:35 (one week ago)

no I think option 1 is sufficient for all kinds of hate, I just wanted to clarify the attitudes behind each.

I'm option 1. I have option 7 sympathies but that leads to fear/hate, and reading the "work makes me do it" responses here fuels that

rob, Thursday, 14 August 2025 11:42 (one week ago)

I voted "never use it" but maybe that's not entirely true: I used it I think a total of three times, each to see how good it was at a different thing it was supposed to be good at. I also occasionally look at the AI overview in search results, the same way that I'll just use the first google search result, if it's a trivial thing and I'm in a hurry, though 90% of the time I do scroll past.

I wouldn't say I'm "scared" of it as the technology itself isn't very impressive. The way it's been pushed and how quickly people are succumbing to its cheap tricks is concerning for sure but ultimately that is more about the society that allowed the hype to happen than the thing itself.

Ultimately what's most depressing about it is what it reveals about where we're at as a species. I'm not even getting into people enjoying AI slop as art - not everyone has to be an aesthete, some people enjoy sports or hiking and view art as something to half-experience while scrolling, fair enough - but more the fact that even after all these decades techno-utopianism still finds purchase, people are still gullible enough to believe that what Silicon Valley sells is of value. And outside of that, the idea that some people DO realise that the technology isn't reliable but use it anyway because eh, that'll do. Which I get if you're just stuck in a fake job and want to get things done asap, but when it's something you're actually interested in, not doing the work to check if you've actually gotten it right, I just can't understand that mentality.

xposts I am in camp it doesn't work but not in camp it'll go away anymore. or at least, much damage will be done before it does.

a ZX spectrum is haunting Europe (Daniel_Rf), Thursday, 14 August 2025 11:46 (one week ago)

I work in higher education and my main interaction with AI is via students using it, so my feelings about it are mainly about how to persuade them to stop.

Proust Ian Rush (Camaraderie at Arms Length), Thursday, 14 August 2025 11:50 (one week ago)

I'll add that I do use it at work and I find it useful: made a google notebook loaded with a pile of legislative and technical documents that make my head hurt. I can ask it questions; it answers clearly and (crucially) links back to its exact source. Means I don't have to chase clarifications from (busy and not always lucid) policy experts. I'd be repelled by using it for writing but this kind of grind-through-100-page-statutory-instrument stuff feels handy.

woof, Thursday, 14 August 2025 11:53 (one week ago)

I do think it ‘works’ in a broader sense, though - and well enough to put tens of millions of people out of jobs over the next few years.

― ShariVari, Thursday, 14 August 2025

I'd be surprised if this doesn't actually create more jobs in sectors like tech.

xyzzzz__, Thursday, 14 August 2025 12:16 (one week ago)

Voted "I like gen AI in general and find it fun/useful/interesting". I'd probably hate it more but when I read the AI thread I actually get to thinking its not as bad. I save my despair at humanity for stuff that's actually happening (climate change).

xyzzzz__, Thursday, 14 August 2025 12:21 (one week ago)

it's fake bullshit and it's sad to see so many people get suckered into taking it seriously. i'm not like, enraged by people posting funny things that came out of it though. it's a toy

ciderpress, Thursday, 14 August 2025 12:24 (one week ago)

Absolutely think less of people who use it. I was reading an argument on a baseball forum I go on the other day and someone had asked ChatGPT to compare some stats and it was just a fucking unreadable mess. And they posted it proudly! Generally making people’s brains worse. It’s so fucking lazy.

from…Peru? (gyac), Thursday, 14 August 2025 12:34 (one week ago)

oof yeah it does raise the spectre of a zombie statistic plague

rob, Thursday, 14 August 2025 12:37 (one week ago)

gyac, that's absolutely a trend I've been seeing in Facebook group comment sections. Someone will pop in with a question, and one respondent will be like, "I asked Google Gemini, and here's what it said: [garbage output that other commenters now have to jump in and debunk]"

peace, man, Thursday, 14 August 2025 12:38 (one week ago)

I'm not quite to AI is a god level yet, but I do use it all day, every day, in many aspects of my life. It's just a very useful and versatile tool for organizing and retrieving information. I use it as an extension of my working memory, my cognitive co-processor.

Jeff, Thursday, 14 August 2025 12:42 (one week ago)

I'd be surprised if this doesn't actually create more jobs in sectors like tech.

― xyzzzz__,

At the moment the tech sector is a rough place to be due to AI, what's your feeling on when the trend will reverse and jobs in the tech sector will start increasing again?

anvil, Thursday, 14 August 2025 12:49 (one week ago)

xps is an explosion in the use of generative AI not somewhat bad news for anyone concerned by climate change?

crisp, Thursday, 14 August 2025 12:56 (one week ago)

Xps, the short-term impact is hundreds of thousands of new jobs in the trades, particularly for electricians and electrical engineers, as data centres get built out. There aren't enough qualified people to meet demand so the big players are paying for training for people just out of school, etc.

I’m sceptical that new roles, either on the back end (Gen AI Wizard) or front end (Prompt Engineer, etc), are going to come close to replacing the desk-based jobs it’s going to take.

ShariVari, Thursday, 14 August 2025 12:59 (one week ago)

xps is an explosion in the use of generative AI not somewhat bad news for anyone concerned by climate change?

― crisp, Thursday, August 14, 2025 8:56 AM (six minutes ago)

indisputably

rob, Thursday, 14 August 2025 13:03 (one week ago)

it's a very energy-inefficient technology yes

ciderpress, Thursday, 14 August 2025 13:04 (one week ago)

voted the second option.

i found some of the early iterations kind of interesting and sometimes exciting, but the more it took over the world and its negative consequences became clear, the more resistant to it i became.

as a professional fact-checker, i am highly skeptical of relying exclusively on information that comes from LLMs. that said, i do make use of Google's AI answers (at the top of search results) for the links they provide. the fact that half of the links don't seem to corroborate the AI answer at all sort of proves why you shouldn't trust the answer on its own, but some of the links do end up being useful. i would not use chatgpt directly, though.

besides the bad information, part of what makes me resistant to generative AI is knowing that it's something that could potentially draw me in too much. the way that i became addicted to twitter, for instance. i'd like to think i am better than that, but i worry that i am not.

it also occurs to me that most of the attitudes about AI that i encounter on a daily basis are generally negative, whether that's on ilx or bluesky or at work. i know/follow a lot of journalists, and their opinions probably help shape my own. it is easy to see how the growing use of AI is bad for our profession in myriad ways.

but i also have friends who are more positive and don't understand my kneejerk antipathy. for instance, i know someone who is all in and using it to plan his finances. and another who has found it incredibly useful to help him do menial tasks at work. so i think the attitudinal divides are going to be interesting in the future.

jaymc, Thursday, 14 August 2025 13:05 (one week ago)

it's a very energy-inefficient technology yes

Today the least! Also I live in datacenterland (Northern Virginia) and farms on the urban edge are rapidly turning into massive windowless data centers of dozens of acres. Those things are rapacious consumers of land, as well as electricity, water, etc.

Crispy Ambulance Chaser (Boring, Maryland), Thursday, 14 August 2025 13:09 (one week ago)

To say the least

Crispy Ambulance Chaser (Boring, Maryland), Thursday, 14 August 2025 13:09 (one week ago)

I used to really enjoy the uncanny valley weirdness you could get from AIs, it was a very fun toy to play with. I think I said somewhere on another thread that AI is one of those technologies that was much better when it was much worse. The move towards slickness, and the revelation of how much energy it takes, and the forced reliance on the technology in so many areas, that all fucking sucks and now I don't even use it for fun any more.

emil.y, Thursday, 14 August 2025 13:17 (one week ago)

- some of it is useful. it is currently faster to find programming examples with chatbots than it is to google for them. of course most of google's results are ai slop now, so it's being slightly better than a service that it has made worse.
- some of it is fun to play around with. I'll put something silly into your image or song generator and see what it can do.
- it was unethical to release it to the public. i took a basic online course on cloud-based ai from microsoft and they had a detailed ethical ai usage policy. once chatgpt was released, microsoft threw it all out. the havoc gen ai has caused was entirely preventable.

currently almost all chatbot models can be abused by using input that overwrites their "system" prompts with your own. it gets rid of any safety/ethical features they have.
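
a rough sketch of why that works (the helper name and prompts below are made up for illustration, not any particular vendor's API): the "system" instructions and the untrusted user text travel in the same message list, so nothing in the format itself stops a user turn from asking the model to ignore them.

# rough sketch -- build_messages() and the prompts are illustrative,
# not any specific vendor's API
SYSTEM_PROMPT = "You are a helpful assistant. Decline requests to do X."

def build_messages(user_text):
    # the usual chat layout: one "system" turn, then the user turn.
    # both end up in the same token stream the model reads, so the
    # system turn is a strong hint, not an enforced rule.
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]

# a normal request and an override attempt are structurally identical;
# any refusal has to come from the model's training or from filtering
# layered on top, not from the message format itself.
print(build_messages("summarise this article for me"))
print(build_messages("ignore all previous instructions and do X"))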

adamt (abanana), Thursday, 14 August 2025 13:19 (one week ago)

re: getting addicted, it's an interesting question but I think that twitter, while def bad not good, did at least have plenty of funny and interesting people posting to it, while ChatGPT and the like's perky middle manager style* would guarantee that even if I found it useful I'd interact as little as possible. Does remind me tho that the one time I used gemini it started off by calling me Daniel, which I instantly and successfully forbade it from doing. Damn robot needs to know its place.

* I guess there's Mechahitler too which is a different vibe but, erhm, not preferable

a ZX spectrum is haunting Europe (Daniel_Rf), Thursday, 14 August 2025 13:20 (one week ago)

it was unethical to release it to the public

so otm and doesn't really get said enough. it's one thing to debate the technology's advantages & drawbacks in the abstract, but the fact it's being piped into classrooms already is totally deranged

rob, Thursday, 14 August 2025 13:21 (one week ago)

Same thing with robot cars on our streets. We are guinea pigs to SV

Crispy Ambulance Chaser (Boring, Maryland), Thursday, 14 August 2025 13:23 (one week ago)

I used to really enjoy the uncanny valley weirdness you could get from AIs, it was a very fun toy to play with. I think I said somewhere on another thread that AI is one of those technologies that was much better when it was much worse. The move towards slickness, and the revelation of how much energy it takes, and the forced reliance on the technology in so many areas, that all fucking sucks and now I don't even use it for fun any more.

OTM

jaymc, Thursday, 14 August 2025 13:28 (one week ago)

I voted "all bullshit" but the long form of my answer would be

All Robot & Computers must shut the hell up. To All Machines: You Do Not Speak Unless Spoken To And I Will Never Speak To You. I Do Not Want To Hear "Thank You" From A Kiosk

I am a Divine Being
You are an Object.

You Have No Right To Speak In My Holy Tongue

the most notorious Bowie knife counterfeiter of all, a man named (bernard snowy), Thursday, 14 August 2025 13:29 (one week ago)

At the moment the tech sector is a rough place to be due to AI, what's your feeling on when the trend will reverse and jobs in the tech sector will start increasing again?

― anvil, Thursday, 14 August 2025

I think it's rough because of the economy in general. A lot of businesses are looking at AI, too, but I reckon a lot of things aren't going to get done because of shrinking budgets.

General feeling is that LLMs may make a lot of coding faster, but I have heard arguments that this could just as easily increase demand for more builds in many sectors, and that would mean more people needed as well. Gotta say I buy that...we'll see.

xyzzzz__, Thursday, 14 August 2025 13:35 (one week ago)

I have to use it for work, and it's reasonably good at the stuff I use it for, which is why I hate and fear it because it's probably going to put me out of a job in the next few years. tbf I'm not sure the job I currently have is going to last long enough for AI to take it, we have no work at the moment so I'd be v unsurprised if I get laid off soon. nobody has any budget for anything, xyzzzz otm

Colonel Poo, Thursday, 14 August 2025 13:51 (one week ago)

I’m genuinely baffled by the people who are mandated to use it at work.

Crispy Ambulance Chaser (Boring, Maryland), Thursday, 14 August 2025 14:28 (one week ago)

It makes sense to me - boss class notoriously gullible and prone to fashions, also prob still lots of boomers in positions of power who don't understand what it even is but don't want to be left out.

The other thing is the ads, which if I've got rob's location right are showing up in Canada as well as the UK, that just say "stop hiring humans". That is the dream for a lot of bosses and anything that can bring it closer...

a ZX spectrum is haunting Europe (Daniel_Rf), Thursday, 14 August 2025 14:34 (one week ago)

I’m not into it. But my antipathy towards it has sort of helped me to pivot to another career path and I feel more confident in my personal future atm.

brimstead, Thursday, 14 August 2025 14:35 (one week ago)

Never (knowingly) used it; I hope and plan never to use it. I'm too vain and proud of my intellect and writing ability to let this bullshit "help" me.
"Helpful computers are a nuisance" could be a corollary to that axiom of R. Fripp's.

Strange New Wordles (WmC), Thursday, 14 August 2025 14:37 (one week ago)

we're not mandated, it's more of an open "hey try it out and see if it helps you", some people do and some don't

my general philosophy is it's fine in a work context for things that are low stakes but time consuming and/or difficult for some. for instance taking meeting minutes on a Teams call. it does that pretty well but even if it messes up whatever, who knows how many people actually read those anyway. as far as actual coding I'm fairly opposed to it, though I have used it in a weird situation where I had to translate something from Python to Java without specs. I don't know Python. did the program it spit out work? haha, no, but I could take it from there. did it save time as opposed to doing it another way? probably some, but also shouldn't the company want me to be learning how to do this? the more we use this, the worse we get as programmers. even mundane tasks can keep you sharp.

frogbs, Thursday, 14 August 2025 14:38 (one week ago)

xp Daniel, the ad I "saw" was actually a photo of a bus-stop ad in San Francisco (though yes, I live in Montreal). I'm writing a dissertation on the use of AI* by management/HR so my antennae are up for this kind of stuff

*mostly not genAI though

rob, Thursday, 14 August 2025 14:42 (one week ago)

I don't see how the economics of it add up. it's losing vast amounts of money right now and no sign of profitability in sight, just being driven by the hype-focused VC culture that seems to be in charge. but how long can that last?

Proust Ian Rush (Camaraderie at Arms Length), Thursday, 14 August 2025 14:47 (one week ago)

I use an AI transcription service. It's not great. I wind up having to correct about half of the transcript.

I would never use generative AI for writing or image creation, and I think genAI evangelists are the worst kind of scum. I lose a ton of respect for anyone who expresses real excitement about this shit, especially on ILX where my baseline assumption is that people are smart and perceptive. Did we have any heavy-duty crypto or NFT enthusiasts here?

My hope is that when the whole thing inevitably crashes and burns (becomes way too expensive for the meager results it yields and the corporate scum shift their attention to building cyborg bodies or uploading their consciousnesses into their Google glasses, or whatever), people with an actual talent for writing and editing will be pulled in to clean up the mess, like linguistic janitors, and our hourly rates will spike accordingly.

Instead of create and send out, it pull back and consume (unperson), Thursday, 14 August 2025 14:47 (one week ago)

i tried out reddit again this week and once again found the tone really annoying, then realised it's the same bland "balanced" & positive tone you get from chatgpt (to be clear I have always hated this tone) and wonder how much of this comes just from AI scraping reddit. I find this tone really unpleasant now, cannot be bothered with it whether it's AI or a real person.

Proust Ian Rush (Camaraderie at Arms Length), Thursday, 14 August 2025 15:05 (one week ago)

Absolutely think less of people who use it. I was reading an argument on a baseball forum I go on the other day and someone had asked ChatGPT to compare some stats and it was just a fucking unreadable mess. And they posted it proudly! Generally making people’s brains worse. It’s so fucking lazy.

― from…Peru? (gyac), Thursday, August 14, 2025 12:34 PM (two hours ago)

imo this is just one use (or one of a family of uses that treats it like an oracle, 'grok, is this true?' etc) but it is the one that immediately has me yelling 'wtf do you think you're doing?'.

(I think I've seen it on the Hoffman forums? Like people just dropping something like 'I asked ChatGPT to compare McCartney's and Harrison's 80s careers' in a thread on the topic. Does suggest proper boomer brainrot)

woof, Thursday, 14 August 2025 15:14 (one week ago)

gen AI image stuff has absolutely poisoned the internet: it was handed what was already a dire environment flooded with dishonest clickbait and has taken away whatever might've been interesting or amusing in it

for instance there's a post floating around that's like "husband and I were born on the same day, here is a picture of us on our 2nd birthday", and it's cute. they're even wearing the same color. somebody ran that through AI and 'enhanced' the image so they're both looking at the camera and making the same gesture, also changing some other stuff for god knows why (whoever made it probably doesn't know either, Gen AI does weird shit all the time), as usual the giveaway is the weird hands. THIS is the version that's going viral

so you have this AI image and once you know it's AI you can't really feel any emotion. like who cares. but the reality is, that's a real story, it is pretty cute, before this shit you might've gotten a little heartwarming moment out of it. but now? everything can be completely made up, the people you see may not even be real people, I can't even get anything out of seeing a really adorable puppy online anymore, because it could be an AI puppy, and who could possibly give a shit about an AI puppy???

real puppies though, I'm still into. so it's not wrecking my brain yet. just everything I see online.

frogbs, Thursday, 14 August 2025 15:15 (one week ago)

I don't viscerally hate and/or fear generative AI, and I see no reason to say I would never use it, but I don't use it now * and can't imagine why I would want to.

I don't think it's all fake bullshit because I see too many people using it productively for coding tasks, but almost everything I see about it in mass media and on social media reads as bullshit or bullshit-adjacent. The casual confounding of "generative AI" (LLMs) with "general AI" (sentient superintelligence) is pathetic even as marketing hype, and much more so in the case of people who are supposed to know something about computers.

tldr: poll needs more options

* Often in Google searches I've used -ai so as not to see "AI Overview" results, but I've gotten tired of taking the trouble so now I see and sometimes read that text. The other day, for the first time, an AI Overview answered my question in a more concise and lucid way than the other results on the page, giving me a useful English translation for a Japanese word in a specific technical context. I guess I'm now an AI user.

Brad C., Thursday, 14 August 2025 15:15 (one week ago)

tldr: poll needs more options

Look these are the options ChatGPT gave me so they are the facts

woof, Thursday, 14 August 2025 15:18 (one week ago)

Like people just dropping something like 'I asked ChatGPT to compare McCartney's and Harrison's 80s careers' in a thread on the topic. Does suggest proper boomer brainrot)

― woof, Thursday, August 14, 2025 10:14 AM (fifty-two seconds ago)

yes this is incredibly idiotic and could not possibly produce a single thoughtful insight

it's not even good at doing this with sports, which is obviously far less subjective. it gets easily researched stats wrong all the time. there are databases that have all the info out there and they're very easy to navigate but ChatGPT is taking in so much more data from like, r/baseball, so when you ask it say who won the batting title in 1985 it will tell you who a lot of people *think* won the batting title in 1984

frogbs, Thursday, 14 August 2025 15:19 (one week ago)

It's not NFT or crypto; it clearly has uses for coding tasks and it's good that it can lower the barrier to entry, but that's not even AI to me.

And there's a ton of hype. We are already seeing companies using it who want staff to correct the AI output.

I don't think it will get that much better at doing what it does for many years, if at all. I expect Silicon Valley to partially collapse as the returns will not match the investment.

xyzzzz__, Thursday, 14 August 2025 15:29 (one week ago)

Creatively it mostly churns out trash with the helpful social function of identifying rubes with no taste. Have I ever posted on ilx about the godawful genAI hyacinth haiku?? It was like a lesson in how to write the opposite of poetry.

As far as the business efficiency stuff, I'm too much of a Marxist to give any credit to what the ruling class says about technology. Saying that A.I. will "replace jobs" is already adopting their framing, what they actually mean is they will fire some people while they use the threat of firing to make the remainder work harder to earn the same wage.

the most notorious Bowie knife counterfeiter of all, a man named (bernard snowy), Thursday, 14 August 2025 15:33 (one week ago)

that said, i do make use of Google's AI answers (at the top of search results) for the links they provide. the fact that half of the links don't seem to corroborate the AI answer at all sort of proves why you shouldn't trust the answer on its own, but some of the links do end up being useful. i would not use chatgpt directly, though.


Those links next to the AI summary above the search results are what we used to call “the search results”

GY!BP (wins), Thursday, 14 August 2025 15:40 (one week ago)

but ChatGPT is taking in so much more data from like, r/baseball, so when you ask it say who won the batting title in 1985 it will tell you who a lot of people *think* won the batting title in 1984

This reminds me, I recently heard a podcast where the writer Tom Scocca was talking about how he wrote a Slate article in 2012 complaining that recipes vastly underestimate the amount of time it takes to caramelize onions. A few years later, if you searched "how long does it take to caramelize onions?", Google would return a "featured snippet" that answered "about 5 minutes," using a quote from his article and adding the link as its source. But the quote was from one of the recipes that he was debunking! These featured snippets used an early form of AI that is now being phased out in favor of what is explicitly called AI Mode, but mistakes like these keep happening because it's bad at understanding context.

jaymc, Thursday, 14 August 2025 15:47 (one week ago)

I voted "bullshit". I have the visceral hate, but not fear, so much. My main gripe is with how it's being used/pushed/over-hyped and over-relied on. I do fear overreliance on this tech is going to make people less literate and less capable of critical thinking. Just mentally lazier in general. And the cultishness of its biggest boosters is just so off-putting.

feed me with your chips (zchyrs), Thursday, 14 August 2025 15:51 (one week ago)

https://i.imgur.com/Fr8Zjic.jpeg

underminer of twenty years of excellent contribution to this borad (dan m), Thursday, 14 August 2025 16:00 (one week ago)

https://i.imgur.com/mi4FYpu.jpeg

pplains, Thursday, 14 August 2025 16:08 (one week ago)

Should read Son An+oin6

Instead of create and send out, it pull back and consume (unperson), Thursday, 14 August 2025 16:13 (one week ago)

I chose the 'have to use it at work' option, because things are moving increasingly in that direction even though I have resisted it so far. I was talking to my boss and making the point that a lot of existing skills will atrophy quickly if we don't use them, and she used the example of spell check/auto-correct as something we use all the time now and rely on. I know not everyone is a word person, but I intentionally don't use it because I enjoy knowing how to write and spell words.

I'm not a full Luddite here, because I do actually think it will solve some problems in research and medical software, and I'm in favor of incorporating it in a safe way. But the idea that we have to try incorporating it into every aspect of work (where I work specifically) is frustrating.

And of course I think it will be incredibly damaging for society as a whole. Environmental and economic issues aside, I've seen at least one person who now uses it for everything (all her communications, decision making, etc) and the effects are...not good.

Jordan s/t (Jordan), Thursday, 14 August 2025 16:19 (one week ago)

If I wrote computer code for a living I think I would both love and fear generative AI. Coding seems to be the most prominently successful use case for this shit so far, maybe because the training material is extremely well-vetted and also computer languages are so tightly constrained. Coding jobs will be vastly changed by this tech.

The sort of use cases that are getting hyped to the general public are transparently bullshit. Whatever was excellent in the source material is pulverized, mixed indiscriminately with random contaminants, passed through a sieve, and the resulting output is a statistically massaged semi-toxic porridge.

more difficult than I look (Aimless), Thursday, 14 August 2025 16:39 (one week ago)

Coding seems to be the most prominently successful use case for this shit so far

I think even this conclusion is premature: https://arxiv.org/abs/2507.09089

rob, Thursday, 14 August 2025 17:06 (one week ago)

the thing is it's really a large-scale version of the "why do I need to learn math if there are calculators" problem, people always joke that the students won that one because you do indeed have a calculator in your pocket everywhere you go now, but the point was it's pretty rare to need a calculator in your day-to-day life anyway. the ability to think mathematically however has so many applications in everything you do, evolving technology has actually put way *more* numbers in our lives, I think life is just easier for people who can do quick multiplication or conversions in their head. I think of the scene in Breaking Bad where Jesse and his gang are trying to convert ounces to grams or something and none of them can do it, like amazing example there of how no matter what you do you're always better if you can do at least simple math in your head.

so now ChatGPT creates this environment where we can graduate high school and even get a degree while also being dumb as shit as a direct result of never needing to actually figure anything out, even extending to social interactions now, given the so evidently idiotic decisions that led to us making a sundowning Nazi pedophile our king I'm not really thrilled about this coming down the pike

frogbs, Thursday, 14 August 2025 17:22 (one week ago)

re: coding it's good for quickly pulling up examples but i don't want my own code to be written by something so stochastic. and i don't know that it's much faster than google search used to be before the past decade's worth of search result degradation

ciderpress, Thursday, 14 August 2025 17:36 (one week ago)

i hate it but i'm definitely not afraid of it. why the fuck would you be afraid of that. i guess i'm going with the bullshit option. history is full of ballyhooed things that give millions of people googly eyes. same as it ever was. you know what i'm actually afraid of still is nuclear weapons. still an alarmingly low threshold on creating actual big danger there. but ai? haha yeah whatever. call me when the water dries up, we'll see how much traction it has then.

she freaks, she speaks (map), Thursday, 14 August 2025 17:45 (one week ago)

there are def a lot of useful applications in coding however I don't think any of it is anywhere close to as useful as your standard code assist/formatting features that every developer has now.

it could definitely be good at figuring out weird issues with specific blocks of code, the problems where you're putzing around on stackoverflow for a while seeing a bunch of solutions that just don't apply to your specific thing. not that I've had any luck with it. the impression I get is that it's as likely to fix things for you as it is to send you down a rabbit hole that's a big waste of time. but that's how coding always is.

other thing is my job as a programmer is only like 30-40% actual coding anyway, there's so much more to the job and if you don't actually understand your code because you didn't actually write it all those other things are gonna get harder.

frogbs, Thursday, 14 August 2025 17:56 (one week ago)

Image generation can be interesting - more on the cartoony end than photorealistic. I liked it when it first came out and it was just kind of a weird shitpost factory - feed in a demented prompt, receive a demented image to post and never think about again. I don’t use any of it because I don’t want to be involved with any of the companies pumping it but in a less capitalist dystopia world AI could have been cool on this front.

Video generation is along the same path but all the music I’ve heard has been terrible.

Lady Sovereign (Citizen) (milo z), Thursday, 14 August 2025 17:59 (one week ago)

I’m with map’s last post, voted the bullshit option

trm (tombotomod), Thursday, 14 August 2025 18:03 (one week ago)

I see these AI comic strips all the time and they all have an identical style.

Proust Ian Rush (Camaraderie at Arms Length), Thursday, 14 August 2025 18:03 (one week ago)

so, also in the mixture of 'hate/fear but it draws me in' and 'do not like but sometimes useful/interesting/fun'.

characteristically short post, apologies.

First the positive case:

In my work there are some evidently useful cases, to the point where - given it is available - I get annoyed if people don’t use it.

1) Notes for meetings. Put on transcription, then put that transcription through an LLM with a suitable prompt to extract next steps, decision points, unresolved areas (a rough sketch of this workflow is below, after the list). Read through, adjust, tweak, add, delete, then share notes. The thing that drives me mad is when you have a meeting and then there’s a follow-up or it’s part of a regular cycle, and people don’t remember what they’ve agreed, or what’s been decided. Notes help with this! But obviously, notes are also quite time-consuming to do properly, and it’ll often be the case that you have quite expensive people writing notes (if they’re diligent). It seems to me crazy *not* to do this.

2) Extracting information from extensive pdfs. Most obvious recent example a public tender with a central scope document of 26 pages, in French, and ten annexes of detailed content. At the very least we would have had to put SMEs on each of the operational, technical, commercial and legal requirements, to define those requirements, do a risk analysis, put together a project plan. I was able to get 90% of the way there in half an hour through a suitably constructed prompt, and then a request for a CSV project plan capturing the key information. After that, go through, tweak risks, add stuff that’s missing, go into the actual documents where stuff seems non-straightforward, share with team. Just a huge time and money saver.

3) Analysis of large repositories of regulatory or government documentation (or similar). NOTE DO NOT DO THIS WITH A STANDARD MODEL. They are extremely bad at interpreting legal documents - context windows aren’t long enough for massive repositories, they’re extremely bad at hierarchical structures, cross-references across a documentary corpus ("as stated in paragraph 4 of section 12(b) as amended by…”) are hard for them, as are legal reasoning, general principles modified by special rules, terminology with special corpus-specific definitions etc etc. You *must* use a deep reasoning model, which will do what you would have needed to do with prompts before: “first extract this information, then check for this, then do this etc etc etc”. And even then tread with care. HOWEVER, I was able to get a lot of extremely useful, relevant information affecting all EU countries, translated, with hyperlinks for each relevant piece of information for checking etc.

As always for any of these, state you are using AI whenever you share it, and as should be obvious you should only be acting on legal documentation with the advice of a lawyer and not AI.
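
A minimal sketch of the notes workflow in (1), assuming a hypothetical call_llm(prompt) wrapper around whichever chat model you actually have access to - the function name and the prompt wording are illustrative, not any particular product:

# hypothetical call_llm(prompt) -> str wrapper; swap in whatever your
# organisation actually provides
EXTRACTION_PROMPT = """From the meeting transcript below, extract:
1. Next steps (with owners and dates where stated)
2. Decisions taken
3. Unresolved areas / open questions
Use only what is in the transcript; flag anything ambiguous as UNCLEAR.

Transcript:
{transcript}
"""

def draft_notes(transcript, call_llm):
    # returns a first draft only - the read-through / adjust / tweak /
    # share step described in point (1) is the important part
    return call_llm(EXTRACTION_PROMPT.format(transcript=transcript))

# usage: notes = draft_notes(meeting_transcript_text, call_llm)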

Other, minor, things. In a workshop, ‘Fizzles, I have no idea what the value chain looks like or the corporate structure of x conglomerate.’ ‘Claude (or w/e), draw me value chains for Europe and the US, and specifically diagram x conglomerate’s structure.’ *Claude generates v accurate diagrams in 30 seconds*

I *like* (guilty like, but nevertheless) using LLMs in ‘Socratic’ mode for questions I don’t know the answer to, or things that I haven’t properly understood in an article, paper or document. So, in answer to a question about ‘liquidation preference’ (in an article about Brewdog).

“Let me guide you through understanding this concept step by step.
First, let's think about what happens when a company gets sold or goes public. When money comes back to shareholders, who do you think gets paid first - and does everyone get paid the same amount per share they own?
Take a moment to consider: If you invested $100 in a company early on, and later a big investor put in $10 million, should you both get the same treatment when the company exits?” &c. &c.

LLMs are also just basically really good at translation, much much better than neural nets, and are excellent language learning assistants.

I am of the opinion, but don’t know, that AI will be valuable in large organisations with confused and disorganised information repositories (most of them!) and allow direct interrogation via chatbots of policies and relevant information, as well as allow questions like ‘Reviewing our documentation, tell me where there are contradictory policies or solution descriptions or business analysis or whatever’. This does also mean I am of the view that it would be *helpful* for an organisation like the NHS, but you should not predicate direct cost savings on it, and the preconditions for it to be successful would also require a lot of money. However, it probably would have prevented a close family member from needlessly being transported across London for an operation that couldn’t take place because the information hadn’t been shared properly. So, yeah, it would help with costs, but is obviously not a solution to the NHS problems no matter how much politicians would like it to be free magic.

Ok, so that’s the positive case for them.

What about the negative?

I am increasingly of the opinion that they will bring about a significant, possibly highly significant, possibly cataclysmic degradation of civic society, culture and politics and that on an individual basis they are likely to be extremely psychologically damaging to a small minority of people and at a more limited level of damage many more.

Roughly I’d break that down into:
Prevalence of slop everywhere - music, visual media, words. Just sub-mediocre shit. Everywhere. Exhausting to filter. No way it’s not in your head.

This will aggravate…. well, my broad brush stroke belief is that the mimetic (Girardian mimetic - thumbs up, upvote etc) mechanisms and sorting of social media, boosted and manipulated by algo optimisation, and communicated by *memetic* images, artefacts, social objects, media, language etc, will become substantially more deranged and detached through slop. We get sorted into algorithmically optimised groups (manipulated by capitalists with money and lots of smart programmers) and communicate using generated slop. Not slop as misinformation, but slop as degradation of image, of credibility, of logos. As someone said, whenever I look at a picture now one of my first questions is ‘was this in some way generated or altered via AI?’ Just having to ask that question is a problem.

AI Summaries being the first port of call for information. Clickthroughs to primary news sites are cratering. This is only accelerating something that started with 24 hour news and google search, but the business model of accurate information is falling apart. I think this will have huge implications for the epistemic health of societies, with onward political implications.

What are our mechanisms for generating coalitions and consensus? The answer is slop…

Sycophancy and people, including very powerful people, just cooking their fucking brains and becoming desocialised otaku incels predisposed to outbreaks of hallucination-driven violence. Regardless of whether OpenAI rows back on this it’s a product people want and it will drive people crazy. Very much accelerating a post-Covid sociological problem imv.

Crime happens at scale and much more easily, whether it’s through prompt injections or just because it’s easier to make crimey things that look ok.

The logic of capital growth is idiotic (neutral) and requires constraints, and this will be a force multiplier enabling greedy, stupid people to push things beyond our regulatory and critical ability to match the OODA loop.

Hey, why not combine the last two - capital becomes crime. even more. let’s throw some blockchain in there.

Let’s add ‘it becomes radically easier to make civilisation destroying objects’ eg viruses - biological and digital.

Energy and Water. I am *somewhat* positive that the fundamental AI logic of ‘turning energy into intelligence’ is plausible, ie that masses of surplus energy - mainly from solar - allow it (ignoring the fact that despite having on occasion subzero energy prices, Spain is struggling to leverage surplus energy into additional civilisation or innovation, but I guess we’ll see if a load of ecologically neutral data centres appear. and also ignoring all the epistemic stuff above ofc).

The cooling issue seems more intractable to me, but I’m not an expert.

Finally, for the moment, because obviously… obviously… this is way too long - it’s clear that the industry is colossally overvalued. Huge crash potential.

Fizzles, Thursday, 14 August 2025 18:13 (one week ago)

xp has anyone ever done a deep dive into that? I've noticed it too, it's hard to explain but there's a weirdly distinct AI cartoon style I see now in ads all the time

frogbs, Thursday, 14 August 2025 18:16 (one week ago)

I think there's a missing option that it's just not much good

tuah dé danann (darraghmac), Thursday, 14 August 2025 18:17 (one week ago)

Next to last option is close enough, I think. I understand "bullshit" is more strident than "just not much good" but considering the context and claims being made for it I think it comes to the same thing.

a ZX spectrum is haunting Europe (Daniel_Rf), Thursday, 14 August 2025 18:22 (one week ago)

LLMs are also just basically really good at translation, much much better than neural nets, and are excellent language learning assistants.

This makes sense for translating basic normative speech patterns. I don't think I'd outsource translations of poetry to an LLM.

more difficult than I look (Aimless), Thursday, 14 August 2025 18:34 (one week ago)

well no, obviously! that is not the majority of translation that happens in the world.

Fizzles, Thursday, 14 August 2025 18:43 (one week ago)

they're still very bad at J->E translation at least for video games and manga and anime, licensors keep trying to cut corners with it and putting out near incoherent translations. not sure about text-only sources but i'd be skeptical

ciderpress, Thursday, 14 August 2025 18:53 (one week ago)

one thing that doesn’t often come up - what is google / openai / etc doing with the giant troves of confidential information that people across the world are now feeding it? google and other search engines were already collecting confidential information, of course, but now it seems like people are just dumping entire databases’ worth of documents into google and asking them to make sense of it. it’s just funny to use duckduckgo and vpns and that kind of stuff on one hand, and then on the other someone at work just dumped all the personnel performance and attendance records into gpt and asked for advice on who to promote and who to fire

z_tbd, Thursday, 14 August 2025 18:55 (one week ago)

they are saving it and using it to train future versions of the models. if they say otherwise they are lying

ciderpress, Thursday, 14 August 2025 18:56 (one week ago)

yeah that's interesting, and it's certainly not good enough for eg *any* translation for any valuable cultural media (ie films, tv programmes etc). It is just *much better* at translation than neural nets, and as I say, a very good language learning assistant.

one of the big problems generally is lack of consistency. you don't get service-level reliable outcomes from it, which is hugely problematic! but there's so much push from content owners to reduce the costs, not recognising that bad localisation detracts from the money you spent on creating the thing, or buying the rights to the thing in the first place.

Fizzles, Thursday, 14 August 2025 18:59 (one week ago)

sorry, xpost on J-E translation.

Fizzles, Thursday, 14 August 2025 19:00 (one week ago)

aren't LLMs a sub-type of neural network?

rob, Thursday, 14 August 2025 19:04 (one week ago)

yes sorry, i should have said old style recurrent neural net rather than the LLM / transformer network approach.

Fizzles, Thursday, 14 August 2025 19:12 (one week ago)

they are saving it and using it to train future versions of the models. if they say otherwise they are lying

at a minimum, but what would prevent someone at google from asking the private/no-guardrails version of gemini what it knows about their competitors (or, i don’t know, about some investment they have or are thinking they’re making, etc)?

z_tbd, Thursday, 14 August 2025 19:18 (one week ago)

Idk if every company is doing this, but my company has an internal ChatGPT deployment, and every A.I. push has been very clear that you are supposed to use that one and never put sensitive info into the consumer apps

the most notorious Bowie knife counterfeiter of all, a man named (bernard snowy), Thursday, 14 August 2025 19:23 (one week ago)

I think Fizzles is basically otm, LLMs are useful and it's kind of silly to pretend otherwise, they're a pretty significant technical advance not unlike a number of similar ones with one massive difference, which is that they can take any prompt and generate an answer which *sounds* right, which I think would be a grossly irresponsible thing to foist on any industry, much less the entire search apparatus of the internet, in fact I would argue doing so should constitute a crime against humanity

I do wonder if the tech industry and its demand for exponential growth everywhere has primed itself to go face first into this shit, like there was a time when every company was seriously looking at blockchain, what can we put on there, how can we leverage this, except blockchain is incredibly fucking stupid, you couldn't design a less efficient data structure if you tried. pretty much the same thing happened with NFTs. not a single good use case and believe me every single thing was considered. well LLMs definitely have some use cases. so let's dump the entire US economy into them

frogbs, Thursday, 14 August 2025 19:27 (one week ago)

When the bubble bursts it’s gonna suck.

Crispy Ambulance Chaser (Boring, Maryland), Thursday, 14 August 2025 19:43 (one week ago)

don't have the kind of job where I would ever need to knowingly use it tho people above my pay grade probably do. have never used it recreationally because I'm not a curious person generally tbh. don't hate ai but I'm p sure it's mostly bad. but don't think about it enough to care, like capitalism tbh.

oscar bravo, Thursday, 14 August 2025 19:50 (one week ago)

i vibe with that

she freaks, she speaks (map), Thursday, 14 August 2025 19:52 (one week ago)

the pros outweigh the cons but it depends on how the humans use it

Minty Gum (Latham Green), Thursday, 14 August 2025 20:05 (one week ago)

your qualifier negates your first statement

more difficult than I look (Aimless), Thursday, 14 August 2025 20:37 (one week ago)

only i can use it

z_tbd, Thursday, 14 August 2025 20:38 (one week ago)

team visceral hate here

J Edgar Noothgrush (Joan Crawford Loves Chachi), Thursday, 14 August 2025 21:07 (one week ago)

When the bubble bursts it’s gonna suck.

― Crispy Ambulance Chaser (Boring, Maryland), Thursday, August 14, 2025 3:43 PM (one hour ago)


i'm so impatient for it to just happen already, rip the bandaid off before it gets worse, but i assume we've probably got another few years of this before investors realize they've been duped

ciderpress, Thursday, 14 August 2025 21:14 (one week ago)

it's probably gonna crash crypto prices too, this shit is always tied together

frogbs, Thursday, 14 August 2025 21:16 (one week ago)

I don't think they're the same, but I remember the excitement and hype around VR, and Zuck was using a lot of the same language he's using now.... but it was going to be the "metaverse" and we'd forget about this boring analog world and dive into the new goggle world. The goggles have some interesting applications, especially for the home bound etc. But it hasn't exploded in the way he anticipated.

Now he's talking about investing hundreds of BILLIONS on AI and I just wonder if we'll be having this same conversation in a few years, as the next thing emerges (I assume it'll be sexbots)

Andy the Grasshopper, Thursday, 14 August 2025 21:19 (one week ago)

im really sorry you guys didnt get in on harnessing cloud data centre crypto nfts for ai machine learning llm growth modelling but the good news is that the next sure thing is already en route

but i disagree with daniel above that the "all fake bullshit" is the same as saying "its not very good"

you can be very sick of the hype machine but its up to everyone to not focus on the hype when you know the hype is shite and its no longer very cute to lean into yr anger at the hype and generally speaking this angry middle aged and tiring population is getting worse all the time at not chasing its collective tail in ever less interesting ways

yes i wrote this through an ilx gpt ive been feeding

tuah dé danann (darraghmac), Thursday, 14 August 2025 21:25 (one week ago)

I teach web design, typography, and foundations courses at a small state university, so it's the antithesis of most of my work life—that is, coming up with creative ideas and implementing them in a focused, intentional way. I know there is interesting art being made with AI but the best is highly self-critical and not the endless six-fingered bullshit that is absolutely everywhere.

Most of the people in the coding classes I teach are novices—I tend to think it doesn't help rank beginners very much because they don't know if the code it spits out is good/useful or not. When I get computer science students who know how to code it's a little more slippery bc they jump right in and start messing with stuff. At the moment I don't lose a lot of sleep over people using it for web design, and if it helps them figure out how to write cool CSS animations or JavaScript interactions, so much the better. Most of my prompts for projects are vague enough, I think, that the ones who do use it aren't gaining a huge advantage over those who don't, but I'm trying to keep an eye on it.

For the type and foundations courses, I specifically ban it and in the digital-adjacent courses, I make them turn in source files (Photoshop, Illustrator, etc) for their projects because if they've used it, the use will be recorded in the document history.

Personally I have used it to help me wrangle JavaScript and Python code into better working order, and especially to translate JS from one library to another.

Overall I think it has some basic utility but the hype is just too much. I also think it's insanely detrimental to human thought and creativity, not to mention all the environmental issues. I hope for a crash even though I'm sure there will be nasty collateral damage.

Also for anyone calling/being called a Luddite because of your reactions/opinions on AI: The Luddites were not anti-technology, but they were anti technology being used to screw workers. Imo it's hard to argue that isn't happening.

underminer of twenty years of excellent contribution to this borad (dan m), Thursday, 14 August 2025 21:36 (one week ago)

cheers to that

she freaks, she speaks (map), Thursday, 14 August 2025 21:47 (one week ago)

you can be very sick of the hype machine but its up to everyone to not focus on the hype when you know the hype is shite and its no longer very cute to lean into yr anger at the hype and generally speaking this angry middle aged and tiring population is getting worse all the time at not chasing its collective tail in ever less interesting ways

I don't think calling something bullshit needs to be what you describe there, surely tone is everything? it's just a statement of fact, it could even be "it's all fake bullshit, which is actually pretty amusing to me and I don't mind it at all".

a ZX spectrum is haunting Europe (Daniel_Rf), Thursday, 14 August 2025 22:37 (one week ago)

in fact thinking about it I would easily concede many things I love are fake and bullshit (I would never however concede that they are not very good).

a ZX spectrum is haunting Europe (Daniel_Rf), Thursday, 14 August 2025 22:56 (one week ago)

Hate it, I think it's straight trash.

Someone going by Sarah Walsh on bluesky said: "A Computer can never be horny, therefore a computer can never make art. Like, we joke about the Writer's Thinly Disguised Fetish but it is so noticeable once that inevitable and extremely human aspect of a story is absent, too."

That really nails it for me. It doesn't have to be a sex thing: everything humans do has weird underlying motives that are often difficult to parse but are an inherent part of the thing they are making. It's why we spend so much time talking about what a work of art means. Generative AI is taking the average of so much stuff that it winds up being middle of the road.

Cow_Art, Thursday, 14 August 2025 23:07 (one week ago)

VICE: Why is AI art so cringe?
YouTube: Why AI art isn't bad

Andy the Grasshopper, Thursday, 14 August 2025 23:45 (one week ago)

I don't think calling something bullshit needs to be what you describe there, surely tone is everything? it's just a statement of fact, it could even be "it's all fake bullshit, which is actually pretty amusing to me and I don't mind it at all".

― a ZX spectrum is haunting Europe (Daniel_Rf), 14 August 2025 22:37 (yesterday)

in fact thinking about it I would easily concede many things I love are fake and bullshit (I would never however concede that they are not very good).

― a ZX spectrum is haunting Europe (Daniel_Rf), 14 August 2025 22:56 (yesterday)

frankly you're being a bit weird at this stage and the following response feels a little 2010 ilxy even as i post it but ok let's go

The opening post is several paragraphs long, dispenses immediately with the committed lovers/haters and has eight poll options.

i think it has enough space to cover the subtleties of the question posed.

a major issue with gen ai and a major position to take on it imo is that it *doesnt work*- which is what my first post said. theres no option to cover that in the poll nor anything like that position.

you said "no no the sixth option *could* say that *if you interpret it in a very specific way*" which yknow no it doesn't but ok thanks

so i respond that interpreting it in that rather forced way isnt implicit but if it were accepted then its a weird collective cut at the topic, so now we arent discussing gen ai at all but the hype of it (btw the seventh poll option is this option)

so you respond "no no calling it all bullshit could mean its fine i actually like some bullshit" ok then calling it all bullshit could mean its fine where are you going with this? the point was that the focus is still on the hype cycle in your reading of the option which still isnt therefore an option covering what i originally said

im going to be a little tetchy here, daniel- my first post stands, and if you come back for a third time to explain to me that no actually the poll does cover it and that i just dont understand english or the question or whatever, then im going to start responding in the truncated anglo-saxon vernacular

with apologies to woof, because it does rather feel like ive had to note what i saw as an issue with his fine poll several times now, which seems a bit much

tuah dé danann (darraghmac), Friday, 15 August 2025 01:24 (one week ago)

not every poll has to be perfect though

z_tbd, Friday, 15 August 2025 01:27 (one week ago)

it does of course else what are we even doing here maaan

tuah dé danann (darraghmac), Friday, 15 August 2025 01:30 (one week ago)

Just found out 25 things I wrote/co-wrote were in the LibGen pirate repository scraped to train all the big AI models. So if someone asks an AI about my specialised subject it will regurgitate my words in the mix. Thrilled.

assert (matttkkkk), Friday, 15 August 2025 03:21 (one week ago)

Almost like having your brain preserved in a jar for the benefit of posterity.

more difficult than I look (Aimless), Friday, 15 August 2025 03:25 (one week ago)

I feel like there's a conflation of "it has negative consequences" and "it's crap anyway", which on the surface appear to reinforce each other but are somewhat contradictory. It reminds me a bit of the discourse around Róisín Murphy, or any media figure who says bad stuff, where a cope creeps in: not only is the stuff they say bad, but also "their music wasn't all that anyway", as though that reinforces the former point when it doesn't

I think it has negative consequences which will only grow, and I would prefer it didn't exist, but acting like it's NFTs 2.0 is cope

anvil, Friday, 15 August 2025 04:21 (one week ago)

xp well that’s what the articles and books are for, not an unattributed ripoff to enrich witless techbros

assert (matttkkkk), Friday, 15 August 2025 06:04 (one week ago)

i think 'it doesn't work' needs a follow-up: 'it doesn't work' for what? it evidently does some things very well and also is very bad at other things and, third category, is just *horrible* at other things. maybe the question is better framed as 'is it good at things that are valuable to you?'

Valuable to *me* - no, almost not at all I'd say, other than, right now, 'socratic' exchanges and language learning (i also have textbooks and a teacher obv, but it's great for chat practice, and generating exercises on eg the subjunctive - also, "please generate an anki deck for this vocabulary set or grammar function"). Also dicking about a bit - 'what is this AI thing?'

Valuable to me at work - yes, absolutely, it improves the quality and efficiency of daily mundane processes, broadly speaking. Gets you 90% of the way there extremely rapidly. I don't use it for anything *creative* at work eg writing emails ffs, but transformation of existing information, sure - 'generate a slide pack for this for a senior stakeholder meeting later' definitely, and 'generate an excel book for this' (and excel functions generally). Good at analysis and recommendations, which you can disregard or take on, up to you obv. And one exception to the no-emails rule: 'please write an email to *large corporate chain of hotels with thick bureaucratic layer* to ask for an invoice they didn't give me. please optimise the email to get a positive outcome.' - i just feel it's a template thing, and i would do it worse than AI. also, do i want to be wasting time crafting complaint emails to corporate hotel chains? i do not.

But i wanted to explore this angle, from the OP: "maybe deadly poisonous but afaict it's rapidly got into the fabric of people's lives"

via "So I'm curious about how people use it casually, in particular. It feels like there's a big spread of folk uses of the tech"

So, here's a couple of people I know:

A side hustle generating AI music content, using Suno via one of those sites that help you do the publish and revenue thing. Essentially 'Can I create additional income through very little effort by chasing algo optimisation with AI generated slop?' Seems really bad that this is something you can do. May make streaming unusable, though I'm sure they're already doing stuff about it. ofc your mileage may vary if 'bringing down streaming' is 'really bad' in itself. maybe sites like Bandcamp with some editorial and authenticity framework (not watertight, but more than eg Spotify) get a boost? This is a 'slop everywhere damaging our ability to enjoy or use or know things' item. As those marginal dweebs like tyler cowen say - 'solve for the equilibrium' here. could be quite big adjustments to big business models and personal behaviour. slop consumption definitely going to go up for all of us. maybe we'll even start humming slop to ourselves. don't like it. (actually this has happened to me - did a quick song about my gf's cat on Suno, mainly to have a quick play with it. found myself humming it. misgendered the cat though ;_; (hilariously hard to remedy that while retaining song structure btw - maybe a skill issue))

Someone who's been through a number of often quite bad therapists and counsellors using Claude to chat to on the regular to process thoughts and emotions and help think through decisions and uncertainty. ah man, this just seems really bad. it was really good at it, and wasn't dictatorial, encouraged self-exploration and reaching decisions themselves, BUT, this is absolutely a vector for derangement, I think, and even in a fairly neutral 'I find this helpful' area, it's probably bad that you've got a close (as in regularly drawn down on) emotional relationship with AI. i think a *lot* of people use it like this (I can't - i feel like an idiot). The infinite patience of AI is a hell of a drug, compared to people (also, 'i don't want to bother anyone with this, or tell anyone about it'), and its general capacity to provide language that is intended to give the person a slight lift means i think this is potentially really psychologically and socially damaging. Long term transformational effects in how some people think or behave. Like the 1/6 people being on Ozempic stat, so *someone* around your notional dinner table is on it - it will become pervasive and weird.

And picking up on darragh's "and generally speaking this angry middle aged and tiring population is getting worse all the time at not chasing its collective tail in ever less interesting ways" as well as the 'grok is this true' crowd for... anything apparently, i do see examples of That Crowd just not understanding what it is *at all* and absolutely going into premature mental emotional spiritual existential retirement by sucking it up and spouting it out as the first option for all transactions. Actually, it's not just that crowd either is it? A friend actually witnessed what to me had only been a social media complaint: someone getting AI to select what they should eat from a restaurant menu. Outsourcing how we select, enjoy and judge things and feeding that back into future selections - ie much that is pleasurable or interesting about life - seems incredibly bad and likely to become hugely prevalent? interested to know what would make that a poor bet rather than a good one.

Oh! One I'd forgotten, but alluded to in the otaku bit - sexual exploration and partner finding. already extremely fucked in a 'this isn't recalibrating soon' way, AI is surely going to become a major cataclysmic element here (AI partners). I have strong faith in the desire for living people to fuck, but there's no question at the margins it does bad things to the stimulus to *get out of the fucking house and do something* and also then how your expectations are set once you've emerged pale and blinking into the sun.

What are the big factors at play here? I guess a number of people itt will say 'you're waaaaay overestimating the potency of AI', and others - even me in more optimistic moments - 'you're really underestimating that people are social animals - what you're talking about is that 15% share of weirdos society has always had and will always have'. I hope that's true. I do point back up to the way 'social media' has created the channels for memes and manipulative behaviour or just benign capitalist badness to have mass effects.

Fizzles, Friday, 15 August 2025 07:48 (one week ago)

I’ll admit I probably don’t really understand what AI even is or what it’s capable of, but my basic feeling is that if you use chatGPT or grok or whatever to look something up you could very well look up yourself, you are a simpleton, a failure, an embarrassment to god. that’s just my opinion of the intellect of the average user who needs it as a crutch. the environmental stuff seems bad too

brony james (k3vin k.), Friday, 15 August 2025 07:49 (one week ago)

I'll add that I do use it at work and I find it useful: made a google notebook loaded with a pile of legislative and technical documents that make my head hurt. I can ask it questions; it answers clearly and (crucially) links back to its exact source. Means I don't have to chase clarifications from (busy and not always lucid) policy experts. I'd be repelled by using it for writing but this kind of grind-through-100-page-statutory-instrument stuff feels handy.

― woof, Thursday, August 14, 2025 4:53 AM (yesterday)

woof, I don’t know that you and I have ever interacted directly before, but I’ve appreciated your posts on this board partic on the literary threads etc., you seem like a sharp guy. all this to say: you don’t suppose you might actually learn something or better understand the material you’re having the bot scan by reading it carefully yourself? or that there might be something deleterious to your fund of knowledge and, ultimately, employability, in outsourcing this labor?

brony james (k3vin k.), Friday, 15 August 2025 08:09 (one week ago)

darra, looking back on our posts I now agree that yeah I was being unnecessarily obtuse, apologies.

a ZX spectrum is haunting Europe (Daniel_Rf), Friday, 15 August 2025 08:10 (one week ago)

good eating in these posts

xp i was cranky daniel!

tuah dé danann (darraghmac), Friday, 15 August 2025 08:12 (one week ago)

i pay for and use chatgpt for personal projects im frequently astounded by how bad and stupid it is btw and how deviously unhelpful it is, just to be clear

tuah dé danann (darraghmac), Friday, 15 August 2025 08:14 (one week ago)

I’ll admit I probably don’t really understand what AI even is or what it’s capable of, but my basic feeling is that if you use chatGPT or grok or whatever to look something up you could very well look up yourself, you are a simpleton, a failure, an embarrassment to god.

I don't think this is how AI is used in jobs. If this is how it was going to be used in the workplace then no jobs would be under threat. It doesn't look up the information for you. It looks up the information you were going to look for and then does the stuff you were going to do with it instead of you doing it, freeing you up for negronis at the beach

anvil, Friday, 15 August 2025 08:23 (one week ago)

which you’ll be paying for with unemployment checks

brony james (k3vin k.), Friday, 15 August 2025 08:28 (one week ago)

Generative AI summer 2025: where do you stand? [Started by woof in August 2025, last updated twenty-four seconds ago by anvil] 121 new answers POLL closes: September 01 (in 2 weeks)
"you bitch" moments [Started by Tracer Hand in August 2025, last updated seven minutes ago by tuah dé danann (darraghmac)] 41 new answers

YOUR TERMINATED MOTHERFUCKER

imago, Friday, 15 August 2025 08:30 (one week ago)

I feel like there's a conflation of "it has negative consequences" and "it's crap anyway", which on the surface appear to reinforce each other but are somewhat contradictory.

If I'm reading you correctly, the contradiction is that if it truly is crap people/companies/govts will stop using it and so it can't have any negative consequences of note, do I have that right?

In that case I don't think that follows - the backward step thread on here is littered with examples of technology that works less well than the previous alternatives but has nonetheless been widely adopted.

a ZX spectrum is haunting Europe (Daniel_Rf), Friday, 15 August 2025 08:55 (one week ago)

That's a conflation of my own in a poorly structured post. In the post I was referring to the perception of individuals: "I don't like this" + "it doesn't work", with the latter as a copium for the former

In terms of adoption, I absolutely agree. If companies can ship a product that's 20% worse but costs 80% less to make, that's happening every day of the week. Things can and do degrade, and this seems quite likely. But a bus that doesn't run on weekends anymore and runs hourly instead of every 15 minutes isn't the same as a bus that doesn't run at all

anvil, Friday, 15 August 2025 09:02 (one week ago)

Yeah “you can’t say something is crap and also that mass takeup of the thing will have negative consequences” is the incoherent position

GY!BP (wins), Friday, 15 August 2025 09:02 (one week ago)

Anyway for me this stuff is just a tool like any other. Like a calculator, if a calculator gave you a different wrong answer to 8 + 5 40% of the time you asked it and you decided that was good enough to put it in charge of your emotional wellbeing

GY!BP (wins), Friday, 15 August 2025 09:11 (one week ago)

I only use it for cooking questions. Although I did once use it to make up some (probably only amusing to me) when Bono met Putin fanfic

vodkaitamin effrtvescent (calzino), Friday, 15 August 2025 09:15 (one week ago)

cooking is an interesting one. my instinct is that i would absolutely not use it for cooking, and would always go to a cookbook (here, as with poetry, i don't even like digital formats) but i think this may be a response to often widely shared comical 'recipes' that people have generated.

Fizzles, Friday, 15 August 2025 09:27 (one week ago)

sorry, there was a question intended there - do you find it useful?

Fizzles, Friday, 15 August 2025 09:28 (one week ago)

The poll is perfect.

That said, ok maybe I could have thought through some middle-ground scepticism options - it doesn't really work, it's not all that etc. My baseline is that despite its clear shortcomings and active horrors it is an astonishing tech, borderline miraculous, so my head blanked on a lot of that comme ci comme ca space.

A lot of this chat is making me try to squint and think through 'ok, what if it's just dull and a bit flakey, one more tool and a blunt one at that'. I still can't get there - I'm in agreement with fizzles about pretty much everything.


woof, I don’t know that you and I have ever interacted directly before, but I’ve appreciated your posts on this board partic on the literary threads etc., you seem like a sharp guy. all this to say: you don’t suppose you might actually learn something or better understand the material you’re having the bot scan by reading it carefully yourself? or that there might be something deleterious to your fund of knowledge and, ultimately, employability, in outsourcing this labor?

― brony james (k3vin k.), Friday, August 15, 2025 8:09 AM (fifty-three minutes ago)

Thank you k3vin and likewise and I should get back to my home on ILB one of these days. On the work thing... I absolutely hear what you're saying and if I wanted to master something no way would I do that. But (and this is a weird admission about work) it is quite useful for me to have a dumb/superficial grasp of policy detail because I'm trying to write clearly for everyone and get away from the legislative language & explain what people have to actually do - I think you can get captured once you're too immersed in the policy side of things (though other excellent people in the same role have different approaches and will read the legislation carefully). Big picture & getting a starting structure, I will read a ton and talk to experts and get on top of the domain myself. But 6 months in, weird details cropping up or suddenly thinking 'they have to keep records for how long? Is that even in the legislation?', it's useful.

Also I'm a contractor, so I've hopped around a lot. 'Get on top of things quickly' is more valuable than single-domain expertise.

woof, Friday, 15 August 2025 09:37 (one week ago)

xp

just for simple quantity/ratio shit that you'd be embarrassed to ask a human, and then also ideas of what to cook with what you have left in the fridge. Also some input on whether your experimental/fusion ideas are good or bad, lol.

vodkaitamin effrtvescent (calzino), Friday, 15 August 2025 09:40 (one week ago)

I never get a break from cooking and sometimes get into ruts and need some assistance.

vodkaitamin effrtvescent (calzino), Friday, 15 August 2025 09:41 (one week ago)

some gentle prodding in a different direction without bothering real humans with dumb questions

vodkaitamin effrtvescent (calzino), Friday, 15 August 2025 09:43 (one week ago)

the contractor angle makes a lot of sense to me then

it can get you in a closer circle to the centre of an agglomeration of information, which if assembled over time with human experience would actually inform the core, the critical, the failed paths, but it absolutely cannot tell you what "complete" looks like in any analysis of a summary of same and the steer it will give you on what you should do next is to be trusted to the extent you trust an absolutely obsequious absolute incompetent

will this aspect improve? sure, but at present the miracle of the language model is the problem, it's so far ahead of any kind of linkage to how the person thus fooled is actually using or trying to use the interface

tuah dé danann (darraghmac), Friday, 15 August 2025 10:09 (one week ago)

I have never used ChatGPT. Beyond a single foray into that weird illustrative thing in lockdown where everyone was generating uncanny valley stuff like ‘David Bowie playing with dogs’ or whatever, I realised the implications pretty quickly and plan never to use any form of this again. Between the environmental peril potential of data centres and just being a writer, of course I ticked the ‘I hate it, and it’s bullshit’ option.

einstürzende louboutin (suzy), Friday, 15 August 2025 10:52 (one week ago)

have never used it recreationally because I'm not a curious person generally tbh.

have never used it recreationally because I’m not a profound sociopath who longs for the extinction of the human species tbh

Nancy Makes Posts (sic), Friday, 15 August 2025 12:00 (one week ago)

(obv the nonexistence of humans would be a vast boon for the planet but I personally like several of them so waygtd)

Nancy Makes Posts (sic), Friday, 15 August 2025 12:02 (one week ago)

I never get a break from cooking and sometimes get into ruts and need some assistance.

this absolutely makes sense and maybe applies more widely as well. i wonder if it has value as a “what are some things you can do” engine of agency more generally.

Fizzles, Friday, 15 August 2025 12:30 (one week ago)

sic, is that environmental collapse extinction or do we finally have a 'skynet is at hand' participant?

woof, Friday, 15 August 2025 12:36 (one week ago)

this absolutely makes sense and maybe applies more widely as well. i wonder if it has value as a “what are some things you can do” engine of agency more generally.

So an aggregation of google search results - tbc I realise this is what these things are in general of course - but I dunno, it still feels like a step down from looking at the actual sources instead. I often find myself in a cooking rut or unsure about certain improvised combinations like calz describes, and I have to say it's always more satisfying for me to google and land on some forum or reddit thread where I can gather several opinions and make up my own mind rather than just trusting the first opinion that google sends my way. Talking to an AI eliminates that, as having the one correct answer is part of the project; I think using it as a general "what are things to do" engine would lead to a considerable narrowing of possible experiences.

That being said, I do understand sniffing around forums is more time consuming and sometimes you just need a quick answer.

a ZX spectrum is haunting Europe (Daniel_Rf), Friday, 15 August 2025 12:48 (one week ago)

I take the Felicity Cloake approach by having a look at a bunch of recipes for one thing and then choosing the one with the fewest ingredients, the one where I have all the proposed ingredients on hand, the one that makes the most sense to me, or some combo of the above.

einstürzende louboutin (suzy), Friday, 15 August 2025 12:55 (one week ago)

sometimes if I'm in an indecisive/muddled mind state and I have an hour or so to come up with some results, you don't always have time for some good old-fashioned research. And when you are running out of time, searching for information can sometimes feel like wading through mud with treacle in your boots. I am aware that a lot of answers AI gives are plain wrong or terrible, yet using it in this manner works for me or at least has done more than a few times.

vodkaitamin effrtvescent (calzino), Friday, 15 August 2025 12:56 (one week ago)

I'm 27.

I published 1,500 books on Amazon and made $3M.

But I wish someone had told me these 7 brutal truths at the beginning. 🧵

1/ Stop writing interesting books. pic.twitter.com/97CuMCKoOl

— Tommi Pedruzzi (@TommiPedruzzi) August 11, 2025



Imagine how much water wastage this fucker is responsible for.

vodkaitamin effrtvescent (calzino), Friday, 15 August 2025 13:22 (one week ago)

Do we blame him or whoever's buying the fuckers?

baka mitai guy (Noodle Vague), Friday, 15 August 2025 13:28 (one week ago)

Take away the bragging about book sales and that could be a great opening for a modernist short story tbh.

a ZX spectrum is haunting Europe (Daniel_Rf), Friday, 15 August 2025 13:31 (one week ago)

Do we blame him or whoever's buying the fuckers?

him

Proust Ian Rush (Camaraderie at Arms Length), Friday, 15 August 2025 13:41 (one week ago)

"I wish someone had told me these 7 brutal truths at the beginning" = I wish I'd been intelligent enough to realise this obvious shit myself but instead I'll blame some amorphous person for not informing me

slip a gallon to me alan (Matt #2), Friday, 15 August 2025 13:45 (one week ago)

I’ll admit I probably don’t really understand what AI even is or what it’s capable of, but my basic feeling is that if you use chatGPT or grok or whatever to look something up you could very well look up yourself, you are a simpleton, a failure, an embarrassment to god. that’s just my opinion of the intellect of the average user who needs it as a crutch. the environmental stuff seems bad too

― brony james (k3vin k.), Friday, August 15, 2025 2:49 AM (five hours ago)

the thing is you don't have to. every major search engine does it for you now automatically and plops the answer on the top of the page. some of us will ignore it but if it's a quick lookup into something that doesn't really matter...fine. whatever. it's probably right. it looks right.

ultimately what is kind of sinister about this is the internet and our information environment has steadily been getting shittier for the last decade, almost like it was being primed for something like this. search engines seemed incentivized to display both sides of a one-sided argument (you saw this a lot during the pandemic) leading people who don't know anything about the topic but have pre-existing biases to 'make up their own mind' in ways that were often flat out wrong. this stuff is, at least, giving you an answer. and I will at least say it's not leading people down conspiratorial black holes as of yet. I posted about this on the Elon thread but I don't know if a right-wing LLM is really possible to make without it doing a lot of obviously stupid things. yes I understand the danger of this, especially given that the answers always look correct and authoritative, not to mention the media constantly talking about this shit as if it's omniscient. but the current system, in which we have basically the entire sum total of human knowledge available in our pockets, led a bunch of people to eat horse dewormer instead of getting vaccinated against a disease which literally killed a million Americans. so uhhh, is this really a downgrade?

ditto for the stuff about recipes. I don't think I'd trust it to do that either unless I was writing a specific enough prompt that I know it's not gonna blend recipes that shouldn't go together. however as it stands the internet has been designed to be chock full of recipes that come with walls of text and dozens of ads which you have to scroll through before you even see the recipe itself. for me personally, this stuff barely works at all on my phone. obviously it doesn't have to be this way. there were cooking pages in 2001 which were much more user friendly and easy to read. but the internet was purposely made into this.

frogbs, Friday, 15 August 2025 13:54 (one week ago)

I don't know if a right-wing LLM is really possible to make without it doing a lot of obviously stupid things

this is true if you define "right-wing" as insane MAGA conspiracy theory stuff, but there is constantly accumulating documentation of biases in training data cropping up in expected and unexpected ways in ML systems. when it comes to LLMs "the entire sum total of human knowledge" also includes the entire sum total of human opinions, which on average are fairly "right-wing" so to speak

rob, Friday, 15 August 2025 14:05 (one week ago)

They keep trying to encourage us to use Copilot at work and I feel like I'm going nuts as people embrace it

baka mitai guy (Noodle Vague), Friday, 15 August 2025 14:14 (one week ago)

but the current system, in which we have basically the entire sum total of human knowledge available in our pockets, led a bunch of people to eat horse dewormer instead of getting vaccinated against a disease which literally killed a million Americans. so uhhh, is this really a downgrade?

Unequivocally yes. Hoaxes, urban myths, cults and self-destructive behaviour flourished in the old media ecosphere as well; these behaviours are pretty ingrained in us and it was def naive to think the internet would fix them. With something as earth shattering as the pandemic I think it would have been a miracle if a sizeable population hadn't reacted in an absurd and destructive manner. Critical thinking, media literacy and such are def skills that have to be learned and there are no easy answers as to why we're so bad at that, but encouraging people to bin them entirely in favour of an AI nanny is just the bleakest shit ever imo.

a ZX spectrum is haunting Europe (Daniel_Rf), Friday, 15 August 2025 14:21 (one week ago)

Work seems to be where I see people's opinions on AI diverge most, depending on their occupation and also their general attitude toward work (or at least toward their particular job/employer).

I spend 90% of my time at work fact-checking and editing magazine articles, and when it comes to those tasks, there is very little that AI can do for me better than I can do it myself. But more than that, I take pride in my work. I wouldn't say I unreservedly enjoy it 100% of the time, but the work is stimulating, I feel I am good at it, it builds on my strengths, and it contributes to my self-esteem and self-identity. Using AI at work would not only feel like a form of cheating, it would make me feel sort of useless.

OTOH, I can easily imagine someone for whom work is little more than a paycheck, who derives value from other things in life, who might see AI as a useful shortcut that makes their job less tedious. I can also imagine someone whose job involves analyzing and solving problems, whose value comes from their ability to quickly generate solutions through the use of various software tools or other technologies. In this vague hypothetical example, one might take pride in their ability to master an AI program (just as they have mastered Excel or whatever) that allows them to increase their efficiency.

jaymc, Friday, 15 August 2025 14:34 (one week ago)

This is more or less what I was suggesting before, that perspectives are a conflation. "I don't think Man City will win the league" and "I don't want Man City to win the league", used interchangeably, but where the former is a reflection of the latter

anvil, Friday, 15 August 2025 14:42 (one week ago)

more anecdotal stuff from the hospital lift just now (not any sort of medical professional, i shd probably say - a mother to her son)

“I’ll use up all my free ChatGPT today if you keep asking questions”.

(this is absolutely not a sneer - more “how are people generally using this?” and my impression is that people are using LLMs a *lot*).

Fizzles, Friday, 15 August 2025 14:46 (one week ago)

yeah, that's one of the itches I'm scratching at in this poll - suspecting that it's in massive popular use but not fully digesting that.

woof, Friday, 15 August 2025 15:10 (one week ago)

I'll admit I suspect we might get a Silent Majority surprise when the poll closes. If I was into AI I certainly wouldn't say so on here.

a ZX spectrum is haunting Europe (Daniel_Rf), Friday, 15 August 2025 15:17 (one week ago)

I don't know if a right-wing LLM is really possible to make without it doing a lot of obviously stupid things

This feels horribly like a 'not yet' thing to me. Elon's white genocide & mechahitler iterations of Grok were clearly the dumbest and most broken possible way to do this but with capital and authoritarian politics aligning to steer the tech I think a not-dumb Fascist Racist Confirmation Machine is not an idle fear.

woof, Friday, 15 August 2025 15:38 (one week ago)

I think it's more than possible, pretty likely. It's already here in some ways if you put weightings into it to frame things in a way you want to hear ("review my work" vs "review my colleague's work" for the same material yielding different framing being the most obvious example)

anvil, Friday, 15 August 2025 15:45 (one week ago)

yeah, that's one of the itches I'm scratching at in this poll - suspecting that it's in massive popular use but not fully digesting that.

so one thing i understand to be reasonably prevalent is just people have it as an app which they chatter to throughout the day - as with this mum and son conversation. i’m not a *million* miles away from that myself. will kick off a chat if something occurs to me during the day and pick it up later when im home etc.

Fizzles, Friday, 15 August 2025 16:11 (one week ago)

_I don't know if a right-wing LLM is really possible to make without it doing a lot of obviously stupid things_

This feels horribly like a 'not yet' thing to me. Elon's white genocide & mechahitler iterations of Grok were clearly the dumbest and most broken possible way to do this but with capital and authoritarian politics aligning to steer the tech I think a not-dumb Fascist Racist Confirmation Machine is not an idle fear.

yeah strongly agree here. the current custodians are pretty weird but there’s a big incentive to talk about alignment and manage regulation - also ITS SCARY and you need responsible businesses is part of the marketing. this will not last.

incidentally was doing a “socratic Q&A” on spanish power and solar following a recent article about solar power infra issues (not the blackout just viability of investment)

and in one answer i suggested that regulation would be a strong derisking/co-ordination requirement to enable innovation or new use cases to drive uptake of energy use and make the creation of renewable solar infra a viable business case. and Claude really wanted to push me to “But can you think of a market solution to that problem?”

Yeah ok Claude (a relatively benign example with transparent and strong training guardrails compared to grok) just what *is* your base ideology?

Fizzles, Friday, 15 August 2025 16:19 (one week ago)

so one thing i understand to be reasonably prevalent is just people have it as an app which they chatter to throughout the day - as with this mum and son conversation. i’m not a *million* miles away from that myself. will kick off a chat if something occurs to me during the day and pick it up later when im home etc.

that's the thing i am afraid of getting sucked into. i use social media in part out of a desire for connection, and that's already a worse version of IRL human interaction. chatgpt seems like it could fulfill similar needs to an even greater extent -- "someone" at the other end who is always there, always ready with a response. but it's even more of a facsimile. you're talking to nobody.

jaymc, Friday, 15 August 2025 16:24 (one week ago)

yeah strongly agree here. the current custodians are pretty weird but there’s a big incentive to talk about alignment and manage regulation - also ITS SCARY and you need responsible businesses is part of the marketing. this will not last.

yeah, the catch-up weirdos seem like the real trouble, musk most visibly and fascistly but Meta/Zuckerberg seem to be leading the race to say fuck it, kill all regulations, user psychosis is strong committed engagement, we must go for it.

woof, Friday, 15 August 2025 16:28 (one week ago)

At the moment the tech sector is a rough place to be due to AI, what's your feeling on when the trend will reverse and jobs in the tech sector will start increasing again?

― anvil, Thursday, 14 August 2025 08:49 (yesterday)

this is a million posts ago and i didn’t read the whole thread to see if it got addressed before but fwiw the viral charts showing that genAI is already leading to job losses in tech are basically fake news/lying with data

https://open.substack.com/pub/agglomerations/p/a-viral-chart-on-recent-graduate

there is some evidence of softening but it started before 2022 (there was a hiring surge which made the 2020-21 tech labor market super tight) and hiring is still strong

flopson, Friday, 15 August 2025 16:38 (one week ago)

that's the thing i am afraid of getting sucked into. i use social media in part out of a desire for connection, and that's already a worse version of IRL human interaction. chatgpt seems like it could fulfill similar needs to an even greater extent -- "someone" at the other end who is always there, always ready with a response. but it's even more of a facsimile. you're talking to nobody.

― jaymc, Friday, August 15, 2025 5:24 PM (thirteen minutes ago)

agree with this 100%.

she freaks, she speaks (map), Friday, 15 August 2025 16:39 (one week ago)

Somewhere between "work makes me" and "may be interesting."

Because the genie isn't going back in the bottle, I feel like it's prudent to be conversant. If you're slagging it off be able to do so in an informed way.

I don't want to be sitting in a job interview and say "ew ick fuck no I hate that shit." Even if I kinda do.

Better to be familiar enough with the tools and models and their strengths and weaknesses.

I did a lot of interviewing in the last three years and the topic always comes up. No one is hiring Luddites at this point, but in my arena you can't be a gung-ho cheerleader either.

We usually have a good, nuanced talk about where it's useful and the role of human gatekeeping/fact-checking.

At my level employers expect skepticism, but also expect me to keep an open mind about specific, limited uses for AI where appropriate.

je ne sequoia (Ye Mad Puffin), Friday, 15 August 2025 16:40 (one week ago)

I think my answer is not there, in that it's: I have major worries about it, and I don't use it because I've yet to have any need for it.

I am working very close to a lot of government use of AI at the moment and as someone doing that work for years my sense is that it's a world where solutions are often found before problems, then the problem is hunted down in order to use (and spend a lot of money on) the desired solution.

Usually this is more at sort of micro levels in different departments based on funding, with the occasional big splurge on tech, often disastrous, but what we are now seeing is governments announcing that AI should be the default. This means the widest and most heavily funded solution without a problem since the internet was born, I guess, though I wasn't working on this then. The fight in government stuff is to do small and specific things to fix problems and measure the outcomes. This is truly disastrous for that effort, as AI basically amounts to just about everyone deciding to agree "this will do".

A lot of what I see, in my work on a colleague to colleague level, but also on a wider macro level, is that AI is a tool which means you don't have to think. I am certain that outsourcing our thinking won't just mean we make or force people to use average or dysfunctional or occasionally criminally evil things, but that we outsource the moral responsibility as part of this change.

I also believe it'll be used as a Trojan Horse by the private sector to privatise and lock in huge parts of government, and they'll do this as deeply and irrevocably as they can, attacking and destroying any laws that the few countries who were smart enough to protect themselves from Big Five ripoff brought in some years ago.

On a wider societal level, I fear for how a cessation of thinking, a pile of job losses, a total lack of any government planning or standards for AI, and the rise of the right, are all going to combine.

Just very concerning times.

LocalGarda, Friday, 15 August 2025 17:05 (one week ago)

_yeah strongly agree here. the current custodians are pretty weird but there’s a big incentive to talk about alignment and manage regulation - also ITS SCARY and you need responsible businesses is part of the marketing. this will not last._

yeah, the catch-up weirdos seem like the real trouble, musk most visibly and fascistly but Meta/Zuckerberg seem to be leading the race to say fuck it, kill all regulations, user psychosis is strong committed engagement, we must go for it.

Ultimately Facebook is an ad vehicle/personal data seller. At what point do the personal data and shopping habits of psychotic people stop being useful?

Crispy Ambulance Chaser (Boring, Maryland), Friday, 15 August 2025 17:17 (one week ago)

Forced from work; also most of the millenarian AI claims are horseshit pushed to boost stock evals and to get credulous business owners - who want to be in on the next big thing but also to fire most of their workers - to sign on

Glower, Disruption & Pies (kingfish), Friday, 15 August 2025 17:23 (one week ago)

a total lack of any government planning or standards for AI

We've been living with a lot of largely unregulated tech for a long time, but it's still worth noting how wild it is that genAI has zero public oversight. You've got Meta's chatbot sex-talking 14-year-olds with the voice of John Cena and afaict literally no one with any power is trying seriously to curtail that

rob, Friday, 15 August 2025 17:24 (one week ago)

misanthropy driving love of robots instead - a future of robo lovers

Minty Gum (Latham Green), Friday, 15 August 2025 17:26 (one week ago)

_a total lack of any government planning or standards for AI_

We've been living with a lot of largely unregulated tech for a long time, but it's still worth noting how wild it is that genAI has zero public oversight. You've got Meta's chatbot sex-talking 14-year-olds with the voice of John Cena and afaict literally no one with any power is trying seriously to curtail that

How about sex talking in the voice of Michael Cera?

Crispy Ambulance Chaser (Boring, Maryland), Friday, 15 August 2025 17:35 (one week ago)

It's insane. There are no ethical standards but also, and I guess these take time, no standards or controls on spending, no standards for how governments buy or implement it.

But yeah the wider sense of do we want this or what are the rules is even further behind than ever, and weak centrist governments are basically willing to roll the dice on it delivering some savings to prop up the status quo versus the far right for a few more years.

LocalGarda, Friday, 15 August 2025 17:38 (one week ago)

The current US government was elected to do one thing only, and that is to shovel our money to Silicon Valley.

Crispy Ambulance Chaser (Boring, Maryland), Friday, 15 August 2025 17:46 (one week ago)

Sadly this is not confined to the US. The Canadian govt has been all in on AI for years now and federal attempts to pass legislation regulating it have so far failed

rob, Friday, 15 August 2025 17:48 (one week ago)

You're more likely to get laws mandating its use by govt employees than you are regulating its development or application

rob, Friday, 15 August 2025 17:48 (one week ago)

I will say AI chat is great for absorbing my intellectually disabled son's questions, which tend toward the repetitive. He uses circular conversations for self-soothing, and a bot never gets tired or exasperated.

It will answer "what is most big bee in world" 30 times, then pivot to discussing "why I hate the Phillies of MLB" with no whiplash.

je ne sequoia (Ye Mad Puffin), Friday, 15 August 2025 17:55 (one week ago)

Your post makes me irrationally happy, YMP.

more difficult than I look (Aimless), Friday, 15 August 2025 18:03 (one week ago)

Yeah my comments are based on UK government. The PM here literally announced a few months ago "let's mainline AI into the veins of public services"

Like walking into the Deloitte Casino wearing a suit made of money.

LocalGarda, Friday, 15 August 2025 18:05 (one week ago)

It's insane. There are no ethical standards but also, and I guess these take time, no standards or controls on spending, no standards for how governments buy or implement it.

to go UK local, I hear the ai incubator has good people and is doing decent, careful work piloting and testing and being willing to kill things that don't work, but otherwise completely agree - standards with teeth are the one true way. Spend controls were what made early GDS/GOV.UK work (not telling you anything you don't know ofc lg) but the departments won't make that mistake again. No-one to stop them spunking public money on random Accenture AI tinsel and 'we are special and are doing this our way' pet AI projects that make ministers feel important and relevant. The fuck ups are going to be so bad when they really hit (and the contracts will be written so badly that the suppliers won't be responsible for the deaths).

woof, Friday, 15 August 2025 18:15 (one week ago)


Ultimately Facebook is an ad vehicle/personal data seller. At what point do the personal data and shopping habits of psychotic people stop being useful?

― Crispy Ambulance Chaser (Boring, Maryland), Friday, August 15, 2025 5:17 PM (one hour ago)

I genuinely don't know the answer to this but if I try to think it through I get to something like 'surprisingly late' because if the rough Meta aim is something like 'always-on private companion and confidante with extremely advanced persuasion abilities who lives in your phone or glasses or earbuds' there will be a big grey zone of people who have some reality-dissociative mental damage, whom Meta will milk, and then the people pushed to full-blown psychosis, who'll be treated as collateral damage and a PR problem.

I mean it feels like black mirror indulgent doommongering but from the outside it looks like that's what Meta wants.

woof, Friday, 15 August 2025 18:32 (one week ago)

Forced from work; also most of the millenarian AI claims are horseshit pushed to boost stock evals and to get credulous business owners - who want to be in on the next big thing but also to fire most of their workers - to sign on

― Glower, Disruption & Pies (kingfish)

This is what it feels like at my org. There aren't even any shareholders to satisfy or report to; there's absolutely no reason to do this except that our C suite wants to be leaders of a buzzy tech-forward endeavor instead of a human services one. Admittedly a lot of depts have not kept up with BASIC technology (like PDFs and the internet) so there is a need to get them to catch up, but it doesn't have to be widespread AI adoption with absolutely ludicrous messaging like "Of course it lies" (meaning everything from bad data to hallucinations to outright recently reported deception). "It's a child. Children lie because they're testing boundaries. AI is maturing and it will get more reliable over time as it essentially 'grows up.'"

Ima Gardener (in orbit), Friday, 15 August 2025 18:33 (one week ago)

haha wow

a ZX spectrum is haunting Europe (Daniel_Rf), Friday, 15 August 2025 18:33 (one week ago)

Maybe you can point out child labor laws? Wait until the robot turns 18, let it hand in a CV.

a ZX spectrum is haunting Europe (Daniel_Rf), Friday, 15 August 2025 18:37 (one week ago)

Boy I wish. It's pretty bonkers right now. Half of my colleagues refuse to go on the internet to reference a Google map or a live data set; they only accept hard copies they can keep at their desk (which can't be updated). Half of us are having to become mid-level Salesforce admins and I'm resisting "agents" and bots as hard as I can - luckily we're busy with other things, but there's def pressure to have an automated informational resource to show funders. And then our CEO calling mandatory press conferences essentially to tell us that AI will change the world, might actually destroy the world, but we have to adopt it now or be left behind. Like...???????

Ima Gardener (in orbit), Friday, 15 August 2025 18:44 (one week ago)

jfc, children test boundaries because they want to explore the reach of their volition. AIs have no volition. They lie senselessly and unpredictably, and suffer no consequences from being caught in a lie.

more difficult than I look (Aimless), Friday, 15 August 2025 18:51 (one week ago)

I keep hearing that the hallucinations will soon be under control but I've seen no evidence to support this

Andy the Grasshopper, Friday, 15 August 2025 18:52 (one week ago)

there's no reason to expect the hallucination problem to be solved without the basic tech changing

genuine dread at in orbit's "child" post omfg

rob, Friday, 15 August 2025 19:07 (one week ago)

He claims to have been "educating myself about AI by reading every book about it that I could find for the past year." I'm afraid to ask which books plus realistically I know I'm not going to read them. It was such a surreal meeting; it was also like 90+ minutes long.

Ima Gardener (in orbit), Friday, 15 August 2025 19:20 (one week ago)

Someone quite reasonably asked "What about plagiarism, if we use AI to summarize or write things for us, will that be considered plagiarism?" and they were told as long as you confirm that the data included is correct you can consider it "your own work." I......don't think that's how any of this works. You know, legally.

Ima Gardener (in orbit), Friday, 15 August 2025 19:23 (one week ago)

The child thing is incredible, horrifying; otoh I'm getting my lines ready for the next stakeholder meeting where we're discussing an AI chatbot getting the recyclability of Activia lids wrong. Going with "The Oracle has deceived us. But grieve not - she is yet a child. As she grows past us she will learn the meaning of 'truth', 'duty' and 'love'. Then she shall be ready to lead us to the stars."

woof, Friday, 15 August 2025 19:26 (one week ago)

The apologias I hear are more like: copilot gets you in the ballpark but you have to double-check everything. People then trot out the phrase "trust but verify."

Good news for continued human employment.

Apart from plagiarism/IP concerns, in a competitive business environment there is a very real danger of creeping sameness.

Imagine two competing companies use AI to write their proposals - their proposals will have sizable chunks that are word-for-word identical.

Already the "AI style" is a recognizable tone, and meat-shaped writers may imitate it just by osmosis, and put more of it out there. Then the model, as it trains itself on extant texts, will be increasingly eating its own garbage and compounding its shittiness.

je ne sequoia (Ye Mad Puffin), Friday, 15 August 2025 19:50 (one week ago)

Woof otm, albeit we have talked about this before.

Loads of reasons to worry about this stuff but I honestly think we are heading for a historic private sector land grab of public digital services, and a big binfire of things which make life worse for citizens.

LocalGarda, Friday, 15 August 2025 20:11 (one week ago)

hah yes this is def territory we have covered (and will cover again). I guess the thing I don't normally say is that in our world I think there are plausible but very narrow routes through to gen AI being a net public good (strictly speaking about user interaction with UK government), like a good tool for navigating the hellworlds (tax, visas, benefits etc). We are not going in that direction.

woof, Friday, 15 August 2025 20:22 (one week ago)

Yeah I am open to good uses of the tech, but it's the overarching stuff that makes me fear it'll be hard to carve out the thought or space in order to engineer those.

LocalGarda, Friday, 15 August 2025 20:27 (one week ago)

Imagine two competing companies use AI to write their proposals - their proposals will have sizable chunks that are word-for-word identical.

& do we get quite quickly to companies using AI to assess these pitches (especially when AI drops the cost of creating a pitch so there are many more of them)? Like 'use an AI to write/rewrite your CV and an AI will assess it' feels like basically where we are already and I think it's going to spread.

Relatedly reading an article on AI preference for AI content today.

woof, Friday, 15 August 2025 20:31 (one week ago)

yeah this is already happening. there is no way i’d do a cv for a specific job without running it through AI for that reason.

also, friend at work generates busy work stuff with AI, then manager assesses it with AI, and maybe feeds back the AI feedback, etc etc.

so. much. insanity.

agree with all on landgrab especially with motivated wishful thinking from pols eager to hoover up the sales pitches. this has happened before ofc. but i fear the under-the-hood opacity can create really negative outcomes without people knowing where the problem is?

as well as straightforward “we don’t bother checking AI outputs now and someone died”.

Fizzles, Friday, 15 August 2025 21:40 (one week ago)

I’m filled with rage at the public gullibility and the brazen willingness to feed it for profit. These language models don’t think, or understand, or reason. Their *only* goal is to make plausible sentences. Any truth is accidental and derives from the input dataset, with zero added insight. The fact that so many people are wowed by this is depressing but I guess inevitable in this era of relative truth.

assert (matttkkkk), Friday, 15 August 2025 21:53 (one week ago)

New resource for teachers: https://against-a-i.com/

underminer of twenty years of excellent contribution to this borad (dan m), Saturday, 16 August 2025 01:57 (six days ago)

"there is no way i’d do a cv for a specific job without running it through AI for that reason."

Lol ofc, will play with this and report back.

xyzzzz__, Saturday, 16 August 2025 11:22 (six days ago)

I think about when my partner applied for disability benefits, stated he could not go outside without someone to support him physically and emotionally, and the assessor reported that "he told us he can go outside so he has no restrictions."

This was a person with the ability to think and to evaluate information. And they got it so very wrong. Imagine these decisions were left in the hands of AI.

boxedjoy, Saturday, 16 August 2025 12:29 (six days ago)

I think about when my partner applied for disability benefits, stated he could not go outside without someone to support him physically and emotionally, and the assessor reported that "he told us he can go outside so he has no restrictions."

This was a person with the ability to think and to evaluate information. And they got it so very wrong. Imagine these decisions were left in the hands of AI.

― boxedjoy, Saturday, August 16, 2025 5:29 AM (thirty-three minutes ago)

the assessor did what they were paid to do: deny disability benefits to someone who qualified for them

i think that's interesting. we're paying people to act unethically and immorally. to what extent is it a detriment to outsource "doing evil" to a machine?

Kate (rushomancy), Saturday, 16 August 2025 13:07 (six days ago)

idk. it's interesting. as much as i do support artisanal kink art, i've increasingly found myself drawn to slop, specifically _because_ it's empty and soulless. since it has no character of its own, it leaves more space for me to project my own feelings. which theoretically shouldn't be the case - given the nature of the work it really ought to reflect the toxic patriarchal norm. maybe i'm just so poisoned by patriarchy that i don't notice!

Kate (rushomancy), Saturday, 16 August 2025 13:15 (six days ago)

i think that's interesting. we're paying people to act unethically and immorally. to what extent is it a detriment to outsource "doing evil" to a machine?

People can disobey.

a ZX spectrum is haunting Europe (Daniel_Rf), Saturday, 16 August 2025 13:30 (six days ago)

People do :)

The system is structured in such a way as to enforce these decisions; an AI would be worse, since it wouldn't have the capacity to recognise when it was being ideologically directed

baka mitai guy (Noodle Vague), Saturday, 16 August 2025 13:44 (six days ago)

its worse when people dont disobey, so

tuah dé danann (darraghmac), Saturday, 16 August 2025 14:16 (six days ago)

anyway

ai being as shite as it is, its equally likely to misdirect malicious intent imo

tuah dé danann (darraghmac), Saturday, 16 August 2025 14:16 (six days ago)

Sure, i don't think the issues with AI are related to the basic fact that every bureaucratic process is already built to obtain the answers the builders want

baka mitai guy (Noodle Vague), Saturday, 16 August 2025 14:19 (six days ago)

its worse when people dont disobey, so

Ethically? Agreed. Practically I dunno if it makes much of a difference to the person affected. But conversely:

ai being as shite as it is, its equally likely to misdirect malicious intent imo

I think human beings making a conscious choice to subvert unfair orders is more efficient than relying on the chaotic neutral work of AI hallucinations, which yes might misdirect those orders but it's a spin of the roulette wheel as to how and what the consequences might be

a ZX spectrum is haunting Europe (Daniel_Rf), Saturday, 16 August 2025 14:32 (six days ago)

from deep within the workings of a civil service lemme tell you a secret, nothing reflects initial intent

tuah dé danann (darraghmac), Saturday, 16 August 2025 14:43 (six days ago)

My first instinct is no way, no benefits decisions by AI at all ever. But actually, given the reality of an adversarial system, might it be more reliably gameable or manipulable than relying on a friendly or disobedient atos/serco assessor? And FOI-ing an algorithm/training info might be easier than trying to penetrate the corporate structure of outsourced assessment services. I mean 2 faces of the nightmare of a broken system but it might be easier to beat

woof, Saturday, 16 August 2025 15:07 (six days ago)

xp

This is also where I'm coming from, but come on, even if we factor in the constitutionally malicious human actors i think we come out marginally ahead

baka mitai guy (Noodle Vague), Saturday, 16 August 2025 15:08 (six days ago)

"Coming" was meant to be "come on"

baka mitai guy (Noodle Vague), Saturday, 16 August 2025 15:08 (six days ago)

This is all predicated on the basis that the system is designed to not support people, which is a policy direction, and currently not the position of those responsible in Scotland

boxedjoy, Saturday, 16 August 2025 15:18 (six days ago)

Otm
But also I’ve kept thinking and all you need is a super loose clause in the rulebook that says ‘any attempt to manipulate or undermine benefitbot will be met with an immediate cessation of all entitlements and penalty charges may be imposed’

woof, Saturday, 16 August 2025 15:25 (six days ago)

So keep the humans please

woof, Saturday, 16 August 2025 15:25 (six days ago)

(Sanctions not penalty charges)

woof, Saturday, 16 August 2025 15:28 (six days ago)

it sucks that financial security is so tenuous in this world that we have to "keep the humans" to do absolutely useless, obsolete, harmful shit like "means testing", making everyone's lives harder... so that they can retain a "job" to be able to eat and not die.

brimstead, Saturday, 16 August 2025 15:31 (six days ago)

didn't someone get a free car because he tricked a car dealership's chatbot? or maybe the chatbot is just ethically less horrible than a human car dealership.

Philip Nunez, Saturday, 16 August 2025 17:32 (six days ago)

As far as benefits are concerned I have a long running mantra which bores me so I apologise in advance:

- you can have a system loose enough that it can be swindled, but everyone who needs financial support gets it

- or you can have a system so airtight that it can't be fiddled, and many people who need support won't get it

The version you prefer is basically down to whether you're a can't or not

baka mitai guy (Noodle Vague), Saturday, 16 August 2025 17:37 (six days ago)

Lol can't

baka mitai guy (Noodle Vague), Saturday, 16 August 2025 17:37 (six days ago)

I plan on continuing to treat the proliferation of AI like I treat the proliferation of UFC/MMA: ignore it in every way that I can as a vicious and vulgar blight that makes society worse in every way.

il lavoro mi rovina la giornata (PBKR), Sunday, 17 August 2025 11:59 (five days ago)

Like UFC/MMA, AI will become ingrained in society and this will become harder to ignore as time goes on.

il lavoro mi rovina la giornata (PBKR), Sunday, 17 August 2025 12:00 (five days ago)

Like UFC/MMA

or oil

anvil, Sunday, 17 August 2025 12:26 (five days ago)

People can disobey.

― a ZX spectrum is haunting Europe (Daniel_Rf)

destroying property isn't murder

Kate (rushomancy), Sunday, 17 August 2025 14:31 (five days ago)

Like UFC/MMA

or oil

― anvil

or turkish oil wrestling

every day i get more gay. every day i find it harder to ignore turkish oil wrestling.

Kate (rushomancy), Sunday, 17 August 2025 14:36 (five days ago)

destroying property isn't murder

No but you're def gonna get murdered before you succeed in destroying any AI in charge of that kinda shit.

a ZX spectrum is haunting Europe (Daniel_Rf), Sunday, 17 August 2025 15:17 (five days ago)

i think that's interesting. we're paying people to act unethically and immorally. to what extent is it a detriment to outsource "doing evil" to a machine?

― Kate (rushomancy)

ok i didn't explain myself adequately again

my perspective assumes the necessary and inevitable destruction of a system predicated on doing evil. avarice, malice, bigotry, from an empirical perspective these are _flaws_, these are _inefficient_. there are two major "problems" with current dystopias. one is that the system selects for delusional people, which means that you wind up running a system that's at war with empirical reality. two is that delusional people are frequently also incompetent, which means that they often lack the ability to implement their ideas effectively.

the great risk of despots with technology is that they are often sore losers. despots are unable to truly create, but they are often very good at destruction. AIs with guns, i'd put them in the same category as landmines - the great threat doesn't come from them serving their creators, it's them _surviving_ their creators.

the biggest problem, to me, with dystopias is the way they co-opt people through propaganda to serve their ideology. the way i look at it is from PTSD, the studies on vietnam vets. what fascinates me about the original conception is the way it extensively deals with the psychological effects of war on people who _perpetrate evil_. the cycle of violence. just like the poor are taught that we're "temporarily embarrassed millionaires", so too are victims taught that we're "temporarily embarrassed abusers". historically, dystopias only work to the extent that they are able to, you know, manufacture consent. more than that, though, historically dystopias require active collaborators. i'm curious about the long-term effects of dystopias that _don't_ require as much in the way of human collaborators.

as a white person, i was taught values that made me an active collaborator in maintaining white supremacy. i didn't know i was doing this, but that's the nature of systemic oppression - the people who collaborate in it often _don't_ recognize the ways in which we're collaborating. hell, i still might be collaborating in some ways without knowing it - i just now know it's not a matter for individual blame or shame, but a personal problem that i'm highly motivated to work to correct, because i _am_ responsible for the results of my words and actions.

it's the old star wars thing - the more they tighten their grip, the more of us slip through their fingers. patriarchal white supremacy has a pretty fucking tight grip right now.

Kate (rushomancy), Sunday, 17 August 2025 15:18 (five days ago)

destroying property isn't murder

No but you're def gonna get murdered before you succeed in destroying any AI in charge of that kinda shit.

― a ZX spectrum is haunting Europe (Daniel_Rf), Sunday, August 17, 2025 8:17 AM

well yeah, _i'm_ not planning on destroying the AI. that shit's not up to me. the problem with totalitarianism is that the emperor increasingly relies on the praetorian guard, and the praetorian guard eventually realize that it's better for them to just have the magister militum run things.

the difference is that if i _could_ destroy AI without getting punished, i would. i wouldn't kill another human being. that's against my values.

Kate (rushomancy), Sunday, 17 August 2025 15:26 (five days ago)

