the unsettling thing to me is not really knowing what the unintended consequences of AI will be - especially if AI starts creating its own AI
― | (Latham Green), Thursday, 22 December 2022 18:32 (one year ago) link
I'm Mr Meeseeks look at me
― Fash Gordon (Neanderthal), Thursday, 22 December 2022 18:38 (one year ago) link
i think this is a really interesting point
want steel to build a particle detector? you have to source low-background steel—manufactured before contamination from mid-20th century nuclear testing. want to build a text or image generator AI? you’ll have to source low-background training data—collected before 2022.
— Kyle McDonald (@kcimc) December 5, 2022
― 𝔠𝔞𝔢𝔨 (caek), Saturday, 24 December 2022 00:10 (one year ago) link
The obvious source is Google Books, which has a huge scanned-and-OCRed archive of newspapers and magazines dating back to the 1800s. Only a little of it is publicly available, but Google has access to all of the original data. Here is Adam West looking jolly:
https://books.google.co.uk/books?id=KkwEAAAAMBAJ&printsec=frontcover
I spent ages a while back trying to find out why Kangerlussuaq Airport was originally called Bluie West 8. The US Army used "bluie" as a codename for a network of airbases it built in Greenland in 1940. But why bluie? Was it random, or was there a system? Google Books is ideal for that sort of thing. It has scans of Armed Forces Talk from the 1950s and the US equivalent of Hansard etc. In this case it didn't help, but it was extremely useful.
But of course books are not a true reflection of popular thought. Books go through a complex, multi-stage filtering process. They are processed, censored. They present a clean, idealistic view of humanity. As we would like ourselves to be. Not how we are. The same is true of this very post, of Ilxor in general, perhaps all of written communication. The decision to use written communication in a multimedia world is a deliberate choice intended to achieve a desired effect.
A few posts ago I wondered how much space an archive of Usenet up until around 1996 would take up, if you don't count the binaries groups. A few hundred megabytes? A couple of gigabytes? It was all plaintext, and until the mid-1990s most of the internet looked like this:
http://users.umiacs.umd.edu/~oard/apollo/
http://music.hyperreal.org/library/discogs/?M=D
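As a rough sanity check on those guesses, here is a back-of-envelope sketch in Python. Every traffic figure in it is an assumption chosen for illustration, not a measurement:

```python
# Back-of-envelope size of a text-only Usenet archive, ~1981-1996.
# Both endpoint figures below are assumptions, not measured values.
import math

start_mb_per_day = 0.1   # assumed text traffic in 1981
end_mb_per_day = 50.0    # assumed text traffic in 1996
years = 15

# Assume smooth exponential growth between the endpoints and integrate.
growth = math.log(end_mb_per_day / start_mb_per_day) / years  # per-year rate
total_mb = 365 * (end_mb_per_day - start_mb_per_day) / growth
print(f"~{total_mb / 1024:.0f} GB uncompressed")  # ~43 GB

# Plaintext compresses roughly 4-5x, so perhaps ~10 GB on disk: more than
# "a couple of gigabytes", but still trivially portable today.
```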
Google in theory has a big archive of Usenet posts, but most of it seems to have been thrown away. Which is a shame, because it would be fascinating to see how ordinary people reacted to Star Trek III when it was new (for example). And by "ordinary people" I mean "a small group of North American computer science students and software engineers". Who, ironically, would be the key market for that film.
But, anyway, a complete archive of Usenet circa 1980-1996 would be portable, fungible, and culturally compatible with the typical modern-day AI researcher. It still wouldn't be a true reflection of popular thought. But that is probably impossible. Human beings communicate with grunts and hand movements, not words. Can a computer make grunting noises?
― Ashley Pomeroy, Sunday, 25 December 2022 14:21 (one year ago) link
good post!
speaking of "processed, censored", and "clean idealistic view of humanity", and "As we would like ourselves to be. Not how we are", that's what i think of when i look at how Google Image Search works, now. i wish i would have had the foresight to take a screenshot of what an Image Search for, say, "chair", produced back in the early 2000s. i remember it producing more "real" chairs. this is what it does, now:
https://i.imgur.com/Tb7c5qm.png
you get products. mint condition chairs that can be purchased. if there are people sitting in them, they are very attractive people who are well-lit. these are not the chairs that i know. you can also search for "used chair", of course, or "slightly dirty chair" or "normal chair with normal people" (a very normal search, lol), and maybe get around those things. maybe the captcha game for identifying the traffic lights and the crosswalks will gradually be extended to identify normal chairs vs ideal chairs
― Karl Malone, Sunday, 25 December 2022 15:56 (one year ago) link
but also, it makes me think of the human element of providing training material to an AI. you can provide it the whole of books.google.com. you could give it a bunch of recorded television and film (maybe removing Idiocracy from the dataset just to avoid that particular self-fulfilling prophecy). but will it distinguish between what is "real" and what is manufactured? particularly when humans often prefer the idealized form of life over the real thing?
― Karl Malone, Sunday, 25 December 2022 16:03 (one year ago) link
(...and particularly when the idealized/manufactured form of human life would seem to be the most readily available for AI training?)
― Karl Malone, Sunday, 25 December 2022 16:06 (one year ago) link
Remember the days before 2022 - the pure world, unsullied.
I think we are in for a big anti-ai backlash soon
― | (Latham Green), Wednesday, 28 December 2022 15:04 (one year ago) link
Seems inevitable that as AI gets better people will become more and more blind to its flaws and invest more trust in its abilities than it deserves. As we put AI in charge of more systems we should be prepared for it to drive them periodically into a wall or over a cliff.
― more difficult than I look (Aimless), Wednesday, 28 December 2022 18:59 (one year ago) link
will we need ai to protect us from ai
― | (Latham Green), Wednesday, 28 December 2022 19:20 (one year ago) link
Irl lol at the mention of Idiocracy
― FRAUDULENT STEAKS (The Cursed Return of the Dastardly Thermo Thinwall), Wednesday, 28 December 2022 20:18 (one year ago) link
I do think at some point we reach the limits of human mental ability and have to use ai to go further. Especially writing software - look at all this inferior garbage these days. Also, probably replacing most managers with ai would be a notable improvement. * sorry to be grouchy
― | (Latham Green), Wednesday, 28 December 2022 21:02 (one year ago) link
"i remember it producing more "real" chairs"
And of course if you search for e.g. "tagliatelle" Google just returns page after page of links to recipe sites with tagliatelle recipes. And yet if I search for "difference tagliatelle fettuccine" Google does return useful results, so perhaps I'm just being crap.
The subject of perfect people in perfect homes makes me wonder if any police forces have tried feeding a database of mugshots into an AI engine in order to generate "the face of a typical criminal". I could imagine it being treated as a joke - "do you look like a crook? click here to find out" - but on the other hand suppose the government of e.g. Myanmar decides that you really can tell whether someone is anti-social just by matching the shape of their face against a set of AI-generated faces. Like a modern-day version of phrenology.
My hunch is that perhaps as a joke at least one insurance company with access to mugshots or passport photos or driving licence photos etc has fed the results into an AI to look for patterns, or to generate generic faces. Wasn't there a website a while back that could generate generic LinkedIn profile faces?
― Ashley Pomeroy, Wednesday, 28 December 2022 21:46 (one year ago) link
unfortunately, something of the sort is already taking place. not by using facial characteristics (that i know of), but by using other data instead. i don't know how far beyond china the practice extends.
https://www.nytimes.com/2022/06/25/technology/china-surveillance-police.html
The latest generation of technology digs through the vast amounts of data collected on their daily activities to find patterns and aberrations, promising to predict crimes or protests before they happen. They target potential troublemakers in the eyes of the Chinese government — not only those with a criminal past but also vulnerable groups, including ethnic minorities, migrant workers and those with a history of mental illness.
They can warn the police if a victim of a fraud tries to travel to Beijing to petition the government for payment or a drug user makes too many calls to the same number. They can signal officers each time a person with a history of mental illness gets near a school.
...In 2017, one of China’s best-known entrepreneurs had a bold vision for the future: a computer system that could predict crimes.
The entrepreneur, Yin Qi, who founded Megvii, an artificial intelligence start-up, told Chinese state media that the surveillance system could give the police a search engine for crime, analyzing huge amounts of video footage to intuit patterns and warn the authorities about suspicious behavior. He explained that if cameras detected a person spending too much time at a train station, the system could flag a possible pickpocket.
“It would be scary if there were actually people watching behind the camera, but behind it is a system,” Mr. Yin said. “It’s like the search engine we use every day to surf the internet — it’s very neutral. It’s supposed to be a benevolent thing.”
He added that with such surveillance, “the bad guys have nowhere to hide.”
Five years later, his vision is slowly becoming reality. Internal Megvii presentations reviewed by The Times show how the start-up’s products assemble full digital dossiers for the police.
“Build a multidimensional database that stores faces, photos, cars, cases and incident records,” reads a description of one product, called “intelligent search.” The software analyzes the data to “dig out ordinary people who seem innocent” to “stifle illegal acts in the cradle.”
A Megvii spokesman said in an emailed statement that the company was committed to the responsible development of artificial intelligence, and that it was concerned about making life more safe and convenient and “not about monitoring any particular group or individual.”
― Karl Malone, Wednesday, 28 December 2022 22:15 (one year ago) link
2002: Don't be evil
2022: It's supposed to be a benevolent thing
― Karl Malone, Wednesday, 28 December 2022 22:27 (one year ago) link
i know there's a chatGPT thread but i suppose it makes sense to keep this kind of talk/news in here:
For some students, the temptation is obvious and enormous. One senior at a Midwestern school, who spoke on the condition of anonymity for fear of expulsion, said he had already used the text generator twice to cheat on his schoolwork. He got the idea after seeing people expound on Twitter about how powerful the word generator is after it was released on Nov. 30.
He was staring at an at-home computer-science quiz that asked him to define certain terms. He put them into the ChatGPT box and, almost immediately, the definitions came back. He wrote them by hand onto his quiz paper and submitted the assignment.
Later that day, he used the generator to help him write a piece of code for a homework question for the same class. He was stumped, but ChatGPT wasn’t. It popped out a string of text that worked perfectly, he said. After that, the student said, he was hooked, and plans to use ChatGPT to cheat on exams instead of Chegg, a homework help website he’s used in the past.
He said he’s not worried about getting caught because he doesn’t think the professor can tell his answers are computer-generated. He added that he has no regrets.
“It’s kind of on the professor to make better questions,” he said. “Use it to your own benefit. … Just don’t get through an entire course on this thing.”
https://www.washingtonpost.com/education/2022/12/28/chatbot-cheating-ai-chatbotgpt-teachers/
― Karl Malone, Wednesday, 28 December 2022 22:34 (one year ago) link
newspaper writer contemplating that this may be the first and last time they're allowed to let an AI write the closing paragraph of their article and it will still be kind of clever:
ChatGPT had its own ideas about the solution. Asked how to confront the possibility of cheating, the bot offered several suggestions: educate students about the consequences of cheating, proctor exams, make questions more sophisticated, give students support they need so they don’t see the need to cheat.
“Ultimately, it is important to communicate clearly with students about your expectations for academic integrity and to take steps to prevent cheating,” the bot explained. “This can help to create a culture of honesty and integrity in your classroom.”
― Karl Malone, Wednesday, 28 December 2022 22:38 (one year ago) link
Like "Liverpool aren't going to win the Premier League", this thread's title is increasingly testament to the grim longevity of ILX – keep a messageboard going long enough and everything will happen.
― Alba, Thursday, 29 December 2022 11:07 (one year ago) link
I have no idea what AI is and now I’m too afraid to ask.
― Allen (etaeoe), Thursday, 29 December 2022 14:24 (one year ago) link
**“Ultimately, it is important to communicate clearly with students about your expectations for academic integrity and to take steps to prevent cheating,” the bot explained. “This can help to create a culture of honesty and integrity in your classroom.”**
Naturally all tests and essays will have to be on paper with pencils while the teacher watches.
It seems like people are going to be opting out of technology if it gets too out of hand - like John in Brave New World
― | (Latham Green), Thursday, 29 December 2022 14:54 (one year ago) link
https://i.imgur.com/UFNmhPb.png
can i opt out of this car driving past me
― Karl Malone, Tuesday, 10 January 2023 16:15 (one year ago) link
james-bond-invisible-car.jpg
― fentanyl young (Neanderthal), Tuesday, 10 January 2023 16:21 (one year ago) link
i can't afford a can of soda right now and i have no idea how wealthy people actually live, but i assume you would just park this car at the dealer every night so that they can repair one of the millions of tiny things in your car that can no longer be fixed by hand
― Karl Malone, Tuesday, 10 January 2023 16:25 (one year ago) link
On Thursday, Microsoft researchers announced a new text-to-speech AI model called VALL-E that can closely simulate a person's voice when given a three-second audio sample. Once it learns a specific voice, VALL-E can synthesize audio of that person saying anything—and do it in a way that attempts to preserve the speaker's emotional tone.
Its creators speculate that VALL-E could be used for high-quality text-to-speech applications, speech editing where a recording of a person could be edited and changed from a text transcript (making them say something they originally didn't), and audio content creation when combined with other generative AI models like GPT-3.
https://arstechnica.com/information-technology/2023/01/microsofts-new-ai-can-simulate-anyones-voice-with-3-seconds-of-audio/
― Karl Malone, Tuesday, 10 January 2023 18:27 (one year ago) link
microsoft should be sued for creating this fucked up software even if the claims for it are overhyped in the press release
― more difficult than I look (Aimless), Tuesday, 10 January 2023 18:43 (one year ago) link
i barely follow the field, and i am confident there are many, many competitors, and that this one will be out of date in a year or two.
one of the overarching problems is how to protect people from something that is inevitable. Microsoft says "To mitigate such risks, it is possible to build a detection model to discriminate whether an audio clip was synthesized by VALL-E." ok. what happens when there are a dozen competing TTS systems?
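A minimal sketch of the kind of detector Microsoft is gesturing at: a binary classifier trained on features from real versus synthesized clips. The file paths and the MFCC feature choice here are assumptions for illustration; a serious detector would use a far stronger model and vastly more data:

```python
# Toy synthetic-speech detector: logistic regression over MFCC features.
# All file paths below are hypothetical placeholders.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

real_clips = ["real_01.wav", "real_02.wav"]      # recordings of real people
fake_clips = ["vall_e_01.wav", "vall_e_02.wav"]  # synthesized clips

def features(path):
    audio, rate = librosa.load(path, sr=16_000)
    mfcc = librosa.feature.mfcc(y=audio, sr=rate, n_mfcc=20)
    return mfcc.mean(axis=1)     # one fixed-length vector per clip

X = np.stack([features(p) for p in real_clips + fake_clips])
labels = np.array([0] * len(real_clips) + [1] * len(fake_clips))

clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict_proba(features("suspect.wav").reshape(1, -1)))
```

Which illustrates the catch raised above: a detector trained against one system's artifacts tends not to transfer to the next system's, so a dozen competing TTS engines means a dozen moving targets.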
also, it is not that hard to build and customize your own TTS. a few years ago i made a very bad piece of art (i know, i know: many bad pieces of art). with almost no skills in linux, python, or TTS, and using cheap-ass raspberry pi computers with free open source libraries and datasets, i was able to put together a quartet of machines that "spoke" to each other through speakers, then "listened" to what was heard through open air, then passed the message around in a "telephone"-like game that was interesting in my imagination and very boring and confusing to witness in real life. skilled people who actually know what they're doing, with the benefit of having money and access to capable computers, could do a million times better than what i did. the limitation on my end was just processing power. i couldn't afford better computers that could have handled large corpuses and real-time TTS translation. but other people certainly can. and also, as we often hear, the smart phones in our pockets are more powerful than yada yada yada from 20 years ago. that trend still continues.
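For what it's worth, the core of that telephone-game loop is only a few lines today. This is a minimal sketch, not the original project: the library choices (pyttsx3 for offline TTS, SpeechRecognition with pocketsphinx for offline transcription) are assumptions, picked because both run on a Raspberry Pi:

```python
# Each machine waits to hear a phrase through open air, then speaks
# whatever (mangled) text it transcribed on to the next machine.
import pyttsx3
import speech_recognition as sr

engine = pyttsx3.init()          # offline text-to-speech
recognizer = sr.Recognizer()

def speak(text):
    engine.say(text)
    engine.runAndWait()

def listen(seconds=5.0):
    with sr.Microphone() as source:
        audio = recognizer.record(source, duration=seconds)
    try:
        return recognizer.recognize_sphinx(audio)  # offline recognizer
    except sr.UnknownValueError:
        return ""                                  # heard nothing usable

while True:
    heard = listen()
    if heard:
        print(f"heard: {heard!r}")
        speak(heard)             # pass the message along, errors and all
```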
this is what i worry about, with things like chatgpt3 and other developments in the field. right now, the thing that keeps chatgpt3 "safe" is that there are artificial constraints that are placed on them. you can't ask chatgpt3 to tell you the easiest, quickest, lowest cost way to make a bomb, because if you do, it'll tell you it's not allowed to access that training data, etc. i know 0.00001% about this field but i am confident that in not too long, people will be making their own versions of all this stuff, DIY style, and they won't have the constraints. in fact, i would guess that quite a few people will get into the field because they're frustrated with the artificial constraints.
the chatbot / general AI thing is very complicated. the TTS advances seem trivial to me, and inevitable because they're already here
― Karl Malone, Tuesday, 10 January 2023 19:00 (one year ago) link
and we think this is a golden age of scamming. t'ain't nothing compared to what's coming down the pike.
― more difficult than I look (Aimless), Tuesday, 10 January 2023 19:08 (one year ago) link
brb creating a rap song with KM's voice
― fentanyl young (Neanderthal), Tuesday, 10 January 2023 19:09 (one year ago) link
called "Thread Delivers"
to push back against my own paranoia (which is not a healthy way to live, i know), i think it's already possible to scam people with this stuff, and it hasn't quite happened yet. i hope that the worst that happens is something similar to email spam: something that is ubiquitous and really does negatively affect a lot of people, but is still manageable and not an epoch-shifting problem, regardless
― Karl Malone, Tuesday, 10 January 2023 19:12 (one year ago) link
xp *programs KM.32_bot to tell a 3-minute anecdote about john stockton in iambic pentameter with as many internal rhymes as possible*
― Karl Malone, Tuesday, 10 January 2023 19:14 (one year ago) link
ubiquity occurring as quickly as possible is probably the best case scenario just so people will stop trusting things they shouldn't trust
― Lavator Shemmelpennick, Tuesday, 10 January 2023 19:16 (one year ago) link
i'm not sure which remote forms of communication would be outside of this realm, though.
voices on the telephone and online text interactions seem pretty common and not something that will be easily given up
― Karl Malone, Tuesday, 10 January 2023 19:21 (one year ago) link
nothing will stand up in court anymore, crime is abolished
― fentanyl young (Neanderthal), Tuesday, 10 January 2023 19:21 (one year ago) link
I am imagining such cars in 15 years at the shop: "yeah my radar went again, how much that gonna run me"
― | (Latham Green), Tuesday, 10 January 2023 19:25 (one year ago) link
people hate the subscription model but if i had a car like that, first i'd be rich so whatever, but secondly i could be easily convinced to get an expensive Apple Care-esque repair subscription to cover all of the millions of ways that the computers will fuck up
― Karl Malone, Tuesday, 10 January 2023 19:39 (one year ago) link
I think I'd rather just have some elaborate machine that makes new cars for me every 5 years
― | (Latham Green), Tuesday, 10 January 2023 21:14 (one year ago) link
a car that drives itself to the car wash, then drives to the beach and watches the sunset while listening to the blue nile
― Karl Malone, Tuesday, 10 January 2023 21:21 (one year ago) link
one more negative AI story for today, sorry. there are already plenty of articles summing up what happened, but if you haven't learned about it already, see if you can figure out what went wrong here:
We provided mental health support to about 4,000 people — using GPT-3. Here’s what happened 👇
— Rob Morris (@RobertRMorris) January 6, 2023
We used a ‘co-pilot’ approach, with humans supervising the AI as needed. We did this on about 30,000 messages...
— Rob Morris (@RobertRMorris) January 6, 2023
Here’s a 2min video on how it worked: https://t.co/3gHvc5i0rU
Read on for the TLDR and some thoughts…
— Rob Morris (@RobertRMorris) January 6, 2023
Messages composed by AI (and supervised by humans) were rated significantly higher than those written by humans on their own (p < .001). Response times went down 50%, to well under a minute.
— Rob Morris (@RobertRMorris) January 6, 2023
And yet… we pulled this from our platform pretty quickly. Why?
— Rob Morris (@RobertRMorris) January 6, 2023
Once people learned the messages were co-created by a machine, it didn’t work. Simulated empathy feels weird, empty.
— Rob Morris (@RobertRMorris) January 6, 2023
― Karl Malone, Wednesday, 11 January 2023 02:27 (one year ago) link
Hello, I know that you've been feeling tired
I bring you love and deeper understanding
Hello, I know that you're unhappy
I bring you love and deeper understanding
― scanner darkly, Wednesday, 11 January 2023 02:38 (one year ago) link
clearly we have learned nothing from the tragic malpractice of DR_SBAITSO.exe
― got it in the blood, the kid's a pelican (Doctor Casino), Wednesday, 11 January 2023 02:45 (one year ago) link
the irony, i guess, is that ELIZA was one of the first chatbots, in the 60's, and was designed to emulate Rogerian therapy
― Karl Malone, Wednesday, 11 January 2023 03:32 (one year ago) link
xp i didn't know about Dr. Sbaitso!
i guess the Rogerian kind of "and how does that make you feel?" kind of therapy is a natural fit for chatbots with limited capabilities
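it really is a natural fit, because the whole trick is cheap. A minimal ELIZA-style sketch, with patterns that are purely illustrative and much cruder than Weizenbaum's original script:

```python
# Rogerian chatbot in miniature: no understanding, just pronoun
# reflection plus canned prompts.
import random
import re

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "i", "your": "my"}

PROMPTS = ["Why do you say {0}?",
           "How does it make you feel that {0}?",
           "Tell me more about why {0}."]

def reflect(text):
    words = re.findall(r"[a-z']+", text.lower())
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(text):
    return random.choice(PROMPTS).format(reflect(text))

print(respond("I am tired of my job"))
# e.g. "How does it make you feel that you are tired of your job?"
```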
― Karl Malone, Wednesday, 11 January 2023 03:38 (one year ago) link
yes!! we had this on our computer when I was growing up! god my brothers and I would spend hours typing dirty words into that thing
― frogbs, Wednesday, 11 January 2023 03:42 (one year ago) link
https://i.imgur.com/VfgMMzt.png
also, yet more proof that although the internet gets worse every single year, at least this means that the further back in time you go, the better it gets:
https://archive.ph/20130111132657/http://www.x-entertainment.com/articles/0952/
― Karl Malone, Wednesday, 11 January 2023 04:07 (one year ago) link
because it's the future, this already exists:
https://bert.org/2023/01/06/chatgpt-in-dr-sbaitso/
― “Cheeky cheeky!” she trills, nearly demolishing a roadside post (forksclovetofu), Wednesday, 11 January 2023 06:59 (one year ago) link
I think ai is actually a good fit for cognitive therapy because it is logical and analytic - but can it provide human empathy?
― | (Latham Green), Wednesday, 11 January 2023 18:43 (one year ago) link
People fake empathy all the time. Why not a bot?
― The land of dreams and endless remorse (hardcore dilettante), Thursday, 12 January 2023 18:26 (one year ago) link
faking empathy is all a bot CAN do. it's the recipient that's objecting.
― more difficult than I look (Aimless), Thursday, 12 January 2023 22:24 (one year ago) link
I guess sometimes empathy backfires too, like:
"I'm so tired after that walk in the prairie"
"really? I'm not tired at all! You must be an increasingly inferior individual to tire so easily!"
Also some humans give the worst &*^%&^% advice
― | (Latham Green), Friday, 13 January 2023 15:30 (one year ago) link