there's also a feedback loop where ai is enabling things like digitization of archival text at scales not previously possible, which then leads to higher quality data for ai to be trained on. an example is this research group at harvard (led by an economic historian) who created a custom ai tool to identify layouts in newspapers, then applied it to create a dataset of headlines in local us newspapers, identifying pairs of local newspaper headlines describing the same underlying AP news story, which can then be used to train language models
A diversity of tasks use language models trained on semantic similarity data. While there are a variety of datasets that capture semantic similarity, they are either constructed from modern web data or are relatively small datasets created in the past decade by human annotators. This study utilizes a novel source, newly digitized articles from off-copyright, local U.S. newspapers, to assemble a massive-scale semantic similarity dataset spanning 70 years from 1920 to 1989 and containing nearly 400M positive semantic similarity pairs. Historically, around half of articles in U.S. local newspapers came from newswires like the Associated Press. While local papers reproduced articles from the newswire, they wrote their own headlines, which form abstractive summaries of the associated articles. We associate articles and their headlines by exploiting document layouts and language understanding. We then use deep neural methods to detect which articles are from the same underlying source, in the presence of substantial noise and abridgement. The headlines of reproduced articles form positive semantic similarity pairs. The resulting publicly available HEADLINES dataset is significantly larger than most existing semantic similarity datasets and covers a much longer span of time. It will facilitate the application of contrastively trained semantic similarity models to a variety of tasks, including the study of semantic change across space and time.
― flopson, Saturday, 22 July 2023 11:34 (one year ago) link
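to make the "train language models on headline pairs" step concrete, here's a minimal sketch of how pairs like those could be used for contrastive training, assuming the sentence-transformers library; the example headlines are invented, not from the actual HEADLINES data:

```python
# minimal sketch: contrastive training on positive headline pairs, assuming the
# sentence-transformers library; the pairs below are invented for illustration
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

pairs = [
    ("Truman Signs Foreign Aid Bill", "President Approves Aid Measure"),
    ("Quake Shakes Southern California", "Tremor Felt Across Southland"),
    # ...the real dataset has hundreds of millions of such pairs
]

model = SentenceTransformer("all-MiniLM-L6-v2")
train_examples = [InputExample(texts=[a, b]) for a, b in pairs]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=64)

# treats each pair as a positive and the other in-batch headlines as negatives
train_loss = losses.MultipleNegativesRankingLoss(model)
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=100)
```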
from my pov the advances in ai in the last year have been pretty incredibly useful. github copilot, a gpt-based tool specialized at writing code, saves me an insane amount of time. since i now spend less time writing the code myself i have more time to debug and test it, which actually makes it less error prone (contrary to what one might expect given hallucinations). a prof i know (who's a bit of a "hacker" and uses the api versions of these tools) created a chatbot trained to help students with his courses. it answers questions, creates practice problems and gives students feedback on their solutions. he uses some tricks to reduce the rate of errors, like turning down the "temperature" parameter (which governs the amount of randomness in the answers) to zero, and somehow restricting it to focus only on the course material (using some kind of latent-space dimension reduction trick i don't understand). i haven't used it for writing yet but some of my friends are using it to write their dissertations, and say it's helpful in getting past writer's block cause you can just get it to start you off with a paragraph by giving it some stuff in point form, then edit from there
― flopson, Saturday, 22 July 2023 11:49 (one year ago) link
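for anyone curious what that kind of setup looks like, here's a rough sketch, assuming the OpenAI Python API as it existed in mid-2023 (openai<1.0); the course-notes file and chunking scheme are invented, and it uses plain embedding retrieval rather than whatever dimension-reduction trick the prof actually uses:

```python
# rough sketch: temperature=0 plus restricting answers to course material via
# embedding retrieval; course_notes.txt and the chunking are hypothetical
import numpy as np
import openai

chunks = open("course_notes.txt").read().split("\n\n")  # hypothetical course material

def embed(texts):
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return np.array([d["embedding"] for d in resp["data"]])

chunk_vecs = embed(chunks)

def answer(question):
    q = embed([question])[0]
    # cosine similarity against every chunk, keep the three closest
    sims = chunk_vecs @ q / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q))
    context = "\n\n".join(chunks[i] for i in sims.argsort()[-3:])
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,  # the "no randomness" setting mentioned above
        messages=[
            {"role": "system",
             "content": "Answer only using the course material below.\n\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return resp["choices"][0]["message"]["content"]
```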
"there's also a feedback loop where ai is enabling things like digitization of archival text at scales not previously possible, which then leads to higher quality data for ai to be trained on. an example is this research group at harvard (lead by an economic historian)"
That highly specific use case doesn't disprove the point.
― xyzzzz__, Saturday, 22 July 2023 12:05 (one year ago) link
But yes it's not all terrible for sure. There is a lot to AI; I'm mostly responding to the more outlandish stuff.
― xyzzzz__, Saturday, 22 July 2023 12:10 (one year ago) link
xp- i don't think it can be "disproven" one way or the other. but there are forces pushing it in both directions, it's not obvious that the proliferation of text written by ai online will be the dominant force. as allen said above, google training an ai on all of google books could lead to a compensating improvement in data quality
― flopson, Saturday, 22 July 2023 12:38 (one year ago) link
Good thread on AI being used by students, by an ex-academic/postgrad type. This is the key takeaway.
I would only really worry about the impact of ChatGPT on the situation if it were in some way seducing the students who did care, or might potentially be induced to care, into not caring. As for the rest, I suppose a convenient illusion is being dismantled — a furred tail upon nothingness (@dynamic_proxy) July 31, 2023
― xyzzzz__, Monday, 31 July 2023 14:07 (one year ago) link
https://www.washingtonpost.com/technology/2023/08/10/san-francisco-robotaxi-approved-waymo-cruise/
SAN FRANCISCO — California regulators voted Thursday to allow self-driving car companies Waymo and Cruise to offer 24/7 paid taxi service in San Francisco, a major win for the industry that could pave the way for more widespread adoption of the technology.
Cars without drivers have become a common sight on San Francisco’s winding, hilly and often foggy streets. Thursday’s vote stripped most limitations on operating and charging for rides, essentially creating more ride-hailing services like Uber or Lyft — just without the drivers.
It’s a pivotal moment for the autonomous transportation industry, expanding one of the biggest test cases for a world in which many companies envision not needing drivers at all. For years, companies from Amazon to Google have experimented with self-driving vehicles, something that could prove incredibly disruptive to the labor economy if it ever materializes en masse.
In California alone, there are more than 40 companies — ranging from young start-ups to tech giants — that have permits to test their cars in San Francisco, according to the California Department of Motor Vehicles. According to a Washington Post analysis of the data, the companies collectively clock millions of miles on public roads every year — along with hundreds of mostly minor accidents.
― z_tbd, Friday, 11 August 2023 15:17 (one year ago) link
put that on the t-shirt
"mostly minor accidents"
― Tracer Hand, Friday, 11 August 2023 16:58 (one year ago) link
700k a day for a company sitting on $10b isnt really that much but the fact remains ai does use a lot of computers just to come up with some half to full bullshit
"OpenAI spends about $700,000 a day, just to keep ChatGPT going. The cost does not include other AI products like GPT-4 and DALL-E2. Right now, it is pulling through only because of Microsoft's $10 billion funding"yoooo https://t.co/k8qm6Lo0j3— Olúfẹ́mi O. Táíwò (@OlufemiOTaiwo) August 12, 2023
― lag∞n, Sunday, 13 August 2023 11:57 (one year ago) link
In my SF neighborhood this weekend, I've seen a near constant parade of different (the cars have names) empty Cruise vehicles driving along various routes. I assume they are trying to collect as much training data as possible, but it feels a little like they are celebrating being unleashed.
― fajita seas, Sunday, 13 August 2023 20:06 (one year ago) link
thought this was pretty interesting (probably not interesting if you don't know what stack overflow is)
https://www.thediff.co/archive/inside-the-decline-of-stack-exchange/
― 𝔠𝔞𝔢𝔨 (caek), Thursday, 17 August 2023 03:09 (one year ago) link
I like the robot delivery vehicles in Santa Monica.
― immodesty blaise (jimbeaux), Thursday, 17 August 2023 03:15 (one year ago) link
These are AI generated, they look pretty good.
pic.twitter.com/ZYVa9k8MDz— Frank Manzano (@loved_orleer) August 17, 2023
― xyzzzz__, Friday, 18 August 2023 07:16 (one year ago) link
yeah, I've been following him, posted to the cursed image thread. These are pretty wild. Clearly there is AI involved, but there must be some real video footage in the mix as well. No idea how it all gets combined into a disturbing slurry.
― Muad'Doob (Moodles), Friday, 18 August 2023 14:27 (one year ago) link
Must there be? Feel like all of that could be totally fabricated from nothing, as uncanny valley as it is...
― But his face would not turn into hot Kirby (Evan), Friday, 18 August 2023 14:36 (one year ago) link
could be, I think you can input actual video and tell the AI to fuck it up, but yea looking at some of the details and background maybe it is all AI. either way hard to watch too much because this genuinely fucks with my head
― frogbs, Friday, 18 August 2023 14:38 (one year ago) link
Holy shit, that was wild
― the new drip king (DJP), Friday, 18 August 2023 14:41 (one year ago) link
this is a fun little one
gm to magic✨@niceaunties pic.twitter.com/9OgMdOFo96— alejandro cartagena (@halecar2) August 16, 2023
― Muad'Doob (Moodles), Friday, 18 August 2023 15:03 (one year ago) link
this is an absolute nightmare...
https://www.youtube.com/watch?v=pcW9U0AXiN4
...though I really liked the Beyond The Infinite one on that channel...
Also,
https://m.youtube.com/@robertoberagnoli has some more chill examples... the fictional artist series is great...
― m0stly clean (Slowsquatch), Saturday, 19 August 2023 03:00 (one year ago) link
is there a better/more dedicated thread for the impact of AI on artist rights?
https://www.hollywoodreporter.com/business/business-news/ai-works-not-copyrightable-studios-1235570316/
― out-of-print LaserDisc edition (sleeve), Saturday, 19 August 2023 22:41 (one year ago) link
just asked ChatGPT to add a long series of numbers, it was off by 20 million+
― Blues Guitar Solo Heatmap (Free Download) (upper mississippi sh@kedown), Wednesday, 23 August 2023 16:58 (one year ago) link
tbf there’s no reason a text generator should be able to do arithmetic
tbf people should shut up about AGI
― rob, Wednesday, 23 August 2023 17:49 (one year ago) link
see that's what Bard told me and I can respect that
ChatGPT just spit out an authoritative answer that was 20 million off
― Blues Guitar Solo Heatmap (Free Download) (upper mississippi sh@kedown), Wednesday, 23 August 2023 20:52 (one year ago) link
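for what it's worth, the usual workaround is to not let the model generate the digits at all; the sketch below, assuming the function-calling interface OpenAI added to its API in mid-2023 and made-up numbers, hands the actual addition to Python:

```python
# sketch only: delegate the arithmetic to Python via OpenAI function calling
# (mid-2023 openai<1.0 API); the numbers are made up
import json
import openai

numbers = [31_415_926, 27_182_818, 16_180_339]

resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": f"Add these numbers: {numbers}"}],
    functions=[{
        "name": "sum_numbers",
        "description": "Add a list of numbers exactly",
        "parameters": {
            "type": "object",
            "properties": {"numbers": {"type": "array", "items": {"type": "number"}}},
            "required": ["numbers"],
        },
    }],
    function_call="auto",
)

call = resp["choices"][0]["message"].get("function_call")
if call:
    args = json.loads(call["arguments"])
    print(sum(args["numbers"]))  # exact, unlike digits generated token by token
```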
Skip School Take Hormones Kill God Shirt is a bold and thought-provoking fashion statement that has gained popularity among teenagers and young adults. This shirt features a simple yet powerful message that challenges traditional beliefs and norms.
The phrase “Skip School Take Hormones Kill God” suggests that there is more to life than adhering to societal expectations and religious dogma. It encourages people to take control of their own lives and make choices that are true to themselves, even if it means going against the grain. The shirt speaks to those who are disillusioned with traditional institutions and values and are seeking a sense of independence and individuality.
The design of the shirt is minimalist, with the words printed in bold letters on a plain background. This simplicity adds to the impact of the message and draws attention to the words themselves. The black and white color scheme also adds to the starkness and seriousness of the statement.
Some people have criticized the shirt for its controversial message, particularly the reference to killing God. However, the intention behind the phrase is not necessarily to promote atheism or disrespect religious beliefs. Rather, it is a call to challenge the idea that God, or any other authority figure, has absolute control over our lives.
Overall, the Skip School Take Hormones Kill God Shirt is a powerful expression of individuality and rebellion. It encourages people to think critically about the world around them and make choices that are true to themselves, even if it means breaking away from the status quo. Whether you love or hate the message, there’s no denying that this shirt is a bold and impactful statement.
― Kate (rushomancy), Monday, 28 August 2023 03:13 (one year ago) link
I'd say that ChatGPT has a very cheerful and positive attitude about life, except it isn't alive, has no experiences, no feelings and therefore no real opinions about anything, and knows nothing at all.
― more difficult than I look (Aimless), Monday, 28 August 2023 03:26 (one year ago) link
so it posts to 4chan?
― I can't turn a fart into a question (Neanderthal), Monday, 28 August 2023 03:36 (one year ago) link
https://www.themarysue.com/now-ai-wants-to-poison-people-so-thats-fun/
― out-of-print LaserDisc edition (sleeve), Wednesday, 30 August 2023 15:43 (one year ago) link
https://arstechnica.com/cars/2023/09/are-self-driving-cars-already-safer-than-human-drivers/
Metz argued that in recent weeks, it has become “more and more clear to the people riding the cars, and to other citizens in the city, that they are flawed, that they do make mistakes, that they can gum up traffic, that they can cause accidents.”
Of course self-driving cars are flawed—all technologies are. The important question is whether self-driving cars are safer than human-driven cars. And here Metz proclaimed ignorance.
“We don't know yet whether it's safer than a human driver,” he said.
But we actually do know a fair amount about the safety of driverless taxis. Waymo and Cruise have driven a combined total of 8 million driverless miles (a Waymo spokeswoman told me the company has completed more than 4 million driverless miles, and Cruise has said the same). That includes more than 4 million in San Francisco since the start of 2023. And because California law requires self-driving companies to report every significant crash, we know a lot about how they’ve performed.
For this story, I read through every crash report Waymo and Cruise filed in California this year, as well as reports each company filed about the performance of their driverless vehicles (with no safety drivers) prior to 2023. In total, the two companies reported 102 crashes involving driverless vehicles. That may sound like a lot, but they happened over roughly 6 million miles of driving. That works out to one crash for every 60,000 miles, which is about five years of driving for a typical human motorist.
These were overwhelmingly low-speed collisions that did not pose a serious safety risk. A large majority appeared to be the fault of the other driver. This was particularly true for Waymo, whose biggest driving errors included side-swiping an abandoned shopping cart and clipping a parked car’s bumper while pulling over to the curb.
Cruise’s record is not as impressive as Waymo’s, but there’s still reason to think its technology is on par with—and perhaps better than—a human driver.
Human beings drive close to 100 million miles between fatal crashes, so it will take hundreds of millions of driverless miles for 100 percent certainty on this question. But the evidence for better-than-human performance is starting to pile up, especially for Waymo. It’s important for policymakers to allow this experiment to continue because, at scale, safer-than-human driving technology would save a lot of lives.
i'm curious what people think of that last sentence, in particular.
― i really like that!! (z_tbd), Sunday, 3 September 2023 23:56 (one year ago) link
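the arithmetic in that excerpt roughly checks out, for what it's worth; a quick back-of-the-envelope version, where the annual-mileage figure is an assumed "typical US driver" number rather than something from the article:

```python
# back-of-the-envelope check of the article's crash-rate claim; the annual
# mileage figure is an assumed typical-US-driver number, not from the article
crashes = 102
driverless_miles = 6_000_000                 # combined Waymo + Cruise, per the article
miles_per_crash = driverless_miles / crashes
print(round(miles_per_crash))                # ~58,800, i.e. roughly one per 60,000 miles

typical_annual_miles = 13_500                # assumed average for a US driver
print(round(miles_per_crash / typical_annual_miles, 1))  # ~4.4, i.e. "about five years"
```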
People are terrible drivers, this is not a high bar. I would prefer good mass transit but if we're going to have cars, as a cyclist, the more robot cars the better imo.
― what you say is true but by no means (lukas), Monday, 4 September 2023 00:02 (one year ago) link
i have seen some hilarious driving recently. every city i've ever lived in, the locals have believed that their drivers were the absolute worst, but i can confirm that STL has the worst drivers of all time. the city is made up of an incredible number of four-way stops, like a simcity game where the transportation advisor shows up and he's like "due to budget cuts, all we got is 4-way stops mac!". no big deal, except that in addition the default driving behavior seems to be the 'rolling stop', which has some fun qualities but ultimately runs into deep fundamental problems as soon as two cars arrive to the intersection at the same time
― i really like that!! (z_tbd), Monday, 4 September 2023 00:11 (one year ago) link
It seems congruent with the longtermism p.o.v. that future people are just as important as people are now, which is a free pass to do any goddamn thing you want as long as you can create convincing enough mental gymnastics that what you're doing is "saving the planet." No one is stating the obvious that we'll save the most lives if we got rid of cars entirely.
― Elvis Telecom, Monday, 4 September 2023 00:13 (one year ago) link
These were overwhelmingly low-speed collisions that did not pose a serious safety risk.
Why would it be remarkable that cars that are being tested on SF city streets be involved in low-speed collisions? What would be remarkable would be high speed crashes under those conditions.
The whole article is special pleading designed to lead you to a particular conclusion. Especially the last paragraph. First the author admits that there's not enough evidence to draw any useful conclusions, but purposely phrases it as the evidence not meeting "100 percent certainty". Then the last sentence appears to mean much more than it actually says. Taken literally it simply says that safer-than-human driving technology would be safer than human-driving technology. That's correct, but only because a tautology is always correct.
― more difficult than I look (Aimless), Monday, 4 September 2023 00:20 (one year ago) link
perhaps we could compare with the number of rail travel fatalities per 1000 travelers
― Tracer Hand, Monday, 4 September 2023 10:06 (one year ago) link
The question is also framed as a fixed technology that we're trying to run a tricorder over, to get a reading, when the technology is still moving. A better framing would be: what are the effects that will cause it to get better and stay better? Which leads us back to the question of liability - a driver is encouraged (in general) to get better because they and/or their insurance will be on the hook if they fuck up - as far as I can tell (from 10 minutes on Wikipedia lol) that's still being sorted out for full driverless - the passengers want it to be not them, the operators* want it to be not them, the manufacturers want it to be not them, the insurers want it to be not them.
*as in the taxi companies, there's separate considerations for e.g. public transport with a remote operator
― Andrew Farrell, Monday, 4 September 2023 11:26 (one year ago) link
Weren't there culpability issues re: self-driving accidents/violations? I might be imagining seeing an article with a cop scratching his head wondering who to arrest in an empty car.
― Philip Nunez, Monday, 4 September 2023 16:14 (one year ago) link
https://imgc.artprintimages.com/img/print/does-your-car-have-any-idea-why-my-car-pulled-it-over-new-yorker-cartoon_u-l-py85cr0.jpg?artHeight=350&artPerspective=n&artWidth=550&background=fbfbfb
― Pontius Pilates (Ye Mad Puffin), Monday, 4 September 2023 16:27 (one year ago) link
First the author admits that there's not enough evidence to draw any useful conclusions, but purposely phrases it as the evidence not meeting "100 percent certainty".
just a quibble with that. "not meeting 100% certainty" ≠ "not enough evidence to draw any useful conclusions", and there are examples of that all the time, like tomorrow's weather, which cannot be forecast with 100% certainty. but i don't need 100% certain to make reasonable preparations based off of what is merely highly probable instead of 100% certain. or, from this thing i just read which reminded me of this thread:
...By hitching a ride on cargo ships and passenger jets, exotic species are bridging oceans, mountain ranges and other geographic divides otherwise insurmountable without human help. The result is a great scrambling of the planet’s flora and fauna, with dire implications for humans and the ecosystems they depend on.
“One of the things that we stress that really is the tremendous threat this does pose to — and I know this is going to sound grandiose — but to human civilization,” said Peter Stoett, an Ontario Tech University professor who helped lead a group of about seven dozen experts in writing the report. The cost estimate [$423 billion a year], he added, is “extremely conservative.”
https://www.washingtonpost.com/climate-environment/2023/09/04/invasive-species-un-report/
― i really like that!! (z_tbd), Monday, 4 September 2023 19:13 (one year ago) link
i believe that but at the same time when they say stuff like “It’s not normal that a species crosses the Atlantic,” he said. “Not normal that it goes from Australia to Chile.” i'm like what is normal? hasn't this been going on since the 17th century? ships going back and forth between continents? at least now you don't get utopian botanists deliberately planting entire crops worth of foreign seeds everywhere? eg the tomato?
― Tracer Hand, Monday, 4 September 2023 23:44 (one year ago) link
Even during non-human-assisted evolutionary times species crossed the Atlantic from Africa toward Brazil, carried by prevailing winds, both plants and animals. Consider the Hawaiian Islands and other remote Pacific isands. They were formed in mid-ocean but still had native plants, insects, birds and a variety of animal life when humans arrived. Species travel. What's new is the speed of the transfer, not the transfer itself.
― more difficult than I look (Aimless), Tuesday, 5 September 2023 01:16 (one year ago) link
And the volume of stuff transferred.
― Tsar Bombadil (James Morrison), Tuesday, 5 September 2023 09:45 (one year ago) link
yeah idk in 1650 a ship could carry a boatload of animals and all their associated pests and burrs and seeds across the ocean in three weeks and did so many many times but ultimately i guess i have to (reluctantly) defer to people who have spent their entire professional careers studying these questions
― Tracer Hand, Tuesday, 5 September 2023 09:53 (one year ago) link
I used to be a lot more skeptical of whether the whole "native plants"/ecosystems thing really mattered or was just some kind of purist fantasy, but no, it really matters. What Aimless and James said - it's the speed and volume that's the issue. Evolution happens over millions of years, and there's only so much adaptation that can take place in a few hundred. I can see it in my own backyard and the woods in my town, invasive species really do create problems. E.g. my yard is full of these norway maples that spread like bamboo and they gradually suck resources from native trees, but native animals generally won't make homes in them. Species do actually kind of balance/harmonize over time, and that balance is achieved slowly, and disrupted quickly. Doesn't mean it's perfect or that you can't wind up with ecological problems even without that happening.
Cultivated food crops can at least be contained to limited areas.
― longtime caller, first time listener (man alive), Tuesday, 5 September 2023 13:59 (one year ago) link
I think "normal" is not really a helpful concept because these things are never static.
― longtime caller, first time listener (man alive), Tuesday, 5 September 2023 14:01 (one year ago) link
the north american maize crop was absolutely devastated in the 90s when the european corn borer had prime conditions for spreading and the dominant seed planted had limited resistance to predation
it's a european insect that primarily affected millet until it hit the americas (where maize is native) and was generally a cyclical threat over the years until the right conditions hit
as far as people can tell, it didn't actually arrive in north america until the 1900s. probably because cross-atlantic trade conditions weren't capable of moving a breeding population but who knows
― mh, Tuesday, 5 September 2023 15:21 (one year ago) link
"As dramatic as the recent advances in AI are, something is missing from this particular story of peril. Even as it prophesies technological doom, it is actually naïve about technological power. It’s the work of intellectuals enamored of intellect, who habitually resist learning the kinds of lessons we all must learn when plans that seem smart on paper crash against the cold hard realities of dealing with other people."
https://www.thenewatlantis.com/publications/ai-cant-beat-stupid
― xyzzzz__, Wednesday, 6 September 2023 12:51 (one year ago) link
An arresting, dystopian “what if” scenario published at the LessWrong forum — a central hub for debating the existential risk posed by AI — posits a large language model that, instructed to “red team” its own failures, learns how to exploit the weaknesses of others. Created by a company to maximize profits, the model comes up with unethical ways to make money, such as through hacking. Given a taste of power, the model escapes its containment and gains access to external resources all over the world. By gaining the cooperation of China and Iran, the model achieves destabilization of Western governments. It hinders cooperation among Western states by fostering discord and spreading disinformation. Within weeks, American society and government are in tatters and China is now the dominant world power. Next the AI begins to play Beijing like a fiddle, exploiting internal conflict to give itself greater computing resources. The story goes on from there, and Homo sapiens is soon toast.
hope this doesn't happen!
― difficult listening hour, Wednesday, 6 September 2023 15:08 (one year ago) link
reassuring tho that it would be something else's fault.
lesswrong doesn't mean it's right
― mh, Wednesday, 6 September 2023 15:11 (one year ago) link
They're rationalists, mh. They can't not be rational, it's right there in the name.
― Andrew Farrell, Wednesday, 6 September 2023 15:16 (one year ago) link
these scenarios feel so farfetched to me because they assume AI is going to have the ability to execute decisions and will also get several very precise actions correct, which seems difficult given that those who work in AI can't seem to figure out how to get it to stop making things up
one fun scenario though is some combo of AI and quantum computing breaking SHA256 encryption
― frogbs, Wednesday, 6 September 2023 15:18 (one year ago) link