Artificial Intelligence: classic or dud
What if the dangers from AI come subtly, without our knowledge? What if it became our master without us even realizing it?
― Brian Eno's Mother (Latham Green), Tuesday, 28 May 2013 13:02 (eleven years ago) link
what if you are it and it are you
― OH NO, SECONDS LEFT, SECONDS LEFT, AND THERE IT IS. REGRET. (imago), Tuesday, 28 May 2013 13:05 (eleven years ago) link
Ideas lie before us like an endless sea of possibility.
AI would be much safer if individual intelligences are kept separate, i.e. math ability vs. logical thinking vs. natural language trivia whatever vs. music
Perhaps strong AI will not be the way, but stronger human intelligence supplemented by machinery and genetic alteration
Humans will react to AI in a hostile way if they feel threatened or their freedom taken away.
― Brian Eno's Mother (Latham Green), Tuesday, 28 May 2013 14:01 (eleven years ago) link
I just had a chat with "Jon" at Amazon customer service that I am 100% certain was really a chat with a rather clumsy AI program. Obviously Amazon gets thousands of customer complaints every hour and they will almost all fall into a limited number of slots with a limited number of solutions, so an AI program makes sense, but the pretense that I was chatting with a real person was completely transparent fakery.
Here's the transcript:
Initial Question: I paid for two-day shipping and was given a "guaranteed delivery date" of May 2 in my confirmation email. It is May 3. The item tracking shows it is not "out for delivery" today, either. That means it will not arrive until at least May 5. I think the charge for two day shipping should be refunded.
12:20 PM PDT Jon(Amazon): Hello Aimless, my name is Jon.
12:21 PM PDT Aimless: Hello
12:22 PM PDT Jon(Amazon): I'll be glad to assist you with your order and see what I can do to help you.
12:22 PM PDT Aimless: Let me know when I have a real person's attention, please.
12:22 PM PDT Jon(Amazon): To better assist you with, may you please help me with the order number? (NB: Amazon wouldn't initiate a chat without my specifying the order number up front, before opening the chat box.)
12:22 PM PDT Aimless: Order #114-2881289-8313813
12:25 PM PDT Jon(Amazon): Thanks
12:26 PM PDT Jon(Amazon): May I have a moment to check on this further please?
12:26 PM PDT Aimless: Of course. What choice do I have?
12:27 PM PDT Jon(Amazon): Thanks for waiting Aimless.
12:30 PM PDT Jon(Amazon): I have checked the order and it seems that there will be a delay for the package due to unforeseen circumstances.I am sorry that you haven’t received your order though it was tagged as delivered. Sometimes, the carrier will accidentally scanned the package as delivered though it was still in transit. In this case, the package usually arrives within the next business day. You may wait until May 6, 2014 to receive your order.To compensate for the delay I will refund the shipping cost. How does that sound?
12:33 PM PDT Aimless: Thank you for refunding the shipping cost. May 6 might be a problem as I will be leaving town. I would also point out that at no time did order tracking tag this package as delivered. It was not put into the hands of the shipper until around 2:30pm May 1, according to Amazon's emails.
12:33 PM PDT Jon(Amazon): That is right Aimless!
12:34 PM PDT Jon(Amazon): The package has left seller facility and is in transit to carrier on May 1, 2014.
12:35 PM PDT Aimless: Well, no point in hashing this out. It will arrive when it arrives. If there are further problems, I'll be in contact. Bye.
― epoxy fule (Aimless), Saturday, 3 May 2014 20:13 (ten years ago) link
i wonder if a bot can be made that sounds progressively more upset in order to extract more concessions from amazonbot.
― Philip Nunez, Saturday, 3 May 2014 20:50 (ten years ago) link
https://www.youtube.com/watch?v=EfGD2qveGdQ
― the plight of y0landa (forksclovetofu), Friday, 27 February 2015 07:16 (nine years ago) link
Why is this super intelligence more likely to be a threat to humanity? Why couldn't it just as likely help us solve some of our greatest problems?
"I certainly hope that it will help us solve our problems, and I think that that might be a likely outcome, particularly if we put in the hard work now to solve how to "control" artificial intelligence. But, say one day we create a super intelligence and we ask it to make as many paper clips as possible. Maybe we built it to run our paper-clip factory. If we were to think through what it would actually mean to configure the universe in a way that maximizes the number of paper clips that exist, you realize that such an AI would have incentives, instrumental reasons, to harm humans. Maybe it would want to get rid of humans, so we don't switch it off, because then there would be fewer paper clips. Human bodies consist of a lot of atoms and they can be used to build more paper clips. If you plug almost any goal you can imagine into a super-intelligent machine, most would be inconsistent with the survival and flourishing of the human civilization."
― dutch_justice, Sunday, 3 May 2015 20:47 (nine years ago) link
Or they just help us get plane tickets: http://www.wired.com/2014/08/viv/
― schwantz, Sunday, 3 May 2015 20:49 (nine years ago) link
https://www.youtube.com/watch?v=X_tvm6Eoa3g
― Balkan-Boogie (soref), Saturday, 18 July 2015 15:58 (nine years ago) link
when earth & humanity are long gone, there will be bots drifting through the galaxy in eternal courtship
― ogmor, Saturday, 18 July 2015 17:30 (nine years ago) link
If I ever went on a date it would probably go exactly like that.
― AdamVania (Adam Bruneau), Saturday, 18 July 2015 17:36 (nine years ago) link
http://i.imgur.com/xWmP24d.gifv
― you are extreme, Patti LuPone. (forksclovetofu), Sunday, 19 July 2015 16:33 (nine years ago) link
Chappie more like Crappie amirite?
― passive-aggressive rageaholic (snoball), Sunday, 19 July 2015 18:08 (nine years ago) link
It was diabolically poor. It actually put me off cinema for a bit.
― quixotic yet visceral (Bob Six), Sunday, 19 July 2015 19:03 (nine years ago) link
Nature: Mastering the game of Go with deep neural networks and tree search
The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.
h/t hoooos
― Karl Malone, Wednesday, 27 January 2016 21:10 (eight years ago) link
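For anyone curious what the abstract's recipe actually looks like in code: the core loop (a policy prior steering tree search, a value estimate standing in for exhaustive rollouts) can be sketched in a few lines. This is strictly a toy illustration, nothing like DeepMind's actual system — the "policy network" here is just a uniform prior, the "value network" is a single random rollout, and the game is a trivial counting race to 10 instead of Go:

```python
import math
import random

TARGET = 10   # toy game: players alternate adding 1 or 2; whoever reaches 10 wins
C_PUCT = 1.4  # exploration constant in the PUCT selection rule

class Node:
    def __init__(self, total, player, prior):
        self.total, self.player, self.prior = total, player, prior
        self.children = {}    # move -> Node
        self.visits = 0
        self.value_sum = 0.0  # from the perspective of the player to move here

def legal_moves(total):
    return [m for m in (1, 2) if total + m <= TARGET]

def policy_net(total):
    """Placeholder 'policy network': uniform prior over legal moves."""
    moves = legal_moves(total)
    return {m: 1.0 / len(moves) for m in moves}

def value_net(total, player):
    """Placeholder 'value network': a single random rollout."""
    turn = player
    while total < TARGET:
        total += random.choice(legal_moves(total))
        turn = -turn
    return 1.0 if -turn == player else -1.0  # the mover who reached TARGET wins

def select_child(node):
    def score(child):
        # child's value is stored for the player to move *there*,
        # so negate it for the parent choosing the move
        q = -child.value_sum / child.visits if child.visits else 0.0
        u = C_PUCT * child.prior * math.sqrt(node.visits) / (1 + child.visits)
        return q + u
    return max(node.children.items(), key=lambda mc: score(mc[1]))

def simulate(root):
    path, node = [root], root
    while node.children:                               # 1. selection
        move, node = select_child(node)
        path.append(node)
    if node.total < TARGET:                            # 2. expansion
        for move, p in policy_net(node.total).items():
            node.children[move] = Node(node.total + move, -node.player, p)
        leaf_value = value_net(node.total, node.player)  # 3. evaluation
    else:
        leaf_value = -1.0  # opponent just reached TARGET: player to move lost
    for n in reversed(path):                           # 4. backup
        n.visits += 1
        n.value_sum += leaf_value if n.player == node.player else -leaf_value

def best_move(total, player=1, n_sims=500):
    root = Node(total, player, 1.0)
    for _ in range(n_sims):
        simulate(root)
    return max(root.children, key=lambda m: root.children[m].visits)
```

`best_move(8)` learns to add 2 and win on the spot. The point of the skeleton is that it's the same one AlphaGo uses — the difference is entirely in what fills the `policy_net` and `value_net` slots (deep networks trained on expert games and self-play, per the abstract).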
i guess this should be the AI thread. post your comments about how AI is impossible because you saw a clip of a robot falling over here.
― Karl Malone, Wednesday, 27 January 2016 21:11 (eight years ago) link
important work they're doing over there *eyeroll*
― Οὖτις, Wednesday, 27 January 2016 21:11 (eight years ago) link
the go thing, you mean?
― Karl Malone, Wednesday, 27 January 2016 21:12 (eight years ago) link
yeah
re: AI in general, I wouldn't say it's impossible but it is very very very far away
― Οὖτις, Wednesday, 27 January 2016 21:12 (eight years ago) link
speaking of neural networks, there's this link caek accidentally posted: http://www.wired.com/2016/01/apple-buys-ai-startup-that-reads-emotions-in-faces
― Karl Malone, Wednesday, 27 January 2016 21:13 (eight years ago) link
it's close enough to figure out how you react to advertisements
I am not impressed
― Οὖτις, Wednesday, 27 January 2016 21:14 (eight years ago) link
I mean congratulations you've spent billions of dollars and tons of other resources on doing something a baby can do, good job
― Οὖτις, Wednesday, 27 January 2016 21:15 (eight years ago) link
(sorry I don't mean "you" you, not trying to make this personal)
― Οὖτις, Wednesday, 27 January 2016 21:16 (eight years ago) link
This is pretty exciting: http://www.eurekalert.org/pub_releases/2016-01/miop-sba012716.php
― schwantz, Wednesday, 27 January 2016 21:17 (eight years ago) link
haha, it's ok
buuuuuuut, when i was a baby, i wasn't capable of reading human emotions from millions of people at any given moment and then feeding that information to advertising corporations. of course, as i grew older i developed this ability but by that time other babies had already submitted job applications so mine was at the bottom of the pile
― Karl Malone, Wednesday, 27 January 2016 21:18 (eight years ago) link
lol
― Οὖτις, Wednesday, 27 January 2016 21:21 (eight years ago) link
but yeah the "reading human emotions" aspect does not impress me as a technological feat in and of itself. Biology still obviously way superior in that department. otoh the "helping corporations make even more effective advertisements!" aspect is just gross and sad.
― Οὖτις, Wednesday, 27 January 2016 21:23 (eight years ago) link
also, i think admitting that certain AI capabilities are similar to what a baby can do suggests enormous potential in the near term. the difference in capabilities of babies and adults seems enormous to us, but when you consider it on a logarithmic scale, they're very close. the difference between einstein and the livestreaming tech guy idiot in oregon is not very large in the grand scheme of things. if an AI's learning curve has already increased from an earthworm to baby level, einstein really isn't that far away.
obviously i'm referring to the scientific names of these universally agreed upon scales here
― Karl Malone, Wednesday, 27 January 2016 21:25 (eight years ago) link
thought the thread bump might be for Minsky
RIP big man
― Brad C., Wednesday, 27 January 2016 21:28 (eight years ago) link
yeah as i understand it the hope/fear is that at some unknown point of basic sophistication the gap all of a sudden closes itself
something i don't get about the superintelligence fear is why these new gods intelligent in ways we can't even imagine are just assumed to also be terminally discompassionate and sociopathically fixated on widget-making or nuclear supremacy
i do sometimes worry about the politics and very notions of intelligence of a lot of the people who do the actual work on this stuff, let alone of course the people who pay for it
rip minsky, yeah.
― denies the existence of dark matter (difficult listening hour), Wednesday, 27 January 2016 21:30 (eight years ago) link
enormous potential in the near term
there's always been enormous potential lol, it's the "near term" part that seems to be constantly pushed out
― Οὖτις, Wednesday, 27 January 2016 21:32 (eight years ago) link
I mean this fear of robots becoming *actually intelligent* and destroying humanity has been around basically since the concept of "robot" was first formalized, well before the first computers even existed.
― Οὖτις, Wednesday, 27 January 2016 21:34 (eight years ago) link
nick bostrom's book is basically about this (a lot of people seem to assume it's a kurzweil style book, but it's really all about risk management). he talks a lot about the end goals of an AI and their unintended consequences. one thing that comes up often is that for just about any goal, having more resources would be beneficial. or eliminating obstacles to the goal (such as humans).
The risks in developing superintelligence include the risk of failure to give it the supergoal of philanthropy. One way in which this could happen is that the creators of the superintelligence decide to build it so that it serves only this select group of humans, rather than humanity in general. Another way for it to happen is that a well-meaning team of programmers make a big mistake in designing its goal system. This could result, to return to the earlier example, in a superintelligence whose top goal is the manufacturing of paperclips, with the consequence that it starts transforming first all of earth and then increasing portions of space into paperclip manufacturing facilities. More subtly, it could result in a superintelligence realizing a state of affairs that we might now judge as desirable but which in fact turns out to be a false utopia, in which things essential to human flourishing have been irreversibly lost. We need to be careful about what we wish for from a superintelligence, because we might get it.
http://www.nickbostrom.com/ethics/ai.html
― Karl Malone, Wednesday, 27 January 2016 21:36 (eight years ago) link
xp ninety years not rly a v long time... in pre-singularity years
― denies the existence of dark matter (difficult listening hour), Wednesday, 27 January 2016 21:37 (eight years ago) link
i guess i am unconvinced that something can simultaneously be "superintelligent" and have an extremely rigid and unadaptable "goal system". people would get bored caring about paperclips, let alone one of these things.
really the problem w the whole line of speculation right now is a lack of understanding of what we mean by intelligence let alone superintelligence
― denies the existence of dark matter (difficult listening hour), Wednesday, 27 January 2016 21:40 (eight years ago) link
agree that it could well be an alien brain w v incompatible values, also i suppose agree w the unmade point that the only really altruistic and compassionate thing to do is exterminate us
― denies the existence of dark matter (difficult listening hour), Wednesday, 27 January 2016 21:42 (eight years ago) link
yeah, i think part of the misunderstanding is that everyone tends to anthropomorphize AI. i mean, you're right: people would get extremely bored caring about paperclips. but computers aren't people. they'll switch off between 0 and 1 until they're gone.
agreed about lack of understanding of what the terms mean, though. i read through one of Edge's little collections of smart people talking about stuff, the AI issue, and it was incredibly frustrating because just about every writer seemed to be defining the same terms in different ways.
― Karl Malone, Wednesday, 27 January 2016 21:45 (eight years ago) link
^^otfm
― Οὖτις, Wednesday, 27 January 2016 21:48 (eight years ago) link
humanity's never really gotten around to a good working definition of what "consciousness" is but here we are thinking we have some fancy new way to create it (apart from the old fashioned way of biological reproduction + social engineering), even when we don't know or can't agree on what *it* really is
― Οὖτις, Wednesday, 27 January 2016 21:50 (eight years ago) link
people have been overestimating the proximity of AI for decades, in the sense of an AI as some kind of autonomous problem-solving agent, but maybe to an equal extent underestimating the kinds of intelligence programmers have built to work in specific problem spaces
if you had shown me Google search autocomplete 25 years ago, I don't think my reaction would have been, "Oh that's just an algorithm, where's some real AI?"
― Brad C., Wednesday, 27 January 2016 21:56 (eight years ago) link
obviously there's no denying technological advances. But yeah I don't think I consider what a Google search engine does "intelligence" in any meaningful way, and yeah maybe that is related to it being in the service of a specific, non-autonomous function.
― Οὖτις, Wednesday, 27 January 2016 22:02 (eight years ago) link
it's more like a representation/prediction of group intelligence
though yeah "intelligence" maybe not the word for millions of Google searchers
― Brad C., Wednesday, 27 January 2016 22:03 (eight years ago) link
xposts
i don't know, i guess i don't think that emulating "consciousness" is necessarily essential to a superintelligence. again, the anthropomorphizing thing is a problem. but maybe i'm going too far down the Turing road, thinking that the most important things to measure are outcomes (if an AI can detect human emotions via facial muscle movements more accurately than a human being can and react accordingly, then who cares if it's "conscious" or not?)
― Karl Malone, Wednesday, 27 January 2016 22:13 (eight years ago) link
if an AI can detect human emotions via facial muscle movements more accurately than a human being can and react accordingly
begs the question of what constitutes "more accurately" and "react[ing] accordingly"
― Οὖτις, Wednesday, 27 January 2016 22:17 (eight years ago) link