agree that it could well be an alien brain w v incompatible values, also i suppose agree w the unmade point that the only really altruistic and compassionate thing to do is exterminate us
― denies the existence of dark matter (difficult listening hour), Wednesday, 27 January 2016 21:42 (nine years ago)
yeah, i think part of the misunderstanding is that everyone tends to anthropomorphize AI. i mean, you're right: people would get extremely bored caring about paperclips. but computers aren't people. they'll switch off between 0 and 1 until they're gone.
agreed about lack of understanding of what the terms mean, though. i read through one of Edge's little collections of smart people talking about stuff, the AI issue, and it was incredibly frustrating because just about every writer seemed to be defining the same terms in different ways.
― Karl Malone, Wednesday, 27 January 2016 21:45 (nine years ago)
really the problem w the whole line of speculation right is a lack of understanding of what we mean by intelligence let alone superintelligence
^^otfm
― Οὖτις, Wednesday, 27 January 2016 21:48 (nine years ago)
humanity's never really gotten around to a good working definition of what "consciousness" is but here we are thinking we have some fancy new way to create it (apart from the old fashioned way of biological reproduction + social engineering), even when we don't know or can't agree on what *it* really is
― Οὖτις, Wednesday, 27 January 2016 21:50 (nine years ago)
people have been overestimating the proximity of AI for decades, in the sense of an AI as some kind of autonomous problem-solving agent, but maybe to an equal extent underestimating the kinds of intelligence programmers have built to work in specific problem spaces
if you had shown me Google search autocomplete 25 years ago, I don't think my reaction would have been, "Oh that's just an algorithm, where's some real AI?"
― Brad C., Wednesday, 27 January 2016 21:56 (nine years ago)
obviously there's no denying technological advances. But yeah I don't think I consider what a Google search engine does "intelligence" in any meaningful way, and yeah maybe that is related to it being in the service of a specific, non-autonomous function.
― Οὖτις, Wednesday, 27 January 2016 22:02 (nine years ago)
it's more like a representation/prediction of group intelligence
though yeah "intelligence" maybe not the word for millions of Google searchers
― Brad C., Wednesday, 27 January 2016 22:03 (nine years ago)
xposts
i don't know, i guess i don't think that emulating "consciousness" is necessarily essential to a superintelligence. again, the anthropomorphizing thing is a problem. but maybe i'm going too far down the Turing road, thinking that the most important things to measure are outcomes (if an AI can detect human emotions via facial muscle movements more accurately than a human being can and react accordingly, then who cares if it's "conscious" or not?)
― Karl Malone, Wednesday, 27 January 2016 22:13 (nine years ago)
if an AI can detect human emotions via facial muscle movements more accurately than a human being can and react accordingly
begs the question of what constitutes "more accurately" and "react[ing] accordingly"
― Οὖτις, Wednesday, 27 January 2016 22:17 (nine years ago)
off the top of my head, for "more accurately", assume that 100 people are asked to film themselves while thinking of either a tragic or sexy memory. a test group of humans then views the videos and guesses the emotions on display, while an AI also completes the same task. the AI is more accurate if it guesses the emotion correctly more often than the humans do.
"react accordingly": i don't know, i guess it would just be recognizing when someone is sad or angry and backing off for a while (in contemporary Siri terms, maybe holding off on the automated reminder that your toddler's doctor's appointment is tomorrow afternoon). or laughing at a joke. or doing the fake laughter thing you have to do when someone that is respected tells a mediocre joke in a public setting.
it might seem that "consciousness" is necessary to register human emotions and react as humans do, but i'm not sure that's true.
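(super rough sketch of how the scoring might work, in python. everything below, the labels and the guesses alike, is made up just to show the comparison:)

# toy scorer for the video experiment above: each video has a
# self-reported emotion, and we compare how often the human test
# group vs. the AI guesses it. all inputs here are hypothetical.

def accuracy(true_labels, guesses):
    hits = sum(t == g for t, g in zip(true_labels, guesses))
    return hits / len(true_labels)

true_labels = ["tragic", "sexy", "tragic"]      # what each filmed person reported
human_guesses = ["tragic", "tragic", "tragic"]  # the test group's majority votes
ai_guesses = ["tragic", "sexy", "tragic"]

print("humans:", accuracy(true_labels, human_guesses))  # ~0.67
print("ai:", accuracy(true_labels, ai_guesses))         # 1.0
# the AI "wins" this test if its number comes out higher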
― Karl Malone, Wednesday, 27 January 2016 22:26 (nine years ago)
dreaming seems to be a signal of consciousness. but does an AI have to dream in order to complete tasks at superhuman levels?
― Karl Malone, Wednesday, 27 January 2016 22:28 (nine years ago)
do you really want me to deconstruct the problems with your examples cuz they seem obvious to me
― Οὖτις, Wednesday, 27 January 2016 22:28 (nine years ago)
what if the superintelligent ai says stuff like that all the time
― I expel a minor traveler's flatulence (Sufjan Grafton), Wednesday, 27 January 2016 22:31 (nine years ago)
oh great thx for outing me
― Οὖτις, Wednesday, 27 January 2016 22:37 (nine years ago)
better test would record how 100 people react to jute gyte dinner music and compare with ai's reaction
― I expel a minor traveler's flatulence (Sufjan Grafton), Wednesday, 27 January 2016 22:39 (nine years ago)
xpost i don't know, do what you want i guess
i guess it's likely that i'm just totally misunderstanding what you're saying. i mean, you're saying that emulating "consciousness" is a prerequisite for superintelligence, right? (earlier you wrote "humanity's never really gotten around to a good working definition of what "consciousness" is but here we are thinking we have some fancy new way to create it"). i'm trying (and failing) to argue that it's not necessary to obtain superintelligent results.
i guess an unstated premise i'm using that others might not agree with is that our brains and computers are already very similar. when i feel "sad" i think that's the result of many stimuli working in concert, leading to my neurons doing what they do.
loool sufjan
― Karl Malone, Wednesday, 27 January 2016 22:42 (nine years ago)
also, i guess an obvious point, but there are AI research paths like machine learning that aren't trying to emulate the brain's behavior, and certainly aren't trying to create "consciousness"
― Karl Malone, Wednesday, 27 January 2016 22:44 (nine years ago)
anyway lolz aside I'm gonna take a crack at this cuz it's a slow day at work
assume that 100 people are asked to film themselves while thinking of either a tragic or sexy memory. a test group of humans then views the videos and guesses the emotions on display
this scenario is subject to a lot of problems that plague sociology/psychology experiments, not all of which can be controlled for. Are "tragic" and "sexy" memories actually typically accompanied by facial expressions? (OK tears stereotypically accompany "tragic", but "sexy"? I dunno what facial expression correlates to "sexy"). Are the test subjects intentionally emoting for the camera or otherwise not presenting an objective sample set? Do the people being filmed write down or otherwise indicate what they're thinking of while they're being filmed? How reliable is that? What if their faces are not expressive? Are they all filmed the same way (lighting and framing do a lot of work with film...)? etc. etc.
this is something that actual humans have problems doing. People misread other people's emotional cues *all the time*. It is socialized, learned behavior, and it varies really widely among people, situations, social strata, culture. This is hardly a simple operation for an AI to complete.
xxp
― Οὖτις, Wednesday, 27 January 2016 22:49 (nine years ago)
please keep in mind that i came up with the scenario in less than 60 seconds
― Karl Malone, Wednesday, 27 January 2016 22:50 (nine years ago)
i guess an unstated premise i'm using that others might not agree with is that our brains and computers are already very similar.
yeah I don't agree with this at all. When I was referring to consciousness upthread you can just swap that out for "human brain" or whatever. We have a very very limited understanding of how the brain works. By contrast we have a very detailed understanding of how computers work. There is a massive gap between the two, not the least of which is how our brains manage to process, sift and recall such a vast amount of information while using so little energy.
xp
― Οὖτις, Wednesday, 27 January 2016 22:52 (nine years ago)
AI as "a machine that thinks like a human" is a pretty dated definition, the idea that machine intelligence can augment human cognition in ways that are nearly instant or imperceptible is the goal of most projects, or creating software that can adapt to new situations using past recorded data
the article about the hacker who is trying to out-tesla tesla on the augmented driving front, building a self-driving system that reacts based on recorded human responses to traffic conditions seems to be on the right track, whether or not his work is viable
general emulation of things we consider "consciousness" is a route that's well-trodden in the chatbot "can I tell whether this is a human" way and isn't really that important outside of customer support or w/e
― μpright mammal (mh), Wednesday, 27 January 2016 22:55 (nine years ago)
imo we're going to find out more about the human brain by creating systems that learn than we are going to create systems that learn by determining how the human brain works
― μpright mammal (mh), Wednesday, 27 January 2016 22:56 (nine years ago)
the idea that machine intelligence can augment human cognition in ways that are nearly instant or imperceptible is the goal of most projects
sure, this is something we're already living with.
but when people talk about AI superintelligences taking over, I don't think this is what they're referring to - they're referring to something that not only does what a human brain can do, but does it exponentially better. And we're nowhere near the former, much less the latter.
― Οὖτις, Wednesday, 27 January 2016 22:58 (nine years ago)
I think it's more a matter of creating systems that have a gestalt decision-making process or evolutionary algorithm that comes up with things that humans would not, or would possibly not even conceive of
making machines think like humans is silly, imo, we should determine the better parts of abstract reasoning and develop that
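(toy version of the evolutionary-algorithm idea, in python. the fitness function below is arbitrary and made up, the point is just that nobody hand-writes the solution, the loop finds it:)

import random

def fitness(genome):
    # arbitrary made-up goal: reward alternating bits
    return sum(genome[i] != genome[i + 1] for i in range(len(genome) - 1))

def mutate(genome, rate=0.1):
    # flip each bit with small probability
    return [1 - b if random.random() < rate else b for b in genome]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)  # rank by fitness
    parents = population[:10]                   # keep the fittest ten
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

best = max(population, key=fitness)
print(fitness(best), best)  # converges on an alternating pattern no one wrote down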
― μpright mammal (mh), Wednesday, 27 January 2016 23:00 (nine years ago)
machines that not only _do not_ do what human brains do, but do things in ways so different that they seem foreign to our ideas of cognition
― μpright mammal (mh), Wednesday, 27 January 2016 23:01 (nine years ago)
that makes more sense to me than trying to build the nine millionth robot that can't walk through a door
― Οὖτις, Wednesday, 27 January 2016 23:04 (nine years ago)
(just to bring it all back full circle)
yes
i always warn against conceptually anthropomorphizing AI in these kinds of discussions, and then end up in a wormhole of rebutting anthropomorphic arguments anyway. and inevitably i mention sexy memories and things fall apart
― Karl Malone, Wednesday, 27 January 2016 23:06 (nine years ago)
hey you're the one that said "our brains and computers are already very similar"
― Οὖτις, Wednesday, 27 January 2016 23:08 (nine years ago)
with sexy results
we've come a long way. our computers' sexy memories are now not so different from our own.
― I expel a minor traveler's flatulence (Sufjan Grafton), Wednesday, 27 January 2016 23:09 (nine years ago)
ilx plays a mind forever voyaging imo
― denies the existence of dark matter (difficult listening hour), Wednesday, 27 January 2016 23:11 (nine years ago)
at ilx, we've developed an ai that is convinced it left its sunglasses in the booth at lunch as those very sunglasses sit atop its monitor
― I expel a minor traveler's flatulence (Sufjan Grafton), Wednesday, 27 January 2016 23:12 (nine years ago)
chilling
― denies the existence of dark matter (difficult listening hour), Wednesday, 27 January 2016 23:14 (nine years ago)
I think that people are definitely trying to build computers/AIs that they can't understand (see my memristor article above, or even certain types of machine-learning). These also seem like the ones (IMO) that are most likely to yield the most interesting AIs or consciousnesses.
― schwantz, Wednesday, 27 January 2016 23:20 (nine years ago)
oh hey this thread
The DeepMind Go thing looks really really cool and I'll definitely read the paper but it's basically a big search problem with a relatively small representation and a clear reward signal. It's nothing like learning to act within the complexity of the real world, which is the big thing that nobody has any idea how to do.
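(For contrast, here's the shape of that kind of problem in miniature: a toy game-tree search over Nim, five stones, take 1 or 2, last stone wins, where the reward is perfectly clear at terminal states. Just the bare search skeleton, not DeepMind's actual method, which guides a vastly bigger search with learned policy/value networks:)

def moves(stones):
    return [stones - take for take in (1, 2) if stones - take >= 0]

def minimax(stones, my_turn):
    if stones == 0:
        # whoever just moved took the last stone and won
        return -1 if my_turn else 1  # clear reward signal at terminal states
    values = [minimax(s, not my_turn) for s in moves(stones)]
    return max(values) if my_turn else min(values)

print(minimax(5, True))  # 1: the first player can force a win from 5 stones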
― conditional random jepsen (seandalai), Thursday, 28 January 2016 00:25 (nine years ago)
I sure don't
― μpright mammal (mh), Thursday, 28 January 2016 00:25 (nine years ago)
mh & km, you talk of emergent results, but what results are these? What do you expect your hypothetical non-anthropomorphic AI to do? And how will the AI do this without some (necessarily anthropomorphic?) semantic understanding? To accomplish anything that would impress me or shakey, an AI would have to manipulate things in the world, take a variety of sensory (and to us, possibly extrasensory) measurements, and "think" in a way that allowed it to either create something novel or make a useful "true" "assertion" (and this latter accomplishment would require semantic understanding in order to communicate that assertion).
You seem loath to anthropomorphize AI, but I'm skeptical that useful AI accomplishments can be achieved without very human-like semantic understanding.
I'd also like to argue with the proposed timeline that's been touted itt, as if a hard-coded parlor trick (computers can beat humans at rock-paper-scissors, too) means that AI has reached "baby level." It hasn't, and I'm skeptical that we've even reached "earthworm level" (cf. https://en.wikipedia.org/wiki/OpenWorm).
Have you read this? http://www.skeptic.com/eskeptic/06-08-25/#feature
It's 9 years old, and I can hardly say with confidence that it's irrefutable, but the article makes a convincing, comprehensive case against anything but narrowly specific, hard-coded AI (like a program that plays Go).
I'd like to see an argument as to how, e.g., google will ever remotely understand what the hell I want on the Internet.
― bamcquern, Thursday, 28 January 2016 00:30 (nine years ago)
Google is pretty good at understanding what people want on the Internet tbh. Maybe just not you.
― conditional random jepsen (seandalai), Thursday, 28 January 2016 00:35 (nine years ago)
google has virtually no semantic understanding
― bamcquern, Thursday, 28 January 2016 00:36 (nine years ago)
and its basic underlying principles don't even try to
― bamcquern, Thursday, 28 January 2016 00:37 (nine years ago)
Comparing AI to organic intelligences isn't really that informative - their strengths and weaknesses are so different.
― conditional random jepsen (seandalai), Thursday, 28 January 2016 00:37 (nine years ago)
But what is it that AI-proponents itt expect AI to eventually do?
― bamcquern, Thursday, 28 January 2016 00:39 (nine years ago)
Google doesn't need a whole lot of "semantic understanding" to do a good job of ranking search results. They do have more than "virtually no" machinery that explicitly handles this stuff anyway - the Knowledge Graph is a big part of their system these days.
― conditional random jepsen (seandalai), Thursday, 28 January 2016 00:40 (nine years ago)
put a lot of people out of work xp
― conditional random jepsen (seandalai), Thursday, 28 January 2016 00:41 (nine years ago)
be a terrible replacement for spurned religious beliefs
― bicyclescope (mattresslessness), Thursday, 28 January 2016 00:41 (nine years ago)
http://i.imgur.com/wgIjdZv.gif
― denies the existence of dark matter (difficult listening hour), Thursday, 28 January 2016 00:42 (nine years ago)
xp That's really the only thing I'm sure about in the medium-term. I don't think that means that AI is bad or dangerous, but society will need to work out how to handle a jump in unemployment long before it has to worry about killer superintelligences.
― conditional random jepsen (seandalai), Thursday, 28 January 2016 00:42 (nine years ago)
http://www.ncbi.nlm.nih.gov/pubmed/8110662
― μpright mammal (mh), Thursday, 28 January 2016 00:44 (nine years ago)
the rhetoric of inevitability around ai is so maddeningly stupid, where the hell did it come from?
― bicyclescope (mattresslessness), Thursday, 28 January 2016 00:45 (nine years ago)