Artificial intelligence still has some way to go


AI as "a machine that thinks like a human" is a pretty dated definition. the idea that machine intelligence can augment human cognition in ways that are nearly instant or imperceptible is the goal of most projects, or creating software that can adapt to new situations using past recorded data

the article about the hacker who is trying to out-tesla tesla on the augmented driving front -- building a self-driving system that reacts based on recorded human responses to traffic conditions -- seems to be on the right track, whether or not his work is viable

general emulation of things we consider "consciousness" is a route that's well-trodden in the chatbot "can I tell whether this is a human" way and isn't really that important outside of customer support or w/e

μpright mammal (mh), Wednesday, 27 January 2016 22:55 (nine years ago)

imo we're going to find out more about the human brain by creating systems that learn than we are going to create systems that learn by determining how the human brain works

μpright mammal (mh), Wednesday, 27 January 2016 22:56 (nine years ago)

the idea that machine intelligence can augment human cognition in ways that are nearly instant or imperceptible is the goal of most projects

sure, this is something we're already living with.

but when people talk about AI superintelligences taking over, I don't think this is what they're referring to - they're referring to something that not only does what a human brain can do, but does it exponentially better. And we're nowhere near the former, much less the latter.

Οὖτις, Wednesday, 27 January 2016 22:58 (nine years ago)

I think it's more a matter of creating systems that have a gestalt decision-making process or evolutionary algorithm that comes up with things that humans would not, or would possibly not even conceive of

making machines think like humans is silly, imo, we should determine the better parts of abstract reasoning and develop that

μpright mammal (mh), Wednesday, 27 January 2016 23:00 (nine years ago)

machines that not only _do not_ do what human brains do, but do things in a way so differently that it seems foreign to our ideas of cognition

μpright mammal (mh), Wednesday, 27 January 2016 23:01 (nine years ago)

that makes more sense to me than trying to build the nine millionth robot that can't walk through a door

Οὖτις, Wednesday, 27 January 2016 23:04 (nine years ago)

(just to bring it all back full circle)

Οὖτις, Wednesday, 27 January 2016 23:04 (nine years ago)

yes

i always warn against conceptually anthropomorphizing AI in these kind of discussions, and then end up in a wormhole of rebutting anthropomorphic arguments anyway. and inevitably i mention sexy memories and things fall apart

Karl Malone, Wednesday, 27 January 2016 23:06 (nine years ago)

hey you're the one that said "our brains and computers are already very similar"

Οὖτις, Wednesday, 27 January 2016 23:08 (nine years ago)

with sexy results

Οὖτις, Wednesday, 27 January 2016 23:08 (nine years ago)

we've come a long way. our computers' sexy memories are now not so different from our own.

I expel a minor traveler's flatulence (Sufjan Grafton), Wednesday, 27 January 2016 23:09 (nine years ago)

ilx plays a mind forever voyaging imo

denies the existence of dark matter (difficult listening hour), Wednesday, 27 January 2016 23:11 (nine years ago)

at ilx, we've developed an ai that is convinced it left its sunglasses in the booth at lunch as those very sunglasses sit atop its monitor

I expel a minor traveler's flatulence (Sufjan Grafton), Wednesday, 27 January 2016 23:12 (nine years ago)

chilling

denies the existence of dark matter (difficult listening hour), Wednesday, 27 January 2016 23:14 (nine years ago)

I think that people are definitely trying to build computers/AIs that they can't understand (see my memristor article above, or even certain types of machine-learning). These also seem like the ones (IMO) that are most likely to yield the most interesting AIs or consciousnesses.

schwantz, Wednesday, 27 January 2016 23:20 (nine years ago)

oh hey this thread

The DeepMind Go thing looks really really cool and I'll definitely read the paper but it's basically a big search problem with a relatively small representation and a clear reward signal. It's nothing like learning to act within the complexity of the real world, which is the big thing that nobody has any idea how to do.

conditional random jepsen (seandalai), Thursday, 28 January 2016 00:25 (nine years ago)

I sure don't

μpright mammal (mh), Thursday, 28 January 2016 00:25 (nine years ago)

mh & km, you talk of emergent results, but what results are these? What do you expect your hypothetical non-anthropomorphic AI to do? And how will the AI do this without some (necessarily anthropomorphic?) semantic understanding? To accomplish anything that would impress me or shakey, an AI would have to manipulate things in the world, take a variety of sensory (and to us, possibly extrasensory) measurements, and "think" in a way that allowed it to either create something novel or make a useful "true" "assertion" (and this latter accomplishment would require semantic understanding in order to communicate that assertion).

You seem loath to anthropomorphize AI, but I'm skeptical that useful AI accomplishments can be achieved without very human-like semantic understanding.

I'd also like to argue with the proposed timeline that's been touted itt, as if a hard-coded parlor trick (computers can beat humans at rock-paper-scissors, too) means that AI has reached "baby level." It hasn't, and I'm skeptical that we've even reached "earthworm level" (cf. https://en.wikipedia.org/wiki/OpenWorm).

Have you read this?
http://www.skeptic.com/eskeptic/06-08-25/#feature

It's 9 years old, and I can hardly say with confidence that it's irrefutable, but the article makes a convincing, comprehensive case against anything but narrowly specific, hard-coded AI (like a program that plays Go).

I'd like to see an argument as to how, e.g., google will ever remotely understand what the hell I want on the Internet.

bamcquern, Thursday, 28 January 2016 00:30 (nine years ago)

Google is pretty good at understanding what people want on the Internet tbh. Maybe just not you.

conditional random jepsen (seandalai), Thursday, 28 January 2016 00:35 (nine years ago)

google has virtually no semantic understanding

bamcquern, Thursday, 28 January 2016 00:36 (nine years ago)

and its basic underlying principles don't even try to

bamcquern, Thursday, 28 January 2016 00:37 (nine years ago)

Comparing AI to organic intelligences isn't really that informative - their strengths and weaknesses are so different.

conditional random jepsen (seandalai), Thursday, 28 January 2016 00:37 (nine years ago)

But what is it that AI-proponents itt expect AI to eventually do?

bamcquern, Thursday, 28 January 2016 00:39 (nine years ago)

Google doesn't need a whole lot of "semantic understanding" to do a good job of ranking search results. They do have more than "virtually no" explicit handling of this stuff anyway - the Knowledge Graph is a big part of their system these days.

conditional random jepsen (seandalai), Thursday, 28 January 2016 00:40 (nine years ago)

put a lot of people out of work xp

conditional random jepsen (seandalai), Thursday, 28 January 2016 00:41 (nine years ago)

be a terrible replacement for spurned religious beliefs

bicyclescope (mattresslessness), Thursday, 28 January 2016 00:41 (nine years ago)

http://i.imgur.com/wgIjdZv.gif

denies the existence of dark matter (difficult listening hour), Thursday, 28 January 2016 00:42 (nine years ago)

xp That's really the only thing I'm sure about in the medium-term. I don't think that means that AI is bad or dangerous, but society will need to work out how to handle a jump in unemployment long before it has to worry about killer superintelligences.

conditional random jepsen (seandalai), Thursday, 28 January 2016 00:42 (nine years ago)

http://www.ncbi.nlm.nih.gov/pubmed/8110662

μpright mammal (mh), Thursday, 28 January 2016 00:44 (nine years ago)

the rhetoric of inevitability around ai is so maddeningly stupid, where the hell did it come from?

bicyclescope (mattresslessness), Thursday, 28 January 2016 00:45 (nine years ago)

again, mostly repetition using human-known rules, but molecular design using AI to address combinatorial problems can give results that would be found through brute force repetition but might be unintuitive to humans -- coming up with novel solutions that people might not stumble upon

molecular modeling in genetics is huge right now

μpright mammal (mh), Thursday, 28 January 2016 00:48 (nine years ago)

Like if tesla or whoever builds a self-driving car, that would be a pretty incredible (and seemingly inevitable) accomplishment, but guidance systems aren't new, and it would hardly be "strong AI."

"Ranking search results" is not an admirable feat. That's glorified punch card territory.

Knowledge Graph is scarcely functionally semantic. I don't care that google can crib the appropriate parts of wikipedia and put them in an infobox.

bamcquern, Thursday, 28 January 2016 00:49 (nine years ago)

I accept that AI research is useful, but using an AI tool so that a human can make a decision about a drug is a far cry from KM's implied "earthworm to baby to Einstein" scenario.

bamcquern, Thursday, 28 January 2016 00:50 (nine years ago)

self-driving cars are kind of defeatist when it comes to artificial intelligence, because by design they have to emulate actions humans perform -- the control systems of cars, the shape of roads, and reacting to other drivers not under the same control (although organizations are starting to recognize the need for cars to be able to communicate with other cars) mean they're stuck with a number of constraints

μpright mammal (mh), Thursday, 28 January 2016 00:54 (nine years ago)

general purpose language semantics are kind of a brick wall when it comes to knowledge, but it's not inconceivable that you could have a biomass-consuming big old robot lumbering through the countryside that would be self-sustaining, maybe eventually self-repairing, and would be able to learn from past interactions what works and what doesn't, assuming one of the things it has to learn is not to walk into a canyon or river

the problem being that spoken or written language is the basis for shared knowledge, and there's no real "language" of artificial beings

μpright mammal (mh), Thursday, 28 January 2016 00:57 (nine years ago)

yet

μpright mammal (mh), Thursday, 28 January 2016 00:57 (nine years ago)

it comes down to whether you think human cognition is a special thing, or just an amazingly huge number of iterations of things that worked or didn't, and we don't have the equivalent of randomly throwing molecules together over billions of years until single cell organisms rise up

μpright mammal (mh), Thursday, 28 January 2016 00:59 (nine years ago)

it's not quite what we're going for, in that we are looking for a particular response, but simple steps toward programs that write programs to create an intended response exist:
http://www.primaryobjects.com/2013/01/27/using-artificial-intelligence-to-write-self-modifying-improving-programs/
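the linked article evolves Brainfuck programs with a genetic algorithm; the core search idea can be sketched in a few lines. (this is a toy hill-climb toward a fixed target string, just to show the loop shape -- not the article's actual setup, and all the names here are made up:)

```python
import random

TARGET = "hello"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate: str) -> int:
    # score = number of positions that already match the target
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate: str) -> str:
    # randomly replace one character
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

def evolve(seed: int = 0) -> str:
    random.seed(seed)
    current = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    # keep any mutation that doesn't lose ground, until the target emerges
    while fitness(current) < len(TARGET):
        child = mutate(current)
        if fitness(child) >= fitness(current):
            current = child
    return current

print(evolve())  # converges to "hello"
```

the article's version is the same loop, except each candidate is a whole program and fitness is "how close is this program's *output* to what we wanted" -- which is why it takes so many more cycles.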

μpright mammal (mh), Thursday, 28 January 2016 01:04 (nine years ago)

To repeat myself

New Yorker magazine alert thread

service desk hardman (El Tomboto), Thursday, 28 January 2016 01:06 (nine years ago)

that's why we have to have the computer program it for us, then program a better version, and continue onward for a few trillion cycles

μpright mammal (mh), Thursday, 28 January 2016 01:10 (nine years ago)

and then it forgets the password to itself because storage capacity turns out to be finite

service desk hardman (El Tomboto), Thursday, 28 January 2016 01:14 (nine years ago)

Like if tesla or whoever builds a self-driving car, that would be a pretty incredible (and seemingly inevitable) accomplishment, but guidance systems aren't new, and it would hardly be "strong AI."

"Ranking search results" is not an admirable feat. That's glorified punch card territory.

Knowledge Graph is scarcely functionally semantic. I don't care that google can crib the appropriate parts of wikipedia and put them in an infobox.

― bamcquern, Thursday, January 28, 2016 12:49 AM (15 minutes ago)

I actually agree that AI with an understanding of how it affects the world, creativity, and real conversational ability is not going to turn up any day soon. But "glorified punch card territory" is horseshit.

Anyway: AI is whatever hasn't been done yet

conditional random jepsen (seandalai), Thursday, 28 January 2016 01:14 (nine years ago)

Yes, obviously it's glorified bazillions of punch cards territory

service desk hardman (El Tomboto), Thursday, 28 January 2016 01:22 (nine years ago)

it's the infinite monkeys/typewriters problem, only there's a set number of monkeys (although faster monkeys keep appearing every day), everything every monkey has ever typed is available for reference, and you start out with a prescribed outcome of a copy of Hamlet

then you introduce the problem that you want something in the vein of Hamlet, but with some new plot twists, but you don't have humans capable of saying whether the result is sensible or good. so you need some parameters, like grammatical rules, and some other way to evaluate whether the story is any good without employing infinitely many humans

μpright mammal (mh), Thursday, 28 January 2016 01:22 (nine years ago)

no one should lump me together with mh; i have ~zero expertise (sorry if i implied i did - like most other things i'm interested in, i'm an amateur and easily schooled).

and also sorry if i implied that superintelligence is inevitable. i don't think that. but i do think it's possible, and if it is, it presents incredible problems. i suppose i often fall into the appeal to authority fallacy, but when people like hawking, musk, gates, woz, etc are explicitly warning about AI (from an open letter published last July - "AI technology has reached a point where the deployment of [autonomous weapons] is – practically if not legally – feasible within years, not decades", and they voiced similar warnings on the risks posed by superintelligence) i pay attention. it's possible that everyone on ILX is more knowledgeable than those guys. but... i don't think so. no offense. if there's even a sliver of possibility that they're correct, it's something worth discussing. to my knowledge, i've never seen anyone (here or on the internet in general) rebut nick bostrom's points about the security/containment problems with superintelligence. everything i've read in opposition just attacks the idea of superintelligence ever existing in the first place. so it seems like there's the group of people who dismiss AI in general, and then there's the group of people who are open to the possibility of AI and think it could be a huge existential problem, and very few people in between. since the smartest people in the room fall into the latter category, i tend to pay attention to what they say.

(also i mentioned the earthworm/baby/einstein thing just because i do think it's possible that an AI capable of teaching itself would be able to do so at an exponential rate, that human intelligence is not the ceiling, and that the difference between the least and most intelligent human is not as large as we think it is.)

but for real don't lump me in with mh because i feel sorry for anyone who has to be on the People Who Bring Up Sexy Memories team

Karl Malone, Thursday, 28 January 2016 01:22 (nine years ago)

imo a number of punch cards equal to the number of atoms on earth might be sufficient

μpright mammal (mh), Thursday, 28 January 2016 01:23 (nine years ago)

Ever use voice commands on your phone? There's one very practical and widespread recent benefit of AI research.

AdamVania (Adam Bruneau), Thursday, 28 January 2016 01:24 (nine years ago)

siri is basically dragon naturallyspeaking plus the eliza bot plus twenty years of faster computers
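the eliza half of that is just shallow pattern matching -- no semantic understanding anywhere. a toy sketch (these rules are illustrative, not Weizenbaum's originals):

```python
import re

# a few ELIZA-style rules: regex pattern -> response template.
# order matters; first match wins.
RULES = [
    (re.compile(r"\bi need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            # reflect the matched fragment back at the user
            return template.format(*m.groups())
    return "Please go on."  # stock fallback when nothing matches

print(respond("I need coffee"))  # -> Why do you need coffee?
```

point being: the "conversation" is entirely surface-level string reflection, which is why it feels uncanny for two exchanges and then falls apart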

μpright mammal (mh), Thursday, 28 January 2016 01:25 (nine years ago)

I had a buddy of mine in undergrad who wrote an evolving algorithm to make drum machine patterns. Inevitably after a few iterations trying to select for the grooviest phattest funkiest loops around, we would end up with a hit on nearly every 16th-note step, so even with some built-in preferences for the trad backbeat, fusion jazz fills were what you got. It was a fun project though.
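the failure mode there is easy to reproduce: if the fitness function rewards activity at all, selection saturates the grid. (a toy reconstruction of the idea, not his actual code -- every name and parameter here is made up:)

```python
import random

STEPS = 16  # one bar of 16th notes, each step on (1) or off (0)

def fitness(pattern):
    # naive "funkiness" score that rewards activity -- the flaw:
    # more hits always scores higher, so evolution fills every step
    return sum(pattern)

def mutate(pattern):
    p = list(pattern)
    p[random.randrange(STEPS)] ^= 1  # flip one step on/off
    return p

def evolve(generations=500, seed=0):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(STEPS)] for _ in range(8)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:4]  # keep the "grooviest" half
        pop = survivors + [mutate(random.choice(survivors)) for _ in range(4)]
    return max(pop, key=fitness)

best = evolve()
print(sum(best), "of", STEPS, "steps active")  # ends up nearly saturated
```

so even with a built-in bonus for a trad backbeat, the density term dominates and you get fusion-jazz fills on every 16th -- the algorithm did exactly what it was told, just not what was meant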

service desk hardman (El Tomboto), Thursday, 28 January 2016 01:29 (nine years ago)

xps How is it horseshit? It sorts based on terms, but in a sophisticated ("glorified") way. The thing at the very top of my want-list for AI research is for a search engine to have any clue as to what I'm looking for on the internet.

That brainfuck program that eventually writes "hello" and "reddit" is disappointing.

I would reply to Hofstadter that whatever hasn't happened yet will be just as underwhelming as what has happened. No one says that technology doesn't have the potential to improve efficiency. We're saying that technology is very unlikely to produce anything resembling "strong AI," which is a proposition you're not necessarily arguing against.

I'd go further and say that our service sector, which comprises about 81% of US jobs, is pretty secure from advancements in AI, and will probably merely be augmented and enhanced by it.

bamcquern, Thursday, 28 January 2016 01:36 (nine years ago)
