and then it forgets the password to itself because storage capacity turns out to be finite
― service desk hardman (El Tomboto), Thursday, 28 January 2016 01:14 (nine years ago)
Like if tesla or whoever builds a self-driving car, that would be a pretty incredible (and seemingly inevitable) accomplishment, but guidance systems aren't new, and it would hardly be "strong AI."
"Ranking search results" is not an admirable feat. That's glorified punch card territory.
Knowledge Graph is scarcely functionally semantic. I don't care that google can crib the appropriate parts of wikipedia and put them in an infobox.
― bamcquern, Thursday, January 28, 2016 12:49 AM (15 minutes ago)
I actually agree that AI with an understanding of how it affects the world, creativity, real conversational ability is not going to turn up any day soon. But "glorified punch card territory" is horseshit.
Anyway: AI is whatever hasn't been done yet
― conditional random jepsen (seandalai), Thursday, 28 January 2016 01:14 (nine years ago)
Yes, obviously it's glorified bazillions of punch cards territory
― service desk hardman (El Tomboto), Thursday, 28 January 2016 01:22 (nine years ago)
it's the infinite monkeys/typewriters problem, only there's a set number of monkeys (although faster monkeys keep appearing every day), everything every monkey has ever typed is available for reference, and you start out with a prescribed outcome: a copy of Hamlet
then you introduce the problem that you want something in the vein of Hamlet, but with some new plot twists, but you don't have humans capable of saying whether the result is sensible or good. so you need some parameters, like grammatical rules, and some other way to evaluate whether the story is any good without employing infinitely many humans
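the single-target version is basically the old "weasel program" trick, roughly like this sketch (python, and everything here is made up for illustration: the tiny TARGET stands in for Hamlet, and score/mutate are toy stand-ins):

import random
import string

TARGET = "to be or not to be"   # tiny stand-in for the prescribed copy of Hamlet
CHARS = string.ascii_lowercase + " "

def score(text):
    # how many characters already match the prescribed outcome
    return sum(a == b for a, b in zip(text, TARGET))

def mutate(text, rate=0.05):
    # a monkey retypes the text, occasionally hitting a random key
    return "".join(random.choice(CHARS) if random.random() < rate else c for c in text)

best = "".join(random.choice(CHARS) for _ in range(len(TARGET)))
generation = 0
while score(best) < len(TARGET):
    # a fixed pool of monkeys, each copying (with typos) the best text so far
    best = max((mutate(best) for _ in range(100)), key=score)
    generation += 1

print("reached the target after", generation, "generations")

the hard part is the second case above: when you want something merely "in the vein of Hamlet", you can't write score() as character-matching at all, and that's where the missing evaluator bites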
― μpright mammal (mh), Thursday, 28 January 2016 01:22 (nine years ago)
no one should lump me together with mh; i have ~zero expertise (sorry if i implied i did - like most other things i'm interested in, i'm an amateur and easily schooled).
and also sorry if i implied that superintelligence is inevitable. i don't think that. but i do think it's possible, and if it is, it presents incredible problems.

i suppose i often fall into the appeal to authority fallacy, but when people like hawking, musk, gates, woz, etc are explicitly warning about AI (from an open letter published last July - "AI technology has reached a point where the deployment of [autonomous weapons] is – practically if not legally – feasible within years, not decades", and they voiced similar warnings on the risks posed by superintelligence) i pay attention. it's possible that everyone on ILX is more knowledgeable than those guys. but... i don't think so. no offense. if there's even a sliver of possibility that they're correct, it's something worth discussing.

to my knowledge, i've never seen anyone (here or on the internet in general) rebut nick bostrom's points about the security/containment problems with superintelligence. everything i've read in opposition just attacks the idea of superintelligence ever existing in the first place. so it seems like there's the group of people who dismiss AI in general, and then there's the group of people who are open to the possibility of AI and think it could be a huge existential problem, and very few people in between. since the smartest people in the room fall into the latter category, i tend to pay attention to what they say.
(also i mentioned the earthworm/baby/einstein thing just because i do think it's possible that an AI capable of teaching itself would be able to do so at an exponential rate, that human intelligence isn't the ceiling, and that the difference between the least and most intelligent human is not as large as we think it is.)
but for real don't lump me in with mh because i feel sorry for anyone who has to be on the People Who Bring Up Sexy Memories team
― Karl Malone, Thursday, 28 January 2016 01:22 (nine years ago)
imo a number of punch cards equal to the number of atoms on earth might be sufficient
― μpright mammal (mh), Thursday, 28 January 2016 01:23 (nine years ago)
Ever use voice commands on your phone? There's one very practical and widespread recent benefit of AI research.
― AdamVania (Adam Bruneau), Thursday, 28 January 2016 01:24 (nine years ago)
siri is basically dragon naturallyspeaking plus the eliza bot plus twenty years of faster computers
― μpright mammal (mh), Thursday, 28 January 2016 01:25 (nine years ago)
A buddy of mine in undergrad wrote an evolving algorithm to make drum machine patterns. Inevitably, after a few iterations of trying to select for the grooviest, phattest, funkiest loops around, we would end up with a hit on nearly every 16th-note step, so even with some built-in preference for the trad backbeat, fusion jazz fills were what you got. It was a fun project though.
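(Totally hypothetical reconstruction, not his actual code, but if the "grooviness" score mostly rewards activity, selection fills in every step:)

import random

STEPS = 16  # one bar of 16th notes, 1 = hit, 0 = rest

def grooviness(pattern):
    # made-up stand-in fitness: mostly rewards busy patterns, with a small
    # bonus for hits on the backbeat (steps 4 and 12)
    return sum(pattern) + 2 * (pattern[4] + pattern[12])

def mutate(pattern):
    child = pattern[:]
    i = random.randrange(STEPS)
    child[i] = 1 - child[i]  # toggle one step on or off
    return child

population = [[random.randint(0, 1) for _ in range(STEPS)] for _ in range(30)]
for _ in range(200):
    population.sort(key=grooviness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

population.sort(key=grooviness, reverse=True)
print(population[0])  # converges toward [1] * 16: a hit on nearly every step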
― service desk hardman (El Tomboto), Thursday, 28 January 2016 01:29 (nine years ago)
xps How is it horseshit? It sorts based on terms, but in a sophisticated ("glorified") way. The thing at the very top of my want-list for AI research is for a search engine to have any clue as to what I'm looking for on the internet.
That brainfuck program that eventually writes "hello" and "reddit" is disappointing.
I would reply to Hofstadter that whatever hasn't happened yet will be just as underwhelming as what has happened. No one says that technology doesn't have the potential to improve efficiency. We're saying that technology is very unlikely to produce anything resembling "strong AI," which is a proposition you're not necessarily arguing against.
I'd go further and say that our service sector, which comprises about 81% of US jobs, is pretty secure from advancements in AI, and will probably merely be augmented and enhanced by it.
― bamcquern, Thursday, 28 January 2016 01:36 (nine years ago)
Self-driving cars will decimate the service sector in a couple of years.
― schwantz, Thursday, 28 January 2016 01:38 (nine years ago)
I would expect the concerns of hawking, musk, gates, et al. over autonomous weapons are not due to their overwhelming strong-AI capability taking over the world, but rather to their low cost, mobility, and firepower, coupled with the fact that their owners will no doubt have extremely low standards about who these weapons kill or maim. The development of cheap mobile autonomous weapons is just an extension of the idea of land mines or booby traps, which are autonomous weapons once they are put in place, and which are highly indiscriminate.
― a little too mature to be cute (Aimless), Thursday, 28 January 2016 01:39 (nine years ago)
KM, smart celebrities can be wrong, and, yes, you are fallaciously appealing to authority by siding with them because they're smart celebrities.
If a program can teach itself something useful and novel in the vein of a superintelligent being, I think it will be doing it through feedback loops analogous to ours that require sensory input and actual experiences. How else will a superintelligent being develop semantic awareness without those things? And how will superintelligent beings develop exponentially if they're living experiential lives more or less like us?
― bamcquern, Thursday, 28 January 2016 01:42 (nine years ago)
http://www.bls.gov/emp/ep_table_201.htm
transportation and warehousing - 3%
― bamcquern, Thursday, 28 January 2016 01:43 (nine years ago)
I buried the lede on that page that was generating programs in brainfuck -- in the second part, the algorithm ends up generating programs that can add, subtract, and reverse strings -- all without having any idea what those operations are
given input and desired outcomes, it figured out subtraction
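(roughly how that works, as far as i can tell -- a sketch, not the actual code from that page: each candidate brainfuck program gets scored by how close its output is to the desired output on example inputs. the scoring weights and the hand-written subtractor below are made up for illustration)

def run_bf(code, inputs, max_steps=5000):
    # minimal brainfuck interpreter with a step limit, so random candidate
    # programs that loop forever just get cut off instead of hanging the run
    tape, ptr, pc, out, inp = [0] * 300, 0, 0, [], list(inputs)
    stack, jumps = [], {}
    for i, c in enumerate(code):          # match up brackets in advance
        if c == "[":
            stack.append(i)
        elif c == "]" and stack:
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    for _ in range(max_steps):
        if pc >= len(code):
            break
        c = code[pc]
        if c == ">": ptr = (ptr + 1) % len(tape)
        elif c == "<": ptr = (ptr - 1) % len(tape)
        elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".": out.append(tape[ptr])
        elif c == ",": tape[ptr] = inp.pop(0) if inp else 0
        elif c == "[" and tape[ptr] == 0: pc = jumps.get(pc, pc)
        elif c == "]" and tape[ptr] != 0: pc = jumps.get(pc, pc)
        pc += 1
    return out

def fitness(code, cases):
    # higher is better; 0 means every output byte matched on every example
    total = 0
    for inputs, expected in cases:
        out = run_bf(code, inputs)
        for i, want in enumerate(expected):
            got = out[i] if i < len(out) else 0
            total -= abs(want - got)                  # per-byte distance penalty
        total -= 10 * abs(len(out) - len(expected))   # wrong-length penalty
    return total

# subtraction: feed pairs of bytes, expect the difference back -- note the
# differences vary, so an if statement over memorized cases won't score well
cases = [([9, 4], [5]), ([100, 58], [42]), ([30, 7], [23])]
print(fitness(",>,[-<->]<.", cases))  # a hand-written subtractor scores a perfect 0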
― μpright mammal (mh), Thursday, 28 January 2016 01:49 (nine years ago)
I completely buy that autonomous weapons are likely to be deployed within the next two decades! I totally buy that. I am also 100% confident that they will have multiple disastrous flaws that make them not at all an existential threat, and unlikely to be much of a threat at all to any sufficiently prepared and equipped target.
I'm far, far more concerned about the threat of pervasive semi-autonomous civilian "intelligence" that just happens to be readily exploited and abused by any half-curious IT dropout. Part of that is because it's my job, but the other part is that, because it's my job, I get to be intimately aware of how atrocious and shoddy all this shit is. Move fast and break things, indeed.
― service desk hardman (El Tomboto), Thursday, 28 January 2016 01:52 (nine years ago)
google has virtually no semantic understanding
― bamcquern, Wednesday, January 27, 2016 4:36 PM (1 hour ago)
Absolutely! And this has been reinforced by ilx ... so many missed posts due to bad timing due to irrelevant Google Image Search results.
― sarahell, Thursday, 28 January 2016 01:53 (nine years ago)
An AI that learns through experiences but that happens to live on the infrastructure that our current IT ecosystem lives on - i.e. we don't in the meantime develop all-new kinds of memory and transistors and operating systems and trust networks that are basically nothing at all like what we have now - is going to be like an earthworm that becomes a baby that then becomes an army of pubescent Von Neumanns that all die instantly as soon as Mozilla decides their CA is no good
― service desk hardman (El Tomboto), Thursday, 28 January 2016 02:02 (nine years ago)
idk man every workplace has those proxies that install trusted root certs that let them crack open and spy on https sessions
― μpright mammal (mh), Thursday, 28 January 2016 02:22 (nine years ago)
Xps The say-hello program inadvertently "learns" to subtract, etc, but that seems inevitable and almost necessary. It's never going to write a program that doesn't produce hello, reddit, etc, and even if it were heavily repurposed to write a smaller or more efficient hello-program-writing program, it would never write anything that wasn't a hello-program-writer writer. The almost infinitely more sophisticated goal of a program forever writing improved and more "intelligent" iterations of itself probably shares very few or none of the same solutions as Mr. Hello.
― bamcquern, Thursday, 28 January 2016 02:24 (nine years ago)
I think the point is it can subtract any two numbers fed to it after it was fed only one combination, meaning it's learned an algorithm, not an if statement
― μpright mammal (mh), Thursday, 28 January 2016 02:40 (nine years ago)
one big takeaway from this thread is that people were REALLY into talking to jabberwacky 10 years ago
― Karl Malone, Thursday, 28 January 2016 02:45 (nine years ago)
i prefer a certain mysterious panda
― μpright mammal (mh), Thursday, 28 January 2016 02:55 (nine years ago)
https://public.tableau.com/profile/mckinsey.analytics#!/vizhome/AutomationandUSjobs/Technicalpotentialforautomation
― service desk hardman (El Tomboto), Thursday, 28 January 2016 03:46 (nine years ago)
bamcquern otm, strong ai is nothing without semantic understanding and that's as far away as ever. although as i basically think that such a thing is magic, i have to grant that it's not impossible that if we throw enough money and transistors and what have you at the problem, it will magically appear out of nowhere. if that does happen we'll almost certainly still be in the dark about what happened, how it happened, and whether it even did happen.
― ledge, Thursday, 28 January 2016 09:13 (nine years ago)
When I stumble on a conversation about AI and intelligence I often think of this post I read like two years ago:
"Personally, I predict that if we do succeed in inventing autonomous, free-thinking, self-aware, hyper-intelligent beings, they will do the really smart thing, and reprogram themselves to be Mountain Dew-guzzling Dungeons & Dragons-playing slackers. Or maybe fashion-obsessed 17-year-old Vancouver skater kids. Or the main character from the movie Amelie. Or something like this: "
http://noahpinionblog.blogspot.com/2014/02/the-slackularity.html
― rap is dad (it's a boy!), Thursday, 28 January 2016 14:36 (nine years ago)
will they write blogs to justify a lame pun as well?
― AdamVania (Adam Bruneau), Thursday, 28 January 2016 15:19 (nine years ago)
haha
― rap is dad (it's a boy!), Thursday, 28 January 2016 16:22 (nine years ago)
this really can't be stated enough, and tbf I have particular issues with appealing to idiots like Musk as any bellwether of anything (not that impressed w Hawking either tbh, esp when it comes to things outside of his area of expertise)
― Οὖτις, Thursday, 28 January 2016 17:54 (nine years ago)
i'm not captain save a musk but that guy seems smarter than you tbh
― rap is dad (it's a boy!), Thursday, 28 January 2016 18:03 (nine years ago)
i mean didn't he build a spaceship or something
― rap is dad (it's a boy!), Thursday, 28 January 2016 18:04 (nine years ago)
"idiots like Musk" c'mon.
Also, this is interesting: http://www.independent.co.uk/news/science/why-evolution-may-be-smarter-than-we-thought-a6839186.html
― schwantz, Thursday, 28 January 2016 18:10 (nine years ago)
i mentioned gates/hawking/woz/musk because they're more well known, but the letter they signed concerning autonomous weapons was also signed by hundreds of leading AI researchers.
i guess i don't fight back often enough (here or IRL), and i often shoot myself in the foot by talking shit on myself before others can, but to reduce the warnings of a bunch of leading researchers in the field to "smart celebrities" is kind of baloney
there's no way to prove that they're right or wrong - it's speculation about danger many decades away. it's pointless. so i'm not exactly tying my ego to the outcome of what a bunch of people think about this. but i do get incredibly annoyed by people who feign certainty about something that it is impossible to be certain about
― Karl Malone, Thursday, 28 January 2016 18:12 (nine years ago)
Musk does some good stuff (I am all for SpaceX) but then also comes up with and says a lot of dumb shit (Hyperloop) so yeah I don't have a ton of respect for him
― Οὖτις, Thursday, 28 January 2016 18:13 (nine years ago)
Autonomous weapons are scary! I think everyone agrees with that. Superintelligences turning the world into a massive typewriter (or w/e the example is) is maybe less of an imminent danger.
― conditional random jepsen (seandalai), Thursday, 28 January 2016 18:15 (nine years ago)
not everyone
Autonomous Weapons
― Karl Malone, Thursday, 28 January 2016 18:18 (nine years ago)
I'm no celebrity, but I'm pretty convinced that some deep, powerful AI is around the corner. A lot of money is being poured into this right now, and not just into brute-force type stuff (which, at its best, might be good enough to be mistaken for AI, but does seem, intuitively, to lack consciousness), but also into systems that model neural networks and more opaque systems (evolving FPGA systems, memristor-based circuits, massively-parallel machine-learning things) that, to me, seem likely to actually generate something more generally intelligent or even conscious.
Dismissing Google's search algorithm seems a bit hasty, considering how powerful it is. No, I wouldn't ever call it conscious, but (I think) it is definitely intelligent.
― schwantz, Thursday, 28 January 2016 18:23 (nine years ago)
in a large way navigating social media has made The Turing Test into an everyday banality
― AdamVania (Adam Bruneau), Thursday, 28 January 2016 18:30 (nine years ago)
Google's search algorithm is so smart that it allows people to follow links to sites infested with malicious code literally all the time. All the big technology companies in the world fight an endless battle against common criminals on a daily basis and haven't come up with a way to clear the web of malware being delivered via their own advertising networks. Do you understand why I don't have any confidence that any kind of impressive, stable AI is "around the corner?"
― service desk hardman (El Tomboto), Thursday, 28 January 2016 19:09 (nine years ago)
idk you're assuming google really cares whether you get malware
― μpright mammal (mh), Thursday, 28 January 2016 19:14 (nine years ago)
might be a selling point for their users
― Οὖτις, Thursday, 28 January 2016 19:21 (nine years ago)
but I wouldn't know since I don't use Google lol
So now the bar is "smarter than teams of malware developers?"
― schwantz, Thursday, 28 January 2016 19:23 (nine years ago)
I'm just saying that it's hard for me not to be impressed when I type in a couple of words, and within 8ms Google returns the page/video/news article I was looking for.
― schwantz, Thursday, 28 January 2016 19:24 (nine years ago)
Okay, so you're a rocket scientist
That don't impress me, Musk
So you got the brain but have you got the touch
Don't get me wrong, yeah I think SpaceX's alright
But that won't keep me warm in the middle of the night
That don't impress me, Musk
― I expel a minor traveler's flatulence (Sufjan Grafton), Thursday, 28 January 2016 19:35 (nine years ago)
lol
― conditional random jepsen (seandalai), Friday, 29 January 2016 00:33 (nine years ago)
otm
― bicyclescope (mattresslessness), Friday, 29 January 2016 00:36 (nine years ago)
i'm going to read this thread properly because i strongly suspect you are all wrong
but andrew ng's 'worrying about evil AI is like worrying about overpopulation on mars' is, i think, a useful way of thinking about how productive this debate is right now
(although i heard that at NIPS last month he changed this to 'worrying about overpopulation on alpha centauri')
― 𝔠𝔞𝔢𝔨 (caek), Friday, 29 January 2016 15:51 (nine years ago)
glad to see the string theory experts posting here
― 𝔠𝔞𝔢𝔨 (caek), Friday, 29 January 2016 15:53 (nine years ago)
i openly will attest to being wrong
― μpright mammal (mh), Friday, 29 January 2016 15:58 (nine years ago)