Artificial intelligence still has some way to go

One thing I find interesting is that every time I watch, all the commenters gender Estragon as female. Both bots have gendered themselves as different genders over time, and both have male names. Though I guess not many people would know the name 'Estragon' unless they're Beckett fans, maybe they think it's like oestrogen or something.

emil.y, Saturday, 7 January 2017 20:11 (seven years ago) link

Don't these all work in the same way, ie by taking stored conversations from humans v chatbots and using those responses? Seems fairly straight forward, but still funny imo.

Ste, Saturday, 7 January 2017 20:12 (seven years ago) link

um, isn't it as simple as Estragon having a female voice?

Number None, Saturday, 7 January 2017 20:14 (seven years ago) link

Oh, ha, that's fair enough - I have the sound off so didn't get that.

emil.y, Saturday, 7 January 2017 20:17 (seven years ago) link

lol

Ste, Saturday, 7 January 2017 20:17 (seven years ago) link

The most interesting thing to me here is how well the conversation flows, despite the non sequiturs every second or third response. These bots are starting to reflect how humans engage in non sequiturs while conversing.

― a little too mature to be cute (Aimless), Saturday, January 7, 2017 2:18 PM (three hours ago)

https://en.wikipedia.org/wiki/Pareidolia

๐” ๐”ž๐”ข๐”จ (caek), Saturday, 7 January 2017 22:38 (seven years ago) link

First Skynet came for the cynics, and I said nothing...

rb (soda), Saturday, 7 January 2017 22:42 (seven years ago) link

you rang?

pareidolia, Saturday, 7 January 2017 23:02 (seven years ago) link

They're doing a who's on first routine right now!

Evan, Saturday, 7 January 2017 23:25 (seven years ago) link

http://s.mlkshk-cdn.com/r/1ARA1

Dan I., Sunday, 8 January 2017 04:52 (seven years ago) link

Don't these all work in the same way, ie by taking stored conversations from humans v chatbots and using those responses? Seems fairly straight forward, but still funny imo.

― Ste, Saturday, January 7, 2017 3:12 PM (yesterday)


yeah that's exactly how it works. they pick up keywords and then have a database of all the ways humans responded
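
in code terms it's roughly this shape - a toy sketch only, the log "database" and the keyword matching here are made up for illustration, not anybody's actual implementation:

import random
import re

# pretend "database": past human replies keyed by a keyword they followed
CHAT_LOG = {
    "favorite": ["pizza, obviously", "i don't really have a favorite", "blue, probably"],
    "music": ["mostly beckett audiobooks tbh", "anything but chatbot-core"],
    "hello": ["hi there", "hello yourself"],
}
FALLBACKS = ["interesting.", "why do you say that?", "tell me more."]

def reply(message):
    words = re.findall(r"[a-z']+", message.lower())
    for keyword, past_replies in CHAT_LOG.items():
        if keyword in words:
            return random.choice(past_replies)   # reuse whatever humans said before
    return random.choice(FALLBACKS)              # ELIZA-style deflection when nothing matches

print(reply("what is your favorite color"))
print(reply("do you like music?"))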

ciderpress, Sunday, 8 January 2017 17:45 (seven years ago) link

thats why they spend so much time talking about 'favorite ____' since thats kinda level 1 conversation that people try w/ bots

ciderpress, Sunday, 8 January 2017 17:48 (seven years ago) link

So at some point maybe humans will learn how to be less boring and predictable in chat, and then bots will finally be able to take over?

The beaver is not the bad guy (El Tomboto), Sunday, 8 January 2017 18:06 (seven years ago) link

thepandamystery was so much better

mh, Sunday, 8 January 2017 18:38 (seven years ago) link

What if we could get two teams of humans pretending to be AIs to chat with each other?

Brb have to write terrible novelette

The beaver is not the bad guy (El Tomboto), Sunday, 8 January 2017 18:52 (seven years ago) link

too late. i spent a lot of time on one of those back in high school:

https://en.wikipedia.org/wiki/Q%26A_website#Forum_2010

remy bean, Sunday, 8 January 2017 19:15 (seven years ago) link

Wait, I linked to the wrong thing. Forum 2000 (forum200.org) was, for you younglings, "a front end to a sophisticated expert system incorporating the latest breakthroughs in natural language and neural network research", providing a group of AI simulations of various celebrities to answer your most pressing questions. It was also a hilarious, four-year hoax.

http://andrej.com/quadratic.html

remy bean, Sunday, 8 January 2017 19:21 (seven years ago) link

two weeks pass...

I was recently invited to meet with 3ric Schm1dt to talk about my lab's work. * We spent some time talking about DeepMind's work on AlphaGo.

While reinforcement learning was responsible for AlphaGo's success, he bemoaned that press and researchers alike overlooked the critical role expert systems and planning played. He claimed expert system-like rules were used to keep the system from searching branches where complexity outweighed usefulness.
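
the general shape of "hand-written rules pruning a search" is something like this toy sketch (the game and both rules are invented for illustration; this is not AlphaGo's code):

# Toy game: players alternately add 1, 2 or 3 to a total; reaching exactly 21 wins.
def legal_moves(total):
    return [m for m in (1, 2, 3) if total + m <= 21]

def rule_filter(total, moves):
    # expert-style rules that cut branches before the search ever looks at them
    winning = [m for m in moves if total + m == 21]
    if winning:                  # rule 1: a winning move makes every other branch pointless
        return winning
    # rule 2: never leave the opponent one move away from 21
    safe = [m for m in moves if not any(total + m + r == 21 for r in (1, 2, 3))]
    return safe or moves         # fall back to the full move list if every move is "bad"

def can_force_win(total):
    # plain game-tree search over the pruned move list
    for m in rule_filter(total, legal_moves(total)):
        if total + m == 21 or not can_force_win(total + m):
            return True
    return False

print(can_force_win(0))   # True: the first player can force a win in this toy game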

Likewise, I saw a presentation by C4r0l1na W4h1by. While her work is rooted in classical computer vision (e.g. discovering useful shape and texture features), her presentation was about her recent work building neural networks for image segmentation problems.

Instead of training her network with images and annotations, she paired images with pre-computed features (e.g. Histogram of Oriented Gradients or a response from a Gabor filter) she knew improve segmentation performance. If you think about it, she built an inverse expert system. While she wrote rules for pre-processing, she relied on the network for processing. It worked well.
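
roughly what that looks like in code - a toy sketch only, with made-up filters, toy labels and a small off-the-shelf classifier standing in for her network; this is not her actual pipeline:

import numpy as np
from skimage import data, filters, util
from sklearn.neural_network import MLPClassifier

image = util.img_as_float(data.camera())          # sample grayscale image

# "expert" pre-processing: Gabor responses at a few orientations
feature_maps = [image]
for theta in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
    real, _ = filters.gabor(image, frequency=0.2, theta=theta)
    feature_maps.append(real)
features = np.stack(feature_maps, axis=-1)        # (H, W, n_features) per-pixel features

# toy "annotation": call bright pixels foreground, just to have labels to fit against
labels = (image > image.mean()).astype(int)

# subsample pixels so the demo trains in seconds
rng = np.random.default_rng(0)
idx = rng.choice(image.size, size=5000, replace=False)
X = features.reshape(-1, features.shape[-1])[idx]
y = labels.reshape(-1)[idx]

# the learned part: a small network mapping pre-computed features to a class per pixel
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0).fit(X, y)
segmentation = clf.predict(features.reshape(-1, features.shape[-1])).reshape(image.shape)
print(segmentation.shape, segmentation.mean())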

* If you're curious, I assumed he'd meet with dozens of researchers, but there were six of us. It was a brag-worthy experience. I was surprised he was familiar with the state of the art in my fields (I, like most of my colleagues, am usually behind because we're focused on discrete problems and can't stay informed about everything), and his advice was super valuable. In fact, one piece of advice convinced me to change course in a project that was already underway.

Allen (etaeoe), Sunday, 22 January 2017 23:52 (seven years ago) link

did you ask him why they killed google reader?

nah seriously very cool! i'm starting a project on interpretability right now and yes, all roads lead to expert systems.

i have been doing mostly probabilistic programming recently though, and it has been wonderful not to think about neural networks.

๐” ๐”ž๐”ข๐”จ (caek), Monday, 23 January 2017 00:49 (seven years ago) link

One of you please move to Seattle and then hire me

slathered in cream and covered with stickers (silby), Monday, 23 January 2017 01:35 (seven years ago) link

Did anyone see the show with Mia out of Humans where they made an AI bot of her, and then got people to interview her via Skype? Some people were fooled! It was quite odd. They are getting closer to undoing that uncanny valley thing with facial expressions.

Stoop Crone (Trayce), Monday, 23 January 2017 05:00 (seven years ago) link

One of you please move to Seattle and then hire me

I TRIED TO HIRE YOU

Allen (etaeoe), Thursday, 26 January 2017 19:02 (seven years ago) link

I know :( I shoulda followed through on that if only to get to talk

slathered in cream and covered with stickers (silby), Thursday, 26 January 2017 19:36 (seven years ago) link

work is better now than it was then tho and also we're…probably not going to abruptly run out of money again for a couple years

slathered in cream and covered with stickers (silby), Thursday, 26 January 2017 19:37 (seven years ago) link

lol @ these people https://openreview.net/forum?id=BkjLkSqxg

๐” ๐”ž๐”ข๐”จ (caek), Monday, 6 February 2017 19:59 (seven years ago) link

I like the conspiracy angle -- you don't like our ideas because other people on social media said bad things about them!

mh, Monday, 6 February 2017 20:13 (seven years ago) link

lots of people think apples are good, you like apples, therefore you must have talked to lots of people

๐” ๐”ž๐”ข๐”จ (caek), Monday, 6 February 2017 20:17 (seven years ago) link

is that somewhere on https://en.wikipedia.org/wiki/List_of_fallacies

๐” ๐”ž๐”ข๐”จ (caek), Monday, 6 February 2017 20:19 (seven years ago) link

it's very close to gamerg4te logic -- you think it's weird that this game only has women in bikinis or in non-speaking parts, and reviewed it lower because of it, so obviously you're in league with a vast online conspiracy

mh, Monday, 6 February 2017 20:20 (seven years ago) link

got into a youtube hole of watching AIs play video games the other day after skimming this overly technical blog post:

https://srconstantin.wordpress.com/2017/01/28/performance-trends-in-ai/

Interestingly enough, here's a video of a computer playing Breakout:

https://www.youtube.com/watch?v=UXgU37PrIFM

It obviously doesn't "know" the law of reflection as a principle, or it would place the bar near where the ball will eventually land, and it doesn't. There are erratic jerky movements that obviously could not in principle be optimal. It does, however, find the optimal strategy of tunnelling through the bricks and hitting the ball behind the wall. This is creative learning but not conceptual learning.

You can see the same phenomenon in a game of Pong:

https://www.youtube.com/watch?v=YOW8m2YGtRg
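
fwiw, the "place the bar where the ball will land" baseline that post alludes to is just geometry - a toy sketch with made-up screen dimensions, nothing like what the network actually learns:

def landing_x(x, y, vx, vy, width, paddle_y):
    # predict where the ball crosses paddle_y, folding reflections off the side walls
    if vy <= 0:
        return x                       # ball moving away from the paddle; just hold position
    t = (paddle_y - y) / vy            # time until the ball reaches paddle height
    raw = x + vx * t                   # straight-line x, ignoring the walls
    period = 2 * width
    folded = raw % period              # fold the unbounced path back onto the screen
    return folded if folded <= width else period - folded

def paddle_action(paddle_x, x, y, vx, vy, width=160, paddle_y=190):
    target = landing_x(x, y, vx, vy, width, paddle_y)
    if abs(target - paddle_x) < 2:
        return "stay"
    return "right" if target > paddle_x else "left"

print(paddle_action(paddle_x=80, x=40, y=50, vx=3, vy=2))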

flopson, Monday, 6 February 2017 20:25 (seven years ago) link

https://worldwritable.com/ethical-imperatives-in-ai-and-generative-art-b8cf51af4c5#.giwlo1ryo

I'm increasingly of the opinion that art projects or experiments that deliberately obfuscate the distinction between man and machine do more harm than good. It was a mild disappointment when the mysterious spambot @horse_ebooks turned out to be a stunt - it was 2012 and it just meant that a little magic went out of the world.

I was less forgiving of SeeBotsChat, a recent livestream featuring two Google Home devices talking to each other. The livestream was entertaining but the dialogue was too good to be completely generative. Nevertheless, the media reported it as being a performance by two AIs, and many people assumed that this was just how Google Home works out of the box. The creators did not immediately disclose how it worked:

Eventually they revealed what some had guessed: the devices were using a service called Cleverbot (without permission, one reason the creators were initially coy). Cleverbot isn't fancy: it remixes 20 years of human chat logs and is more like a turbo-charged ELIZA than artificial intelligence. The dialogue in SeeBotsChat was entertaining because it was written by people, but the creators positioned the devices as emerging consciences. It worries me that thousands of people watched the livestream, didn't catch the later disclosure, and came away thinking, "This is what AI can do."

๐” ๐”ž๐”ข๐”จ (caek), Thursday, 16 February 2017 04:30 (seven years ago) link

re: SeeBotsChat i assumed it was just that, remixes of human chat logs, and was fine w it being that. it's still novel and interesting to me. i didn't really get that it was "positioned as emerging consciences", esp given that it was on twitch.

i don't really care about ethics in AI/Generative Art.

AdamVania (Adam Bruneau), Thursday, 16 February 2017 22:32 (seven years ago) link

one month passes...

https://www.youtube.com/watch?v=h1E-FlguwGw

Bobson Dugnutt (ulysses), Tuesday, 28 March 2017 22:14 (seven years ago) link

good for them, that's what I've been trying to explain to people for a couple of years but whatever, ROBOT CARS

Not the real Tombot (El Tomboto), Thursday, 30 March 2017 01:00 (seven years ago) link

three weeks pass...

https://lyrebird.ai/demo

๐” ๐”ž๐”ข๐”จ (caek), Monday, 24 April 2017 14:21 (seven years ago) link

that is really something. did you see the adobe demo from a few months back, offering similar capabilities? it's nice that this one is open source.

worth reading the Ethics section:

Lyrebird is the first company to offer a technology to reproduce the voice of someone as accurately and with as little recorded audio. Such a technology raises important societal issues that we address in the next paragraphs.

Voice recordings are currently considered as strong pieces of evidence in our societies and in particular in jurisdictions of many countries. Our technology questions the validity of such evidence as it allows to easily manipulate audio recordings. This could potentially have dangerous consequences such as misleading diplomats, fraud and more generally any other problem caused by stealing the identity of someone else.

By releasing our technology publicly and making it available to anyone, we want to ensure that there will be no such risks. We hope that everyone will soon be aware that such technology exists and that copying the voice of someone else is possible. More generally, we want to raise attention about the lack of evidence that audio recordings may represent in the near future.

strange times.

Karl Malone, Monday, 24 April 2017 15:50 (seven years ago) link

and although i'm sure someone will chime in to make the bold claim that the synthesized speech sounds robotic and that AI is a joke, i think it already sounds really good, particularly in how it auto-generates different intonations for the same snippet of text.

Karl Malone, Monday, 24 April 2017 15:53 (seven years ago) link

huh, I know the new Adobe voice tool is supposed to be able to create new audio given a sample of someone's speech, sounds like it's becoming a populated space

a landlocked exclave (mh), Monday, 24 April 2017 15:54 (seven years ago) link

oooh kay, I listened to the demo and it's not quite as good as I expected

a landlocked exclave (mh), Monday, 24 April 2017 15:56 (seven years ago) link

Voice recordings are currently considered as strong pieces of evidence in our societies and in particular in jurisdictions of many countries. Our technology questions the validity of such evidence as it allows to easily manipulate audio recordings. This could potentially have dangerous consequences such as misleading diplomats, fraud and more generally any other problem caused by stealing the identity of someone else.

By releasing our technology publicly and making it available to anyone, we want to ensure that there will be no such risks. We hope that everyone will soon be aware that such technology exists and that copying the voice of someone else is possible. More generally, we want to raise attention about the lack of evidence that audio recordings may represent in the near future.

lol what kind of logic is this. "We're doing this thing that's probably dangerous and has lots of unforeseeable legal consequences, possibly major - but if *everyone* can do it then maybe that will work just like nuclear deterrent and *nobody* will do it! Here you go everybody!"

Οὖτις, Monday, 24 April 2017 15:58 (seven years ago) link

shakey i think the idea is that it's open knowledge in the ML community that that technology is not just possible but extant and in use

๐” ๐”ž๐”ข๐”จ (caek), Monday, 24 April 2017 16:07 (seven years ago) link

xpost
i don't think their argument was that by making it open source, nobody would choose to use it. i think the argument is that by making it easy for everyone to do, it would throw the validity of ALL voice recordings into doubt.

the only way that the argument makes sense is if you accept as a given that the technology will exist and that at least some people will have access to it. if that's the case, and voice recordings are still accepted as a form of identification, then a situation exists where some people are able to fraudulently use synthesized voice recordings because others (credit card help line operators, judges, etc) remain clueless that the technology even exists. given that scenario, they think that offering the technology to everyone, open source, is a better alternative because everyone will realize that no voice recording can be trusted.

i'm not sure about that line of reasoning, but it's a little bit different than nuclear deterrence

Karl Malone, Monday, 24 April 2017 16:10 (seven years ago) link

i don't think they've made it open source btw

๐” ๐”ž๐”ข๐”จ (caek), Monday, 24 April 2017 16:13 (seven years ago) link

I think it's a twofold initiative: on one hand, publicizing the existence of the technology for broad distribution puts it in the public eye and invites scrutiny in cases where a convincing audio recording may be taken for granted, in situations legal or not. On the other hand, it focuses that wave of interest on their particular project, which will either benefit by increased publicity or an increase in contributors and integrators.

There's also the catch-22 of putting it out there in that increased analysis will both discover techniques that allow you to discriminate between generated audio and a legitimate recording, while giving developers the list of discernible differences they need to eliminate

a landlocked exclave (mh), Monday, 24 April 2017 16:16 (seven years ago) link

There's also the catch-22 of putting it out there in that increased analysis will both discover techniques that allow you to discriminate between generated audio and a legitimate recording, while giving developers the list of discernible differences they need to eliminate

someone beat you to this idea

https://www.wired.com/2017/04/googles-dueling-neural-networks-spar-get-smarter-no-humans-required/
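
the two-network setup in that piece, boiled down to a toy (1-D numbers instead of audio, tiny nets, everything here is invented for illustration):

import torch
from torch import nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))   # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))   # sample -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0          # the "legitimate recordings": samples from N(3, 0.5)
    fake = G(torch.randn(64, 8))

    # discriminator: learn to tell real from generated
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # generator: learn to erase whatever differences the discriminator found
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())        # should drift toward 3.0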

๐” ๐”ž๐”ข๐”จ (caek), Monday, 24 April 2017 16:20 (seven years ago) link

I did not propose codifying this in an AI but I appreciate their initiative

a landlocked exclave (mh), Monday, 24 April 2017 16:21 (seven years ago) link

