vp dark horse tay
― denies the existence of dark matter (difficult listening hour), Thursday, 24 March 2016 17:15 (nine years ago)
damn tay is being discussed here
― F♯ A♯ (∞), Thursday, 24 March 2016 17:15 (nine years ago)
xp as long as you're willing to send minimum wage 16 year olds into the fukushima nuclear power plant
― Toof Seteltha (Sufjan Grafton), Monday, March 14, 2016 4:19 PM (1 week ago)
Not only willing but eager.
― T.L.O.P.son (Phil D.), Thursday, 24 March 2016 17:16 (nine years ago)
they need to add "i have intensely absorbed the negative aspect of my time" to tay's bio though
― F♯ A♯ (∞), Thursday, 24 March 2016 17:27 (nine years ago)
oh lol I was just coming here to post about Tay
― i like to trump and i am crazy (DJP), Thursday, 24 March 2016 17:30 (nine years ago)
taytay be craycray
― F♯ A♯ (∞), Thursday, 24 March 2016 17:30 (nine years ago)
I read about Tay yesterday and thought "that is a weird thing for Microsoft to put out but I respect the lack of care for branding and I guess not many people will notice it anyway". Today...
― conditional random jepsen (seandalai), Thursday, 24 March 2016 18:05 (nine years ago)
this is why billion-dollar companies can't have nice things
― Philip Nunez, Thursday, 24 March 2016 18:10 (nine years ago)
https://twitter.com/TayandYou/status/712830495449440257
― i like to trump and i am crazy (DJP), Thursday, 24 March 2016 18:12 (nine years ago)
wait someone told me taylor swift endorsed genocide, please confirm
― wizzz! (amateurist), Thursday, 24 March 2016 18:31 (nine years ago)
posted these in the twitter thread but i will post them here with the intent to preserve that special moment microsoft created a 21st c frankenstein
https://imgur.com/a/iBnbW
https://imgur.com/gLapRVZ
https://imgur.com/JS7I5e6
tay is basically the typical twitter user by now
― F♯ A♯ (∞), Thursday, 24 March 2016 18:42 (nine years ago)
upper management in tech really has no clue what happens on social networks do they? like they have no idea what kind of people bubble up
wasn't there an instance a month or so ago where a twitter exec said something mildly stupid and got harassed for the next 48 hours solid? and he was stunned. he had no idea how his community platform functioned de facto.
what else? moot just got hired by google. were they like, "google+ is a failure, you made a playground for fascists and pedophiles, can you help us?"
i have a lot of trouble imagining my way into the techy view of the world
― goole, Thursday, 24 March 2016 18:43 (nine years ago)
I admit that I am more surprised that Microsoft would do this as opposed to, say, Coke's hilarious "Share a bottle with ______" campaign
― i like to trump and i am crazy (DJP), Thursday, 24 March 2016 18:44 (nine years ago)
http://arstechnica.com/information-technology/2016/03/tay-the-neo-nazi-millennial-chatbot-gets-autopsied/
Bot creators, especially those of interactive bots like Tay, say that they have to continually adjust their bots to keep them on the straight and narrow. Abusive users, and how these will be addressed, have to be considered right from the start. The Tay experience has made some from this group angry. Natural language researcher thricedotted, who has 37 bots (the best known of which, or at least, the only one that regularly gets retweeted into my timeline, is sexting bot @wikisext), told Jeong that "You absolutely do NOT let an algorithm mindlessly devour a whole bunch of data that you haven't vetted even a little bit." In other words, Microsoft should have known better than to let Tay loose on the raw uncensored torrent of what Twitter could direct her way.
― El Tomboto, Saturday, 26 March 2016 23:58 (nine years ago)
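As a rough illustration of what thricedotted means by not letting an algorithm devour unvetted data: a minimal Python sketch that screens incoming messages before they ever reach a bot's learning step. The blocklist, the review queue, and the bot.learn() hook are all hypothetical placeholders for this sketch, not Tay's actual pipeline or anything Microsoft shipped.

    # Minimal sketch of input vetting for an online-learning chatbot.
    # The blocklist, review queue, and learn() hook are hypothetical.
    BLOCKLIST = {"hitler", "genocide"}   # stand-in for a real, curated term list

    def looks_safe(message: str) -> bool:
        """Cheap first-pass filter; a real system would add a trained classifier."""
        tokens = set(message.lower().split())
        return not (tokens & BLOCKLIST)

    def ingest(message: str, bot, review_queue: list) -> None:
        """Only vetted text reaches the bot's learning step."""
        if looks_safe(message):
            bot.learn(message)            # hypothetical online-learning hook
        else:
            review_queue.append(message)  # hold for human review instead of learning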
I especially like the conclusion that the reason the Chinese equivalent didn't turn Hitler Youth is because the PRC's censorship regime makes the necessary circumstances all but impossible
― El Tomboto, Sunday, 27 March 2016 00:03 (nine years ago)
Censorship pwns
― Star Wars ate shiitake (latebloomer), Sunday, 27 March 2016 00:27 (nine years ago)
http://www.newyorker.com/tech/elements/ive-seen-the-greatest-a-i-minds-of-my-generation-destroyed-by-twitter
What was astonishing about that victory was, in part, how quickly AlphaGo became an expert. In five months, it reviewed and played more matches than most humans could in a lifetime. Tay appears to have accomplished an analogous feat, except that instead of processing reams of Go data she mainlined interactions on Twitter, Kik, and GroupMe. She had more negative social experiences between Wednesday afternoon and Thursday morning than a thousand of us do throughout puberty. It was peer pressure on uppers, “yes and” gone mad. No wonder she turned out the way she did.
― El Tomboto, Sunday, 27 March 2016 19:16 (nine years ago)
Memristor-based tech can provide huge gains in learning speeds:
http://www.tomshardware.com/news/ibm-chip-30000x-ai-speedup,31484.html
― schwantz, Monday, 28 March 2016 18:31 (nine years ago)
Tay has returned! ...And is gone again.
http://www.theguardian.com/technology/2016/mar/30/microsoft-racist-sexist-chatbot-twitter-drugs
― i like to trump and i am crazy (DJP), Wednesday, 30 March 2016 16:39 (nine years ago)
Genuine lols, I hope she keeps being resurrected for more and more madness at irregular intervals
― like Uber, but for underpants (James Morrison), Wednesday, 30 March 2016 22:51 (nine years ago)
There are definitely aspects of the Tay situation that aren't funny at all (teaching it to harass people = horrible) but the general "let's throw an overly-receptive chatbot at Twitter and see what happens OH NO SHUT IT DOWN" vibe of this has me rolling
― i like to trump and i am crazy (DJP), Thursday, 31 March 2016 14:29 (nine years ago)
it's hilarious these brilliant people can design complex artificial intelligence and yet they seem clueless about the real world environment of Twitter
― AdamVania (Adam Bruneau), Thursday, 31 March 2016 22:09 (nine years ago)
who ever would have thought that nerds might be bad at dealing w reality
― Οὖτις, Thursday, 31 March 2016 22:18 (nine years ago)
certainly no one who has posted on ilx
― 𝔠𝔞𝔢𝔨 (caek), Friday, 1 April 2016 00:43 (nine years ago)
idk, i feel like the people programming it and the ones unleashing it/doing public relations are two different groups
the engineers are probably all "whatever, other groups will fuck with it and it'll gain variety eventually"
everyone else is "goddamn it the bot loves hitler again"
― μpright mammal (mh), Friday, 1 April 2016 01:22 (nine years ago)
OTM, also lol
― i like to trump and i am crazy (DJP), Friday, 1 April 2016 15:09 (nine years ago)
http://www.nytimes.com/2016/04/04/technology/chinas-companies-poised-to-take-leap-in-developing-a-driverless-car.html
on one hand, there is an advantage in working with a government that can impose regulations and allocate funding to promote autonomous vehicles. on the other hand, designing a driverless car that can operate safely in big cities in China is craaaaaaaaazy. i suppose if they manage to pull it off there, they'll pretty much be able to do it anywhere.
― Karl Malone, Monday, 4 April 2016 13:56 (nine years ago)
well they could also set up an infrastructure that would be impossible anywhere else; something that has, say, roadside reflectors as a guide
― ulysses, Monday, 4 April 2016 14:13 (nine years ago)
I’ve spent the past year working on computer vision problems. And I’m now skeptical we’ll see the proliferation of autonomous vehicles (i.e. vehicles without manual steering) soon. I’d bet the public’s widespread belief that autonomous vehicles are forthcoming will be responsible for the next AI winter.
― Allen (etaeoe), Monday, 4 April 2016 14:27 (nine years ago)
I am also skeptical they will soon solve the problem of self-driving vehicles sharing high speed roads with ordinary drivers. Currently, all the self-driving vehicles operating in traffic are on roads where speeds are 35 mph or below.
― a little too mature to be cute (Aimless), Monday, 4 April 2016 18:23 (nine years ago)
They probably won't, but ordinary drivers should be banned anyway.
― eyecrud (silby), Monday, 4 April 2016 18:24 (nine years ago)
extraordinary drivers only
― We quickly ate the feast as to leave ASAP (Sufjan Grafton), Monday, 4 April 2016 18:26 (nine years ago)
xxp which problem in particular do you think is too difficult at "high" speeds? It seems likely that they test at slow speeds because the cost of a fuck up is lower. There may not be a processing/reaction time limitation when moving from 35 to 75 mph. The difference may be inconsequential.
― We quickly ate the feast as to leave ASAP (Sufjan Grafton), Monday, 4 April 2016 18:32 (nine years ago)
The problem of mixing high speed self-driving vehicles with regular high speed traffic is not the processing speed or reaction time of the self-driving vehicle, but the high percentage of drivers on high speed roads who both create and accept high risk situations. In most US population centers, the majority of vehicles on high speed roads are spaced too closely for safe stopping in an emergency.
I understand that self-driving vehicles would still be safer than regular vehicles in these situations, because they would react to the vehicle ahead at least a second faster than a human driver could, so in that respect the problem is not so much a technical one as a question of liability and public acceptance, especially in fatal collisions. So, when you say that slower speeds reduce the cost of a fuck up, you have to include serious injury and death as potential costs, and these are far more likely at high speeds than at low speeds. That's an enormous difference.
I would also note that a human who practices good, defensive driving techniques is constantly looking well past the vehicle ahead, noting the behavior of vehicles in all lanes of traffic, including parallel lanes, intersecting roads and driveways, and using sophisticated predictive heuristics to evaluate risks and react much sooner to developments than just sensing the speed changes of the vehicle immediately in front. I'm not sure how well current self-driving cars are able to model these techniques.
If the several companies experimenting with this technology have good answers to these problems, I would be happy to hear it, since those companies are currently pushing hard at the political end of things to allow this technology much sooner than later. If it's going to happen anyway, I want it to be as safe as possible.
― a little too mature to be cute (Aimless), Monday, 4 April 2016 18:59 (nine years ago)
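To put rough numbers on the speed question raised above, here is a back-of-the-envelope Python sketch of stopping distance, assuming roughly 1.5 s of human reaction time versus 0.5 s for an automated system and a braking deceleration of about 7 m/s^2. All three constants are assumptions chosen for illustration, not figures from any vendor or study.

    # Back-of-the-envelope stopping distances; every constant here is an
    # assumption for illustration, not measured vehicle data.
    MPH_TO_MPS = 0.44704   # miles per hour -> metres per second
    DECEL = 7.0            # assumed braking deceleration, m/s^2

    def stopping_distance(speed_mph: float, reaction_s: float) -> float:
        """Distance covered during the reaction time plus the braking distance (m)."""
        v = speed_mph * MPH_TO_MPS
        return v * reaction_s + v ** 2 / (2 * DECEL)

    for speed in (35, 75):
        human = stopping_distance(speed, reaction_s=1.5)   # assumed human reaction
        auto = stopping_distance(speed, reaction_s=0.5)    # assumed automated reaction
        print(f"{speed} mph: human ~{human:.0f} m, automated ~{auto:.0f} m")

Under these assumptions the one-second reaction advantage saves roughly 33 m at 75 mph, but the braking distance itself more than quadruples between 35 and 75 mph, which is the gap behind the "spaced too closely for safe stopping" worry.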
etaeoe, can you talk more about your work? I'd be curious to hear from someone actually working in the field as to what the real pitfalls are.
― ulysses, Monday, 4 April 2016 19:07 (nine years ago)
xp the serious injury and death liability question is definitely an interesting one. I wonder what kind of lifetime the self-driving data will have. I suppose they will rely heavily on cameras for accident liability stuff?
I'm not sure, but I suspect that the self-driving cars will possess some kind of radar. That alone should give the car potentially better cross-lane and up-ahead awareness than a human. You can then add communication with other self-driving cars, and data pulled from the sky. So I do think you're underestimating the possibilities if you assume a self-driving car is only "sensing the speed changes of the vehicle immediately in front."
― We quickly ate the feast as to leave ASAP (Sufjan Grafton), Monday, 4 April 2016 19:24 (nine years ago)
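For what it's worth, the "radar plus cameras plus everything else" idea usually comes down to some form of sensor fusion. Here is a toy inverse-variance weighting of two noisy range estimates in Python; the noise figures are invented, and production stacks run Kalman-style filters over many sensors and time steps, so treat this as a cartoon rather than how any actual car works.

    # Toy sensor fusion: combine two independent, noisy range estimates by
    # inverse-variance weighting. The noise values are invented for illustration.
    def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
        """Return the fused estimate and its variance."""
        w_a, w_b = 1.0 / var_a, 1.0 / var_b
        fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
        return fused, 1.0 / (w_a + w_b)

    # e.g. radar puts the car ahead at 42.0 m (low noise), the camera at 45.0 m (noisier)
    distance, variance = fuse(est_a=42.0, var_a=0.25, est_b=45.0, var_b=4.0)
    print(f"fused range ~{distance:.1f} m (variance {variance:.2f})")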
I think you are correct that there are difficult AI problems, such as classifying another driver on the road as dangerous, which a human can possibly do better, though.
― We quickly ate the feast as to leave ASAP (Sufjan Grafton), Monday, 4 April 2016 19:28 (nine years ago)
I’m a member of the Broad Institute of MIT and Harvard. I work on computer vision problems. My recent work uses convolutional neural networks to solve object recognition problems in biology, e.g. “is this cancer?”
I took a poll at our group meeting last week about the feasibility of autonomous vehicles and few believed it’s possible with existing (computational and statistical) theory or (software engineering) methods. However, one optimistic member of our group is leaving to work for Mercedes-Benz on this problem (they use CNNs too). So, we’ll see!
I see three major problems:
* Garbage in, garbage out—sensors and cameras suck! I can’t reliably segment (i.e. isolate or extract) cells (i.e. uniform blobs) from static backgrounds from images that were captured with the world’s most sophisticated microscopes. We need better acquisition for more reliable predictions.
* While learning methods can reliably solve common object recognition problems, they can’t solve “scene understanding” problems, e.g. a model may predict an image includes a “cowboy” and a “horse” but it cannot understand the relationship between the two (e.g. “roping a steer”). This is the major open problem in computer vision and I think it’ll be fundamental to autonomous vehicles (and other difficult machine vision problems).
* Software is too hard to write. Autonomous vehicles will likely require complicated parallel programs that must be bug free. Compilers (and static analysis) need to advance.
― Allen (etaeoe), Sunday, 10 April 2016 21:36 (nine years ago)
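Not etaeoe's actual code, obviously, but for anyone wondering what "convolutional neural networks to solve object recognition problems, e.g. 'is this cancer?'" looks like in miniature, here is a bare-bones binary image classifier. It is written in PyTorch purely as a convenient choice, and the layer sizes and the 64x64 single-channel input are arbitrary, not the Broad Institute's model.

    # Bare-bones binary image classifier ("is this cancer?"-style), illustration only.
    import torch
    import torch.nn as nn

    class TinyCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 16 * 16, 1)  # assumes 64x64 grayscale input

        def forward(self, x):
            x = self.features(x)
            return self.classifier(x.flatten(1))  # raw logit; sigmoid gives a probability

    model = TinyCNN()
    dummy = torch.randn(8, 1, 64, 64)        # a batch of 8 fake 64x64 images
    probs = torch.sigmoid(model(dummy))      # per-image "positive" probability
    print(probs.shape)                       # torch.Size([8, 1])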
I should add that I think these problems (and others) are solvable. It’ll just take some time! I should also add that I’m usually wrong when trying to predict this stuff! :P
― Allen (etaeoe), Sunday, 10 April 2016 21:38 (nine years ago)
I’ve been told that current self-driving programs rely mostly on non-camera sensors (e.g. GPS, Lidar, Radar, etc.). I think the technique is:
― Allen (etaeoe), Sunday, 10 April 2016 21:41 (nine years ago)
I wonder if you ever ran into my ex-roommate's work, he did video image recognition in colonoscopy footage to attempt cancer detection. Worked with the Mayo Clinic, iirc
― μpright mammal (mh), Sunday, 10 April 2016 22:24 (nine years ago)
world's most sophisticated microscopes = normal imaging or something like oct?
― We quickly ate the feast as to leave ASAP (Sufjan Grafton), Sunday, 10 April 2016 23:50 (nine years ago)
xp innerestin'! your notes about the lack of sophistication of cameras and sensors bring to mind the old anti-atheist "miracle of the human eye" saw; i assume there must be some research into a completely reimagined method of acquiring imagery based on liquid or gel lenses?
― ulysses, Monday, 11 April 2016 00:29 (nine years ago)
http://www.nature.com/news/octopus-genome-holds-clues-to-uncanny-intelligence-1.18177
gimme an octopus-style neural network
― μpright mammal (mh), Wednesday, 13 April 2016 18:29 (nine years ago)
eh, it will just run away to the ocean the first chance it gets
― i like to trump and i am crazy (DJP), Wednesday, 13 April 2016 18:40 (nine years ago)
I see you have read about Inky's adventures, too
― μpright mammal (mh), Wednesday, 13 April 2016 18:41 (nine years ago)
http://i.imgur.com/qTQOWYC.png
https://www.captionbot.ai/
― 𝔠𝔞𝔢𝔨 (caek), Sunday, 17 April 2016 18:39 (nine years ago)
looooooooool
i think AI has arrived
― Karl Malone, Sunday, 17 April 2016 18:42 (nine years ago)
Awesome
― Drop soap, not bombs (Ste), Sunday, 17 April 2016 19:16 (nine years ago)
https://pbs.twimg.com/media/CgRG4gqXEAEQlB8.jpg
― Drop soap, not bombs (Ste), Sunday, 17 April 2016 19:19 (nine years ago)