It'd be like possible to do some sort of certificate signing on every piece of media or whatever, but some asshole is going to insist on doing it with blockchain
anyway easier imo to stop consuming information
― I have measured out my life in coffee shop loyalty cards (silby), Tuesday, 30 October 2018 16:29 (five years ago) link
I know that we all hate the block chain, but since I'm a dummy could someone explain why it is so bad and hated?
― gbx, Tuesday, 30 October 2018 18:14 (five years ago) link
personally i've always thought it could have legitimate use as a way of enforcing contracts (or confirming identity, in this case), but then again, i'm a dummy 2
― Karl Malone, Tuesday, 30 October 2018 18:17 (five years ago) link
I mean it's not not a like interesting-ish thing from a distributed systems point of view but like bitcoin now uses more electricity than Peru to manufacture fictional libertarian value tokens
― I have measured out my life in coffee shop loyalty cards (silby), Tuesday, 30 October 2018 18:19 (five years ago) link
oh yeah, i mean cryptocurrency is fucking terrible, particularly the kind that relies on mining. but that's not blockchain.
― Karl Malone, Tuesday, 30 October 2018 18:28 (five years ago) link
utilities are pretty excited about blockchain, here's an example:
https://www.greentechmedia.com/articles/read/wepower-is-the-first-blockchain-firm-to-tokenize-an-entire-grid
― sleeve, Tuesday, 30 October 2018 18:30 (five years ago) link
is it the crypto specific application or blockchain itself that's not efficient at scale?
― ciderpress, Tuesday, 30 October 2018 18:32 (five years ago) link
the issue is that there's an incentive to join the distributed mining operation, and therefore people do in an attempt to turn electricity into libertarian value tokens, and the way the proof-of-work system works guarantees that a bunch of those miners will be burning a lot of cycles racing to verify transaction blocks but failing to do so.
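a toy sketch of that race (the block data and difficulty here are made up, and real bitcoin hashes block headers against a vastly harder target, but the mechanics are the same):

```python
import hashlib

def mine(block_data: str, difficulty: int, max_nonce: int = 2_000_000):
    """Search for a nonce such that sha256(data + nonce) starts with
    `difficulty` zero hex digits. Returns (nonce, digest), or None if
    this miner burned max_nonce hashes without winning."""
    target = "0" * difficulty
    for nonce in range(max_nonce):
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
    return None

result = mine("some transactions", difficulty=4)
```

every miner on the network runs this loop at the same time; one of them finds a valid nonce first, and every hash the losers computed (i.e. the electricity they paid for) is simply thrown away.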
― I have measured out my life in coffee shop loyalty cards (silby), Tuesday, 30 October 2018 18:46 (five years ago) link
idk I don't understand it but I instinctively loathe it
you guys know what MNIST is? it's 9.5MB of data. it would cost ~$100,000 to put that on the blockchain. so quite aside from the fact that the cryptocurrency applications are a scam, the actual practical distributed storage situation is ... not good.
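the back-of-envelope behind that number, with the per-byte fee an assumed round figure (real on-chain fees swing wildly with demand):

```python
# cost of storing raw bytes in bitcoin transactions, at an assumed ~1 cent/byte
mnist_bytes = 9.5e6        # the 9.5 MB figure above
usd_per_byte = 0.0105      # assumption: a round, historically plausible fee
cost = mnist_bytes * usd_per_byte
print(f"${cost:,.0f}")     # on the order of $100,000
```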
― 𝔠𝔞𝔢𝔨 (caek), Tuesday, 30 October 2018 18:53 (five years ago) link
Someone compiled a list of instances of AI doing what creators specify, not what they mean: https://t.co/OqoYN8MvMN pic.twitter.com/I1bgCFfO8c— Jim Stormdancer (@mogwai_poet) November 7, 2018
― ciderpress, Thursday, 8 November 2018 19:00 (five years ago) link
this is why you should never tell your robot to cure world hunger
― rip van wanko, Thursday, 8 November 2018 19:45 (five years ago) link
http://static.tvtropes.org/pmwiki/pub/images/tumblr_lm77uuW5G71qb98uxo1_500-2_dragged_6225.jpg
― nickn, Thursday, 8 November 2018 19:55 (five years ago) link
on the specific issue of deepfakes starting a war, i didn't find this essay as concretely comforting as i hoped i would, but it was interesting...
https://reallifemag.com/faked-out/
As long as mass media has existed in the West, there have been complaints about social acceleration, uncertainty, and the loss of a real, knowable world. In other words, our current conversations about the loss of reality are familiar; while each writer attempts to sound innovative, the concerns are evergreen. If the term “infocalypse” is useful, it is as a synonym for modernity, where truth is always two decades ago and dying today, and a new dark age always on the horizon.
― 𝔠𝔞𝔢𝔨 (caek), Wednesday, 21 November 2018 23:19 (five years ago) link
i guess the two things i think are
1) infocalypse looks better on a page, but infopocalypse rolls off the tongue a little easier
2) as the article itself says near the end:
This analysis echoes philosopher Georges Bataille’s notion of “nonknowledge,” that the creation of knowledge always implies a corresponding creation of new fields of ignorance. Every revolution in information is also a revolution in misinformation. Information isn’t a light that shines down to answer questions; it also produces new unknowns, and possibilities for the unknown. Giving people more data and more information is as likely to create new readings and narratives as to align or “correct” beliefs. As Rob Horning recently put it, “any information, no matter how damaging it may seem to a particular side, can be put to any political use by any side; in fact no fact has any intrinsic meaning.” With each of modernity’s new technologies we are brought to recognize anew that belief precedes data rather than follows from it.
i don't know how to quantify the amount of information that is out there. not only the amount, but our ability to create it and share it, at a level of quality where an invented lie can more or less pass as something that could be true. but it seems like at least a metric shit ton to me. concerns about truth (and the ability to manipulate it) aren't new, but the potential scale for abuse seems much larger (and more worrisome) now than ever before.
― Karl Malone, Wednesday, 21 November 2018 23:42 (five years ago) link
^^^
― 21st savagery fox (m bison), Wednesday, 21 November 2018 23:45 (five years ago) link
Someone took the neural net Yahoo trained to recognize NSFW pics and ran it "backwards" to generate images:
https://open_nsfw.gitlab.io/
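"Backwards" here means gradient ascent on the input rather than the weights: start from noise and push the pixels uphill on the classifier's score. A minimal sketch, with a toy linear scorer standing in for Yahoo's network (which is a large conv net, not this):

```python
def score(pixels, weights):
    """Stand-in classifier: a linear 'class score' over flattened pixels."""
    return sum(p * w for p, w in zip(pixels, weights))

def ascend(pixels, weights, lr=0.1, steps=50):
    """Optimize the *input* (the model stays fixed) uphill on the score."""
    pixels = list(pixels)
    for _ in range(steps):
        # for a linear score, d(score)/d(pixel_i) is just weights[i]
        pixels = [p + lr * w for p, w in zip(pixels, weights)]
    return pixels

weights = [0.5, -1.0, 2.0]        # a made-up "model"
image = ascend([0.0, 0.0, 0.0], weights)
```

with a real conv net the gradient comes from backprop, and deep-dream-style generators add jitter and smoothness penalties so the ascent produces textures rather than adversarial static, which is presumably why the desert pics look the way they do.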
― o. nate, Monday, 26 November 2018 02:28 (five years ago) link
whoa
― Trϵϵship, Monday, 26 November 2018 02:33 (five years ago) link
the desert ones are really good
― Karl Malone, Monday, 26 November 2018 02:34 (five years ago) link
i was just gonna say that. very surreal
― Trϵϵship, Monday, 26 November 2018 02:34 (five years ago) link
these are like, not based on specific images right? just representations of the schema the AI generated to recognize deserts, beaches, genitalia, etc? and that is based on like identifying patterns across billions of images right?
― Trϵϵship, Monday, 26 November 2018 02:37 (five years ago) link
so we are looking into the dreamwork of AI, i guess. if i recognize what is happening, which i may not because this involved equations.
― Trϵϵship, Monday, 26 November 2018 02:38 (five years ago) link
i think that's generally right, although i must admit i skipped past all the text and went straight for the desert penis pics
― Karl Malone, Monday, 26 November 2018 02:39 (five years ago) link
god that shit is horrifying, reminds me of the artwork for Chris Cunningham's Rubber Johnny
― an incoherent crustacean (MatthewK), Monday, 26 November 2018 03:12 (five years ago) link
it's extremely uncanny. a machine's vision of pornography, which doesn't even render actual bodies.
― Trϵϵship, Monday, 26 November 2018 03:21 (five years ago) link
this is from someone with a dog in the race, and the numbers are shaky, but the basic point seems sound to me: no autonomous vehicles any time soon, except in very simple situations (or, if the super rich are able to swing it, unless we establish legal no go zones for non-autonomous vehicles)
https://medium.com/may-mobility/the-moores-law-for-self-driving-vehicles-b78b8861e184
― 𝔠𝔞𝔢𝔨 (caek), Friday, 1 March 2019 18:34 (five years ago) link
That's a nice combination of simplified explanation and corporate self-promotion. I bet it is based on a presentation that CEO has given to dozens of VCs and banks.
― A is for (Aimless), Friday, 1 March 2019 19:37 (five years ago) link
I don't doubt any of it, but it does seem like a pretty convenient way for him to say essentially "It's okay that my company's cars are bad at autonomous driving (and aren't getting better fast enough), because everyone else's are too!"
― Dan I., Friday, 1 March 2019 19:44 (five years ago) link
yes, stipulated in my post (and his!). doesn't change the fact that he's right: it will be many decades before self driving cars actually work in the way typically claimed in the human environment we live in today.
― 𝔠𝔞𝔢𝔨 (caek), Friday, 1 March 2019 20:07 (five years ago) link
I realize that, if this article were attempting to be rigorous, it wouldn't derive a 'Moore's Law for Autonomous Vehicles' by citing two data points for autonomous cars and one data point for human-driven cars and graphing them. It's miles from any kind of rigor. But what it suggests is more convincing to me than the claims that such fully autonomous vehicles will be arriving much sooner.
― A is for (Aimless), Friday, 1 March 2019 20:20 (five years ago) link
Between human performance (10⁸ miles per fatality) and the best-reported self-driving car performance (10⁴ miles per disengagement) is a gap of 10,000x. Put another way, self-driving cars are 0.01% as good as humans.
AYFKM
This ... only works if every disengagement would have resulted in a fatality.
I mean this is all messy stuff, I'm not saying his time estimates on general self-driving are wrong. But if Waymo removed its safety drivers early I think we'd see a lot more stuck vehicles than fatalities.
― lukas, Tuesday, 12 March 2019 20:55 (five years ago) link
trump otm
In conversations on Air Force One and in the White House, Trump has acted out scenes of self-driving cars veering out of control and crashing into walls. https://t.co/RMUC3kcXKR— Jonathan Swan (@jonathanvswan) March 17, 2019
― 𝔠𝔞𝔢𝔨 (caek), Monday, 18 March 2019 17:36 (five years ago) link
lukas mentioned it above, but that medium article makes an incredibly disingenuous, apples-to-abstract-photography argument.
this is the graph that accompanies the quote:
https://i.imgur.com/POKRZWK.png
if we're asking self-driving cars to encounter a "disengagement" ("when the technology fails and a safety driver must take over") as often as a human being in a car has a fatality, then yes, it's going to be a while. but i'd argue that a better comparison would be human accidents to self-driving accidents. in the U.S., drivers have accidents roughly every 165,000 miles (a little over 10 to the 5th power). it's still unfair to compare human accidents to self-driving "disengagements", but even doing that makes things look quite a bit different:
https://imgur.com/IJ5wuxA
and just for fun, maybe it would be useful to compare self-driving "disengagements" to "human fuck ups". how often have you been in a car when the driver, perhaps yourself, did something catastrophically dumb but you escaped physical harm? personally, i have done so hundreds of times. i have had 1 accident in my life, and it was a low speed one and it was 99% the fault of a very confused elderly woman in a parking lot. anyway, i think i can conservatively estimate that at minimum, human drivers "fuck up" but escape physical harm 10 times more often than they have accidents. so that's at least an order of magnitude fewer miles between fuck-ups than between accidents, or around 10 to the 4th:
https://i.imgur.com/07LZsJo.jpg
so if the premise of the medium author's argument is right, self-driving cars should have started having "disengagements" less frequently than human fuck-ups a few years ago. i have no idea if that's true, but i think more than anything it suggests the whole argument is off
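putting those orders of magnitude side by side (the 10x fuck-up multiplier is the guess from above; everything else is quoted figures):

```python
# all order-of-magnitude figures; the 10x fuck-up rate is an assumption
miles_per_human_fatality = 1e8       # quoted in the medium piece
miles_per_human_accident = 165_000   # rough U.S. average
miles_per_human_fuckup = miles_per_human_accident / 10  # assumed 10x accidents
miles_per_disengagement = 1e4        # best self-driving figure quoted

gap_vs_fatalities = miles_per_human_fatality / miles_per_disengagement  # 10,000x headline
gap_vs_fuckups = miles_per_human_fuckup / miles_per_disengagement       # under 2x rebased
```

same disengagement data, but the gap shrinks by nearly four orders of magnitude just by picking a different human baseline, which is the point: the headline number is a choice, not a measurement.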
― but i'm there are fuckups (Karl Malone), Tuesday, 19 March 2019 00:19 (five years ago) link
Another point to consider is that if a significant fraction of cars were driving autonomously, the driving AIs might well communicate to each other, or with Google maps or w/e, so that driving speeds and routes might be chosen so as to avoid risky situations altogether.
― an incoherent crustacean (MatthewK), Tuesday, 19 March 2019 01:31 (five years ago) link
I hate AI. Kill it.
― Trϵϵship, Tuesday, 19 March 2019 01:34 (five years ago) link
the driving AIs might well communicate to each other, or with Google maps or w/e, so that driving speeds and routes might be chosen so as to avoid risky situations altogether
Can you imagine how much chaos could be created by hacking into that network?
― A is for (Aimless), Tuesday, 19 March 2019 01:35 (five years ago) link
Doesn’t mean they won’t make it
― Trϵϵship, Tuesday, 19 March 2019 01:43 (five years ago) link
Or fail to protect it adequately.
― A is for (Aimless), Tuesday, 19 March 2019 01:47 (five years ago) link
I don’t know what to tell you if you think self driving cars coordinating is a good idea in the present security environment.
― 𝔠𝔞𝔢𝔨 (caek), Tuesday, 19 March 2019 02:05 (five years ago) link
Indeed, but done right it could seed traffic with a kind of "swarm intelligence" so that autonomous vehicles were acting in ways which improved overall flow. It sounds a little creepy, but I have no problems with traffic lights being managed to do the same thing, or routers managing the network traffic I use. The collabortion (decided to let the typo stand) between Boeing and the FAA doesn't fill me with confidence though.
― an incoherent crustacean (MatthewK), Tuesday, 19 March 2019 02:06 (five years ago) link
remember the robot bees yall
― Trϵϵship, Tuesday, 19 March 2019 02:07 (five years ago) link
remember _KILLER_ bees yall, like when they were gonna be something that killed everyone
― Fuck the NRA (ulysses), Tuesday, 19 March 2019 02:27 (five years ago) link
not everyone. only the poor sods who lived far enough south to allow Africanized bees to survive the winter.
― A is for (Aimless), Tuesday, 19 March 2019 03:47 (five years ago) link
so are they still a problem? i dont recall hearing a word about killer bees for at least a decade.
― Fuck the NRA (ulysses), Tuesday, 19 March 2019 04:06 (five years ago) link
If the NY Post no longer pays attention to them, then I guess they've proved to be a manageable problem.
― A is for (Aimless), Tuesday, 19 March 2019 04:09 (five years ago) link
the killer bees murdered all the other bees and then went into hiding
― but i'm there are fuckups (Karl Malone), Tuesday, 19 March 2019 04:12 (five years ago) link
Background: Last night, Google Maps' pricy contract with Zenrin, the high resolution map service in Japan, ended.
Google's cost-saving solution was to have its AI build the new maps from user behavior (what could possibly go wrong?):
https://www.reddit.com/r/japan/comments/b44wwt/so_many_people_were_cutting_across_the_konbini/
mountain shadows become lakes:
I heard Google Maps had apparently degraded, so I looked at the map of my neighborhood, and a mountain shadow had turned into a lake lol pic.twitter.com/B237RUOpPA— りん (@rin_kawakoubou) March 22, 2019
roads travel through/into buildings:
There are buildings on top of the train tracks and in the roads… pic.twitter.com/c2dqP1065D— りん (@rin_kawakoubou) March 22, 2019
― Jersey Al (Albert R. Broccoli), Friday, 22 March 2019 19:03 (five years ago) link
somewhat tangentially related: https://onezero.medium.com/how-googles-bad-data-wiped-a-neighborhood-off-the-map-80c4c13f1c2b
― Fuck the NRA (ulysses), Friday, 22 March 2019 19:38 (five years ago) link
Apple maps and NAVITIME have been better in japan for some time now, this definitely explains a few things.
― American Fear of Pranksterism (Ed), Saturday, 23 March 2019 01:29 (five years ago) link
NLP's moving so fast that benchmarking tasks introduced only a year ago are having to be replaced because they're running out of headroom: https://med✧✧✧.c✧✧✧@w✧✧✧.a✧✧✧.✧/introducing-superglue-a-new-hope-against-muppetkind-2779fd9dcdd5
― Dan I., Friday, 19 April 2019 14:15 (five years ago) link