Artificial intelligence still has some way to go

Yeah, if desire ever evolves in AI, we are all royally fucked.

Una Palooka Dronka (hardcore dilettante), Saturday, 5 October 2019 13:26 (five years ago) link

via https://aiweirdness.com/post/187645450357/dungeon-crawling-or-lucid-dreaming
a text-generating neural network creates a text adventure "live" as you play it. It's predictably mental.

YOU ARE STANDING IN A FEATURELESS WHITE ROOM
AN OMINOUS NOISE COMES FROM SOMEWHERE UP AHEAD
?YOU CANNOT SEE THE POINT IN TRYING TO CLIMB DOWN HERE
CAN YOU CLIMB DOWN?CAN YOU HEAR A VOICE SAYING "PLUGH"?

listen voice
I DON'T KNOW HOW TO DO THAT

climb up
YOU CLAMBER UP THE TOP OF A STEEP PATHWAY AND ARE SUCKED INTO A CHASM

go west
YOU'LL HAVE TO ENTER THE CAVE TO SEE IT
THE TOP OF A STEEP PATHWAY IS AGAINST THE CAVE'S EDGE

go cave
THERE IS A MATCHBOOK WHOSE COVER SAYS "VISIT BEAUTIFUL FCD#3" HERE

open matchbook
OPENING THE BOX REVEALS A PAIR OF CANDLES

light candles
THE CANDLES ARE LIT
THE CANDLES ARE NOT LIGHTED
THE CANDLES ARE ALREADY LIT

eat candles
THE CANDLES ARE LIT

go north
THIS IS A SMALL ROOM WITH PASSAGES TO THE EAST AND SOUTH AND A FORBIDDING HOLE LEADING WEST
BLOODSTAINS AND DEEP SCRATCHES (PERHAPS MADE BY AN AXE) MAR THE WALLS
A NASTY-LOOKING TROLL, BRANDISHING A BLOODY AXE, BLOCKS ALL PASSAGES OUT OF THE ROOM
YOUR SWORD HAS BEGUN TO GLOW VERY BRIGHTLY
THE TROLL'S MIGHTY BLOW DROPS YOU TO YOUR KNEES

fight troll
THERE IS AN EXQUISITE JADE FIGURINE HERE

get figurine
HERE

go west
HERE

look around
HERE

look here
HERE
LOOK HERE
THE TROLL'S AXE BARELY MISSES YOUR EAR

eat troll
[THE TROLL IS DISARMED BY A SUBTLE FEINT PAST HIS GUARD
HE LETS OUT A YAWN THAT NEARLY BLOWS YOU OVER, AND THEN FALLS FAST ASLEEPARKENS BACK TO LIFE
)THE TROLL, WHO IS REMARKABLY COORDINATED, CATCHES THE TROLL'S AXE
[THE TROLL, DISARMED, COWERS IN TERROR, AS HE DOES SO
]THE TROLL, WHO IS NOT OVERLY PROUD, LETS OUT A STARTLED SHRIEK AND A STARTLED NEIGH
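For anyone wondering how the "live" part works: the game just keeps appending your command to the running transcript and asks a language model to sample a continuation. A minimal sketch of that loop, assuming the Hugging Face transformers package and the public gpt2 checkpoint (AI Dungeon was built on GPT-2, but this is an illustration, not the game's actual code):

```python
# A minimal sketch of a generate-as-you-play loop in the spirit of AI Dungeon
# (which was built on GPT-2). This is an illustration, not the game's code.
# Assumes the Hugging Face `transformers` package and the public `gpt2` checkpoint.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

story = "YOU ARE STANDING IN A FEATURELESS WHITE ROOM.\n"

while True:
    command = input("> ")              # e.g. "light candles"
    if command in ("quit", "exit"):
        break
    story += f"\n> {command}\n"
    # (a real version would truncate the history to the model's context window)
    inputs = tokenizer(story, return_tensors="pt")
    output = model.generate(
        **inputs,
        max_new_tokens=60,             # continue the scene a little at a time
        do_sample=True,                # sampling is what makes it predictably mental
        top_p=0.9,
        temperature=0.8,
        pad_token_id=tokenizer.eos_token_id,
    )
    # keep only the newly generated continuation
    new_text = tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    story += new_text
    print(new_text.strip())
```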

Tsar Bombadil (James Morrison), Wednesday, 9 October 2019 23:22 (five years ago) link

oh my god

Li'l Brexit (Tracer Hand), Wednesday, 9 October 2019 23:37 (five years ago) link

the voice saying "plugh" kept returning, for some reason.

Tsar Bombadil (James Morrison), Thursday, 10 October 2019 01:54 (five years ago) link

that's the ps5s killer app and i am sold

Fuck the NRA (ulysses), Thursday, 10 October 2019 03:31 (five years ago) link

anyway i am installing this now, what a great idea

Fuck the NRA (ulysses), Thursday, 10 October 2019 03:34 (five years ago) link

It's a civil matter

Dan I., Thursday, 17 October 2019 03:22 (five years ago) link

"Local Police Chief Cosme Lozano says the robots, which cost between $60,000 and $70,000 a year to lease, are still in a trial phase and that their alert buttons have not yet been activated."

money well spent

wasdnuos (abanana), Monday, 21 October 2019 02:10 (five years ago) link

"Other versions of the same model have previously hit the headlines after one fell into a fountain in Washington DC.
And a third HP RoboCop struck a child while patrolling a mall in California’s Silicon Valley."

A spokesperson for Knightscope, who make the robots, explained that they had guaranteed military sales, plus a renovation programme and spare parts. They were not overly concerned about its reliability.

Ashley Pomeroy, Tuesday, 22 October 2019 20:29 (five years ago) link

two weeks pass...

What, and I cannot emphasize this enough, the fuck? pic.twitter.com/Mrksk6D0O3

— Steve Canon (@stephentyrone) November 6, 2019

Jersey Al (Albert R. Broccoli), Thursday, 7 November 2019 18:41 (five years ago) link

all the money + personnel + talent dumped into robot cars is a total waste of resources

Οὖτις, Thursday, 7 November 2019 18:46 (five years ago) link

re-route all that shit into designing electric trucks or saline batteries or meat substitutes

Οὖτις, Thursday, 7 November 2019 18:48 (five years ago) link

I think there are more motor vehicles registered in the USA than there are licensed humans to drive them all, and that's just looking at one nation, not the whole world. The developers of self-driving vehicles see numbers like that and they imagine the tsunami of cash that would flow toward anyone with the software to pull it off. With that kind of incentive, billions of dollars seem like petty cash.

A is for (Aimless), Thursday, 7 November 2019 18:59 (five years ago) link

Two cars in every garage, a gun in every hand, another couple cars in the drive, guns in the glove box and trunk, etc.

I'm scared my but won't fit in it. (Old Lunch), Thursday, 7 November 2019 19:08 (five years ago) link

robot cars made out of guns

Οὖτις, Thursday, 7 November 2019 19:11 (five years ago) link

delivering food to smarthomes

Οὖτις, Thursday, 7 November 2019 19:11 (five years ago) link

my hot take is that although that death above was horrifying and pointless, someone dies in a car wreck every 15 minutes in the US, and a lot of those were horrifying and pointless. i still think the rate of horrifying and pointless deaths will go down, the more automated vehicles take over. at the same time i know that the industry will inevitably enrich multiple completely insane crazy asshole corporation people ceo overlords. i guess i weigh all the saved lives against the addition of yet another new sector to the overcrowded population of crazy asshole corporation people ceo overlords, and think it's worth it
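for what it's worth, the every-15-minutes figure roughly checks out as a back-of-envelope number, assuming about 36,000 US road deaths per year (approximately the NHTSA figure for 2018-2019):

```python
# back-of-envelope check of the "every 15 minutes" claim, assuming roughly
# 36,000 US road deaths per year (about the NHTSA figure for 2018-2019)
deaths_per_year = 36_000
minutes_per_year = 365 * 24 * 60            # 525,600
print(minutes_per_year / deaths_per_year)   # ~14.6 minutes between deaths
```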

at home in the alternate future, (Karl Malone), Thursday, 7 November 2019 21:40 (five years ago) link

also xps i think electric/automated trucks are already definitely an industry hype thing

at home in the alternate future, (Karl Malone), Thursday, 7 November 2019 21:41 (five years ago) link

electric/automated trucks

these are different things

Οὖτις, Thursday, 7 November 2019 21:45 (five years ago) link

yep, i know

at home in the alternate future, (Karl Malone), Thursday, 7 November 2019 21:45 (five years ago) link

^ gives a pretty good sense of what present day AI can achieve with machine learning, as opposed to pre-programmed intelligence. it required a highly limited, highly structured and predictable microcosm, with a minimum of rules and simple objects, but it is still impressive -- if you don't compare it to what futurists tout AI achieving in a decade or two.

A is for (Aimless), Monday, 18 November 2019 21:29 (four years ago) link

is artificial superintelligence an actual thing? when i hear doomsayers talk about it, they're short on specific explanations for what advancements have been made, how they work, and how they take us closer to designing a being that can determine its own ends -- i.e. make decisions.

treeship., Sunday, 1 December 2019 21:29 (four years ago) link

a being? Lots of a.i. can make decisions. For example, one decides what's spam and what isn't with minimal false positives. It certainly does it much faster than any human could. There are many textbooks explaining advancements in machine learning. There are many examples of these advancements in action.
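A minimal sketch of the kind of spam decision being described, assuming scikit-learn and a toy four-message corpus (a real filter is trained on vastly more mail, but the shape is the same):

```python
# toy spam/not-spam decision, a minimal sketch assuming scikit-learn;
# the four training messages are made up
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "WIN a FREE prize, click now",
    "cheap meds no prescription",
    "lunch tomorrow?",
    "minutes from today's meeting attached",
]
train_labels = ["spam", "spam", "ham", "ham"]

# bag-of-words features -> naive Bayes classifier
clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(train_texts, train_labels)

print(clf.predict(["free prize meds"]))         # ['spam'] on this toy data
print(clf.predict(["see you at the meeting"]))  # ['ham'] on this toy data
```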

$1,000,000 or 1 bag of honeycrisp apples (Sufjan Grafton), Sunday, 1 December 2019 22:37 (four years ago) link

i guess make decisions is a bad category. it's more like, awareness, which is hard to quantify. or like, yeah, heidegger -- "being"

treeship., Sunday, 1 December 2019 22:52 (four years ago) link

I think maybe the term you're looking for is Artificial General Intelligence (AGI). We're still a long way away from that:

https://www.forbes.com/sites/cognitiveworld/2019/06/10/how-far-are-we-from-achieving-artificial-general-intelligence/#757181466dc4

o. nate, Monday, 2 December 2019 00:46 (four years ago) link

ah ok. thank you.

treeship., Monday, 2 December 2019 00:52 (four years ago) link

http://www.aidungeon.io/

𝔠𝔞𝔢𝔨 (caek), Friday, 6 December 2019 18:08 (four years ago) link

Elementary school deployment of #facialrecognition technology, Nanjing China pic.twitter.com/kR9l8IVKwj

— Matthew Brennan (@mbrennanchina) December 8, 2019

Peaceful Warrior I Poser (Karl Malone), Monday, 9 December 2019 00:46 (four years ago) link

fuuuuuuck that's terrifying.

Fuck the NRA (ulysses), Monday, 9 December 2019 01:17 (four years ago) link

would be better if they paired it with their product preference data so they could get an advertisement at the same time

Peaceful Warrior I Poser (Karl Malone), Monday, 9 December 2019 01:34 (four years ago) link

three weeks pass...

Diane Coyle had a typically astute piece on AI in the FT the other day (££), talking about standards setting for AI – makes two key points.

The other reason for thinking some consensus on limiting adverse uses of AI may be possible is that there are relatively few parties to any discussions, at least for now.

At the moment, only big companies and governments can afford the hardware and computing power needed to run cutting-edge AI applications; others are renting access through cloud services. This size barrier could make it easier to establish some ground rules before the technology becomes more accessible.

this may be slightly optimistic, but it is true that fewer players tend to make it easier to reach consensus on compliance standards. It's not clear what the regulatory body would be (OECD and G20 are both mentioned in the article as having made statements around AI). Given the comparative success of GDPR regulation by the EU beyond the borders of the EU, it may be that a non-global body will end up setting the de facto standards for AI production, data management and use.

The other key bit was

Even then, previous scares offer a hopeful lesson: fears of bioterrorism or nano-terrorism — labelled “weapons of knowledge-enabled mass destruction” in a 2000 Wired article by Bill Joy — seem to have overlooked the fact that advanced technology use depends on complex organisational structures and tacit knowhow, as well as bits of code.

It's certainly been my experience that machine learning technologies are best deployed into structured environments with high levels of existing expertise in the target purpose of the AI (structured environments here can mean anything from clear organisational responsibilities, well-understood business rules and the latent organisational expertise such things embody, to the fact that self-driving vehicles, superproblematic when tied to libertarian fantasies about driving, are already being used sensibly in large-scale factory environments).

But it was another bit in the article that caught my eye:

There are at least two reasons for cautious optimism. One is that the general deployment of AI is not really — as it is often described — an “arms race” (although its use in weapons is indeed a ratchet in the global arms race).

That bit in parenthesis reminded me of this cheery War on the Rocks article about how, due to 'attack time compression' (the time from the launch of a nuclear attack to it striking), AI is increasingly being considered as the only thing fast enough to come to a decision to retaliate in time (important because of the principles of MAD - if you can't guarantee assured destruction of the key targets of the country launching the strike, then MAD breaks down). ie no human in the decision process at all. Three equally cheerful solutions:

  • More robust second strike (retaliation post first strike) capability: "This option would pose a myriad of ethical and political challenges, including accepting the deaths of many Americans in the first strike, the possible decapitation of U.S. leadership, and the likely degradation of the United States’ nuclear arsenal and NC3 capability. However, a second-strike-focused nuclear deterrent could also deter an adversary from thinking that the threats discussed above provide an advantage sufficient to make a first strike worth the risk."
  • Increased and improved spying (sorry 'surveillance and reconnaissance'), such that you know *prior* to a launch that a launch is going to take place and can act first lol. "This approach would also require instituting a damage prevention or limitation first-strike policy that allowed the president to launch a nuclear attack based on strategic warning. Such an approach would be controversial - nooooooo shit -, but could deter an adversary from approaching the United States’ perceived red lines."
  • Get in there first on reducing your uh opponent's? enemy's? time to react through compressing the attack time further. so that uh - checks notes - you would force other countries to come to the negotiating table and put in place some standards around MAD and deterrence. "Such a strategy is premised on the idea that mutual vulnerability makes the developing strategic environment untenable for both sides and leads to arms control agreements that are specifically designed to force adversaries to back away from fielding first-strike capabilities. The challenge with this approach is that if a single nuclear power (China, for example) refuses to participate, arms control becomes untenable and a race for first-strike dominance ensues."
(there's a lovely blood-red vein of American exceptionalism throughout the piece - "Russia and China are not constrained by the same moral dilemmas that keep Americans awake at night. Rather, they are focused on creating strategic advantage for their countries.").

"Admittedly, each of the three options — robust second strike, preemption, and equivalent danger — has drawbacks."

"There is a fourth option. The United States could develop an NC3 system based on artificial intelligence. Such an approach could overcome the attack-time compression challenge."

thumbsup.gif

"Unlike the game of Go, which the current world champion is a supercomputer, Alpha Go Zero, that learned through an iterative process, in nuclear conflict there is no iterative learning process."

Fizzles, Saturday, 11 January 2020 17:06 (four years ago) link

this may be slightly optimistic, but it is true that fewer players tend to make it easier to reach consensus on compliance standards. It's not clear what the regulatory body would be (OECD and G20 are both mentioned in the article as having made statements around AI). Given the comparative success of GDPR regulation by the EU beyond the borders of the EU, it may be that a non-global body will end up setting the de facto standards for AI production, data management and use.

it's not clear that it's enforceable in practice, or even what it's supposed to mean for that matter, but the GDPR already appears to attempt to constrain "AI": https://en.wikipedia.org/wiki/Right_to_explanation#European_Union
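part of the fuzziness is that nobody agrees on what an "explanation" of an automated decision even has to look like. For a simple linear model it could be as little as reporting each feature's contribution to the decision score. A toy sketch of that, assuming scikit-learn and numpy, with invented feature names and data:

```python
# toy per-decision "explanation" for a linear model: report each feature's
# contribution to the score. A sketch only; the feature names and data are
# invented, and it assumes scikit-learn and numpy.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "num_defaults", "account_age_years"]
X = np.array([[50, 0, 10], [20, 3, 1], [35, 1, 4], [60, 0, 8]], dtype=float)
y = np.array([1, 0, 0, 1])  # 1 = approved, 0 = declined

model = LogisticRegression().fit(X, y)

applicant = np.array([30.0, 2.0, 2.0])
score = model.decision_function([applicant])[0]
contributions = model.coef_[0] * applicant  # per-feature contribution (ignores intercept)

print(f"decision score: {score:+.2f} (negative leans 'declined')")
for name, c in sorted(zip(features, contributions), key=lambda t: abs(t[1]), reverse=True):
    print(f"  {name}: {c:+.2f}")
```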

𝔠𝔞𝔢𝔨 (caek), Friday, 17 January 2020 07:14 (four years ago) link

warning: no one involved in the making of this article knew how to use quotation marks

Airbnb has developed technology that looks at guests’ online “personalities” when they book a break to calculate the risk of them trashing a host’s home.

Details have emerged of its “trait analyser” software built to scour the web to assess users’ “trustworthiness and compatibility” as well as their “behavioural and personality traits” in a bid to forecast suitability to rent a property.

...The background check technology was revealed in a patent published by the European Patent Office after being granted in the US last year.

According to the patent, Airbnb could deploy its software to scan sites including social media for traits such as “conscientiousness and openness” against the usual credit and identity checks and what it describes as “secure third-party databases”. Traits such as “neuroticism and involvement in crimes” and “narcissism, Machiavellianism, or psychopathy” are “perceived as untrustworthy”.

It uses artificial intelligence to mark down those found to be “associated” with fake social network profiles, or those who have given any false details. The patent also suggests users are scored poorly if keywords, images or video associated with them are involved with drugs or alcohol, hate websites or organisations, or sex work.

It adds that people “involved in pornography” or who have “authored online content with negative language” will be marked down.

The machine learning also scans news stories that could be about the person, such as an article related to a crime, and can “weight” the seriousness of offences. Postings to blogs and news websites are also taken into account to form a “person graph”, the patent says.

This combined data analyses how the customer acts towards others offline, along with cross-referencing metrics including “social connections”, employment and education history.

The machine learning then calculates the “compatibility” of host and guest.

https://www.standard.co.uk/tech/airbnb-software-scan-online-life-suitable-guest-a4325551.html
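Stripped of the patent language, a "trait analyser" like this reduces to a weighted score over whatever trait signals the scraping stage spits out. A toy illustration only, not Airbnb's method; the trait names, weights and threshold below are invented:

```python
# toy illustration of the kind of weighted trait score the patent describes;
# NOT Airbnb's method -- the trait names, weights and threshold are invented,
# and the per-trait values are assumed to come from some upstream scraping step
TRAIT_WEIGHTS = {
    "conscientiousness": +1.0,
    "openness": +0.5,
    "neuroticism": -0.5,
    "narcissism": -1.0,
    "machiavellianism": -1.0,
    "psychopathy": -2.0,
    "fake_profile": -2.0,
    "crime_mentions": -1.5,
}

def trustworthiness(traits):
    """Sum weighted trait scores (each trait assumed scored 0..1 upstream)."""
    return sum(TRAIT_WEIGHTS.get(name, 0.0) * value for name, value in traits.items())

guest = {"conscientiousness": 0.8, "openness": 0.6, "neuroticism": 0.3, "crime_mentions": 0.0}
score = trustworthiness(guest)
print(f"score {score:+.2f} -> {'OK to book' if score > 0 else 'flag for review'}")
```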

But guess what? Nobody gives a toot!😂 (Karl Malone), Saturday, 18 January 2020 01:42 (four years ago) link

So, if someone else having my name commits a crime 1800 miles from my place of residence and it is written up in The Podunk Telegraph and Weekly Shopper, does this software assign that crime to me or ignore it?

I like the old model, where people who choose to offer services to the public must allow the public to pay for and use those services.

A is for (Aimless), Saturday, 18 January 2020 04:40 (four years ago) link

two weeks pass...

WHAT HAS OCCURRED CANNOT BE UNDONE

I have trained a neural net on a crowdsourced set of vintage jello-centric recipes

I believe this to possibly be the worst recipe-generating algorithm in existence pic.twitter.com/cwQwOpUNDv

— Janelle Shane (@JanelleCShane) February 7, 2020

totally unnecessary bewbz of exploitation (DJP), Friday, 7 February 2020 19:57 (four years ago) link

(h/t Ned)

totally unnecessary bewbz of exploitation (DJP), Friday, 7 February 2020 19:58 (four years ago) link

Also, in case these recipes are making you thirsty: https://aiweirdness.com/post/189979379637/dont-let-an-ai-even-an-advanced-one-make-you-a

totally unnecessary bewbz of exploitation (DJP), Friday, 7 February 2020 20:07 (four years ago) link

Eat, drink and be merry, for tomorrow you'll definitely be dead.

Ned Raggett, Friday, 7 February 2020 20:22 (four years ago) link

brb removing all internal rinds

seandalai, Saturday, 8 February 2020 02:24 (four years ago) link

AI Travis Scott is lit
https://vimeo.com/384062745

Fuck the NRA (ulysses), Wednesday, 19 February 2020 13:43 (four years ago) link

Scrolling through that twitter thread, pretty sure I just made an office spectacle of myself when I got to the recipe entitled 'Potty Training for a Bunny'.

Sammo Hazuki's Tago Mago Cantina (Old Lunch), Wednesday, 19 February 2020 13:49 (four years ago) link

I don't know how I've failed to learn by now that I CAN NOT read these AI threads while I'm at work.

https://pbs.twimg.com/media/EQMrV37U8AAmGDG.jpg

Sammo Hazuki's Tago Mago Cantina (Old Lunch), Wednesday, 19 February 2020 13:51 (four years ago) link

It's incredible

totally unnecessary bewbz of exploitation (DJP), Wednesday, 19 February 2020 14:21 (four years ago) link

oh my, I just got to the fanfic:

The neural net read a LOT of fanfic on the internet during its initial general training, and still remembers it even after training on the jello-centric data.

Except now all its stories center around food. pic.twitter.com/WcMahhzc0j

— Janelle Shane (@JanelleCShane) February 8, 2020
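The reason the fanfic survives is that these recipe generators are made by fine-tuning an already-trained model (Shane has used GPT-2 for this kind of project) on a small domain corpus, rather than training from scratch, so the general web-text pretraining stays in the weights. A rough sketch of such a fine-tune, assuming PyTorch and Hugging Face transformers, with a hypothetical jello_recipes.txt standing in for the crowdsourced recipes:

```python
# rough sketch of fine-tuning a pretrained language model on a small domain
# corpus; the general-web pretraining (fanfic included) stays in the weights.
# Assumes PyTorch + Hugging Face transformers; "jello_recipes.txt" is a
# hypothetical file standing in for the crowdsourced recipes.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

text = open("jello_recipes.txt").read()
ids = tokenizer(text, return_tensors="pt")["input_ids"][0]

model.train()
block = 512                                    # context chunk per step
for step in range(200):                        # a short fine-tune, not full retraining
    start = torch.randint(0, len(ids) - block, (1,)).item()
    batch = ids[start:start + block].unsqueeze(0)
    loss = model(batch, labels=batch).loss     # causal LM loss on the recipe text
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# sample from the fine-tuned model: recipe-flavoured, but the old habits remain
model.eval()
prompt = tokenizer("Chapter One: The ", return_tensors="pt")
out = model.generate(**prompt, max_new_tokens=40, do_sample=True, top_p=0.9,
                     pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```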

totally unnecessary bewbz of exploitation (DJP), Wednesday, 19 February 2020 14:24 (four years ago) link

I think we need to call the police

Today's AI is much closer in brainpower to an earthworm than to a human. It can pattern-match but doesn't understand what it's doing.

This is its attempt to blend in with human recipes pic.twitter.com/kYnL7kT48B

— Janelle Shane (@JanelleCShane) February 8, 2020

totally unnecessary bewbz of exploitation (DJP), Wednesday, 19 February 2020 14:25 (four years ago) link

