In the Q&A that followed Bender’s talk, a bald man in a black polo shirt, a lanyard around his neck, approached the microphone and laid out his concerns. “Yeah, I wanted to ask the question about why you chose humanization and this character of human, this category of humans, as the sort of framing for all these different ideas that you’re bringing together.” The man did not see humans as all that special. “Listening to your talk, I can’t help but think, you know, there are some humans that are really awful, and so being lumped in with them isn’t so great. We’re the same species, the same biological kind, but who cares? My dog is pretty great. I’m happy to be lumped in with her.” He wanted to separate “a human, the biological category, from a person or a unit worthy of moral respect.” LLMs, he acknowledged, are not human — yet. But the tech is getting so good so fast. “I wondered, if you could just speak a little more to why you chose a human, humanity, being a human as this sort of framing device for thinking about this, you know, a whole host of different things,” he concluded. “Thanks.”
Bender listened to all this with her head slightly cocked to the right, chewing on her lips. What could she say to that? She argued from first principles. “I think that there is a certain moral respect accorded to anyone who’s human by virtue of being human,” she said. “We see a lot of things going wrong in our present world that have to do with not according humanity to humans.”
The guy did not buy it. “If I could, just very quickly,” he continued. “It might be that 100 percent of humans are worthy of certain levels of moral respect. But I wonder if maybe it’s not because they’re human in the species sense.”
Many far from tech also make this point. Ecologists and animal-personhood advocates argue that we should quit thinking we’re so important in a species sense. We need to live with more humility. We need to accept that we’re creatures among other creatures, matter among other matter. Trees, rivers, whales, atoms, minerals, stars — it’s all important. We are not the bosses here.
But the road from language model to existential crisis is short indeed. Joseph Weizenbaum, who created ELIZA, the first chatbot, in 1966, spent most of the rest of his life regretting it. The technology, he wrote ten years later in Computer Power and Human Reason, raises questions that “at bottom … are about nothing less than man’s place in the universe.” The toys are fun, enchanting, and addicting, and that, he believed even 47 years ago, will be our ruin: “No wonder that men who live day in and day out with machines to which they believe themselves to have become slaves begin to believe that men are machines.”
The echoes of the climate crisis are unmistakable. We knew many decades ago about the dangers and, goosed along by capitalism and the desires of a powerful few, proceeded regardless. Who doesn’t want to zip to Paris or Hanalei for the weekend, especially if the best PR teams in the world have told you this is the ultimate prize in life? “Why is the crew that has taken us this far cheering?” Weizenbaum wrote. “Why do the passengers not look up from their games?”
Creating technology that mimics humans requires that we get very clear on who we are. “From here on out, the safe use of artificial intelligence requires demystifying the human condition,” Joanna Bryson, professor of ethics and technology at the Hertie School of Governance in Berlin, wrote last year. We don’t believe we are more giraffelike if we get taller. Why get fuzzy about intelligence?
Others, like Dennett, the philosopher of mind, are even more blunt. We can’t live in a world with what he calls “counterfeit people.” “Counterfeit money has been seen as vandalism against society ever since money has existed,” he said. “Punishments included the death penalty and being drawn and quartered. Counterfeit people is at least as serious.”
Artificial people will always have less at stake than real ones, and that makes them amoral actors, he added. “Not for metaphysical reasons but for simple, physical reasons: They are sort of immortal.”
We need strict liability for the technology’s creators, Dennett argues: “They should be held accountable. They should be sued. They should be put on record that if something they make is used to make counterfeit people, they will be held responsible. They’re on the verge, if they haven’t already done it, of creating very serious weapons of destruction against the stability and security of society. They should take that as seriously as the molecular biologists have taken the prospect of biological warfare or the atomic physicists have taken nuclear war.” This is the real code red. We need to “institute new attitudes, new laws, and spread them rapidly and remove the valorization of fooling people, the anthropomorphization,” he said. “We want smart machines, not artificial colleagues.”
― z_tbd, Thursday, 2 March 2023 03:19 (two years ago)