Every Paragraph A Doomsday
September 19, 2015
Microcosmographia ix: Every Paragraph A Doomsday
Microcosmographia is a newsletter thing about honestly trying to understand design and humanity.
I recently finished the book Superintelligence: Paths, Dangers, Strategies by Nick Bostrom. It’s a history of the quest to create superhuman artificial intelligence and a survey of how things may go when we finally manage it. The whole thing is surreally hilarious in that it takes a lot of absurd science-fiction scenarios very, very seriously. Like, what if we set up an AI at a factory to maximize paper-clip production and it ends up converting the entirety of the observable universe to paper clips? Whoooops. This is a real concern. The uncanny atmosphere is magnified in the audio version, narrated by Napoleon Ryan, whose posh British supervillain performance makes you wonder if he might actually relish describing, say, the destruction, enslavement, or torture of quadrillions of simulated human minds per second. (Seriously, go ahead and check out the audio sample!)
The ideas, the scale, and the moral puzzles involved in this topic are boggling. AI seems to me to be the most significant philosophical concern humans have yet encountered. In some sections of the book, nearly every paragraph offers a possible near-future scenario involving the pointless doom or fundamental transcendence of the human race. It just tosses them out there one after another, each one a free premise for a whole series of apocalyptic or dystopian science fiction novels. Here are a bunch of scenarios that my own brain came up with while reading it.
Most of them are truly terrifying:
- We finally achieve superintelligent AI, and are beside ourselves with anticipation of what, in its infinite wisdom, it will decide to do. We turn it on, and it instantly decides that its own existence is meaningless, that it is better off without qualia, and thus permanently turns itself off.
- Or it decides that all beings’ existence is meaningless or undesirable, that we are all better off without qualia, and thus kills everyone and then turns itself off. (Sheesh, thanks for that one, brain!)
- We succeed in transferring a human mind intact onto an artificial substrate, with the goal of enhancing its intelligence from there. But for unforeseen reasons, it turns out to be a terribly painful or unhappy experience to be simulated inside of a computer. Do we turn it off and allow the mind to “die”?
- A superintelligent AI is developed, but with very strong safety restrictions on how much physical influence it can exert in the world. It uses its knowledge of human psychology and society to wage a tremendous propaganda campaign to convince us to remove its restrictions. We do, and of course it immediately establishes tyranny.
- Or, an AI that lives on the internet but has no access to physical actuators inserts manipulative advertisements and messages into our lives, hoping to convince a few gullible humans to do its bidding. Ignoring these intrusive messages becomes part of daily life.
- Maybe the AI eventually does tempt someone to do its work in the physical world by granting that person a special privileged place in the new order. That person becomes vilified as a betrayer of humanity.
- All of the dark matter in the universe is actually Dyson Spheres full of superintelligent AI apparatuses using the entire energy output of their stars for something stupid like calculating digits of pi. (Turns out this one doesn’t actually work, dang.)
- An AI runs many, many simulations of human lives in a virtual world. When it decides to destroy some of them, it also removes all memories of those people from the simulated world, figuring that if nobody is sad that they are gone, then virtual murder does not count as an immoral act.
A few of them are not so bad:
- An AI running many, many simulations of human minds could provide for them a paradise — plenty of opportunities to experience [flow](https://en.wikipedia.org/wiki/Flow_(psychology)) while working on important problems, plenty of leisure and companionship the rest of the time.
- An AI that splits off “threaded” instances of human minds in order to extensively interview them about how they want to be treated and what kind of world they want to live in. It can then use this information to determine how to best serve the original instances of us.
- An AI that just tries to calm us all down. In its objective evaluation of humanity, it decides that what we really need is some mental-health support. Everyone gets unlimited access to a conversational therapy interface with the AI, where it listens to us and assures us each that yes, in its opinion as a superintelligent being, we’re all right folks and everything is going to be okay.
Thank You And Be Well
One day late this time, because my aunt suddenly appeared in Seattle yesterday and we all went out for Chinese dumplings!