Worse than death: The S-risk
Elsewhere I touched upon the idea that there are fates worse than death. Having your mind uploaded to a machine could make colonizing Mars easier, sure, but it could also go wrong in some horrible ways. The image below is from a more recent Bostrom paper, this one from 2013, some 11 years after the previous one. It is the same threat matrix from before, but expanded and revamped, most notably with the addition of a “(hellish)” end to the severity spectrum. Charming. 🙂
Bostrom’s framework now covers various hellish outcomes that may be even worse than annihilation, detailing them in gruesome depth in his paper. To him, these nonetheless still belong in the existential risk category, because they result in the “drastic curtailing” of human potential. Another thinker has taken this idea further: Max Daniel, Executive Director of the Foundational Research Institute, a group that “focuses on reducing risks of dystopian futures in the context of emerging technologies” (Foundational Research Institute, 2019). Daniel suggests that x-risks with hellish outcomes constitute their own unique type of risk: the S-risk.
The S stands for suffering 🙂
Daniel’s online essay is adapted from a talk given at Effective Altruism Global (EAG) in Boston, a conference run by the effective altruism movement. In it, Daniel focuses on Bostrom’s paper above, zeroing in on the “hellish” corner of the grid to explore how suffering can be as negative an outcome as annihilation, yet is often a less-discussed existential risk.
S-Risks and Hellish outcomes – Netflix’s Black Mirror
“To illustrate what s-risks are about, I’ll use a story from the British TV series Black Mirror, which you may have seen. Imagine that someday it will be possible to upload human minds into virtual environments. This way, sentient beings can be stored and run on very small computing devices, such as the white egg-shaped gadget depicted here.”
“Behind the computing device you can see Matt. Matt’s job is to convince human uploads to serve as virtual butlers, controlling the smart homes of their owners. In this instance, human upload Greta is unwilling to comply.”
“To break her will, Matt increases the rate at which time passes for Greta. While Matt waits for just a few seconds, Greta effectively endures many months of solitary confinement.”
The preceding excerpt, taken from Daniel’s essay, illustrates how technology might be used as a torture device that could cause (almost literally) infinitely more suffering than current technology enables. If it’s possible to upload our minds into machines, then someone with absolute control over those machines and malicious intent may be able to harm us in profoundly new and disturbing ways. It’s simply not possible today to torture someone for a thousand years. But Black Mirror shows us how it might become not only possible, but as easy as setting an egg timer. Fun stuff!
Black Mirror achieves something that, for me, few science fiction narratives do. It makes me happy with my own stupid little life that will, mercifully, end someday.
It captures the grace and joy in annihilation. It sounds pessimistic, I know, or fatalistic, or defeatist. It’s only something you understand more clearly when you witness something like Black Mirror and realize how much better death would be than some of the outcomes the creators’ dark imaginations have dished up.
While it may not seem an especially happy note to end this discussion on, I raise it here mostly because of the various overlaps of ideas. Daniel builds on Bostrom and uses Netflix’s Black Mirror to illustrate his point. These strike me as not only important but somewhat intuitive, ideas I think many people have and share. Black Mirror has enjoyed success because its visions of the future, unlike so much drab Hollywood bullshit, perfectly capture our collective anxieties.
Hopefully in time these ideas about risk will grow in influence, helping shape our response to the threats ahead in a way that is more open-minded, more considered, and (hopefully again) more effective.
 Bostrom, N. (2013, February). Existential Risk Prevention as Global Priority. Global Policy, 4(1), pp. 15-31. doi:10.1111/1758-5899.12002
Daniel, M. (2017, June 20). S-risks: Why they are the worst existential risks, and how to prevent them (EAG Boston 2017). Retrieved from Foundational Research Institute: https://foundational-research.org/s-risks-talk-eag-boston-2017/