S-Risks

Worse than death: The S-risk

Elsewhere I touched upon the idea that there are fates worse than death. Having your mind uploaded to a machine could make colonizing Mars easier, sure, but it could also go wrong in some horrible ways. The image below is from a more recent Bostrom paper, this time from 2013, some 11 years after the previous one[1]. It is the same threat matrix from before, expanded and revamped – most notably with the addition of a “(hellish)” end to the severity spectrum. Charming. 🙂

RiskModelBostromNew
New and improved threat matrices. Shiny.

Bostrom’s framework now covers various hellish outcomes that may be even worse than annihilation, describing them in gruesome detail in his paper. To him, these nonetheless remain part of the existential risk category, because they result in the “drastic curtailing” of human potential. Another thinker has expanded this idea further, though: Max Daniel, Executive Director of the Foundational Research Institute, a group that “focuses on reducing risks of dystopian futures in the context of emerging technologies”[2] (Foundational Research Institute, 2019). Daniel suggests that x-risks with hellish outcomes are their own distinct type of risk: the S-risk.

The S stands for suffering 🙂

Daniel’s online essay is adapted from a talk[3] given at Effective Altruism Global (EAG) in Boston, a conference for the effective altruism movement. In it, Daniel focuses on Bostrom’s paper above, homing in on the “hellish” part of the grid to explore how suffering can be as negative an outcome as annihilation – yet one that is often a less-discussed existential risk.

S-Risks and Hellish outcomes – Netflix’s Black Mirror

“To illustrate what s-risks are about, I’ll use a story from the British TV series Black Mirror, which you may have seen. Imagine that someday it will be possible to upload human minds into virtual environments. This way, sentient beings can be stored and run on very small computing devices, such as the white egg-shaped gadget depicted here.”

“Behind the computing device you can see Matt. Matt’s job is to convince human uploads to serve as virtual butlers, controlling the smart homes of their owners. In this instance, human upload Greta is unwilling to comply.”

“To break her will, Matt increases the rate at which time passes for Greta. While Matt waits for just a few seconds, Greta effectively endures many months of solitary confinement.”

The preceding excerpt, taken from Daniel’s essay[4], illustrates how technology might be used as a torture device capable of causing (almost literally) infinitely more suffering than current technology enables. If it’s possible to upload our minds into machines, then someone with absolute control over those machines and malicious intent may be able to harm us in profoundly new and disturbing ways. It’s simply not possible today to torture someone for a thousand years. But Black Mirror shows us how it might become not only possible, but as easy as setting an egg timer. Fun stuff!

Black Mirror achieves something that, for me, few science fiction narratives do. It makes me happy with my own stupid little life that will, mercifully, end someday.

It captures the grace and joy in annihilation. That sounds pessimistic, I know, or fatalistic, or defeatist. But it’s something you understand more clearly when you witness a story like Black Mirror’s and realize how much better death would be than some of the outcomes the creators’ dark imaginations have dished up.

While it may not seem an especially happy note to end this discussion on, I raise it here mostly because of the various overlaps of ideas. Daniel focuses on Bostrom and uses Netflix’s Black Mirror to illustrate his point. These strike me as not only important, but somewhat intuitive – ideas I think many people have and share. Black Mirror has enjoyed success because its visions of the future, unlike so much drab Hollywood bullshit, perfectly capture our collective anxieties.

Hopefully in time these ideas about risk will grow in influence, helping shape our response to the threats ahead in a way that is more open-minded, more considered, and (hopefully again) more effective.


Footnotes

[1] Bostrom, N. (2013, February). Existential Risk Prevention as Global Priority. Global Policy, 4(1), pp. 15-31. doi:10.1111/1758-5899.12002

[2] Foundational Research Institute. (2019). Retrieved from Foundational Research Institute: https://foundational-research.org/

[3] https://www.youtube.com/watch?v=jiZxEJcFExc

[4] Daniel, M. (2017, June 20). S-risks: Why they are the worst existential risks, and how to prevent them (EAG Boston 2017). Retrieved from Foundational Research Institute: https://foundational-research.org/s-risks-talk-eag-boston-2017/

Risk and Survival

Survival isn’t everything

The risk-based framework I’ve mentioned elsewhere might appear to leave some things out. Climate change (of a sort) happened once already, and our species did survive it. The end of the Ice Age and the arrival of the Holocene was something that Australian Indigenous peoples, for example, managed to overcome. It even afforded them opportunities to settle in previously uninhabitable areas once covered by ice.

The onset of the Holocene climatic optimum … coincides with rapid expansion, growth and establishment of regional populations across ~75% of Australia, including much of the arid zone.[1]

In a similar theme, birds are dinosaurs. Importantly, they’re not merely related to dinosaurs; they are actual modern-day dinosaurs – the survivors of the mass-extinction event that proved terminal for most of their kin.

That climate change event, and that mass extinction event, might both therefore be examples of endurable risks, in Bostrom’s terminology. The groups at risk (humans, and dinosaurs) were not entirely wiped out. However, in at least the case of the dinosaurs, their time as the dominant lifeform on Earth was arguably over once mammals got a foothold.

Recall Bostrom’s definition of existential risk:

One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.

Existential risk doesn’t just require the annihilation of life. It’s enough that the potential for that life is ‘drastically curtailed’ for it to be considered an existential threat. This isn’t the case for Indigenous Australians, of course, who may have even thrived thanks to the effects of the last great changes in climate. For them, the greatest existential risk would sadly come later, in the form of European Colonization. For the dinosaurs though, their future potential was drastically curtailed. Despite this, some still live on as birds (endure, in Bostrom’s terms).

The case of the birds could seem to pose a problem for a framework like the one I’ve built from Bostrom’s ideas, or at least demonstrate that rigid categorization on paper won’t always translate perfectly to the real world. The framework doesn’t seem to allow for endurable risks that are also terminal, or at least it requires some further thinking when a risk can be either terminal or endurable depending on perspective (endurable for the birds, but not the dinosaurs?). Bostrom’s own table, interestingly, only includes examples of terminal risks that involve annihilation, not the “drastic curtailing of potential”. That is a trickier idea to pin down. How sharp is the line between them?

RiskModelBostromR
Adapted from Bostrom’s 2002 paper[2]

We can explore this idea further using the earlier example of transhumanism, which represents another “grey area”. What happens when our species (humans) no longer exists but is replaced by something that is still in some way “human”?

To the same extent that modern birds still “carry the torch” for the dinosaurs, what if some future version of us ends up doing the same for our species? What we define as “terminal” might actually vary according to personal beliefs and preferences, and that reveals the immensely sticky link between risks and threats, and people’s closely held beliefs, values, and norms.

For example, imagine we can upload our brains to machine bodies. This could present a vast new realm of possibilities for us in terms of sustainability. Why terraform Mars when we’ve already seen how well robots can do there?!

MarsRoverCuriosity
Self-portrait of Curiosity located at the foothill of Mount Sharp (October 6, 2015).

If robots can thrive there, maybe we should be more like them?

But then, to some people, the moment we do that, we lose something important about our humanity. The era of the human is effectively over, they say. The point is deeply debatable, and has been debated many times: If we replace enough of ourselves with machines, computers, and technology – to the point we are arguably no longer human – does that mean our species no longer exists? Is it a terminal or endurable event for the human species?

Timothy Morton’s ideas are relevant here too. If we are already a kind of cyborg, as he says, then this question isn’t even theoretical. The same applies to his claim that industrial capitalism is a primitive AI ruling us – a claim that in some senses is quite hard to refute. Are these terminal or endurable events?

A related thought, and perhaps another way to think about this, is speciation – a term from biology referring to the formation of new and distinct species in the course of evolution. Speciation has happened with humans before; other species like the Neanderthals share a common ancestor with us, one that speciated at various points. Humans themselves have driven artificial speciation in other species, from dogs to domestic livestock to produce – and we’ve been doing it for tens of thousands of years. Technology has often played a key role too, creating new species of flora and fauna (often to our own benefit). From this perspective, further technology-driven speciation of humans themselves may well be possible, especially if it benefits us – or appears to.

From Corgis to Corn: A Brief Look at the Long History of GMO Technology[3] does a great job of providing some specific examples of speciation over time, stretching back millennia:

GMHarvard
Image from Harvard’s article[4].

Bringing it all back to Bostrom’s framework, are outcomes where our humanity fades away a terminal event for our species? Or because something else persists, are they endurable in some way?

Transhumans are to humans what birds are to dinosaurs. They may carry the torch of the species forward, but they leave many things behind in the process. The potential of a flesh-and-bone species to fully flourish may very well be curtailed in a future where we shed our biological limitations and transition to new forms. It might seem a distant possibility relegated to the realm of thought experiment, but it nonetheless offers moments for reflection when it comes to ideas of risk – especially the risk of species annihilation. This shows, hopefully, that annihilation can mean quite a few things, and not all of them are as bad as the word itself might imply.


Footnotes

[1] Williams, A. N., Ulm, S., Turney, C. S., Rohde, D., & White, G. (2015). Holocene Demographic Changes and the Emergence of Complex Societies in Prehistoric Australia. PLoS ONE, 10(6).

[2] Bostrom, N. (2002). Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards. Journal of Evolution and Technology, 9(1).

[3] Rangel, G. (2015, August 9). From Corgis to Corn: A Brief Look at the Long History of GMO Technology (figures by A. Maurer). Harvard University Blog. Retrieved from: http://sitn.hms.harvard.edu/flash/2015/from-corgis-to-corn-a-brief-look-at-the-long-history-of-gmo-technology/

[4] As above.

Defining Risk

If you wish to make an apple pie from scratch, you must first invent the universe.

 –  Carl Sagan, Cosmos, Episode 1.

The wonderful Sagan quote above demonstrates a way of thinking that embraces the complexity we can find in even the simplest thing. In this case, we’re looking at an apple pie, because Sagan was a good American patriot.

He re-frames the pie as something that exists in a broader context – inside a physical universe. He then redefines “from scratch” to mean “from the very beginning of that universe”. This perspective, he suggests, shows us the real recipe for making a pie. And it’s a lot more complicated than just slapping it in the oven for 30 minutes.

In that same spirit, we must consider very carefully how we go about defining the word “risk”. Like Sagan, we must see that this word exists in a broader context, and that coming up with a good definition might take us a little longer than we first thought.

Returning once again to that high-level framework for building a triage-focused, risk-based model of sustainability, the Global Challenges Foundation (GCF) report illustrates an important feature not fully expressed in Step 1.

  1. Identify candidate issues for consideration.
  2. Develop criteria to rank them.
  3. Apply criteria and develop a ranked list.

If you wish to build a list of risks, you must first define what you mean by “risk”.

Before we can identify candidate issues for consideration (Step 1) we first need a comprehensive definition of risk that ensures we do not forget anything important. I’ve complained at length that sustainability risk discourse focuses too much on environmentalism, which implies there must be other areas being left out of the discussion. In the GCF report, they’ve broadened the definition of risk to include a new category – infinite risks – and demonstrated the importance of areas previously under-explored. This all suggests that defining risk is itself a necessary and important part of building a risk-based model of sustainability. It sounds blindingly obvious, I know, but this pedantic stating of the fact is important!

This first step is a deceptively complex and important one: if we don’t define “risk” well enough, we will leave blind spots, some of which could be fatal. In other words, how well we define “risk” will determine our ability to manage it.

Getting strung out over the importance of definitions is often the work of philosophers. Usually that word evokes Ancient Greeks, or some idea of heady thoughts that make you say, “deep stuff, dude”. But philosophy can be something more basic too, like thinking hard about what “risk” means – because there are a number of ways we can frame it, and, more pragmatically, because if we don’t, we could all die.

Meet Nick Bostrom

There are few better to call in for this job than the philosopher Nick Bostrom, who has written at length on existential risk and is influential in this space. His body of work on infinite risk goes all the way back to 2002[1], eventually culminating in a number of important think tanks adopting that same framework.

Bostrom’s approach to sustainability and risk is brilliant, and a little bit disturbing. That’s a theme with him, and a reason I like his work. That darker underbelly translates into some compelling stories and visions. Sometimes his work feels less like a journal article and more like science fiction (he has written a paper arguing that we may be living inside a simulation, for example). He represents well, I think, the kind of philosopher we’ll need in the Apeilicene.

It’s no accident that his thoughts are echoed in some of the most prominent stories today, such as the film The Matrix or Netflix’s TV series Black Mirror. He is very much grounded in our collective anxieties.

He’s also quite prolific in this space. The GCF report was co-steered by him, and its ideas about infinite risk in 2015 echo earlier work on “infinite value” from a 2011 paper:

As a piece of pragmatic advice, the notion that we should ignore small probabilities is often sensible. Being creatures of limited cognitive capacities, we do well by focusing our attention on the most likely outcomes. Yet even common sense recognizes that whether a possible outcome can be ignored for the sake of simplifying our deliberations depends not only on its probability but also on the magnitude of the values at stake. The ignorable contingencies are those for which the product of likelihood and value is small. If the value in question is infinite, even improbable contingencies become significant according to common sense criteria.[2]

In other words: infinite risk completely changes the importance of probability. It doesn’t matter much how unlikely something is if that something can wipe us out.
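To put the same point in rough expected-value terms (my notation, not Bostrom’s): a contingency is ignorable when the product of its probability and its value is small, and an infinite value leaves nothing ignorable.

$$
E = p \times v, \qquad v = \infty \;\Rightarrow\; E = \infty \quad \text{for any } p > 0.
$$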

Bostrom’s model of risk

Think back to the GCF report’s formula: Risk = Probability x Impact. That formula can essentially be read straight out of Bostrom’s passage above.

It’s worth looking in some detail at this model of risk, starting with his 2002 paper[3]. The image below outlines Bostrom’s attempt to distinguish between six ‘qualitatively different’ types of risk.

RiskModelBostrom
Image adapted from Bostrom’s 2002 paper.

The grid is relatively simple. Bostrom uses scope and intensity to differentiate types of risk[4]. Scope is essentially the same as “scale”. Intensity describes how severe the outcome is – how survivable, or reversible. A personal risk that is endurable is something like your car getting stolen, while a personal risk that is terminal is that stolen car driving into your face at 100km/h. Local essentially means large-scale, but not global; a genocide in a single country is a local terminal risk.

Importantly, these are all well-known and familiar risks – things we have dealt with before. That is not to say we are necessarily prepared for them, but they are known risks. What’s new is the global, terminal risk: the spot marked X. A global-scale, terminal risk (sometimes called an “X-Risk”) is a special type, one Bostrom labels “existential”. He defines them in the following way:

Existential Risks – One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.[5]

As Bostrom argues: ‘risks in this sixth category are a recent phenomenon. This is part of the reason why it is useful to distinguish them from other risks. We have not evolved mechanisms, either biologically or culturally, for managing such risks’[6]. Evolving and developing these mechanisms is no easy task. Why? Because there is no place for the trial and error approach we typically use. We cannot learn from a mistake when its consequences are fatal. Nobody will be left to draw any lessons from it, nor have a world to apply that lesson to.

Our approach is therefore inherently and unavoidably speculative. We are trying to build our capacities for accurate foresight. We are trying to cultivate and encourage the imagination of strange futures. We do this so that we can better anticipate an unknowable future.

Bostrom makes this point in a broader sense too, arguing that the ‘institutions, moral norms, social attitudes or national security policies that developed from our experience with managing other sorts of risks’ may be less useful in dealing with existential risks, which Bostrom describes as a ‘different type of beast’[7]. Arguably, some of the best work on existential risk comes from non-traditional institutions and think tanks – groups outside the mainstream. In a somewhat paradoxical sense, they must remain on that fringe; it’s easier to think outside the box when you already live outside of it. At the same time, I do feel we need to begin paying closer attention to these kinds of institutions and their bodies of work, even if they may seem esoteric or alarmist at times.

Illustrating this forward-looking approach are outfits like the previously mentioned Global Challenges Foundation, as well as their collaborators at the Future of Humanity Institute, which focuses on AI development and other so-called “exotic” threats like the risks of molecular nanotechnology. The similarly named Future of Life Institute is yet another think tank devoted to existential risks, one that focuses (again) on the dangers of unchecked AI development. There are many such groups in existence, and while well-funded and influential, they are not yet mainstream in the way the UN is.

These kinds of groups are newer, and sometimes explore areas well outside of the typical fare of the UN and the frameworks it develops. They exemplify what Bostrom means when he says that institutions experienced with past threats may be less useful in dealing with future ones.

In the future, I hope to look more closely at groups like these; to reflect on the threats they identify as important, to investigate what kinds of thoughts drove them to these conclusions, and to look more pragmatically at anything resembling a “triaged list” they might have developed. A meta-analysis and synthesis of their work would be a good step in building a risk-based model that can enjoy some consensus and attention.

I’ve offered glimpses of this landscape, but honestly only that – glimpses. There is a wealth of work and good ideas here that deserves greater attention from the media, from academia, from policymakers, and from the public. It might be helpful to consider these groups’ methodologies too – the frameworks and approaches they use that might be of value in the broader project of creating a triaged list of existential risks. For now, I’ve highlighted just a few notable outfits and thinkers, and some of their most important ideas.


Footnotes

[1] Bostrom, N. (2002). Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards. Journal of Evolution and Technology, 9(1).

[2] Bostrom, N. (2011). Infinite Ethics. Analysis and Metaphysics, 10, 9-59.

[3] Bostrom, N. (2002). Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards. Journal of Evolution and Technology, 9(1).

[4] A third dimension, probability, is also important – especially for any kind of triage to occur. This was evident in the GCF report’s own model. Bostrom points out that probability can be “superimposed” onto the matrix he developed. So again, this earlier work seems to align with the later reports coming out of collaborations, like the one with the GCF.

[5] You might recall an earlier critique of our definition of sustainability using the word “persist” in “persist over time”, since it doesn’t capture the idea of human “flourishing”. Here, I think, Bostrom captures that idea better! A drastic curtailing of our potential is essentially the antithesis of human flourishing, so its avoidance makes flourishing possible, even more probable. This section not only latches on to Bostrom’s idea of going beyond annihilation as a concern, but also tries to address this idea of flourishing, and of maximizing human potential.

[6] Bostrom, N. (2002). Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards. Journal of Evolution and Technology, 9(1).

[7] As above.

Infinite risk

A new category of risk.

The Global Challenges Foundation (GCF) is a good example of a sustainability-minded think tank that also focuses on risk. Board members include Johan Rockström, whose pioneering work on the concept of “planetary boundaries” has been influential in sustainability. For this report, the GCF worked closely alongside a similar outfit, the Future of Humanity Institute, based out of Oxford University and led by philosopher Nick Bostrom, whose ideas continually reappear in these spaces.

As the report’s title suggests, they’ve focused on developing definitions of risk that include a new category: infinite risks. The term “infinite” here refers to their potential impact. As the formula below demonstrates, impact is part of how they essentially “calculate” risk. This simple formula drives their approach, and it’s the kind of criteria-driven classification system we need for triage to occur.

RiskFormulaGCF
The report’s formula: Risk = Probability x Impact.

Their argument is, at least partly, that impacts and probabilities are often “masked” by government and business policies, which typically under-report both parameters. Their more cold-blooded assertion is that:

“A scientific approach requires us to base our decisions on the whole probability distribution.”[1]

This formula captures what is probably common-sense thinking to most: a high-probability event with low impact isn’t as risky as a low-probability event with high impact. A 90% chance of a common cold isn’t as serious as a 1% chance of terminal cancer, right?
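To make the arithmetic concrete, here is a minimal sketch of the formula in Python. The impact scores are invented purely for illustration – the report assigns no such numbers:

```python
# A toy version of the GCF-style formula: Risk = Probability x Impact.
def risk(probability: float, impact: float) -> float:
    """Expected impact of an event."""
    return probability * impact

common_cold = risk(probability=0.90, impact=1.0)          # likely, but trivial
terminal_cancer = risk(probability=0.01, impact=1_000.0)  # unlikely, but severe

print(common_cold)      # 0.9
print(terminal_cancer)  # 10.0 -- the improbable event carries more risk

# An "infinite" impact swamps any non-zero probability:
print(risk(probability=1e-9, impact=float("inf")))  # inf
```

The last line is the whole case for an infinite risk category in miniature: no probability is small enough to make it ignorable.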

Yet, as the opening examinations of risk and sustainability highlighted, factors like probability rarely weigh into our everyday calculations of risk – we fear terrorists more than we do cheeseburgers, even though cheeseburgers are more likely to cause us harm.

If it seems odd for a bunch of serious thinkers, in a serious report, to be stating something so simple as the above formula, it’s because despite being both simple and true, this idea doesn’t get much traction in the real world.

Using the formula above, it might occur to us that some “calculations” don’t quite boil down to numbers: the impacts of some risks are essentially infinite. To illustrate this, the report opens with some history and shows how it guides their thinking today. It’s an interesting story, worth quoting in full:

It is only 70 years ago that Edward Teller, one of the greatest physicists of his time, with his back-of-the-envelope calculations, produced results that differed drastically from all that had gone before. His calculations showed that the explosion of a nuclear bomb – a creation of some of the brightest minds on the planet, including Teller himself – could result in a chain reaction so powerful that it would ignite the world’s atmosphere, thereby ending human life on Earth.

Robert Oppenheimer, who led the Manhattan Project to develop the nuclear bomb, halted the project to see whether Teller’s calculations were correct. The resulting document, LA-602: Ignition of the Atmosphere with Nuclear Bombs, concluded that Teller was wrong. But the sheer complexity drove the assessors to end their study by writing that “further work on the subject [is] highly desirable”. The LA-602 document can be seen as the first global challenge report addressing a category of risks where the worst possible impact in all practical senses is infinite.[2]

This opening scenario – igniting the atmosphere – is a great example of a risk whose impacts would be so far-reaching and devastating that they are effectively infinite. The end of Earth is not something we can quantify with a number[3]; it is an impact with no upper limit, and therefore infinite, in the authors’ minds.

This idea of infinite risk is useful to a triage model of sustainability because it is inherently focused on severity of impact as a determinant of a risk’s importance. In this model, a broad range of threats is assessed using a new definition of risk, and then, using criteria (probability and impact), we determine what to prioritize. This is essentially the three-step model of triage mentioned earlier at work, and the list of threats this model identified as important is notably different from something like the UN’s Sustainable Development Goals. A toy version of that triage is sketched below.
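Here is what those three steps might look like in code. Everything in the sketch is hypothetical – the threats and numbers are placeholders of mine, not the GCF’s own figures:

```python
# A toy triage: identify candidates, apply criteria (probability x impact), rank.
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    probability: float  # chance of occurring over some fixed time window
    impact: float       # severity score; float("inf") marks an infinite impact

# Step 1: identify candidate issues for consideration.
threats = [
    Threat("regional conflict", probability=0.20, impact=50.0),
    Threat("engineered pandemic", probability=0.02, impact=5_000.0),
    Threat("atmospheric ignition", probability=1e-12, impact=float("inf")),
]

# Steps 2 and 3: the criterion is probability x impact; rank highest first.
ranked = sorted(threats, key=lambda t: t.probability * t.impact, reverse=True)
for t in ranked:
    print(f"{t.name}: {t.probability * t.impact}")
# atmospheric ignition: inf
# engineered pandemic: 100.0
# regional conflict: 10.0
```

Any infinite-impact threat jumps straight to the top of the list, no matter how improbable – which is exactly the report’s point.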

Communications challenges.

Another notable point in the report is the way it approaches communicating sustainability and risk specifically. As mentioned elsewhere, a risk-based model presents challenges for communicating sustainability, since it can often veer into negative messaging, which in turn can cause disengagement and other undesirable outcomes.

The authors clearly recognize this, and the report has a stated focus on turning challenges into opportunities:

The idea that we face a number of global challenges threatening the very basis of our civilisation at the beginning of the 21st century is well accepted in the scientific community and is studied at a number of leading universities. However, there is still no coordinated approach to address this group of challenges and turn them into opportunities.

The interrelationship between danger and opportunity is worth identifying and targeting, because it is arguably present in many sustainability challenges. The idea here echoes an old truism, famously stated by US President John F. Kennedy: ‘In the Chinese language, the word “crisis” is composed of two characters, one representing danger and the other, opportunity’[4].

Even though he’s not quite correct, the point here is that if it’s good enough for JFK, it’s good enough for me. A focus on positive messaging and finding opportunities from the crises ahead seems to me a better starting mindset than one focused on impending doom.

Of course, as a final note, this assumes we want to change people’s minds, or that we even have an ethical right to. This is a far stickier issue and gets its own examination at the end of this section on risk.


Footnotes

[1] Global Challenges Foundation. (2015). 12 risks that threaten human civilization – The case for a new risk category. Stockholm: Global Challenges Foundation.

[2] As above.

[3] As an aside: We see this argument echoed elsewhere, perhaps. Economic costings of environmental degradation are criticised at times under the notion that nature is “priceless”, effectively of infinite value and not reducible, once again, to numbers. This resembles arguments that some risks to the environment, such as igniting the atmosphere, are infinite and unquantifiable too.

[4] JFK Library. (2019). John F. Kennedy Quotations. Retrieved from John F. Kennedy Presidential Library and Museum: https://www.jfklibrary.org/learn/about-jfk/life-of-john-f-kennedy/john-f-kennedy-quotations#C