Defining Risk

If you wish to make an apple pie from scratch, you must first invent the universe.

 –  Carl Sagan, Cosmos, Episode 1.

The wonderful Sagan quote above demonstrates a way of thinking that embraces the complexity we can find in even the simplest thing. In this case, we’re looking at an apple pie, because Sagan was a good American patriot.

He re-frames the pie as something that exists in a broader context – inside a physical universe. He then redefines “from scratch” to mean “from the very beginning of that universe”. This perspective, he suggests, shows us the real recipe for making a pie. And it’s a lot more complicated than just slapping it in the oven for 30 minutes.

In that same spirit, we must consider very carefully how we go about defining the word “risk”. Like Sagan, we must see that this word exists in a broader context, and that coming up with a good definition might take us a little longer than we first thought.

Returning once again to that high-level framework for building a triage-focused, risk-based model of sustainability, the Global Challenges Foundation (GCF) report illustrates an important feature that is not fully expressed in Step 1.

  1. Identify candidate issues for consideration.
  2. Develop criteria to rank them.
  3. Apply criteria and develop a ranked list.

If you wish to build a list of risks, you must first define what you mean by “risk”.

Before we can identify candidate issues for consideration (Step 1) we first need a comprehensive definition of risk that ensures we do not forget anything important. I’ve complained at length that sustainability risk discourse focuses too much on environmentalism, which implies there must be other areas being left out of the discussion. In the GCF report, they’ve broadened the definition of risk to include a new category – infinite risks – and demonstrated the importance of areas previously under-explored. This all suggests that defining risk is itself a necessary and important part of building a risk-based model of sustainability. It sounds blindingly obvious, I know, but this pedantic stating of the fact is important!

This first step is a deceptively complex and important one: if we don’t define “risk” well enough, we will leave blind spots, some of which could be fatal. In other words, how well we define “risk” will determine our ability to manage it.

Getting strung out over the importance of definitions is often the work of philosophers. Usually that word evokes Ancient Greeks, or some idea of heady thoughts that make you say, “deep stuff, dude”. But philosophy can be something more basic too, like thinking hard about what “risk” means – because there are a number of ways we can frame it and, more pragmatically, because if we don’t, we could all die.

Meet Nick Bostrom

There are few better to call in for this job than the philosopher Nick Bostrom, who has written at length on existential risk and is influential in this space. As an example of that influence, his body of work on infinite risk goes all the way back to 2002[1] and eventually culminated in a number of important think tanks adopting that same framework.

Bostrom’s approach to sustainability and risk is brilliant, and a little bit disturbing. That’s a theme with him and a reason I like his work. That darker underbelly translates into some compelling stories and visions. Sometimes his work feels less like a journal article and more like science fiction (he has written a paper arguing that we are living inside a simulation, for example). He represents well, I think, the kind of philosophers we’ll need in the Apeilicene.

It’s no accident that his thoughts are echoed in some of the most prominent stories today, such as the film The Matrix or Netflix’s TV series Black Mirror. He is very much grounded in our present cultural moment.

He’s also quite prolific in this space. The GCF report was co-steered by him, and its 2015 ideas about infinite risk echo his earlier work on “infinite value” from a 2011 paper:

As a piece of pragmatic advice, the notion that we should ignore small probabilities is often sensible. Being creatures of limited cognitive capacities, we do well by focusing our attention on the most likely outcomes. Yet even common sense recognizes that whether a possible outcome can be ignored for the sake of simplifying our deliberations depends not only on its probability but also on the magnitude of the values at stake. The ignorable contingencies are those for which the product of likelihood and value is small. If the value in question is infinite, even improbable contingencies become significant according to common sense criteria.[2]

In other words, infinite risk completely changes the importance of probability: it doesn’t matter much how unlikely something is if that something can wipe us out.

Bostrom’s model of risk

Think back to the GCF report’s formula, Risk = Probability x Impact: it can essentially be read straight out of Bostrom’s passage above.
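To make that concrete, here is a minimal sketch of how a probability-times-impact ranking might look. The hazards and numbers are made up purely for illustration (none of them come from the GCF report or from Bostrom); the point is only that once the impact column includes everything at stake, a vanishingly small probability no longer makes a hazard ignorable.

```python
# A minimal sketch of a GCF-style ranking, Risk = Probability x Impact.
# All hazards and figures below are illustrative placeholders, not data
# from the GCF report or from Bostrom.

hazards = {
    # name: (probability over some fixed period, impact in lives at stake)
    "regional flood":         (0.20, 1e4),
    "severe pandemic":        (0.01, 1e8),
    "extinction-level event": (1e-6, 1e16),  # tiny probability; the impact here
                                             # crudely stands in for all present
                                             # and future lives lost
}

# Steps 2 and 3 of the triage framework: score each candidate, then rank.
ranked = sorted(
    ((prob * impact, name) for name, (prob, impact) in hazards.items()),
    reverse=True,
)

for score, name in ranked:
    print(f"{name:24s} risk score = {score:,.0f}")
```

Run as written, the extinction-level row tops the list despite its one-in-a-million probability, which is exactly the common-sense point Bostrom makes about the product of likelihood and value.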

It’s worth looking in some detail at this model of risk, starting with his 2002 paper[3]. The image below outlines Bostrom’s attempt to distinguish between six ‘qualitatively different’ types of risk.

[Figure: Bostrom’s risk matrix, plotting scope (personal, local, global) against intensity (endurable, terminal). Image adapted from Bostrom’s 2002 paper.]

The grid is relatively simple. Bostrom uses scope and intensity to differentiate types of risk[4]. Scope is essentially the same as “scale”. Intensity describes how severe the outcome is: how survivable, or reversible. A personal risk that is endurable is something like your car getting stolen, while a personal risk that is terminal is that stolen car driving into your face at 100 km/h. “Local” essentially means large-scale, but not global. A genocide in a single country is a local terminal risk.
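For readers who like things spelled out, the whole grid also fits in a few lines of code. This is just my own rough encoding of the six cells (nothing here comes from Bostrom’s paper itself), with the global, terminal cell singled out as the special case discussed next:

```python
# A rough encoding of Bostrom's scope x intensity grid (my own sketch,
# not code from Bostrom or the GCF).
SCOPES = ("personal", "local", "global")
INTENSITIES = ("endurable", "terminal")

def classify(scope: str, intensity: str) -> str:
    """Return the risk category for a given scope/intensity pair."""
    if scope not in SCOPES or intensity not in INTENSITIES:
        raise ValueError("unknown scope or intensity")
    if scope == "global" and intensity == "terminal":
        return "existential"  # the cell Bostrom singles out, discussed below
    return f"{scope}, {intensity}"

print(classify("personal", "endurable"))  # e.g. your car getting stolen
print(classify("local", "terminal"))      # e.g. a genocide in a single country
print(classify("global", "terminal"))     # -> "existential"
```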

Importantly, these are all well-known and familiar risks – things that we have dealt with before. That is not to say we are prepared for them necessarily, but that they are known risks. What’s new is the global, terminal risk. The spot marked X. A global-scale, terminal risk (sometimes called an “X-Risk”) is a special type; one Bostrom labels as “existential”. He defines them in the following way:

Existential Risks – One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.[5]

As Bostrom argues: ‘risks in this sixth category are a recent phenomenon. This is part of the reason why it is useful to distinguish them from other risks. We have not evolved mechanisms, either biologically or culturally, for managing such risks’[6]. Evolving and developing these mechanisms is no easy task. Why? Because there is no place for the trial and error approach we typically use. We cannot learn from a mistake when its consequences are fatal. Nobody will be left to draw any lessons from it, nor have a world to apply that lesson to.

Our approach is therefore inherently and unavoidably speculative. We are trying to build our capacities for accurate foresight. We are trying to cultivate and encourage the imagination of strange futures. We do this so that we can better anticipate an unknowable future.

Bostrom makes this point in a broader sense too, arguing that the ‘institutions, moral norms, social attitudes or national security policies that developed from our experience with managing other sorts of risks’ may be less useful in dealing with existential risks which Bostrom describes as a ‘different type of beast’[7]. Arguably, some of the best work on existential risk comes from non-traditional institutions and think tanks – groups outside the mainstream. In a somewhat paradoxical sense, they must remain on that fringe; it’s easier to think outside the box when you already live outside of it. In another sense, I do feel that we need to begin paying closer attention to these kinds of institutions and their bodies of work, even if they may seem esoteric or alarmist at times.

Illustrating this forward-looking approach are outfits like the previously mentioned Global Challenges Foundation, as well as their collaborators the Future of Humanity Institute, which focuses on AI development and other so-called “exotic” threats like the risks of molecular nanotechnology. The similarly named Future of Life Institute is yet another think tank devoted to existential risks that focuses (again) on the dangers of unchecked AI development. There are many such groups in existence, and while they are well-funded and influential, they are not yet mainstream in comparison to the UN’s own stature.

These kinds of groups are newer, and sometimes explore areas well outside of the typical fare of the UN and the frameworks it develops. They exemplify what Bostrom means when he says that institutions experienced with past threats may be less useful in dealing with future ones.

In the future, I hope to look more closely at groups like these; to reflect on the threats they identify as important, to investigate what kinds of thoughts drove them to these conclusions, and to look more pragmatically at anything resembling a “triaged list” they might have developed. A meta-analysis and synthesis of their work would be a good step in building a risk-based model that can enjoy some consensus and attention.

I’ve offered glimpses of this landscape, but honestly only that – glimpses. There is a wealth of work and good ideas here that deserve greater attention from the media, from academia, from policymakers, and from the public. It might be helpful to consider their methodologies too – what frameworks and approaches they use that might be of value in the broader project of creating a triaged list of existential risks. For now, I’ve highlighted just a few notable outfits and thinkers and some of their most important ideas. 


Footnotes

[1] Bostrom, N. (2002). Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards. Journal of Evolution and Technology, 9(1).

[2] Bostrom, N. (2011). Infinite Ethics. Analysis and Metaphysics, 10, 9-59.

[3] Bostrom, N. (2002). Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards. Journal of Evolution and Technology, 9(1).

[4] A third dimension, probability, is also important – especially for any kind of triage to occur. This was evident in the GCF report’s own model. Bostrom points out that probability can be “superimposed” on to the matrix he’s developed. So again, this earlier work seems to align with the later reports coming out of collaborations, like the one with GCF.

[5] You might recall an earlier critique of our definition of sustainability using the word “persist” in “persist over time” – since it doesn’t capture the idea of human “flourishing”. Here, I think, Bostrom captures that idea better! A drastic curtailing of our potential is essentially the antithesis to human flourishing, so its avoidance makes flourishing possible, even more probable. This section not only latches on to Bostrom’s idea of going beyond annihilation as a concern, but tries to address this idea of flourishing, and of maximizing human potential.

[6] Bostrom, N. (2002). Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards. Journal of Evolution and Technology, 9(1).

[7] As above.
