Elsewhere I touched upon the idea that there are fates worse than death. Having your mind uploaded to a machine could make colonizing Mars easier, sure, but it could also go wrong in some horrible ways. The image below is from a more recent Bostrom paper, this time from 2013, some 11 years after the previous one. It is the same threat matrix as before, but expanded and revamped – most notably with the addition of a “(hellish)” end to the severity spectrum. Charming. 🙂
Bostrom’s framework now covers various hellish outcomes that may be even worse than annihilation, detailing them in his paper in gruesome detail. To him, these still belong to the existential risk category, because they result in the “drastic curtailing” of human potential. Another thinker has taken this idea further, though: Max Daniel, Executive Director of the Foundational Research Institute, a group that “focuses on reducing risks of dystopian futures in the context of emerging technologies” (Foundational Research Institute, 2019). Daniel suggests that x-risks with hellish outcomes are a unique type of risk in their own right: the S-risk.
The S stands for suffering 🙂
Daniel’s online essay is adapted from a talk given at Effective Altruism Global (EAG) in Boston, a conference run by the effective altruism movement. In it, Daniel focuses on Bostrom’s paper above, homing in on the “hellish” part of the grid to explore how suffering can be as negative an outcome as annihilation – yet one that is often less discussed as an existential risk.
S-Risks and Hellish outcomes – Netflix’s Black Mirror
“To illustrate what s-risks are about, I’ll use a story from the British TV series Black Mirror, which you may have seen. Imagine that someday it will be possible to upload human minds into virtual environments. This way, sentient beings can be stored and run on very small computing devices, such as the white egg-shaped gadget depicted here.”
“BEHIND THE COMPUTING DEVICE YOU CAN SEE MATT. MATT’S JOB IS TO CONVINCE HUMAN UPLOADS TO SERVE AS VIRTUAL BUTLERS, CONTROLLING THE SMART HOMES OF THEIR OWNERS. IN THIS INSTANCE, HUMAN UPLOAD GRETA IS UNWILLING TO COMPLY”
“TO BREAK HER WILL, MATT INCREASES THE RATE AT WHICH TIME PASSES FOR GRETA. WHILE MATT WAITS FOR JUST A FEW SECONDS, GRETA EFFECTIVELY ENDURES MANY MONTHS OF SOLITARY CONFINEMENT.”
The preceding excerpt, taken from Daniel’s essay, illustrates how technology might be used as a torture device that could cause (almost literally) infinitely more suffering than current technology enables. If it’s possible to upload our minds into machines, then someone with absolute control over those machines and malicious intent may be able to harm us in profoundly new and disturbing ways. It’s simply not possible today to torture someone for a thousand years. But Black Mirror shows us how it might not only be possible, but as easy as setting an egg timer. Fun stuff!
Black Mirror achieves something that, for me, few science fiction narratives do. It makes me happy with my own stupid little life that will, mercifully, end someday.
It captures the grace and joy in annihilation. It sounds pessimistic, I know, or fatalistic, or defeatist. It’s only something you understand more clearly when you witness something like Black Mirror and realize how much better death would be than some of the outcomes the creators’ dark imaginations have dished up.
While it may not seem an especially happy note to end this discussion on, I raise it here mostly because of the various overlaps of ideas. Daniel focuses on Bostrom and uses Netflix’s Black Mirror to illustrate his point. These strike me as not only important, but somewhat intuitive – ideas I think many people have and share. Black Mirror has enjoyed success because its visions of the future, unlike so much other drab Hollywood bullshit, perfectly capture our collective anxieties.
Hopefully in time these ideas about risk will grow in influence, helping shape our response to the threats ahead in a way that is more open-minded, more considered, and (hopefully again) more effective.
Bostrom, N. (2013). Existential Risk Prevention as Global Priority. Global Policy, 4(1), 15-31. doi:10.1111/1758-5899.12002
The risk-based framework I’ve mentioned elsewhere might appear to leave some things out. Climate change (of a sort) happened once already, and our species did survive it. The end of the Ice Age and the arrival of the Holocene was something that Australian Indigenous peoples, for example, managed to overcome. It even afforded them opportunities to settle in previously uninhabitable areas once covered by ice.
The onset of the Holocene climatic optimum … coincides with rapid expansion, growth and establishment of regional populations across ~75% of Australia, including much of the arid zone.
In a similar theme, birds are dinosaurs. Importantly, they’re not merely related to dinosaurs – they are actual modern-day dinosaurs; the survivors of the mass-extinction event that was terminal for most of their kin.
That previous climate change event, and that mass extinction event, might both therefore be examples of endurable risks, using Bostrom’s terminology. The groups at risk (humans, and dinosaurs) were not entirely wiped out. However, in at least the case of dinosaurs, their time as the dominant lifeform on Earth was arguably over once we got a foothold.
Recall Bostrom’s definition of existential risk:
One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.
Existential risk doesn’t just require the annihilation of life. It’s enough that the potential for that life is ‘drastically curtailed’ for it to be considered an existential threat. This isn’t the case for Indigenous Australians, of course, who may have even thrived thanks to the effects of the last great changes in climate. For them, the greatest existential risk would sadly come later, in the form of European Colonization. For the dinosaurs though, their future potential was drastically curtailed. Despite this, some still live on as birds (endure, in Bostrom’s terms).
The case of the birds could seem to pose a problem for a framework like the one I’ve built from Bostrom’s ideas – or at least it demonstrates that rigid categorization on paper won’t always translate perfectly to the real world. The framework doesn’t seem to allow for endurable risks that are also terminal, and it requires some further thinking when a risk can be either terminal or endurable depending on perspective (endurable for the birds, but not the dinosaurs?). Bostrom’s own table, interestingly, only includes examples of terminal risks that involve annihilation, not the “drastic curtailing of potential”. It’s a trickier idea to pin down. How hard is the line between them?
We can explore this idea further using the earlier example of transhumanism, which represents another “grey area”. What happens when our species (humans) no longer exists but is replaced by something that is still in some way “human”?
To the same extent that modern birds still “carry the torch” for the dinosaurs, what if some future version of us ends up doing the same for our species? What we define as “terminal” might actually vary according to personal beliefs and preferences, and that reveals the immensely sticky link between risks and threats, and people’s closely held beliefs, values, and norms.
For example, imagine we can upload our brains to machine bodies. This could present a vast new realm of possibilities for us in terms of sustainability. Why terraform Mars when we’ve already seen how well robots can do there?!
If robots can thrive there, maybe we should be more like them?
But then, to some people, the moment we do that, we lose something important about our humanity. The era of the human is effectively over, they say. The point is deeply debatable, and has been debated many times: If we replace enough of ourselves with machines, computers, and technology – to the point we are arguably no longer human – does that mean our species no longer exists? Is it a terminal or endurable event for the human species?
Timothy Morton’s ideas are relevant here too. If we are a kind of cyborg, as he says, then this question isn’t even theoretical. The same applies to his claim that industrial capitalism is a primitive AI ruling us – a claim that in some senses is quite hard to refute. Are these terminal or endurable events?
A related thought, and perhaps another way to think about this, is speciation – a term from biology referring to the formation of new and distinct species in the course of evolution. Speciation has happened with humans before; other species like the Neanderthals share a common ancestor with us, from which lineages split off at various points. Humans themselves have driven artificial speciation in other species, from dogs to domestic livestock to produce – and we’ve been doing it for tens of thousands of years. Technology has often played a key role too, in creating new species of flora and fauna (often to our own benefit). From this perspective, further technology-driven speciation of humans themselves may be possible, especially if it benefits us – or appears to.
From Corgis to Corn: A Brief Look at the Long History of GMO Technology does a great job at providing some specific examples of speciation over time, stretching back millennia:
Bringing it all back to Bostrom’s framework, are outcomes where our humanity fades away a terminal event for our species? Or because something else persists, are they endurable in some way?
Transhumans are to humans what birds are to dinosaurs. They may carry the torch of the species forward, but they do leave many things behind in the process. The potential of a flesh-and-bone species to fully flourish may very well be curtailed in a future where we shed our biological limitations and transition to new forms. It might seem a distant possibility relegated to the realm of thought experiment, but it nonetheless presents moments for reflection when it comes to ideas of risk, and especially, the risk of species annihilation. This shows, hopefully, that annihilation can mean quite a few things, and not all are as bad as the word itself might imply.
 Williams, A. N., Ulm, S., Turney, C. S., Rohde, D., & White, G. (2015). Holocene Demographic Changes and the Emergence of Complex Societies in Prehistoric Australia. PLoS ONE, 10(6).
 Bostrom, N. (2002). Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards. Journal of Evolution and Technology, 9(1).
In making podcasts for this project, we had the pleasure of talking with Elizabeth Boulton, a PhD researcher studying the work of Timothy Morton. Morton developed the concept of a hyperobject in attempting to better account for how exactly existential risks like climate change are a ‘different beast’, as Bostrom describes.
Having set global warming in irreversible motion, we are facing the possibility of ecological catastrophe. But the environmental emergency is also a crisis for our philosophical habits of thought, confronting us with a problem that seems to defy not only our control but also our understanding. Global warming is perhaps the most dramatic example of what Timothy Morton calls “hyperobjects”—entities of such vast temporal and spatial dimensions that they defeat traditional ideas about what a thing is in the first place.
The idea of a hyperobject can be confusing, but it echoes concepts from Bostrom. Global warming, for example, is a process that occurs over geological timescales. This is not our default mode of thinking, because our biology limits us to far shorter lifespans. When Bostrom says ‘we have not evolved mechanisms, either biologically or culturally, for managing such risks’ he is ultimately alluding to a very basic truth about our biology and the kind of mindset it locks us into. An argument like Morton’s essentially builds upon this idea in greater detail, arguing that climate change is an example of something so vast in time and space that it defies the ability of biologically-evolved human minds to comprehend it (See also: Building a Map, about how AI and other technological progress might help us meet sustainability challenges beyond the human mind’s ability to solve).
Natural selection did not equip us for problems like this, for the simple reason that natural selection only works with endurable threats: there must be something alive left with favourable traits to select for. Since these are terminal risks, there is no room for natural selection, and therefore, no (or exceedingly little) room for our biology to help us.
One obvious point here is that technology may help us overcome that complexity. Climate models, for example, already employ tremendously advanced AI and other technological innovations that allow us to reduce informational complexity to levels a human mind can understand and respond to.
Going further, this idea of technology-driven innovation can be a key argument in transhuman or posthuman interpretation of sustainability. In short: smarter, more capable humans can solve bigger, more challenging problems. Bostrom suggests we need new societal institutions, new priorities, new policies, and new norms – all to face new threats. Similarly, if human minds cannot comprehend these new threats, then perhaps we need new minds and maybe even new bodies, too?
‘A reckoning for our species’: the philosopher prophet of the Anthropocene
Part of what makes Morton popular are his attacks on settled ways of thinking.
His most frequently cited book, Ecology Without Nature, says we need to scrap the whole concept of “nature”. He argues that a distinctive feature of our world is the presence of ginormous things he calls “hyperobjects” – such as global warming or the internet – that we tend to think of as abstract ideas because we can’t get our heads around them, but that are nevertheless as real as hammers.
He believes all beings are interdependent, and speculates that everything in the universe has a kind of consciousness, from algae and boulders to knives and forks. He asserts that human beings are cyborgs of a kind, since we are made up of all sorts of non-human components; he likes to point out that the very stuff that supposedly makes us us – our DNA – contains a significant amount of genetic material from viruses. He says that we’re already ruled by a primitive artificial intelligence: industrial capitalism. At the same time, he believes that there are some “weird experiential chemicals” in consumerism that will help humanity prevent a full-blown ecological crisis.
If you wish to make an apple pie from scratch, you must first invent the universe.
– Carl Sagan, Cosmos, Episode 1.
The wonderful Sagan quote above demonstrates a way of thinking that embraces the complexity we can find in even the simplest thing. In this case, we’re looking at an apple pie, because Sagan was a good American patriot.
He re-frames the pie as something that exists in a broader context – inside a physical universe. He then redefines “from scratch” to mean “from the very beginning of that universe”. This perspective, he suggests, shows us the real recipe for making a pie. And it’s a lot more complicated than just slapping it in the oven for 30 minutes.
In that same spirit, we must consider very carefully how we go about defining the word “risk”. Like Sagan, we must see that this word exists in a broader context, and that coming up with a good definition might take us a little longer than we first thought.
Returning once again to that high-level framework for building a triage-focused, risk-based model of sustainability, the Global Challenges Foundation (GCF) report illustrates an important feature, not fully expressed in Step 1.
Identify candidate issues for consideration.
Develop criteria to rank them.
Apply criteria and develop a ranked list.
If you wish to build a list of risks, you must first define what you mean by “risk”.
Before we can identify candidate issues for consideration (Step 1) we first need a comprehensive definition of risk that ensures we do not forget anything important. I’ve complained at length that sustainability risk discourse focuses too much on environmentalism, which implies there must be other areas being left out of the discussion. In the GCF report, they’ve broadened the definition of risk to include a new category – infinite risks – and demonstrated the importance of areas previously under-explored. This all suggests that defining risk is itself a necessary and important part of building a risk-based model of sustainability. It sounds blindingly obvious, I know, but this pedantic stating of the fact is important!
This first of steps is a deceptively complex and important one: If we don’t define “risk” well enough, we will leave blind spots, some of which could be fatal. In other words, how well we define “risk” will determine our ability to manage it.
Getting strung out over the importance of definitions is often the work of philosophers. Usually that word evokes Ancient Greeks, or some idea of heady thoughts that make you say, “deep stuff dude”. But philosophy can be something more basic too; like thinking hard about what “risk” means – because there are a number of ways we can frame it, and more pragmatically, because if we don’t, we could all die.
Meet Nick Bostrom
There are few better to call in for this job than the philosopher Nick Bostrom, who has written at length on existential risk and is influential in this space. As an example of this, we can see a body of work on infinite risk going all the way back to 2002 that eventually culminates in a number of important think tanks using that same framework.
Bostrom’s approach to sustainability and risk is brilliant, and a little bit disturbing. That’s a theme with him and a reason I like his work. That darker underbelly translates into some compelling stories and visions. Sometimes his work feels less like a journal article and more like science fiction (he has written a paper arguing that we are living inside a simulation, for example). He represents well, I think, the kind of philosophers we’ll need in the Apeilicene.
It’s no accident that his thoughts are echoed in some of the most prominent stories today, such as the film The Matrix or Netflix’s TV series Black Mirror. His work is very much grounded in our collective anxieties.
He’s also quite prolific in this space. The GCF report was co-steered by him, and its ideas about infinite risk in 2015 echo earlier work on “infinite value” from a 2011 paper:
As a piece of pragmatic advice, the notion that we should ignore small probabilities is often sensible. Being creatures of limited cognitive capacities, we do well by focusing our attention on the most likely outcomes. Yet even common sense recognizes that whether a possible outcome can be ignored for the sake of simplifying our deliberations depends not only on its probability but also on the magnitude of the values at stake. The ignorable contingencies are those for which the product of likelihood and value is small. If the value in question is infinite, even improbable contingencies become significant according to common sense criteria.
In other words: infinite risk completely changes the importance of probability. It doesn’t matter much how unlikely something is, if that something can wipe us out.
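Bostrom’s passage can be made concrete with a few lines of Python. This is only a sketch of the expected-value idea; the probability and impact figures below are entirely invented for illustration:

```python
# A minimal sketch of "ignorable contingencies": the product of
# likelihood and value. All numbers here are made up for illustration.

def risk(probability, impact):
    """Expected impact: probability times the magnitude of the outcome."""
    return probability * impact

# A likely but modest threat...
likely_minor = risk(0.9, 10)  # 9.0

# ...versus a wildly improbable threat of infinite (unbounded) impact.
unlikely_infinite = risk(0.000001, float("inf"))  # inf

# Any nonzero probability of an infinite impact yields infinite risk,
# so it cannot be ignored, however improbable it is.
print(unlikely_infinite > likely_minor)  # True
```

The point the arithmetic makes is the same one Bostrom makes in prose: multiplying even a tiny probability by an infinite value still gives infinity.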
Bostrom’s model of risk
Thinking back to the GCF report’s formula – Risk = Probability × Impact – that formula can essentially be read straight out of Bostrom’s passage above.
It’s worth looking in some detail at this model of risk, starting with his 2002 paper. The image below outlines Bostrom’s attempt to distinguish between six ‘qualitatively different’ types of risk.
The grid is relatively simple. Bostrom uses scope and intensity to differentiate types of risk. Scope is essentially the same as “scale”. Intensity describes how severe the outcome is – how survivable, or reversible. A personal risk that is endurable is something like your car getting stolen, while a personal risk that is terminal is that stolen car driving into your face at 100 km/h. Local essentially means large-scale, but not global. A genocide in a single country is a local terminal risk.
Importantly, these are all well-known and familiar risks – things that we have dealt with before. That is not to say we are prepared for them necessarily, but that they are known risks. What’s new is the global, terminal risk. The spot marked X. A global-scale, terminal risk (sometimes called an “X-Risk”) is a special type; one Bostrom labels as “existential”. He defines them in the following way:
Existential Risks – One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.
As Bostrom argues: ‘risks in this sixth category are a recent phenomenon. This is part of the reason why it is useful to distinguish them from other risks. We have not evolved mechanisms, either biologically or culturally, for managing such risks’. Evolving and developing these mechanisms is no easy task. Why? Because there is no place for the trial and error approach we typically use. We cannot learn from a mistake when its consequences are fatal. Nobody will be left to draw any lessons from it, nor have a world to apply that lesson to.
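Bostrom’s grid lends itself to a simple lookup. The sketch below is my own rendering in Python – the category labels follow his 2002 paper, and the example risks in the comments are paraphrased from the ones discussed above:

```python
# A sketch of Bostrom's scope x intensity grid. The "existential" label
# for the global-terminal cell follows his 2002 paper; everything else
# here is illustrative.

def classify(scope, intensity):
    """Name Bostrom's category for a risk of the given scope and intensity."""
    assert scope in ("personal", "local", "global")
    assert intensity in ("endurable", "terminal")
    if scope == "global" and intensity == "terminal":
        # The spot marked X: a global-scale, terminal risk.
        return "existential (X-risk)"
    return f"{scope} {intensity}"

print(classify("personal", "endurable"))  # e.g. your car getting stolen
print(classify("local", "terminal"))      # e.g. a genocide in one country
print(classify("global", "terminal"))     # existential (X-risk)
```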
Our approach is therefore inherently and unavoidably speculative. We are trying to build our capacities for accurate foresight. We are trying to cultivate and encourage the imagination of strange futures. We do this so that we can better anticipate an unknowable future.
Bostrom makes this point in a broader sense too, arguing that the ‘institutions, moral norms, social attitudes or national security policies that developed from our experience with managing other sorts of risks’ may be less useful in dealing with existential risks which Bostrom describes as a ‘different type of beast’. Arguably, some of the best work on existential risk comes from non-traditional institutions and think tanks – groups outside the mainstream. In a somewhat paradoxical sense, they must remain on that fringe; it’s easier to think outside the box when you already live outside of it. In another sense, I do feel that we need to begin paying closer attention to these kinds of institutions and their bodies of work, even if they may seem esoteric or alarmist at times.
Illustrating this forward-looking approach are outfits like the previously mentioned Global Challenges Foundation, as well as their collaborators the Future of Humanity Institute, which focuses on AI development and other so-called “exotic” threats like the risks of molecular nanotechnology. The similarly named Future of Life Institute is yet another think tank devoted to existential risks that focuses (again) on the dangers of unchecked AI development. There are many such groups in existence, and while well-funded and influential, in comparison to the UN’s own stature they are not yet mainstream.
These kinds of groups are newer, and sometimes explore areas well outside of the typical fare of the UN and the frameworks it develops. They exemplify what Bostrom means when he says that institutions experienced with past threats may be less useful in dealing with future ones.
In the future, I hope to look more closely at groups like these; to reflect on the threats they identify as important, to investigate what kinds of thoughts drove them to these conclusions, and to look more pragmatically at anything resembling a “triaged list” they might have developed. A meta-analysis and synthesis of their work would be a good step in building a risk-based model that can enjoy some consensus and attention.
I’ve offered glimpses of this landscape, but honestly only that – glimpses. There is a wealth of work and good ideas here that deserve greater attention from the media, from academia, from policymakers, and from the public. It might be helpful to consider their methodologies too – what frameworks and approaches they use that might be of value in the broader project of creating a triaged list of existential risks. For now, I’ve highlighted just a few notable outfits and thinkers and some of their most important ideas.
 Bostrom, N. (2002). Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards. Journal of Evolution and Technology, 9(1).
 Bostrom, N. (2011). Infinite Ethics. Analysis and Metaphysics, 10, 9-59.
 Bostrom, N. (2002). Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards. Journal of Evolution and Technology, 9(1).
A third dimension, probability, is also important – especially for any kind of triage to occur. This was evident in the GCF report’s own model. Bostrom points out that probability can be “superimposed” on to the matrix he’s developed. So again, this earlier work seems to align with the later reports coming out of collaborations, like the one with GCF.
 You might recall an earlier critique of our definition of sustainability using the word “persist” in “persist over time”– since it doesn’t capture the idea of human “flourishing”. Here, I think, Bostrom captures that idea better! A drastic curtailing of our potential is essentially the antithesis to human flourishing, so its avoidance makes flourishing possible, even more probable. This section not only latches on to Bostrom’s idea of going beyond annihilation as a concern, but tries to address this idea of flourishing, and of maximizing human potential.
 Bostrom, N. (2002). Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards. Journal of Evolution and Technology, 9(1).
The Global Challenges Foundation (GCF) is a good example of a sustainability-focused think tank that also focuses on risk. Board members include Johan Rockström, whose pioneering work on the concept of “planetary boundaries” has been influential in sustainability. For this report, GCF worked closely alongside a similar outfit, the Future of Humanity Institute, based out of Oxford University and led by philosopher Nick Bostrom whose ideas continually reappear in these spaces.
As the report’s title suggests, they’ve focused on developing definitions of risk that include a new category: infinite risks. The term “infinite” here refers to their potential impact. As their formula – Risk = Probability × Impact – demonstrates, impact is part of how they essentially “calculate” risk. This simple formula drives their approach, and it’s the kind of criteria-driven classification system we need for triage to occur.
Their argument is, at least partly, that impacts and probabilities are often “masked” by government and business policies, which typically under-report both parameters. Their more cold-blooded assertion is that:
This formula captures what is probably common-sense thinking to most: A high-probability event with low impact isn’t as risky as a low-probability event with high impact. A 90% chance of a common cold isn’t as serious as a 1% chance of terminal cancer, right?
If it seems odd for a bunch of serious thinkers, in a serious report, to be stating something so simple as the above formula, it’s because despite being both simple and true, this idea doesn’t get much traction in the real world.
Using the formula above, it might occur to us that some “calculations” don’t quite boil down to numbers: the impacts of some risks are essentially infinite. To illustrate this, the report starts with some history, and shows how that guides them today. It’s an interesting story, worth quoting in full:
It is only 70 years ago that Edward Teller, one of the greatest physicists of his time, with his back-of-the-envelope calculations, produced results that differed drastically from all that had gone before. His calculations showed that the explosion of a nuclear bomb – a creation of some of the brightest minds on the planet, including Teller himself – could result in a chain reaction so powerful that it would ignite the world’s atmosphere, thereby ending human life on Earth.
Robert Oppenheimer, who led the Manhattan Project to develop the nuclear bomb, halted the project to see whether Teller’s calculations were correct. The resulting document, LA-602: Ignition of the Atmosphere with Nuclear Bombs, concluded that Teller was wrong. But the sheer complexity drove the assessors to end their study by writing that “further work on the subject [is] highly desirable”. The LA-602 document can be seen as the first global challenge report addressing a category of risks where the worst possible impact in all practical senses is infinite.
This opening scenario – igniting the atmosphere – is a great example of a risk whose impacts would be so far-reaching and devastating that they are effectively infinite. The end of Earth is not something we can quantify with a number; it is an impact with no upper limit – and therefore infinite, in the authors’ minds.
This idea of infinite risk is useful to a triage model of sustainability because it is inherently focused on severity of impacts as a determinant of a risk’s importance. In this model, a broad range of threats is assessed using a new definition of risk, and then, using criteria (probability and impact), we determine what to prioritize. This is essentially that three-step model of triage mentioned earlier at work, and the list of threats this model identified as important is notably different from something like the UN’s Sustainable Development Goals.
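The triage at work here can be sketched in a few lines of Python. The threat names and numbers below are entirely invented for illustration; the only real idea is the GCF criterion, Risk = Probability × Impact:

```python
# Step 1: candidate threats, each with a (probability, impact) pair.
# All figures here are made up purely for illustration.
threats = {
    "common recession": (0.5, 10),
    "nuclear war": (0.01, 1000),
    "atmosphere ignition": (0.0000001, float("inf")),  # "infinite" impact
}

# Step 2: the ranking criterion, Risk = Probability x Impact.
def risk_score(threat):
    probability, impact = threats[threat]
    return probability * impact

# Step 3: a ranked list. Infinite-impact threats sort to the top,
# no matter how improbable they are.
ranked = sorted(threats, key=risk_score, reverse=True)
print(ranked)  # ['atmosphere ignition', 'nuclear war', 'common recession']
```

Notice how the infinite-impact entry dominates the ranking even with a vanishingly small probability – exactly the property that makes the “infinite risk” category worth separating out.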
Another notable point in the report is the way it approaches communicating sustainability and risk specifically. As mentioned elsewhere a risk-based model presents challenges to communicating sustainability, since it can often veer into negative messaging, which in turn can cause disengagement and other undesirable outcomes.
The authors clearly recognize this, and the report has a stated focus on turning challenges into opportunities:
The idea that we face a number of global challenges threatening the very basis of our civilisation at the beginning of the 21st century is well accepted in the scientific community and is studied at a number of leading universities. However, there is still no coordinated approach to address this group of challenges and turn them into opportunities.
The interrelationship between danger and opportunity is worth identifying and targeting, because it is arguably present in many sustainability challenges. The idea here echoes an old truism, famously stated by US President John F. Kennedy: ‘In the Chinese language, the word “crisis” is composed of two characters, one representing danger and the other, opportunity’.
Even though he’s not quite correct, the point here is that if it’s good enough for JFK, it’s good enough for me. A focus on positive messaging and finding opportunities from the crises ahead seems to me a better starting mindset than one focused on impending doom.
Of course, as a final note, this assumes we want to change people’s minds, or that we even have an ethical right to. This is a far stickier issue and gets its own examination at the end of this section on risk.
 Global Challenges Foundation. (2015). 12 risks that threaten human civilization – The case for a new risk category. Stockholm: Global Challenges Foundation.
 As an aside: We see this argument echoed elsewhere, perhaps. Economic costings of environmental degradation are criticised at times under the notion that nature is “priceless”, effectively of infinite value and not reducible, once again, to numbers. This resembles arguments that some risks to the environment, such as igniting the atmosphere, are infinite and unquantifiable too.
Although it enjoys less mainstream attention, it is important to note that much work has already been done on steps 1 and 2 above, and some groups have even attempted to create ranked lists. Below is a list of such threats, from a report created in 2015.
The list is a mixed bag featuring the familiar (nuclear war, meteors, climate change) and the lesser-known, such as AI development, synthetic biology, and nanotechnology – the bad fruits of unchecked modernization.
Notably, publishing this article in 2020, after it was written in 2018–2019, we can see it was spot on to include Global Pandemic as an existential risk:
There are grounds for suspecting that such a high impact epidemic is more probable than usually assumed…The world has changed considerably, making comparisons with the past problematic. Today it has better sanitation and medical research, as well as national and supra-national institutions dedicated to combating diseases. But modern transport and dense human population allow infections to spread much more rapidly.
…and alongside it, Global System Collapse.
An economic or societal collapse on the global scale…Such intricate, interconnected systems are subject to unexpected system-wide failures caused by the structure of the network – even if each component of the network is reliable. This gives rise to systemic risk, when parts that individually may function well become vulnerable when connected as a system to a self-reinforcing joint risk that can spread from part to part, potentially affecting the entire system and possibly spilling over to related outside systems.
[At some point I hope to update this with analysis of a comparable model seen in this more recent report, linked below.]
The Drawdown project, spearheaded by Paul Hawken, identified a range of potential methods to reverse climate change, and then prioritized them according to certain criteria. Specifically, solutions were ranked according to emissions reductions and cost. Emissions reduction is the key component of reversing climate change, and so it was considered the critical indicator of a given solution’s potential impact.
The inclusion of costs is intended to act as a proxy for feasibility in general, suggesting that projects with economic gains are arguably more feasible – although ascertaining costs in some areas was too difficult for this first version of the project. Project leader Paul Hawken also stressed that various co-benefits existed with these solutions that went far beyond economic considerations. Empowering women, delivering rooftop solar, and regenerating our natural environments are all examples of ways to achieve emissions reductions that come with other profound benefits. The image below illustrates this idea beautifully:
This famous cartoon by artist Joel Pett went viral before the Copenhagen Climate Change Conference in 2009, helping promote the simple yet powerful idea of “co-benefits”.
Drawdown demonstrates prioritization, but not triage: The focus is on prioritizing solutions by their effectiveness, rather than ranking threats by level of severity. This isn’t to say Drawdown is bad, however. This is not lazy thinking, simply different. Different approaches should be encouraged because each framework lends different strengths. A drawdown-type approach can be good for identifying lesser-known issues, for example, refrigeration management (a surprising #1 on the list, as shown below) and aligning our capacity for solutions with problems we can solve in a way that maximizes our potential positive impacts. That part is commendable.
More so than its results, Drawdown’s prioritization methodology might ultimately stand as its greatest achievement. One key point here is to examine what the Drawdown project does at this higher level, because it is an instructive example of a process resembling triage:
1. Identify candidate issues for consideration.
2. Develop criteria to rank them.
3. Apply the criteria and develop a ranked list.
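The three-step process above can be sketched in a few lines of code. To be clear, this is only a toy illustration of the *shape* of the method: the issues, the scoring criterion, and all of the numbers below are invented placeholders, not findings from Drawdown or any other report.

```python
# Step 1: candidate issues, each with (made-up) scores against our criteria.
candidates = {
    "climate change": {"severity": 9, "tractability": 6},
    "resource constraints": {"severity": 8, "tractability": 5},
    "unchecked AI development": {"severity": 10, "tractability": 3},
}

# Step 2: a criterion combining the scores into a single priority value.
# (Purely illustrative - a real framework would need far richer criteria.)
def priority(scores):
    return scores["severity"] * scores["tractability"]

# Step 3: apply the criterion and produce a ranked list, highest priority first.
ranked = sorted(candidates, key=lambda name: priority(candidates[name]), reverse=True)
print(ranked)
```

The hard part, of course, is not the sorting at the end but steps 1 and 2 – deciding what belongs on the list and how to score it.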
As Turner’s previous lamentations would highlight, however, the focus with Drawdown is still problematically on just one domain – the environmental, and even more specifically, on reversing climate change (just one environmental challenge of many).
What if, instead, there was a work comparable to Drawdown that identified existential risks, developed criteria for prioritization, and produced a ranked list like the one above? Something like the list below?
Unintended consequences of AI development
Global nuclear war
What if we had something like this to help guide us?
Perhaps more humbly, I should ask: What if we already do, but it just doesn’t get the attention it deserves?
 I attended a talk on this report delivered by the editor Paul Hawken, which is where some of the information here is drawn from.
 Pett, J. (2012, March 18). Joel Pett: The cartoon seen ’round the world’. Lexington Herald Leader.
 Drawdown.org. (2017). Summary of Solutions by Overall Rank. Retrieved from Drawdown.org: https://www.drawdown.org/solutions-summary-by-rank Taken from their website in 2017. Notably, much has changed in the years since writing this. The 2020 review of Drawdown appears to have changed things considerably. The extent this undermines arguments here won’t be clear until I get a chance to take a closer look.
The grass ceiling’s role in creating ‘blind spots’
Returning once again to the core problem of our research – that of “over-greening” in sustainability – it is arguable that this issue plays a role in focusing mainstream conceptualizations of existential risk too much on environmental problems. Sure, the SDGs might mention non-environmental risks like economic inequality, but they say very little on others, such as the risks of unchecked modernization (technological advancement and human progress gone bad, to put it simply – discussed elsewhere under the term modernity).
Still today, we focus on sustainability and existential risks by placing environmental challenges above others. Climate Change is just one existential risk of many, yet it utterly dominates discourse in sustainability, including discourse explicitly related to existential risk.
Previously I spoke about the idea of reframing this era not as the Anthropocene (the era of humans) but instead as the Apeilicene (the era of threats). In 2014, Australian academic Graham Turner revisited the seminal sustainability study The Limits to Growth and found that, 40 years on, the book’s predictions of global collapse due to resource constraints (and not climate change) were still on track to occur. Part of the reason this is happening, Turner argued, is because we have failed to triage effectively: we have given too much attention to climate change as a single issue, at the cost of ignoring other highly pressing threats:
Somewhat ironically, the apparent corroboration here of the LTG BAU implies that the scientific and public attention given to climate change, whilst tremendously important in its own right, may have deleteriously distracted from the issue of resource constraints, particularly that of oil supply.
Turner’s quote here is excellent at illuminating the absence of a triage model and the profoundly dangerous consequences that absence invites – in this case, heading towards global collapse because we are not adequately prioritizing other existential threats.
In an especially important sense, it does not matter whether Turner is actually correct here. It may be the case that climate change is Threat #1 and resource constraints are Threat #2. What matters most is that our frameworks aren’t geared towards guiding us when we encounter conflicts like this.
Turner’s finding is also a good example because resource constraints remain a primarily environmental issue. Of course, just as with climate change, there are elements of this issue that are political, social, ethical, and so on. My argument would be, however, that these considerations stem from what is an environmental issue, or even just one of basic physics: resource constraints. Compare climate change or resource constraints to the existential threat of Artificial Intelligence development, and you can see more clearly the difference between “environmental” threats and others. Importantly, this is not quite the same as the difference between anthropogenic and “natural” threats since some anthropogenic threats can manifest environmentally. Climate change is the obvious example of this.
What Turner shows is that even within an “overly-greened” conceptualization of sustainability like the focus he takes, the absence of triage is still a critically important issue, and still undervalued as an approach. To put it simply, even when we’re over-greening things, we are seemingly still not prioritizing effectively.
This would suggest that the “over-greening” of sustainability, while a key issue, isn’t as important to our survival as how we manage risk.
Certainly, one other factor at play is that we are yet to develop a comprehensive system for identifying and classifying existential risks; a necessary step before we can begin to prioritize them – and doing so will be immensely difficult. Returning once again to the SDGs, we don’t just need them to be ranked but also expanded, to include other areas (blind spots) the grass ceiling has hidden from us, such as the threats from unchecked modernization, which many argue are greater than the environmental challenges ahead of us.
Efforts are certainly underway to develop threat-based frameworks, and there is a growing body of work in this field, but it remains a long way from garnering the attention and profile that other frameworks enjoy (such as the SDGs), and even further away from shaping how we communicate sustainability. So, to help shed some greater light on this work, we’ll look closer at some examples.
 The definition of existential risk, for now, can be considered a global-scale threat of annihilation to our species, or a similarly catastrophic curtailing of our potential. It is explored in greater detail elsewhere in other articles. See: #existential risk.
 The recent release of the IPCC warning that we have only 12 years to address climate change has only amplified the growing dominance of climate change in the broader “risk discourse”.
 In plain speak: the apparent confirmation of the Limits to Growth “Business as Usual” model – in other words, confirmation that the scenario in which global collapse occurs is underway.
 Turner, G. (2014). ‘Is Global Collapse Imminent?’. Melbourne: Melbourne Sustainable Society Institute, The University of Melbourne.
 Something unavoidable, to be fair, given his project involves studying an environmental work.
Typically, this term appears in a medical context, where it refers to the process of determining priority of treatment based on the severity of a condition. For example, a hospital emergency room may have multiple patients to treat, and it’s triage that helps determine how to prioritize their treatment. Urgent and severe problems are dealt with first, and so on down the list of patients.
This would probably strike most people as “common sense” – it is clearly stupid to treat someone’s toothache while another patient with multiple gunshot wounds dies in the waiting room from lack of attention. Importantly, however, not every situation is so clear cut. Sometimes a patient with an urgent medical problem may not obviously present that way, while someone with a lesser issue can make a lot of noise demanding urgent attention (a toothache is a good example – exceptionally painful at times, but rarely life-threatening). The identification and classification of risk is, therefore, just as critical an aspect of triage as the ensuing prioritization that it informs. In other words, we need a way to identify the “quiet” but high-risk patients – the gaping chest wound that doesn’t scream, rather than the toothache that does – just as we need a way to identify high-risk threats that may not obviously present themselves.
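The point about “quiet” patients can be made concrete with a tiny sketch. In the toy example below, triage ranks purely by an assessed severity score, not by how loudly a problem presents – the patients and scores are invented for illustration (higher = worse).

```python
import heapq

# Invented example patients: (description, assessed severity).
patients = [
    ("toothache (screaming)", 2),
    ("chest wound (quiet)", 9),
    ("sprained ankle", 3),
]

# heapq is a min-heap, so we push negative severity to pop the worst case first.
queue = []
for name, severity in patients:
    heapq.heappush(queue, (-severity, name))

# Treatment order is worst-first, regardless of how much noise a patient makes.
treatment_order = [heapq.heappop(queue)[1] for _ in range(len(queue))]
print(treatment_order)
```

The quiet chest wound comes out ahead of the screaming toothache precisely because the ranking runs on assessed risk, not presentation – which is the whole argument for doing the identification and classification work carefully.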
[Editor’s comment: The other part of triage which you haven’t mentioned is that patients who are urgent but too far gone are not treated – what are the sustainability parallels? Do we need to jettison certain causes in favour of those that are still saveable?]
The point here is that triage needs two things to work well: it’s not just about ranking threats, it’s also about identifying them in the first place – as many as possible that might be of relevance or importance. A good classification and identification scheme can help us in the more uncertain situations, when multiple high-priority issues present simultaneously. One can easily argue this is the case in sustainability, where climate change, resource constraints, economic inequality, human rights, peace and justice, and other issues all present as equally urgent (and frustratingly, are often interrelated – making it harder to separate and then prioritize just one).
The UN Sustainable Development Goals exemplify, quite well I think, how we already have frameworks attempting something like the first half of the work of triage – identifying threats. It’s not quite framed that way, but many threats can be read out of each goal. Eliminating poverty, for example, reduces the risk of harm at a personal level, and reduces the risk of broader societal disorder – and the reason we want to do this is, partly, to reduce such risks. The UN even demonstrates thinking “beyond the grass ceiling” and includes issues like economic inequality, justice and peace, and human rights. What the SDGs lack, however, is an explicit risk-based focus, and any kind of serious ranking or triage. In a world of finite time and other resources, should we focus on SDG #1 or #10? Which one minimizes potential risks the most? Clearly, the framework isn’t all that useful for a triage approach.
Now, perhaps, we need to begin the work of sorting out what’s most important from lists like these. We need something as accessible and well-supported as the SDGs, but we need it ranked, so that we can prioritize. This won’t be easy, since developing the criteria to rank these things would be immensely difficult and complex, and because, as said earlier, these issues often interrelate. Despite the challenge of this task, we must take it on. Without a roadmap of prioritized risks, we are blindly hoping the things we focus most on (like climate change) are indeed the biggest threats. It’s fair to question if this focus comes with a cost.
Why can’t we do both?
We can do two things at once, of course. Issues can be “equally important” too. Just as they are interrelated. But we cannot pretend we have infinite resources to tackle sustainability challenges either. Money is limited. People’s attention spans are limited. The time people can devote to the cause actively and consciously is limited. Time, especially, is limited.
It is within the context of these constraints that a simple truth emerges: we need some level of focus here. We can’t just say “it’s all important” and proceed haphazardly, according to our own interests and agendas. Do that, and there’s a real risk we run out of time to fix certain problems in an optimal way. This truth is as uncomfortable as it is obvious – it implies there will be sacrifices; issues deprioritized along the way. A good example of this playing out, as I write these words, is the conflict between quarantine protocols protecting public health, and people’s right to protest. The clash between the Black Lives Matter movement and the restrictions of COVID-19 illustrates well the kinds of difficult conversations ahead. In the Apeilicene, we are likely to see these situations with increasing frequency, as time-sensitive threats arise and demand extraordinarily difficult choices of us.
Similarly, we can’t expect that the current focal points themselves aren’t the product of political agendas and self-interest. If triage enables progress via focused prioritization, then it demands sacrifice, as I’ve said. And if sacrifices are required, they are overwhelmingly more likely to be demanded of the powerless by the powerful. This perspective sheds some light on the landscape of global sustainable development. Often it is rich, industrialized nations pressuring less-wealthy countries to leapfrog coal and jump straight to more expensive solar, for example. In other words, the powerful expecting the powerless to do the heavy lifting.
What about root causes?
This is an issues-oriented approach, clearly. We might question why we aren’t identifying whole systems as problems. Capitalism, consumerism, and so on. The guy in the emergency room with heart attack symptoms isn’t just there because of his individual circumstances. There are larger, structural forces like globalization, modernism, reductions in manual labour, and consumerism that likely shaped his individual circumstances and the choices he could exercise.
But triage is not about root causes or systemic, structural change. It is what you practice in the emergency room. When someone’s heart is about to stop, it’s that immediate crisis you focus on, not systemic change.
Explaining the lack of triage in mainstream sustainability
In contrast with a triage approach to sustainability, triage in health care is not a peripheral concern – it is a core practice supported by years of research and used in basically every medical institution around the world. Why then, in sustainability, is this same approach not taken?
One possible explanation is that focusing on risks – on problems and challenges – often places a negative frame on a given issue, making it harder to identify potential opportunities. Similarly, focusing on risks can present challenges for communicating sustainability. Research shows that fearful messages cause disengagement, apathy, and a sense of hopelessness and incompetence. Continued exposure causes most people to “tune out the messages and move on to other, more pleasant concerns”. Despite its obvious importance, a threat-based approach to sustainability clearly presents some challenges and considerations, and in looking at some examples of threat-based models, we’ll start to see the devil, as always, is lurking in the details.
 Robertson, M. (2017). Communicating Sustainability. New York: Routledge.
It’s not like we’re doing nothing, but overall, the way we handle risk seems shockingly bad at times, as if we’re a species with a subconscious death wish.
We’re not throw-yourself-off-a-building suicidal, though. We’re more like someone who starts their day with a Vodka and Coke, chain-smoking Marlboro Reds in between downing pizza slices as they binge TV, while living in fear of terrorists being the cause of our demise.
We’re not violently self-destructive; we’re passively, ignorantly, apathetically, and indulgently so. Heart disease and cancer, followed by many ailments found disproportionately in affluent countries (lung cancer, for example), are what kill us Westerners, but rarely what scare us.
‘According to the New America Foundation, jihadists killed 94 people inside the United States between 2005 and 2015. During that same time period, 301,797 people in the US were shot dead’.
Despite this, Americans are more afraid of terror attacks and government-enforced gun restrictions than they are of gun violence.
Clearly – and this is just one of many potential examples – we do not manage risk well. As the cartoon implies, influential societal institutions such as the media play a role in that response; in shaping our fears. The same is true of politicians, who stoke them.
Our response to these types of threats is often an irrational and misplaced fear based on a warped view of the dangers we face. The same is true elsewhere in how we look at, and manage, risks to our species-level sustainability. The question here is a simple one: if we can’t think about and deal rationally with the threats that we, personally, are confronted with, then what hope do we have of thinking about and dealing rationally with the threats that we, collectively as a species, face? Given these include serious, potentially civilization-ending threats, avoiding a cold reckoning of the facts could literally get us all killed.
This failure of risk management is evident in mainstream sustainability too, I’d argue. The Sustainable Development Goals (SDGs) enjoy a global profile, are highly resourced, and are operationalized around the world. What we seem to lack, however, are any equally high-profile frameworks that identify and classify threats and, using carefully developed criteria, prioritize them. We don’t have an SDG-type list of threats – and that strikes me as not only odd, but dangerous.
[A comment from our editor here:] Where would you place Rockström’s planetary boundaries in relation to this assertion? Obviously it doesn’t prioritise within the 9, but it suggests they think those are the 9 greatest priorities. [A good point that will have to be revisited someday!]
Focusing on risks might seem like a strange suggestion, but it’s quite a common practice in other areas. Every day on my way to university I pass a fire danger rating sign – an amazingly simple and important risk indicator.
If not strange, then it might seem a little paranoid to assemble a list of risks and focus on minimizing them, but that’s what our era calls for. We’re doing things today we’ve never done before.
Tomorrow’s challenges are unlike any our ancestors faced. It’s not (just) dinosaur-killing meteors we must account for now, it’s also things that are closer, nearer-term, and often of our own making. Climate change. Nuclear war. Super viruses. The myriad unintended consequences of AI development or biotechnology. We’re quite good at making brand new problems for ourselves these days. Worse still, we’re not entirely sure which problems are the most threatening. It might end up being something unexpected that wipes us all out.
Recognizing this, there are numerous studies, entire think tanks even, dedicated to a threat-based approach to sustainability, developed by people who want us to consider worst-case scenarios and all the other negative outcomes our well-intentioned blundering can bring about. With their list of humanity’s potential future sins in hand, they want us to take steps now to avoid mistakes they say could be significant, or even fatal.
This is a new kind of risk identification and management and the body of work around it is growing, but this mode of thinking – a type of triage – is quite a bit more established. It’s going to need to become a lot more prominent in sustainability though, if long-term survival is our goal.
 Anderson, J. (2017, January 31). The psychology of why 94 deaths from terrorism are scarier than 301,797 deaths from guns. Quartz.
 Chapman University. (2016). Chapman University Survey of American Fears. Orange, California: Chapman University.