Podcast Ep4: Transcript

Subscribe to us on Spotify, Apple Podcasts, Google Podcasts, and Stitcher.

SUMI: All planning and recording for this episode have taken place on Ngunnawal Country.

Hi, it’s Sumi, one of the hosts of The Grass Ceiling podcast. This episode’s a little bit different from what we usually do. So, The Grass Ceiling is a project that comprises both podcast episodes and written articles. The articles are all on our website, and they explore in depth some of the concepts in sustainability that maybe were too long to cover here, or are just better read than heard.

A lot of time and research have gone into writing them, and they’re really mind-blowing. I know, because I’m still recovering from having my mind blown by some of the stuff Nick’s written. Our supervisor, Edwina (shoutout!), can vouch for that too.

Today, we’ve got a bit of a live reading-slash-discussion that follows on from some of the philosophical stuff that we talked about in our last episode. Nick’s going to be reading from one of the pieces he’s written, and we’ll kind of have chats here and there if things come up.

As always, thanks for tuning in, and you can find this article and way more at www.thegrassceiling.net.

This is The Grass Ceiling, a guided tour of sustainability. Sustainability is ever-changing and complex, so join us as we break it down and figure it out. Nick, take it away!

NICK: This section I’m reading from is Chapter 17.3, Drawdown: A Case Study in Prioritisation. Essentially, what I’m doing in this chapter is looking at this project called Drawdown, which was led by Paul Hawken. Hawken was essentially overseeing a huge army of scientists. The purpose of the Drawdown project was to find ways to reverse climate change. Hawken gave a talk on this project at ANU, and that’s where I’m drawing a lot of the knowledge about it from – really good to get it firsthand from the man himself. There were a lot of things he said in that talk that were a bit harder to find online, so that’s kind of what I wanted to share and what I wanted to focus on.

As I said, they were looking for solutions on ways to reverse climate change, and Hawken made a specific point about this. He was like, he doesn’t understand, philosophically, the idea of mitigation – why would you mitigate something that’s trying to destroy you? You want to reverse that process; you never want it to even happen in the first place. With that in mind, that’s sort of what the purpose of the Drawdown project was.

SUMI: What’s mitigation?

NICK: Mitigation, as opposed to reversing, would be to reduce the effects of a process, rather than stop it from happening. So you might lessen the worst impacts of climate change, as opposed to reversing the whole process entirely.

SUMI: Is it like making earthquake-resilient buildings?

NICK: Exactly, as compared to stopping earthquakes entirely. Not really possible for us to do right now, but that’s a great example that illustrates the difference between the two. I’ll launch into what I’ve written, and maybe paraphrase as I go along.

The purpose of the Drawdown project was to identify a range of potential methods to reverse climate change, and then prioritise them according to certain criteria – in this case, emissions reductions and costs. Emissions reduction is the key component of reversing climate change, and so this was considered the critical factor of a given solution’s potential impact.

The inclusion of costs is intended to act as a proxy for feasibility in general, suggesting that projects with economic gains are arguably more feasible – although in many cases, as Hawken noted during his talk, ascertaining costs in some areas was really difficult, oftentimes too difficult to put a dollar figure on, at least for this first version of the project. He also stressed something worth pointing out here, which is that various co-benefits existed with these solutions that go far beyond purely economic considerations or considerations about emissions reductions. For example, empowering women to choose how many children they want to have, educating young girls, delivering rooftop solar, and regenerating our natural environment – these are all examples of ways to achieve emissions reductions that may generate economic value, but they obviously also come with some pretty profound other benefits. Allowing women the same kind of freedom and autonomy that men get is something that goes far beyond just economic or environmental considerations. It just so happens to have huge emissions reductions benefits as well.

In the essay, there’s a comic included. It’s a very famous cartoon by the artist Joel Pett. It went viral before the Copenhagen climate change conference in 2009, and it kind of helped promote this idea of simple yet powerful co-benefits that come along with some of these things. So, just to visually paint a picture for you, if you’ve not seen the cartoon – I’m sure some of you already know the one I’m talking about – there’s a guy up there delivering a PowerPoint to a big crowd at a climate summit. There’s a bullet-pointed list of things and it says sustainability, green jobs, liveable cities, renewables, clean water, air, healthy children, et cetera. And somebody in the crowd stands up and goes, “What if it’s a big hoax and we create a better world for nothing?” It’s like, even in the process of trying to combat climate change, even if it was a hoax, we would still create all these other huge co-benefits. That’s just an important thing to point out, even if it’s not strictly related to this chapter.

The key focal point of this chapter is that Drawdown demonstrates something we can call prioritisation. Prioritisation feeds into a larger idea, which this whole chapter is about, which is triage. Triage is, in a nutshell, the idea that you should treat the most severe problem first, and then go on down the list. It’s a way of doing treatment that is based on prioritisation, and that prioritisation is based on the severity of a condition.

SUMI: This is something that they do in the emergency room at hospitals?

NICK: Right, exactly. Triage typically belongs in a medical context. You might be a triage nurse working, say, in an emergency ward, and people come in and present with different problems. Your job – and it’s quite a tricky job, it’s quite a skill – is to figure out who should receive treatment first. What complicates this job, what makes it more difficult, is that you might have somebody who’s very vocal about their problem, but they might not be in a life-threatening situation. You might have somebody who’s very quiet, and they might be quietly dying in the corner. So you need to be very good at identifying risks even when the behaviour or symptoms those risks exhibit aren’t as obvious as they might be.

Obviously, it’s very easy to deal with somebody who has a toothache versus somebody who’s wheeled in with a gunshot wound to the head. But when two people are wheeled in and both of them are dying from poison, and you don’t know what it is, it’s very hard to figure out who to treat first and so on. That’s the analogy I use to describe climate change and sustainability more broadly: we lack a triage-based approach to sustainability. If we’re going to have a triage-based approach to sustainability, one of the first things we need to do is get really good at prioritisation. That’s the whole reason I’m looking at this Drawdown project: it’s a really good example of how to go about prioritising something.

The only problem, if there is a problem here, is that Drawdown is focused just on climate change. What if we had, instead, a model of sustainability focused on all the different risks facing us, and then prioritised those? It would use some sort of criteria – Paul Hawken’s project used net savings, net costs, and emissions reductions to prioritise all the candidate solutions for reversing climate change. What would be the metrics that we use to rank and prioritise how we deal with different global existential threats?

Drawdown demonstrates prioritisation, but not triage: the focus is on prioritising solutions by their effectiveness, rather than ranking threats by level of severity. This isn’t to say Drawdown is bad, however. This isn’t lazy thinking, it’s just different. Different approaches should be encouraged because each framework is going to lend different strengths. A Drawdown-type approach can be good for identifying lesser-known problems: for example, refrigerant management – a surprising number one on the list. If we did better at cooling and heating homes around the world, we would have massive reductions in global emissions. And that’s the beauty, I think, of doing a number-crunching thing that prioritises in that way, because it can lead you against your own intuitions, take you to places you might not otherwise have found, and identify solutions that you may not have prioritised otherwise.

As I’ve said to you once before, Sums, I think that more so than the results of Drawdown, this methodology, this prioritisation that they’re doing, might ultimately end up being the greatest achievement of this project. One key point here is to examine what that project is doing at that high level, because it’s quite instructive in highlighting a process that might resemble triage, or the first steps towards a triage model. The first step is to identify candidate issues for consideration. The next step is to develop criteria, so that you can rank them. And then, the third step is simply combining the two; you apply those criteria and you develop a ranked list.
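To make those three steps concrete, here’s a minimal sketch in Python. The solution names and figures are invented stand-ins, not Drawdown’s actual data, and the criteria function is just one plausible way to encode “bigger emissions reductions first, lower cost as tie-breaker”:

```python
# Step 1: identify candidate solutions. (Names and figures are
# purely illustrative, not Drawdown's real data.)
candidates = [
    {"name": "solution A", "emissions_cut": 90.0, "net_cost": 900.0},
    {"name": "solution B", "emissions_cut": 60.0, "net_cost": 0.0},
    {"name": "solution C", "emissions_cut": 25.0, "net_cost": 450.0},
]

# Step 2: develop criteria. Here, larger emissions cuts rank first,
# with lower net cost breaking ties.
def priority(solution):
    return (-solution["emissions_cut"], solution["net_cost"])

# Step 3: apply the criteria to produce a ranked list.
for rank, s in enumerate(sorted(candidates, key=priority), start=1):
    print(rank, s["name"], s["emissions_cut"], s["net_cost"])
```

Swap in different criteria – severity of a threat instead of effectiveness of a solution, say – and the same three-step skeleton starts to look like triage rather than Drawdown-style prioritisation.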

As I mentioned though, the problem with Drawdown is it’s only focused on climate change – and even then, you could say it’s focused on an environmental issue; it’s still stuck beneath the grass ceiling. What if instead there was a work comparable to it that identified existential risks, developed criteria for prioritising them, and produced a ranked list – kind of like Drawdown has? What would that look like?

SUMI: Before you launch into it, could you define what an existential risk is?

NICK: We’re on our way to understanding what that is, and it’s very difficult to explain all these concepts because they’re interrelated. Just as a working definition until we get there: a global existential risk is “a risk that threatens to either annihilate humans, or to drastically curtail our potential”. It doesn’t necessarily need to annihilate us, but just leave us in a really dire situation.

SUMI: So it’s necessarily a human-centric idea, existential risk?

NICK: Yeah, pretty much. Although, it can take a non-anthropocentric perspective insofar as if we destroy the planet, then it will also destroy all humans. But yeah, it is very anthropocentric, it’s worth noting.

This sort of stuff enjoys less mainstream attention than, say, climate change does. But it’s worth noting and giving due credit that there has been a lot of work done already in attempting to create a kind of ranked list of global existential threats. An example comes from a 2015 report called “Global Challenges: 12 risks that threaten human civilization – The case for a new risk category”. This comes out of a group called the Global Challenges Foundation. I’ll talk about them briefly in a second, but I just want to talk about the list of 12 risks that they identified, because it’s a bit of a mixed bag. It features some familiar things, you know, nuclear war, meteors, climate change, but then it also features some lesser-known stuff, such as AI (Artificial Intelligence) development, synthetic biology, nanotechnology – basically the bad fruits of unchecked modernisation.

I’ll just read quickly. We have extreme climate change, nuclear war, a global pandemic, ecological catastrophe, global system collapse, major asteroid impact, synthetic biology, super-volcano, nanotechnology, artificial intelligence, and future bad global governance. And then another one, just represented by a question mark: “unknown consequences”. We’ve talked about this at other times, how the development of plastic was originally the saviour of the environment, and then had some unknown, unintended consequences much later down the line. Then it became a kind of bane of the environment.

SUMI: A lot of existential risks that you’ve just listed there, that that article talked about, I can’t even begin to wrap my head around what some of those are. Like, I have no idea what you mean by nanotechnology, for example. Do you want to talk a bit about some of them, or do you just want to talk about what unites all of these different risks?

NICK: There isn’t really a common theme, other than the fact that their impact can be so severe, that it constitutes a global existential risk; it could either annihilate us, or it could drastically curtail our future potential.

In terms of nanotechnology specifically, there’s actually kind of a laundry list of all of the things that could go wrong with nanotechnology. Just to give one example… nanotechnology is essentially just working with things at very tiny scales – that could be biotechnology, that could be us modifying crops or livestock, for example, or it could be genetically modifying humans and so on. It could be the creation of an engineered virus, so an engineered pandemic rather than a naturally-created or naturally-mutated one.

Nanotechnology could also involve things like, say we develop a new robot, a kind of drone that can go through your bloodstream – it’s like the size of a red blood cell – and it goes around zapping all the bad stuff. But what happens if somebody hacks that and we have 60 million people with these things inside of them and suddenly they go rogue? There could be some serious problems there. Or what happens if there’s just an unintended consequence from having these things running around inside of us? So that’s at least part of the problem of nanotechnology.

Nanotechnology typically integrates with other sorts of existential risks. So, you often see it, for example, in dystopian science fiction, combining with the idea of artificial intelligence gone bad – either intentionally bad, or unintended consequences bad. A kind of famous example is called the “grey goo scenario”, where you have tiny little self-assembling robots. But somebody hacks the off switch, and so they never stop self-assembling. And this nanomolecular goo just ends up covering the earth in a grey mass of nastiness … Anyways! [laughs]

So there’s lots of different ways – that probably wasn’t the most convincing or compelling argument against nanotechnology’s risks – there’s probably some more pragmatic threats that they pose in the nearer term, but I didn’t actually look at that section of the paper.

Quickly, a bit about the Global Challenges Foundation, because I think they’re a good example of a sustainability-focused think tank that also focuses on risk. One of their board members is Johan Rockström, a bit of a pioneering figure in sustainability – he developed the concept of planetary boundaries and has been quite influential in the field. Just his presence alone indicates the kind of level of influence and profile this organisation has.

For this report, the GCF – the Global Challenges Foundation – worked closely alongside a similar outfit, which is the Future of Humanity Institute, and they’re based out of Oxford University and led by the philosopher Nick Bostrom, whose own work I’ve looked at in quite a lot of detail and who is quite an influential thinker in terms of this space of sustainability and how it relates to risk.

Just briefly, I think it’s interesting to note a few details from that report, the 12 risks report. You’ll notice in the title of the report – “Global Challenges: 12 risks that threaten human civilization – The case for a new risk category”. What do they mean by a new risk category? They don’t talk necessarily about global existential risk, they instead talk about infinite risk, and this is what they want to be a new category.

As the report’s title suggests, they’re focused on developing a new definition of risk, and that includes a new category, infinite risks. The term “infinite” here refers to their potential impact. They have this figure, which is just Risk = Probability x Impact. Very simple formula, Risk is the same as Probability times Impact. This is essentially how they calculate risk, and it drives their entire approach. It’s this criteria-driven classification system that I think we need if we’re going to have anything resembling a triage system. This is how you get to triage.

SUMI: Could you give an example of how that formula might be used to compare, say, two different risks?

NICK: Right, so, a high probability event with a low impact isn’t as risky as a low probability event with a high impact. To put that more simply, a 90 percent chance of a common cold isn’t as serious as a one percent chance of terminal cancer. Probability is an important thing, how likely is this to happen, but also impact is the huge thing, you know. Just because it’s highly probable, if it’s low impact we don’t really care about it, but if it’s really high impact and even has a tiny, little bit of probability of happening, then it’s something that we need to be very serious about.
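To put rough numbers on that comparison – the impact scores below are arbitrary stand-ins, chosen only to show how the formula behaves:

```python
# Risk = Probability x Impact, applied to the cold-versus-cancer example.
# Impact values are arbitrary illustrative scores, not real measurements.
common_cold     = {"probability": 0.90, "impact": 1}       # high probability, low impact
terminal_cancer = {"probability": 0.01, "impact": 10_000}  # low probability, huge impact

def risk(event):
    return event["probability"] * event["impact"]

print(risk(common_cold))      # 0.9
print(risk(terminal_cancer))  # 100.0 - far riskier, despite the tiny probability
```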

This ties into the idea of the precautionary principle – an idea we see a fair bit in sustainability – where, because what’s at stake is so high, because the impact is so large in other words, the probability doesn’t really matter so much. The precautionary principle is a slight rephrasing of that. It says we shouldn’t let the fact that we don’t have a hundred percent complete scientific knowledge stand in the way of us taking action. The reason why is ultimately an argument – or you could rephrase it as an argument – about the impact, because the impact could be so severe that we need to take action, even if we’re not 99 percent sure that we need to take it.

SUMI: What about those unknown consequences, then?

NICK: Well, we look at those in terms of their potential impacts, but we don’t really have a clear idea about probabilities. That, I think, is the value of having a formula like this: if one part of the formula is really hard to complete – probability, for example – we can still get some idea about how to rank a risk, at least based on its impact.

This kind of strikes most people as common sense; I’m going to care more about something that has a high impact regardless of its probability, than I will care about, you know … But the point is, in our everyday calculations of risk, we don’t really behave as rationally as this formula suggests; a lot of the time, we don’t do a little calculation in our head when we think about risk.

SUMI: When you’re driving on a 100km/hour road on a daily basis, if you crash into someone, that’s a massive impact.

NICK: Right, and you probably aren’t scared of that. If you’re a typical American, for example, you’re more likely to die in a car accident than in a terrorist attack, but you’re definitely more scared of terrorists – statistically, based on rigorous sampling of the American population. I open this whole chapter by talking about that. Americans are more scared of the government taking away their guns than they are of gun violence directed at them, but they are statistically way more likely to die of gun violence than they are of … the government’s never done anything to take away guns, you know what I mean? Part of what I talked about is how politicians and the media get a lot of value out of stoking certain fears. And we know this, it’s a tale as old as time, you know: when a politician wants to win an election, he’ll bring up something for everybody to feel scared of, and I think Donald Trump is a good example of this. The xenophobia that he demonstrates is a good example of stoking up fears that aren’t necessarily rational.

To get back to this paper – this is why it’s good to have it written down, even if it’s completely blindingly obvious, Risk = Probability x Impact: if an idea doesn’t get much traction in the real world, it’s good to just have it stated out loud from time to time. Maybe something else has occurred to you, thinking about this formula, which is that some calculations don’t quite boil down to numbers. The impacts of some risks are essentially infinite. To illustrate this, the report actually starts with some history, and it’s a very interesting story worth quoting in full. So this is the story.

It is only 70 years ago that Edward Teller, one of the greatest physicists of his time, with his back-of-the-envelope calculations, produced results that differed drastically from all that had gone before. His calculations showed that the explosion of a nuclear bomb – a creation of some of the brightest minds on the planet, including Teller himself – could result in a chain reaction so powerful that it would ignite the world’s atmosphere, thereby ending human life on Earth.

Robert Oppenheimer, who led the Manhattan Project to develop the nuclear bomb, halted the project to see whether Teller’s calculations were correct. The resulting document, LA-602: Ignition of the Atmosphere with Nuclear Bombs, concluded that Teller was wrong. But the sheer complexity drove the assessors to end their study by writing that “further work on the subject [is] highly desirable”. The LA-602 document can be seen as the first global challenge report addressing a category of risks where the worst possible impact in all practical senses is infinite.

So yeah, there’s a pretty scary moment in history where we were going to test that first nuclear bomb and somebody’s back-of-the-napkin calculations said, “Uh, this could theoretically ignite the atmosphere of the planet and kill us all.” Which is a new kind of risk that we’d never really seen before. This scenario is a great example of impacts that would be so far-reaching and devastating that they’re basically infinite, for all intents and purposes. The end of the earth is not something that we can really quantify with a number. It’s an impact with no upper limits, and so the impact is infinite, in the authors’ minds.
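In terms of the formula, an infinite impact has a stark consequence: any nonzero probability, however tiny, yields infinite risk, so such risks dominate any ranking. A one-line sketch of that, with an invented probability figure:

```python
# With an infinite impact, Risk = Probability x Impact is infinite for
# any nonzero probability - the probability here is invented.
print(1e-9 * float("inf"))  # inf
```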

This idea of infinite risk is really useful if you want to build a triage model of sustainability, because it’s inherently focused on the severity of impacts as a determinant of a risk’s importance. In this model, a broad range of threats is assessed using a new definition of risk, and then, using criteria – probability and impact – we can determine what to prioritise. This is essentially that three-step model of triage mentioned earlier that Drawdown kind of showed, you know: identify candidate risks, develop criteria, and then apply those criteria. This is essentially what this group, the Global Challenges Foundation, is doing. What you get out of that, that list of threats, is quite different to something like the United Nations’ Sustainable Development Goals.

The SDGs, they imply risks. SDG number one is to eliminate global poverty. That would eliminate risks: the personal risk to a person from being poor, and larger-order societal risk – it would avoid social disorder that could come about from poverty. But the SDGs aren’t really about risk reduction at the end of the day, are they? They don’t mention anything about nanotechnology, for example, or meteors, and only indirectly talk about the risk of an ecological collapse or the risk of global governance collapse.

To go back to the report, another notable point – and this is going to be a recurring topic as we discuss the grass ceiling – is the way that they approach communicating sustainability and communicating existential risk. This does represent, I think, a unique kind of challenge. When you talk about existential risks, it can easily lead into kind of negative messaging, and that can cause people to disengage, it can cause people to be scared, and it can create a raft of undesirable outcomes. How you communicate this is particularly important, and the authors clearly recognise this. I’ll just quote from the report here.

The idea that we face a number of global challenges threatening the very basis of our civilisation at the beginning of the 21st century is well accepted in the scientific community and is studied at a number of leading universities. However, there is still no coordinated approach to address this group of challenges and turn them into opportunities.

So there’s an interrelationship here between danger and opportunity, and it’s worth identifying, as they have, and targeting, as they have, because I think that’s present in a lot of sustainability challenges. This idea kind of echoes an old truism – I think it was famously stated by US President John F. Kennedy – he said, “In the Chinese language, the word crisis is composed of two characters. One representing danger, and the other representing opportunity.”

I actually looked that up and he’s not quite correct, and it was a bit of a white boy misappropriation of the language. But the point here is, regardless of whether or not it’s true, it’s a nice way to think about it. Within every crisis, there is both a danger and an opportunity. If that mode of thinking is good enough for JFK then it’s good enough for me. Focus on positive messaging, finding opportunities from the crisis ahead … that is a better starting mindset than one focused on impending doom.

SUMI: When you’re talking about things like risk or crisis or opportunities, a question that I have here is: to whom, for whom, and by whom? Is it considered an existential risk if it threatens to absolutely decimate the population of an entire continent, for example, or is it only an existential risk if every single human is killed? Is it considered an existential risk if, say, the planet is destroyed, but we have developed the technology to fly off onto Mars and build a colony there? And also, when we talk about crises and opportunities, does it really matter who – is it going to be the rich people who are going to have those opportunities? Is it going to be everybody, how do we make sense of the fact that society and the world is unequal, within all that?

NICK: Excellent question, and it leads perfectly into the next section. At this point, I stop us from going too far down the rabbit hole and I say, hey, okay, first of all, we need to nail down a definition of risk that can answer those sorts of questions. When I got to this section, I realised how foundational and important that was, because there’s a lot of work that’s going to go into defining risk. That in itself is a huge step, and it’s a necessary first step before you can do any of this other stuff. The act of defining risk is itself very important. It’s kind of a meta task, too, because if you don’t define risk sufficiently well, then you open yourself up to risk. It’s very meta. Anyways –

SUMI: I’m still trying to wrap my head about that. If you don’t define risk well, then you open yourself up to risk … Is that because you’re less likely to recognise it as a risk?

NICK: Exactly! So, it’s a very meta task. It’s a philosophical undertaking to worry too much about definitions, and that’s typically what the work of a philosopher is, to tear their hair out wondering about what the correct definition of something is. But this has real, practical implications, and really high stakes implications too. Because if we define risk too narrowly, then we risk getting blindsided by something that we didn’t include in our definition, and then it’s game over for the species.

SUMI: Right, so if, say, the physical bodies of humans were to still be in existence, but we were to not have the same autonomy or control over them as we might right now, does risk encompass that kind of loss of humanity? Or is it only the absolute and total decimation of – like our hearts no longer beat and our brains no longer work?

NICK: Yeah, again, I think that’s going to depend hugely on your definition of risk. I’ll take us through to the last chapter that I’ll share, and that’ll talk a lot to this idea and what I think is a good definition of risk.

It starts with a quote from Carl Sagan, who said, “If you wish to make an apple pie from scratch, you must first invent the universe.” The point he’s making there is that, before you can make the apple pie from scratch, you first need to invent gravity and you need to invent atoms, and you need to do all this other stuff first. Even just making the apple pie is a much more complex task than it seems, and it’s the same here with building a risk-based model: you first need to do all this other stuff. You need to build the universe underneath it.

Let’s quickly return to that high-level framework we talked about for building a triage-focused, risk-based model of sustainability. Step one: identify candidate issues for consideration. Step two: develop criteria to rank them. And then step three: apply those criteria and develop a ranked list. If you wish to build that list of risks, you must first define what you mean by risk. In that GCF report, they brought in their definition; they redefined risk to include a new category, which was infinite risks. They demonstrated the importance of areas previously underexplored.

Let’s look at another definition that’s sort of similar to that but also different. This comes from Nick Bostrom, who, as you recall, led one of the groups – the Future of Humanity Institute, which cooperated with the Global Challenges Foundation on that report. I think there are few better to call in for this job than Nick Bostrom. He’s written at length on existential risk, and he’s quite influential in this space. He’s been developing this idea of risk since a paper back in 2002, and he revisited it more recently in 2013; throughout that time, he’s been trying to come up with a comprehensive definition of risk that speaks to those questions you’re asking.

Here’s a figure that’s kind of visually hard to describe, but along the top-to-bottom axis, the y-axis, is scope. You can think about scope also as scale – personal scale, local scale, and global scale. Then, going along the horizontal axis, the –

SUMI: X –

NICK: – the x-axis, you have the intensity or the severity of a certain risk. It could be an endurable risk, something like, your car gets stolen – you can survive that, it’s not going to completely annihilate your existence. Whereas if that stolen car drives over your face, at 100 kilometres an hour, that is not an endurable risk, that is a terminal risk.

So we’ve got variants in scale, and we’ve got variants in severity. And we can superimpose onto that a third thing, which is probability. I’ll just flag that we can do that; I won’t disappear too much into the discussion of that, but I talked before about how risk equals probability times impact, right? Well, this is a different conceptualisation of risk as a combination of things at different scales, things of different impacts, and then you can superimpose onto that the probability aspect.
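One way to picture that grid is as a small data structure: scope along one axis, intensity along the other, with probability layered on top. This sketch follows Nick’s description of the figure; the example risks and probability values are invented for illustration:

```python
# Bostrom-style categorisation: scope (y-axis) by intensity (x-axis),
# with probability superimposed as a third attribute.
from dataclasses import dataclass

SCOPES = ("personal", "local", "global")
INTENSITIES = ("endurable", "terminal")

@dataclass
class Risk:
    name: str
    scope: str          # one of SCOPES
    intensity: str      # one of INTENSITIES
    probability: float  # superimposed on the scope/intensity grid

    def is_existential(self) -> bool:
        # Existential risks occupy the global-scale, terminal-intensity
        # cell, whatever their probability.
        return self.scope == "global" and self.intensity == "terminal"

# Invented examples, echoing the stolen-car discussion above:
car_stolen = Risk("car stolen", "personal", "endurable", 0.05)
asteroid   = Risk("major asteroid impact", "global", "terminal", 1e-7)

print(car_stolen.is_existential())  # False
print(asteroid.is_existential())    # True
```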

SUMI: I have a question about the y-axis and you go from personal or individual, and then you go to regional, global. What if, say, there is a risk that threatens to affect one person or one group of people – and it’s a very, very small group of people – but that kills a whole host of knowledge that they may have that could unlock the secrets to saving a really important ecological species? Or what if the one person that you kill is someone that has a lot of power in the world?

NICK: That would be an example, then, of a risk that appears to be endurable, but is actually terminal. I think that’s what you’re trying to argue is, a group of people might drop off the face of the planet, and we’ll say, “Ah, the human species can endure that. That’s not a problem.” But it turns out that they were going to play some key role that would have helped all of us avoid existential annihilation. In that case, what we do have there is a tricky situation where something looks endurable but is actually terminal for us.

SUMI: Right, and sometimes you may not know that until you have hindsight.

NICK: Exactly, and this is sort of what Bostrom is trying to get at. I’ll come to that in a bit, but he points out the fact that we’re not going to be very good at dealing with these kinds of terminal risks because typically nothing survives them, so there’s nothing around to learn the lesson. If you think about this as a biologist, or if you think about this in terms of evolution, typically, nature selects for an advantage and whittles away something that is disadvantageous by comparison. So nature, and natural selection, teaches organisms over time. In this account, there’s nobody left to be taught, so there’s no way that we’ve evolved biologically to deal with these problems. As Bostrom argues, we haven’t evolved culturally, to deal with these problems either, because we’ve never seen problems that are global existential risks before.

I’ll just read his definition here: “Existential risk: one where an adverse outcome would either annihilate earth-originating intelligent life” – notice how that’s not necessarily anthropocentric – “or permanently and drastically curtail its potential.” You asked earlier about how anthropocentric it is and, with that in mind, reading that quote out, I realised it’s not actually necessarily anthropocentric. “Annihilating earth-originating intelligent life” could mean that we stick around but the rest of the biosphere is gone, and we could consider that to be an existential risk.

As Bostrom argues, risks in this sixth category are a recent phenomenon. This is part of the reason why it is useful to distinguish them from other risks. By sixth category, I mean – sorry, I should clarify here – existential risks are ones that are global in scale, and terminal in impact. Their probability is less important; the fact is that they have such a large scale and such a high impact that they are essentially global existential risks – sometimes just called X-risks, but that’s such a tongue twister.

SUMI: Even if the probability that something might happen is 0.0000001 percent, the fact that –

NICK: It’s still a global existential risk. It’s just a very low probability risk. As Bostrom argues, risks in this sixth category are a recent phenomenon. This is part of the reason why it is useful to distinguish them from other risks, say, a local scale terminal risk, or an international scale terminal risk.

We have not evolved mechanisms, either biologically or culturally, for managing such risks, Bostrom says. Evolving and developing these mechanisms is no easy task; there’s no place for the trial-and-error approach we often use. We cannot learn from a mistake when its consequences are fatal, simply because nobody’s left around to draw any lessons from it. Our approach, therefore, is going to be inherently and unavoidably speculative. It’s going to be a process in which we’re trying to anticipate an unknowable future and build our capacity for accurate foresight. Additionally, Bostrom says, the institutions, moral norms, societal attitudes and national security policies that developed from our experience with managing other sorts of risks may be less useful in dealing with existential risks, which are a different type of beast.

This is sort of demonstrated, I think, by the work that, for example, came out of the Global Challenges Foundation – which Bostrom himself speaks on from a position of personal experience and authority, because that was partly his work. If you look at these so-called exotic threats like molecular nanotechnology, for example, that’s not really something the UN talks about – not, certainly, when it uses its most mainstream frameworks of sustainability, such as the SDGs, the Sustainable Development Goals. I think this is what Bostrom’s getting at when he’s saying these traditional institutions, these traditional norms and whatever, they’re not particularly well-equipped, in every case, to handle these new types of threats.

So it’s got to be these new, emerging institutions, with a fundamentally different philosophical approach, a different starting point, about how to conceive of sustainability as a response to different threats. They’re going to produce information and responses that are not only important, but are going to exist on the periphery, outside of the mainstream. And that, in itself, represents all kinds of problems with engagement, and profile, and influence, and so on.

I think that’s just a good glimpse of the underlying philosophical ideas about risk, and what they mean pragmatically in terms of what kind of perspective on sustainability you get out of them. It’s very different from a lot of the mainstream conceptualisations of it. When you look at it right now, climate change kind of dominates the discourse on existential risk. If we are having a conversation right now, a high-profile, influential, mainstream conversation about global existential risk – and we are – like 99 percent of the time it’s about climate change.

SUMI: Why do you think that is?

NICK: I think it’s part of the grass ceiling; we’re still trying to break through the grass ceiling. It comes from that history of sustainability being rooted in environmentalism, the dominance of environmentalism – I think that’s a huge factor in it. That environmental agenda still dominates when we talk about sustainability. Climate change is, obviously, a pertinent issue – it’s really hard to tell without a ranked list, but it would surely have to be top three in terms of severity, in terms of impact, in terms of endurability… It’s a big question mark, how endurable it is. In your interview with Will Steffen, for example, he talked about how –

SUMI: One second. The interview with Will Steffen that Nick’s mentioning here was conducted before this episode was recorded, but it’s not been released yet. So no, it’s not missing from your feed and Nick’s not just returned from the future, it’s just a scheduling thing. Alright – back to it.

NICK: In your interview with Will Steffen, for example, he talked about how, if we continue on a business-as-usual approach, then there’s going to be about a billion people left on the planet. That’s about the most that we could sustain, according to some study he’d looked at. You might say, oh, well that’s an endurable threat for us, but that means six billion people die. And what that collapse looks like could also just be awful; how we respond to a collapse situation might present existential risks of its own.

Margaret Atwood’s The Handmaid’s Tale – and I don’t know about the book, because I haven’t read it, but the TV show – I think it’s explicitly trying to make this point that this nightmarish world of hyper-misogynistic, religious authoritarianism didn’t come out of a vacuum; this was people trying to save the planet from ecological collapse, and what they reached for was this ugly, violent, horrible world. It’s a really hard show to watch at times, but every now and then you get this glimpse of the world beyond and why they’re doing it, and you suddenly realise, “Holy crap, as bad as these people are, and as rotten as this society is they’ve built, they’re literally trying to save their species. That’s what’s at stake here. And this is the dark path that they’ve had to take in response to that challenge.”

I guess the point there is, we haven’t annihilated human life, in that example. This is the other definition that Bostrom provided, this is the drastic curtailing of potential. The same way that a billion people being left instead of seven billion people is a drastic curtailing of our potential. It’s the curtailing of the potential of six billion people and their offspring to come.

SUMI: In understanding existential risk, how do we understand how, maybe, different existential risks might relate to one another? Or identify the costs and considerations of addressing and avoiding existential destruction?

NICK: That’s really difficult to answer. The costs are practically infinite; if the impacts are infinite, then the costs are infinite. With understanding existential risks, Bostrom’s made the first steps, but when I read his papers, I am very mindful that he’s not the final word on this. I do, at some points, criticise this idea that … he’s got it very neatly categorised on paper, but the reality is always going to be way messier. He has a very clear black and white line going between the word endurable and the word terminal, but it’s not always clear cut.

For example, if you look at dinosaurs, from one perspective, the dinosaurs suffered from a terminal existential risk – whether it was the meteor or some other incident, or what have you. That drastically curtailed their potential, if not annihilated them. It didn’t technically annihilate them, because birds are still around. Birds aren’t descendants of dinosaurs, they’re just dinosaurs, still living and chilling and flying around like they were all those millions of years ago. So from the birds’ perspective, it was endurable. From one dinosaur’s perspective it was endurable, and from another it wasn’t.

And this ties back to your starting point about risk for who, and a recurring point you often make: in sustainability we often talk about progress – progress for who? Risk – risk for who? That question, I think, hints at broader questions about class, societal power, status, and the humanity we either give people or deny them, human rights and so on.

SUMI: Okay, let’s talk about The Handmaid’s Tale. I haven’t read it, but from what you said, it seems like the pursuit of avoiding ecological destruction, in that pursuit –

NICK: They’re trying to save the species, right.

SUMI: – they end up in this really, otherwise dystopian environment where there’s a lot of sexual violence and all sorts of other awful things. It makes me think and question, what is in the moral or ethical paradigm that we’re in now that we think is so unalienable, and what might existential risks push us to in terms of … Like, for example, the majority of us – I’m not saying every single one of us – but the majority of us wouldn’t wake up one day and say, “Alright, today I’m just going to go out and kill a person.” But living in a risk-conscious society, would it make us see everybody else as our enemy, adopt a sort of every-man-for-himself mindset, and be constantly in fight or flight mode? I’d imagine that existential risk would fuck with us psychologically and affect our relationships with other people.

NICK: It can, it can create that kind of siege mentality. And that ties back to the importance that the authors of that report identified, in reframing it as about finding opportunities and turning those challenges into opportunities as much as possible.

Yeah, it’s this really tricky conflict between two priorities here. On one hand, we need to face facts, we need to look at what the reality is out there, and that includes looking at some pretty confronting challenges ahead. And then we need to counterbalance that with what we know about human psychology and how we respond to those confronting things. We’ll be talking about sustainability communication in future episodes, but this discussion about the philosophy underpinning sustainability and frameworks for sustainability already just shows how critical and foundational that challenge is. We’re dealing with what is ultimately a communication problem, and psychology problem, at the end of the day.

We don’t want to alienate people, we don’t want them to be under siege mentality all day. Although, we’ve also seen how well fear motivates people, and we’ve seen how easy it is to plant certain fears in people’s minds. So if we’re manipulating people in one way already and it’s not a good way, can we manipulate them in a more benign way maybe?

Just to quickly backtrack to your idea of a “risk for who”. It’s interesting that in that graph, and in the discussion of that graph in the paper, Bostrom at one point – and this, I think, is one of his biggest mistakes in the paper – is trying to give an example of an endurable risk at a national or smaller geographical scale, and he talks about a loss of cultural heritage. As an endurable risk for a community. And that just immediately struck me as wrong. Because I’d been dealing with the issue at the time, I thought about the people in Wilcannia, which is a remote town in –

SUMI: Northwest New South Wales.

NICK: – northwest New South Wales. Like many remote towns connected into the Murray-Darling Basin, it’s been running out of water –

SUMI: Might be central-west, I don’t know. Anyway, yep.

NICK: Okay. But there’s a bunch out there, you know, Collarenebri … Ah, I’m trying to think of some others.

SUMI: Walgett?

NICK: Walgett, yeah that’s right, Walgett’s another one. So there’s all these remote communities, and the people in them are typically, predominantly, Indigenous Australians. And the river’s drying up. The river’s drying up for many reasons, we won’t get into too much for now, but the long and the short of it is – to oversimplify a little bit – because of colonialism, you know, white people came and fucked up the river. There’s this powerful line in this article I read about the people in Wilcannia, and they were saying they’d lost the water from the river, and that was their cultural heritage. Through countless generations, they’d passed on their stories and history by using the river. It was a way to teach future generations. When the river died, so did that ability to pass on that heritage.

Now, according to Bostrom’s table, in the way that he describes it, that’s an endurable risk. But it’s not. That’s terminal, pretty much – or it certainly threatens to be terminal, for those cultures. For their way of life, and for their sense of self and identity. That line that really struck me from that article was, they were saying, “I can’t get culture from a bore pump.” They’d been provided with another means of water, you know, this bore pumping up water from the ground. But that wasn’t the culture, that didn’t replace that. That was irreplaceable, it comes from the river and the river alone. And so, it looked to me like a terminal risk.

What this reveals, and I talk about this in more detail later on in this essay, is this sticky relationship – this really intractable, messy, murky relationship – between risk and personal values, and culture, and history, and … It’s a lot more subjective, and, I think, wishy-washy and hard to pin down when you really get into who’s being impacted and why.

SUMI: Yeah, I guess, what is the relationship between sustainability and social justice? What’s the value of levelling the playing field in comparison to – for one, do we have to compare the two – but how do we weigh that up against the long-term goals or pursuits of human society more broadly? Can we even define homogenous goals of human society more broadly?

NICK: One mistake a lot of people make, and I think this might be the mistake Bostrom’s making, is slipping into species-level thinking. Like, thinking of us kind of as an organism. And that organism has to persist over time, and if we’ve achieved that, and we avoid all the icebergs along the way in our boat, then we’ve achieved sustainability for a time. But it’s obviously about more than that.

In our first submission of essays to our supervisor, Edwina, we came up with a definition of sustainability that talked about it in those very simple terms, those kinds of organismic terms. It said, “the ability to persist over time”. And Edwina was like, “What about flourishing?” [laughs] Like, I don’t want to just persist in life –

SUMI: I want to live!

NICK: Yeah, I want to live! I think it’s such a simple question, but it illustrates a) the challenge of finding a good definition of sustainability that’s going to fit every scenario and every way of thinking about it, and b) I think it shows that sticky relationship between personal values and closely-held personal beliefs that are kind of inarguable – it’s like, well I believe what I believe and I have the values I have, these aren’t facts that we debate back-and-forth – and then how you reconcile those personally held beliefs with a consensus idea of what constitutes an endurable or terminal risk. Because I could say it’s an endurable risk, but you might disagree entirely.

SUMI: It presents a bit of a challenge to Bostrom’s definition of existential risk as being terminal but also at a global scale. Is an existential risk still an existential risk if, say, six billion people die, do you have to look at where those six billion people were geographically in order to be able to –

NICK: Or are you just looking at a number, six out of seven billion.

SUMI: Exactly. If the only people who are left standing are the people who are, I don’t know, in the Middle East region or something like that, then basically you’ve lost a whole host of knowledge, of culture, of future potential in achievements, technological understandings, trade – all sorts of things, thinking about the global production of things generally. What sort of life is that going to be, if six billion people were to be wiped off the face of the earth?

NICK: To go back to The Handmaid’s Tale, it doesn’t even have to be annihilation that happens. We can survive the first hurdle, but then, how we manage that, how we tackle that, the society we create as a response can be a nightmarish world in which nobody would really want to live anyways. In that case, it’s not annihilation that made it the existential risk but the drastic curtailing of potential, as he describes it. You can already tell, just from the way that’s worded, that it’s going to be up for definition and up for debate. What’s potential, and what does curtailing it mean? What counts as curtailing versus drastic curtailing, and so on? There’s lots of wiggle room here, I think, which points to a broader problem.

We talked about, if you ask the birds, the meteor was an endurable risk, but if you ask the triceratops, the triceratops says, “Nah, it was bad.” So if you ask one of the one billion people left on earth after everyone else dies, if it was endurable, they might say yes, but, you know. It’s a contested definition, but it’s certainly a good definition to start thinking about sustainability in a different way.

One other final point is, you can think about annihilation as a whole other concept. Say, one day, we can just upload our brains to computers and all leave earth and shoot ourselves in lasers across the cosmos or something, and the human species goes out of fashion as a result. Was that an endurable or terminal risk for the human species? Because the species isn’t around anymore, but something’s still carrying the torch in the form of an “us”. Annihilation itself isn’t a clear-cut concept. Again, to look at the bird versus the triceratops, “annihilation” doesn’t really make a whole lot of sense.

SUMI: To look at the different ways that different cultures might deal with their dead, some people believe that there needs to be a very specific set of rituals to follow in order to preserve the knowledge and value of the person that’s passed. Whereas others believe that in order to do that, you bury them in the ground. Others believe in cremation … If we are to think about the wiping of the human race, it’s important to consider the ways that different people value life, and what life means and how to respond to –

NICK: Yeah, I mean, if you believe in reincarnation or something, you might have a very different perspective, right? Is that what you’re sort of getting at?

SUMI: Yeah.

NICK: Different views on death are going to change that annihilation definition even further aren’t they, yeah.

SUMI: There’s this assumption that someone’s spirit and words have to live on, and … Okay. The invention of the dictionary has been associated with the stymieing, to an extent, of the evolution of language. When you take something and sort of write it in stone, then you don’t allow it to evolve as quickly or in the same organic way as it used to. So if we were to take all the knowledge of all the living human beings at this current moment, what does that mean for a seemingly inherent thing about humanity that –

NICK: Their potential, or –

SUMI: – that has to do with us evolving over time, us changing, you know, having a different understanding of morality over time, having emotional memories. What do you choose to keep, what do you choose to give away? What if there’s something that’s particularly private to someone, but that private memory really guided everything that they said and did in their entire life? It’s too hard to wrap your head around that.

NICK: Well, I think this is a philosophical question at the end of the day, right? We’re asking, kind of, “What is humanity?” What makes a human “human”, what makes them worthy of being treated as a human? I’m studying that – it’s one of my courses right now – what is humanity. The reason I’m studying it is literally so that it can inform my sustainability stuff. It’s an elective, and it’s frustrating; this should be part of any transdisciplinary study in sustainability, it should involve these philosophical discussions. It’s frustrating that it’s seen as outside of the discipline, in a degree that’s supposed to be interdisciplinary. There’s a lack of understanding, I think, in a lot of talk about sustainability, of those deeper philosophical issues and what the implications of them are.

SUMI: Remember, you can find that article in full, as well as more great written content on the TGC website!

The Grass Ceiling podcast is hosted by Nick Blood, and hosted and produced by me, Sumithri Venketasubramanian. Our project supervisor is Dr. Edwina Fingleton-Smith. The Grass Ceiling is made possible thanks to the technical support of the ANU Centre for the Public Awareness of Science. For more TGC content such as written articles featured in this episode, check out our website at www.thegrassceiling.net.

A big thank-you to the ANU Fenner School of Environment and Society, for all their support in making this project happen. All music used in this episode was produced by Jackson Wiebe.

Blooper time!

NICK: Along the … top-to-bottom axis, the x-axis, he has –

SUMI: Top-to-bottom is y.

NICK: Oh, is top-to-bottom y?

SUMI: Yeah, y is independent, x is dependent.

NICK: [laughs] So he has a, um …
