In the south of France lies Chauvet Cave. This subterranean museum contains some of the oldest and best-preserved paintings in the world, offering us a glimpse of life through an incomprehensible abyss of time, to some 30,000 years ago.
The world the paintings depict seems unreal and fantastical: bears and antelope and bison and horses and bulls and rhinos and on the paintings go. Back then, we lived in a much colder and drier place but the sun still shone, so there was still life in abundance and – as the paintings show – incredible diversity.
This art still tells a story. Not only of then, but of now, and of the passage of time in between. A story of changing climates. A story about loss of diversity. What I learned from Chauvet Cave was another story too: one about colonisation and imperialism. A story that questioned the idea of “sustainability” as I understood it.
And I thought I understood it well. I am studying that very subject in detail at my university. But even as a well-versed student, fully immersed in the field, my virtual wandering via online research and YouTube documentaries revealed a huge gap in my knowledge.
So, there was a moment. Something I saw that changed me. Not inspiration, but realization. It flashed across my mind, connecting a thousand different thoughts, and asking a thousand difficult questions, inviting reflection on things I’d come to hold close. Things I’d believed in.
That’s the story I want to share – now that finally, I might have found a place to speak it, where others might hear.
It starts with Werner Herzog’s documentary “Cave of Forgotten Dreams”, an utterly enthralling exploration of this place that I recommend diving into if you have the time. His on-site film beautifully captures not only the art, but the natural artistry that frames it all. The cave itself is a thing of wonder: everything is crystalline from the slow accumulation of calcite, so the walls, stalagmites, and other features of the cave all sparkle in the harsh light of the cameras.
The meticulously preserved grounds of the cave are littered with the bones of many animals, and they too are covered in a mineral snow that glimmers strangely. The camera lingers long enough on these scenes – away from the paintings – to encourage an appreciation of an even greater artist at work here. Quietly and out of view, this artist etched their own stories over the intervening millennia between human visits to this hidden gallery; one that I would argue rivals the Louvre in importance.
I say that because of two paintings there and the story they tell about an entirely different way of life, one that existed long before colonial times. An awe-inspiring culture quite different from ours. The image is of two bulls that look identical, as if painted by the same artist, or at least around the same time. Here is Werner, from the documentary, explaining what you see:
‘…there are figures of animals overlapping with each other. A striking point here is that in cases like this, after carbon dating, there are strong indications that some overlapping figures were drawn almost 5,000 years apart. The sequence and duration of time is unimaginable for us today. We are locked in history, and they were not.’
Werner Herzog, Cave of Forgotten Dreams
It’s hard to describe what those words and the art itself evoke, because it’s hard to wrap one’s head around this idea. Is it possible that life was so consistent, so continual, that for five thousand years not much changed at all? Is that what the paintings are saying? The questions alone invite a wholly different way of thinking about sustainability to the one I feel I’ve learned about so far. But surely this is one of the most profound examples one can see of sustainability, no?
Two near-identical pieces of art, overlapping, separated by five thousand years. A statement of cultural continuity spanning a frame of time we today – advanced as we consider our culture – would struggle to imagine.
If that’s a statement, it’s one hell of a statement!
From the perspective of this boringly typical member of a Western culture that is struggling to survive another year – let alone five thousand – this painting is fucking startling. Better yet, keeping in mind my ancestors once called themselves Settlers, I could describe it as unsettling.
Unmoored from the perspective of a civilization that appears all too fragile, verging on catastrophe, we can see another way of life that extended over timespans that feel impossible to us with all of these modern problems we’ve created for ourselves.
The writer and engineer Nick Arvin, whose blog post inspired me to watch the documentary, describes it beautifully:
‘They have been painted in identical style and appear as if they might have been painted by the same artist. But carbon dating has shown that they were created 5,000 years apart. From a modern perspective where painting styles go from Modern to Postmodern in 50 years, this is difficult to grok. Herzog, in voiceover, suggests that the cave paintings show a people who lived “outside of history,” oblivious to the requirements of constant progress that drive modern civilization.’
Nick Arvin, Reading Journal: Waiting for the Barbarians, by J.M. Coetzee
To help us wrap our heads around this idea, Arvin then points to another rabbit hole: a novel called Waiting for the Barbarians, by J.M. Coetzee, who approaches the same idea from the perspective of the colonizing force. The book’s narrator is the magistrate of a frontier town in some unknown “Empire” that serves to represent imperialism more generally. Beyond the frontiers, the native people, known as Barbarians, exist in harmony with the land, as did the people who once decorated Chauvet Cave. Coetzee sums up the different worldview of imperialism, contrasting it against Chauvet’s “Two Bulls”, in this way:
Empire has created the time of history. Empire has located its existence not in the smooth recurrent spinning time of the cycle of the seasons but in the jagged time of rise and fall, of beginning and end, of catastrophe. Empire dooms itself to live in history and plot against history. One thought alone preoccupies the submerged mind of Empire: how not to end, how not to die, how to prolong its era.
J.M. Coetzee, Waiting for the Barbarians
These two different conceptualizations of time speak to an insurmountable incongruity between cultures. The “smooth recurrent spinning time of the cycle of the seasons” is contrasted against the “jagged time of rise and fall”. Coetzee’s gorgeously dense imagery carries a litany of ideas, but one here rings loudest: the grounding of one’s self in the environment – the cycle of the seasons – the cyclical nature of life and death, set against the refusal to die. A belief in a self that is separated from nature, and thus, can conquer nature and its cycles. The “jagged time of rise and fall” – what we colonialists call history. Call progress. Call success. Call utopia.
Empire’s “submerged mind” has overlooked some things. We can sense it now, in the Apeilicene, as even the things we clutch for in our dreams turn to ash. Turn against us. Turn us against ourselves, and each other. “Save us from what we want”.
As the documentary later describes, these paintings were drawn by Homo sapiens, in a time and space they shared with other human species like Neanderthals. The art, it is claimed, was a uniquely human endeavour; not something Neanderthals engaged in. That tells me that even back then, we must have realized (maybe even quite keenly felt) that we were somehow different from our fellow animals – even ones very like us.
And despite this, or perhaps because of it, these people managed to live for thousands of years in harmony with everything else. Bison and bulls and bears.
Now, we see ourselves as fundamentally different and disconnected from nature – an idea that permeates our language, our thought, and our actions.
Stepping away once again from the cave art, we have to appreciate the even greater stories that this landscape tells us, and the questions it makes us ask. In one area of the cave floor there are two footprints: one belonging to a young boy, and another, to a wolf. What could these footprints, etched in calcite and the hardening of time, possibly tell us? Herzog plays out the scenarios: Was the boy being stalked by the wolf? Or were the two perhaps walking together? Perhaps instead, the two imprints – boy, and wolf – are separated by thousands of years?
We cannot know. Nature will not let us know.
She has her secrets, and this, we must respect.
In a sense, it’s easy to understand colonialism, imperialism, and colonisation at a kind of “surface” academic level because they are just ideas with characteristics and features. Ideas like any other. But when people encourage others to “decolonize” their understanding of something, it feels to me like they’re often talking about something else too; something that goes beyond just learning about a new idea and its characteristics. Part of that feels like it’s experiential; that learning about this stuff involves doing and being a part of something. Part of that feels like a radical questioning, where “de-colonizing” might resemble “de-programming”. Not just thinking about things differently, but doing things differently too. Embracing that knowledge over time. Recognizing that we cannot always find meaning in things, that we cannot know all. Camus might smile at that.
Closing thoughts – Resisting the bliss of ignorance
There’s a scene in the film The Matrix that resonates strongly with me. If you haven’t seen it, the film is famous for its crazy gravity-defying fight scenes, and equally, for the way it popularized the aeons-old philosophical idea of radical scepticism and made it understandable, perhaps disturbingly so. The film made us question reality like Descartes once did. Is anything we see truly real? In this film’s setting, the answer is no: the world most people know is a fantasy concocted by a malevolent robot civilization, designed to deceive and placate us into being unwitting cattle, harvested for energy.
Fighting against the killer robots is a plucky band of protagonists. They’ve all been “unplugged” from the Matrix and want to save humanity from this awful fate. Since this is a Hollywood blockbuster, you know they’re going to win, and it’s true, they do achieve a kind of victory before the film (and the trilogy it spawned) wraps up. What I found most interesting along the way though, was the traitor among them. A man named “Cypher” who, unplugged from the Matrix world, now lives a shitty, stressful existence that includes subsisting on a porridge-like gruel. See, Cypher is like that part of me I opened this discussion with; ready to get lost in the fantasy, rather than deal with reality. He wants to be plugged back into the Matrix. He wants to forget.
So, he plugs back into the Matrix for a moment and meets with the bad robots to discuss how to get what he wants. He’s sitting in a lush restaurant, speaking with the big bad AI as a lady strums a harp, and what he says is so stupidly simple, and yet such an unforgettable moment to me.
Cypher: You know, I know this steak doesn’t exist. I know that when I put it in my mouth, the Matrix is telling my brain that it is juicy and delicious. After nine years, you know what I realize?
[Takes a bite of steak]
Cypher: Ignorance is bliss.
I don’t say all this because I’m an unhappily sober undergraduate who’d rather be playing games and thus decided to write about them as a distant second place. I say it because there’s something genuinely profound in my self-indulgent desires here that I think needs to be addressed.
If you think about it, what I’m really saying here is that instead of writing this and doing my part to be a constructive member of society who contributes their ideas for the betterment of all, I’d much rather the quiet life of a social parasite. That kind of phrasing makes the idea more damning, perhaps, but also more real; more accurate.
Maybe it’s projection, but I think a great many people aren’t too different. I think Cypher represents so many of us. It’s hard to view us as conscientious practitioners of sustainability when something as stupid as convenience drives so much of our problematic behaviour. Single-use plastic, cheeseburgers, and the personal car – I think many people would rather see the world burn than give these daily conveniences or indulgences up, even if they’re the ones accelerating our downfall. It’s poetic, I think, that Cypher seems ready to sell out the entirety of humanity over some steak. All these years after that 1999 film, with our growing awareness of the link between beef consumption and climate change, our traitor Cypher here is basically your average Western consumer. We’re the villains, and if I’m frank, I’m not sure we really care that much.
Ignorance is bliss.
It’s painfully obvious to say, but maybe needs to be said regardless: we generally seem to prefer meeting our own needs and desires right now over caring about future generations, or even other people alive today. This is, arguably, a bleak or pessimistic view of humanity, and yet, perhaps it’s a realistic one too. Importantly, this kind of perspective is often missing from how we think about, how we communicate, and how we practice sustainability.
We often go into this whole thing with some huge assumptions. Firstly, we assume that humans are worth saving from annihilation. Secondly, we assume that humans do indeed want to be saved, and will do what’s required, if only we communicate it the right way, motivate them the right way, design society the right way, and so on. This article is an excellent example, because it suggests that redesigning society towards a more virtualized existence would provide a way to reduce natural resource consumption, and thus maybe promote the longevity of our species. It assumes, as a starting point, that doing so is a good idea.
We assume that people don’t do the right thing now because they’re not sufficiently empowered, educated, or motivated. But what if, in addition to all that being true, there’s also a more basic problem? What if at least some of us are perfectly willing to see our species end because the alternative – saving ourselves – is a real grind: a lot of hard work, and a lot of sacrifice? What if we’re genuinely happy just saying “fuck it, rather die”? This is an idea similar to “The Fall” mentioned earlier. How do we want to spend our time? If we’re not chasing immortality, then at what point is it acceptable for us to give in; to our desires, to our apathy, or to other things?
It’s not like this would ever be an overtly stated position for us; we’re not about to enshrine defeatism into a Universal Declaration at the UN. But maybe, just maybe, we signal that collective surrender through other channels. Maybe the way we act, and indeed the way we don’t act, represents those interests. Almost like another school of thought about sustainability – one that you won’t ever see raised at the UN, in academic journals, or in mainstream discussions – a philosophy that is only ever in the background as a common thread between many different societal failings.
What if this helps explain where we’re at right now as a society? It’s an obvious, well-trodden answer to blame the ills of our world on human apathy and indifference, and yet perhaps it’s because it has become so cliché that we have become numb to the truth of it? Is the biggest conflict of sustainability one between believers and non-believers; between, say, science advocates and science deniers? Or is the biggest battle right now the one driven by these often-unspoken selfish desires we all have? A battle between the people who care, and the people who honestly just don’t. Importantly, sometimes, each of us can play both roles – hero one moment, and villain the next.
This is a largely philosophical point about human worldviews, attitudes, beliefs, and so on. It ultimately comes down to a deep question: to what extent should humans indulge their desires, and at what cost?
I mention it in closing because other, future work in this project will have to focus in more detail on the challenge of communicating sustainability, a topic I only briefly touched upon here. Often, in sustainability communication, we focus on human psychology and this underlying philosophical problem goes unaddressed. For example, we might focus on the ways that humans best respond psychologically to new information. This might help us communicate more effectively, and it might even engender the types of responses we want, but it doesn’t directly address the underlying question of how much we should be manipulating behaviours.
There is an ethical question here that recurs throughout my writing, about the extent to which we should resist the allure of blissful ignorance. How best should we spend our time? And how harshly should we judge the traitors, the Cyphers, amongst us?
It might seem in other discussions like I’ve painted the idea of all-consuming virtual worlds as something largely negative – not just as a distraction to everyday life, but on a broader scale, as a pitfall along the way to development, or a detour that civilizations might get lost in. It’s possible, however, that virtual worlds offer some promising upsides for sustainability. These are not that obvious right now, but there are examples out there worth examining that demonstrate what those positives might look like.
Virtual worlds and their potential for sustainable consumption.
Compare the ecological footprint of someone who lives their life entirely in the real world, and someone who spends a great deal of their life in virtual environments. Both will need to eat real food, have real shelter, and so on. In some areas though, the VR person’s impact might be dramatically reduced. It’s in those moments of everyday consumption – driven less by biological needs, and more by psychology – that someone who spends their time (and money) in a virtual environment might really shine.
We often buy things for status. The clothes we wear, the cars we drive, even the bottles of water we drink out of – for many of us, the items we consume help signify and shape our identity. Since this consumer culture is often the driver of many sustainability challenges, it’s worth considering how its impacts can be blunted in certain environments[1], and virtual environments seem to offer genuine promise here.
Let’s use shoes as an example. You buy shoes for many reasons. One reason is unavoidably “real world”: you use them to cover your feet. But other reasons, like buying them for status, don’t necessarily have to happen in the “real world” to effectively scratch that consumer itch of yours. And if you buy shoes in the virtual world instead of the real, the ecological footprint is reduced effectively to just one, relatively cheap, thing – to the energy required to power the simulation[2].
With the caveat that not all consumption can be virtualized, take a moment to appreciate what’s on offer here: reducing large sections of consumerism to what can become, over time, a single natural resource draw – energy. Consider the entire life cycle of a real-world shoe: the natural resource extraction and refinement, the distribution and logistics, the post-life disposal and waste. Consider, too, just how many different computers running in mines, warehouses, distribution centres, and retail outlets it takes to get a Nike shoe from the Earth into a Foot Locker store – maybe as many as it takes to get a shoe from someone’s imagination into the pixels on your screen, maybe even more.
All of that real world, socially-driven consumption around a shoe involves a wealth of different resources, and yet almost all of that in a virtualized environment is replaced with just a demand for one thing: energy.
The one resource, perhaps above all others, that the universe offers in abundance.
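If it helps to see that comparison laid out, here’s a minimal toy model of the idea. Every number in it is an illustrative placeholder of mine, not real lifecycle data – the point is only the shape of the comparison:

```python
# Toy comparison of the resource draw behind a physical vs. virtual "shoe".
# All numbers are illustrative placeholders, not real lifecycle figures.

physical_shoe = {
    "raw material extraction and refinement": 40,
    "manufacturing": 30,
    "distribution, logistics, and retail": 20,
    "post-life disposal and waste": 10,
}

# The virtual shoe collapses (almost) everything into one draw: energy.
virtual_shoe = {
    "design work (amortized per unit sold)": 0.5,
    "server energy to render and store it": 0.1,
}

def total_draw(lifecycle):
    """Sum the resource draw across all lifecycle stages (arbitrary units)."""
    return sum(lifecycle.values())

print(f"Physical shoe: {total_draw(physical_shoe):.1f} units")
print(f"Virtual shoe:  {total_draw(virtual_shoe):.1f} units")
```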
Hopefully you’re seeing the undercurrents of the idea here. I’m not just talking about buying virtual sneakers, but about civilizational development trajectories: if you wanted to make your civilization more sustainable then surely one excellent path towards achieving that involves limiting, to the maximum extent possible, your physical presence and through that your physical draw on the world’s resources. It only makes sense, then, to think that virtualization (and other forms of “dematerialization”) offer a promising way forward here.
The future of sustainable consumption is…$60 monocles?!
Image courtesy of CCP Games
The dashing gentleman above is an example of what a player’s character might look like in the game I used to work on, EVE Online. For many years, these players’ characters were represented mostly by a single portrait that players could customize, posing their avatar in various clothes, lighting, hairstyles, and so on. The game was about spaceships, so you mostly just stared at whatever ship your character was flying in space, rather than their body. But eventually, the game was updated to include 3D environments that players could walk around in. And with that came the opportunity for the studio making the game to sell virtual clothes. Among the first release of items was that monocle you can see our chap above sporting. Yep, we tried to sell a monocle to our players. We also charged $60 USD for it.[3]
I was at the company when this all unfolded. It hurt us significantly, and part of me resented the idea that we would sell virtual goods at all, let alone for such outrageous prices. Ironically, part of why I left this industry was because I wanted to study sustainability and do some good instead. All these years later, my feelings are far more mixed. I can see how, in principle, something like this can offer surprising, and surprisingly large, gains in this new area I’ve shifted focus to.
Returning to the monocle – or “monocle-gate” as it was insufferably labelled – all of this happened essentially a lifetime ago in games-industry terms, around 2011. Since that time, the purchase of in-game “cosmetic” items has become far more accepted, and far more commonplace. I think it would spin people’s heads to know just how far this industry of virtual goods has grown in such a short time.
To illustrate this, I’ll start by comparing two examples of in-game transactions (often dubbed “microtransactions” because they’re usually only small amounts of money[4]):
This is one of the first ever microtransactions offered; some armour for your horse in a fantasy game:
I think it looks good, personally. For more information on this moment in gaming history see this article.[5]
Though not quite monocle-gate, it wasn’t received well either. Back then, in 2006, the idea of asking $2.50 for a cosmetic item was new, and, importantly, the game was single-player only: there’s less motivation to make a status-type purchase in a non-social environment.
Here’s the big change, however:
As games have shifted increasingly online, even gameplay experiences that were once typically single-player and non-social have become the opposite. Just five years ago, if you played the biggest basketball video game out there, NBA 2K, you’d do so largely by yourself or with friends on the couch beside you, maybe at most playing some online games with other people.
In the last few years that’s changed, and now this game, and many others like it, are becoming more like the MMO genre – a persistent, always-online world. Now, in these basketball games, you have your own apartment, and you occupy a neighbourhood shared with other players. Naturally, this means there are shops too. Because the game world is now social, the items you buy can be shown off to other people as you walk the neighbourhood (or play on court). Status purchasing makes more sense, and I believe this is key to why we’ve seen this change.
This is the second example of microtransactions, and it shows just how detailed, embedded, and mainstream this has all become. Let me take you on a quick tour around the neighbourhood in one of the latest versions of that basketball game, NBA 2K.
There’s a barber where you can drop in for a haircut change, which you can pay for with real money, of course:
There are countless clothes stores too, for basically any style. Inside are genuine brands, and a strange new grey area is created here. It’s somehow more real when they’re officially branded Levi’s jeans. They may not be “real” but they are certainly “authentic” or “genuine”, and this surely helps blur the lines between real and virtual even further.
There’s advertising everywhere, too. The billboard above that store is advertising another in-game item, also potentially purchasable for real money.
Speaking of branded goods, why not stop by JBL and get yourself some dope headphones to walk around the neighbourhood in?
And, of course, there’s a Foot Locker with all the big shoe brands you’d expect. I wasn’t using shoes earlier as an example by accident.
People drop into this virtual Foot Locker here and spend virtual currency on virtual shoes for reasons like status and prestige.
To be clear, the virtual currency (VC) most things are sold for can be “earned” in-game by playing, but because there is so much money to be made here, the game is increasingly designed to be less rewarding in that regard, nudging players to reach into their wallets instead, just like they would in a real Foot Locker store.
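To make that nudge concrete, here’s a minimal sketch of the “grind vs. pay” calculation a player faces. The earn rate, item price, and exchange rate are made-up figures of mine, not NBA 2K’s actual numbers:

```python
# Minimal sketch of the "grind vs. pay" trade-off behind VC pricing.
# The earn rate, item price, and exchange rate are hypothetical examples.

VC_PER_HOUR_PLAYED = 800   # assumed VC earned per hour of normal play
SHOE_PRICE_VC = 12_000     # assumed VC price of a pair of virtual sneakers
USD_PER_1000_VC = 3.00     # assumed real-money exchange rate

hours_to_grind = SHOE_PRICE_VC / VC_PER_HOUR_PLAYED
price_in_usd = SHOE_PRICE_VC / 1000 * USD_PER_1000_VC

print(f"Grind it out: {hours_to_grind:.0f} hours of play")  # -> 15 hours
print(f"Or just pay:  ${price_in_usd:.2f}")                 # -> $36.00
# Tune VC_PER_HOUR_PLAYED downward and the wallet looks better and better.
```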
The point being made here is that we’ve come a long way: from $2.50 horse armour developed in-house, to officially branded Foot Locker stores slinging virtual Nikes in a fully licensed sports game franchise. For the publisher of the NBA 2K games, virtual goods are now a huge part of their revenue model and have proven highly successful. Oddly, this success comes despite often significant consumer backlash. Games journalist Luke Plunkett captures the broader consumer sentiment in a scathing article about the 2019 release of NBA 2K, and about the games industry’s use of microtransactions more broadly:[6]
2K19 is like a free-to-play mobile game, a predatory experience where the game is always shaking you down for your lunch money, even after you’ve already given it $US50 ($70). To play 2K19 is to be in a constant state of denial and refusal, always aware that in every aspect of the game, from the gyms to the stores to the action on the court itself, you can either spend VC [virtual currency – the game’s money] or be told that you’re missing out on something.
There may remain a vocal portion of the player base and industry commentators loudly protesting virtual goods sales, but the overwhelming majority seem to have spoken with their wallets. Below is the game’s publisher, Take-Two Interactive, reporting the sales figures for 2019:
Net revenue grew to $1.249 billion, as compared to $480.8 million in last year’s [2018] fiscal third quarter. Recurrent consumer spending (virtual currency, add-on content and in-game purchases, including the allocated value of virtual currency and add-on content included in special editions of certain games) increased and accounted for 24% of total net revenue. The largest contributors to net revenue in fiscal third quarter 2019 were Red Dead Redemption 2, NBA 2K19 and NBA 2K18.[7]
Almost a quarter of all revenue, and figures in the hundreds of millions, driven largely by the sale of two basketball games, and chiefly, the virtual goods sales that happen within them.
We’ve come a long way from horse armour, indeed! The question now is, where might this trend take us?
Breaking down what this all means:
Shifting consumer culture towards virtual worlds might seem like a ludicrous concept, but what I’m trying to show here is that it’s already happening. On a big scale, too. One of the most popular games right now is Fortnite, and it made over a billion USD in 2018-2019 from purely cosmetic item sales – ones that don’t affect gameplay advantage[8]. In other words, a billion dollars that might have been spent on real-world items for similar reasons has instead been spent on pixels, which largely only needed energy to produce[9]. Big players like Levi’s, Nike, and Foot Locker, and big revenue figures, mean the industry is valued somewhere around 15 billion USD[10] – a huge figure for something few people talk about at all, let alone in sustainability terms.
There’s a great deal more that can be said about the virtual economy too. It brings many co-benefits, like offering new types of consumer empowerment and control over the goods they purchase, and even allowing them to be producers (owners of the means of production?!) themselves, as creators of in-game content, or people who can monetize their in-game prestige or fame for real world wealth[11].
Virtual goods economies also have a long, proud tradition with social goods. For decades now, sales of virtual goods have often been used to donate to charities, or fund other social enterprises. The largest MMO-type games like EVE Online and World of Warcraft run regular, highly successful fundraisers, providing millions of dollars in assistance. This example here is an in-game pet from the previously mentioned World of Warcraft – a Cinder Kitten!
Sales proceeds of this fiery furball raised over $2 million for relief efforts following Superstorm Sandy.
Levelling up: Games as a competitive sport (esports)
“Esports”, meaning “electronic sports”, is, if you haven’t heard of it, professional competitive gaming. In relation to virtual goods, a more recent twist is that proceeds from virtual goods sales can now be pooled to serve as prize money for esports athletes in major gaming tournaments. These prize pools have ballooned from the hundreds of thousands to the tens of millions in the last five years, driven largely by virtual goods sales. This has, in turn, helped promote the further professionalization of the athletes competing, and grown the legitimacy of esports further. These in-game tournaments, and esports more generally, are now economies in their own right, involving broadcasters, analysts, and announcers, along with sponsors for the shows, and sponsors and endorsements for the individual teams and players. To put it all in perspective, an esports athlete coming to the US for a tournament can sometimes file for the same visa used by other professional athletes – a golf or tennis star, for example.
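The best-known version of this funding model is Dota 2’s annual tournament, The International, where a share of in-game battle pass sales tops up a base prize pool. Here’s a minimal sketch of that mechanism; the base amount and revenue share below are my own illustrative assumptions, not official figures:

```python
# Minimal sketch of a crowdfunded esports prize pool, in the style of
# Dota 2's "The International": a fixed base pool from the publisher plus
# a share of in-game battle pass sales. Both constants are assumptions
# for illustration, not official numbers.

BASE_PRIZE_POOL = 1_600_000  # assumed publisher contribution, USD
COMMUNITY_SHARE = 0.25       # assumed fraction of battle pass sales added

def prize_pool(battle_pass_sales: float) -> float:
    """Total tournament prize pool given gross battle pass sales (USD)."""
    return BASE_PRIZE_POOL + COMMUNITY_SHARE * battle_pass_sales

# A hypothetical $130M in virtual goods sales balloons the pool into the
# tens of millions, dwarfing the publisher's own contribution.
print(f"${prize_pool(130_000_000):,.0f}")  # -> $34,100,000
```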
Madison Square Garden, New York, sold out on consecutive nights hosting one of the largest annual esports tournaments.
‘When you fill up “The World’s Most Famous Arena”—home to the New York Knicks, Rangers, and the “Fight of Century” between Ali and Frazier—and you do it on consecutive nights, you send notice that you’re to be taken seriously.’[12]
Clearly there is a culture growing here too, not just an economy. Gaming personalities, analysts, athletes, and journalists are all earning money and creating economic value, but they’re also shaping a new culture that draws ever more people in. This culture drives dozens of different economies today while still in its infancy. It will surely drive many more as it develops. Virtual worlds and virtual goods are part of this broader movement, and they’re likely only going to increase in economic and cultural value into the future.
Increasingly, there is a blurring of the line between real and virtual worlds; the status we achieve in them, the sense of belonging and accomplishment they provide, and the wealth, even, that we achieve in them.
Perhaps it was something like this that drove a company like Facebook to purchase Oculus, maker of the Rift virtual reality headset, in 2014 for a then head-scratching 2 billion USD[13]. Many couldn’t understand why the social network giant wanted to get in on the virtual worlds industry, and why it was willing to pay so much for one of the earlier headset technologies showing promise and potential for broad adoption.
A lot of commentators seemed to focus on everything I have covered so far: on gaming and the virtual economies around it. What’s interesting to consider are the broader applications of a virtual environment, and the economies that could spring up around them. As just one powerful example, imagine being able to buy cheap front-row tickets to see your favourite sporting team, musician, comedian, or whatever else, using VR. Facebook may have lofty ambitions for virtual worlds beyond games, while hoping for a similar ability to draw people in; to create cultures, and of course, economic value.
Virtualization offers attractive incentives to many companies. It’s often far cheaper to provide a virtual product than a real-world one, and although price points are significantly lower in virtual environments, the overall margins are far larger (hence it being so insanely lucrative for the companies that have done well on virtual goods sales). The potential here for new economies seems to catch the eye of business, but the potential gains in resource reduction should equally catch the eye of sustainability practitioners and advocates. Perhaps we should be having more discussions about virtualization, virtual goods, and how to combine civic advocacy, business innovation, and government policy to encourage reductions in natural resource draws.
Interestingly though, this idea of increasingly pervasive virtualization of goods will battle both indifference and ignorance from people unaware of the virtual goods industry already out there, and it will also meet some hostility from gamers and game analysts, who are often in conflict with game studios over virtual goods sales. Virtual goods sales have at times been deeply predatory and problematic, as Luke Plunkett’s earlier article demonstrated.
Another recent example is the controversy surrounding “loot boxes”. “Loot boxes” are essentially randomized bundles of items that players can spend currency on (usually real-world money). Because the items in the box are randomized, there is a chance each time for a good or bad item. They operate similarly to poker machines, and with similarly slim odds of a great “payout”. What we have here, then, is huge gaming companies employing psychologically manipulative practices to lure gamers, often children, into gambling real money for virtual items.
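To see why the poker machine comparison fits, here’s a minimal sketch of how a loot box works under the hood. The item tiers, drop rates, and dollar values are hypothetical, not any specific game’s published odds:

```python
import random

# Minimal sketch of loot box mechanics. The tiers, drop rates, and rough
# "values" below are hypothetical, not any real game's published odds.

BOX_PRICE = 2.50  # hypothetical cost per box, USD

# (item tier, probability of dropping, rough value to the player in USD)
DROP_TABLE = [
    ("common duplicate",    0.80,  0.25),
    ("uncommon cosmetic",   0.15,  1.00),
    ("rare cosmetic",       0.045, 5.00),
    ("ultra-rare cosmetic", 0.005, 50.00),
]

def open_box():
    """Roll once against the drop table - one pull of the lever."""
    roll = random.random()
    cumulative = 0.0
    for tier, probability, value in DROP_TABLE:
        cumulative += probability
        if roll < cumulative:
            return tier, value
    tier, _, value = DROP_TABLE[-1]  # guard against float rounding
    return tier, value

# Expected value per box sits well below its price; the rare jackpot is
# what keeps players pulling, just like a slot machine.
expected_value = sum(p * v for _, p, v in DROP_TABLE)
print(f"Price per box:  ${BOX_PRICE:.2f}")
print(f"Expected value: ${expected_value:.2f}")
print(f"Sample pull:    {open_box()[0]}")
```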
Even if it offers some promise for more sustainable consumption, the road towards virtualization quite clearly already has its own pitfalls. These flashpoints of controversy all have the familiar whiff of hyper-capitalist greed that we’d find in most other places, whether trading oil commodities or offering real estate loans. As discouraging as that is, it’s also an indicator of just how real, and how high-stakes, this world and the industry around it are becoming.
When the sharks have begun circling, you can be confident there’s something meaty there.
The Transcension Hypothesis, Bucky Fuller, and the Sleepers
It’s not entirely correct of me to say, as I did earlier, that the Fermi Paradox assumes advanced civilizations would be motivated to colonize space. This is because the Fermi paradox is something of a living idea; one that is updated as new critiques are made. The idea I’ve focused on – of advanced civilizations essentially leaving our universe and disappearing into a virtual world within – has in fact been added to the theory (as officially as possible). Stephen Webb’s compendium of solutions to the Fermi paradox, If the Universe is Teeming With Aliens … Where is Everybody?[14], now references an idea like the one I’m discussing alongside 74 other possible explanations.
This is thanks to the paper by John Smart[15] outlining an idea called the “Transcension Hypothesis”. Smart and I aren’t quite speaking the same language, but we’re both playing the same game here, pun intended.
There are a few key pieces of source material worth quoting at length here, before I return to assemble them all into something understandable, and something I have some first-hand experience with.
Firstly, what is Smart’s “Transcension Hypothesis” about? The clearest and most concise summary, sadly, isn’t in the abstract of the paper itself, which is quite dense. Instead, I prefer this summary from H+Pedia – essentially a transhumanist version of Wikipedia, and interesting in its own right.
In the “developmental singularity hypothesis”, also called the transcension hypothesis, Smart proposes that STEM compression, as a driver of accelerating change, must lead cosmic intelligence to a future of highly miniaturized, accelerated, and local “transcension” to extra-universal domains, rather than to space-faring expansion within our existing universe. The hypothesis proposes that once civilizations saturate their local region of space with their intelligence, they need to leave our visible, macroscopic universe in order to continue exponential growth of complexity and intelligence, and disappear from this universe, thus explaining the Fermi paradox.[16]
One key idea here is that long-term sustainability might be cosmologically quiet – hiding its secrets from us.
The idea here, as you can hopefully see, echoes my own about civilization development and virtualization. As I said earlier, if you want to optimize your ecological footprint, you will minimize your resource consumption. Smart’s paper echoes this with ideas about “highly miniaturized, accelerated, and local “transcension” to extra-universal domains, rather than to space-faring expansion within our existing universe”.
Think about this in terms of science fiction clichés. It’s extremely common to see future versions of humanity where we’ve colonized other planets, spreading out like locusts to consume, extract, and expand (I feel like sometimes in these movies we are the baddies and we don’t even know it). Less common than this story, historically, is the one where future versions of humanity instead colonize virtual worlds, expanding inwards, as paradoxical as that sounds. If they were to take this route, then there wouldn’t be as much going on in the physical world, perhaps. An interesting idea to ponder, how those two “worlds” might co-exist – but the important point for now, in terms of the Fermi Paradox is that a civilization that develops into “inner space” won’t leave much of a footprint in “outer space”.
Perhaps achieving sustainability means we won’t leave a large footprint to be noticed by other civilizations. If advanced civilizations optimize their existence to vastly minimize their environmental impact, and thereby perhaps their visible presence, this could help explain the Fermi paradox in a way that’s maybe even uplifting (success is out there, it’s just quiet).
To elaborate on this idea, I’ll include a portion of the abstract from Smart’s paper, worth quoting at length so as to capture the general “feel” of the idea, at least.
The emerging science of evolutionary developmental (“evo devo”) biology can aid us in thinking about our universe as both an evolutionary system, where most processes are unpredictable and creative, and a developmental system, where a special few processes are predictable and constrained to produce far-future-specific emergent order, just as we see in the common developmental processes in two stars of an identical population type, or in two genetically identical twins in biology.
The transcension hypothesis proposes that a universal process of evolutionary development guides all sufficiently advanced civilizations into what may be called “inner space,” a computationally optimal domain of increasingly dense, productive, miniaturized, and efficient scales of space, time, energy, and matter, and eventually, to a black-hole-like destination. Transcension as a developmental destiny might also contribute to the solution to the Fermi paradox, the question of why we have not seen evidence of or received beacons from intelligent civilizations.[17]
There’s a whole lot to process there. In short, Smart is arguing a few key points:
The science of evolutionary development presents two ideas about evolution, one as a chaotic and unpredictable system, and another where a “special few processes” are predictable, almost inevitable. In this hypothesis, on a civilizational level, these processes are “constrained to produce a far-future-specific emergent order”. In other words, to produce predictable outcomes.
This theory argues that every advanced civilization will encounter a situation that represents that second type of evolutionary development – a process that is predictable. This is because, on a long enough timescale, they will desire a development path that shifts from physical space, towards “inner space” (maybe virtualized space?), a “computationally optimal domain of increasingly dense, productive, miniaturized, and efficient scales of space, time, energy, and matter”.
This shift towards an ever-smaller physical profile, towards “inner space”, may represent a solution to the Fermi Paradox. Civilizations that achieved advanced levels of sustainability would not, according to this theory, leave a significant physical footprint, and would thus be difficult to discover.
This idea of Smart’s is echoed in a far earlier work, from the great thinker and futurist Buckminster Fuller. Bucky talked about this same idea in terms of “ephemeralization” which is explained nicely below:
Ephemeralization is the ability of technological advancement to do “more and more with less and less until eventually you can do everything with nothing,” that is, an accelerating increase in the efficiency of achieving the same or more output (products, services, information, etc.) while requiring less input (effort, time, resources, etc.). Fuller’s vision was that ephemeralization will result in ever-increasing standards of living for an ever-growing population despite finite resources.
From Wikipedia
Sounds quite sustainability-related huh? Sounds quite a bit like this transcension hypothesis, too. And these ideas I have of my own about the potential importance of virtual worlds to sustainability, well it all intermingles quite nicely indeed, if I can say so, while adjusting my $60 virtual monocle.
…and the Sleepers?
Right, so… yes, a little confession is perhaps in order. Like I said, I used to work on this game EVE Online, and I was a writer there. It was a wonderful environment in that sense, because the game embraced concepts of transhumanism and other philosophically rich domains quite openly. Fertile ground for a writer with an interest in these topics, like myself.
At one point, the game was releasing a large expansion to its content that would feature a new alien race. My role as a writer was to help shape the story of that race. My brief was more or less that they should be “super advanced and ahead of our time”.
So, the story I created about them was almost literally everything you’ve just read.
It’s so weird. That I’m trying to write a serious paper about sustainability, and that this part of my past life keeps recurring, haunting me like a ghost. Something I wrote as fiction is now something I feel a need to talk about as a potential reality.
This race, the “Sleepers” as they were known, had disappeared into a virtual world, into “inner space”, just as Smart describes. A huge part of their advanced technology was derived from fullerenes, a type of carbon molecule named after that same “Bucky” fellow – my way of giving a nod to his ideas of ephemeralization. And I was there, as a real person working in a virtual world, roleplaying a real person who was investigating this alien race that had disappeared into a virtual world and… the lines blur.
It was only years later, reading Smart’s paper, that I realized something I wrote about as fiction might be an idea of genuine interest to serious minds.
Image courtesy of CCP Games. Art by me 😉
The image above hints at what the structures housing the Sleeper civilization look like. It was released alongside a short story I wrote, both intended to conjure a feeling of detachment. The cosmos surrounding the facility glows red with life while the facility housing the Sleepers – vanished to their VR world within, if they even remain – languishes in a static black-and-white monochrome.
We made sure (at least at the time I was there) that players had no direct interactions with the Sleepers. All that remained were these silent, enigmatic buildings, and the fearsomely powerful worker drones that sustained the colony’s physical needs. There was no “big bad villain” in this expansion of the game, which certainly bucked the trend of how most games tell their stories. But with the right nudges here and there, the players found an enjoyable mystery.
In this case, players could only pillage these places for scraps of understanding; they could only scratch at the surface of greatness as they pilfered the Sleepers’ sites for breadcrumbs. I was trying to make the experience as close as I could to what it might really be like to encounter a civilization that exemplified the transcension hypothesis. I was doing it before I had even heard of that term or read that paper. I was doing it, myself, as an example of the paper’s thesis.
[1] Of course, it’s worth considering dismantling it too, but those are well-explored paths. This isn’t an argument I’ve heard others make, so it’s my focus right here. We have a million eyeballs on that problem; you don’t get much more from a million-and-one, and certainly not at the cost of not exploring this a little bit further, right? Great.
[2] There is an exception here, of course, since computers require a lot more than just energy, but we’ll shelve that consideration momentarily.
[3] Kuchera, B. (2018, July 4). Leaks, riots, and monocles: How a $60 in-game item almost destroyed EVE Online. Ars Technica.
[4] Perhaps, given that term, you can see why a $60 item, and a monocle of all bloody things, went down so badly?
[5] Fahey, M. (2016, April 4). Never Forget Your Horse Armour. Kotaku.
[6] Plunkett, L. (2018, September 12). NBA 2K19 Is A Nightmarish Vision Of Our Microtransaction-Stuffed Future. Kotaku.
[8] Fagan, K. (2018, July 20). Fortnite — a free video game — is a billion-dollar money machine. Business Insider.
[9] More thoroughly, there are the indirect resource requirements of the people needed to develop those pixels, so there is a broader resource drain still, it should be noted.
[10] Bonder, A. (2016, December 25). 5 lessons from the $15 billion virtual goods economy. VentureBeat.
[11] Video game streamers, for example, are massive celebrities. The top streamer, Ninja, amassed a staggering 218 million human hours watched on his channel in 2018. YouTube’s largest star, PewDiePie, while now a media icon in his own right, launched his career in the same way as Ninja, as a video games streamer.
[12] Cunningham, S. (2016, October 27). How Video Gamers Sold Out Madison Square Garden. Inside Hook.
[13] Kovach, S. (2014, March 26). Facebook Is Buying Oculus Rift, The Greatest Leap Forward In Virtual Reality, For $US2 Billion. Business Insider.
[14] Webb, S. (2002). If the Universe Is Teeming with Aliens … WHERE IS EVERYBODY? (2nd ed.). Copernicus.
[15] Smart, J. M. (2012, September). The transcension hypothesis: Sufficiently advanced civilizations invariably leave our universe, and implications for METI and SETI. Acta Astronautica, 78, pp. 55-68.
[16] H+Pedia is a Humanity+ project to spread accurate, accessible, non-sensational information about transhumanism, including radical life extension, futurism and other emerging technologies and their impact to the general public – From their website main page.
[17] Smart, J. M. (2012, September). The transcension hypothesis: Sufficiently advanced civilizations invariably leave our universe, and implications for METI and SETI. Acta Astronautica, 78, pp. 55-68.
Having worked in the video games industry, and now studying sustainability, I feel I have a better perspective than most on the interplay between these topics. It’s a strange combination, I realize, but the overlap is surprisingly large, and much of it stems from the unique way in which games are consumed.
I have seen first-hand how all-consuming some video games can be. I don’t just mean addictive either; I mean life-consuming, and even life-replacing. At various moments in my career I would meet people who played the game I worked on. They would become utterly invested in the game, to the point that “playing” hardly captures it. To them, it was a second life, and I could hardly blame them. We wanted this world we created to be exactly that.
For me too, the lines blurred heavily between the game world and reality, and the relationship I had with the game was complex. It was both something I created and something I consumed. Something I lived in and worked in, and worked on. A world I was paid to create, but also got lost in myself – my favourite kind of gameplay was just sitting around “roleplaying”. Interactions with other people inside the world mixed real-world banter with in-character drama and in-world action. At one point, I was running something called “live events”, where I would take control of in-game characters, such as a menacing invasion force that would fight it out against our players. It was an experiment that pushed the edges of narrative – now the characters leapt off the page into a living, breathing world, in a story that would unfold in real time before the eyes of thousands of players.
There’s a reason this has become the world’s biggest entertainment industry. Games can get incredibly deep.
Though games are growing in cultural and economic importance and becoming completely mainstream, it’s still perhaps underappreciated that there are vast, complex, always-online worlds out there, perpetually bustling with thousands of player “inhabitants” – pocket virtual worlds running on a network of computers. At times, the richness of the social, economic and other interactions in these games can rival real-world equivalents[1]. A person may chase fame, wealth, or prestige harder in their video game life than they do in the real world. Perhaps this is especially the case in the “MMO[2]” genre which creates not just “game worlds”, but cities, neighbourhoods, homes, and alternate lives – something persistent and to many of its players, something deeply meaningful.
Image courtesy of Blizzard Entertainment Inc.
This, for example, is Stormwind Keep.
It is the main city for one of the two player factions in the MMO game World of Warcraft. Players can retire to the city after adventures to socialize, trade, and otherwise interact with others.
Driven by a mixture of artistic aspiration and financial motivations, the studios behind these special types of always-online games design them to be as immersive as possible; tempting people to stay longer, to sink deeper in. The game I used to work on, EVE Online, is an MMO like this. It has run for decades now; a living, changing world that players have spent huge parts of their lives inside of.
This game is especially notable because not only does it provide an online world for players to interact in, but it all happens on one “server” – everyone occupies the same world[3]. The technical wizardry required to achieve this is not insignificant. The studio, for a time, owned and operated the world’s most advanced and expensive single-server computer network. All this just to host a game. The same studio I worked for once (infamously) marketed its game as “more meaningful than real life”. Among other accolades, the game is part of a permanent exhibit at the Museum of Modern Art in New York, featured alongside just 13 others, including immortal, iconic titles like Pac-Man.
Though the moment has long since passed, MoMA continues to indirectly stick it to Roger Ebert, the famed movie critic who once boldly claimed video games could never be art[4]. The installation takes the form of a 4K UHD video that shows what happens on any given day in EVE Online. While a game full of ships flying around a statistically empty universe may not seem like a proper subject for an artistic 4K UHD museum documentary, EVE Online remains what is arguably the best instance of a living virtual world… The games weren’t chosen for being pretty, but rather for being outstanding examples of interactive design.[5]
I strongly believe the implications of all this are underexplored, yet potentially profound from a sustainability perspective. To understand why, you must appreciate that these games, and the technology behind them, are all just in their infancy. Despite this, it’s already possible, financially and technologically, to build some seriously impressive virtual worlds – ones that draw in great numbers of people. The longer-term implications of these games will become more obvious as the industries and cultures around them grow, and perhaps, as increasingly large numbers of the human population gravitate towards spending some part of their life inside virtual environments.
The key point here is that MMO studios needed cheap, pervasive, high-speed internet to really shine, so the genre is only a few decades old. Return your mind to the Fermi paradox, however, and we’re talking about civilizations advanced enough to colonize space, rather than ones that recently developed broadband internet. That might mean they also invented some really good games along the way, or more broadly, some really advanced virtual environments. Did that distract them? Is that why we can’t see anybody out there? Is the Great Filter of advanced civilizations that they inevitably become enamoured with, or perhaps even lost in, a simulated world?
Footnotes
[1] And in cases where in-game items have corresponding real-world monetary values, it can be real money and not just pixels at stake in those interactions.
[2] MMO stands for “Massively Multiplayer Online”.
[3] The more common alternative is called “instancing” where many copies of the game world are made, and players occupy just one copy at a time. For example, there are multiple “copies” of the city of Stormwind Keep in the MMO game World of Warcraft. In EVE Online, there is just one world that everyone occupies – one Stormwind Keep, effectively.
[4] Watt, M. (2010, April 19). Roger Ebert says video games can never be “art”. Geek.com.
[5] Plafke, J. (2015, May 12). Eve Online’s permanent art exhibit at MoMA can now be viewed online. Geek.com.
I’m so lazy that if I weren’t to some extent coerced into writing all of this, I probably wouldn’t have. You wouldn’t be reading this because there’d be nothing written to read.
Although not always, at times I don’t particularly enjoy writing. It’s a real grind putting ideas to paper in a way that is engaging, accessible, and says something of value – and I’m not claiming to have achieved that, just to have expended a great deal of effort trying. It’s not just the writing either, but all that reading as well. There’s a seemingly endless amount of articles, papers, and other supremely relevant content I could consume and integrate into this.
My sustainability bookmarks folder grows by the hour, ever-swelling with knowledge like some awful Lovecraftian beast. Its insatiable hunger for articles I’ll never have time to return to later must always be fed.
It’s hard work, reading all that, digesting it, reframing it, assimilating it. And you know what? Hard work kind of sucks.
I’m not dropping any revelations here, I know, but it’s worth stating the obvious out loud from time to time.
Being honest, I’d much rather be playing video games. I’ve sworn off all that during my studies, but earlier this year during a break I was playing this fantastic game called Surviving Mars, where you manage humanity’s first ever Mars colony – water, oxygen, food, and keeping colonists happy and fulfilled. It’s basically Interplanetary Sustainability: The Video Game. Indeed, the main goal seems to be just that – make the colony self-sustaining. Interestingly, in the conversations people have about these games, the word sustainability is regularly used.
It’s far from the only game with that kind of goal, either. I’ve also enjoyed playing another game, Oxygen Not Included, which has a similar focus, except this time you’re colonizing an asteroid.
There is definitely a part of me that would rather return to those games right now than write all this. The gamified versions of sustainability I encounter in these worlds are far simpler; there is almost always a way to “win” (they are games, after all), and that certainly has a draw.
Distilling this down further: there’s a part of me that would rather get lost in a fantasy than deal with messy reality. What I wonder is this: if enough other people felt similarly, could this desire shape the development trajectory of our species?
There is a famous question called the Fermi Paradox that essentially asks why we haven’t found any other advanced civilizations in the cosmos yet. There are many suggested answers. One related idea holds that there are certain barriers to survival (called Great Filters) that few, if any, civilizations pass. One of the core issues with the paradox, however, is that it assumes a motivation to explore and colonize space (thus creating a cosmological footprint we humans might be able to observe).
But what if there isn’t? What if the solution to the Fermi paradox is that the overwhelming majority of advanced civilizations simply aren’t motivated to do all that hard work? This needn’t stem from some depression or ennui, either; they could simply have better, more fun things to do.
So, yes, I’m going to talk about video games, and virtual worlds more broadly, but I’m going to do it with the utmost seriousness. Virtual worlds may have something extremely useful to say about civilizational development on cosmological scales, and about the long-term sustainability challenges ahead of us. By looking more seriously at video games, we might already be seeing the first signs that these ideas have genuine merit.
Despite dealing with non-traditional topics, most of the content here still stays at least within the orbit of mainstream discourse.
This series on virtualization is a little different.
It is still ultimately grounded in reality – very much so – but it incorporates other, more far-flung ideas. These are nonetheless based on well-reasoned observations and examinations, of technology especially, but also of human psychology, economics, and culture. This approach reflects the mixing of disciplines, and the difficulty of discussing any one idea in isolation from related schools of thought, or within just one scale of time or space.
This section on virtualization embraces thinking outside the mainstream. As I hope to show, however, the ideas that follow are too important for the mainstream to ignore much longer and indeed, the technology that invites this mode of thinking is proceeding at pace regardless.
With that disclaimer out of the way, this series is about some key ideas. Firstly, I am looking at sustainability on far longer timescales than usual, extending to cosmological scales (millions, billions, potentially even trillions of years). Interplanetary colonization may be a sustainability issue, after all, but it wouldn’t strike many as a pressing one.
Within that perspective, I explore something called the Fermi Paradox, which asks why we appear to be alone in the cosmos. I provide a potential answer to this question by pointing to the allure (and potential utility) of virtual worlds, and in doing so, hope to make a deeper point about potential civilizational development outcomes that have profound consequences for our perspectives on sustainability.
To put it crudely, I explore the idea that every advanced civilization inevitably ends up playing video games.
Elsewhere I touched upon the idea that there are fates worse than death. Having your mind uploaded to a machine could make colonizing Mars easier, sure, but it could also go wrong in some horrible ways. The image below is from a more recent Bostrom paper, this time from 2013, some 11 years after the previous one[1]. It is the same threat matrix from before, but expanded and revamped – most notably with the addition of a “(hellish)” end to the severity spectrum. Charming. 🙂
New and improved threat matrixes. Shiny.
Bostrom’s framework now covers various hellish outcomes that may be even worse than annihilation, describing them in gruesome detail in his paper. To him, these are nonetheless still part of the existential risk category, because they result in the “drastic curtailing” of human potential. Another thinker has taken this idea further: Max Daniel, Executive Director of the Foundational Research Institute, a group that “focuses on reducing risks of dystopian futures in the context of emerging technologies”[2] (Foundational Research Institute, 2019). Daniel suggests that x-risks with hellish outcomes constitute their own distinct type of risk: the S-risk.
The S stands for suffering 🙂
Daniel’s online essay is adapted from a talk[3] given at the Effective Altruism Global (EAG) conference in Boston. In it, Daniel focuses on Bostrom’s paper above, homing in on the “hellish” part of the grid to explore how suffering can be as negative an outcome as annihilation – yet is often the less-discussed existential risk.
S-Risks and Hellish outcomes – Netflix’s Black Mirror
“To illustrate what s-risks are about, I’ll use a story from the British TV series Black Mirror, which you may have seen. Imagine that someday it will be possible to upload human minds into virtual environments. This way, sentient beings can be stored and run on very small computing devices, such as the white egg-shaped gadget depicted here.”
“Behind the computing device you can see Matt. Matt’s job is to convince human uploads to serve as virtual butlers, controlling the smart homes of their owners. In this instance, human upload Greta is unwilling to comply.”
“To break her will, Matt increases the rate at which time passes for Greta. While Matt waits for just a few seconds, Greta effectively endures many months of solitary confinement.”
The preceding excerpt, taken from Daniel’s essay[4], illustrates how technology might be used as a torture device capable of causing (almost literally) infinitely more suffering than current technology enables. If it’s possible to upload our minds into machines, then someone with absolute control over those machines and malicious intent may be able to harm us in profoundly new and disturbing ways. It’s simply not possible today to torture someone for a thousand years. But Black Mirror shows us how it might become not only possible, but as easy as setting an egg timer. Fun stuff!
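To get a feel for the arithmetic, here’s a quick back-of-the-envelope sketch in Python. The episode never gives exact figures, so the numbers below (six months, ten seconds) are purely illustrative assumptions:

```python
# Back-of-the-envelope: subjective time for an upload whose clock runs
# faster than real time. All numbers are illustrative assumptions.

SECONDS_PER_MONTH = 30 * 24 * 3600  # ~2.6 million seconds

def subjective_months(real_seconds: float, speedup: float) -> float:
    """Months experienced inside the simulation while real_seconds pass outside."""
    return (real_seconds * speedup) / SECONDS_PER_MONTH

# For Greta to endure ~6 months while Matt waits ~10 seconds,
# the simulation must run over a million times faster than reality:
required_speedup = (6 * SECONDS_PER_MONTH) / 10
print(f"required speedup: {required_speedup:,.0f}x")                   # 1,555,200x
print(f"check: {subjective_months(10, required_speedup):.1f} months")  # 6.0 months
```

A million-fold speedup sounds extreme, but nothing about the scenario requires the torturer to wait – that’s precisely what makes it “as easy as setting an egg timer”.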
Black Mirror achieves something that, for me, few science fiction narratives do. It makes me happy with my own stupid little life that will, mercifully, end someday.
It captures the grace and joy in annihilation. That sounds pessimistic, I know, or fatalistic, or defeatist. It’s something you only understand clearly when you witness something like Black Mirror and realize how much better death would be than some of the outcomes the creators’ dark imaginations have dished up.
While it may not seem an especially happy note to end this discussion on, I raise it here mostly because of the various overlaps of ideas. Daniel builds on Bostrom and uses Netflix’s Black Mirror to illustrate his point. These ideas strike me as not only important, but somewhat intuitive – ideas I think many people have and share. Black Mirror has enjoyed success because its visions of the future, unlike so much drab Hollywood bullshit, perfectly capture our collective anxieties.
Hopefully in time these ideas about risk will grow in influence, helping shape our response to the threats ahead in a way that is more open-minded, more considered, and (hopefully again) more effective.
Footnotes
[1] Bostrom, N. (2013, February). Existential Risk Prevention as Global Priority. Global Policy, 4(1), pp. 15-31. doi:10.1111/1758-5899.12002
The risk-based framework I’ve mentioned elsewhere might appear to leave some things out. Climate change (of a sort) happened once already, and our species did survive it. The end of the Ice Age and the arrival of the Holocene was something that Australian Indigenous peoples, for example, managed to overcome. It even afforded them opportunities to settle in previously uninhabitable areas once covered by ice.
The onset of the Holocene climatic optimum … coincides with rapid expansion, growth and establishment of regional populations across ~75% of Australia, including much of the arid zone.[1]
In a similar vein, birds are dinosaurs. Importantly, they’re not merely related to dinosaurs; they are actual, modern-day dinosaurs – the survivors of the mass extinction that proved terminal for most of their kin.
That previous climate change event, and that mass extinction event, might both therefore be examples of endurable risks, in Bostrom’s terminology. The groups at risk (humans and dinosaurs) were not entirely wiped out. In at least the case of the dinosaurs, however, their time as the dominant lifeform on Earth was arguably over once mammals got a foothold.
Recall Bostrom’s definition of existential risk:
One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.
Existential risk doesn’t just require the annihilation of life. It’s enough that the potential of that life is ‘drastically curtailed’ for something to be considered an existential threat. This wasn’t the case for Indigenous Australians, of course, who may even have thrived thanks to the effects of the last great change in climate. For them, the greatest existential risk would sadly come later, in the form of European colonization. For the dinosaurs, though, future potential was drastically curtailed – even if some still live on as birds (endure, in Bostrom’s terms).
The case of the birds could seem to pose a problem for a framework like the one I’ve built from Bostrom’s ideas – or at least it demonstrates that rigid categorization on paper won’t always translate perfectly to the real world. The framework doesn’t seem to allow for risks that are endurable and terminal at once, or at least it requires further thinking when a risk can be either terminal or endurable depending on perspective (endurable for the birds, but not the dinosaurs?). Bostrom’s own table, interestingly, only includes examples of terminal risks that involve annihilation, not the “drastic curtailing of potential”. That is a trickier idea to pin down. How sharp is the line between them?
We can explore this idea further using the earlier example of transhumanism, which represents another “grey area”. What happens when our species (humans) no longer exists but is replaced by something that is still in some way “human”?
To the same extent that modern birds still “carry the torch” for the dinosaurs, what if some future version of us ends up doing the same for our species? What we define as “terminal” might actually vary according to personal beliefs and preferences, and that reveals the immensely sticky link between risks and threats on the one hand, and people’s closely held beliefs, values, and norms on the other.
For example, imagine we can upload our brains to machine bodies. This could present a vast new realm of possibilities for us in terms of sustainability. Why terraform Mars when we’ve already seen how well robots can do there?!
Self-portrait of Curiosity located at the foothill of Mount Sharp (October 6, 2015).
If robots can thrive there, maybe we should be more like them?
But then, to some people, the moment we do that, we lose something important about our humanity. The era of the human is effectively over, they say. The point is deeply debatable, and has been debated many times: If we replace enough of ourselves with machines, computers, and technology – to the point we are arguably no longer human – does that mean our species no longer exists? Is it a terminal or endurable event for the human species?
Timothy Morton’s ideas are relevant here too. If we are a kind of cyborg, as he says, then this question isn’t even theoretical. The same applies to his claim that industrial capitalism is a primitive AI ruling us – a claim that is, in some senses, quite hard to refute. Are these terminal or endurable events?
A related thought – perhaps another way to think about this – is speciation, a term from biology referring to the formation of new and distinct species in the course of evolution. Speciation has happened with humans before; other species like the Neanderthals share a common ancestor with us – one that speciated at various points. Humans themselves have driven artificial speciation in other species, from dogs to domestic livestock to produce – and we’ve been doing it for tens of thousands of years. Technology has often played a key role too, in creating new breeds and varieties of flora and fauna (often to our own benefit). From this perspective, further technology-driven speciation of humans themselves may be possible, especially if it benefits us – or appears to.
From Corgis to Corn: A Brief Look at the Long History of GMO Technology[3] does a great job of providing specific examples of speciation over time, stretching back millennia:
Image from Harvard’s paper[4].
Bringing it all back to Bostrom’s framework, are outcomes where our humanity fades away a terminal event for our species? Or because something else persists, are they endurable in some way?
Transhumans are to humans what birds are to dinosaurs. They may carry the torch of the species forward, but they leave many things behind in the process. The potential of a flesh-and-bone species to fully flourish may very well be curtailed in a future where we shed our biological limitations and transition to new forms. It might seem a distant possibility, relegated to the realm of thought experiment, but it nonetheless offers moments for reflection when it comes to ideas of risk – especially the risk of species annihilation. This shows, hopefully, that annihilation can mean quite a few things, and not all of them are as bad as the word itself might imply.
Footnotes
[1] Williams, A. N., Ulm, S., Turney, C. S., Rohde, D., & White, G. (2015). Holocene Demographic Changes and the Emergence of Complex Societies in Prehistoric Australia. PLoS ONE, 10(6).
[2] Bostrom, N. (2002). Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards. Journal of Evolution and Technology, 9(1).
In making podcasts for this project, we had the pleasure of talking with Elizabeth Boulton, a PhD researcher studying the work of Timothy Morton. Morton developed the concept of the hyperobject in an attempt to better account for exactly how existential risks like climate change are a ‘different beast’, as Bostrom describes them.
Having set global warming in irreversible motion, we are facing the possibility of ecological catastrophe. But the environmental emergency is also a crisis for our philosophical habits of thought, confronting us with a problem that seems to defy not only our control but also our understanding. Global warming is perhaps the most dramatic example of what Timothy Morton calls “hyperobjects”—entities of such vast temporal and spatial dimensions that they defeat traditional ideas about what a thing is in the first place[1].
The idea of a hyperobject can be confusing, but it echoes concepts from Bostrom. Global warming, for example, is a process that occurs over geological timescales. This is not our default mode of thinking, because our biology limits us to far shorter lifespans. When Bostrom says ‘we have not evolved mechanisms, either biologically or culturally, for managing such risks’[2], he is ultimately alluding to a very basic truth about our biology and the kind of mindset it locks us into. An argument like Morton’s essentially builds upon this idea in greater detail, arguing that climate change is an example of something so vast in time and space that it defies the ability of biologically-evolved human minds to comprehend it (see also: Building a Map, about how AI and other technological progress might help us meet sustainability challenges beyond the human mind’s ability to solve).
Natural selection did not equip us for problems like this, for the simple reason that natural selection only works on endurable threats: there must be something left alive with favourable traits to select for. Since these are terminal risks, there is no room for natural selection, and therefore no (or exceedingly little) room for our biology to help us.
One obvious point here is that technology may help us overcome that complexity. Climate models, for example, already employ tremendously advanced AI and other technological innovations that allow us to reduce informational complexity to levels a human mind can understand and respond to[3].
Going further, this idea of technology-driven innovation can be a key argument in transhuman or posthuman interpretation of sustainability. In short: smarter, more capable humans can solve bigger, more challenging problems. Bostrom suggests we need new societal institutions, new priorities, new policies, and new norms – all to face new threats. Similarly, if human minds cannot comprehend these new threats, then perhaps we need new minds and maybe even new bodies, too?
‘A reckoning for our species’: the philosopher prophet of the Anthropocene
Part of what makes Morton popular are his attacks on settled ways of thinking.
His most frequently cited book, Ecology Without Nature, says we need to scrap the whole concept of “nature”. He argues that a distinctive feature of our world is the presence of ginormous things he calls “hyperobjects” – such as global warming or the internet – that we tend to think of as abstract ideas because we can’t get our heads around them, but that are nevertheless as real as hammers.
He believes all beings are interdependent, and speculates that everything in the universe has a kind of consciousness, from algae and boulders to knives and forks. He asserts that human beings are cyborgs of a kind, since we are made up of all sorts of non-human components; he likes to point out that the very stuff that supposedly makes us us – our DNA – contains a significant amount of genetic material from viruses. He says that we’re already ruled by a primitive artificial intelligence: industrial capitalism. At the same time, he believes that there are some “weird experiential chemicals” in consumerism that will help humanity prevent a full-blown ecological crisis.
[2] Bostrom, N. (2002). Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards. Journal of Evolution and Technology, 9(1).
[3] Cho, R. (2018, June 5). Artificial Intelligence – A Game Changer for Climate Change and the Environment. State of the Planet – Earth Institute, Columbia University.
If you wish to make an apple pie from scratch, you must first invent the universe.
– Carl Sagan, Cosmos, Episode 1.
The wonderful Sagan quote above demonstrates a way of thinking that embraces the complexity we can find in even the simplest thing. In this case, we’re looking at an apple pie, because Sagan was a good American patriot.
He re-frames the pie as something that exists in a broader context – inside a physical universe. He then redefines “from scratch” to mean “from the very beginning of that universe”. This perspective, he suggests, shows us the real recipe for making a pie. And it’s a lot more complicated than just slapping it in the oven for 30 minutes.
In that same spirit, we must consider very carefully how we go about defining the word “risk”. Like Sagan, we must see that this word exists in a broader context, and that coming up with a good definition might take us a little longer than we first thought.
Returning once again to that high-level framework for building a triage-focused, risk-based model of sustainability, the Global Challenges Foundation (GCF) report illustrates an important feature not fully expressed in Step 1:
1. Identify candidate issues for consideration.
2. Develop criteria to rank them.
3. Apply the criteria and develop a ranked list.
If you wish to build a list of risks, you must first define what you mean by “risk”.
Before we can identify candidate issues for consideration (Step 1), we first need a comprehensive definition of risk – one that ensures we don’t forget anything important. I’ve complained at length that sustainability risk discourse focuses too much on environmentalism, which implies there must be other areas being left out of the discussion. The GCF report broadens the definition of risk to include a new category – infinite risks – and demonstrates the importance of previously under-explored areas. All of this suggests that defining risk is itself a necessary and important part of building a risk-based model of sustainability. It sounds blindingly obvious, I know, but this pedantic stating of fact is important!
This first step is deceptively complex and important: if we don’t define “risk” well enough, we will leave blind spots, some of which could be fatal. In other words, how well we define “risk” will determine our ability to manage it.
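To see why Step 1 carries so much weight, here’s a minimal toy pipeline of the three steps in Python. The Risk record, the example entries, and the scoring criterion are my own illustrative stand-ins, not anything from the GCF report itself:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    probability: float  # chance of occurring over some fixed window, 0..1
    impact: float       # severity on an arbitrary common scale

def identify_candidates() -> list[Risk]:
    """Step 1: everything downstream hinges on how widely we cast this net."""
    return [
        Risk("regional conflict", probability=0.10, impact=1e2),
        Risk("engineered pandemic", probability=0.01, impact=1e6),
    ]

def criterion(risk: Risk) -> float:
    """Step 2: one possible criterion - the report's Risk = Probability x Impact."""
    return risk.probability * risk.impact

def ranked_list() -> list[Risk]:
    """Step 3: apply the criterion and produce a ranked list."""
    return sorted(identify_candidates(), key=criterion, reverse=True)

for r in ranked_list():
    print(f"{r.name}: expected impact {criterion(r):,.0f}")
```

Note what the structure implies: any risk the Step 1 net misses can never appear in the ranked list, no matter how good the criteria in Steps 2 and 3 are. That is the blind-spot problem in miniature.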
Getting strung out over the importance of definitions is often the work of philosophers. Usually that word evokes Ancient Greeks, or some idea of heady thoughts that make you say, “deep stuff, dude”. But philosophy can be something more basic too, like thinking hard about what “risk” means – because there are a number of ways we can frame it, and, more pragmatically, because if we don’t, we could all die.
Meet Nick Bostrom
There are few better to call in for this job than the philosopher Nick Bostrom, who has written at length on existential risk and is influential in this space. His body of work on infinite risk stretches back to 2002[1] and has since culminated in a number of important think tanks adopting that same framework.
Bostrom’s approach to sustainability and risk is brilliant, and a little bit disturbing. That’s a theme with him, and a reason I like his work. That darker underbelly translates into some compelling stories and visions. Sometimes his work feels less like a journal article and more like science fiction (he has written a paper arguing that we may be living inside a simulation, for example). He represents well, I think, the kind of philosopher we’ll need in the Apeilicene.
It’s no accident that his thoughts are echoed in some of the most prominent stories of today, such as the film The Matrix or Netflix’s TV series Black Mirror. He is very much grounded in our collective anxieties.
He’s also quite prolific in this space. The GCF report was co-steered by him, and its ideas about infinite risk in 2015 echo earlier work on “infinite value” from a 2011 paper:
As a piece of pragmatic advice, the notion that we should ignore small probabilities is often sensible. Being creatures of limited cognitive capacities, we do well by focusing our attention on the most likely outcomes. Yet even common sense recognizes that whether a possible outcome can be ignored for the sake of simplifying our deliberations depends not only on its probability but also on the magnitude of the values at stake. The ignorable contingencies are those for which the product of likelihood and value is small. If the value in question is infinite, even improbable contingencies become significant according to common sense criteria.[2]
In other words: infinite risk completely changes the importance of probability. It doesn’t matter much how unlikely something is if that something can wipe us out.
Bostrom’s model of risk
Recall the GCF report’s formula, Risk = Probability × Impact: it can essentially be read straight out of Bostrom’s passage above.
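A toy calculation makes Bostrom’s point vivid. In the sketch below (all scenarios and numbers invented for illustration), an expected-loss ranking based on Probability × Impact is topped by the infinite-impact scenario no matter how tiny its probability:

```python
import math

# All scenarios and numbers are invented for illustration.
scenarios = [
    ("car accident",        0.02, 1e1),       # fairly likely, endurable
    ("severe recession",    0.10, 1e4),       # likely, far costlier
    ("extinction scenario", 1e-8, math.inf),  # vanishingly rare, infinite impact
]

# Rank by expected loss, i.e. Probability x Impact.
for name, p, impact in sorted(scenarios, key=lambda s: s[1] * s[2], reverse=True):
    print(f"{name:20s} expected loss = {p * impact}")

# The infinite-impact scenario tops the list regardless of how small its
# probability is - which is exactly Bostrom's point about infinite value.
```

Shrink that 1e-8 probability as far as you like; as long as it stays above zero, the product is still infinite, and common sense criteria stop letting us ignore it.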
It’s worth looking in some detail at this model of risk, starting with his 2002 paper[3]. The image below outlines Bostrom’s attempt to distinguish between six ‘qualitatively different’ types of risk.
Image adapted from Bostrom’s 2002 paper.
The grid is relatively simple. Bostrom uses scope and intensity to differentiate types of risk[4]. Scope is essentially the same as “scale”. Intensity describes how severe the outcome is – how survivable, or reversible. A personal risk that is endurable is something like your car getting stolen; a personal risk that is terminal is that stolen car driving into your face at 100km/h. “Local” essentially means large-scale but not global: a genocide in a single country is a local terminal risk.
Importantly, these are all well-known and familiar risks – things we have dealt with before. That is not to say we are necessarily prepared for them, but they are known risks. What’s new is the global, terminal risk: the spot marked X. A global-scale, terminal risk (sometimes called an “X-Risk”) is a special type, one Bostrom labels “existential”. He defines it in the following way:
Existential Risks – One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.[5]
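For the programmatically inclined, the grid can be captured in a few lines. This is just my own sketch of the scope × intensity classification, using the essay’s example risks rather than Bostrom’s:

```python
from enum import Enum

# A sketch of Bostrom's 2002 grid: scope x intensity, six cells in all,
# with the global + terminal cell singled out as "existential".

class Scope(Enum):
    PERSONAL = "personal"
    LOCAL = "local"
    GLOBAL = "global"

class Intensity(Enum):
    ENDURABLE = "endurable"
    TERMINAL = "terminal"

def classify(scope: Scope, intensity: Intensity) -> str:
    """Return the qualitative risk type for a given cell of the grid."""
    if scope is Scope.GLOBAL and intensity is Intensity.TERMINAL:
        return "existential - the spot marked X"
    return f"{scope.value}, {intensity.value} risk"

print(classify(Scope.PERSONAL, Intensity.ENDURABLE))  # your car gets stolen
print(classify(Scope.LOCAL, Intensity.TERMINAL))      # a genocide in one country
print(classify(Scope.GLOBAL, Intensity.TERMINAL))     # the new, existential category
```

Five of the six cells describe risks humanity has already lived through; only that last call returns something we have never faced and cannot rehearse.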
As Bostrom argues: ‘risks in this sixth category are a recent phenomenon. This is part of the reason why it is useful to distinguish them from other risks. We have not evolved mechanisms, either biologically or culturally, for managing such risks’[6]. Evolving and developing these mechanisms is no easy task. Why? Because there is no place for the trial and error approach we typically use. We cannot learn from a mistake when its consequences are fatal. Nobody will be left to draw any lessons from it, nor have a world to apply that lesson to.
Our approach is therefore inherently and unavoidably speculative. We are trying to build our capacities for accurate foresight. We are trying to cultivate and encourage the imagination of strange futures. We do this so that we can better anticipate an unknowable future.
Bostrom makes this point in a broader sense too, arguing that the ‘institutions, moral norms, social attitudes or national security policies that developed from our experience with managing other sorts of risks’ may be less useful in dealing with existential risks, which he describes as a ‘different type of beast’[7]. Arguably, some of the best work on existential risk comes from non-traditional institutions and think tanks – groups outside the mainstream. In a somewhat paradoxical sense, they must remain on that fringe; it’s easier to think outside the box when you already live outside of it. Even so, I feel we need to begin paying closer attention to these kinds of institutions and their bodies of work, even if they can seem esoteric or alarmist at times.
Illustrating this forward-looking approach are outfits like the previously mentioned Global Challenges Foundation, as well as their collaborators at the Future of Humanity Institute, which focuses on AI development and other so-called “exotic” threats, like the risks of molecular nanotechnology. The similarly named Future of Life Institute is yet another think tank devoted to existential risk, focusing (again) on the dangers of unchecked AI development. There are many such groups, and while well-funded and influential, they enjoy nothing like the UN’s mainstream stature.
These kinds of groups are newer, and sometimes explore areas well outside of the typical fare of the UN and the frameworks it develops. They exemplify what Bostrom means when he says that institutions experienced with past threats may be less useful in dealing with future ones.
In the future, I hope to look more closely at groups like these; to reflect on the threats they identify as important, to investigate what kinds of thoughts drove them to these conclusions, and to look more pragmatically at anything resembling a “triaged list” they might have developed. A meta-analysis and synthesis of their work would be a good step in building a risk-based model that can enjoy some consensus and attention.
I’ve offered glimpses of this landscape, but honestly only that – glimpses. There is a wealth of work and good ideas here that deserves greater attention from the media, from academia, from policymakers, and from the public. It would be helpful to consider these groups’ methodologies too – the frameworks and approaches they use that might be of value in the broader project of creating a triaged list of existential risks. For now, I’ve highlighted just a few notable outfits and thinkers, and some of their most important ideas.
Footnotes
[1] Bostrom, N. (2002). Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards. Journal of Evolution and Technology, 9(1).
[2] Bostrom, N. (2011). Infinite Ethics. Analysis and Metaphysics, 10, 9-59.
[3] Bostrom, N. (2002). Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards. Journal of Evolution and Technology, 9(1).
[4] A third dimension, probability, is also important – especially for any kind of triage to occur. This was evident in the GCF report’s own model. Bostrom points out that probability can be “superimposed” onto the matrix he developed. So again, this earlier work seems to align with the later reports coming out of collaborations like the one with the GCF.
[5] You might recall an earlier critique of our definition of sustainability using the word “persist” in “persist over time”, since it doesn’t capture the idea of human “flourishing”. Here, I think, Bostrom captures that idea better! A drastic curtailing of our potential is essentially the antithesis of human flourishing, so avoiding it makes flourishing possible, even more probable. This section not only latches on to Bostrom’s idea of going beyond annihilation as a concern, but tries to address the idea of flourishing, and of maximizing human potential.
[6] Bostrom, N. (2002). Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards. Journal of Evolution and Technology, 9(1).
[7] Bostrom, N. (2002). Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards. Journal of Evolution and Technology, 9(1).