In the south of France lies Chauvet Cave. This subterranean museum contains some of the oldest and best-preserved paintings in the world, offering us a glimpse of life through an incomprehensible abyss of time, to some 30,000 years ago.
The world the paintings depict seems unreal and fantastical: bears and antelope and bison and horses and bulls and rhinos and on the paintings go. Back then, we lived in a much colder and drier place but the sun still shone, so there was still life in abundance and – as the paintings show – incredible diversity.
This art still tells a story. Not only of then, but of now, and of the passage of time in between. A story of changing climates. A story about loss of diversity. What I learned from Chauvet Cave was another story too: one about colonisation and imperialism. A story that questioned the idea of “sustainability” as I understood it.
And I thought I understood it well. I am studying that very subject in detail at my university. But even as a well-versed student, fully immersed in the field, my virtual wandering via online research and YouTube documentaries revealed a huge gap in my knowledge.
So, there was a moment. Something I saw that changed me. Not inspiration, but realization. It flashed across my mind, connecting a thousand different thoughts, and asking a thousand difficult questions, inviting reflection on things I’d come to hold close. Things I’d believed in.
That’s the story I want to share – now that finally, I might have found a place to speak it, where others might hear.
It starts with Werner Herzog’s documentary “Cave of Forgotten Dreams”, an utterly enthralling exploration of this place that I recommend diving into if you have the time. The on-site filming beautifully captures not only the art, but the natural artistry that frames it all. The cave itself is a thing of wonder: everything is crystalline from the slow accumulation of calcite, so the walls, stalagmites, and other features of the cave all sparkle in the harsh light of the cameras.
The meticulously preserved grounds of the cave are littered with the bones of many animals, and they too are covered in a mineral snow that glimmers strangely. The camera lingers long enough on these scenes – away from the paintings – to encourage an appreciation of an even greater artist at work here. Quietly and out of view, this artist etched their own stories over the intervening millennia between human visits to this hidden gallery; one that I would argue rivals the Louvre in importance.
I say that because of two paintings there and the story they tell about an entirely different way of life that existed before colonial times. An awe-inspiring culture quite different from ours. The image is of two bulls that look identical, as if painted by the same artist, or around the same time period. Here is Werner, from the documentary, explaining what you see:
‘…there are figures of animals overlapping with each other. A striking point here is that in cases like this, after carbon dating, there are strong indications that some overlapping figures were drawn almost 5,000 years apart. The sequence and duration of time is unimaginable for us today. We are locked in history, and they were not.’
Werner Herzog, Cave of Forgotten Dreams
It’s hard to describe what those words and the art itself evoke, because it’s hard to wrap one’s head around this idea. Is it possible that life was so consistent, so continual, that for five thousand years not much changed at all? Is that what the paintings are saying? The questions alone invite a wholly different way of thinking about sustainability from the one I feel I’ve learned about so far. But surely this is one of the most profound examples one can see of sustainability, no?
Two near-identical pieces of art, overlapping, separated by five thousand years. A statement of cultural continuity spanning a frame of time we today – advanced as we consider our culture – would struggle to imagine.
If that’s a statement, it’s one hell of a statement!
From the perspective of this boringly typical member of a Western culture that is struggling to survive another year – let alone five thousand – this painting is fucking startling. Better yet, keeping in mind my ancestors once called themselves Settlers, I could describe it as unsettling.
Unmoored from the perspective of a civilization that appears all too fragile, verging on catastrophic, we can see another way of life that extended over timespans that feel impossible to us with all of these modern problems we’ve created for ourselves.
The writer and engineer Nick Arvin, whose blog post inspired me to watch the documentary, describes it beautifully:
‘They have been painted in identical style and appear as if they might have been painted by the same artist. But carbon dating has shown that they were created 5,000 years apart. From a modern perspective where paintings styles go from Modern to Postmodern in 50 years, this is difficult to grok. Herzog, in voiceover, suggests that the cave paintings show a people who lived “outside of history,” oblivious to the requirements of constant progress that drive modern civilization.’
Nick Arvin, Reading Journal: Waiting for the Barbarians, by J.M. Coetzee
To help us wrap our heads around this idea, Arvin then points to another rabbit hole: a novel called Waiting for the Barbarians, by J.M. Coetzee, who approaches the same idea from the perspective of the colonizing force. The book’s narrator is the magistrate of a frontier town in some unknown “Empire” that serves to represent imperialism more generally. Beyond the frontiers, the native people, known as Barbarians, exist in harmony with the land, as did the people who once decorated Chauvet Cave. Coetzee sums up the different worldview of imperialism, contrasting it against Chauvet’s “Two Bulls”, in this way:
Empire has created the time of history. Empire has located its existence not in the smooth recurrent spinning time of the cycle of the seasons but in the jagged time of rise and fall, of beginning and end, of catastrophe. Empire dooms itself to live in history and plot against history. One thought alone preoccupies the submerged mind of Empire: how not to end, how not to die, how to prolong its era.
J.M. Coetzee, Waiting for the Barbarians
These two different conceptualizations of time speak to an insurmountable incongruity between cultures. The “smooth recurrent spinning time of the cycle of the seasons” is contrasted against the “jagged time of rise and fall”. Coetzee’s gorgeously dense imagery transports a litany of ideas but one here rings loudest: the grounding of one’s self in the environment – the cycle of the seasons – the cyclical nature of life and death, set against the refusal to die. A belief in a self that is separated from nature, and thus, can conquer nature and its cycles. The “jagged time of rise and fall” – what we colonialists call history. Call progress. Call success. Call utopia.
Empire’s “submerged mind” has overlooked some things. We can sense it now, in the Apeilicene, as even the things we clutch for in our dreams turn to ash. Turn against us. Turn us against ourselves, and each other. “Save us from what we want”.
As the documentary later describes, these paintings were drawn by Homo sapiens, in a time and space they shared with other human species like Neanderthals. The art, it is claimed, was a uniquely human endeavour; not something Neanderthals engaged in. That tells me that even back then, we must have realized (maybe even quite keenly felt) that we were somehow different from our fellow animals – even ones very like us.
And despite this, or perhaps because of it, these people managed to live for thousands of years in harmony with everything else. Bison and bulls and bears.
Now, we see ourselves as fundamentally different and disconnected from nature – an idea that permeates our language, our thought, and our actions.
Stepping away once again from the cave art, we have to appreciate the even greater stories that this landscape tells us, and the questions it makes us ask. In one area of the cave floor there are two footprints: one belonging to a young boy, and another, to a wolf. What could these footprints, etched in calcite and the hardening of time, possibly tell us? Herzog plays out the scenarios: Was the boy being stalked by the wolf? Or were the two perhaps walking together? Perhaps instead, the two imprints – boy, and wolf – are separated by thousands of years?
We cannot know. Nature will not let us know.
She has her secrets, and this, we must respect.
In a sense, it’s easy to understand colonialism, imperialism, and colonisation at a kind of “surface” academic level because they are just ideas with characteristics and features. Ideas like any other. But when people encourage others to “decolonize” their understanding of something, it feels to me like they’re often talking about something else too; something that goes beyond just learning about a new idea and its characteristics. Part of that feels like it’s experiential; that learning about this stuff involves doing and being a part of something. Part of that feels like a radical questioning, where “de-colonizing” might resemble “de-programming”. Not just thinking about things differently, but doing things differently too. Embracing that knowledge over time. Recognizing that we cannot always find meaning in things, that we cannot know all. Camus might smile at that.
It might seem in other discussions like I’ve painted the idea of all-consuming virtual worlds as something largely negative – not just as a distraction to everyday life, but on a broader scale, as a pitfall along the way to development, or a detour that civilizations might get lost in. It’s possible, however, that virtual worlds offer some promising upsides for sustainability. These are not that obvious right now, but there are examples out there worth examining that demonstrate what those positives might look like.
Virtual worlds and their potential for sustainable consumption
Compare the ecological footprint of someone who lives their life entirely in the real world, and someone who spends a great deal of their life in virtual environments. Both will need to eat real food, have real shelter, and so on. In some areas though, the VR person’s impact might be dramatically reduced. It’s in those moments of everyday consumption – driven less by biological needs, and more by psychology – that someone who spends their time (and money) in a virtual environment might really shine.
We often buy things for status. The clothes we wear, the cars we drive, even the bottles of water we drink out of – for many of us, the items we consume help signify and shape our identity. Since this consumer culture is often the driver of many sustainability challenges, it’s worth considering how its impacts can be blunted in certain environments[1], and virtual environments seem to offer genuine promise here.
Let’s use shoes as an example. You buy shoes for many reasons. One reason is unavoidably “real world”: you use them to cover your feet. But other reasons, like buying them for status, don’t necessarily have to happen in the “real world” to effectively scratch that consumer itch of yours. And if you buy shoes in the virtual world instead of the real, the ecological footprint is reduced effectively to just one, relatively cheap, thing – to the energy required to power the simulation[2].
With the caveat that not all consumption can be virtualized, take a moment to appreciate what’s on offer here: reducing large sections of consumerism to what can become, over time, a single natural resource draw – to energy. Consider the entire life cycle of a real-world shoe; the natural resource extraction and refinement, the distribution and logistics, the post-life disposal and waste. Consider too, just how many different computers running in mines, warehouses, distribution centres, and retail outlets it takes to get a Nike shoe from the Earth into a Foot Locker store – maybe as many as it takes to get a shoe from someone’s imagination into the pixels on your screen, maybe even more.
All of that real world, socially-driven consumption around a shoe involves a wealth of different resources, and yet almost all of that in a virtualized environment is replaced with just a demand for one thing: energy.
The one resource, perhaps above all others, that the universe offers in abundance.
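To make the shape of that argument concrete, here is a minimal sketch in Python. Every number in it is a hypothetical placeholder rather than measured lifecycle data; the point is only how many distinct resource draws collapse into a single one (energy) when a status purchase is virtualized.

```python
# A purely illustrative sketch of the argument above. The figures are
# hypothetical placeholders (NOT measured lifecycle data), chosen only to show
# how many separate resource draws collapse into one when a status purchase
# moves from the real world into a virtual one.

real_shoe_footprint = {
    "material_extraction_kgCO2e": 5.0,     # hypothetical
    "manufacturing_kgCO2e": 6.0,           # hypothetical
    "distribution_logistics_kgCO2e": 2.0,  # hypothetical
    "retail_operations_kgCO2e": 1.0,       # hypothetical
    "end_of_life_disposal_kgCO2e": 0.5,    # hypothetical
}

virtual_shoe_footprint = {
    # Essentially one draw: the electricity to run the servers and the client
    # device for the purchase and display of the item (again, a placeholder).
    "electricity_kgCO2e": 0.05,
}

def total(footprint: dict) -> float:
    """Sum the resource draws in a footprint dictionary."""
    return sum(footprint.values())

if __name__ == "__main__":
    print(f"Real shoe: {len(real_shoe_footprint)} distinct draws, "
          f"~{total(real_shoe_footprint):.2f} kgCO2e (illustrative)")
    print(f"Virtual shoe: {len(virtual_shoe_footprint)} distinct draw, "
          f"~{total(virtual_shoe_footprint):.2f} kgCO2e (illustrative)")
```

Whatever the real figures turn out to be, the structural point stands: the real-world item draws on many resource streams at once, while the virtual item draws, almost entirely, on one.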
Hopefully you’re seeing the undercurrents of the idea here. I’m not just talking about buying virtual sneakers, but about civilizational development trajectories: if you wanted to make your civilization more sustainable then surely one excellent path towards achieving that involves limiting, to the maximum extent possible, your physical presence and through that your physical draw on the world’s resources. It only makes sense, then, to think that virtualization (and other forms of “dematerialization”) offer a promising way forward here.
The future of sustainable consumption is…$60 monocles?!
Image courtesy of CCP Games
The dashing gentleman above is an example of what a player’s character might look like in the game I used to work on, EVE Online. For many years in the game, these players’ characters were represented mostly just by a single portrait that players could customize, posing their avatar in various clothes, lighting, hair styles, and so on. The game was about spaceships, so you mostly just stared at whatever ship your character was flying in space, rather than their body. But eventually, the game was updated to include 3D environments that players could walk around in. And with that came the opportunity for the studio making the game to sell virtual clothes. Among the first release of items was that monocle you can see our chap above sporting. Yep, we tried to sell a monocle to our players. We also charged $60 USD for it.
I was at the company when this all unfolded. It hurt us significantly, and part of me resented the idea that we would sell virtual goods at all, let alone for such outrageous prices. Ironically, part of why I left this industry was because I wanted to study sustainability and do some good instead. All these years later, my feelings are far more mixed. I can see how, in principle, something like this can offer surprising, and surprisingly large, gains in this new area I’ve shifted focus to.
Returning to the monocle, or “monocle-gate” as it was insufferably labelled[3]: all of this happened essentially a lifetime ago in games industry terms, around 2011. Since then, the purchase of in-game “cosmetic” items has become far more accepted, and far more commonplace. I think it would spin people’s heads to know just how much this industry of virtual goods has grown in such a short time.
To illustrate this, I’ll start by comparing two examples of in-game transactions (often dubbed “microtransactions” because they’re usually only small amounts of money[4]):
This is one of the first ever microtransactions offered; some armour for your horse in a fantasy game:
I think it looks good, personally. For more information on this moment in gaming history see this article.[5]
Though not quite monocle-gate, it wasn’t received well either. Back then, in 2006, the idea of asking $2.50 for a cosmetic item was new, and, importantly, the game was single-player only: there’s less motivation to make a status-type purchase in a non-social environment.
Here’s the big change, however:
As games have shifted increasingly online, even gameplay experiences that once were typically single-player and non-social have become the opposite. Just five years ago, if you played the biggest basketball video game out there, NBA2K, you’d do so largely by yourself or with friends on the couch beside you, or, at most, play some online games with other people.
In the last few years that’s changed, and now this game, and many others like it, are becoming more like the MMO genre – a persistent, always-online world. Now, in these basketball games, you have your own apartment, and you occupy a neighbourhood shared with other players. Naturally, this means there are shops too. Because the game world is now social, those items you buy can be shown off to other people as you walk the neighbourhood (or play on court). Status purchasing makes more sense, and I believe this is key to why we’ve seen this change.
This is the second example of microtransactions, and it shows just how detailed, embedded, and mainstream this has now become. So, to illustrate this, let me take you on a quick tour around the neighbourhood in one of the latest versions of that basketball game, NBA2K.
There’s a barber where you can drop in for a haircut change, which you can pay for with real money, of course:
There are countless clothing stores too, for basically any style. Inside are genuine brands, and a strange new grey area is created here. It’s somehow more real when they’re officially branded Levi’s jeans. They may not be “real” but they are certainly “authentic” or “genuine”, and this surely helps blur the lines between real and virtual even further.
There’s advertising everywhere, too. The billboard above that store is advertising another in-game item, also potentially purchasable for real money.
Speaking of branded goods, why not stop by JBL and get yourself some dope headphones to walk around the neighbourhood in?
And, of course, there’s a Foot Locker with all the big shoe brands you’d expect. I wasn’t using shoes earlier as an example by accident.
People drop into this virtual Foot Locker here and spend virtual currency on virtual shoes for reasons like status and prestige.
To be clear, the virtual currency (VC) most things are sold for can be “earned” in-game by playing. But because there is so much money to be made here, the game is increasingly designed to be less rewarding in that regard, and through that, to encourage players to reach into their wallets, just as they would in a real Foot Locker store.
The point being made here is that we’ve come a long way: from $2.50 horse armour developed in-house, to officially branded Foot Locker stores slinging virtual Nikes in a fully licensed sports game franchise. For the publisher of the NBA2K games, virtual goods are now a huge part of the revenue model and have proven highly successful. Oddly, this success comes despite often significant consumer backlash. Games journalist Luke Plunkett captures the broader consumer sentiment in a scathing article about NBA2K’s 2019 release, and about the games industry’s use of microtransactions more broadly[6]:
2K19 is like a free-to-play mobile game, a predatory experience where the game is always shaking you down for your lunch money, even after you’ve already given it $US50 ($70). To play 2K19 is to be in a constant state of denial and refusal, always aware that in every aspect of the game, from the gyms to the stores to the action on the court itself, you can either spend VC [virtual currency – the game’s money] or be told that you’re missing out on something.
There may remain a vocal portion of the player base and industry commentators loudly protesting virtual goods sales, but the overwhelming majority seem to have spoken with their wallets. Below is the game publisher, Take Two Interactive, reporting the sales figures for 2019:
Net revenue grew to $1.249 billion, as compared to $480.8 million in last year’s [2018] fiscal third quarter. Recurrent consumer spending (virtual currency, add-on content and in-game purchases, including the allocated value of virtual currency and add-on content included in special editions of certain games) increased and accounted for 24% of total net revenue. The largest contributors to net revenue in fiscal third quarter 2019 were Red Dead Redemption 2, NBA 2K19 and NBA 2K18.[7]
Almost a quarter of all revenue, and figures in the hundreds of millions, driven largely by the sale of two basketball games, and chiefly, the virtual goods sales that happen within them.
We’ve come a long way from horse armour, indeed! The question now is, where might this trend take us?
Breaking down what this all means:
Shifting consumer culture towards virtual worlds might seem like a ludicrous concept, but what I’m trying to show here is that it’s already happening. On a big scale too. One of the most popular games right now is Fortnite, and it’s made over a billion USD in 2018-2019 from purely cosmetic item sales – ones that don’t affect gameplay advantage[8]. In other words, a billion dollars that might have been spent on real world items for similar reasons, has instead been spent on pixels, which largely only needed energy to produce[9]. Big players like Levi’s, Nike, Foot Locker, and big revenue figures mean the industry is valued somewhere around 15 billion USD[10], a huge figure for something not many people talk about at all, let alone in sustainability terms.
There’s a great deal more that can be said about the virtual economy too. It brings many co-benefits, like offering new types of consumer empowerment and control over the goods they purchase, and even allowing them to be producers (owners of the means of production?!) themselves, as creators of in-game content, or people who can monetize their in-game prestige or fame for real world wealth[11].
Virtual goods economies also have a long and proud tradition of supporting social goods. For decades now, sales of virtual goods have been used to donate to charities, or to fund other social enterprises. The largest MMO-type games, like EVE Online and World of Warcraft, run regular, highly successful fundraisers, providing millions of dollars in assistance. The example here is an in-game pet from the previously mentioned World of Warcraft – a Cinder Kitten!
Sales proceeds of this fiery furball raised over $2 million for relief efforts following Superstorm Sandy.
Levelling up: Games as a competitive sport (esports)
“Esports”, short for “electronic sports”, is professional competitive gaming, in case you haven’t heard of it. In relation to virtual goods, a more recent twist is that proceeds from virtual goods sales can now also be pooled to serve as prize money for esports athletes in major gaming tournaments. These prize pools have ballooned from the hundreds of thousands to the tens of millions in the last five years, driven largely by virtual goods sales. This has, in turn, helped further professionalize the athletes competing, and grown the legitimacy of esports. These in-game tournaments, and esports more generally, are now economies in their own right, involving broadcasters, analysts, and announcers, along with sponsors for the shows, and sponsors and endorsements for the individual teams and players. To put it all in perspective, an esports athlete coming to the US for a tournament can sometimes file for the same visa used by other professional athletes – a golf or tennis star, for example.
Madison Square Garden, New York, sold out on consecutive nights hosting one of the largest annual esports tournaments.
‘When you fill up “The World’s Most Famous Arena”—home to the New York Knicks, Rangers, and the “Fight of Century” between Ali and Frazier—and you do it on consecutive nights, you send notice that you’re to be taken seriously.’[12]
Clearly there is a culture growing here too, not just an economy. Gaming personalities, analysts, athletes, and journalists are all earning money and creating economic value, but they’re also shaping a new culture that draws ever more people in. This culture drives dozens of different economies today while just in its infancy. It will surely drive many more as it develops. Virtual worlds and virtual goods are part of this broader movement, and they’re likely only going to increase in economic and cultural value into the future.
Increasingly, there is a blurring of the line between real and virtual worlds; the status we achieve in them, the sense of belonging and accomplishment they provide, and the wealth, even, that we achieve in them.
Perhaps it was something like this that drove a company like Facebook in 2014 to purchase Oculus VR, maker of the Oculus Rift virtual reality headset, for a then head-scratching 2 billion USD[13]. Many couldn’t understand why the social network giant wanted to get in on the virtual worlds industry, and why it was willing to pay so much for one of the earlier headset technologies that showed promise and potential for broad adoption.
A lot of commentators seemed to focus on everything I have covered so far: on gaming and the virtual economies around it. What’s interesting to consider are the broader applications of virtual environments and the economies that could spring up around them. As just one powerful example, imagine being able to buy cheap front-row tickets to your favourite sporting team, musician, comedian, or whatever else, using VR. Facebook may have lofty ambitions for virtual worlds beyond games, while hoping for a similar ability to draw people in; to create cultures, and of course, economic value.
Virtualization offers attractive incentives to many companies. It’s often far cheaper to provide a virtual product than a real world one, and although price points are significantly lower in virtual environments, the overall margins are far larger (hence it being so insanely lucrative for the companies that have done well on virtual goods sales). The potential here for new economies seems to catch the eye of business, but the potential gains in resource reduction should equally catch the eye of sustainability practitioners and advocates. Perhaps we should be having more discussions about virtualization, virtual goods, and how to combine civic advocacy, business innovation, and government policy to encourage reductions in natural resource draws.
Interestingly though, this idea of increasingly pervasive virtualization of goods will battle both indifference and ignorance from people unaware of the virtual goods industry already out there, and it will also meet some hostility from gamers and game analysts, who are often in conflict with game studios over virtual goods sales. Virtual goods sales have at times been deeply predatory and problematic, as Luke Plunkett’s earlier article demonstrated.
Another recent example is the controversy surrounding “loot boxes”: randomly generated bundles of items that players can spend currency on (usually real-world money). Because the items in the box are randomized, there is a chance each time of a good or bad item. They operate similarly to poker machines, and with similar odds for great “payouts”. What we have here, then, is huge gaming companies employing psychologically manipulative practices to lure gamers, often children, into gambling real money for virtual items.
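A small simulation makes the comparison to gambling machines clearer. The box price and drop rate below are hypothetical placeholders, not figures from any real game; the point is that the expected spend to obtain one desirable item is many multiples of a single purchase, with no upper bound for an unlucky (or compulsive) player.

```python
import random

# A minimal sketch of why loot boxes resemble gambling machines.
# BOX_PRICE_USD and RARE_DROP_RATE are hypothetical placeholders.
BOX_PRICE_USD = 3.0     # hypothetical price per box
RARE_DROP_RATE = 0.01   # hypothetical 1% chance of the desired rare item

def boxes_until_rare(rng: random.Random) -> int:
    """Open boxes until the rare item drops; return how many it took."""
    opened = 0
    while True:
        opened += 1
        if rng.random() < RARE_DROP_RATE:
            return opened

def average_spend(trials: int = 20_000, seed: int = 1) -> float:
    """Estimate the average real-money spend to obtain one rare item."""
    rng = random.Random(seed)
    total_boxes = sum(boxes_until_rare(rng) for _ in range(trials))
    return (total_boxes / trials) * BOX_PRICE_USD

if __name__ == "__main__":
    # With a 1% drop rate the expected number of boxes is 100, i.e. roughly
    # $300 on average, but individual players can spend far more before "winning".
    print(f"Average spend to get the rare item: ${average_spend():.2f}")
```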
Even if it may offer some promise for more sustainable consumption, the road towards virtualization quite clearly already has its own pitfalls. These flashpoints of controversy all have the familiar whiff of the hyper-capitalist greed we’d find most other places, whether trading oil commodities or offering real estate loans. As discouraging as that is, it’s also an indicator of just how real, and how high-stakes, this world and the industry around it is becoming.
When the sharks have begun circling, you can be confident there’s something meaty there.
The Transcension Hypothesis, Bucky Fuller, and the Sleepers
It’s not entirely correct of me to say earlier that the Fermi Paradox assumes that advanced civilizations would be motivated to colonize space. This is because the Fermi paradox is something of a living idea; one that is updated as new critiques are made. This idea I’ve focused on – of advanced civilizations essentially leaving our universe and disappearing into a virtual world within – has in fact been added to the theory (as officially as possible). Stephen Webb’s compendium of solutions to the Fermi paradox, If the Universe is Teeming With Aliens … Where is Everybody?[14], now references an idea like the one I’m discussing alongside 74 other possible explanations.
This is thanks to the paper by John Smart[15] outlining an idea called the “Transcension Hypothesis”. Smart and I aren’t quite speaking the same language, but we’re both playing the same game here, pun intended.
There are a few key pieces of source material worth quoting at length here, before I return to assemble them all into something understandable, and something I have some experience with myself.
Firstly, what is Smart’s “Transcension Hypothesis” about? The clearest and most concise summary, sadly, isn’t in the abstract of the paper itself, which is quite dense. Instead, I prefer this summary from H+Pedia – essentially a transhumanist version of Wikipedia, and interesting in its own right.
In the “developmental singularity hypothesis”, also called the transcension hypothesis, Smart proposes that STEM compression, as a driver of accelerating change, must lead cosmic intelligence to a future of highly miniaturized, accelerated, and local “transcension” to extra-universal domains, rather than to space-faring expansion within our existing universe. The hypothesis proposes that once civilizations saturate their local region of space with their intelligence, they need to leave our visible, macroscopic universe in order to continue exponential growth of complexity and intelligence, and disappear from this universe, thus explaining the Fermi paradox.[16]
One key idea here is that long-term sustainability might be cosmologically quiet – hiding its secrets from us.
The idea here, as you can hopefully see, echoes my own about civilization development and virtualization. As I said earlier, if you want to optimize your ecological footprint, you will minimize your resource consumption. Smart’s paper echoes this with ideas about “highly miniaturized, accelerated, and local “transcension” to extra-universal domains, rather than to space-faring expansion within our existing universe”.
Think about this in terms of science fiction clichés. It’s extremely common to see future versions of humanity where we’ve colonized other planets, spreading out like locusts to consume, extract, and expand (I feel like sometimes in these movies we are the baddies and we don’t even know it). Less common than this story, historically, is the one where future versions of humanity instead colonize virtual worlds, expanding inwards, as paradoxical as that sounds. If they were to take this route, then there wouldn’t be as much going on in the physical world, perhaps. An interesting idea to ponder, how those two “worlds” might co-exist – but the important point for now, in terms of the Fermi Paradox is that a civilization that develops into “inner space” won’t leave much of a footprint in “outer space”.
Perhaps achieving sustainability means we won’t leave a large footprint to be noticed for other civilizations. If advanced civilizations optimize their existence to vastly minimize their environmental impact, and thereby perhaps, their visible presence, this could help explain the Fermi paradox in a way that’s maybe even uplifting (success is out there, it’s just quiet).
To elaborate on this idea, I’ll include a portion of the abstract from Smart’s paper, worth quoting at length so as to capture the general “feel” of the idea, at least.
The emerging science of evolutionary developmental (“evo devo”) biology can aid us in thinking about our universe as both an evolutionary system, where most processes are unpredictable and creative, and a developmental system, where a special few processes are predictable and constrained to produce far-future-specific emergent order, just as we see in the common developmental processes in two stars of an identical population type, or in two genetically identical twins in biology.
The transcension hypothesis proposes that a universal process of evolutionary development guides all sufficiently advanced civilizations into what may be called “inner space,” a computationally optimal domain of increasingly dense, productive, miniaturized, and efficient scales of space, time, energy, and matter, and eventually, to a black-hole-like destination. Transcension as a developmental destiny might also contribute to the solution to the Fermi paradox, the question of why we have not seen evidence of or received beacons from intelligent civilizations.[17]
There’s a whole lot to process there. In short, Smart is arguing a few key points:
The science of evolutionary development presents two ideas about evolution, one as a chaotic and unpredictable system, and another where a “special few processes” are predictable, almost inevitable. In this hypothesis, on a civilizational level, these processes are “constrained to produce a far-future-specific emergent order”. In other words, to produce predictable outcomes.
This theory argues that every advanced civilization will encounter a situation that represents that second type of evolutionary development – a process that is predictable. This is because, on a long enough timescale, they will desire a development path that shifts from physical space, towards “inner space” (maybe virtualized space?), a “computationally optimal domain of increasingly dense, productive, miniaturized, and efficient scales of space, time, energy, and matter”.
This shift towards an increasingly smaller physical profile, towards “inner space”, may represent a solution to the Fermi Paradox. Civilizations that achieved advanced levels of sustainability would not, according to this theory, leave a significant physical footprint, and would thus be difficult to discover.
This idea of Smart’s is echoed in a far earlier work, from the great thinker and futurist Buckminster Fuller. Bucky talked about this same idea in terms of “ephemeralization” which is explained nicely below:
Ephemeralization is the ability of technological advancement to do “more and more with less and less until eventually you can do everything with nothing,” that is, an accelerating increase in the efficiency of achieving the same or more output (products, services, information, etc.) while requiring less input (effort, time, resources, etc.). Fuller’s vision was that ephemeralization will result in ever-increasing standards of living for an ever-growing population despite finite resources. From Wikipedia
Sounds quite sustainability-related huh? Sounds quite a bit like this transcension hypothesis, too. And these ideas I have of my own about the potential importance of virtual worlds to sustainability, well it all intermingles quite nicely indeed, if I can say so, while adjusting my $60 virtual monocle.
…and the Sleepers?
Right, so… yes, a little confession is perhaps in order. Like I said, I used to work on this game EVE Online, and I was a writer there. It was a wonderful environment in that sense, because the game embraced concepts of transhumanism and other philosophically rich domains quite openly. Fertile grounds for a writer with an interest in these topics like myself.
At one point, the game was releasing a large expansion to its content that would feature a new alien race. My role as a writer was to help shape the story of that race. My brief was more or less that they should be “super advanced and ahead of our time”.
So, the story I created about them was almost literally everything you’ve just read.
It’s so weird. I’m trying to write a serious paper about sustainability, and this part of my past life keeps recurring, haunting me like a ghost. Something I wrote as fiction is now something I feel a need to talk about as a potential reality.
This race, the “Sleepers” as they were known, had disappeared into a virtual world, into “inner space”, just like Smart describes. A huge part of their advanced technology was derived from fullerenes, a type of carbon molecule named after that same “Bucky” fellow, my way of giving a nod to his own ideas of ephemeralization. And I was there in that world, that virtual world, as a real person working in a virtual world, roleplaying a real person who was investigating this alien race that had disappeared into a virtual world and… the lines blur.
It was only years later, reading Smart’s paper, that I realized something I wrote about as fiction might be an idea of genuine interest to serious minds.
Image courtesy of CCP Games. Art by me 😉
The image above hints at what the structures housing the Sleeper civilization look like. It was released alongside a short story I wrote, intended to conjure a feeling of detachment. The cosmos surrounding the facility glows red with life, while the facility housing the Sleepers – vanished to their VR world within, if they even remain – languishes in static black-and-white monochrome.
We made sure (at least at the time I was there) that players had no direct interactions with the Sleepers. All that remained were these silent, enigmatic buildings, and the fearsomely powerful worker drones that sustained the colony’s physical needs. There was no “big bad villain” in this expansion of the game, which certainly bucked the trend of how most games tell their stories. But with the right nudges here and there, the players found an enjoyable mystery.
In this case, players could only pillage these places for scraps of understanding; they could only scratch at the surface of greatness as they pilfered the Sleepers’ sites for breadcrumbs. I was trying to make the experience as close as I could to what it might really be like to encounter a civilization that exemplified the transcension hypothesis. I was doing it before I had even heard of that term or read that paper. I was doing it, as an example of the paper’s thesis.
[1] Of course, it’s worth considering dismantling it too, but those are well-explored paths. This isn’t an argument I’ve heard others make, so it’s my focus right here. We have a million eyeballs on that problem, you don’t get much more from a million-and-one, and certainly not at the cost of us not exploring this a little bit further, right? Great.
[2] There is an exception here, of course, since computers require a lot more than just energy, but we’ll shelve that consideration momentarily.
[3] Kuchera, B. (2018, July 4). Leaks, riots, and monocles: How a $60 in-game item almost destroyed EVE Online. Ars Technica.
[4] Perhaps, given that term, you can see why a $60 item, and a monocle of all bloody things, went down so badly?
[5] Fahey, M. (2016, April 4). Never Forget Your Horse Armour. Kotaku.
[6] Plunkett, L. (2018, September 12). NBA 2K19 Is A Nightmarish Vision Of Our Microtransaction-Stuffed Future. Kotaku.
[8] Fagan, K. (2018, July 20). Fortnite — a free video game — is a billion-dollar money machine. Business Insider.
[9] More thoroughly, there are the indirect resource requirements of the people needed to develop those pixels, so there is a broader resource drain still, it should be noted.
[10] Bonder, A. (2016, December 25). 5 lessons from the $15 billion virtual goods economy. VentureBeat.
[11] Video game streamers, for example, are massive celebrities. The top streamer, Ninja, amassed a staggering 218 million human hours watched on his channel in 2018. YouTube’s largest star, PewDiePie, while now a media icon in his own right, launched his career in the same way as Ninja, as a video games streamer.
[12] Cunningham, S. (2016, October 27). How Video Gamers Sold Out Madison Square Garden. Inside Hook.
[13] Kovach, S. (2014, March 26). Facebook Is Buying Oculus Rift, The Greatest Leap Forward In Virtual Reality, For $US2 Billion. Business Insider.
[14] Webb, S. (2002). If the Universe Is Teeming with Aliens … WHERE IS EVERYBODY? (2nd ed.). Copernicus.
[15] Smart, J. M. (2012, September). The transcension hypothesis: Sufficiently advanced civilizations invariably leave our universe, and implications for METI and SETI. Acta Astronautica, 78, pp. 55-68.
[16] H+Pedia is a Humanity+ project to spread accurate, accessible, non-sensational information about transhumanism, including radical life extension, futurism and other emerging technologies and their impact to the general public – From their website main page.
[17] Smart, J. M. (2012, September). The transcension hypothesis: Sufficiently advanced civilizations invariably leave our universe, and implications for METI and SETI. Acta Astronautica, 78, pp. 55-68.
Having worked in the video games industry, and now studying sustainability, I feel I have a better perspective than most on the interplay between these topics. It’s a strange combination, I realize, but the overlap between them is surprisingly large, and much of it stems from the unique way in which games are consumed.
I have seen first-hand how all-consuming some video games can be. I don’t just mean addictive either, I mean life-consuming, and even life-replacing. At various moments in my career I would meet people who played the game I worked on. They would become utterly invested in the game, to the point where “playing” hardly captures it. To them, it was a second life, and I could hardly blame them. We wanted this world we created to be exactly that.
For me too, the lines blurred heavily between the game world and reality, and the relationship I had with the game was complex. It was both something I created and something I consumed. Something I lived in and worked in, and worked on. A world I was paid to create, but also got lost in myself – my favourite kind of gameplay was just sitting around “roleplaying”. Interactions with other people inside the world mixed real-world banter with in-character drama and in-world action. At one point, I was running something called “live events”, where I would take control of in-game characters, such as a menacing invasion force that would fight it out against our players. It was an experiment that pushed the edges of narrative – now the characters leapt off the page into a living, breathing world, in a story that would unfold in real time before the eyes of thousands of players.
There’s a reason this has become the world’s biggest entertainment industry. Games can get incredibly deep.
Though games are growing in cultural and economic importance and becoming completely mainstream, it’s still perhaps underappreciated that there are vast, complex, always-online worlds out there, perpetually bustling with thousands of player “inhabitants” – pocket virtual worlds running on a network of computers. At times, the richness of the social, economic and other interactions in these games can rival real-world equivalents[1]. A person may chase fame, wealth, or prestige harder in their video game life than they do in the real world. Perhaps this is especially the case in the “MMO[2]” genre which creates not just “game worlds”, but cities, neighbourhoods, homes, and alternate lives – something persistent and to many of its players, something deeply meaningful.
Image courtesy of Blizzard Entertainment Inc.
This, for example, is Stormwind Keep.
It is the main city for one of the two player factions in the MMO game World of Warcraft. Players can retire to the city after adventures to socialize, trade, and otherwise interact with others.
Driven by a mixture of artistic aspiration and financial motivations, the studios behind these special types of always-online games design them to be as immersive as possible; tempting people to stay longer, to sink deeper in. The game I used to work on, EVE Online, is an MMO like this. It has run for decades now; a living, changing world that players have spent huge parts of their lives inside of.
This game is especially notable because not only does it provide an online world for players to interact in, but it all happens on the one “server” – everyone occupies the same world[3]. The technical wizardry required to achieve this is not insignificant. The studio, for a time, owned and operated the world’s most advanced and expensive single-server computer network. All this just to host a game. The same studio I worked for once (infamously) marketed its game as “more meaningful than real life”. Among other accolades, the game is part of a permanent exhibit at the Museum of Modern Art in New York, featured alongside just 13 others, including immortal, iconic titles like Pac-Man.
Though the moment has long since passed, MoMA continues to indirectly stick it to Roger Ebert, the famed movie critic who once boldly claimed video games could never be art[4]. The installation takes the form of a 4K UHD video that shows what happens on any given day in EVE Online. While a game full of ships flying around a statistically empty universe may not seem like a proper subject for an artistic 4K UHD museum documentary, EVE Online remains what is arguably the best instance of a living virtual world… The games weren’t chosen for being pretty, but rather for being an outstanding example of interactive design.[5]
I strongly believe the implications of all this are underexplored, and yet have potentially profound consequences from a sustainability perspective. To understand why, you must appreciate that these games and the technology behind them are still in their infancy. Despite this, it’s already possible, financially and technologically, to build some seriously impressive virtual worlds – ones that draw in great numbers of people. The longer-term implications of these games will become more obvious as the industries and cultures around them grow, and perhaps, as increasingly large numbers of the human population gravitate towards spending some part of their life inside virtual environments.
The key point here is that MMO studios needed cheap, pervasive, high-speed internet to really shine, so the genre is only a few decades old. Return your mind to the Fermi paradox, however, and we’re talking about civilizations advanced enough to colonize space, rather than ones that recently developed broadband internet. That might mean they also invented some really good games along the way, or more broadly, some really advanced virtual environments. Did that distract them? Is that why we can’t see anybody out there? Is the Great Filter of advanced civilizations that they inevitably become enamoured, or perhaps even lost, in a simulated world?
Footnotes
[1] And in cases where in-game items have corresponding real-world monetary values, it can be real money and not just pixels at stake in those interactions.
[2] MMO stands for “Massively Multiplayer Online”.
[3] The more common alternative is called “instancing” where many copies of the game world are made, and players occupy just one copy at a time. For example, there are multiple “copies” of the city of Stormwind Keep in the MMO game World of Warcraft. In EVE Online, there is just one world that everyone occupies – one Stormwind Keep, effectively.
[4] Watt, M. (2010, April 19). Roger Ebert says video games can never be “art”. Geek.com.
[5] Plafke, J. (2015, May 12). Eve Online’s permanent art exhibit at MoMA can now be viewed online. Geek.com.
Despite dealing with non-traditional topics, most of the content here still stays at least within the orbit of mainstream discourse.
This series on virtualization is a little different.
It is still ultimately grounded in some realities, very much so, but it incorporates other more far-flung ideas. They are nonetheless based on some well-reasoned observations and examinations, of technology especially, but also of human psychology, of economics, and of culture. This kind of approach represents the mixing of disciplines, and the difficulty of discussing just one idea in isolation from other related schools of thought, or within just one scale of time or space.
This section on virtualization embraces thinking outside the mainstream. As I hope to show, however, the ideas that follow are too important for the mainstream to ignore much longer and indeed, the technology that invites this mode of thinking is proceeding at pace regardless.
With that disclaimer out of the way this series is about some key ideas. Firstly, I am looking at sustainability on far longer timescales than usual, extending to cosmological-level timescales (millions, billions, potentially even trillions of years). Interplanetary colonization may be a sustainability issue, after all, but it wouldn’t strike many as a pressing one.
Within that perspective, I explore something called the Fermi Paradox, which asks why we appear to be alone in the cosmos. I provide a potential answer to this question by pointing to the allure (and potential utility) of virtual worlds, and in doing so, hope to make a deeper point about potential civilizational development outcomes that have profound consequences for our perspectives on sustainability.
To put it crudely, I explore the idea that every advanced civilization inevitably ends up playing video games.
The Drawdown[1] project, spearheaded by Paul Hawken, identified a range of potential methods to reverse climate change, and then prioritized them according to certain criteria. Specifically, they were ranked according to emissions reductions and cost. Emissions reduction is the key component of reversing climate change, and so this was considered the critical indicator of a given solution’s potential impact.
The inclusion of costs is intended to act as a proxy for feasibility in general, suggesting that projects with economic gains are arguably more feasible – although ascertaining costs in some areas was too difficult for this first version of the project. Project leader Paul Hawken also stressed that various co-benefits existed with these solutions that went far beyond economic considerations. Empowering women, delivering rooftop solar, and regenerating our natural environments are all examples of ways to achieve emissions reductions that come with other profound benefits. The image below illustrates this idea beautifully:
‘The cartoon seen around the world’.
This famous cartoon by artist Joel Pett went viral before the Copenhagen Climate Change Conference in 2009, helping promote the simple yet powerful idea of “co-benefits”.[2]
Drawdown demonstrates prioritization, but not triage: the focus is on prioritizing solutions by their effectiveness, rather than ranking threats by level of severity. This isn’t to say Drawdown is bad, however. This is not lazy thinking, simply different. Different approaches should be encouraged because each framework lends different strengths. A Drawdown-type approach is good for identifying lesser-known issues, for example refrigerant management (a surprising #1 on the list, as shown below), and for aligning our capacity for solutions with problems we can solve in a way that maximizes our potential positive impacts. That part is commendable.
More so than its results, Drawdown’s prioritization methodology might ultimately prove its greatest achievement. One key point here is to examine what the project does at this higher level, because it is an instructive example of a process resembling triage (a minimal sketch of these steps follows the figure below):
Identify candidate issues for consideration.
Develop criteria to rank them.
Apply criteria and develop a ranked list.
Drawdown’s “Top 20” list of most effective ways to reverse climate change. Updated in 2020. [3]
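To make that three-step process concrete, here is a minimal sketch of it in code. It is emphatically not Drawdown’s actual model (which ranks solutions by modelled emissions reductions and cost); the issues, criteria, weights, and scores below are all hypothetical placeholders, used only to show the shape of the process: identify candidates, develop criteria, apply them to produce a ranked list.

```python
# A toy illustration of the three steps above. All issue names, criteria,
# weights, and scores are hypothetical placeholders -- NOT Drawdown's model.

# Step 1: identify candidate issues for consideration.
candidate_issues = {
    # name: hypothetical scores for (severity, likelihood, tractability), each 0-10
    "Climate change": (8, 9, 6),
    "Biodiversity loss": (8, 8, 5),
    "Freshwater depletion": (7, 7, 6),
    "Antibiotic resistance": (9, 5, 4),
}

# Step 2: develop criteria to rank them (placeholder weights).
WEIGHTS = {"severity": 0.5, "likelihood": 0.35, "tractability": 0.15}

def priority_score(severity: float, likelihood: float, tractability: float) -> float:
    """Combine the criteria into a single comparable score."""
    return (WEIGHTS["severity"] * severity
            + WEIGHTS["likelihood"] * likelihood
            + WEIGHTS["tractability"] * tractability)

# Step 3: apply the criteria and develop a ranked list.
ranked = sorted(candidate_issues.items(),
                key=lambda item: priority_score(*item[1]),
                reverse=True)

if __name__ == "__main__":
    for rank, (issue, scores) in enumerate(ranked, start=1):
        print(f"{rank}. {issue} (score {priority_score(*scores):.2f})")
```

The mechanics are trivial; as the rest of this section argues, the genuinely hard part is the criteria themselves – what to measure, how to weight it, and who decides.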
As Turner’s previous lamentations would highlight, however, the focus with Drawdown is still problematically on just one domain – the environmental, and even more specifically, on reversing climate change (just one environmental challenge of many).
What if, instead, there was a work comparable to Drawdown that identified existential risks, developed a criteria for prioritization, and produced a ranked list like the one above? Something like the list below?
| Rank | Threat |
| --- | --- |
| 1 | Unintended consequences of AI development |
| 2 | Economic inequality |
| 3 | Climate change |
| 4 | Global nuclear war |
| … | etc. |
What if we had something like this to help guide us?
Perhaps more humbly, I should ask: What if we already do, but it just doesn’t get the attention it deserves?
Footnotes
[1] I attended a talk on this report delivered by the editor Paul Hawken, which is where some of the information here is drawn from.
[2] Pett, J. (2012, March 18). Joel Pett: The cartoon seen ’round the world’. Lexington Herald Leader.
[3] Drawdown.org. (2017). Summary of Solutions by Overall Rank. Retrieved from Drawdown.org: https://www.drawdown.org/solutions-summary-by-rank Taken from their website in 2017. Notably, much has changed in the years since writing this. The 2020 review of Drawdown appears to have changed things considerably. The extent this undermines arguments here won’t be clear until I get a chance to take a closer look.
Typically, this term appears in a medical context, where it refers to the process of determining priority of treatment based on the severity of a condition. For example, a hospital emergency room may have multiple patients to treat, and it’s triage that helps determine how to prioritize their treatment. Urgent and severe problems are dealt with first, and so on down the list of patients.
This would probably strike most people as “common sense” – it is clearly stupid to treat someone’s toothache while another patient with multiple gunshot wounds dies in the waiting room from lack of attention. Importantly, however, not every situation is so clear cut. Sometimes a patient with an urgent medical problem may not obviously present that way, while someone with a lesser issue can make a lot of noise demanding urgent attention (a toothache is a good example – exceptionally painful at times, but rarely life-threatening). The identification and classification of risk is, therefore, just as critical an aspect of triage as the ensuing prioritization that it informs. In other words, we need a way to identify the “quiet” but high-risk patients, just as we need a way to identify the high-risk threats that don’t announce themselves as loudly as a screaming toothache or as visibly as a gaping chest wound.
[Editor’s comment: The other part of triage which you haven’t mentioned is that patients who are urgent but too far gone are not treated – what are the sustainability parallels? Do we need to jettison certain causes in favour of those that are still saveable?]
The point here is that triage needs two things to work well: it’s not just about ranking threats, it’s also about identifying them in the first place – as many as possible that might be of relevance or importance. A good classification and identification scheme can help us in the more uncertain situations, when multiple high-priority issues present simultaneously. One can easily argue this is the case in sustainability, where climate change, resource constraints, economic inequality, human rights, peace and justice, and other issues all present as equally urgent (and, frustratingly, are often interrelated – making it harder to separate and then prioritize just one).
The UN Sustainable Development Goals exemplify, quite well I think, how we already have frameworks attempting something like the first half of the work of triage – identifying threats. It’s not quite framed that way, but many threats can be read out of each goal. Eliminating poverty, for example, reduces the risk of harm at a personal level, and reduces the risk of broader societal disorder – and the reason we want to do this is, partly, to reduce such risks. The UN even demonstrates thinking “beyond the grass ceiling” and includes issues like economic inequality, justice and peace, and human rights. What the SDGs lack, however, is an explicit risk-based focus, and any kind of serious ranking or triage. In a world of finite time and other resources, should we focus on SDG #1 or #10? Which one minimizes potential risks the most? Clearly, the framework isn’t that useful overall for a triage approach.
Now, perhaps, we need to begin the work of sorting out what’s most important from lists like these. We need something as accessible and well-supported as the SDGs, but we need it ranked, so that we can prioritize. This won’t be easy, since developing criteria to rank these things would be immensely difficult and complex, and because, as said earlier, these issues often interrelate. Despite the challenge of this task, we must take it on. Without a roadmap of prioritized risks, we are blindly hoping the things we focus on most (like climate change) are indeed the biggest threats. It’s fair to question whether this focus comes with a cost.
Why can’t we do both?
We can do two things at once, of course. Issues can be “equally important” too. Just as they are interrelated. But we cannot pretend we have infinite resources to tackle sustainability challenges either. Money is limited. People’s attention spans are limited. The time people can devote to the cause actively and consciously is limited. Time, especially, is limited.
It is within the context of these constraints that a simple truth emerges: we need some level of focus here. We can’t just say “it’s all important” and proceed haphazardly, according to our own interests and agendas. Do that, and there’s a real risk we run out of time to fix certain problems in an optimal way. This truth is as uncomfortable as it is obvious – it implies there will be sacrifices; issues deprioritized along the way. A good example playing out as I write these words is the conflict between quarantine protocols protecting public health and people’s right to protest. The clash between the Black Lives Matter movement and the restrictions of COVID-19 illustrates well the kinds of difficult conversations ahead. In the Apeilicene, we are likely to see these situations with increasing frequency, as time-sensitive threats arise and demand extraordinarily difficult choices of us.
Similarly, we can’t assume that the current focal points aren’t themselves the product of political agendas and self-interest. If triage enables progress via focused prioritization, then it demands sacrifice, as I’ve said. And if sacrifices are required, they are overwhelmingly more likely to be demanded of the powerless by the powerful. This perspective sheds some light on the landscape of global sustainable development: often it is rich, industrialized nations pressuring less-wealthy countries to leapfrog coal and jump straight to more expensive solar, for example. In other words, the powerful expecting the powerless to do the heavy lifting.
What about root causes?
This is an issues-oriented approach, clearly. We might question why we aren’t identifying whole systems as problems. Capitalism, consumerism, and so on. The guy in the emergency room with heart attack symptoms isn’t just there because of his individual circumstances. There are larger, structural forces like globalization, modernism, reductions in manual labour, and consumerism that likely shaped his individual circumstances and the choices he could exercise.
But triage is not about root causes or systemic, structural change. It is what you practice in the emergency room. When someone’s heart is about to stop, it’s that immediate crisis you focus on, not systemic change.
Explaining the lack of triage in mainstream sustainability
In contrast with a triage approach to sustainability, triage in health care is not a peripheral concern – it is a core practice supported by years of research and used in basically every medical institution around the world. Why then, in sustainability, is this same approach not taken?
One possible explanation is that focusing on risks – on problems and challenges – often places a negative frame on a given issue, making it harder to identify potential opportunities. Similarly, focusing on risks can present challenges for communicating sustainability. Research shows that fearful messages cause disengagement, apathy, and a sense of hopelessness and incompetence. Continued exposure causes most people to “tune out the messages and move on to other, more pleasant concerns”[1]. Despite its obvious importance, then, a threat-based approach to sustainability presents real challenges and considerations, and in looking at some examples of threat-based models, we’ll start to see that the devil, as always, is lurking in the details.
Footnotes
[1] Robertson, M. (2017). Communicating Sustainability. New York: Routledge.
Even the best ideas “on paper” are no good if they cannot be practically applied in the real world. Theory is often differentiated from reality in this way, with the application or practice of ideas referred to as praxis.
We want to put these theories to work, but do we have a theory about how to put theory to work? What is the theory of praxis?
Our definition of sustainability has a broad scope, focusing on inclusivity and exhaustive consideration of many possibilities. Yet “scoping down” and narrowing considerations comes with benefits too. For reasons of pragmatism, political feasibility, comprehensibility, accessibility, and others, it can be useful to simplify things and focus in on one specific area, one specific need, one specific risk.
In the context of communicating sustainability, too much information can lead to a range of negative outcomes, such as information overload and decision paralysis. All of this leads to one basic question in practicing sustainability: what should our scope be? And can an approach like the one we are taking, with a broad and all-inclusive framework, still be useful in practical terms?
Operationalising sustainability requires a balancing act between broad and narrow perspectives.
The concept of “triage” is useful in highlighting how sustainability can be applied. In this model, prioritisation of one thing over another is based on evidence of urgency. A triage approach spares us the impossible task of doing everything simultaneously, while still leaving room for all things to be considered – allowing us to identify subtle threats and risks, to uncover hidden opportunities, and otherwise benefit from a broader approach that considers many perspectives.
Building a map – maybe even a data set?
The field of medicine has long struggled with a specific problem: managing the interactions between the different drugs given to a patient. This is a complex problem, made difficult by the need to account for variables such as a patient’s gender, ethnicity, and personal medical history, among other factors.
It is a problem, in other words, often beyond the human mind’s ability to solve.
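To get a feel for why, consider just the raw combinatorics. The sketch below is purely illustrative, but it shows how quickly the number of interaction checks outgrows anything a clinician could hold in their head – before any patient-specific variables are even considered.

```python
# A rough illustration of why drug-interaction checking outgrows manual
# review: the number of pairwise combinations grows quadratically, and
# higher-order interactions grow faster still.

from math import comb

for n_drugs in (5, 10, 20, 50):
    pairs = comb(n_drugs, 2)    # unique drug pairs to check
    triples = comb(n_drugs, 3)  # three-way interactions
    print(f"{n_drugs} drugs -> {pairs} pairs, {triples} triples")
```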
To get around these limitations, we’ve turned to Artificial Intelligence. AI systems like IBM’s Watson are learning to excel at tasks like these and can offer far more comprehensive and accurate overviews of the complex interactions across hundreds of drugs, for any kind of patient[1] (IBM, 2019). The approach here – collecting, combining, and compiling that data, and then feeding it to Watson – is what makes something that comprehensive achievable.
If the Earth and its inhabitants are the sick patient (and all indications suggest we are), then it’s worth noting that the area of sustainability has no Watson; no all-seeing Oracle we can look to for guidance.
Perhaps we will need something like this someday? Perhaps the challenge of persisting over time is a problem beyond the human mind’s ability to solve alone, without help. Already, we are putting artificial minds to use on singular, discrete sustainability projects, from climate modelling, to autonomous transport, to smart irrigation systems. Perhaps a time will come when these systems are supplemented by something like IBM’s Watson – a more generalized artificial intelligence, capable of insights across complex systems both natural and human-made. Some of these insights we can barely imagine right now. As with the invention of the microscope, a whole new world – once invisible – could open up to us.
What we are doing, then, with The Grass Ceiling – and what we encourage others to do – is help map this terrain for future travellers. Like cartographers of old, we are exploring a diverse and unfamiliar world, and capturing what we can of it to guide those who come after us. And perhaps, appropriately for a STEM-driven era that promises profound technological progress, we are also building something of a data set – a resource that could help build a “Watson for sustainability”: a catalogue of ideas and areas of investigation that any kind of holistic, integrative system would want to consider.
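We haven’t settled on what such a catalogue should look like, but even a rough sketch helps. The record structure below is entirely hypothetical – invented field names, invented example – and is meant only to suggest the kind of cross-domain, cross-referenced entries a “Watson for sustainability” might need to ingest.

```python
# One possible (entirely hypothetical) shape for a catalogue entry in a
# "data set for sustainability" - the fields are illustrative, not a spec.

from dataclasses import dataclass, field

@dataclass
class CatalogueEntry:
    topic: str                                      # e.g. "soil salinity"
    domains: list = field(default_factory=list)     # e.g. ["environmental", "economic"]
    related_topics: list = field(default_factory=list)
    sources: list = field(default_factory=list)     # citations, links, datasets
    notes: str = ""

entry = CatalogueEntry(
    topic="intergenerational equity",
    domains=["social", "economic"],
    related_topics=["discount rates", "climate policy"],
    sources=["Brundtland Commission (1987)"],
    notes="How should present costs be weighed against future benefits?",
)
print(entry)
```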
Embracing conflict
Lessons from “praxis at scale” in the coastline paradox
Mathematics and real-world situations can highlight how our epistemological approach – specifically, embracing paradox and competing truths – can make sense.
Most of us know the idea of the fractal: an infinitely recurring, mathematically defined structure that can be viewed in detail at any scale. Fractals are a good mascot for our definition and view of sustainability, and for how we’re viewing knowledge: as something protean and shape-shifting, and “true” at all scales. Perhaps even more so, the Sierpinski Triangle should be our motif – a fractal that demonstrates a real-world paradox.
The Sierpiński triangle, also called the Sierpiński gasket or Sierpiński sieve, is a fractal attractive fixed set with the overall shape of an equilateral triangle, subdivided recursively into smaller equilateral triangles. Image from Wikimedia Commons.
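For the curious, the Sierpiński triangle is surprisingly easy to conjure. The short sketch below uses the so-called “chaos game” – repeatedly jumping halfway towards a randomly chosen vertex – and is offered only as an illustration; plot the points and the fractal emerges at every scale you zoom into.

```python
# Generating Sierpinski-triangle points via the "chaos game": start anywhere,
# then repeatedly move halfway towards a randomly chosen vertex of an
# equilateral triangle and record the point.

import random

vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)]
x, y = 0.25, 0.25  # arbitrary starting point
points = []

for _ in range(50_000):
    vx, vy = random.choice(vertices)
    x, y = (x + vx) / 2, (y + vy) / 2
    points.append((x, y))

# Scatter-plot `points` (e.g. with matplotlib) to see the triangle emerge.
```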
The Coastline Paradox relates to the real-world problem of measuring the perimeter of a geographical area (for example, the coastline of Australia, some of which is shown below). As measurement accuracy increases, so too does the measured length of the coastline. Because accuracy can increase without limit, it seems to follow that a coastline’s length can too. Theoretically, this means that Australia’s coastline is infinitely long – something that appears to violate the law of non-contradiction: it cannot be true that space is finite if it’s also true that coastlines have infinite length! Both are “true”, however, in different contexts.
“Tending towards the infinite”: The coastline’s length grows as measurement accuracy increases.
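A small numerical sketch makes the trend visible. It uses the empirical Richardson relation, in which the measured length L scales with the ruler length G as L ≈ M · G^(1−D), where D > 1 is the coastline’s fractal dimension. The constants below are made up purely for illustration; the point is only that as the ruler shrinks, the measured length grows without bound.

```python
# Illustrative coastline-paradox sketch using the Richardson relation
# L(G) ~ M * G**(1 - D). M and D are hypothetical values, chosen only
# to show the trend.

M = 1000.0   # hypothetical scaling constant (km)
D = 1.2      # hypothetical fractal dimension of the coastline

for ruler_km in (500, 100, 10, 1, 0.1, 0.01):
    length_km = M * ruler_km ** (1 - D)
    print(f"ruler = {ruler_km:>7} km -> measured coastline ~ {length_km:,.0f} km")
```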
Perhaps these kinds of seeming contradictions can illustrate a way to think about sustainability issues: often multiple conclusions or results are true (or at least have merit). What often matters is the context, the scale, and framework we’re applying.
There are many definitions and frameworks of sustainability out there, many of which are provided by bodies like think tanks, governments, businesses, and intergovernmental organisations such as the United Nations. Some focus on specific areas, like the Three Pillars framework (examined earlier here and in more detail here). This framework represents sustainability as a movement with three specific concerns: the social, the environmental, and the economic.
Elsewhere, the concept of Sustainable development (one way in which we practice sustainability – the conversion of these ideas into actual, real-world projects) aims to tackle each of these three pillars in an integrated, interdisciplinary way that focuses on societal and economic development, attempting to ensure that progress made in one area does not cause regress in another. The UN Sustainable Development Goals[1] are an excellent example of this conceptual framework of sustainability being applied to real-world projects.
Another conception of sustainability takes a different, time-focused approach. The concept of intergenerational equity repositions sustainability as the challenge of ‘meeting the needs of the present without compromising the ability of future generations to meet their own needs’ (courtesy of the Brundtland Commission). In this framework, we must consider sustainability across different scales of time; the present and the future.
Each of these definitions, and many others, aims to capture certain elements of the concept of sustainability, but none of them alone captures it all. The three pillars framework focuses perhaps too tightly on its chosen areas, whereas intergenerational equity narrows our considerations to the lens of time scales. That is why, for this project, we are using a stipulative definition[2] of sustainability that is far broader. For the purposes of our work, sustainability is defined in the most basic way possible:
Sustainability is the ability for humans and their environments to persist over time.
Persist? That’s it? What about flourishing? – asked the supervisor of our research project. It’s a good question, and one that shows how any definition – even one as broad as our own – will always be imperfect.
[2] “A declaration of a meaning that is intended to be attached by the speaker to a word, expression, or symbol and that usually does not already have an established use in the sense intended” (Merriam-Webster).
First published: March 5, 2017 for Woroni[1]. Reworked in 2019 for The Grass Ceiling
Although there are elements within sustainability dating back to the Ancient Greeks and even earlier, the idea has risen greatly in prominence since the 1970s, spurred into public consciousness by the broader momentum building within the environmentalist movement.
Rachel Carson’s Silent Spring[2], often credited with kick-starting modern environmentalism, had been released in the early 1960s and had done much in the intervening years to raise awareness within the U.S. and abroad that human activities were harming not only the planet, but also humans themselves. ‘Our heedless and destructive acts enter into the vast cycles of the earth and in time return to bring hazard to ourselves,’ Carson told a Senate Subcommittee, not long after the book’s publication[3].
A decade after Carson’s best-selling book had helped launch modern environmentalism, the UN held one of the first conferences relating directly to the idea of sustainability: the United Nations Conference on the Human Environment, held in Stockholm in 1972. The result of the conference, among other things, was the Stockholm Declaration – a list of 26 principles intended to guide a new, more sustainable kind of development[4]. A taste of the first five is included below.
1. BOTH ASPECTS OF MAN’S ENVIRONMENT, THE NATURAL AND THE MAN-MADE, ARE ESSENTIAL TO HIS WELL-BEING AND TO THE ENJOYMENT OF BASIC HUMAN RIGHTS – EVEN THE RIGHT TO LIFE ITSELF.
2. THE PROTECTION AND IMPROVEMENT OF THE HUMAN ENVIRONMENT IS A MAJOR ISSUE WHICH AFFECTS THE WELL-BEING OF PEOPLES AND ECONOMIC DEVELOPMENT THROUGHOUT THE WORLD…
3. MAN HAS CONSTANTLY TO SUM UP EXPERIENCE AND GO ON DISCOVERING, INVENTING, CREATING AND ADVANCING…
4. IN THE DEVELOPING COUNTRIES MOST OF THE ENVIRONMENTAL PROBLEMS ARE CAUSED BY UNDER-DEVELOPMENT…
5. THE NATURAL GROWTH OF POPULATION CONTINUOUSLY PRESENTS PROBLEMS FOR THE PRESERVATION OF THE ENVIRONMENT…[5]
United Nations Conference on the Human Environment, Stockholm, 1972
A few things are immediately noticeable: gendered language referring to all people as “man”; a focus on economic development, suggesting that capitalism is the answer; the labelling of some countries as “developing”; the suggestion that “underdevelopment” is the major cause of environmental problems in those countries; and the suggestion that population growth is an issue.
Many – if not all – of these narratives are challenged today. Flawed as it was, this idea of ‘sustainable development’ was gaining traction. Reading through that list of principles, the influence of the environmental movement is evident too: there are perhaps only three principles that do not explicitly mention or concern themselves with the environment. The focus of early sustainability was narrower than it is today, and yet sustainability today remains heavily fixated on the environment all the same.
The over-greening of sustainability
Environmental motifs sampled from the same Google image search. “Sustainability” seems to be about gardening, or having the world (almost literally) in our hands. Being mindful of this framing is important.
If you Google beyond cursory image searches and explore different organisations, you may notice that many sustainability-related projects in government are overseen by, or somehow related to, their environmental departments. Within education, you’ll notice the subject is usually taught by environmental departments too. Here at ANU, my own sustainability degree revolves around courses taught by the Fenner School of Environment and Society. Sustainability has its roots (forgive me) in the environment, and it is usually out of those same departments – once focused exclusively on that domain – that sustainability is beginning to emerge.
It can take quite a bit more digging beyond first impressions to realize that there is more to modern sustainability than just environmental concerns.
The history of sustainability, with its roots in modern environmentalism, has undoubtedly “greened” sustainability. This could be placing harmful boundaries on the concept by focusing it too much on environmentalism. The name of our project, The Grass Ceiling, represents our desire to transcend historical preoccupations with environmentalism.
For over 40 years now the UN and associated bodies have been expanding the earlier environment-focused definitions of sustainability to be more inclusive of other equally important factors. By 1987, the UN’s Brundtland Commission – another famous milestone in the rise of sustainability – was speaking about the idea in terms of the ‘three pillars’; the social, the economic, and the environmental[6].
The ‘three pillars’ idea has remained popular since Brundtland, and cemented itself into much of the research, discourse and practice of mainstream sustainability. Corporate sustainability over previous decades has often used what’s known as the Triple Bottom Line – a framework that encourages focusing on social and environmental outcomes in addition to the economic ‘bottom line’. The three pillars idea is explicit here, as it is elsewhere.
In subsequent articles, I implicitly and explicitly critique this idea of the ‘three pillars’ in more depth, demonstrating that there are more ways to think about sustainability beyond these three core concerns. For right now, however, they represent a good first glance at sustainability – a more comprehensive idea than the “green” that a quick Google search suggests, and therefore a good first glimpse beyond the grass ceiling.
The Three Pillars: Social, Environmental, Economic.
What are these three pillars, then, and what is sustainability as it relates to them? The idea is relatively simple: societies cannot achieve sustainability by focusing on the environment alone. We could, for example, achieve all the environmental goals laid out by the UN and others, such as carbon emissions reductions, and yet still be living in an unsustainable world destined for collapse. A reduction in ocean acidification, or the complete halt of biodiversity loss would only be a partial victory for sustainability so long as women around the world remain disempowered, poverty continues to destroy lives, and economic inequality heightens to dangerous and unprecedented levels.
These lingering, unresolved issues would also risk creating situations that unwind progress made elsewhere. If countries with alarming levels of economic inequality fall into civil unrest and even conflict, then the progress made on the environmental front is almost certain to slip.
This framework suggests that achieving environmental outcomes depends upon taking a holistic approach. The social and economic impacts of environmental policy are often so significant that tackling just one ‘pillar’ in a vacuum dooms any such process to failure. Consider how much of the pushback against environmental policy is framed as an economic argument. In Australia, for example, environmental policies are often challenged on economic terms – as too expensive, or economically unfeasible. As the arguments go, achieving environmental targets is no good for Australia if the cost is economic turmoil (and implicitly, the social upheaval that entails). The argument is not without merit and echoes the complex interrelationship between our society, the economy, and our environment.
To wrap up then, and keep things simple for introductory purposes, sustainability can be considered as a movement with three core concerns: environmental responsibility, economic equity, and social justice. Sustainable development (one practice of sustainability) aims to tackle each of these three pillars in a holistic, integrated, and interdisciplinary way that ensures progress made in one area does not cause regress in another.
This idea sounds good on paper, and indeed much progress has been made under this framework. As we’ll see in future discussions, however, there is more to sustainability than these three areas, and even within just these three, there remain many challenges ahead.
Footnotes
[1] Blood, N. (2017, March 5). What is Sustainability? Woroni.
[2] Carson, R., & Darling, L. (1962). Silent Spring. Boston: Houghton Mifflin.
There are many ways of looking at this idea of “sustainability” and we’re here to take you on a guided tour of some of them. Below you can find some extra show notes, resources, and other goodies.
This episode was recorded on 21 and 24 January 2019.
WHAT IS SUSTAINABILITY?
Different ways of looking at it
Often defined narrowly: As environmentalism. Conflicting studies or viewpoints. Less often a conversation happening with everything taken together: Integrative, interdisciplinary, or multidisciplinary.
Trying to bring together different ideas, voices, perspectives, practices.
Especially marginalised ideas or voices
Our coming together was in an interdisciplinary class: physical geography + human geography = “the geography of sustainability”.
Out of the green movement comes these conferences on “sustainability”
1972: Stockholm Declaration: Majority are environmental, but some important non-environmental concerns (freedom, dignity, etc.). Starting to see, even from the outset, that it is about something more than environmentalism. Sustainability is about more than the environment.
Brundtland Commission: At this point talking about environment even less, and when considering humanity does so intergenerationally, from the perspective of current and future “needs”.
Sustainable Development: “development that meets the needs of the present without compromising the ability of future generations to meet their own needs.”
Sustainability as a concept tied to time.
Tied to more than just our environment: tied to concepts of justice, equity, and society.
MDGs (2000–2015): Goal #7 is the only mention of sustainability – ensuring environmental sustainability.
SDGs (2015 – 2030): Great shift here to focus on this idea. Mentioned in their very name / suggesting a much broader aspiration to sustainability: for example “Sustainable consumption and production”.
If we achieve these goals we are closer to achieving “sustainability”?
Advanced cross-sectoral work; a more multidisciplinary perspective on each of these challenges. Greater coalitions lending a greater plurality of voices, including those once marginalized or sidelined.
Seeing a trend over time here of an increasing body of work being built up around sustainability – covering a range of concerns beyond (but always related back to) the environment.
DEFINITIONS AND CONCEPTUALISATIONS
Literal definitions: Sustaining over time (a certain period of time)
Three Pillars theoretical framework that is practised explicitly: Triple Bottom Line
To what extent does it mirror sustainability? To what extent is this a good framework for practicing sustainability? It depends: corporate social responsibility varies according to the case you’re looking at.
Individual action as consumers? Structural change? Broader questions here about how to best achieve sustainability: Who practises sustainability? Is it just environmentalists? What about intersections between other progressive causes?
Do a simple Google search of the word “sustainability” and see what you find. A sea of green: plants, leaves, the famous motif of the hand holding the plant. What we’re trying to challenge here is the dominance of an environmental perspective when it comes to sustainability.
Because we want to embrace complexity. Because we want to learn about other perspectives, especially those outside the mainstream.