Risk and the Hyperobject

A similar way of understanding risk

In making podcasts for this project, we had the pleasure of talking with Elizabeth Boulton, a PhD researcher who studies the work of Timothy Morton. Morton developed the concept of the hyperobject in an attempt to better account for how exactly existential risks like climate change are, as Bostrom describes, a ‘different beast’.

Having set global warming in irreversible motion, we are facing the possibility of ecological catastrophe. But the environmental emergency is also a crisis for our philosophical habits of thought, confronting us with a problem that seems to defy not only our control but also our understanding. Global warming is perhaps the most dramatic example of what Timothy Morton calls “hyperobjects”—entities of such vast temporal and spatial dimensions that they defeat traditional ideas about what a thing is in the first place[1].

The idea of a hyperobject can be confusing, but it echoes concepts from Bostrom. Global warming, for example, is a process that unfolds over geological timescales, and this is not our default mode of thinking: our biology limits us to far shorter lifespans. When Bostrom says ‘we have not evolved mechanisms, either biologically or culturally, for managing such risks’[2], he is ultimately alluding to a very basic truth about our biology and the kind of mindset it locks us into. An argument like Morton’s builds on this idea in greater detail, holding that climate change is an example of something so vast in time and space that it defies the ability of biologically evolved human minds to comprehend it (See also: Building a Map, about how AI and other technological progress might help us meet sustainability challenges beyond the human mind’s ability to solve).

Natural selection did not equip us for problems like this, for the simple reason that natural selection only works on survivable threats: something must be left alive with favourable traits to select for. Because existential risks are terminal, there is no room for natural selection to act, and therefore no (or exceedingly little) room for our biology to help us.

One obvious point here is that technology may help us overcome that complexity. Climate models, for example, already draw on AI and other computational innovations that reduce informational complexity to levels a human mind can understand and respond to[3].
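To make that idea of complexity reduction a little less abstract, here is a minimal, hypothetical sketch in Python. It is not drawn from the article or from any particular climate model; it simply illustrates one standard statistical tool (principal component analysis, the basis of so-called EOF analysis in climate science) compressing a century of synthetic gridded ‘temperature’ maps into three summary numbers per year.

```python
# Minimal sketch (illustrative only, synthetic data): reducing climate-scale
# information to a handful of human-readable numbers per year.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

n_years, n_lat, n_lon = 100, 36, 72          # 100 years on a coarse 5-degree grid
trend = np.linspace(0.0, 1.2, n_years)       # a slow warming signal (degrees C)
pattern = rng.normal(size=(n_lat, n_lon))    # a fixed spatial response pattern
noise = rng.normal(scale=0.5, size=(n_years, n_lat, n_lon))

# Each year is a full map: 36 x 72 = 2,592 numbers per year.
fields = trend[:, None, None] * pattern + noise

# Flatten the maps and keep only the three dominant modes of variability.
pca = PCA(n_components=3)
scores = pca.fit_transform(fields.reshape(n_years, -1))

print("numbers per year, before reduction:", n_lat * n_lon)
print("numbers per year, after reduction:", scores.shape[1])
print("share of variance kept by 3 modes:", round(pca.explained_variance_ratio_.sum(), 3))
```

The point of the sketch is only that machines can boil thousands of data points per time step down to a few quantities a person can actually reason about; real climate workflows do this at vastly larger scales.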

Going further, this idea of technology-driven innovation can be a key argument in transhumanist or posthumanist interpretations of sustainability. In short: smarter, more capable humans can solve bigger, more challenging problems. Bostrom suggests we need new societal institutions, new priorities, new policies, and new norms – all to face new threats. Similarly, if human minds cannot comprehend these new threats, then perhaps we need new minds, and maybe even new bodies, too?


‘A reckoning for our species’: the philosopher prophet of the Anthropocene

Published by The Guardian as part of its Long Read series in 2017. Read the full article here.

Part of what makes Morton popular are his attacks on settled ways of thinking.

His most frequently cited book, Ecology Without Nature, says we need to scrap the whole concept of “nature”. He argues that a distinctive feature of our world is the presence of ginormous things he calls “hyperobjects” – such as global warming or the internet – that we tend to think of as abstract ideas because we can’t get our heads around them, but that are nevertheless as real as hammers.

He believes all beings are interdependent, and speculates that everything in the universe has a kind of consciousness, from algae and boulders to knives and forks. He asserts that human beings are cyborgs of a kind, since we are made up of all sorts of non-human components; he likes to point out that the very stuff that supposedly makes us us – our DNA – contains a significant amount of genetic material from viruses. He says that we’re already ruled by a primitive artificial intelligence: industrial capitalism. At the same time, he believes that there are some “weird experiential chemicals” in consumerism that will help humanity prevent a full-blown ecological crisis.


Check out Ecology without Nature


Footnotes

[1] University of Minnesota Press. (2019). Hyperobjects: Philosophy and Ecology after the End of the World. Retrieved from University of Minnesota Press: https://www.upress.umn.edu/book-division/books/hyperobjects

[2] Bostrom, N. (2002). Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards. Journal of Evolution and Technology, 9(1).

[3] Cho, R. (2018, June 5). Artificial Intelligence – A Game Changer for Climate Change and the Environment. State of the Planet – Earth Institute, Columbia University.
