
Bangs, whimpers, crunches and shrieks: the surprising variety of existential threats to humanity


It's deeply disturbing to consider the possibility that a natural or man-made catastrophe will shatter human civilization beyond repair. But philosopher Nick Bostrom believes that we're not thinking about such catastrophes nearly enough — and that we can help secure the future for our children if we start considering them more closely.

Bostrom is the director of Oxford University's Future of Humanity Institute, one of several academic think tanks arguing that even seemingly improbable or far-off threats to civilization should be taken seriously and planned for because of the immensely high stakes: the lives and happiness of not only all currently living humans but all potential future humans.

One particularly engaging example of Bostrom's work comes from a paper he wrote in 2002, in which he provides a four-category taxonomy for the types of existential threat humanity conceivably faces. This categorization method illustrates how broad the question of existential risk really is, and provides a framework for thinking about both the nature of specific threats and how to deal with them. (It's also got poetic flair; two of the four categories were inspired by T.S. Eliot's famous formulation from "The Hollow Men": "This is the way the world ends / Not with a bang but a whimper.")

We've put together a short rundown of Bostrom's system for you to explore. It's important to note Bostrom's definition of "existential risk" here:

"Existential risk – One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential."

Keep this definition in mind as you read further. You'll find that some such risks don't look anything like the carnage you might instinctively associate with the end of the human race. And remember that while some of these threats appear to be impossibly distant prospects, others — such as pandemics or nuclear war — could happen very soon.


Risk type #1: The Bang

What is it?: Bangs are "the most obvious kind of existential risk", as Bostrom puts it — abrupt, devastating catastrophes that could bring human civilization (or even existence) screeching to a halt in a relatively short period. Most well-known and relatively likely existential threats to humanity can be classified as Bangs.

Some examples of Bangs:

  • Broad-scale nuclear war: Bostrom is hardly alone in believing that existing nuclear weapons stockpiles would be capable of irrevocably crippling or outright exterminating humanity, thanks to the possibility of nuclear winter. However, he's careful to point out that not all nuclear wars would necessarily constitute existential threats; a limited nuclear exchange between India and Pakistan, for example, would be unlikely to undo human civilization altogether.

  • Deliberate misuse of nanotechnology: Nanotechnology — technology that functions on a molecular or atomic scale, and especially tiny machines called nanomachines — is improving at a rapid clip. Bostrom argues that humanity is likely to develop nanomachine weaponry long before it produces effective countermeasures to such weaponry. As a result, there may be a period in the near future during which massively destructive nano-weapons could fall into the wrong hands, or be deployed by governments amid political instability, before humanity has any means to control their effects.

  • Naturally-occurring or bioengineered plague: Humanity's increasingly dense, urban population distribution and the recent advent of genetic engineering make both of these possibilities more likely to play out.

  • Badly-designed AI superintelligence: A superintelligent computer with immense powers at its disposal could conceivably achieve its programmed goals using methods that destroy its creators. Bostrom provides an example in which such a machine chooses to solve an extremely complex math question by incorporating all nearby matter into a giant calculation device, thereby killing all humans on Earth. This threat becomes increasingly plausible as AI technology continues to improve.

  • Asteroid or comet strike: Though this scenario has historical precedent and looms large in the public consciousness thanks to Hollywood blockbusters like Deep Impact and Armageddon, Bostrom considers it unlikely relative to these other examples.


Risk type #2: The Crunch

What is it?: Crunches are existential threats that involve human progress grinding to a halt, or even moving backwards, rather than humanity's literal destruction. These threats fit the bill because they would "permanently and drastically curtail" humanity's potential.

Some examples of Crunches:

  • Resource depletion: Humanity may run out of the stuff it needs to sustain a high-tech civilization, which could eventually collapse as a result. This threat grows increasingly real as we exhaust humanity's supply of essential non-renewable resources, such as fossil fuels and certain metals.

  • "Dysgenic" pressures:The unsettling possibility that perverse reproductive incentives will cause humanity to evolve into a more fertile but less intelligent version of itself, thereby limiting or destroying its ability to progress. Bostrom argues that improvements to genetic engineering technology may ameliorate this risk.

  • Technological arrest: Humans may find themselves simply incapable of pushing their technological capacities any further because of the sheer difficulties involved, leading to stagnation and eventual collapse.


Risk type #3: The Shriek

What is it?: Shrieks are mind-boggling scenarios in which humans achieve hyper-advanced computing and information technology — what Bostrom calls "posthumanity" — but the results are undesirable for the vast majority of people. These possibilities often read like science fiction, but Bostrom argues that they're becoming increasingly plausible thanks to modern computing's growing potency. And he may not be wrong – in the coming decades or centuries, the technologies involved in these scenarios might come to appear quite commonplace.

Some examples of Shrieks:

  • World domination by a machine-assisted human consciousness: If you've seen the 1992 film The Lawnmower Man, you can imagine this Shriek — a human mind, uploaded into a supercomputer that allows it to progressively increase its own intelligence and power, takes over the world and proceeds to rule it by whim. The apocalyptic nature of this scenario depends on the personality of the uploaded human.

  • World domination by a badly-designed superintelligent AI: This Shriek also bears a striking similarity to a famous sci-fi tale — specifically, Harlan Ellison's extremely disturbing short story "I Have No Mouth, and I Must Scream." This scenario differs from the "bad AI" Bang mentioned earlier in that humanity would be immiserated by the perpetual rule of a supercomputer whose goals don't further human happiness, rather than destroyed outright.

  • AI-assisted totalitarian regime: If a small number of people retain control over the first superintelligent computer, they might use it to rule the world perpetually, at the expense of the rest of humanity's well-being.


Risk type #4: The Whimper

What is it?: Whimpers are essentially the best-case existential threat scenarios — they involve humanity colliding with long-term limitations on expansion... or on what it means to be "human." Bostrom exhibits some ambivalence about whether these should properly be considered existential threats at all, since scenarios along the lines of using up all the resources in the galaxy after millions of years of interstellar expansion are actually fairly desirable... but not all of them sound so nice.

Some examples of Whimpers:

  • Humanity abandons its core values in pursuit of interstellar expansion: Simply put, humans may stop doing the things that make us human — creating art, relaxing, pursuing goals for emotional rather than rational purposes, and so forth — in exchange for investing all available resources in economically productive activity in general, and interstellar colonization in particular. This scenario doesn't involve the destruction of humanity so much as its spiritual cessation.

  • Destruction at the hands of an alien civilization: You knew this one had to appear eventually, right? Bostrom argues that this possibility is extremely unlikely unless humanity achieves significant interstellar travel, which places it firmly in the category of good problems to have.

The most difficult part of thinking about existential threats is their deeply unpredictable nature. Bostrom's paper is dotted with reminders that the greatest threats facing human civilization may come from completely unanticipated directions. But solving any problem — even problems as potentially massive as these — requires you to start somewhere, and this framework may be of great value to anyone who prefers to take a holistic approach to this challenging and engaging issue.
