
A Global Catastrophe This Century

Metaculus Journal

Metaculus predictions suggest an 18% chance that a global catastrophe will happen this century, and a 3.7% chance that such a catastrophe will nearly or completely annihilate our species.

Figure 1. Of 1000 runs of this century, it is expected that 180 will involve a catastrophe, and 37 runs will involve the near or complete annihilation of our species (~2.7k combined predictions).

Catastrophes are defined as the loss of at least 10% of the human population within 5 years. Such an event would be at least 40 times worse than the COVID-19 pandemic in terms of global death toll, according to death toll estimates produced by The Economist.
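The "40 times worse" figure can be sanity-checked with rough numbers. A minimal sketch, where the world population and the order of magnitude of The Economist's death toll estimate are my own approximations rather than figures from the essay:

```python
world_population = 8.0e9       # rough 2022 figure (my assumption)
covid_excess_deaths = 2.0e7    # death toll estimate is on this order (my assumption)

catastrophe_deaths = 0.10 * world_population  # 10% population loss

print(catastrophe_deaths / covid_excess_deaths)  # → 40.0
```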

Near or complete extinction events are defined as events that destroy at least 95% of the population within a 25-year period. This would be far worse than anything the human species has ever experienced (with the possible exception of the Youngest Toba eruption).

Many things—AI, nuclear war, synthetic or naturally occurring pandemics—can cause catastrophes

Metaculus predictions expect that if a global catastrophe is to occur this century, it will most probably (66% chance) be caused by an AI failure mode, an engineered pandemic, or the effects of nuclear war.

Figure 2. The most likely catastrophes are those caused by AI failure modes, synthetic biology, nuclear war, and "other risks" (~1500 predictions).

However, climate change or geoengineering catastrophes, natural pandemics, and “other risks” also pose serious dangers. On Metaculus' account, these three collectively account for 27% of the total risk faced this century.

Forecasters are worried about the existential risk of Artificial Intelligence

Recently, some philosophers have argued that complete extinction might be of unique moral and civilizational importance compared to catastrophes from which we could eventually recover. For this reason, such events might warrant special concern.

Metaculus predictions indicate that if a catastrophe were to cause near or complete extinction, it would most probably (70% chance) be caused by some form of AI failure.

Figure 3. The risk of AI failure modes makes up 70% of total near- or complete-extinction risk (~2.7k predictions).

These predictions suggest that, while those concerned with global catastrophes should concern themselves with a broad range of risks, those primarily or exclusively concerned with extinction events should have their worries dominated by risks from AI. Moreover, since these predictions concern a greater-than-95% population loss, it seems plausible that if we restricted attention to complete extinction alone, the extent to which AI failure modes dominate extinction risks would be even more pronounced than indicated here.

Should we take these forecasts seriously?

These forecasts are generated by aggregating predictions collected over three years from hundreds of forecasters with multi-year track records of successfully forecasting events over short-to-medium-term horizons. Forecasters with strong track records are assigned more weight, and forecasts are extremized when many strong-track-record forecasters agree.
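One common way to implement this kind of weighting and extremizing is to average forecasts in log-odds space and then push the pooled forecast away from 50%. This is only a sketch of the general technique, not Metaculus's actual algorithm; the weights and the extremization factor `d` below are illustrative assumptions:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def aggregate(probs, weights, d=1.5):
    """Weighted log-odds pooling with extremization factor d.

    d > 1 pushes the pooled forecast away from 50% when forecasters
    agree; d = 1 reduces to a plain weighted average in log-odds space.
    """
    pooled = sum(w * logit(p) for p, w in zip(probs, weights)) / sum(weights)
    return sigmoid(d * pooled)

# Three forecasters who all say 70%: extremizing moves the
# aggregate above 70%, reflecting the agreement among them.
consensus = aggregate([0.7, 0.7, 0.7], [2.0, 1.0, 1.0], d=1.5)
print(consensus)
```

The intuition for `d > 1`: independent forecasters who each see only part of the evidence will individually under-react, so when they agree, the pooled forecast should be more confident than any single one of them.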

While this system performs well for predicting events months or a few years out (see this track record, and in particular try playing with the “evaluated at” filter), there is little evidence that this ability generalizes to long-term, decades-out forecasts. Moreover, there is no strong reason to expect that the incentives created by these questions are particularly conducive to high-quality forecasts (see, e.g., this thread). There may also be substantial selection bias, since those most concerned with global catastrophic risks are also the most likely to provide and update their forecasts.

Despite these issues, I'd guess that cleverly aggregating the predictions of hundreds of top forecasters is probably one of the best systems we currently have for producing and updating such forecasts at relatively low cost (though other approaches, such as reciprocal scoring and prediction markets, might also work). So, insofar as we should listen to any speculation about global catastrophic risks, we should probably take these estimates at least somewhat seriously.

Computing P(Near extinction)

The probability of near or complete extinction can be calculated roughly as follows:

P(\text{Extinction}) = \sum_{i \in E} P(\text{$i$ causes extinction} | \text{$i$ causes catastrophe}) P(\text{$i$ causes catastrophe})

where P(i causes catastrophe) is given by:

P(\text{$i$ causes catastrophe}) = P(\text{$i$ causes catastrophe} | \text{Catastrophe}) P(\text{Catastrophe occurs})

Using current predictions, this gives P(Extinction) = 3.67%. This calculation assumes that near-extinction events are mutually exclusive, and so it will overstate the 'true probability' implied by these forecasts. However, this bias is likely small. For example, if we instead assume that near-extinction events are independent and calculate P(Extinction) as follows:

1-\prod_{i \in E}\big(1-P(\text{$i$ causes extinction} | \text{$i$ causes catast.}) P(\text{$i$ causes catast.})\big)

we get a probability of 3.64%, which is pretty similar. Moreover, since catastrophes are likely positively correlated rather than independent (e.g. a nuclear war might make biological warfare more likely), my guess is that 3.67% is closer to the 'true probability'.

At any rate, we can bound the upward bias from above by noting that

P(\text{Near extinction}) \geq \max_{i \in E}P(i\text{ causes near-extinction}) = 2.6\%

and so the calculated P(Near extinction) cannot be biased upward relative to the 'true probability' by more than ~1 percentage point.
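The three calculations above (the mutually-exclusive sum, the independence-based product, and the single-largest-term lower bound) can be sketched in a few lines. The per-risk numbers below are illustrative placeholders, not the actual Metaculus inputs:

```python
import math

p_catastrophe = 0.18  # P(Catastrophe occurs) this century, from the essay

# Hypothetical inputs for each risk i (my assumptions, for illustration):
#   share = P(i causes catastrophe | Catastrophe)
#   ext   = P(i causes extinction  | i causes catastrophe)
risks = {
    "AI failure":          (0.30, 0.40),
    "engineered pandemic": (0.20, 0.05),
    "nuclear war":         (0.16, 0.03),
    "other":               (0.34, 0.02),
}

# Unconditional P(i causes catastrophe), via the second equation above.
p_cat = {i: share * p_catastrophe for i, (share, _) in risks.items()}

# Assuming near-extinction events are mutually exclusive: simple sum.
p_ext_sum = sum(p_cat[i] * ext for i, (_, ext) in risks.items())

# Assuming independence instead: complement of the product.
p_ext_indep = 1 - math.prod(1 - p_cat[i] * ext for i, (_, ext) in risks.items())

# Lower bound: the single largest per-risk term.
p_ext_lower = max(p_cat[i] * ext for i, (_, ext) in risks.items())

# The ordering claimed in the text holds for any inputs:
assert p_ext_lower <= p_ext_indep <= p_ext_sum
```

The assertion at the end reflects the general fact used in the essay: the sum always exceeds the independence-based estimate (which in turn exceeds the largest single term), so the sum's upward bias is bounded by their gap.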

Predictions used

The relevant predictions used in this analysis are the following (and note that this analysis uses the Metaculus predictions rather than the Community predictions):


I received useful feedback from Ryan Beck, Ege Erdil, Matthew Barnett, SimonM, Christian Williams, and Will MacAskill. Any mistakes are my own.
