Created by: Tamay and co-authors



You can now see an excellent visualization of the global catastrophic risk estimates produced in the Ragnarök series here.

It’s dangerous to be alive and risks are everywhere. But not all risks are created equal. Those that are especially large in scope and severe in intensity are global catastrophic risks: risks that could inflict serious damage to human well-being on a global scale.

Until relatively recently, most global catastrophic risks were natural, such as the supervolcano episodes and asteroid or comet impacts that led to mass extinctions millions of years ago. Other natural risks include a pandemic of naturally occurring disease, non-anthropogenic climate change, supernovae, gamma-ray bursts, and spontaneous decay of the cosmic vacuum state. Humanity has survived these natural existential risks for hundreds of thousands of years, which suggests that it is not any of these that will do us in within the next hundred.

By contrast, through technological advances our species is introducing entirely new kinds of risks: anthropogenic risks, man-made threats that we have no track record of surviving. Our longevity as a species therefore offers no strong prior grounds for confident optimism. Examples of anthropogenic risks include nuclear war, advanced artificial intelligence, biotechnology and bioengineered organisms, human-made climate change, and nanotechnology.

There are two complementary ways of estimating the chances of catastrophe. What we could call the direct way is to analyze the various specific failure modes and assign them probabilities, which is, at least partially, what the questions in the Ragnarök series are designed to do.
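The direct way can be sketched numerically. If the individual failure modes are treated as independent, the chance that at least one occurs is one minus the product of the individual survival probabilities. The probabilities below are purely illustrative placeholders, not estimates from the Ragnarök series:

```python
# Hypothetical per-century probabilities for a few failure modes
# (illustrative numbers only, not actual forecasts).
failure_modes = {
    "nuclear war": 0.01,
    "engineered pandemic": 0.02,
    "unaligned AI": 0.05,
    "asteroid impact": 0.0001,
}

# Assuming independence, the chance of avoiding every catastrophe is the
# product of the individual survival probabilities.
p_no_catastrophe = 1.0
for p in failure_modes.values():
    p_no_catastrophe *= 1.0 - p

p_any_catastrophe = 1.0 - p_no_catastrophe
print(f"Combined risk under these assumptions: {p_any_catastrophe:.4f}")
```

Note that the independence assumption is itself a simplification; in practice some failure modes are correlated (a nuclear war, say, may raise the odds of other catastrophes), which is part of why direct estimation is hard.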

Secondly, there is the indirect way. As Nick Bostrom has argued, there are theoretical constraints that can be brought to bear on the issue, based on some general features of the world in which we live. There are only a small number of these, but they are important because they do not rely on making many guesses about the details of future technological and social developments. One example is the so-called Doomsday argument, which purports to show that we have systematically underestimated the probability that humankind will go extinct relatively soon.
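A simple numerical relative of this style of reasoning is J. Richard Gott's delta-t argument (a cousin of the Doomsday argument rather than Bostrom's formulation): if we observe humanity at a uniformly random point in its total lifespan, then with confidence c the remaining duration lies between (1-c)/(1+c) and (1+c)/(1-c) times the past duration. A sketch:

```python
def gott_interval(t_past, confidence=0.95):
    # Assuming our observation point is drawn uniformly over the total
    # lifespan, the remaining duration falls in this interval with the
    # stated confidence (Gott's delta-t argument).
    lower = t_past * (1 - confidence) / (1 + confidence)
    upper = t_past * (1 + confidence) / (1 - confidence)
    return lower, upper

# Homo sapiens is roughly 200,000 years old.
lo, hi = gott_interval(200_000)
print(f"95% interval for humanity's future: {lo:,.0f} to {hi:,.0f} years")
```

At 95% confidence the bounds work out to 1/39 and 39 times the past duration, i.e. roughly 5,000 to 7.8 million further years for humanity under these assumptions.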

Moreover, the Fermi paradox tells us that it is not the case that life evolves on a significant fraction of Earth-like planets and proceeds to develop advanced technology. Hence, there must be (at least) one Great Filter – an evolutionary step that is extremely improbable – somewhere on the line between an Earth-like planet and a civilization that colonizes space in detectable ways. If the Great Filter isn’t in our past, we must fear it in our (near) future.
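The Great Filter reasoning can be made concrete with a Drake-equation-style product of transition probabilities. Every number below is an illustrative assumption, not a measured value; the point is only the structure of the argument: the expected count of detectable civilizations is the number of candidate planets times the product of per-step probabilities, so observing zero forces at least one step to be far more improbable than naive guesses suggest.

```python
# Illustrative probabilities for each step on the path from an Earth-like
# planet to a detectably expansive civilization (pure assumptions).
steps = {
    "abiogenesis": 1e-3,
    "complex cells": 1e-2,
    "intelligence": 1e-2,
    "technological civilization": 0.1,
    "detectable expansion": 0.1,
}

# Rough order of magnitude for Earth-like planets in the observable universe.
n_earthlike_planets = 1e20

expected_civilizations = n_earthlike_planets
for p in steps.values():
    expected_civilizations *= p

print(f"Expected detectable civilizations: {expected_civilizations:.2e}")
# Even with these pessimistic-looking step probabilities the expectation is
# huge; since we observe none, at least one step must be vastly less likely
# than assumed here. That improbable step is the Great Filter.
```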