Created by: Matthew_Barnett and co-authors

If current physical theories are approximately correct, human extinction is inevitable. But humanity could also enjoy a very long future.

An existential risk, following Nick Bostrom, is one where "an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential". Here, we focus on a narrow type of existential risk: the risk of human extinction at the hands of artificial intelligence.

AI-driven human extinction is a trope of science fiction, but it is also a chief concern among researchers who study existential risks. In one informal survey conducted at the Future of Humanity Institute in 2008, the median respondent estimated a 5% chance that humanity would go extinct from AI by 2100.

Other information about a potential future AI catastrophe can be found in the body of this related question.