AI incident causes $1bn damage by 2032


Risks from artificial intelligence are considered by many to be one of the greatest threats to human civilisation in the coming centuries.

In his recent book The Precipice, Toby Ord places the risk of human extinction due to unaligned AI this century at 10%.

This question asks whether we will see large-scale incidents leading to loss of life or damage as a result of AI developments going wrong in the next ten years.

By 2032, will we see an event precipitated by AI malfunction that causes at least 100 deaths and/or at least $1bn (2021 USD) in economic damage?

This question resolves positively if there are three credible media reports indicating that an event precipitated by AI malfunction caused either 100+ deaths or $1bn (2021 USD) in economic damage before January 1st, 2032.

Multiple incidents stemming from the same source can count for resolution.

To count as precipitated by AI malfunction, an incident should involve an AI system behaving unexpectedly. Examples include an AI system autonomously driving cars causing hundreds of deaths that would have been easily avoidable for human drivers, or an AI system overseeing a hospital taking actions that cause patient deaths as a result of misinterpreting a goal to minimise bed usage.

If, for example, the Boeing MCAS system had been an AI system with no possibility for the pilots to override its decision to lower the aeroplane's nose, leading to a fatal crash, this would count for resolution.

An AI system being used in warfare and causing 100+ deaths in the course of its expected behaviour is an example of something which should not count.

A system should be considered AI if it is widely regarded as such (e.g. by the credible media reports resolving the question). If this is not sufficiently clear for resolution, then as a secondary criterion, any system using machine learning techniques which plays an agentic role in the disaster in question should count for this question.
