Risks from artificial intelligence are considered by many to be among the greatest threats to human civilisation in the coming centuries.
In his recent book The Precipice, Toby Ord places the risk of existential catastrophe due to unaligned AI this century at 10%.
This question asks whether we will see large-scale incidents leading to loss of life or economic damage as a result of AI developments going wrong in the next ten years.
By 2032, will we see an event precipitated by AI malfunction that causes at least 100 deaths and/or at least $1B 2021 USD in economic damage?
This question resolves positively if there are at least three credible media reports indicating that an event occurring before January 1st, 2032 was precipitated by AI malfunction and caused either 100+ deaths or $1B 2021 USD in economic damage.
Multiple incidents stemming from the same source (for example, many separate crashes caused by the same autonomous driving system) may be aggregated for resolution.
To count as precipitated by AI malfunction, an incident should involve an AI system behaving unexpectedly. Examples would include an autonomous driving system causing hundreds of deaths that would have been easily avoidable by human drivers, or an AI system overseeing a hospital taking actions that caused patient deaths as a result of misinterpreting a goal to minimise bed usage.
If, for example, the Boeing MCAS system had been an AI system and it had been impossible for the pilots to override its decision to lower the aeroplane's nose, leading to a fatal crash, this would count for resolution.
An AI system used in warfare that causes 100+ deaths in the course of its expected behaviour is an example of something that would not count.
A system should be considered AI if it is widely regarded as AI (e.g. by the credible media reports used for resolution). If this is not sufficiently clear, then as a secondary criterion, any system using machine learning techniques that plays an agentic role in the disaster should count.