You can now see an excellent visualization of the global catastrophic risk estimates produced in the Ragnarök series here.
Currently, artificial intelligence can outperform humans in a number of narrow domains, such as playing chess and searching data. As artificial intelligence researchers continue to make progress, though, these domains are highly likely to grow in number and breadth over time. Many experts now believe there is a significant chance that a machine superintelligence – a system that can outperform humans at all relevant intelligence tasks – will be developed within the next century, and possibly much sooner.
In a 2017 survey of artificial intelligence experts, respondents were asked about the effects of human-level machine intelligence. They assigned a median probability of 10% to a bad outcome and 5% to an outcome described as “Extremely Bad (e.g., human extinction).” Although selection bias, the large variance in responses (reflecting vast uncertainty), and the unreliability of subjective opinions mean that these estimates warrant skepticism, they nevertheless suggest that the possibility of superintelligence ought to be taken seriously.
In a 2008 survey at the Global Catastrophic Risk Conference in Oxford, participants were asked to make their best guess at the chance of disasters of different types occurring before 2100. The median estimate of the chance of 1 billion deaths caused by a superintelligent AI by 2100 was 5%. Interestingly, the median estimate of the chance of human extinction caused by a superintelligent AI was also 5%, suggesting that if an AI-failure-mode-induced catastrophe does occur, it is likely to be a terminal one for human civilisation.
When considering how AI might become a risk, experts think two scenarios are most likely (according to the Future of Life Institute):
- The AI is programmed to do something devastating: Autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply “turn off,” so humans could plausibly lose control of such a situation. This risk is one that’s present even with narrow AI, but grows as levels of AI intelligence and autonomy increase.
- The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met. As these examples illustrate, the concern about advanced AI isn’t malevolence but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we have a problem. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants.
In the first part of the Ragnarök Question Series, we asked: If a global catastrophe occurs, will it be due to an artificial intelligence failure-mode? Here we ask:
Given that an artificial intelligence failure-mode catastrophe occurs that results in a reduction of the global population of at least 10% by 2100, will the global population decline by more than 95% relative to the pre-catastrophe population?
The question resolves positive if such a catastrophe does occur, and the global population is less than 5% of the pre-catastrophe population at any point within 25 years of the catastrophe. It resolves ambiguous if an artificial intelligence global catastrophe that claims at least 10% of the global population (in any period of 5 years or less) does not occur. The question resolves negative if an artificial intelligence failure-mode induced global catastrophe occurs that claims at least 10% of the global population (in any period of 5 years or less), but the post-catastrophe population remains above 5% of the pre-catastrophe population.
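To make these thresholds concrete, here is a minimal sketch in Python of how the conditions above could map onto population figures. The function and variable names are hypothetical and this is not part of the formal resolution criteria, just an illustration of the logic:

```python
def resolve(pre_catastrophe_pop: float,
            min_pop_within_25_years: float,
            qualifying_catastrophe_occurred: bool) -> str:
    """Illustrative mapping of the resolution conditions to population figures.

    pre_catastrophe_pop: global population just before the catastrophe
    min_pop_within_25_years: lowest global population observed within
        25 years of the catastrophe
    qualifying_catastrophe_occurred: whether an AI failure-mode catastrophe
        claimed at least 10% of the global population within any 5-year period
    """
    if not qualifying_catastrophe_occurred:
        return "ambiguous"  # no qualifying catastrophe by 2100
    if min_pop_within_25_years < 0.05 * pre_catastrophe_pop:
        return "positive"   # population declined by more than 95%
    return "negative"       # catastrophe occurred, but decline stayed at or below 95%

# Example: a catastrophe reduces 8 billion people to 300 million at its lowest point,
# a decline of 96.25%, so the question would resolve positive.
print(resolve(8.0e9, 3.0e8, True))
```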
This question is part of the Ragnarök Question Series. Check out the other questions in the series:
- If a global biological catastrophe occurs, will it reduce the human population by 95% or more?
- If a nuclear catastrophe occurs, will it reduce the human population by 95% or more?
- If a global climate disaster occurs by 2100, will the human population decline by 95% or more?
Also, please check out our questions on whether a global catastrophe will occur by 2100, and if so, what its cause will be:
- By 2100 will the human population decrease by at least 10% during any period of 5 years?
- Will such a catastrophe be due to either human-made climate change or geoengineering?
- Will such a catastrophe be due to a nanotechnology failure-mode?
- Will such a catastrophe be due to an artificial intelligence failure-mode?
- Will such a catastrophe be due to biotechnology or bioengineered organisms?
All results are analysed here, and will be updated periodically.