Will Metaculus predict that artificial intelligence continues to pose a global catastrophic risk?

Currently, artificial intelligence can outperform humans in a number of narrow domains, such as playing chess and searching data. As artificial intelligence researchers continue to make progress, though, these domains are highly likely to grow in number and breadth over time. Many experts now believe there is a significant chance that a machine superintelligence – a system that can outperform humans at all relevant intelligence tasks – will be developed within the next century, and possibly much sooner.

As predictions on a previous question suggest, artificial intelligence might pose a global catastrophic risk (defined there as a 10% decrease in the world population in any period of 5 years). When considering how AI might become a risk, experts think two scenarios are most likely (according to the Future of Life Institute):

  1. The AI is programmed to do something devastating: Autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply “turn off,” so humans could plausibly lose control of such a situation. This risk is one that’s present even with narrow AI, but grows as levels of AI intelligence and autonomy increase.

  2. The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met. As these examples illustrate, the concern about advanced AI isn’t malevolence but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we have a problem.

Some believe that reducing the second of these two risks will require progress on technical methods for scalable control, which could ensure that an AI will be safe and will behave as its programmers intend even if its intellectual capabilities are increased to arbitrary levels. Until recently, this problem was almost entirely neglected; but in the last couple of years, technical research agendas have been developed, and there are now several research groups pursuing work in this area. Total investment in long-term AI safety, however, remains orders of magnitude less than investment in increasing AI capability. Reducing the first of the listed risks, meanwhile, might require improvements in our ability to control, govern, and coordinate on the usage of such systems, so as to reduce potential security threats from malicious uses of AI technologies.

But how certain are we that artificial intelligence will continue to be regarded as constituting a large share of global catastrophic risk, at least through 2040? A previous question asked: If a global catastrophe happens before 2100, will it be principally due to the deployment of some Artificial Intelligence system(s)?
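
For intuition about the conditional structure of that question, consider a hypothetical illustration (these numbers are not taken from either question): if the unconditional probability of a global catastrophe before 2100 were 20%, and the conditional probability that AI is its principal cause were 5%, the implied unconditional probability of an AI-driven catastrophe would be 0.20 × 0.05 = 1%.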

Will the probability (per both the Metaculus and community predictions) of artificial intelligence causing a global catastrophe (given that a global catastrophe does occur) remain above 5% in every 6-month period before 2040?

This question resolves positively if neither the Metaculus prediction nor the community prediction of artificial intelligence causing a global catastrophe falls below 5% in any 6-month period before 2040, as confirmed by one of the admins.
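
To make the criterion concrete, here is a minimal sketch of how such a check could be run mechanically, assuming the two prediction histories are available as date-to-probability mappings. The data structures, the 182-day window, and the handling of windows without data points are illustrative assumptions, not Metaculus's actual resolution procedure:

```python
from datetime import date, timedelta

def min_in_window(series, lo, hi):
    """Minimum predicted probability among points dated in [lo, hi)."""
    values = [p for d, p in series.items() if lo <= d < hi]
    return min(values) if values else None

def resolves_positively(metaculus, community, start, end, threshold=0.05):
    """True if neither prediction series falls below `threshold` in any
    ~6-month window between `start` and `end`.

    `metaculus` and `community` map dates to probabilities in [0, 1].
    Windows with no data points are treated as passing (an assumption;
    in practice an admin would confirm the resolution).
    """
    lo = start
    while lo < end:
        hi = min(lo + timedelta(days=182), end)  # roughly six months
        for series in (metaculus, community):
            lowest = min_in_window(series, lo, hi)
            if lowest is not None and lowest < threshold:
                return False
        lo = hi
    return True

# Toy usage with made-up data: the community series dips to 4% in late 2024,
# so the check fails even though the Metaculus series stays above 5%.
mc = {date(2024, 1, 1): 0.08, date(2024, 9, 1): 0.06}
cm = {date(2024, 2, 1): 0.07, date(2024, 8, 1): 0.04}
print(resolves_positively(mc, cm, date(2024, 1, 1), date(2040, 1, 1)))  # False
```

Note the design choice that both series must stay at or above the threshold: a dip in either one is sufficient for negative resolution, matching the "both ... fail to fall below" wording above.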
