
Ragnarök Question Series: overview and upcoming questions

It’s dangerous to be alive and risks are everywhere. But not all risks are created equal. Those that are especially large in scope and severe in intensity are global catastrophic risks: risks that could inflict serious damage to human well-being on a global scale.

Until relatively recently, most global catastrophic risks were natural, such as the supervolcano episodes and asteroidal/cometary impacts that led to mass extinctions millions of years ago. Other natural risks include pandemics of naturally occurring disease, non-anthropogenic climate change, supernovae, gamma-ray bursts, and the spontaneous decay of the cosmic vacuum state. Humanity has survived these natural risks for hundreds of thousands of years, which suggests it is unlikely that any of them will do us in within the next hundred.

By contrast, through technological advances our species is introducing entirely new kinds of risk: anthropogenic risks, man-made threats that we have no track record of surviving. Our longevity as a species therefore offers no strong prior grounds for confident optimism. Examples of anthropogenic risks are nuclear war, advanced artificial intelligence, biotechnology and bioengineered organisms, human-made climate change, and nanotechnology.

It is an interesting strain of self-absorption that allows humans to take such pleasure in media depictions of our own extinction events. We can’t help but obsess over our apocalyptic visions. Despite this, it is striking how little careful scientific attention these issues have received compared to other topics that are much less important. We therefore want our best predictors on the case, thinking carefully about the different failure modes and assigning probabilities to the various bangs and whimpers.

The series begins with the headline question By 2100 will the human population decrease by at least 10% during any period of 5 years? This might be best answered by listing specific risks and assigning probabilities to each causing roughly 0.7–1 billion deaths (about 10% of a world population in the 7–10 billion range) in some 5-year period before 2100.

Suppose a global catastrophe does happen before 2100 and the world population declines by at least 10% within some 5-year period: which risk is the likely culprit? In the second part of the question series, we want you to predict the conditional probability of each risk causing a global catastrophe, given that such a catastrophe does happen. Important note: make sure that your probabilistic predictions add up to at most 100%!
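To make that bookkeeping concrete, here is a minimal sketch in Python of how the headline probability and these conditional probabilities relate. Every number in it is invented purely for illustration; the risk categories mirror the sub-questions listed below, and none of the values are forecasts.

```python
# Toy bookkeeping for the question series. Every number here is invented
# for illustration only -- none of these are forecasts.

# Hypothetical unconditional probabilities that each risk causes a >=10%
# population decline in some 5-year period before 2100, treating the
# causes as mutually exclusive for simplicity.
p_cause = {
    "nuclear war": 0.010,
    "biotechnology / bioengineered organisms": 0.008,
    "artificial intelligence": 0.005,
    "climate change / geoengineering": 0.002,
    "nanotechnology": 0.001,
    "other / unlisted causes": 0.004,
}

# Headline question: probability that *some* such catastrophe occurs.
p_catastrophe = sum(p_cause.values())

# Conditional probability of each cause given that a catastrophe occurs:
# P(cause | catastrophe) = P(cause and catastrophe) / P(catastrophe).
p_given = {risk: p / p_catastrophe for risk, p in p_cause.items()}

for risk, p in sorted(p_given.items(), key=lambda kv: -kv[1]):
    print(f"P({risk} | catastrophe) = {p:.1%}")

# The named risks sum to at most 100%; the gap is the probability mass
# reserved for causes the series has not listed.
named = sum(p for risk, p in p_given.items() if risk != "other / unlisted causes")
assert named <= 1.0
print(f"Named risks together: {named:.1%}")
```

This is why the conditionals should sum to at most 100% rather than exactly 100%: the five listed risks need not exhaust the ways a catastrophe could happen.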

  1. If a global catastrophe occurs, will it be due to human-made climate change or geoengineering?

Relevant questions: How much global warming by 2100?, 2˚C global warming by 2100?, Will the European Union meet its 2030 targets under the Paris Climate Treaty?

  2. If a global catastrophe occurs, will it be due to biotechnology or bioengineered organisms?

Relevant questions: Will a terrorist group reportedly obtain viable bioweapon sample?, A devastating bioterror attack by 2025?, Attack using a genetically engineered virus by 2020?, A significant bioterror attack by 2025?, A new Spanish Flu?

  3. If a global catastrophe occurs, will it be due to a nanotechnology failure mode?

Relevant questions: none yet; got any ideas for short-term questions that would inform us?

  4. If a global catastrophe occurs, will it be due to an artificial intelligence failure mode?

Relevant questions: Human-machine intelligence parity by 2040?, Will AI Progress Surprise Us?

  5. If a global catastrophe occurs, will it be due to nuclear war?

Relevant questions: Will a country that has nuclear weapons actually give them up by 2035?, Will the world still have nuclear weapons through 2075?


Upcoming are questions about non-anthropogenic, or natural, risks such as natural pandemics, supervolcanoes, supernovae, gamma-ray bursts, and the spontaneous decay of the cosmic vacuum state.

Although our species might recover from most global catastrophes, some are likely to be so severe that they permanently and drastically curtail the potential of Earth-originating intelligent life; complete extinction is the most extreme case.

Hence, also upcoming are questions about whether, when a catastrophe occurs that kills at least 10% of the human population (an artificial intelligence catastrophe, a nuclear catastrophe, etc.), the human species actually goes extinct.

For predictions on these existential risks, and on global catastrophic risks more generally, it also makes sense to keep in mind the so-called Doomsday argument, which purports to show that we have systematically underestimated the probability that humankind will go extinct relatively soon. Moreover, the Fermi Paradox tells us that it cannot be that life both evolves on a significant fraction of Earth-like planets and reliably proceeds to develop advanced, detectable technology. Hence there must be (at least) one Great Filter – an evolutionary step that is extremely improbable – somewhere on the line between Earth-like planet and a civilization that colonizes in detectable ways. And if the Great Filter isn’t in our past, we must fear it in our (near) future.
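One simple way to formalize Doomsday-style reasoning is Gott’s “delta t” argument: if we are observing humanity’s total lifespan at a uniformly random moment, then with 95% confidence the future duration lies between 1/39 and 39 times the past duration. The sketch below applies it using a rough 200,000-year figure for the age of Homo sapiens; both the choice of formalization and that figure are assumptions made here for illustration.

```python
# A minimal sketch of Gott's "delta t" version of the Doomsday argument.
# Assumption: we are observing humanity's total lifespan at a uniformly
# random moment. Then with confidence c, the future duration t_f obeys
#   t_past * (1 - c) / (1 + c)  <=  t_f  <=  t_past * (1 + c) / (1 - c).

t_past = 200_000  # rough, commonly cited age of Homo sapiens in years
c = 0.95          # confidence level

lower = t_past * (1 - c) / (1 + c)   # = t_past / 39, roughly 5,100 years
upper = t_past * (1 + c) / (1 - c)   # = t_past * 39, roughly 7.8 million years

print(f"With {c:.0%} confidence, between {lower:,.0f} and {upper:,.0f} "
      f"more years of human history remain.")
```

Note that this version conditions only on elapsed time; the Carter–Leslie formulation, which instead conditions on one’s birth rank among all humans ever born, tends to imply shorter futures because population has grown so rapidly.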