It is dangerous to be alive, and risks are everywhere. But not all risks are created equal. Those that are especially large in scope and severe in intensity are global catastrophic risks: risks that could inflict serious damage to human well-being on a global scale.
Until relatively recently, most global catastrophic risks were natural, such as the supervolcano episodes and asteroidal or cometary impacts that led to mass extinctions millions of years ago. Other natural risks include a pandemic of naturally occurring disease, non-anthropogenic climate change, supernovae, gamma-ray bursts, and spontaneous decay of the cosmic vacuum state. Humanity has survived these natural existential risks for hundreds of thousands of years, which suggests that it is not any of these that will do us in within the next hundred.
By contrast, through technological advances, our species is introducing entirely new kinds of risks: anthropogenic risks, man-made threats that we have no track record of surviving. Our longevity as a species therefore offers no strong prior grounds for confident optimism. Examples of anthropogenic risks are nuclear war, advanced artificial intelligence, biotechnology and bioengineered organisms, human-made climate change, and nanotechnology.
There are two complementary ways of estimating the chances of catastrophe. What we could call the direct way is to analyze the various specific failure modes, assign them probabilities, and combine the results, which is, at least partially, what the questions in the Ragnarök series are designed to do.
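As a rough illustration of the direct approach, the sketch below combines per-cause probabilities into an overall chance of catastrophe under an independence assumption. The numbers are invented placeholders, not forecasts from this series.

```python
# A minimal sketch of the "direct" estimate: combine per-cause catastrophe
# probabilities into an overall chance. The figures are illustrative
# placeholders, NOT actual forecasts from the Ragnarök series.
failure_modes = {
    "nuclear war": 0.01,
    "engineered pandemic": 0.02,
    "artificial intelligence": 0.05,
    "climate change / geoengineering": 0.005,
    "nanotechnology": 0.005,
}

# Assuming (unrealistically) that the failure modes are independent,
# P(at least one catastrophe) = 1 - prod(1 - p_i).
p_none = 1.0
for p in failure_modes.values():
    p_none *= 1.0 - p

print(f"P(at least one catastrophe) ~= {1.0 - p_none:.3f}")
```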
Second, there is the indirect way. As Nick Bostrom has argued, there are theoretical constraints that can be brought to bear on the issue, based on some general features of the world in which we live. There are only a small number of these, but they are important because they do not rely on making a lot of guesses about the details of future technological and social developments. One example is the so-called Doomsday argument, which purports to show that we have systematically underestimated the probability that humankind will go extinct relatively soon.
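To give the flavor of the reasoning, here is a minimal sketch of the Carter–Leslie (Bayesian) form of the argument, with illustrative numbers of our own choosing rather than figures from Bostrom. Let S be the hypothesis that a total of 2×10^11 humans will ever live ("doom soon") and L the hypothesis that 2×10^14 will ("doom late"), and suppose your birth rank n is about 10^11. Treating yourself as a random sample from all humans who will ever live (the self-sampling assumption), Bayes' theorem gives:

$$
\frac{P(S \mid n)}{P(L \mid n)} = \frac{P(n \mid S)}{P(n \mid L)} \cdot \frac{P(S)}{P(L)} = \frac{1/(2 \times 10^{11})}{1/(2 \times 10^{14})} \cdot \frac{P(S)}{P(L)} = 1000 \cdot \frac{P(S)}{P(L)}
$$

Whatever prior odds one assigns, learning one's birth rank shifts them by a factor of 1,000 toward the earlier doom; this is the systematic underestimate the argument alleges.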
Moreover, the Fermi Paradox tells us that it is not the case that life evolves on a significant fraction of Earth-like planets and proceeds to develop advanced, detectable technology. Hence, there must be (at least) one Great Filter, an evolutionary step that is extremely improbable, somewhere on the line between Earth-like planet and colonizing-in-detectable-ways civilization. If the Great Filter isn't in our past, we must fear it in our (near) future.
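A minimal sketch of the Great Filter arithmetic, under assumed numbers that are purely illustrative (neither Hanson's nor Bostrom's): if reaching a detectable civilization requires passing several hard steps, the silence of the galaxy bounds how easy the hardest step can be.

```python
# Illustrative Great Filter arithmetic; all figures are assumptions.
N_PLANETS = 1e10   # assumed Earth-like planets in the galaxy
STEPS = 9          # assumed number of evolutionary "hard steps"

# Observing no detectable civilizations suggests the per-planet success
# probability is at most ~1/N_PLANETS (else we'd expect to see someone).
p_success = 1 / N_PLANETS

# If success requires passing all STEPS, and step i has probability p_i,
# then prod(p_i) = p_success, so the hardest step satisfies
# min(p_i) <= p_success ** (1 / STEPS)  (geometric mean bound).
bound = p_success ** (1 / STEPS)
print(f"At least one step has probability <= {bound:.3f}")  # ~0.077
```

Even spread over nine steps, at least one step must succeed less than about 8% of the time per planet; with fewer steps, the hardest one must be far less probable still.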
By 2100 will the human population decrease by at least 10% during any period of 5 years or less?
This question resolves positively if the human population (on Earth, and possibly elsewhere) decreases by at least 10% in any period of 5 years or less. Years are here defined as consecutive calendar years.
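As a worked example of the resolution criterion, the check below uses invented population figures, not projections:

```python
# A minimal sketch of the resolution check against hypothetical population
# figures (in billions); these numbers are invented for illustration only.
population = {2058: 8.0, 2059: 8.0, 2060: 7.9, 2061: 7.5, 2062: 7.1}

years = sorted(population)
# Five consecutive calendar years (e.g. 2058-2062) differ by at most 4.
resolves_positively = any(
    population[later] <= 0.9 * population[earlier]
    for earlier in years
    for later in years
    if 0 < later - earlier <= 4
)
print(resolves_positively)  # True: 7.1 is an 11.25% drop from 8.0
```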
This question is part of the Ragnarök Question Series. Check out the other questions in the series:
- If a global biological catastrophe occurs, will it reduce the human population by 95% or more?
- If a nuclear catastrophe occurs, will it reduce the human population by 95% or more?
- If a global climate disaster occurs by 2100, will the human population decline by 95% or more?
Also, please check out our questions on whether a global catastrophe will occur by 2100 and, if so, what its cause will be:
- By 2100 will the human population decrease by at least 10% during any period of 5 years?
- Will such a catastrophe be due to either human-made climate change or geoengineering?
- Will such a catastrophe be due to a nanotechnology failure-mode?
- Will such a catastrophe be due to an artificial intelligence failure-mode?
- Will such a catastrophe be due to biotechnology or bioengineered organisms?
All results are analyzed here and will be updated periodically.