Nanotechnology GC to cause (near) extinction

You can now see an excellent visualization of the global catastrophic risk estimates produced in the Ragnarök series here.

In 1959, Richard Feynman pointed out that nanometre-scale machines could be built and operated, and that the precision inherent in molecular construction would make it easy to build multiple identical copies. This raised the possibility of manufacturing at ever-increasing speeds, in which production systems could rapidly and cheaply increase their productive capacity. This in turn suggested the possibility of destructive runaway self-replication.

As Eric Drexler, a nanotech pioneer, first warned in Engines of Creation in 1986 (p. 146):

In a mature form, molecular nanotechnology would enable the construction of bacterium-scale self-replicating mechanical robots that can feed on dirt or other organic matter. Such replicators could eat up the biosphere or destroy it by other means such as by poisoning it, burning it, or blocking out sunlight.

Plants with ‘leaves’ no more efficient than today’s solar cells could out‐compete real plants, crowding the biosphere with an inedible foliage. Tough omnivorous “bacteria” could out‐compete real bacteria: They could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days. A person of malicious intent in possession of this technology might cause a catastrophe on Earth by releasing such nanobots into the environment.

Such self-replicating systems, if not countered, could make the earth largely uninhabitable. Other potential risks include ecological and health disasters resulting from nano-pollutants, the use or misuse of nanotechnology weaponry, and, given the general-purpose character of nanotech, possibly much more.
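To get a sense of why "a matter of days" is arithmetically plausible, here is a minimal back-of-the-envelope sketch. The replicator mass, biosphere mass, and 100-minute doubling time are illustrative assumptions of ours, not figures from Drexler or the sources above:

```python
import math

# Illustrative assumptions (ours, not from the sources above):
replicator_mass_kg = 1e-15   # one bacterium-scale replicator (~1 picogram)
biosphere_mass_kg = 1e15     # rough order of magnitude for Earth's biomass
doubling_time_min = 100      # assumed replication doubling time

# Doublings needed for one replicator to match the biosphere's mass,
# assuming unchecked exponential replication
doublings = math.log2(biosphere_mass_kg / replicator_mass_kg)
total_minutes = doublings * doubling_time_min

print(f"doublings needed: {doublings:.0f}")                   # ~100
print(f"time required:    {total_minutes / 1440:.1f} days")   # ~6.9 days
```

Because the growth is exponential, the timescale is insensitive to these assumptions: changing either mass estimate by a factor of 1,000 adds or removes only about 10 doublings.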

A recent paper evaluating the opportunities and risks of atomically precise manufacturing (APM) argues that the risks might be greatest in military affairs, and specifically rogue-actor violence:

A more significant concern for military APM comes from the potential dangers of rogue actors, including rogue states such as DPRK as well as terrorist groups and other nonstate actors. Over the last two decades, rogue actors have been an increasingly prominent concern for the international community. Looking ahead, some worry that advances in certain technologies, especially biotechnology, could enable rogue actors to cause outsized harm, potentially even a major global catastrophe (e.g., Rees, 2003). APM could also enable a wider range of rogue actors to create powerful arsenals. APM could further make these arsenals smaller and thus easier to conceal. In this regard, APM could be considered similar to biotechnology. This makes for a major risk: a world in which small rogue groups can cause global harm is a fragile world to live in.

Although only a small portion of scientists may currently be working to develop self-replicating nanotech, a recent study done for NASA's Institute for Advanced Concepts by General Dynamics Advanced Information Systems suggests that a useful self-replicating machine could be less complex than a Pentium 4 chip, and it uncovered no roadblocks to extending macroscale self-replicating systems to the microscale and then to the nanoscale. Drexler points out that much recent surprising progress comes from disparate fields and is not generally labelled "nanotechnology".

In the first part of the Ragnarök Question Series, we asked: if a global catastrophe occurs, will it be due to a nanotechnology failure-mode? Here we ask:

Given that a nanotechnology catastrophe occurs by 2100 that reduces the global population by at least 10%, will the global population decline by more than 95% relative to the pre-catastrophe population?

The question resolves ambiguous if a global nanotechnology catastrophe that claims at least 10% of the global population (in any period of 5 years or less) does not occur by 2100. It resolves positively if such a catastrophe does occur and the global population falls below 5% of the pre-catastrophe population at any point within 25 years of the catastrophe.

The question resolves negative if a global nanotechnology catastrophe occurs that claims at least 10% of the global population (in any period of 5 years or less) and the post-catastrophe population remains above 5% of the pre-catastrophe population over the subsequent 25 years.
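To make the criteria concrete, the sketch below expresses the resolution logic as a short procedure. It is an unofficial illustration: the function name and data format are ours, it assumes annual population estimates, and the actual resolution will of course be determined by the moderators:

```python
def resolve(pre_catastrophe_pop: float, post_pop_by_year: list[float]) -> str:
    """Unofficial sketch of the resolution logic described above.

    pre_catastrophe_pop: global population just before the catastrophe.
    post_pop_by_year:    annual global population estimates for the years
                         following the catastrophe.
    Assumes a qualifying catastrophe (>=10% loss within any 5-year period)
    has already occurred by 2100; otherwise the question is ambiguous.
    """
    threshold = 0.05 * pre_catastrophe_pop
    window = post_pop_by_year[:25]  # the 25 years after the catastrophe
    # Positive: population falls below 5% of the pre-catastrophe level
    # at any point within 25 years of the catastrophe.
    if any(pop < threshold for pop in window):
        return "positive"
    # Negative: population stays above 5% throughout the 25-year window.
    return "negative"

# Example: a 10% catastrophe after which the population holds steady
print(resolve(8e9, [7.2e9] * 25))  # -> "negative"
```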


This question is part of the Ragnarök Question Series. Check out the other questions in the series:

  1. If a global biological catastrophe occurs, will it reduce the human population by 95% or more?

  2. If an artificial intelligence catastrophe occurs, will it reduce the human population by 95% or more?

  3. If a nuclear catastrophe occurs, will it reduce the human population by 95% or more?

  4. If a global climate disaster occurs by 2100, will the human population decline by 95% or more?

  5. If a global nanotechnology catastrophe occurs by 2100, will the human population decline by 95% or more?

Also, please check out our questions on whether a global catastrophe will occur by 2100, and if so, which?:

  1. By 2100 will the human population decrease by at least 10% during any period of 5 years?

  2. Will such a catastrophe be due to either human-made climate change or geoengineering?

  3. Will such a catastrophe be due to a nanotechnology failure-mode?

  4. Will such a catastrophe be due to nuclear war?

  5. Will such a catastrophe be due to an artificial intelligence failure-mode?

  6. Will such a catastrophe be due to biotechnology or bioengineered organisms?

All results are analysed here, and will be updated periodically.
