
Looking ahead to 2050: Charting scenarios without transformative AI

{{"estimatedReadingTime" | translate:({minutes: qctrl.question.estimateReadingTime()})}}
AI Progress Essay Contest

This essay was submitted to the AI Progress Essay Contest, an initiative that focused on the timing and impact of transformative artificial intelligence. You can read the results of the contest and the winning essays here.


Metaculus hosts many questions about the second half of the 21st century. These can be hard to answer if you think AI is likely to be transforming the world by then. Instead of going to longer-term questions and asking, "How does the question resolve if the Earth is now a Dyson sphere?", I decided to look at the subset of possible futures without transformative AI.

A baseline scenario with transformative AI by 2050 can be constructed starting from Ajeya Cotra's document on biological anchors (henceforth "bioanchors"), but taking inspiration from Fun with +12 OOMs of Compute to assume training computation requirements are on the low side. Concretely, the short-horizon neural net anchor, keeping bioanchors's estimates for the other variables, implies a 2042 median (p16).

The purpose here isn't to argue for a particular estimate, but to define a fixed reference point to explore variation around. Still, this timeline isn't far from my own (unstable, uncertain) best guess: adopting the short-horizon neural net anchor may produce timelines that are too “optimistic”, but on the other hand, I'd guess bioanchors underestimates “breakthrough” algorithmic progress. (If you expect faster or slower progress, claims here may carry over to, say, 2035 or 2075.)

Reality could diverge from this baseline in several ways. Some imply post-2050 worlds without transformative AI. Others shorten transformative AI timelines, perhaps canceling “negative” surprises; I've decided to bracket those possibilities out of what was already becoming a long essay.

I'll follow bioanchors's definition of "transformative AI" as meaning AI that, if deployed, would cause change comparable to the agricultural or industrial revolutions, speeding up economic growth by at least an order of magnitude.

Bioanchors thoroughly explores definitions, variant assumptions, and possible objections, and the following discussion will paper over many subtleties. But basically, bioanchors converts different considerations into a common currency of orders of magnitude (OOMs) of compute. There's an unknown number of OOMs it would take to create a transformative model with 2020 algorithms. Humanity adds more OOMs to its efforts over time, through a mix of investing more dollars in training runs, getting more compute per dollar from hardware improvement (a variant of Moore's Law), and getting more intelligence out of a given amount of compute by improving AI algorithms. At some point, the “compute needed” and “compute applied” lines cross.

For an indication of how OOMs translate to time delays, note that the short-horizon and long-horizon neural net anchors start out with a 6 OOM difference in computation requirements, falling to 4 OOMs around 2050 because of algorithmic progress, and imply medians in the early 2040s and 2060s respectively.
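
To make the crossover picture concrete, here is a minimal toy sketch in Python. The growth rates are placeholders I chose so the outputs land near the medians quoted above; they are not bioanchors's actual parameters, and the model ignores bioanchors's uncertainty distributions and the differing per-anchor rates of algorithmic progress that make the gap between anchors shrink over time.

```python
# Toy version of the bioanchors-style crossover logic. All rates below are
# illustrative placeholders, not Cotra's actual estimates.

def crossover_year(needed_ooms: float,
                   spending_growth: float = 0.10,  # OOMs/year from bigger training budgets
                   hardware_growth: float = 0.12,  # OOMs/year from cheaper compute
                   algo_growth: float = 0.08       # OOMs/year of effective compute from better algorithms
                   ) -> float:
    """Year in which effective 'compute applied' first reaches 'compute needed',
    both measured in OOMs beyond a 2020 reference training run."""
    ooms_per_year = spending_growth + hardware_growth + algo_growth
    return 2020 + needed_ooms / ooms_per_year

# A 6-OOM gap in requirements shifts the crossover by about two decades
# under these made-up rates:
print(round(crossover_year(7)))    # ~2043 (short-horizon-like requirement)
print(round(crossover_year(13)))   # ~2063 (long-horizon-like requirement)
```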

I've used this tool to visualize guesses on how likely the world is to fall within some discrete bins. I'll first discuss the bins, defining them loosely (leaving room to argue about whether they overlap and exhaust the space) and commenting on their probabilities and resulting futures.


"Hard AI"

Creating a transformative model takes much more compute than the baseline expects, and this is the main reason why it isn't done by 2050. This implies the strategies from Fun with +12 OOMs of Compute don't suffice for transformative AI. In bioanchors's terms, maybe the right anchor is one of the harder ones: long-horizon neural nets, evolution, or "above all anchors". Alternatively, maybe its estimates of the computing power of the human brain, or its other adjustments, were too optimistic.

There's still a lot of compute being cleverly applied. So maybe AI is solving a lot of “shallow” problems, but without this constituting a transformation, or causing one through accelerating research.

Depending on just how disappointing the compute requirements turn out to be, we may still see transformative AI in the decades shortly after 2050.

This case seems relatively likely because it only requires one surprise and because, unlike "hardware plateau" and "software disappointment" below, there's room for it to have enough impact by itself.


"Hardware Plateau"

The main reason why transformative AI isn't invented by 2050 is that our computer hardware doesn't advance enough.

In bioanchors, between 2025 and 2050, hardware improves by a (highly uncertain) factor of 1000. (Bioanchors guesses there’s a factor of 140 left from continued straightforward progress, so the rest has to come from exotic new computing paradigms.)
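
As back-of-the-envelope arithmetic on these figures (my own conversion, not taken from bioanchors):

```python
import math

# Rough arithmetic on the hardware assumption above (illustrative only).
total_factor = 1000   # assumed 2025-2050 hardware improvement
years = 25

annual = total_factor ** (1 / years)
print(round(annual, 2))                          # ~1.32x improvement per year
print(round(math.log(2) / math.log(annual), 1))  # i.e. a doubling time of ~2.5 years

straightforward = math.log10(140)                # OOMs from continued straightforward progress
print(round(straightforward, 2))                 # ~2.15 OOMs
print(round(math.log10(total_factor) - straightforward, 2))  # ~0.85 OOM (~7x) left for new paradigms
```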

This means there aren’t many OOMs in play, so I put a low probability on this being the main cause by itself.

Better algorithms and higher spending could compensate for the few lost OOMs soon after 2050.


"Software Disappointment"

The main reason why transformative AI isn't invented by 2050 is that AI algorithms improve much more slowly than expected.

In bioanchors, gradual algorithmic progress adds a few OOMs by 2050, e.g. 2, 3, and 4 OOMs respectively for the short-, medium-, and long-horizon neural net anchors. For breakthrough progress, bioanchors only reassigns some percentage points of probability from "above all anchors" to the other anchors.
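
As a rough conversion of those figures into halving times for training-compute requirements (my own back-of-the-envelope arithmetic, assuming the gains accrue evenly over 2020-2050, which is a simplification of how bioanchors models it):

```python
import math

def implied_halving_time(ooms_by_2050: float, years: float = 30.0) -> float:
    """Years per halving of required compute, if 'ooms_by_2050' OOMs of
    algorithmic progress accrue evenly over 2020-2050."""
    ooms_per_year = ooms_by_2050 / years
    return math.log10(2) / ooms_per_year

for ooms in (2, 3, 4):
    print(ooms, round(implied_halving_time(ooms), 1))
# -> roughly 4.5, 3.0, and 2.3 years per halving
```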

As with a hardware plateau, there aren't many OOMs of disappointment to work with. So again, I put a low probability on this being the single main cause of our inability to build transformative AI.

A somewhat "brute force" transformative AI may come not long after, due to improved hardware and higher spending. A lack of breakthrough ideas suggests relatively continuous progress.


"Untransformative AI"

We invent AI that's as advanced as in the baseline case, but not as transformative. Maybe a transformation of the economy depends on the automation of processes that people can’t or won’t apply AI to. Or maybe superintelligent AI is developed, but decides to transcend into a world of hyper-advanced physics, leaving without major side effects on human civilization. Or, similar to the "Stabilized World" case below, where humanity coordinates on not developing advanced AI, humanity could successfully develop advanced AI but coordinate not to deploy it.

For various reasons, it seems hard to make these scenarios work. If they do, whatever caused AI to be untransformative could also apply to other technologies, like whole brain emulation, later in the century.


"Ruined World"

A catastrophe happens before transformative AI would have been attained, wiping out humanity or crippling its ability to build advanced computer hardware, make much research progress, or invest much money in training runs. One can imagine nuclear war, a natural or artificial pandemic, or a war with nanotechnological or other futuristic weapons.

This seems clearly possible, but not very likely per decade right now. However, maybe future developments, including around AI, will strongly destabilize world politics.

I'd guess recovery would take decades rather than centuries, but uncertainty in recovery speed and in the nature of the recovered society allows a wide range of possibilities for the farther future.


"Distracted World"

Despite an absence of surprising technical difficulties, humanity decides to focus on things other than AI. This could represent an "AI winter" where funders decide the technology doesn't look promising (mistakenly, or we would be in one of the "Hard AI" categories), or it could come about because of cultural changes. Maybe some huge new problem shows up that takes all the attention of the people who would normally make AI progress.

It seems like the current forces causing people to care about AI are stable and would take decades to change, so I put a low probability on this.


"Stabilized World"

Relevant actors like regulators coordinate to prevent or delay major investment in projects to develop advanced AI, and this prevents it from being developed by 2050.

This seems hard because it would require controlling the behavior of many actors against strong incentives. On the other hand, if other safety strategies fail, people may come to see strong reasons to try, maybe new technologies will facilitate coordination, and delay is probably easier than outright prevention.


"Stagnant World"

The world becomes increasingly incompetent at carrying out large projects and making intellectual progress. This slows both hardware and software R&D, and probably also economic growth and investment in AI projects, preventing transformative AI from being invented by 2050.

My sense is this isn't happening quickly enough to play a major part in the next couple of decades, so I assigned a pretty low probability.


"Hard AI with Hardware Plateau"

Neither a surprisingly high 2020 training computation requirement nor slow hardware progress is individually the main factor delaying transformative AI, but both provide a few OOMs of disappointment.

In this world, algorithms keep improving, and investment keeps rising.

Whether AI is hard seems mostly independent of whether hardware stagnates, even though AI applications may motivate hardware R&D, and pre-transformative AI may aid chip design.

So this case requires two mostly independent but smaller surprises. On the whole, I think it's one of the more likely cases.


"Hard AI with Software Disappointment"

Neither a surprisingly high 2020 computation requirement nor slow algorithmic progress is individually the main factor delaying transformative AI, but both provide a few OOMs of disappointment.

In this world, Moore's Law goes on, and investment continues to go up. A long build-up of hardware, but without the software to make use of it, could imply an abrupt software-limited singularity from later software breakthroughs.

Whether AI is hard (in the sense of high 2020 training computation requirements) seems mostly independent of whether algorithmic progress ends up disappointing.

As before, this requires two mostly independent but smaller surprises.


"Hardware Plateau with Software Disappointment"

Neither slow hardware progress nor slow algorithmic progress is individually the main factor delaying transformative AI, but both provide a few OOMs of disappointment.

Here, if transformative AI is attained, it's probably because a Manhattan/Apollo-like project throws a lot of money at the problem, perhaps aided by high economic growth.

As previously, this loses some plausibility by needing the baseline to be off in two dimensions. Some correlated ways in which both hardware and software R&D could disappoint are covered by other cases.


"Hard AI with Hardware Plateau and Software Disappointment"

The reasoning above for double combinations applies to this triple combination as well.


My subjective distribution, given no transformative AI by 2050, looks as follows:

Where does this leave us relative to the Metaculus community's views?

It's tricky to assemble the community's estimates into a single coherent worldview. The median predictor on non-AI far-future questions may be under-updating on the community's AI estimates, whether because users are individually under-updating, because different questions are answered by different populations, or because users are conditioning on no transformative AI as part of the implied rules.

For example, combining "time to AGI" with "time from AGI to superintelligence" and comparing the result to predictions on accelerated economic growth suggests a high probability on the “Untransformative AI” case above, but I don't think this conclusion can be taken at face value.
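
As an illustration of the kind of combination I mean, here is a minimal Monte Carlo sketch. The two distributions are made-up placeholders, not the actual community predictions, and 2022 is taken as "now" for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical stand-ins for "years until AGI" and "years from AGI to
# superintelligence"; replace with the real community distributions.
years_to_agi = rng.lognormal(mean=np.log(20), sigma=0.6, size=n)
agi_to_superintelligence = rng.lognormal(mean=np.log(5), sigma=1.0, size=n)

year_of_superintelligence = 2022 + years_to_agi + agi_to_superintelligence

# Implied probability of superintelligence before some cutoff year, to be
# compared against predictions on accelerated economic growth by that date:
print((year_of_superintelligence < 2075).mean())
```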


On the AGI questions, the community estimate at least seems close enough that a baseline with transformative AI before 2050 is informative. Also note that Metaculus expects compute to be scaled up pretty quickly, though with much uncertainty:


Writing this essay has weakly reinforced my own sense that roadblocks to transformative AI are temporary. If it doesn't happen by 2050, it likely still happens by 2100, maybe via technologies like genetic engineering or whole brain emulation. It's also reminded me to smooth my distributions. A wide variety of possible delays of uncertain size add up to a distribution that doesn't change sharply in any few-year period.

But this essay is meant as an attempt to organize uncertainty, to provide a conceptual picture, rather than to give well-founded probabilities. Progress in that direction could come from future numerical models, like bioanchors but with e.g. an annual probability of various dynamics inducing some OOMs of delay. And it seems similarly useful to split out and model surprises in the direction of faster timelines, like human intelligence enhancement, AI-powered research assistance, or unexpected qualitative jumps in AI theory.


Footnotes:

  1. Thanks to Tamay Besiroglu for helpful review comments.
  2. For lack of a better alternative, I use words like “optimistic” for considerations that point to transformative AI being invented sooner, and words like “disappointment” for considerations that point to transformative AI being invented later, even though I don’t claim fast timelines are desirable on expectation.

