Will transformative AI come with a bang?

{{"estimatedReadingTime" | translate:({minutes: qctrl.question.estimateReadingTime()})}}
AI Progress Essay Contest

This essay was submitted to the AI Progress Essay Contest, an initiative that focused on the timing and impact of transformative artificial intelligence.


In the early days of AI safety, people tended to think of AI progress primarily in terms of self-improving, exponentially growing systems that would ultimately result in a singularity (see e.g. Bostrom's Superintelligence or the work of MIRI).

One assumption of this view is that, after some point in time, capabilities improve very rapidly, i.e. TAI comes with a bang. However, due to advances in Deep Learning and a shift in hypotheses around timelines, some people now think that TAI will be more boring and just steadily get better and better in a less steep fashion. This view doesn't imply that alignment is less important, just that we might have to adapt our expectations and strategies; for example, it might be harder to argue that AI systems are dangerous when each one is only slightly better than its predecessor, without a clear cut-off point.

In this essay, I want to a) explain takeoff speeds and discuss possible operationalizations, b) use multiple existing forecasts to evaluate what the community currently believes about takeoff speeds and c) try to combine them into a realistic and coherent conclusion.


Definition

On a high level, the speed of AI takeoff describes the length of the period between reaching roughly human-level AI and reaching superintelligence (see e.g. Cotton-Barratt & Ord 2014). 

Operationalizing takeoff speed is harder, and we could choose different approaches that all have their shortcomings. A fast takeoff could, for example, be defined as

  1. A sharp increase (e.g. 30%) in global GDP growth within 1 year following TAI. 
  2. A sharp increase (e.g. 30%) in automation within 1 year following TAI. 
  3. A Nobel prize-worthy discovery within 1 year after TAI. 
  4. A short timespan between AGI and superintelligence. Rather than trying to find proxies for a fast takeoff, we could try to predict takeoff speed itself. 

The above operationalizations have various problems: TAI is sometimes defined via GDP growth, which would make the first definition circular; big jumps in automation and Nobel prize-worthy discoveries might already be possible with narrow systems; predicting takeoff directly depends a lot on the exact definitions of TAI and superintelligence; and so on.

Furthermore, these different operationalizations could lead to very different conclusions, so they should be seen as intuition guides for the different ways in which takeoff speed can be interpreted. To get a well-rounded picture of takeoff speeds, we want to take multiple views into account.

In the rest of this essay, I will somewhat arbitrarily take a fast takeoff to mean less than two years between AGI and superintelligence (which is already conservative) and call everything else a slow takeoff.
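To make this working definition concrete, here is a minimal sketch in Python (the dates are purely illustrative, not predictions):

```python
from datetime import date

def takeoff_speed(agi_date: date, sai_date: date) -> str:
    """Classify a takeoff under this essay's working definition:
    fast if superintelligence arrives less than two years after AGI."""
    gap_years = (sai_date - agi_date).days / 365.25
    return "fast" if gap_years < 2 else "slow"

# Hypothetical example dates, not predictions:
print(takeoff_speed(date(2042, 1, 1), date(2043, 3, 1)))  # fast
print(takeoff_speed(date(2042, 1, 1), date(2050, 1, 1)))  # slow
```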

Even though there are relevant distinctions between TAI and AGI, I will use the two terms more or less synonymously in this essay, since the Metaculus definitions don't make the distinction either.


Considerations

Before we dive into the details of takeoffs, let's take a look at the predictions for AGI timelines (which are also used in the takeoff questions) as an anchor. There are currently two relevant questions; the first has a lower bar for what counts as AGI than the second.

The median prediction of the low-bar version is currently 2042.

The median prediction of the high-bar version is currently 2057.

I would already interpret the 15 years between the median predictions of these two operationalizations as soft evidence for slower takeoffs: while the second question has a higher bar to clear, I would expect it to be cleared much less than 15 years after the first in case of a fast takeoff.


Time between AGI and superintelligence

There are two questions directly asking about the timespan (in months) between AGI and superintelligence (SAI). Both questions use the low-bar definition of AGI (see above) and define SAI as being better than humans at essentially all tasks.

Their median predictions are 13.5 and 7.15 months (!) respectively. While both of these questions imply relatively fast takeoffs, they are not at all consistent with the high-bar definition of AGI from above: both definitions of SAI used in the takeoff questions are much harder to clear than the high-bar AGI question, implying a logical inconsistency. To me, this sounds analogous to predicting a human on Mars earlier than a human in the stratosphere.
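To spell the inconsistency out, here is a rough back-of-the-envelope check using the medians quoted above (treating the low-bar AGI median as the starting point, as the takeoff questions do):

```python
# Community medians quoted above (at the time of writing).
low_bar_agi = 2042                 # low-bar AGI question
high_bar_agi = 2057                # high-bar AGI question
agi_to_sai_months = [13.5, 7.15]   # the two direct takeoff questions

# Implied SAI dates if the clock starts at low-bar AGI:
implied_sai = [low_bar_agi + m / 12 for m in agi_to_sai_months]
print([round(y, 1) for y in implied_sai])  # [2043.1, 2042.6]

# SAI is a strictly harder bar than high-bar AGI, so consistent predictions
# should satisfy implied_sai >= high_bar_agi. Instead we get a ~14-year gap:
print([round(high_bar_agi - y, 1) for y in implied_sai])  # [13.9, 14.4]
```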

My intermediate conclusion is that, when asked directly, people predict fast takeoffs, but these predictions aren't necessarily consistent with other community beliefs about timelines. I think this could be explained by multiple different hypotheses: it could be an inside- vs. outside-view discrepancy, the questions might have been answered by different groups of people, all individual estimates might be consistent while the aggregate distributions aren't, etc.

Let's therefore look at different proxies for takeoff speed in the following sections.


Economic indicators

One question asks whether world GDP will increase by 30% within one year in any of the 15 years following TAI.

The community prediction of 70% for more than 30% GDP growth is not very conclusive. If people were bullish on a fast takeoff, the probability should be close to 100%. However, there are too many reasonable explanations for this prediction to interpret it as strong evidence for or against fast takeoffs: the AGI could lead to 30% growth in the first year or in the 15th year; the AGI could be unaligned and thus not lead to growth; the AGI could lead to consistent growth of e.g. 20% every year and never reach 30%; etc.
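The last of these explanations is worth making concrete: under the question's criterion, a steady 20% growth path never resolves positively, while a single explosive year does. A small sketch with made-up growth trajectories:

```python
def triggers_fast_takeoff(yearly_growth, threshold=0.30):
    """True if any single year's GWP growth exceeds the threshold."""
    return any(g > threshold for g in yearly_growth)

steady_path = [0.20] * 15                      # 20% every year for 15 years
spike_path = [0.03] * 5 + [0.45] + [0.03] * 9  # one explosive year

print(triggers_fast_takeoff(steady_path))  # False, despite ~15x total growth
print(triggers_fast_takeoff(spike_path))   # True
```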

Additionally, there are three economic questions that are independent of AI. The first two ask when the 10-year-averaged GWP growth will exceed 6% and 10% respectively, and the third asks when annual GWP will first exceed 130% of its previous yearly peak.


Their respective median predictions are 2081, 2131 and 2098. 
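For intuition about what these growth thresholds mean, here is a quick doubling-time calculation (just the standard compound-growth formula, not the questions' exact resolution criteria):

```python
import math

def doubling_time(growth_rate: float) -> float:
    """Years for GWP to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + growth_rate)

print(round(doubling_time(0.06), 1))  # ~11.9 years at 6% growth
print(round(doubling_time(0.10), 1))  # ~7.3 years at 10% growth
print(round(doubling_time(0.30), 1))  # ~2.6 years at the 30% TAI threshold
```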

There is one question on large-scale automation, which is also not specifically linked to AI timelines.


Its median estimate is currently 2224.

If you thought AI takeoffs were fast, the predictions for TAI/AGI and for the economic indicators should be much closer together than they currently are, certainly not 50 years or more apart.

However, it is possible that the people answering the economic questions didn't consider AI timelines when making their predictions, e.g. because they have a different background, or due to psychological factors such as inside- vs. outside-view thinking.

Science

There is one question on when an AI will win a Nobel prize for the first time.


The median prediction is currently at 2192. 

However, there are very few Nobel prizes, they usually lag somewhat behind the frontiers of science, there is some randomness in the process, and they follow archaic rules and traditions. Thus, this prediction might not accurately reflect takeoff speed.


Deep Learning

I would argue that progress in Deep Learning is fairly regular. While there are sometimes sudden jumps in capabilities, e.g. GPT-3 “understanding” a concept that GPT-2 didn't get, or some RL algorithms seeing sudden jumps in performance during training, there are many regularities: in general, more data, bigger models and more compute lead to somewhat predictable gains in capabilities (see e.g. biological anchors or trends in compute).
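This predictability is usually captured by power-law scaling: loss falls roughly as a power of compute, so extrapolating progress amounts to fitting a straight line in log-log space. A minimal sketch with synthetic numbers (the coefficients are made up, not actual scaling-law estimates):

```python
import numpy as np

# Synthetic (made-up) compute budgets and losses following L = a * C^(-b):
compute = np.array([1e18, 1e19, 1e20, 1e21])
loss = 5.0 * compute ** -0.05

# Fit a line in log-log space to recover the exponent, then extrapolate:
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
predicted_loss = np.exp(intercept) * 1e23 ** slope
print(round(float(-slope), 3))          # recovered exponent b ~ 0.05
print(round(float(predicted_loss), 3))  # smooth extrapolation to 1e23 FLOP
```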

The current community prediction is 68%, which I would interpret as mild evidence for slower takeoffs. My reasoning is based on a) the “slow and steady” nature of DL, i.e. the current bottlenecks of data, model size and compute will prevent rapid takeoffs, and b) the fact that DL is primarily based on error-driven learning, so feedback loops might be constrained by interactions with the real world that are hard to simulate away. However, I'm not very confident in this reasoning, especially the second part.


Uncertainties

I want to emphasize that I have large uncertainty about my interpretations. I think there is a good chance that our current understanding of AI is too limited and that improvements in algorithms or architectures could lead to very different outcomes. The community broadly agrees that we should be uncertain, e.g. the median prediction assigns a 70% probability that we will be surprised by AI progress.



Conclusions

Interestingly, the two questions that ask about takeoff speed directly predict relatively fast takeoffs. However, in these questions, two slightly different definitions of AGI end up much further apart than AGI and superintelligence do, which is a clear contradiction. I think this mostly shows that the community is still very uncertain about takeoff questions and that slightly different wordings can lead to completely different answers.

If you look at the economic indicators, the scientific indicators and the predictions on deep learning, the implied takeoffs are much slower than in the direct questions. A possible explanation would be that some of these predictions are made by people who don't think about AI timelines. However, even when a question is explicitly about the implications of AI, e.g. on GDP growth, the predictions still imply slower takeoffs.

My two main takeaways are a) there is a lot of uncertainty about takeoffs, and different questions and wordings lead to vastly different predictions, and b) in aggregate, I would interpret most predictions as mild evidence against fast takeoffs.

In any case, the inconsistencies and implied error bars seem too large for such an important question and show the importance of improving our prediction models.

