Time From (weak) AGI to Superintelligence

AI Progress Essay Contest


Related Question on Metaculus:

(background text by @Matthew_Barnett)

Futurists have long speculated that once artificial general intelligence arrives, the first superintelligent AI will quickly follow. From I. J. Good, writing in 1965:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

Nick Bostrom wrote in his book Superintelligence (2014):

Note that one could think that it will take quite a long time until machines reach the human baseline, or one might be agnostic about how long that will take, and yet have a strong view that once this happens, the further ascent into strong superintelligence will be very rapid.

and categorized takeoff durations into three types:

  • "A slow takeoff is one that occurs over some long temporal interval, such as decades or centuries."

  • "A fast takeoff occurs over some short temporal interval, such as minutes, hours, or days."

  • "A moderate takeoff is one that occurs over some intermediary temporal interval, such as months or years."

While it seems that most prominent thinkers are convinced that rapid technological and economic progress will follow the development of AGI (see Paul Christiano, Robin Hanson, Eliezer Yudkowsky, and Ben Goertzel), most AI researchers surveyed are not. In 2016, AI Impacts asked AI researchers:

Assume that HLMI will exist at some point. How likely do you then think it is that the rate of global technological improvement will dramatically increase (e.g. by a factor of ten) as a result of machine intelligence:

Within two years of that point? ___% chance

Within thirty years of that point? ___% chance

The median answer was 20% for the two-year estimate and 80% for the thirty-year estimate.

After an AGI is created, how many months will it be before the first superintelligent AI is created?

This question will resolve as the number of months between the first development of a (weak) Artificial General Intelligence and a Superintelligent Artificial Intelligence, according to widespread media and historical consensus. If an AGI is not created before January 1, 2150, this question will resolve ambiguously.

"Artificial General Intelligence" (AGI) is defined for the purposes of this question based on another Metaculus question, the full (updated) definition of which is in the fine print.

"Superintelligent Artificial Intelligence" (SAI) is defined for the purposes of this question as an AI which can perform any task humans can perform in 2021, as well as or better than the best humans in their domain. The SAI may be able to perform these tasks itself, or be capable of designing sub-agents with these capabilities (for instance, the SAI may design robots capable of beating professional football players that are not successful brain surgeons, and design top brain surgeons that are not football players). Tasks include (but are not limited to): performing in the top ranks of professional e-sports leagues, performing in the top ranks of physical sports, preparing and serving food, providing emotional and psychotherapeutic support, discovering scientific insights that could win 2021 Nobel prizes, creating original art and entertainment, and having professional-level software design and AI design capabilities.

As an AI improves in capability, it may not be clear at which point it has become able to perform any task as well as top humans. The AI will be defined as superintelligent if, in less than 7 days in a non-externally-constrained environment, it already has or can learn/invent the capacity to do any given task. A "non-externally-constrained environment" here means, for instance, access to the internet, and compute and resources similar to contemporaneous AIs.

An "artificial general intelligence" is defined as a single unified software system that can satisfy the following criteria, each easily completable by a typical college-educated human:

  • Able to reliably pass a text-only Turing test of the type that would win the Loebner Silver Prize or Longbets Kurzweil/Kapor Bet, in which human judges cannot reliably distinguish normal human chat participants from AI.

  • Able to score 90% or more on a robust version of the Winograd Schema Challenge, e.g. the "Winogrande" challenge or a comparable data set for which human performance is 90+%.

  • Be able to score in the 75th percentile (as compared to the corresponding year's human students; this corresponded to a score of 600 in 2016) on the full mathematics section of a circa 2015-2020 standard SAT exam, using just images of the exam pages and having fewer than ten SAT exams as part of the training data. (Training on other corpora of math problems is fair game as long as they are arguably distinct from SAT exams.)

  • Be able to learn the classic Atari game "Montezuma's Revenge" (based on just visual inputs and standard controls) and explore all 24 rooms based on the equivalent of less than 100 hours of real-time play (see the closely related question).

By "unified" we mean that the system is integrated enough that it can, for example, explain its reasoning on an SAT problem or Winograd schema question, or verbally report its progress and identify objects during videogame play. (This is not really meant to be an additional capability of "introspection" so much as a provision that the system not simply be cobbled together as a set of sub-systems specialized to tasks like the above, but rather a single system applicable to many problems.)
