Economic Impacts of Artificial General Intelligence

{{"estimatedReadingTime" | translate:({minutes: qctrl.question.estimateReadingTime()})}}
AI Progress Essay Contest

This essay was submitted to the AI Progress Essay Contest, an initiative that focused on the timing and impact of transformative artificial intelligence. You can read the results of the contest and the winning essays here.

The Metaculus community has forecasted on a broad range of questions related to the development of human-level artificial intelligence and its impacts on the global economy. The community's median and interquartile (25th and 75th percentile) predictions on a selection of these questions are shown in the table below, and will form the basis for the following discussion of AI development and its effects on future economic growth. This discussion will primarily focus on describing plausible interpretations of the community's predictions.

Forecasts as of February 28, 2022

Development of AGI

In the table shown above, questions one through four ask when artificial general intelligence (AGI) and superintelligent AI will be developed. Generally, AGI is understood to be a machine that operates with a human level of intelligence, sometimes called strong AI. Superintelligent AI is generally understood to be a machine more intelligent than the most intelligent humans.

Metaculus offers two versions of each of the AGI and superintelligent AI questions. I'll refer to the different AGI formulations as Lite and Strong. The resolution criteria for both AGI Lite and Strong AGI could plausibly be interpreted as describing a machine operating at a human level of intelligence. However, the individual criteria of the AGI Lite question may be surpassed by weak AI systems soon. GPT-3 and its spin-offs are approaching human levels of conversational ability, scores on the WinoGrande challenge are approaching the 90% threshold, and AI systems have shown promising performance on the video game criterion. The biggest challenge will be unifying these systems into a general intelligence that can describe its approach to these tasks, but one can imagine this happening in the next few decades in a way that may not really resemble actual human intelligence.

In comparison, the Strong AGI question uses more stringent resolution criteria, such as requiring interaction with physical objects in a way that implies capabilities similar to a human's. It's likely that an AI demonstrating the capabilities required by the Strong AGI question would also satisfy the resolution criteria of the AGI Lite question. A bullet point summary of the different resolution criteria is provided below, or if you prefer fun flowcharts there's one at the end of this section.

AGI Lite

  • Loebner Silver - A chatbot that judges can't distinguish from a human, able to convince judges that the actual human is the AI.
  • Scores over 90% on a test of language comprehension (the WinoGrande challenge).
  • Scores in the 75th percentile on the math portion of the SAT exam, working from images of the questions and without extensive pretraining on SATs.
  • Learns the Montezuma's Revenge video game using only visual input and standard controls, and is able to explore all 24 rooms in under 100 real-time hours of play.
  • Able to explain its reasoning and describe its progress on the above tasks.

Strong AGI

  • Loebner Gold - A chatbot that judges can't distinguish from a human even when tested on deciphering text, images, and auditory inputs.
  • Able to assemble complex physical components (for example, a model car).
  • High accuracy in testing on "extensive world knowledge and problem solving ability".
  • Able to turn a description of a simple computer program into functional code.
  • Able to explain its reasoning and describe its progress on the above tasks.

Creating criteria for when an AI has become a superintelligence is even more challenging. I'll refer to the approach used by question three as Specific Superintelligence and the one used by question four as Nonspecific Superintelligence. The Specific Superintelligence must be able to perform any task a human could perform in 2021 as well as or better than the top humans in each field. Additionally, it specifies guidelines, such as the ability to design robots that beat humans at sports and the ability to make Nobel-worthy scientific discoveries. The Nonspecific Superintelligence question just requires that the AI be able to answer virtually all questions of interest at a superhuman level, or exceed human ability across virtually all activities of interest.

Both the Specific and Nonspecific Superintelligence questions are conditional on AGI Lite being developed. Using the median forecasts, the community expects AGI Lite in 2042, with Nonspecific Superintelligence developing 14 months later and Specific Superintelligence 6 months later. However, considering that the median forecast for Strong AGI is 2055, these forecasts seem pretty inconsistent, since both Specific and Nonspecific Superintelligence would clearly be more capable than Strong AGI.¹

The transition from AGI to superintelligence is expected to be very fast, with a 75% chance it happens in less than about 6 years regardless of the type. A scenario where human-level AIs are developed and progress then stalls doesn't seem to be expected. One plausible scenario is that once AGI is developed, a sudden increase in total research effort from AGIs could generate new ideas and approaches that bring about a superintelligent AI. Alternatively, once an AGI has been developed, it may only take more training data and computing capability to elevate it to the level of superintelligence.

AGI and Superintelligent AI Forecasts as of February 28, 2022

Because the Strong AGI and Specific Superintelligence questions provide specific and stringent criteria, I'll focus on them going forward. However, keep in mind that the actual Specific Superintelligence question is based on the development of AGI Lite as a starting point. Due to the inconsistency mentioned previously, I've assumed that the community expects Specific Superintelligence to develop shortly after the development of Strong AGI instead of AGI Lite.

Here's a flowchart of the criteria used by Metaculus for assessing AI across the different questions.²

AGI and Superintelligent AI Flowchart (Click Here for High Quality Version)

A Sudden Boost to GDP

Question five asks about the first year that world GDP exceeds 130% of the previous high. In other words, it asks when the world first experiences a 30% annual increase in GDP, where measuring against the previous high means a sudden recovery from a recession wouldn't count. The median forecast is 2127, with a 33% chance that it doesn't happen before 2199. Considering that the highest annual growth in the last 60 years was 6.6%, it's understandable that the community thinks there's a 33% chance annual growth won't exceed 30% before 2199.

Despite the recent base rate, there's reason to believe that high growth rates are possible. Consider this chart showing world GDP over the last 2000 years. Using this chart to roughly estimate doubling times (the time it takes for world GDP to double), I estimate that from year zero it took in the ballpark of 1,500 years for world GDP to double, then around 320 years to double again, then 80 years, then 40, then 20. The doubling time has held fairly steady at 20 years since somewhere around the early to mid 1900s. Converting doubling times to annual growth rates suggests that world GDP grew at a tiny fraction of a percent per year until around 1500, when growth approached a quarter of a percent, before finally breaking 1% around the 1800s and reaching somewhere around 3.5% in the last 100 years.
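To make those conversions concrete, here's a quick back-of-the-envelope sketch in Python. The doubling times are just my rough readings from the chart, not precise figures.

```python
# Convert a doubling time (in years) into the implied annual growth rate.
def annual_growth_from_doubling_time(years_to_double):
    return 2 ** (1 / years_to_double) - 1

# Rough doubling-time estimates for world GDP read off the chart above.
for years in [1500, 320, 80, 40, 20]:
    rate = annual_growth_from_doubling_time(years)
    print(f"Doubling every {years:>4} years -> ~{rate:.2%} annual growth")

# Prints roughly 0.05%, 0.22%, 0.87%, 1.75%, and 3.53% per year.
```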

World GDP over the last two millennia

Looking only at world GDP misses a big factor: exponential world GDP growth has been helped along by exponential growth in world population. More people means more brains coming up with new ideas and more workers able to do physical labor. But since world population growth has slowed and is projected to continue slowing, we should instead look at GDP per capita. That doesn't change the overall picture much, though. One estimate of GDP per capita over the long run shows that, starting at year zero, GDP per capita took about 1,800 years to double, then around 75 years, with the doubling time finally reaching around 25 years. Or, in terms of annual growth, GDP per capita went from nearly negligible growth rates to a little under 3%.
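The same conversion applies to the per-capita doubling times quoted above (again, rough estimates rather than precise figures):

```python
# Rough GDP-per-capita doubling times: ~1,800 years, ~75 years, ~25 years.
for years in [1800, 75, 25]:
    print(f"Doubling every {years} years -> ~{2 ** (1 / years) - 1:.2%} annual growth")

# Prints roughly 0.04%, 0.93%, and 2.81% per year.
```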

The size of the world population over the last 12,000 years

If GDP growth is truly accelerating we might expect to see world GDP growth in the double digits sometime in the 21st century. World GDP growth has slowed down over the last 60 years, but perhaps it's just a temporary decline on a longer upward trend. What would drive this large increase in growth? Aside from development of AGI or superintelligent AI, some plausible possibilities could be abundant energy brought about by fusion or renewable energy, automation driven by weak AI (such as driverless cars and hyper-efficient factories), nanotechnology, or a continued trend of numerous incremental technology improvements.

Realistically though, the slowdown in annual growth rates makes this kind of growth hard to imagine even with these developments. An alternate viewpoint could be that the last few centuries have been an exception, brought about by the industrial revolution enabling a surge in population growth which in turn fueled more economic growth, and that we're at last cresting this feedback loop as population and GDP growth both level off. Under this view it would take some significant new advancement to drive GDP growth into double digits; we couldn't just rely on incremental advancements or more abundant energy. The development of AGI and superintelligent AI could provide that boost.

How that boost would occur can be hard to imagine; the world economy doubling every year or two seems far-fetched. Something like that might require factories to be built in a matter of weeks or days, and goods to be transported at a speed and scale that seems impossible, bringing to mind the city-covered planet of Coruscant in Star Wars. Whether that's actually what that kind of growth would look like is anyone's guess, but economist Robin Hanson has created a simple economic model that suggests what level of growth might be achieved with AGI.

The basis of his approach is a commonly used model of the economy that takes total economic production as a function of technology level, amount of labor (workers), and capital (assets that can be used for production). Hanson modifies this basic model to split capital into regular capital and computer capital, and labor into human labor and labor from AGI. He sets the initial conditions (where there is no labor from AGI) such that annual economic growth is 4.3%, a doubling time of 16 years. When labor from AGI is introduced, annual growth becomes 45%, a doubling time of about 18 months.

The derivation of his model is beyond my understanding, but the gist seems to be that declining computer prices coupled with the development of machine intelligence could create an explosion in the amount of labor from machine intelligences, resulting in a massive boost to the economy.
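To make the mechanism concrete, here's a deliberately crude toy simulation. It is not Hanson's actual model: the Cobb-Douglas form, the parameter values, and the assumed 60% annual growth in machine labor are all made up for illustration. The point is just to show how output growth jumps once machine labor starts to dominate the effective workforce.

```python
# Toy sketch only (not Hanson's model): output depends on capital and on
# effective labor = human labor + machine (AGI) labor. All numbers are made up.

ALPHA = 0.3            # capital's share of output
SAVINGS = 0.2          # fraction of output reinvested in ordinary capital
MACHINE_GROWTH = 0.6   # assumed annual growth of machine labor as compute gets cheaper

def output(capital, labor):
    return capital**ALPHA * labor**(1 - ALPHA)

capital, humans, machines = 100.0, 100.0, 0.01   # machine labor starts tiny
prev = output(capital, humans + machines)
for year in range(1, 31):
    capital += SAVINGS * prev          # reinvest part of last year's output
    machines *= 1 + MACHINE_GROWTH     # machine labor snowballs
    cur = output(capital, humans + machines)
    if year % 5 == 0:
        print(f"year {year:2}: annual growth ~{cur / prev - 1:.1%}")
    prev = cur
```

Early on, growth in this toy run comes mostly from ordinary capital accumulation and sits in the single digits; once machine labor overtakes human labor (around year 20 here), annual growth climbs into the tens of percent.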

With an understanding of how these different scenarios might come about in practice, we can look at what the Metaculus community thinks is most likely to happen.

Sudden, Not Gradual

Question five gives a median forecast of 30% annual world GDP growth occurring in 2127, with a 33% chance this doesn't happen before 2199. As shown before, the median prediction for the development of Strong AGI is 2055, with a 75% chance it happens by 2090, and superintelligence likely following shortly after that. That suggests the community thinks annual world GDP growth won't reach 30% until over 70 years after the development of AGI.

Does that mean the community thinks AGI and superintelligent AI won't be the cause of 30%+ annual growth? Not necessarily. Question 11 asks whether 30% annual growth will occur within 15 years of human-level AI being developed (note that this question uses different criteria to assess human-level AI, but it's fairly strict and seems consistent with Strong AGI). The community gives a 67% chance this happens. Treating this conditionally, we can take the 75% chance of Strong AGI by 2090 and the 67% chance that 30% annual growth follows within 15 years, and multiply those probabilities to arrive at about a 50% chance that world GDP sees 30% annual growth within 15 years of 2090, so by 2105. That median is much closer to the median suggested by question five, only about two decades earlier.
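Spelling that combination out (and treating the two forecasts as independent, which is itself a simplification):

```python
p_strong_agi_by_2090 = 0.75      # community: 75% chance of Strong AGI by 2090
p_growth_within_15_years = 0.67  # question 11: 30% growth within 15 years of human-level AI
print(p_strong_agi_by_2090 * p_growth_within_15_years)  # ~0.50, i.e. ~50% chance by ~2105
```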

That suggests the large gap between Strong AGI and the GDP boost implied by question five is partly a function of uncertainty about whether Strong AGI will actually lead to massive growth so soon. Perhaps Strong AGI and superintelligence can be developed, but at a prohibitively high cost that prevents even a superintelligence from making more Strong AGIs or superintelligences, and the impact of just one doesn't lead to a quick boost in GDP, instead taking decades or longer. Or perhaps we develop unaligned AGI or superintelligence, resulting in a disaster that harms the economy. Or maybe it's extremely hard to achieve massive economic growth, so hard that even one or many superintelligences can't do it.

Question 10 gives the community's direct response to one of those possibilities. The question asks how many years before or after the development of Strong AGI 25% annual GDP growth will be achieved, conditional on both Strong AGI being developed and 25% growth being achieved. The community's interquartile range is entirely in the period after Strong AGI is developed, with a median forecast of 3.6 years after. This implies several things. One is that the gap between the AGI and growth forecasts isn't just a matter of timing: if a huge boost in GDP does occur as a result of Strong AGI, the community is very confident it will happen fast.

The second implication is that the community doesn't think massive economic growth will happen until Strong AGI is developed. Questions seven and eight provide further support for this. They ask when world annual GDP growth will average 6% and 10% over a 10-year period. The forecasts suggest that we won't see these higher average growth rates until around the same time we see 30% annual growth (though 6% average growth is expected a little earlier), implying that a sudden surge in GDP may be what drives these increases in average growth. Even if baseline world annual growth were 0%, three years of 30% annual growth within a decade would push the 10-year average above 6%, and four such years would push it above 10%.
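Checking that arithmetic with a compound 10-year average (a simple average of the annual rates clears the thresholds by an even wider margin):

```python
def ten_year_average_growth(boom_years, boom_rate=0.30):
    # Compound annual growth rate over a decade with boom_years at boom_rate
    # and 0% growth in the remaining years.
    total_growth = (1 + boom_rate) ** boom_years
    return total_growth ** (1 / 10) - 1

print(f"{ten_year_average_growth(3):.1%}")  # ~8.2%, above the 6% threshold
print(f"{ten_year_average_growth(4):.1%}")  # ~11.1%, above the 10% threshold
```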

Question six is consistent with the above interpretation as well, with a median prediction of 12% peak annual growth before 2100, and a 75% chance growth is less than 58%. The tail on this question is very large: the community gives a 70% chance growth is less than 30%, and a 92% chance it's less than 4,000%. What growth in the 1,000% range would even look like is anyone's guess (the size of the economy doubling every 25 days), but it might involve some kind of runaway development spurred by AI, such as creating finished goods and equipment from most of the matter in our solar system.

Lastly, question nine asks when the US labor force participation rate will fall below 10%. The Metaculus community doesn't see this happening until long after the median forecasts of explosive GDP growth, with a median forecast of 2224, and a 41% chance it doesn't occur by 2300. This suggests a role for human work, despite the potential existence of superintelligent AI.

Although it might seem counterintuitive that humans may still work when superintelligent AI exists, we can look to the development of cell phones as a possible analogy. Someone from 40 years ago would think it would be an amazing improvement to their lives if they could call and text from just about anywhere for a reasonable price per message or call. But now that's commonplace and inexpensive, and the cutting edge is in faster data speeds and enabling more data for a cheaper price. Just because we achieve relative abundance compared to what we had before doesn't mean we're no longer resource constrained. New technology enables new developments that require more resources.

Imagine that our superintelligent AI overlord can produce 1,000 new Strong AGI workers per day. It can choose to send those workers out to mine Pluto, or send some to make entertaining holograms for humans who are willing to pay for them. A worker mining Pluto would likely produce a lot more value than one sent to create hologram content, and the value forgone by assigning a worker to the less valuable task is its opportunity cost. Because of opportunity costs and scarcity, there may always be a role for human employment. Even though a superintelligent AI would have an absolute advantage over a human in every skill, a human may have a comparative advantage in certain jobs, such as being a hologram influencer.
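A toy example with made-up numbers shows how that can work. Even though the AGI worker is better at both jobs, assigning it to holograms forgoes far more value than assigning a human does, so the hologram work falls to humans:

```python
# Made-up daily value produced by each type of worker in each job.
AGI_MINING, AGI_HOLOGRAMS = 1000, 50
HUMAN_MINING, HUMAN_HOLOGRAMS = 0, 10

# Opportunity cost of making holograms = value forgone in the best alternative job.
print("AGI opportunity cost of holograms:  ", AGI_MINING)    # gives up 1000 of mining value
print("Human opportunity cost of holograms:", HUMAN_MINING)  # gives up nothing
```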

On the other hand, it's hard to know what could actually end up happening, and it's possible the Metaculus community isn't properly reconciling forecasts across different questions. The world may only need so many hologram influencers, and if human skills are about as valuable to the future economy as VCR repair is to the economy in 2022, then it's plausible that there may not actually be jobs for humans in an AI future, or if there are they may offer very low wages.

Forecasts as of February 28, 2022

The Metaculus AI Timeline

The above discussion weaves together different forecasts on the future of AI and the impact of AI on the economy. The conclusions implied by the forecasts fit together decently well. Metaculus sees a future where human-level AI is developed late this century, followed shortly by superintelligence, with a substantial chance of producing explosive economic growth that would be like nothing the world has ever seen. These predictions produce an exciting timeline over the next few centuries which may give greater insight into what possibilities future developments in AI hold for humanity.

Timeline of Metaculus AI Forecasts (Click Here for High Quality Version)

Footnotes

¹ Thanks to @marius.hobbhahn for pointing this out in their essay Will transformative AI come with a bang? I had initially misread and thought they were starting from different AGI versions.

² While more capable versions of AI do not specifically encompass the less capable versions in the Metaculus resolution criteria I've presented them as such for simplicity and based on my judgement that the more capable systems would be able to perform the tasks required of the less capable systems.

Questions Used
