
Will the first AGI be based on deep learning?

AI Progress Essay Contest

Question

The Deep Learning Book, which many consider the best reference textbook on the topic, introduces deep learning as follows:

This book is about a solution to [fuzzy ill-defined problems]. This solution is to allow computers to learn from experience and understand the world in terms of a hierarchy of concepts, with each concept defined through its relation to simpler concepts. By gathering knowledge from experience, this approach avoids the need for human operators to formally specify all the knowledge that the computer needs. The hierarchy of concepts enables the computer to learn complicated concepts by building them out of simpler ones. If we draw a graph showing how these concepts are built on top of each other, the graph is deep, with many layers. For this reason, we call this approach to AI deep learning.

Paul Christiano has written that future AGI might be based on deep learning principles:

It now seems possible that we could build “prosaic” AGI, which can replicate human behavior but doesn’t involve qualitatively new ideas about “how intelligence works:”

It’s plausible that a large neural network can replicate “fast” human cognition, and that by coupling it to simple computational mechanisms — short and long-term memory, attention, etc. — we could obtain a human-level computational architecture.

It’s plausible that a variant of RL can train this architecture to actually implement human-level cognition. This would likely involve some combination of ingredients like model-based RL, imitation learning, or hierarchical RL. There are a whole bunch of ideas currently on the table and being explored; if you can’t imagine any of these ideas working out, then I feel that’s a failure of imagination (unless you see something I don’t).

Assume, for the purpose of this question, that it resolves on some date.

Metaculus admin(s) and/or community moderator(s) will survey 11 AI researchers whose work they consider relevant and whose work has been cited at least 500 times within the past 365 days according to Google Scholar. We will then ask about the relevant AI system:

Was the relevant AI system based on Deep Learning, as defined by the 2016 version of the Deep Learning Book?

Respondents will be requested to submit only one of the following responses:

  • The complete system was based on DL

  • Most of the system was based on DL

  • At least a significant portion of the system was based on DL

  • Only a minor portion of the system was based on DL

  • No portion, or only a trivial portion, of the system was based on DL

  • I don't know

The question then resolves positively if a majority of surveyed experts who don't respond "I don't know" give one of the following responses:

  • The complete system was based on DL

  • Most of the system was based on DL

The question resolves ambiguously if a majority of experts respond "I don't know".
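The resolution rule above can be sketched in code. This is a minimal illustration, not an official Metaculus mechanism: the function name, the short category labels, and the tie-breaking behavior (a strict majority is required in both checks) are assumptions introduced here for clarity.

```python
# Hypothetical sketch of the resolution rule described above.
# Category labels are shorthand for the survey responses, e.g.
# "complete" = "The complete system was based on DL".

DL_MAJORITY_RESPONSES = {"complete", "most"}  # the two responses that count toward positive resolution

def resolve(responses):
    """Return 'positive', 'ambiguous', or 'negative' for a list of survey responses."""
    n = len(responses)
    dont_know = sum(r == "dont_know" for r in responses)
    # Ambiguous if a majority of all surveyed experts respond "I don't know".
    if dont_know > n / 2:
        return "ambiguous"
    informative = [r for r in responses if r != "dont_know"]
    dl_votes = sum(r in DL_MAJORITY_RESPONSES for r in informative)
    # Positive if a majority of the remaining responses fall in the top two categories.
    if dl_votes > len(informative) / 2:
        return "positive"
    return "negative"

# Example with the 11 surveyed researchers:
print(resolve(["complete"] * 4 + ["most"] * 2 + ["minor"] * 4 + ["dont_know"]))  # positive
```

Note that "I don't know" responses are excluded from the positive-resolution majority but counted toward the ambiguity check, so the two thresholds are computed over different denominators.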
