Transformers to accelerate DL progress

Question

OpenAI's transformer-based GPT-3 has generated a lot of hype around the capabilities of current methods in deep learning. GPT-3 appears capable of creative works of writing, as shown by Gwern. This creative potential, if applied to scientific writing or code generation, may accelerate research progress. If successfully applied to deep learning research itself, this acceleration could be self-reinforcing, with potential implications for the development of an AGI system. Indeed, the community prediction on the Metaculus question "When will the first Artificial General Intelligence system be devised, tested, and publicly known of?" shifted roughly 10 years sooner in the months following the announcement of GPT-3.

Will transformer-derived architectures accelerate progress in deep learning?

This question resolves positively if, by 2025, there are at least 5 papers that successfully use transformer-derived architectures to find improved neural network architectures or architecture components. Each paper must use the transformer model either to generate code for the architecture or to generate a natural language description of it. Each of these papers must be cited at least 100 times, as indicated by the corresponding Google Scholar page.

The code and/or description produced by the transformer model need not be complete or bug-free; the authors may, for instance, use the transformer output merely as inspiration. The architecture components considered must be described by the paper authors as improving on the state of the art with respect to some benchmark of the authors' choosing. The 5 papers need not be particularly distinct: if they all describe similar architectural innovations, this question will still resolve positively.
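Read operationally, the resolution criteria amount to a simple counting rule. The sketch below is a minimal illustration only, assuming a hypothetical Paper record whose fields (citation count from Google Scholar, whether transformer output was used, whether a state-of-the-art improvement is claimed) are my own naming, not part of the question text.

```python
from dataclasses import dataclass

@dataclass
class Paper:
    # Hypothetical record of a candidate paper; all field names are illustrative.
    title: str
    citations: int                  # citation count from the paper's Google Scholar page
    used_transformer_output: bool   # transformer generated code or a natural-language description
    claims_sota_improvement: bool   # authors claim improvement on a benchmark of their choosing

def resolves_positively(papers: list[Paper]) -> bool:
    """True if at least 5 qualifying papers exist by 2025 (the question's threshold)."""
    qualifying = [
        p for p in papers
        if p.used_transformer_output
        and p.claims_sota_improvement
        and p.citations >= 100
    ]
    # The papers need not be distinct, so similar innovations are not deduplicated.
    return len(qualifying) >= 5
```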

This question uses Metaculus user Barnett's definition of "transformer-derived":

Define a transformer-derived architecture as one that is either directly referred to as a "transformer" or otherwise cites the 2017 paper by Vaswani et al. as the chief inspiration for its operation. If the architecture is a mix of at least two component architectures, it is also transformer-derived if one of the component architectures is a transformer. If there is any contention in the Metaculus comment section, a straw poll will be taken on the subreddit /r/machinelearning, asking:

Is it accurate to say that [the model in question] is a derivative of the transformer model from Vaswani et al.?

After one week, the majority vote determines the answer, with a tie counting as "Yes".
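Taken together, the definition and the tie-break rule form a small decision procedure. The sketch below is purely illustrative; the function and parameter names are assumptions of mine, not part of the question text.

```python
def is_transformer_derived(called_a_transformer: bool,
                           cites_vaswani_as_chief_inspiration: bool,
                           component_is_transformer: list[bool]) -> bool:
    """Decision rule from the question's definition of 'transformer-derived'."""
    if called_a_transformer or cites_vaswani_as_chief_inspiration:
        return True
    # A hybrid architecture qualifies if any component architecture is a transformer.
    return any(component_is_transformer)

def straw_poll_outcome(yes_votes: int, no_votes: int) -> bool:
    """Majority after one week decides; a tie resolves as 'Yes'."""
    return yes_votes >= no_votes
```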
