
Billions of parameters in GPT-4, if released

GPT stands for "Generative Pre-Training" and was introduced in a 2018 paper from OpenAI. GPT-2 became famous within the machine learning community in 2019 for producing surprisingly coherent text samples; it used 1.5 billion parameters.

In May 2020, OpenAI released GPT-3, a 175 billion parameter model widely regarded as having impressive language generation abilities. The massive increase in parameter count over GPT-2 likely follows from an earlier OpenAI investigation into scaling laws, which characterized the relationship between neural language model size and performance (illustrated in the sketch after the quote below). Many now interpret OpenAI's strategy as one intended to scale neural models to their ultimate practical limit. Gwern writes,

The scaling hypothesis that, once we find a scalable architecture like self-attention or convolutions, which like the brain can be applied fairly uniformly (eg “The Brain as a Universal Learning Machine” or Hawkins), we can simply train ever larger NNs and ever more sophisticated behavior will emerge naturally as the easiest way to optimize for all the tasks & data, looks increasingly plausible. [...]

In 2010, who would have predicted that over the next 10 years, deep learning would undergo a Cambrian explosion causing a mass extinction of alternative approaches throughout machine learning, that models would scale up to 175,000 million parameters, and that these enormous models would just spontaneously develop all these capabilities, aside from a few diehard connectionists written off as willfully-deluded old-school fanatics by the rest of the AI community.
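
For intuition about the size–performance relationship referenced above, here is a minimal sketch of the power-law fit reported in OpenAI's scaling-laws paper ("Scaling Laws for Neural Language Models", Kaplan et al., 2020). The constants are approximate values recalled from that paper, and the GPT-4 parameter count is a purely hypothetical illustration, not part of this question's resolution criteria.

```python
# Minimal sketch, assuming the size-only power law L(N) ~ (N_c / N)^alpha_N
# from Kaplan et al. (2020). Constants are approximate and the GPT-4 size
# below is hypothetical; treat all numbers as illustrative assumptions.

N_C = 8.8e13      # approximate "critical" parameter count (assumption)
ALPHA_N = 0.076   # approximate power-law exponent for model size (assumption)

def scaling_law_loss(n_params: float) -> float:
    """Cross-entropy loss (nats/token) predicted from parameter count alone."""
    return (N_C / n_params) ** ALPHA_N

if __name__ == "__main__":
    for name, n_params in [("GPT-2", 1.5e9), ("GPT-3", 175e9), ("hypothetical GPT-4", 1e12)]:
        print(f"{name}: {n_params:.2e} params -> ~{scaling_law_loss(n_params):.2f} nats/token")
```

Under this size-only fit, every 10x increase in parameters multiplies the predicted loss by a constant factor (roughly 0.84 with the exponent above), which is the intuition behind the "scale to the practical limit" strategy described above.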

If GPT-4 is released by OpenAI, how many parameters, in billions, will it contain? Resolution will be via a report from OpenAI.

If OpenAI does not release GPT-4 by January 1st, 2023, this question resolves ambiguously.

If OpenAI does not explicitly refer to the relevant model as GPT-4, community members, community moderators, or admins will run a straw poll on the /r/openai subreddit asking:

In your opinion, is it roughly correct to say that this model is the successor to GPT-3?

After one week, the majority answer wins, with a tie counting as "yes".
