GPT stands for "Generative Pre-Training" and was introduced by OpenAI in a 2018 paper. GPT-2 became famous within the machine learning community in 2019 for producing surprisingly coherent written text samples; its largest version used 1.5 billion parameters.
In May 2020, OpenAI released GPT-3, a 175-billion-parameter model widely regarded as having impressive language generation abilities. The massive increase in parameter count over GPT-2 is likely the result of an earlier OpenAI investigation, "Scaling Laws for Neural Language Models" (Kaplan et al., 2020), which found a smooth power-law relationship between neural language model size and performance (see the sketch after the quote below). Many now interpret OpenAI's strategy as an attempt to scale neural models to their ultimate practical limit. Gwern writes,
The scaling hypothesis that, once we find a scalable architecture like self-attention or convolutions, which like the brain can be applied fairly uniformly (eg “The Brain as a Universal Learning Machine” or Hawkins), we can simply train ever larger NNs and ever more sophisticated behavior will emerge naturally as the easiest way to optimize for all the tasks & data, looks increasingly plausible. [...]
In 2010, who would have predicted that over the next 10 years, deep learning would undergo a Cambrian explosion causing a mass extinction of alternative approaches throughout machine learning, that models would scale up to 175,000 million parameters, and that these enormous models would just spontaneously develop all these capabilities, aside from a few diehard connectionists written off as willfully-deluded old-school fanatics by the rest of the AI community.
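To make the size-to-performance relationship mentioned above concrete, here is a minimal sketch of the power-law fit reported in the scaling-laws paper. The constants are the paper's fitted values; the printed losses are rough illustrations of the curve, not claims about the actual models' measured performance.

```python
# Sketch of the size-to-loss power law fitted in OpenAI's
# "Scaling Laws for Neural Language Models" (Kaplan et al., 2020):
#   L(N) = (N_c / N) ** alpha_N, where N is non-embedding parameter count.
# Constants are the paper's fitted values; outputs are illustrative only.

ALPHA_N = 0.076   # fitted power-law exponent for model size
N_C = 8.8e13      # fitted critical parameter count (non-embedding)

def predicted_loss(n_params: float) -> float:
    """Predicted cross-entropy test loss (nats/token) when model size
    is the binding constraint (data and compute not limiting)."""
    return (N_C / n_params) ** ALPHA_N

# Rough GPT-2-scale and GPT-3-scale points on the curve:
for name, n in [("GPT-2 (1.5B)", 1.5e9), ("GPT-3 (175B)", 175e9)]:
    print(f"{name:14s} predicted loss ~ {predicted_loss(n):.2f}")
```

Under this fit, each large multiplicative increase in parameters shaves a roughly constant fraction off the loss, which is the pattern motivating the scale-up strategy described above.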
If OpenAI releases GPT-4, how many parameters, in billions, will it contain? Resolution is via a report from OpenAI.
If OpenAI does not release GPT-4 by January 1st, 2023, this question resolves ambiguously.
In case OpenAI does not explicitly refer to the relevant model as GPT-4, community members, moderators, or admins will run a straw poll on the /r/openai subreddit asking:
In your opinion, is it roughly correct to say that this model is the successor to GPT-3?
After one week, the majority answer wins, with a tie counting as "yes".