A statistical language model is a probability distribution over sequences of words. Thanks to work at Google and OpenAI, large pre-trained language models have gained recognition as multitask and few-shot learners.
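For concreteness, here is a minimal sketch of what such a distribution looks like for an autoregressive model like GPT; the chain-rule factorization below is standard notation and not taken from the question itself:

$$P(w_1, \ldots, w_n) = \prod_{i=1}^{n} P(w_i \mid w_1, \ldots, w_{i-1})$$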
OpenAI recently introduced Generative Pre-trained Transformer 3, commonly known by its abbreviated form GPT-3. GPT-3 is currently the largest language model and the successor to GPT-2; it was first announced in May 2020. OpenAI stated that the full version of GPT-3 contains 175 billion parameters, two orders of magnitude more than the 1.5 billion parameters of the full version of GPT-2. OpenAI released the full GPT-2 1.5B model on November 5, 2019 under a modified MIT license. However, GPT-3 itself is not yet available.
This question asks: when will a language model with at least 100 billion parameters be open-sourced, including for commercial use?
The question will resolve on the date when such a model first becomes available for download under a license that permits free-of-charge commercial use. This explicitly includes licenses such as MIT, Apache, BSD, GPL, etc., and their derivatives, as long as free-of-charge commercial use is allowed. Additionally, the model must at least partially match the capabilities of GPT-3, in particular its strong few-shot learning ability (a minimal illustration follows below). Ongoing attempts at recreating GPT-3 do not count until they are declared finished by their authors.
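As a rough illustration of the few-shot criterion, the sketch below queries a language model with a handful of in-prompt examples and no fine-tuning. It uses the Hugging Face transformers library and the already-released GPT-2 1.5B checkpoint (gpt2-xl) as a convenient stand-in; the model name, prompt, and translation task are illustrative assumptions, and a qualifying 100B+ model would simply be queried the same way.

```python
# Few-shot prompting sketch: the model must infer the task (English-to-French
# translation) purely from two in-prompt examples, with no fine-tuning.
# "gpt2-xl" (the open GPT-2 1.5B release) is a stand-in, not a qualifying model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2-xl")

# Two worked examples, then the query the model is expected to complete.
prompt = (
    "English: cheese -> French: fromage\n"
    "English: house -> French: maison\n"
    "English: book -> French:"
)

# Greedy decoding of a few new tokens; the continuation should ideally be "livre".
output = generator(prompt, max_new_tokens=3, do_sample=False)
print(output[0]["generated_text"])
```

GPT-3 was shown to handle prompts like this far more reliably than GPT-2, and that gap is the few-shot capability the resolution criterion refers to.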