Last year, OpenAI announced its big project for that year: GPT-2, a transformer-based language model representing a significant advance in language modeling capabilities.
On February 17th, an article in the MIT Technology Review reported:
One of the biggest secrets is the project OpenAI is working on next. Sources described it to me as the culmination of its previous four years of research: an AI system trained on images, text, and other data using massive computational resources. A small team has been assigned to the initial effort, with an expectation that other teams, along with their work, will eventually fold in. On the day it was announced at an all-company meeting, interns weren’t allowed to attend. People familiar with the plan offer an explanation: the leadership thinks this is the most promising way to reach AGI.
This question resolves on the date when OpenAI publishes a document of any kind (blog post, paper, etc.) describing a large machine learning model that was trained on both images and text, along with other data, using massive computational resources (>10^4 petaflop/s-days, as estimated and judged by the Metaculus mods). If OpenAI does not unveil its secret project before April 2022, this question resolves ambiguously.
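For intuition on the >10^4 petaflop/s-days threshold, here is a minimal sketch of how such a compute estimate might be made. The `6 * params * tokens` rule of thumb for training FLOPs and the example model sizes are assumptions for illustration, not part of the question's resolution criteria:

```python
# One petaflop/s-day = 1e15 FLOP/s sustained for one day.
PFLOP = 1e15
SECONDS_PER_DAY = 86_400

def pfs_days(total_flops: float) -> float:
    """Convert total training FLOPs to petaflop/s-days."""
    return total_flops / (PFLOP * SECONDS_PER_DAY)

def training_flops(params: float, tokens: float) -> float:
    """Estimate training compute via the common ~6 * N * D heuristic
    (an assumption here, not stated in the question)."""
    return 6 * params * tokens

# Hypothetical example: a 100B-parameter model trained on 300B tokens.
flops = training_flops(100e9, 300e9)
estimate = pfs_days(flops)
print(round(estimate))        # ~2083 petaflop/s-days
print(estimate > 1e4)         # False: below the question's threshold
```

Such back-of-the-envelope estimates are what the Metaculus mods would presumably rely on, since exact training compute is rarely disclosed.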