On May 31st, 2022, prominent deep learning skeptic and NYU professor emeritus Gary Marcus challenged Elon Musk to a bet on AGI by the end of 2029. His proposed bet consisted of five AI achievements, of which he predicted no more than two would come to pass before 2030. This question is about Marcus's first prediction:
In 2029, AI will not be able to watch a movie and tell you accurately what is going on (what I called the comprehension challenge in The New Yorker, in 2014). Who are the characters? What are their conflicts and motivations? etc.
For this challenge, we will use the MovieQA dataset as an illustrative example of a benchmark that could trigger positive resolution:
The dataset consists of 14,944 questions about 408 movies with high semantic diversity. The questions range from simpler "Who" did "What" to "Whom", to "Why" and "How" certain events occurred. Each question comes with a set of five possible answers: one correct and four deceiving answers provided by human annotators.
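To make the scoring concrete, here is a minimal sketch of how accuracy on a MovieQA-style multiple-choice benchmark might be computed. The field names (`question`, `answers`, `correct_index`) and the `resolves_positively` helper are hypothetical illustrations, not the actual MovieQA file format or an official resolution procedure.

```python
from dataclasses import dataclass

@dataclass
class MovieQAItem:
    """One MovieQA-style question (hypothetical structure, for illustration)."""
    question: str          # e.g. "Why does the protagonist flee the city?"
    answers: list[str]     # five candidate answers
    correct_index: int     # index (0-4) of the correct answer

def accuracy(items: list[MovieQAItem], predictions: list[int]) -> float:
    """Fraction of questions where the predicted answer index is correct."""
    hits = sum(pred == item.correct_index
               for item, pred in zip(items, predictions))
    return hits / len(items)

def resolves_positively(acc: float, human_baseline: float) -> bool:
    """Thresholds taken from the resolution criteria below: at least 90.0%
    accuracy, or accuracy above the human baseline."""
    return acc >= 0.90 or acc > human_baseline
```

Note that with five candidate answers per question, random guessing yields roughly 20% accuracy, so the 90.0% threshold in the criteria below sits far above chance.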
Will AI be able to watch a movie and tell you accurately what is going on before 2030?
This question will resolve positively if, before January 1st, 2030, a computer program is publicly and credibly documented to have achieved either at least 90.0% accuracy or performance above the human baseline on a benchmark comparable to the MovieQA dataset, when restricted to watching only the movies tested (rather than reading plots, subtitles, scripts, or human-provided transcriptions of the movies). Any candidate benchmark must provide difficult questions that test deep comprehension, including questions of how and why, rather than mere shallow pattern matching.
Importantly, this means that any candidate computer program must not have been given access to media that could reasonably have been expected to spoil the plot of any of these movies during its training (for example, the Wikipedia pages for these movies). The AI may still be trained on other media, such as Project Gutenberg books. This restriction is intended merely to eliminate cheating, not to require any capabilities beyond what Gary Marcus specified.
A simple way to prove that a candidate computer program did not cheat is to show that all of the data the AI was trained on was generated before the movies in the benchmark were released. However, this is not the only acceptable way of demonstrating that cheating did not occur.
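As an illustration of that timestamp check, the sketch below assumes a corpus of training documents with known creation dates; the data structures and names are hypothetical and do not correspond to any real training pipeline.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TrainingDoc:
    """One document in the training corpus (hypothetical structure)."""
    source: str    # e.g. a Project Gutenberg identifier
    created: date  # when the document was generated

def no_possible_spoilers(docs: list[TrainingDoc],
                         release_dates: list[date]) -> bool:
    """True if every training document predates the earliest movie release,
    which rules out spoiler material for any movie in the benchmark."""
    earliest_release = min(release_dates)
    return all(doc.created < earliest_release for doc in docs)
```

Passing this check would be sufficient but, as noted above, not necessary: a program trained on later data could still qualify if its corpus is credibly shown to contain no spoiler material.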
Metaculus admins will use their discretion in determining whether a candidate computer program met these criteria.