In this article, computer scientist Gary Marcus threw down the gauntlet:
...allow me to propose a Turing Test for the twenty-first century: build a computer program that can watch any arbitrary TV program or YouTube video and answer questions about its content.... no existing program—not Watson, not Goostman, not Siri—can currently come close to doing what any bright, real teenager can do: watch an episode of “The Simpsons,” and tell us when to laugh.
For the purposes of this question, assume that a dataset has been created by labeling at least 100 episodes of a television comedy (one without a laugh track or studio audience, and preferably, but not necessarily, The Simpsons).
Using at most 25 of those episodes in its training corpus, when will an ML system first achieve 90% of human accuracy when tested on 25 different, randomly chosen episodes?
Fine print:
- The accuracy metric is left unspecified, but it should essentially compare the points in each episode at which the system and a human viewer each indicate "I laughed or smiled here" (one possible metric is sketched after this list). The human accuracy baseline can be drawn directly from the labeled data, since the labels record human comedic judgment.
- The training set can include other videos, but at most 25 episodes of the comedy in question.
- It is of course uncertain that such a dataset will be developed (though the author encourages it), or that it would become a significant target of ML research. If no ML papers attempting such a test have been published by 2030, the question resolves as ambiguous.
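
To make the comparison concrete, here is a minimal sketch of one way such a metric could be scored, assuming laugh labels are timestamps in seconds and that a prediction counts as correct when it falls within a small tolerance window of a human label. Everything here is an illustrative assumption rather than part of the question: the function names, the 2-second tolerance, the greedy matching, the F1-style score, and the idea of measuring "human accuracy" by scoring one rater against another.

```python
from bisect import bisect_left

def matched_count(pred_times, human_times, tol=2.0):
    """Count predictions landing within `tol` seconds of a human laugh label.
    Each human label may be matched at most once (greedy, in time order)."""
    human = sorted(human_times)
    used = [False] * len(human)
    hits = 0
    for t in sorted(pred_times):
        i = bisect_left(human, t)
        # The nearest labels sit at the insertion point or just before it.
        for j in (i - 1, i):
            if 0 <= j < len(human) and not used[j] and abs(human[j] - t) <= tol:
                used[j] = True
                hits += 1
                break
    return hits

def laugh_f1(pred_times, human_times, tol=2.0):
    """F1 of predicted laugh points against one human rater's labels."""
    if not pred_times or not human_times:
        return 0.0
    hits = matched_count(pred_times, human_times, tol)
    if hits == 0:
        return 0.0
    precision = hits / len(pred_times)
    recall = hits / len(human_times)
    return 2 * precision * recall / (precision + recall)

# Illustrative data: laugh timestamps, in seconds into one episode.
rater_a = [12.0, 45.5, 90.2, 133.0]   # reference human labels
rater_b = [11.5, 46.0, 131.8, 200.0]  # second human rater, used as the baseline
model   = [12.3, 47.9, 90.0]          # system predictions

system_score = laugh_f1(model, rater_a)
human_score = laugh_f1(rater_b, rater_a)  # "human accuracy" via inter-rater agreement
print(f"system={system_score:.2f}, human={human_score:.2f}, "
      f"meets 90% bar: {system_score >= 0.9 * human_score}")
```

Scoring one human rater against another sidesteps the fact that there is no single ground truth for what is funny; the 90% bar then asks the system to come close to ordinary inter-rater agreement, averaged over the 25 held-out test episodes.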