Current machine learning techniques are data-hungry and brittle—they can only make sense of patterns they've seen before. Using current methods, an algorithm can gain new skills by exposure to large amounts of data, but cognitive abilities that could broadly generalize to many tasks remain elusive. This makes it very challenging to create systems that can handle the variability and unpredictability of the real world, such as domestic robots or self-driving cars.
François Chollet, creator of the Keras neural network library, describes in detail in his paper "On the Measure of Intelligence" the context and motivation behind a benchmark intended to test the general, broad intelligence of machines.
The Abstraction and Reasoning Corpus (ARC) provides a benchmark to measure AI skill acquisition on unknown tasks, with the constraint that only a handful of demonstrations are shown for learning a complex task. The setup is similar to Raven's Progressive Matrices IQ tests. Training examples can be inspected in the public ARC repository on GitHub.
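For concreteness, each ARC task is published as a JSON file containing a short "train" list of demonstration input/output pairs and a "test" list to be solved, with grids encoded as lists of lists of integers 0-9 (colors). The toy task below is a hypothetical illustration of that format; the transformation ("recolor 1 to 2") is invented for this sketch, not taken from the corpus:

```python
# Hypothetical ARC-style task in the published JSON structure:
# a few demonstration pairs, then a test input the solver must answer.
task = {
    "train": [
        {"input": [[0, 1], [1, 0]], "output": [[0, 2], [2, 0]]},
        {"input": [[1, 1], [0, 0]], "output": [[2, 2], [0, 0]]},
    ],
    "test": [
        {"input": [[0, 0], [0, 1]]}  # output grid must be predicted
    ],
}

def solve(grid):
    """Toy solver for this toy task: replace every 1 with 2."""
    return [[2 if cell == 1 else cell for cell in row] for row in grid]

# The demonstrations are the only supervision available per task.
for pair in task["train"]:
    assert solve(pair["input"]) == pair["output"]

print(solve(task["test"][0]["input"]))  # -> [[0, 0], [0, 2]]
```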
Recently, François Chollet launched a Kaggle competition based on ARC, with a total prize pool of $20,000.
This question asks whether the 1st-place winner will achieve a top-3 error rate of 0.8 or less, as defined in the Kaggle competition evaluation. The question will resolve positive if that threshold is passed, negative otherwise. To achieve it, an AI will need to answer correctly within 3 tries on only 20% of tasks. Passing this threshold will also unlock an additional $3,000 for the top competitors.
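To make the metric concrete, here is a minimal sketch of a per-task top-3 error rate, assuming (as the competition description suggests) that a task counts as solved if any of up to three predicted grids exactly matches the ground-truth output. The function name and inputs are illustrative, not the official Kaggle scoring code:

```python
def top3_error_rate(predictions, ground_truth):
    """Fraction of tasks where none of the (up to 3) candidate grids
    exactly matches the true output grid."""
    failures = sum(
        1
        for guesses, truth in zip(predictions, ground_truth)
        if not any(guess == truth for guess in guesses[:3])
    )
    return failures / len(ground_truth)

# Two hypothetical tasks: task 1 is solved by the second guess,
# task 2 is missed entirely, so the error rate is 0.5.
preds = [
    [[[1]], [[2]]],  # task 1: two candidate 1x1 grids
    [[[0]]],         # task 2: one candidate 1x1 grid
]
truth = [[[2]], [[5]]]
print(top3_error_rate(preds, truth))  # -> 0.5

# The threshold in this question: error rate <= 0.8, i.e. at least
# 20% of tasks solved within three attempts.
```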
This question will close on May 18th 2020, the entry deadline for the competition, and resolve as soon as the final results are known.
ETA: Closing date changed.