The Hutter Prize is a 50,000€ prize for compressing human knowledge. The competition's stated mission is "to encourage development of intelligent compressors/programs as a path to AGI." Since Wikipedia is argued to be a good proxy for human world knowledge, the prize benchmarks the compression progress of algorithms using the enwik8 dataset, a representative 100 MB extract of English Wikipedia.
Since 2006, the Hutter Prize has galvanized not only data scientists but also many AI researchers who believe that text compression and AI are essentially two sides of the same coin. Compression algorithms work by finding patterns in data and are therefore predictive in nature. Furthermore, many machine learning researchers would agree that systems with better predictive models possess more "understanding," and more intelligence in general.
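The compression-prediction link can be made concrete: a model that assigns probability p(xᵢ | x₁…xᵢ₋₁) to each character can, via an entropy coder such as arithmetic coding, compress the text to roughly its cross-entropy in bits. A minimal sketch of that calculation, using hypothetical per-character probabilities rather than any real model's output:

```python
import math

def cross_entropy_bpc(char_probs):
    """Average -log2 p over characters: the bits-per-character an ideal
    arithmetic coder would achieve using this predictive model."""
    return -sum(math.log2(p) for p in char_probs) / len(char_probs)

# Hypothetical probabilities a model might assign to four successive
# characters; better prediction (probabilities closer to 1) means
# fewer bits needed to encode the text.
probs = [0.5, 0.25, 0.9, 0.8]
bpc = cross_entropy_bpc(probs)
```

In this sense, lowering bits-per-character on enwik8 is equivalent to building a better predictive model of the text.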
Bits-per-character (the number of bits required per character) for compression of enwik8 is the de facto unit for measuring Hutter Prize compression progress. In 2016, the state of the art was set at 1.313 bits-per-character using Surprisal-Driven Zoneout, a regularization method for RNNs.
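The metric itself is straightforward: compressed size in bits divided by the number of input characters. A minimal sketch (this is only an illustration of the arithmetic, not the prize's evaluation harness, and zlib stands in for a real contender):

```python
import zlib

def bits_per_character(original: bytes, compressed: bytes) -> float:
    # enwik8 is evaluated byte-wise, so characters == bytes here.
    return 8 * len(compressed) / len(original)

# Stand-in text; a real measurement would use the actual enwik8 file.
sample = b"Wikipedia is a free online encyclopedia that anyone can edit. " * 200
bpc = bits_per_character(sample, zlib.compress(sample, level=9))
```

General-purpose compressors like zlib land well above the neural state of the art on natural-language text, which is what makes the sub-1.0 threshold a meaningful milestone.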
In what year will a language model generate sequences with less than 1.0 bits-per-character on the enwik8 dataset?
Resolution occurs when a method achieves less than 1.0 bits-per-character on enwik8.