An active area of research in artificial intelligence is language modelling: the task of learning a probability distribution over the next word in a sentence given all the previous words. To measure the performance of these models, researchers often use the Penn Treebank dataset, a large collection of sentences from Wall Street Journal articles. They then measure the word-level perplexity of their model, which intuitively is the weighted average number of words the model thinks might occur next at any point in time. More specifically, if our test set is $\{(w_i, c_i)\}_{i=1}^{N}$, where $w_i$ is a word, $c_i$ is the context for that word, and $i$ ranges from $1$ to $N$, the number of words, then the perplexity of model $M$ on that test set is $$\mathrm{PPL}(M) = \exp\left(-\frac{1}{N} \sum_{i=1}^{N} \log P_M(w_i \mid c_i)\right).$$ Basically, lower perplexities are better.
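To make the definition concrete, here is a minimal sketch of the computation in Python, assuming we already have the model's per-word log probabilities $\log P_M(w_i \mid c_i)$ for the test set (the function name and the example probabilities are illustrative, not from any particular paper):

```python
import math

def perplexity(log_probs):
    """Word-level perplexity from per-word natural-log probabilities.

    log_probs[i] is the model's log P(w_i | c_i) for the i-th test word.
    """
    n = len(log_probs)
    return math.exp(-sum(log_probs) / n)

# Sanity check: a model that assigns probability 1/4 to every word
# should have a perplexity of exactly 4.
uniform = [math.log(0.25)] * 10
print(perplexity(uniform))  # → 4.0
```

This matches the intuition above: a perplexity of $k$ means the model is, on average, as uncertain as if it were choosing uniformly among $k$ words at each step.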
The best perplexity achieved so far by a published model that the author can find is 52.8, from the paper "Regularizing and Optimizing LSTM Language Models" by Merity, Keskar, and Socher, published at ICLR 2018. Better results have been achieved using dynamic evaluation, which continues to train the model on test data as it is being evaluated; however, we will disregard those results, focusing only on the perplexities of pre-trained models.
In this question, we ask: among all pre-trained models in papers accepted to prominent AI conferences in 2019, what will be the lowest perplexity achieved on the Penn Treebank dataset?
For the purpose of this question, the list of prominent AI conferences is ICLR, ICML, NeurIPS, AAAI, AISTATS, NAACL, EMNLP-IJCNLP, UAI, ACL, IJCAI, and COLT. The author reserves the right to add conferences to this list if he thinks they should be on it, and promises not to use this power to rig the question.