
Best Penn Treebank perplexity of 2019?

An active area of research in artificial intelligence is language modelling: the task of learning a probability distribution over the next word in a sentence given all the previous words. To measure the performance of these models, researchers often use the Penn Treebank dataset, a large collection of sentences published in the Wall Street Journal. They then measure the word-level perplexity of their model, which intuitively is the weighted average number of words the model thinks might occur next at any point in time. More specifically, if our test set is $\{(w_i, c_i)\}_{i=1}^{N}$, where $w_i$ is a word, $c_i$ is the context for that word, and $i$ ranges from 1 to $N$, the number of words, then the perplexity of model $M$ on that test set is

$$\mathrm{PPL}(M) = \exp\!\left(-\frac{1}{N}\sum_{i=1}^{N} \ln M(w_i \mid c_i)\right).$$

Basically, lower perplexities are better.
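To make the formula concrete, here is a minimal Python sketch of the perplexity calculation; the function name and the toy numbers are illustrative assumptions, not part of the original question.

```python
import math

def word_level_perplexity(log_probs):
    """Word-level perplexity from per-word natural-log probabilities.

    log_probs: an iterable of ln M(w_i | c_i), one entry per word in the
    test set. Returns exp(-(1/N) * sum_i ln M(w_i | c_i)).
    """
    log_probs = list(log_probs)
    n = len(log_probs)
    return math.exp(-sum(log_probs) / n)

# Toy check: a model that assigns probability 1/50 to every word of a
# 1000-word test set has perplexity exactly 50.
print(word_level_perplexity([math.log(1 / 50)] * 1000))  # 50.0
```

By this measure, a model with perplexity 52.8 is on average about as uncertain at each step as if it were choosing uniformly among roughly 53 candidate words.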

The best perplexity achieved so far by a published model that the author can find is 52.8, achieved in the paper Regularizing and Optimizing LSTM Language Models, published by Merity, Keskar, and Socher at ICLR 2018. Better results have been achieved using dynamic evaluation, which continues to train the model on the test data as it is evaluated; however, we will disregard those results and focus only on the perplexities of pre-trained models.

In this question, we ask: what will be the lowest perplexity on the Penn Treebank dataset achieved by a pre-trained model in a paper accepted to a prominent AI conference in 2019?

For the purpose of this question, the list of prominent AI conferences is ICLR, ICML, NeurIPS, AAAI, AISTATS, NAACL, EMNLP-IJCNLP, UAI, ACL, IJCAI, and COLT. The author reserves the right to add conferences to this list if he thinks they should be on it, and promises not to use this power to rig the question.
