
Question


By May 2020, will a single language model obtain an average score equal to or greater than 90% on the SuperGLUE benchmark?

Cross-posted on Metaculus AI Forecasting.

The SuperGLUE benchmark measures progress in language understanding tasks.

The original benchmark, GLUE (General Language Understanding Evaluation), is a collection of language understanding tasks built on established existing datasets and selected to cover a diverse range of dataset sizes, text genres, and degrees of difficulty. The tasks were sourced from a survey of ML researchers, and the benchmark was launched in mid-2018. Several models have since surpassed the GLUE human baseline.

The new SuperGLUE benchmark contains a set of more difficult language understanding tasks. The human baseline on SuperGLUE is 89.8. As of July 19th, 2019, the best-performing ML model is BERT++, with a score of 71.5. Will language model performance progress enough that, by next year, a model reaches superhuman performance on the SuperGLUE benchmark?
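For reference, the headline leaderboard score is an unweighted average of per-task scores, with tasks that report two metrics (e.g. CB, MultiRC, ReCoRD) first averaged into a single task score. Below is a minimal sketch of that aggregation in Python; the task names are the real SuperGLUE tasks, but the per-task scores are illustrative placeholders, not any model's actual results:

```python
from statistics import mean

# Hypothetical per-task scores (0-100). Tasks with two metrics
# (CB: F1/accuracy, MultiRC: F1a/EM, ReCoRD: F1/EM) are reduced
# to a single number by averaging the metrics first.
task_scores = {
    "BoolQ": 77.0,
    "CB": mean([75.0, 83.0]),      # F1, accuracy
    "COPA": 70.0,
    "MultiRC": mean([70.0, 24.0]), # F1a, EM
    "ReCoRD": mean([72.0, 71.0]),  # F1, EM
    "RTE": 79.0,
    "WiC": 69.0,
    "WSC": 64.0,
}

# The leaderboard's headline number: unweighted macro-average.
overall = mean(task_scores.values())
print(f"Overall SuperGLUE score: {overall:.1f}")
```

An aggregate computed this way must reach 90.0, just above the 89.8 human baseline, for the question to resolve positively.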

Will a single language model obtain an average score equal to or greater than 90% on the SuperGLUE benchmark at any time before May 1st, 2020, 12:00:01 a.m. GMT?

This question resolves positively if, according to the public SuperGLUE benchmark leaderboard, a single entry has a score of 90% or higher. The question closes and resolves retroactively, 48 hours before the first such score is listed on the SuperGLUE benchmark leaderboard.
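To make the retroactive-resolution rule concrete, here is a small sketch assuming a hypothetical list of (listing time, model, score) leaderboard entries; only the BERT++ row reflects a figure cited above, and the other entries are invented for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical leaderboard entries: (listing time UTC, model, score).
entries = [
    (datetime(2019, 7, 19), "BERT++", 71.5),
    (datetime(2020, 2, 1),  "model-A", 89.3),  # illustrative only
    (datetime(2020, 3, 15), "model-B", 90.4),  # illustrative only
]

THRESHOLD = 90.0

# Find the first entry meeting the threshold, in listing order.
qualifying = next(
    ((t, name, s) for t, name, s in sorted(entries) if s >= THRESHOLD),
    None,
)

if qualifying:
    listed_at, name, score = qualifying
    # The question closes and resolves 48 hours before that listing.
    resolve_at = listed_at - timedelta(hours=48)
    print(f"Resolves positively on {name} ({score}); "
          f"close/resolution time: {resolve_at:%Y-%m-%d %H:%M} UTC")
else:
    print("No qualifying entry; question remains open.")
```

The 48-hour rollback ensures that predictions entered after a qualifying result became public do not count toward scoring.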



