
SOTA on SQuAD2.0 2023-02-14


The Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset consisting of questions posed by crowdworkers on a set of Wikipedia articles. The answer to every question is a segment of text (a span) from the corresponding reading passage, or the question may be unanswerable. SQuAD1.0 was introduced in 2016 by Rajpurkar et al.

In 2018, Rajpurkar et al. introduced SQuAD2.0, which combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible (as in SQuAD1.1) but also determine when no answer is supported by the paragraph and abstain from answering.

As of writing this question, the best model is SA-Net on Albert (ensemble), which achieves an Exact Match score of 90.724% (its predictions match the ground truth exactly on 90.724% of questions). Notably, this exceeds human performance, which stands at an Exact Match rate of only 86.83%.
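To make the metric concrete, here is a minimal sketch of how Exact Match is scored on SQuAD2.0, simplified from the conventions of the official evaluation script (lowercasing, stripping punctuation and articles, and requiring abstention on unanswerable questions). The function names and example strings are illustrative, not taken from the official code.

```python
import re
import string

def normalize(text: str) -> str:
    """Normalize an answer: lowercase, drop punctuation and the articles
    a/an/the, and collapse whitespace (following the conventions of the
    official SQuAD evaluation script)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold_answers: list[str]) -> bool:
    """Score 1 if the normalized prediction matches any gold answer.
    For unanswerable questions, gold_answers is empty and the model
    must predict the empty string (i.e. abstain) to score."""
    if not gold_answers:  # unanswerable question: abstaining is the only correct output
        return prediction == ""
    return any(normalize(prediction) == normalize(g) for g in gold_answers)

# One answerable and one unanswerable example (illustrative strings):
print(exact_match("The Eiffel Tower", ["Eiffel Tower"]))  # True after normalization
print(exact_match("Paris", []))                           # False: should have abstained
```

The corpus-level Exact Match figure quoted above is simply the mean of this per-question 0/1 score over the evaluation set; the abstention branch is what makes SQuAD2.0 strictly harder than SQuAD1.1.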

What will the highest Exact Match rate of the best-performing model on SQuAD2.0 be on 2023-02-14?

This question resolves as the Exact Match score of the best-performing model on SQuAD2.0, as displayed on the relevant leaderboard at 11:59 PM GMT on 2023-02-14.

Performance figures may be taken from e-prints, conference papers, peer-reviewed articles, and blog articles by reputable AI labs (including the associated code repositories). Published performance figures must be available before 11:59 PM GMT on 2023-02-14 to qualify.

In case the relevant leaderboard is not maintained, other credible sources should be consulted.

In case the relevant performance figure is given as a confidence interval, the median value will be used to resolve the question.
