
The Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset consisting of questions posed by crowdworkers on a set of Wikipedia articles. The answer to every question is a segment of text, or span, from the corresponding reading passage, or the question may be unanswerable. The original SQuAD was introduced in 2016 by Rajpurkar et al.

In 2018, Rajpurkar et al. introduced SQuAD2.0, which combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible (as in SQuAD1.1) but also determine when no answer is supported by the paragraph and abstain from answering.
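
For illustration, a SQuAD2.0-style record pairs each question with either an answer span or a flag marking it unanswerable. The minimal sketch below uses the field names from the public SQuAD2.0 JSON release (`question`, `answers`, `plausible_answers`, `is_impossible`); the specific questions and offsets are hypothetical.

```python
# Sketch of two SQuAD2.0-style question records (hypothetical values).
# Answerable: the gold answer is a span of the passage, given as text plus start offset.
answerable = {
    "question": "In what year was SQuAD introduced?",
    "answers": [{"text": "2016", "answer_start": 42}],
    "is_impossible": False,
}

# Unanswerable: no gold answer; a system should abstain (predict an empty string).
# "plausible_answers" records the distractor span the crowdworker had in mind.
unanswerable = {
    "question": "Which dataset replaced Wikipedia passages in SQuAD2.0?",
    "answers": [],
    "plausible_answers": [{"text": "SQuAD1.1", "answer_start": 0}],
    "is_impossible": True,
}
```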

As of writing this question, the best model is SA-Net on Albert (ensemble), which achieves an exact-match score of 90.724% (meaning its predicted answers match the ground truth exactly 90.724% of the time). Notably, this exceeds human performance, which achieves an exact-match score of only 86.83%.
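
As a rough sketch of the metric (not the official SQuAD evaluation script), exact match can be computed by normalizing both strings (lowercasing, stripping punctuation and the articles a/an/the, collapsing whitespace) and checking equality; for unanswerable questions the model must abstain by predicting an empty string. The predictions and gold answers below are hypothetical.

```python
import re
import string

def normalize(s: str) -> str:
    """Lowercase, drop punctuation and the articles a/an/the, collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in string.punctuation)
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction: str, gold_answers: list[str]) -> bool:
    """True if the normalized prediction equals any normalized gold answer.

    For an unanswerable question, the gold answer is the empty string, so the
    model scores an exact match only by abstaining (predicting "").
    """
    if not gold_answers:
        gold_answers = [""]
    return any(normalize(prediction) == normalize(g) for g in gold_answers)

# Example: exact-match percentage over a tiny hypothetical prediction set.
preds = {"q1": "the Eiffel Tower", "q2": ""}
golds = {"q1": ["Eiffel Tower"], "q2": []}          # q2 is unanswerable
em = 100.0 * sum(exact_match(preds[q], golds[q]) for q in preds) / len(preds)
print(f"Exact match: {em:.3f}%")
```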