The Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset consisting of questions posed by crowdworkers on a set of Wikipedia articles. The answer to every question is a segment of text, or span, from the corresponding reading passage; alternatively, the question may be unanswerable. SQuAD was introduced in 2016 by Rajpurkar et al.
In 2018, Rajpurkar et al. introduced SQuAD2.0, which combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible (as in SQuAD1.1) but also determine when no answer is supported by the paragraph and abstain from answering.
As of this writing, the best model is SA-Net on Albert (ensemble), which achieves an Exact Match score of 90.724% (its predictions match a ground-truth answer exactly 90.724% of the time). Notably, this exceeds human performance, which achieves an Exact Match rate of only 86.83%.
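To make the metric concrete, here is a simplified sketch of how Exact Match is typically computed. The official SQuAD evaluation script normalizes answers (lowercasing, stripping punctuation, articles, and extra whitespace) before comparing; this is an illustrative approximation of that logic, not the official script.

```python
import re
import string

def normalize(text):
    """Lowercase, strip punctuation, articles, and extra whitespace
    (approximates the normalization in the official SQuAD eval script)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold_answers):
    """1 if the normalized prediction equals any normalized gold answer.
    Here an unanswerable question is represented by an empty gold-answer
    list, and a correct abstention by an empty prediction string."""
    if not gold_answers:  # unanswerable: model must abstain
        return int(prediction == "")
    return int(any(normalize(prediction) == normalize(g) for g in gold_answers))

# Aggregate EM over a toy set of (prediction, gold answers) pairs.
examples = [
    ("The Eiffel Tower", ["the Eiffel Tower."]),  # matches after normalization
    ("Paris, France", ["Paris"]),                 # no match
    ("", []),                                     # correct abstention
]
em = 100.0 * sum(exact_match(p, g) for p, g in examples) / len(examples)
print(f"Exact Match: {em:.2f}%")
```

The leaderboard figure discussed above is this percentage computed over the full hidden test set, with multiple gold answers per answerable question.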
What will the highest Exact Match rate of the best-performing model on SQuAD2.0 be on 2023-02-14?
This question resolves as the Exact Match score of the best-performing model on SQuAD2.0, as displayed on the relevant leaderboard at 11:59 PM GMT on 2023-02-14.
Performance figures may be taken from e-prints, conference papers, peer-reviewed articles, and blog articles by reputable AI labs (including the associated code repositories). Published performance figures must be available before 11:59 PM GMT on 2023-02-14 to qualify.
In case the relevant leaderboard is not maintained, other credible sources should be consulted.
In case the relevant performance figure is given as a confidence interval, the central value of the interval will be used to resolve the question.