The Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset consisting of questions posed by crowdworkers on a set of Wikipedia articles. The answer to every question is a segment of text, or span, from the corresponding reading passage, or the question may be unanswerable. SQuAD1.1 was introduced in 2016 by Rajpurkar et al.
In 2018, Rajpurkar et al. introduced SQuAD2.0, which combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible (as in SQuAD1.1) but also determine when no answer is supported by the paragraph and abstain from answering.
As of this writing, the best model is SA-Net on Albert (ensemble), which achieves an exact-match score of 90.724%, meaning its predictions match the ground truth exactly 90.724% of the time. Notably, this exceeds human performance, which has an exact-match rate of only 86.83%.
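To make the exact-match metric concrete, here is a minimal sketch in Python. It follows the normalization steps conventionally used for SQuAD scoring (lowercasing, removing punctuation and the articles "a", "an", "the", and collapsing whitespace); the function names are illustrative, not from any particular library.

```python
import re
import string

def normalize_answer(s):
    """Normalize text before comparison: lowercase, drop punctuation,
    remove articles, and collapse extra whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction, gold_answers):
    """Score 1 if the normalized prediction equals any normalized gold
    answer, else 0. For SQuAD2.0, an unanswerable question is scored
    against an empty string (the model must abstain to get credit)."""
    return int(any(normalize_answer(prediction) == normalize_answer(g)
                   for g in gold_answers))
```

A model's overall exact-match score is simply this 0/1 value averaged over all questions, so 90.724% means the normalized prediction matched a gold answer on 90.724% of questions.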