SuperGLUE (Wang et al., 2019) is a benchmark for evaluating general-purpose language understanding systems. The set of eight tasks in the benchmark emphasises diverse task formats and low-data training regimes, with nearly half the tasks having fewer than 1k training examples and all but one having fewer than 10k.
As of writing this question, the state-of-the-art model is T5, the Text-To-Text Transfer Transformer (Raffel et al., 2019), which achieves an average score of 89.3, just below the human baseline of 89.8.
The SuperGLUE leaderboard may be accessed here.
What will the state-of-the-art performance on SuperGLUE be on 2021-06-14?
This question resolves as the highest level of performance achieved on SuperGLUE up until 2021-06-14, 11:59PM GMT, amongst models trained on any number of training set(s). Performance is given as a "score", which is the average of the per-task performance metrics (see Wang et al., 2019 for more details).
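To make the scoring concrete, the overall score is an unweighted mean of the per-task metrics. The sketch below uses the real SuperGLUE task names, but the metric values are hypothetical; note that in the official scoring, tasks reporting two metrics (e.g. MultiRC, ReCoRD) are first averaged internally before entering this mean.

```python
# Hypothetical per-task metrics (percentages); task names are the eight
# SuperGLUE tasks, but these numbers are illustrative, not real results.
per_task_scores = {
    "BoolQ": 91.2,
    "CB": 93.9,      # in practice: mean of F1 and accuracy
    "COPA": 94.8,
    "MultiRC": 88.1, # in practice: mean of F1a and exact match
    "ReCoRD": 94.1,  # in practice: mean of F1 and exact match
    "RTE": 92.5,
    "WiC": 76.9,
    "WSC": 93.8,
}

# The benchmark "score" is the simple unweighted average across tasks.
overall = sum(per_task_scores.values()) / len(per_task_scores)
print(round(overall, 1))  # → 90.7
```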
Performance figures may be taken from e-prints, conference papers, peer-reviewed articles, and blog articles by reputable AI labs (including the associated code repositories). Published performance figures must be available before 2021-06-14, 11:59PM GMT to qualify.
If the relevant performance figure is given as a confidence interval, the median value of that interval will be used to resolve the question.