Created by: MetaculusOutlooks and co-authors
Forecasting AI Progress

This question is part of the Maximum Likelihood Round of the Forecasting AI Progress Tournament. You can view all other questions in this round here.

arXiv is a repository of electronic preprints approved for posting after moderation, but not full peer review. It consists of scientific papers in the fields of mathematics, physics, astronomy, electrical engineering, computer science, quantitative biology, statistics, mathematical finance and economics, which can be accessed online.

Many machine learning articles are posted on arXiv before publication. In theoretical computer science and machine learning, over 60% of published papers have arXiv e-prints (Sutton et al., 2017).
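Counts of e-prints in a category can be read from the public arXiv Atom API, which reports the total number of matching records in its `opensearch:totalResults` element. The sketch below only builds the query URL (the category code `cs.LG` is an illustrative choice, not taken from this question):

```python
# Build an arXiv API query URL for counting e-prints in a category.
# Assumption: the public Atom endpoint at export.arxiv.org/api/query;
# with max_results=0 the response still carries the total record count
# in <opensearch:totalResults>.
from urllib.parse import urlencode

ARXIV_API = "http://export.arxiv.org/api/query"

def count_query_url(category: str) -> str:
    """Return a query URL whose Atom response reports the total
    number of e-prints filed under `category`."""
    params = {"search_query": f"cat:{category}", "start": 0, "max_results": 0}
    return f"{ARXIV_API}?{urlencode(params)}"

print(count_query_url("cs.LG"))
```

Fetching that URL and parsing the `totalResults` field is left out here to keep the sketch offline and self-contained.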

Few-shot learning methods explicitly optimize machine learning models to predict new classes from only a few labelled examples per class. Few-shot learners use prior knowledge and can generalize to new tasks that contain only a few samples with supervised information (Wang et al., 2020).
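One common few-shot approach can be sketched as nearest-centroid classification in an embedding space, in the spirit of prototypical networks: average the few labelled support examples of each class into a "prototype", then assign each query to the nearest prototype. The identity embedding and toy data below are illustrative assumptions, not part of this question:

```python
# Minimal sketch of few-shot classification via class prototypes
# (nearest-centroid). Assumption: inputs are already embedded as
# feature vectors; a learned embedding network would normally
# produce them.
import numpy as np

def predict_few_shot(support_x, support_y, query_x):
    """Classify queries by Euclidean distance to the mean
    ("prototype") of each class's few labelled support examples."""
    classes = np.unique(support_y)
    prototypes = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(query_x[:, None, :] - prototypes[None, :, :], axis=-1)
    return classes[np.argmin(dists, axis=1)]

# 2-way, 2-shot toy episode
support_x = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
support_y = np.array([0, 0, 1, 1])
query_x = np.array([[0.1, 0.0], [1.0, 0.9]])
print(predict_few_shot(support_x, support_y, query_x))  # → [0 1]
```

No gradient updates are needed at test time: the prior knowledge lives in the embedding, and new classes are handled purely through their support prototypes.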