
AI Safety & other: 2021 through 2026

This question is part of the Maximum Likelihood Round of the Forecasting AI Progress Tournament. You can view all other questions in this round here.


arXiv is a repository of electronic preprints approved for posting after moderation, but not full peer review. It consists of scientific papers in the fields of mathematics, physics, astronomy, electrical engineering, computer science, quantitative biology, statistics, mathematical finance and economics, which can be accessed online.

Many machine learning articles are posted on arXiv before formal publication. In theoretical computer science and machine learning, over 60% of published papers have arXiv e-prints (Sutton et al., 2017).

AI Safety refers to a field aimed at developing techniques for designing AI systems that do not display unintended and harmful behaviour (Amodei et al., 2016). A related problem is the (lack of) transparency and interpretability of complicated ML systems. Transparency and interpretability techniques aim to generate insights about what ML systems are doing. Such techniques may enable meaningful human oversight and help in building fair, safe, and aligned AI systems (Olah, 2018).

How many e-prints on AI Safety, interpretability or explainability will be published on arXiv over the 2021-01-01 to 2026-12-31 period?

This question resolves as the total number of AI Safety, interpretability or explainability e-prints published on arXiv over the 2021-01-01 to 2026-12-31 period (inclusive), as per the e-print's "original submission date".

Details of the search query

For the purpose of this question, the count includes e-prints published under Computer Science that contain any of the following keywords in "all fields" (i.e. the abstract and title):

"ai safety", "ai alignment", "aligned ai", "value alignment problem", "reward hacking", "reward tampering", "tampering problem", "safe exploration", "robust to distributional shift", "scalable oversight", "explainable AI", "interpretable AI", "explainable model", "verification for machine learning", "verifiable machine learning", "interpretable model", "interpretable machine learning", "cooperative inverse reinforcement learning", "value learning", "iterated amplification", "preference learning", "AI safety via debate", "reward modeling", "logical induction"

The query should include cross-listed papers (papers listed on other subjects besides Computer Science). You can execute the query here.
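For reproducibility, a similar count can be obtained programmatically via the arXiv API rather than the web interface. The sketch below is illustrative only: it assumes the public export.arxiv.org Atom endpoint, that submittedDate range filters and the cat:cs.* wildcard behave as commonly documented, and it abbreviates the keyword list (substitute the full list above in practice):

    # Illustrative sketch: count matching e-prints via the arXiv API.
    # Assumes the public Atom endpoint at http://export.arxiv.org/api/query.
    import re
    import urllib.parse
    import urllib.request

    # Abbreviated here; substitute the full keyword list given above.
    KEYWORDS = ['"ai safety"', '"ai alignment"', '"interpretable machine learning"']

    # all: searches all fields; cat:cs.* restricts to Computer Science
    # (cross-listed papers still carry a cs category, so they are included).
    terms = " OR ".join(f"all:{kw}" for kw in KEYWORDS)
    query = (f"cat:cs.* AND ({terms}) "
             "AND submittedDate:[202101010000 TO 202701010000]")

    url = "http://export.arxiv.org/api/query?" + urllib.parse.urlencode(
        {"search_query": query, "max_results": 0})
    with urllib.request.urlopen(url) as resp:
        feed = resp.read().decode("utf-8")

    # The Atom response reports the hit count in <opensearch:totalResults>.
    m = re.search(r"<opensearch:totalResults[^>]*>(\d+)<", feed)
    print(m.group(1) if m else "count not found")

Counts obtained this way may differ slightly from the web interface, since arXiv's search backend and field definitions change over time; the question resolves per the query linked above.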

Running this query for previous years gives:

  • 80 for the calendar year 2017
  • 127 for the calendar year 2018
  • 275 for the calendar year 2019
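As a purely illustrative baseline (not part of the resolution criteria), one can fit a constant annual growth rate to these three data points and extrapolate over the question period:

    # Naive constant-growth baseline fitted to the 2017-2019 counts.
    # Purely illustrative; actual growth may accelerate or saturate.
    counts = {2017: 80, 2018: 127, 2019: 275}

    # Geometric-mean annual growth rate across the observed two-year span.
    rate = (counts[2019] / counts[2017]) ** 0.5

    # Extrapolate each year in the question window and sum.
    projected = {y: counts[2019] * rate ** (y - 2019) for y in range(2021, 2027)}
    total = sum(projected.values())
    print(f"growth ~{rate:.2f}x/year; projected 2021-2026 total ~{total:,.0f}")

Such a simple extrapolation should be treated with caution: keyword usage, field size, and arXiv posting norms all shift over a six-year horizon.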
