By Peter Mühlbacher, Research Scientist at Metaculus, and Peter Scoblic, Director of Nuclear Risk at Metaculus
In this post, we report the results of a recent analysis we conducted exploring the performance of all AI-related forecasts on the Metaculus platform, including an investigation of the factors that enhance or degrade accuracy.
Most significantly, we found that both the Community and Metaculus Predictions robustly outperform naïve baselines. The recent claim that performance on binary questions is “near chance” rests either on sampling only a small subset of the forecasting questions we have posed or on the questionable proposition that a Brier score of 0.207 is akin to a coin flip. What’s more, forecasters performed even better on continuous questions, as measured by the continuous ranked probability score (CRPS). In sum, on both binary and continuous questions, the Community Prediction and the Metaculus Prediction provide clear and useful insight into the future of artificial intelligence, despite not being “perfect.”
We reviewed Metaculus’s resolved binary questions (“What is the probability that X will happen?”) and resolved continuous questions (“What will be the value of X?”) that were related to the future of artificial intelligence. For the purpose of this analysis, we defined AI-related questions as those which belonged to one or more of the following categories: “Computer Science: AI and Machine Learning”; “Computing: Artificial Intelligence”; “Computing: AI”; and “Series: Forecasting AI Progress.” This gave us: 64 resolved binary questions (with 10,497 forecasts by 2,052 users) and 88 resolved continuous questions (with 13,683 predictions by 1,114 users). Our review of these forecasts found:
This chart details the performance of both the Community and Metaculus predictions on binary and continuous questions. Please note that, for all scores, lower is better and that Brier scores, which range from 0 to 1 (where 0 represents oracular omniscience and 1 represents complete anticipatory failure) are roughly comparable to continuous ranked probability scores (CRPS) given the way we conducted our analysis. (For more on scoring methodology, see below.)
We can use Brier scores to measure the quality of a forecast on binary questions. Given that a Brier score is the mean squared error of a forecast, the following things are true:
So, how do we judge the value of a Brier score of 0.207? Is it fair to say that it is close to “having no idea”?
No. Here’s why. Let’s assume for a moment that your forecasts are perfectly calibrated. In other words, if you say something happens with probability p, it actually happens with probability p. We can map the relationship between any question whose “true probability” is p and the Brier score you would receive for forecasting that the probability is p, giving us a graph like this:
This shows that even a perfectly calibrated forecaster will achieve a Brier score worse than 0.207 when the true probability of a question is between 30% and 70%. So, to achieve an overall Brier score better than 0.207, one would have to have forecast on a reasonable number of questions whose true probability was less than 29% or greater than 71%. In other words, even a perfectly calibrated forecaster could wind up with a Brier score near 0.25, depending on the true probabilities of the questions they were predicting. So, assuming a sufficient number of questions, the idea that one could get a Brier score of 0.207 simply by chance is untenable. Remember: predicting 50% on every question would give you a Brier score of 0.25 (which is 19% worse) and random guessing would give you a Brier score of 0.33 (which is 57% worse).
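As a quick check on this relationship, the expected Brier score of a perfectly calibrated forecast of $p$ works out to $p(1 - p)$. A minimal sketch (the function name is ours):

```python
def expected_brier(p: float) -> float:
    """Expected Brier score of a perfectly calibrated forecast of p:
    with probability p the outcome is 1 (score (p - 1)^2), and with
    probability 1 - p it is 0 (score p^2); this simplifies to p * (1 - p)."""
    return p * (1 - p) ** 2 + (1 - p) * p ** 2

# Even perfect calibration scores 0.25 on a true 50/50 question:
print(expected_brier(0.5))              # 0.25
# 0.207 corresponds to true probabilities of roughly 29% or 71%:
print(round(expected_brier(0.29), 4))   # 0.2059
```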
Metaculus makes no claim that the Community Prediction is perfectly calibrated, but neither do we have enough information to claim that it is not well-calibrated. Using 50% confidence intervals for the Community Prediction’s “true probability” (given the fraction of questions resolving positively), we find that about half of them intersect the line y = x, which represents perfect calibration:
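A calibration check of this kind can be sketched as follows, given per-question (forecast, outcome) pairs. The binning scheme and the normal-approximation interval here are our own simplification for illustration, not Metaculus’s exact method:

```python
import math

def calibration_bins(forecasts, outcomes, n_bins=5):
    """Group (forecast, outcome) pairs into probability bins and report,
    per non-empty bin, the mean forecast, the observed resolution rate,
    and a rough 50%-confidence interval (normal approximation) for it."""
    bins = [[] for _ in range(n_bins)]
    for p, o in zip(forecasts, outcomes):
        idx = min(int(p * n_bins), n_bins - 1)
        bins[idx].append((p, o))
    rows = []
    z = 0.674  # two-sided 50% normal quantile
    for b in bins:
        if not b:
            continue
        n = len(b)
        mean_p = sum(p for p, _ in b) / n
        rate = sum(o for _, o in b) / n
        half = z * math.sqrt(max(rate * (1 - rate), 1e-9) / n)
        rows.append((mean_p, rate, rate - half, rate + half))
    return rows
```

A bin is consistent with calibration when its interval crosses the y = x line, i.e. when the mean forecast falls between the interval’s endpoints.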
A simulation can help us understand what Brier scores to expect, and how much they would fluctuate, on our set of 64 binary AI questions if we assume each resolves independently as Yes with probability equal to its average Community Prediction. Resampling outcomes repeatedly, we get the following distribution, which shows that even if the (average) Community Prediction were perfectly calibrated, it would score worse than 0.207 nearly a quarter of the time:
If we don’t have enough data to reject the hypothesis that the Community Prediction is perfectly calibrated, then we certainly cannot conclude that it is “near chance.” This analysis in no way suggests that the Community Prediction is perfectly calibrated or that it is the best it could be. It simply illustrates that a Brier score of 0.207 over 64 questions is far better than “near chance,” especially when we consider that forecasting performance is partly a function of question difficulty. We suspect that AI-related questions tend to be intrinsically harder than many other questions, reinforcing the utility of the Community Prediction. The Metaculus Prediction’s Brier score of 0.182 is better still.
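The resampling simulation described above can be sketched like this (the forecast values here are illustrative placeholders, not the actual 64 Community Predictions):

```python
import random

def simulate_brier_scores(probs, n_sims=10_000, seed=0):
    """Assume each question resolves Yes independently with probability
    equal to its (average) community forecast, resample outcomes, and
    return the distribution of mean Brier scores across simulations."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n_sims):
        total = 0.0
        for p in probs:
            outcome = 1.0 if rng.random() < p else 0.0
            total += (p - outcome) ** 2
        scores.append(total / len(probs))
    return scores

# Illustrative forecasts; the real analysis used the 64 resolved AI questions.
probs = [0.1, 0.25, 0.4, 0.6, 0.8, 0.9]
scores = simulate_brier_scores(probs)
frac_worse = sum(s > 0.207 for s in scores) / len(scores)
```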
Many of the most meaningful AI questions on Metaculus require answers in the form of a continuous range of values, such as, “When will the first general AI system be devised, tested, and publicly announced?” We assessed the accuracy of continuous forecasts, finding that the Community and Metaculus predictions for continuous questions robustly outperform naïve baselines. Just as predictions on binary questions should outperform simply predicting 50% (which yields a Brier score of 0.25), predictions on continuous questions should outperform simply predicting a uniform distribution of possible outcomes (which yields a CRPS of 0.172 on the questions in this analysis).
Here, again, both the Community Prediction (0.096) and the Metaculus Prediction (0.103) were significantly better than baseline. In fact, the Community and Metaculus predictions performed considerably better on continuous questions than on binary questions. We can bootstrap the set of resolved questions to simulate how much scores could fluctuate, and we find that the fluctuations would have to conspire against us in the most unfortunate possible way (p<0.1%) to achieve even the baseline you’d get by predicting a uniform distribution. As we can see from the histograms below, it is more difficult for luck to account for a CRPS better than baseline than it is for a Brier score. So, if we cannot say that a Brier score of 0.207 is near chance, we certainly cannot say that a CRPS of 0.096 is near chance.
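The bootstrap here can be sketched as follows, assuming a list of per-question CRPS values (the sample values below are hypothetical; the real analysis used the 88 resolved continuous questions):

```python
import random

def bootstrap_means(scores, n_boot=10_000, seed=0):
    """Resample the per-question scores with replacement and record the
    mean of each resample, to see how much the overall score could
    fluctuate under resampling of the question set."""
    rng = random.Random(seed)
    n = len(scores)
    return [sum(rng.choice(scores) for _ in range(n)) / n
            for _ in range(n_boot)]

# Hypothetical per-question CRPS values for illustration:
per_question = [0.02, 0.05, 0.08, 0.10, 0.12, 0.15, 0.09, 0.16]
boot = bootstrap_means(per_question)
# Fraction of resamples that do no better than the uniform baseline:
p_worse_than_uniform = sum(m >= 0.172 for m in boot) / len(boot)
```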
Metaculus asks a wide range of questions related to artificial intelligence, some of which are more tightly coupled to A(G)I timelines than others. The AI categories cover a wide range of subjects, including:
Being mistaken about fundamental drivers of AI progress, such as hardware access, can impair the accuracy of forecasts on more decision-relevant questions, such as the timing of AGI. While accurate knowledge of these issues is necessary for reliable forecasts in all but the very long term, it might not be sufficient. In other words, a good track record across all these questions doesn't guarantee that predictions on any specific AI question will be accurate. The optimal reference class for evaluating forecasting track records is still an open question, but for now, this is the best option we have.
Forecaster Charles Dillon has also grouped questions to explore whether Metaculus tends to be overly optimistic or pessimistic regarding AI timelines and capabilities. Although we haven't had enough resolved questions since his analysis to determine if his conclusions have changed, his work complements this study nicely. We plan to perform additional forecast accuracy analyses in the future.
All scoring rules below are chosen such that lower = better.
All scoring rules below are strictly proper scoring rules, i.e. predicting your true beliefs gets you the best score in expectation.
The Brier score for a prediction $p$ on a binary question with outcome $o \in \{0, 1\}$ is $(p - o)^2$. So:
An average Brier score higher than 0.25 means that we’re better off just predicting 50% on every question.
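In code, the Brier score and the 50% baseline look like this (a minimal sketch; the function name is ours):

```python
def brier(p: float, outcome: int) -> float:
    """Brier score (p - o)^2 for a forecast p on a binary question
    with outcome o in {0, 1}; lower is better."""
    return (p - outcome) ** 2

# Predicting 50% scores 0.25 no matter how the question resolves:
print(brier(0.5, 0), brier(0.5, 1))   # 0.25 0.25
# A confident, correct forecast scores near 0; a confident, wrong one near 1:
print(brier(0.95, 1), brier(0.95, 0))   # ~0.0025 ~0.9025
```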
On Metaculus, forecasts on continuous questions are submitted in the form of a probability density function $f$ such that $\int f(x)\,\mathrm{d}x = 1$.
Some questions (all older questions) have “closed bounds”, i.e. they are formulated in a way that the outcome cannot be below the lower bound ($a$) or above the upper bound ($b$). Newer questions can have any of the four combinations of a closed/open lower bound and a closed/open upper bound.
For the analysis it is convenient to shift and rescale bounds & outcomes such that outcomes within bounds lie in $[0, 1]$.
The continuous ranked probability score for a prediction with cumulative distribution function $F$ on a continuous question with outcome $x$ is given by $\mathrm{CRPS}(F, x) = \int (F(y) - \mathbf{1}\{y \geq x\})^2\,\mathrm{d}y$. This is equivalent to averaging the Brier scores of all the binary predictions implicitly defined by the CDF, i.e. predictions of the form “will the outcome be at most $y$?”, which allows us to compare continuous questions with binary questions.
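Numerically, the CRPS on the rescaled $[0, 1]$ domain can be approximated by exactly this averaging of implied binary Brier scores. A sketch (the grid-based quadrature is our own simplification):

```python
def crps(cdf, outcome, grid_size=10_000):
    """Approximate CRPS(F, x) = integral of (F(y) - 1{y >= x})^2 dy
    over [0, 1] by averaging, on a grid of thresholds y, the Brier
    score of the implied binary prediction "the outcome is at most y"."""
    total = 0.0
    for i in range(grid_size):
        y = (i + 0.5) / grid_size
        indicator = 1.0 if y >= outcome else 0.0
        total += (cdf(y) - indicator) ** 2
    return total / grid_size

# A uniform forecast on [0, 1] has CDF(y) = y; its CRPS against an
# outcome x is x^2 - x + 1/3 (1/3 at the edges, 1/12 at x = 0.5).
uniform = lambda y: y
print(round(crps(uniform, 0.5), 4))   # 0.0833
```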
Given predictions $p_1, \dots, p_n$, submitted at times $t_1 \leq \dots \leq t_n$ on a question that closes at time $T$, we define the score of the sequence to be $\frac{1}{T - t_1} \sum_{i=1}^{n} (t_{i+1} - t_i)\, S(p_i)$, where $t_{n+1} := T$ and $S(p_i)$ is the (Brier or CRPS) score of prediction $p_i$. This is just a time-weighted average of the scores at each point in time.
Concretely, if the first prediction arrives at time $t_1$, the second prediction arrives at time $t_2$, and the question closes at time $T$ with $T - t_2 = 2(t_2 - t_1)$, then the score is $\frac{1}{3}$ times the score of the first prediction and $\frac{2}{3}$ times the score of the second prediction, because the second prediction was “active” twice as long.
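The time-weighting can be sketched as follows (function and variable names are ours):

```python
def time_weighted_score(predictions, close_time, score):
    """Average the score of each prediction, weighted by how long it was
    the active forecast. `predictions` is a list of (time, forecast)
    pairs sorted by time; `score` maps a forecast to its score."""
    total, weight = 0.0, 0.0
    for i, (t, forecast) in enumerate(predictions):
        t_next = predictions[i + 1][0] if i + 1 < len(predictions) else close_time
        total += (t_next - t) * score(forecast)
        weight += t_next - t
    return total / weight

# First prediction at t=0, second at t=1, close at t=3: the second is
# active twice as long, so the weights are 1/3 and 2/3.
s = time_weighted_score([(0, 0.4), (1, 0.7)], 3, lambda p: (p - 1) ** 2)
# s = (1/3) * 0.36 + (2/3) * 0.09 = 0.18
```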