Today, we’re introducing a new tournament scoring system that’s clearer, more consistent, and that rewards more forecasters.
The new system is simple: the sum of Peer scores determines a forecaster’s tournament rank, and their winnings are proportional to the square of that sum.
The new system applies to all new tournaments, and to very long-term tournaments with few resolved questions. (Find a full list below.)
Our tournament scoring was groundbreaking when we introduced it in 2021. It has served us well over the last three years, aligning incentives and distributing more than $50,000 in prizes across dozens of tournaments.
The new system offers a few key advantages:
In words:
You receive a Peer Score for each tournament question, with zeros for any you don’t predict. Your Total Score is the sum. If it’s positive, congratulations: you just won a share of the prize pool. Your share is proportional to the square of your Total Score. Your Total Score also determines your tournament ranking.
Not in words:
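Stated symbolically (a sketch of the rule above, using our own shorthand: $S_i$ for forecaster $i$'s Total Score, $P$ for the prize pool):

```latex
S_i = \sum_{q} \mathrm{PeerScore}_{i,q}
\qquad \text{(questions you skip contribute } 0\text{)}

\mathrm{Prize}_i = P \cdot \frac{\max(S_i,\,0)^2}{\sum_j \max(S_j,\,0)^2}
```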
To maximize your Total Score, always predict your honest belief.
Further:
- Forecasting all questions and forecasting early are now more important. Previously, predicting half of a tournament’s questions or joining halfway through would win you ~39% of the prize you would’ve otherwise won. Now it wins you ~25%: your Total Score is roughly halved, and your winnings scale with its square.
- If we apply the new scoring to past tournaments, we find that on average the top forecaster would have won a smaller prize, but the top 10 would have cumulatively won more prize money. We expect far fewer cases where the top forecaster receives 90%+ of the prize.
- We think these design choices create better incentives and reward users for forecasting more. Read about the trade-offs and alternatives we considered here: New Tournament Scoring: Trade-offs, Decisions.
Imagine a tournament with 5 forecasters, 30 questions, and a prize pool of $1000.
Their scores and winnings are:
| Forecaster | Total Score | Take (= Total Score², if positive) | % of Prize | Prize |
|---|---|---|---|---|
| A | 40×30 = 1200 | 1200² = 1,440,000 | 76.2% | $762 |
| B | 40×30×½ = 600 | 600² = 360,000 | 19.0% | $190 |
| C | 10×30 = 300 | 300² = 90,000 | 4.8% | $48 |
| D | -20×30 = -600 | 0 | 0% | $0 |
| E | -50×30 = -1500 | 0 | 0% | $0 |
| Sum | 0 | 1,890,000 | 100% | $1000 |
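For the curious, here is a short Python sketch (our illustration, not an official implementation) that reproduces the table above:

```python
# Sketch of the new prize rule: each forecaster's take is the square of their
# Total Score if it is positive (otherwise zero), and the prize pool is split
# in proportion to takes. Numbers match the example table above.
total_scores = {
    "A": 40 * 30,        # average Peer score of 40 on all 30 questions -> 1200
    "B": 40 * 30 * 0.5,  # same skill, half the questions -> 600
    "C": 10 * 30,        # -> 300
    "D": -20 * 30,       # negative Total Score -> no prize
    "E": -50 * 30,
}
prize_pool = 1000

takes = {f: max(s, 0) ** 2 for f, s in total_scores.items()}
total_take = sum(takes.values())

for forecaster, take in takes.items():
    share = take / total_take
    print(f"{forecaster}: share {share:.1%}, prize ${prize_pool * share:.0f}")
# A: share 76.2%, prize $762
# B: share 19.0%, prize $190
# C: share 4.8%, prize $48
# D: share 0.0%, prize $0
# E: share 0.0%, prize $0
```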
Some tournaments will run for years to come. We don’t want to retain both scoring systems for longer than necessary. We also appreciate that you may have had certain scoring expectations at the time of your forecasts. To balance these, most tournaments will continue using the legacy scoring system, except for tournaments that:
Here are the full lists:
Two special cases not mentioned above:
Biosecurity Tournament: the $15,000 for the 2024 questions will be awarded according to the old scoring system. The rest of the prize ($10,000) for the 2032 questions will use the new scoring system.
Forecasting Our World in Data: the $8,000 awarded in 2025 for the 1-year questions will use the old scoring system. The other $8,000 awarded in 2027 for the 3-year questions will use the new scoring system.
Note that the current (Q1 2024) Quarterly Cup will use the old scoring system, but future iterations will use the new system.
For simplicity, all Question Series will switch to the new system, since they don’t award prizes or medals.
If you're unsure which scoring system a particular tournament uses, open its leaderboard: if there is a "Coverage" column, the tournament is using the legacy scoring. The new scoring system does not use Coverage, so that column is omitted.
We look forward to reading your feedback, questions, and ideas below!