By Anthony on Aug. 4, 2019, 3:20 p.m. GMT
The domain system is a big under-the-hood improvement that has quietly gone live; we'll start making use of it soon.
Metaculus improves the decision-making capacity of organizations and individuals by providing an open, reputation-based platform where people can improve their predictive abilities and organizations can calibrate their thinking according to calibrated and aggregated probabilities of future events.
Metaculus is in the process of adding new functionalities and growing the site in exciting ways. We want to bring onto the team a full-stack web developer who loves Metaculus and wants to see it thrive. We’re a small team, so you have a real opportunity here to influence the direction of the site and to make a big impact.
You'll notice a smattering of new questions, which have been imported from a recently-begun IARPA forecasting competition. Here are some salient facts.
Back in 2010, the US Intelligence Advanced Research Projects Agency (IARPA) launched a program on "Aggregative Contingent Estimation". This program produced a lot of very interesting research on crowd-based prediction via solicitation and aggregation, much of which informed Metaculus's conception (though there has been no funding or collaboration between Metaculus and IARPA).
IARPA is now running a "Hybrid Forecasting Competition" testing methods of combining human and machine prediction. As part of this program they are running a public-facing Geopolitical Forecasting Challenge in which teams are invited to download questions and submit predictions on them by aggregating IARPA-provided human predictions, along with whatever other methods they choose.
While Metaculus has decided not to enter this competition as a full-fledged competitor (since doing so would distract from our own focus), we have set up the machinery to download questions and make predictions, and will put some live on Metaculus that we think might be of interest to our community. We'll then feed the Metaculus prediction back to the competition. This, we hope, will be an interesting exercise in comparing the Metaculus aggregation method (and predictions) to others.
If you have concerns about providing predictions that will (in aggregate) be fed to IARPA servers, you can of course simply not make predictions on these questions (which are labeled and can be found in this category).
Metaculus now has a new type of question: private questions. Private questions are questions that only you can see on the site. They aren't moderated, so you can post one and predict on it right away. You can resolve your own private questions at any time, but points for private predictions won't be added to your overall Metaculus score and they won't affect your ranking on the leaderboard.
We've made some improvements to the Metaculus track record page: questions can now be filtered by date and question category, making it easier to drill down into the data. For example, it's easy to see that in the last 6 months the Metaculus prediction has outperformed the community prediction with a Brier score that's 33% lower. It's also easy to see that Metaculus performed especially well on biological questions (Brier score 75% lower than the community's), and both the Metaculus and community predictions missed the mark for cryptocurrency questions (both with scores > 0.24). In addition to the filters, there's now a histogram to clearly show the distributions of scores.
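As a refresher on how these comparisons work, the Brier score is the mean squared difference between a probability forecast and the binary outcome, so lower is better. Here's a minimal sketch with made-up forecasts (not the actual track-record data) showing how a relative "X% lower" comparison is computed:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and binary
    outcomes (1 = happened, 0 = didn't). Lower is better; always
    guessing 50% scores 0.25."""
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical example: the same three questions scored for two forecasts.
outcomes = [1, 0, 1]
community = brier_score([0.7, 0.4, 0.6], outcomes)  # ~0.137
metaculus = brier_score([0.8, 0.3, 0.7], outcomes)  # ~0.073

# Relative improvement, as in "a Brier score that's 33% lower":
improvement = 1 - metaculus / community
```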
The results are in, and big congratulations go to our winners jzima and Barbarossa and our runners-up AndrewMcKnight and ElliotOlds. But how did everyone else do? In short, not so well.
Our winner received 748 points, but the median score amongst all cryptocurrency players was only 7 points and the average score was way down at −73 points. Only 25 players broke 100 points. Player overconfidence combined with our log scoring system really hurt here: just one very overconfident prediction could easily send someone's score tumbling down from the top of the ranks. That said, there was a top tier of eight players who got at least 590 points (with a large gap separating them from the 9th-place player), and, as we'll see below, they used different confidence strategies to get there.
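To see why one overconfident miss is so costly, here's a toy illustration using the plain logarithmic score (the log of the probability assigned to what actually happened); this is not Metaculus's exact point formula, but it captures the asymmetry:

```python
import math

def log_score(p, outcome):
    """Logarithmic score of a binary forecast: ln of the probability
    assigned to the realized outcome. 0 is the best possible score;
    it grows unboundedly negative as a confident forecast misses."""
    return math.log(p if outcome else 1 - p)

# A correct 60% forecast gains little relative to what a 99% miss loses:
modest_hit = log_score(0.60, True)    # roughly -0.51
confident_miss = log_score(0.99, False)  # roughly -4.61, ~9x worse
```

One 99% prediction on an event that fails to happen wipes out many modestly-confident successes, which is why overconfident players tumbled down the ranks.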
After some internal deliberating and data crunching, we're finally ready to announce the results of the 2017 Cryptocurrency Prediction Competition. First we'll say a few words on the reason for delay, then the results.
As those following the competition know, we got stalled on four questions that were based on the highest or total amount of initial coin offerings (ICOs) in October and November. It turns out that the data for ICOs is terribly incomplete and unreliable, with numbers varying dramatically from one source to another. On top of this, it seems fairly clear that there is an order-of-magnitude uncertainty in the ICO funds actually raised by Paragon. In the end, we decided to resolve these questions per their letter, in keeping with Metaculus's general policy. However, in recognition that these questions could be considered flawed because they were based on a misleading data source, we've also decided to issue prizes to the two predictors who would have won had we resolved these four questions as ambiguous.
So, without further ado, the winners are: