
For predictions, should we enter our all-things-considered view, our all-things-considered-minus-Metaculus view, or our inside-view predictions?

According to the FAQ:

Your expected score is maximized if you provide the true probability.

But what does "true probability" actually mean here?
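(For context, the FAQ claim is the standard property of a proper scoring rule. Here is a minimal sketch, using a log score and made-up numbers purely for illustration rather than Metaculus's actual scoring function, showing that your expected score is highest when you report what you actually believe.)

```python
import numpy as np

# Illustration only (not Metaculus's actual scoring rule): with a proper score
# such as the log score, expected score is maximized by reporting your true belief.
p_true = 0.7                      # the probability you actually believe
reports = np.linspace(0.01, 0.99, 99)

# Expected log score of each possible report, given the event happens with p_true
expected = p_true * np.log(reports) + (1 - p_true) * np.log(1 - reports)

print(reports[np.argmax(expected)])  # ~0.70 — honest reporting maximizes expected score
```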

For example, on a question like "How many cases of COVID-19 will there be in 2020?", if I were trying to form my personal assessment of the situation, I would start with the outside view ("What does Metaculus think of this question right now?") and then adjust with various other meta-outside views ("What do superforecasters think?", "What are different epidemiologists' takes?"). Only afterwards (if I had more time) would I start looking into other ways to answer the question directly (thinking about base rates like seasonal flu, looking into past pandemics, looking directly at the epidemiological evidence like R0 and doubling times, looking at policies, etc.) to form my "all-things-considered view."

However, I think that (from the perspective of other users of a prediction platform) predictions that primarily get their thrust from the prediction platform itself don't add much signal (and may lead to double-counting). So it would be better for other people to aggregate my views into their all-things-considered views only if I'm reporting my inside-view predictions, rather than rolling my outside-view considerations back into the platform.
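(A toy simulation of the double-counting worry, with entirely made-up numbers: when forecasters mostly echo the existing community prediction instead of reporting their inside views, the aggregate barely incorporates new evidence.)

```python
import numpy as np

rng = np.random.default_rng(0)
true_p = 0.8       # probability supported by the underlying evidence (made up)
community = 0.5    # current community prediction (made up)

# 100 forecasters each see noisy inside-view evidence centred on true_p
inside_views = np.clip(rng.normal(true_p, 0.1, size=100), 0.01, 0.99)

# Case A: everyone reports their inside view — the aggregate tracks the evidence
print(np.median(inside_views))          # roughly 0.8

# Case B: everyone anchors 80% on the existing community prediction
anchored = 0.8 * community + 0.2 * inside_views
print(np.median(anchored))              # roughly 0.56 — mostly an echo of the old number
```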

On the flip side, I presumably have outside-view considerations that other users aren't accounting for (e.g., I read different epidemiologists, or I consult different prediction platforms), so using that evidence in my predictions does add information. But presumably all Metaculus users know of Metaculus...

My current compromise is to try not to be anchored on Metaculus's views, and report a "true probability" of "all things considered minus Metaculus."
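(One way to make "all things considered minus Metaculus" concrete — purely a hypothetical model, with the weight and probabilities made up: if you think of your all-things-considered view as your other evidence plus a weighted pull toward the community prediction in log-odds space, you can back that pull out before reporting.)

```python
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + np.exp(-x))

# Hypothetical numbers, for illustration only
all_things_considered = 0.30   # my view after looking at everything, Metaculus included
metaculus = 0.20               # the community prediction I partly anchored on
weight_on_metaculus = 0.5      # how heavily I think I leaned on it (a guess)

# Remove the (assumed) Metaculus contribution in log-odds space before reporting
minus_metaculus = inv_logit(logit(all_things_considered)
                            - weight_on_metaculus * logit(metaculus))
print(round(minus_metaculus, 2))   # ~0.46
```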

What do other users think?

PS.

These comments on an EA Forum post on epistemic modesty go into more detail about why it's helpful to report inside-view predictions rather than "all-things-considered views," to prevent over-updating.
