


Introducing Elicit, a tool for converting your beliefs into probabilities

Hey everyone, Ought has been working on Elicit, a tool for converting your beliefs into probabilities. With Elicit, forecasters can focus on the things they enjoy most, such as reasoning about likely scenarios, factors, and the world in general. Elicit then takes care of the rest, like fitting a curve and filling in the gaps, mixing in the community distribution, and submitting on Metaculus. Many of you have requested interfaces like this so we hope you’ll find it useful!

Elicit currently works for numerical questions on linear scales. Here’s an Elicit fit for the question “What fraction of United States adults will be vaccinated against the 2020–2021 seasonal influenza?”. Elicit is still a wee app so we humbly request your patience with its issues and imagination for its potential 😛. We’re happy to receive feedback, bugs, despairs, and hopes at elicit@ought.org.
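To make the pipeline concrete, here is a minimal sketch of the kind of thing a tool like this does: fit a parametric curve to a couple of elicited quantiles, then blend the result with the community distribution. Elicit’s actual fitting and mixing procedures aren’t described here, so everything below (the normal-distribution family, the two-quantile fit, the fixed mixture weight) is an illustrative assumption, not Elicit’s implementation.

```python
from statistics import NormalDist

def fit_normal(p_lo: float, q_lo: float, p_hi: float, q_hi: float) -> NormalDist:
    """Fit a normal distribution to two elicited quantiles.

    Illustrative assumption: Elicit may use a different distribution
    family and fitting procedure entirely.
    """
    z = NormalDist()  # standard normal, used to convert probabilities to z-scores
    z_lo, z_hi = z.inv_cdf(p_lo), z.inv_cdf(p_hi)
    sigma = (q_hi - q_lo) / (z_hi - z_lo)
    mu = q_lo - sigma * z_lo
    return NormalDist(mu, sigma)

def mixture_pdf(user: NormalDist, community: NormalDist,
                weight: float, x: float) -> float:
    """Density of a weighted mixture of the user's fit and the community distribution."""
    return weight * user.pdf(x) + (1 - weight) * community.pdf(x)

# Example belief: "25% chance the value is below 40, 75% chance it's below 55."
user = fit_normal(0.25, 40.0, 0.75, 55.0)
community = NormalDist(50.0, 10.0)  # hypothetical community distribution
density = mixture_pdf(user, community, 0.7, 47.5)
```

The fitted curve reproduces the stated quantiles exactly (here the implied median is 47.5), and the mixture step is just a weighted sum of densities; a real tool would also handle asymmetric beliefs, bounded scales, and more than two quantiles.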

Over the next few weeks, we’ll iron out kinks and get your feedback on how useful building on this seems. We’re quite excited about it; internally we’ve found ourselves wanting to use it throughout the day to reify statements about how likely we think things are. If this meets a need in the community, some of the things we’ll work on next include:

  • Expanding to more question types on Metaculus
  • Expanding to other prediction platforms
  • Allowing users to build forecasts by relating to other forecasts (e.g. “Event X must happen after Event Y so make sure the peak of the distribution for Event X is after Event Y”)
  • Enriching the types of beliefs users can express (e.g. “2020-2030 is more likely than 2030-2040 but I don’t know how likely either is in absolute terms”)

Ultimately, people should be able to focus on the reasoning that they are best at and enjoy most, delegating the rest to machines. By delegating more reasoning to machines, we hope to improve forecasting outcomes (since machines can already reason better than people about some of the tasks forecasters do by hand today), make judgmental forecasting more scalable, and teach machines how to reason about open-ended questions.
