

What will be the best score in the 2019/2020 Winograd Schema AI challenge?

The "Winograd Schema Challenge" was devised by Hector Levesque, to test the ability of a machine intelligences system to understand the meaning of various sentences in a way that goes beyond pattern matching of syntactics. A question might be, for example:

Babar wonders how he can get new clothing. Luckily, a very rich old man who has always been fond of little elephants understands right away that he is longing for a fine suit. As he likes to make people happy, he gives him his wallet.

Who is longing for a fine suit? Who likes to make people happy? Who gives a wallet? Who is a wallet given to?

These questions cannot be answered effectively by the sort of pattern matching that underlies much machine transcription, translation, etc., but humans can answer them fairly readily (with a typical human scoring 90%+ on this sort of test).
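To make the task format concrete, here is a minimal sketch in Python of how a Winograd-style item and a percent-correct score might be represented. The `WinogradItem` fields and the `score` helper are illustrative assumptions, not the official competition format or evaluation harness.

```python
from dataclasses import dataclass

@dataclass
class WinogradItem:
    """One Winograd-style question (illustrative structure, not the official format)."""
    passage: str            # short narrative containing the ambiguous pronoun
    question: str           # e.g. "Who is longing for a fine suit?"
    candidates: list[str]   # possible referents, e.g. ["Babar", "the rich old man"]
    answer: str             # gold referent

def score(predictions: dict[str, str], items: list[WinogradItem]) -> float:
    """Percent of items whose predicted referent matches the gold answer."""
    correct = sum(1 for item in items if predictions.get(item.question) == item.answer)
    return 100.0 * correct / len(items)

# Example item built from the Babar passage quoted above.
item = WinogradItem(
    passage=("Babar wonders how he can get new clothing. Luckily, a very rich old man "
             "who has always been fond of little elephants understands right away that "
             "he is longing for a fine suit."),
    question="Who is longing for a fine suit?",
    candidates=["Babar", "the rich old man"],
    answer="Babar",
)

print(score({item.question: "Babar"}, [item]))  # 100.0
```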

Machine systems are tested against this schema in annual competitions sponsored by Nuance and organized by commonsensereasoning.org. In the latest competition, the best competitor scored about 60%. (Because many of the questions are binary, this is not a terribly impressive score; the full 2016 question list is here.)
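As a rough illustration of why ~60% is only modestly above chance on a mostly binary test, here is a back-of-the-envelope calculation with purely hypothetical question counts (the actual 2016 mix may differ):

```python
# Chance baseline for a test of mostly binary questions.
# The counts below are hypothetical, for illustration only.
n_binary, n_three_way = 50, 10
expected_correct = n_binary * 0.5 + n_three_way * (1 / 3)
chance_accuracy = 100 * expected_correct / (n_binary + n_three_way)
print(f"{chance_accuracy:.1f}%")  # ~47%, so 60% is only modestly above guessing
```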

The 2018 Challenge did not take place due to several late cancellations. The timing of the next challenge is unclear, but seems likely to be either 2019 or 2020.

What score, in percentage correct, will the top entrant achieve in the 2019 or 2020 Winograd Schema Challenge (whichever is held next after 2016)?

There are two rounds, and this question applies to the first round only. It resolves as ambiguous if the test is not held by the end of 2020, or if there are significant changes from the 2016 competition such that the scores are not directly comparable. The question closes retroactively at noon one day before the listed date of the competition.

