Created by: jacob.pfau and co-authors
AI Technical Benchmarks

Recent natural language processing (NLP) models have succeeded in generating human-level text and translations. However, questions remain about the extent to which this success reflects genuine understanding rather than memorization of statistical patterns.

A recent paper showed that when statistical cues are removed, state-of-the-art NLP models fail on argument reasoning tasks, even though human performance is unaffected. Untrained humans achieve roughly 80% accuracy on this argument reasoning task, whereas recent NLP models perform near 50%.
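To make "statistical cues" concrete: assuming the task is a two-way choice (consistent with models scoring near the 50% chance level), a single token that appears more often on the correct side than the incorrect one lets a model beat chance without any reasoning. Below is a minimal sketch of measuring such cues over a toy dataset, in the spirit of the productivity/coverage analysis used in this line of work. The data format, the `unigram_cues` helper, and the toy examples are illustrative assumptions, not code from the paper.

```python
from collections import Counter

def unigram_cues(instances):
    """For each unigram, count how often it 'applies' (appears in exactly
    one of the two candidate answers) and how often the answer containing
    it is the correct one. Hypothetical simplification for illustration."""
    applies = Counter()
    hits = Counter()
    for answer0, answer1, label in instances:  # label is 0 or 1
        tokens0 = set(answer0.lower().split())
        tokens1 = set(answer1.lower().split())
        for tok in tokens0 ^ tokens1:  # tokens present in exactly one answer
            applies[tok] += 1
            if (tok in tokens1) == bool(label):
                hits[tok] += 1
    n = len(instances)
    # productivity: how often the cue marks the correct answer when it applies
    # coverage: how often the cue applies at all
    return {tok: (hits[tok] / k, k / n) for tok, k in applies.items()}

# Toy data: the token "not" marks the correct answer in 2 of 3 instances.
data = [
    ("banks can be trusted", "banks can not be trusted", 1),
    ("taxes do not help growth", "taxes help growth", 0),
    ("the law is fair", "the law is not fair", 0),
]
for tok, (productivity, coverage) in sorted(unigram_cues(data).items()):
    print(f"{tok!r}: productivity={productivity:.2f}, coverage={coverage:.2f}")
```

A cue with coverage near 1.0 and productivity well above 0.5 (such as "not" in the toy data) is exactly the kind of signal whose removal would be expected to drive model accuracy back toward the 50% chance level.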