
When will multi-modal ML outperform uni-modal ML?

Human infants learn by integrating information across senses -- sight, sound, touch, and so on -- but current state-of-the-art machine learning models usually train on only one such input type. It remains to be seen whether integrating data across modalities is necessary for achieving human-level intelligence.

In contemporary machine learning (ML) research, the data types of greatest interest are images, text, graphs, and video. State-of-the-art models in each of these domains train only on inputs from that specific domain; call this uni-modal training. By extension, if a model trains on two or more of these input types while being evaluated on only one, call that multi-modal training with uni-modal evaluation; a toy sketch of the distinction follows this paragraph. For the purposes of this question, only uni-modal evaluation tasks are of interest, so robotics and driving benchmarks are out of the question.
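
To make the distinction concrete, here is a minimal, purely illustrative sketch in PyTorch. Every layer size, vocabulary size, and module name is a made-up assumption, not a description of any real state-of-the-art system; the point is only that the model fuses an image branch and a text branch during training, while the benchmark evaluation feeds it images alone:

    import torch
    import torch.nn as nn

    class MultiModalClassifier(nn.Module):
        """Toy model: multi-modal training, uni-modal (image-only) evaluation."""
        def __init__(self, num_classes=1000, embed_dim=256, vocab_size=30000):
            super().__init__()
            # Image branch: a tiny CNN standing in for a real vision backbone.
            self.image_encoder = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(32, embed_dim),
            )
            # Text branch: mean-pooled token embeddings standing in for a language model.
            self.text_encoder = nn.EmbeddingBag(vocab_size, embed_dim)
            self.classifier = nn.Linear(embed_dim, num_classes)

        def forward(self, images, token_ids=None):
            features = self.image_encoder(images)
            if token_ids is not None:
                # Multi-modal training: fuse text features when captions are available.
                features = features + self.text_encoder(token_ids)
            return self.classifier(features)

    model = MultiModalClassifier()

    # Training sees both modalities, e.g. images paired with captions...
    images = torch.randn(4, 3, 224, 224)
    captions = torch.randint(0, 30000, (4, 16))
    training_logits = model(images, captions)

    # ...but the uni-modal benchmark (e.g. ImageNet) evaluates on images alone.
    model.eval()
    with torch.no_grad():
        eval_logits = model(images)

Only the image-only evaluation would matter for this question; the text inputs serve purely as an extra training signal, which is what "multi-modal training with uni-modal evaluation" means here.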

Question Description: When will a multi-modal trained model outperform the previous state of the art on one of the following uni-modal benchmarks:

  1. ImageNet
  2. WikiText-103
  3. Cityscapes
  4. Additional uni-modal benchmarks from paperswithcode.com may be added to reflect trends in machine learning research. I will review paperswithcode.com two and four years after this question opens and ask the moderators to add the two most popular benchmarks that have more new entries (since June 1, 2020) than at least two-thirds of the benchmarks above. If a newly added benchmark involves the same data type as one of the benchmarks above (e.g. image classification, language modeling, image segmentation) and has more new entries, then the old benchmark is superseded and removed from the list; a rough sketch of the selection rule follows this list.
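
As a rough illustration of how item 4 would be applied, the sketch below uses hypothetical entry counts (the real numbers would be read off paperswithcode.com at review time); only the two-thirds threshold and the "top two qualifiers" rule are taken from the question text:

    # Hypothetical counts of new leaderboard entries since June 1, 2020.
    CURRENT_BENCHMARKS = {"ImageNet": 40, "WikiText-103": 25, "Cityscapes": 18}
    CANDIDATES = {"SQuAD": 35, "COCO": 50, "LibriSpeech": 10}  # illustrative candidates only

    def benchmarks_to_add(current, candidates, top_n=2):
        """Return the top_n candidates whose new-entry count beats at least
        two-thirds of the current benchmarks."""
        counts = sorted(current.values())
        required = -(-2 * len(counts) // 3)  # ceil(2/3 * number of current benchmarks)
        qualifying = {
            name: entries
            for name, entries in candidates.items()
            if sum(entries > c for c in counts) >= required
        }
        return sorted(qualifying, key=qualifying.get, reverse=True)[:top_n]

    print(benchmarks_to_add(CURRENT_BENCHMARKS, CANDIDATES))  # -> ['COCO', 'SQuAD']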

Resolution Condition: This question resolves as the first date on which one of the benchmarks above has a #1-ranked paper that set its record using a multi-modal trained model. If no such paper is listed before 2030, the question resolves as >01/01/2030.

Specifics and Caveats:

  1. Multi-modal pre-training counts towards resolution.

  2. For text tasks, training on video counts if, and only if, the image stream is used -- i.e. not just the audio stream.

  3. For image tasks, training on video counts if, and only if, the audio stream is used -- i.e. not just the image stream.

  4. If paperswithcode.com shuts down or permanently stops updating their data, then the question resolves as ambiguous.
