AI Performance on MATH Dataset before 2025

Question

From Hendrycks et al.,

Many intellectual endeavors require mathematical problem solving, but this skill remains beyond the capabilities of computers. To measure this ability in machine learning models, we introduce MATH, a new dataset of 12,500 challenging competition mathematics problems. Each problem in MATH has a full step-by-step solution which can be used to teach models to generate answer derivations and explanations. [...]

Even though we are able to increase accuracy on MATH, our results show that accuracy remains relatively low, even with enormous Transformer models. Moreover, we find that simply increasing budgets and model parameter counts will be impractical for achieving strong mathematical reasoning if scaling trends continue. While scaling Transformers is automatically solving most other text-based tasks, scaling is not currently solving MATH. To have more traction on mathematical problem solving we will likely need new algorithmic advancements from the broader research community.

In addition,

It's also worth mentioning that the competition math problems in MATH are designed under the assumption that competitors don't use calculators or script executors. That way, solving them requires making a clever observation or reducing the search space to make the problem tractable. With a script executor, competitors do not need to figure out how to reason succinctly to the conclusion, and cleverness is rarely needed.

There are other competition problems designed to be difficult even with calculators and script executors, but there are not nearly as many of these problems lying around.
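
As a toy illustration of the script-executor point (the problem below is hypothetical, not drawn from MATH): a question that rewards a clever number-theoretic observation on paper falls to a few lines of uninspired brute force.

```python
# Hypothetical competition-style problem (not from MATH): find the smallest
# positive integer n that leaves remainder 1 when divided by each of 2..6
# and is divisible by 7. On paper this invites the observation that n - 1
# must be a multiple of lcm(2, ..., 6) = 60; with a script executor, a
# brute-force scan needs no insight at all.
n = 1
while not (all(n % d == 1 for d in range(2, 7)) and n % 7 == 0):
    n += 1
print(n)  # prints 301
```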

If we care about measuring and forecasting mathematical problem-solving capabilities with MATH, it will probably make sense to give ML models a no-calculator restriction, just as is imposed on human contestants.

The best model in the paper achieved an average accuracy of only 6.9% on the dataset.
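
For context on the benchmark's structure: each MATH problem carries a subject ("type"), a difficulty ("level" 1 through 5), and a full LaTeX solution whose final answer appears in a \boxed{} expression. Below is a minimal loading sketch, assuming a copy of the dataset is hosted on the Hugging Face Hub under the identifier hendrycks/competition_math (the identifier and its continued availability are assumptions; the official release is distributed from the paper's GitHub repository).

```python
# Minimal sketch, assuming the MATH dataset is available on the Hugging
# Face Hub as "hendrycks/competition_math" (an assumption; the official
# copy is also distributed via the paper's GitHub repository).
from datasets import load_dataset

math_ds = load_dataset("hendrycks/competition_math")
example = math_ds["test"][0]
print(example["problem"])   # competition problem statement (LaTeX)
print(example["level"])     # difficulty, e.g. "Level 5"
print(example["type"])      # subject, e.g. "Number Theory"
print(example["solution"])  # step-by-step solution; final answer in \boxed{}
```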

What will be the best accuracy score on the MATH dataset before 2025?

This question will resolve as the state-of-the-art average accuracy score on the MATH dataset, as reported prior to January 1, 2025. Credible reports include, but are not limited to, blog posts, arXiv preprints, and papers.
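
For clarity on what an average accuracy score involves: following the paper's grading convention, a model's final answer (the contents of the last \boxed{...} in its output) is compared against the reference answer, and accuracy is the fraction of test problems answered correctly. A rough sketch of that grading loop is below, where model_outputs and reference_solutions are hypothetical placeholders, and the exact string match omits the answer normalization the paper also applies.

```python
# Rough sketch of exact-match grading in the style of Hendrycks et al.
# `model_outputs` and `reference_solutions` are hypothetical placeholders
# for model generations and ground-truth solutions.
def last_boxed(text: str) -> str | None:
    """Return the contents of the last \\boxed{...} in `text`, if any."""
    start = text.rfind(r"\boxed{")
    if start == -1:
        return None
    i, depth = start + len(r"\boxed{"), 1
    out = []
    while i < len(text) and depth:
        c = text[i]
        depth += (c == "{") - (c == "}")
        if depth:  # don't keep the closing brace of the \boxed{} itself
            out.append(c)
        i += 1
    return "".join(out)

def average_accuracy(model_outputs, reference_solutions):
    correct = sum(
        last_boxed(m) is not None and last_boxed(m) == last_boxed(r)
        for m, r in zip(model_outputs, reference_solutions)
    )
    return correct / len(reference_solutions)
```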

Admins will use their discretion in determining whether a result should be considered valid. Obvious cheating, such as including the test set in the training data, does not count.
