The International Mathematical Olympiad (IMO) is a mathematics competition for students aged 18 and under, featuring extraordinarily difficult and novel mathematics problems. Contestants attempt a total of 6 problems over 2 days, with 4.5 hours each day to submit their solutions. Judges grade each problem on a 0–7 point scale, awarding points for correct proofs, skill demonstrated, and partial results. Gold medals are awarded for total scores of 31 or more (an average per-question score greater than 5). In the 2020 Olympiad, 49 gold medals were awarded among 616 contestants representing 105 countries, and only one competitor achieved a perfect score of 42.
The IMO Grand Challenge is a proposed AI challenge to develop an AI which can win a gold medal in the Olympiad. The rules have been tentatively described as follows:
To remove ambiguity about the scoring rules, we propose the formal-to-formal (F2F) variant of the IMO: the AI receives a formal representation of the problem (in the Lean Theorem Prover), and is required to emit a formal (i.e. machine-checkable) proof. We are working on a proposal for encoding IMO problems in Lean and will seek broad consensus on the protocol.
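To illustrate the formal-to-formal idea, here is a sketch of what a Lean-encoded problem might look like. This is a hypothetical, much easier statement than any real IMO problem, and the actual encoding protocol is still being drafted; the point is only that the AI receives a theorem statement and must produce a proof the Lean kernel accepts.

```lean
-- Hypothetical example (not an actual IMO problem): the AI would receive
-- a formal statement like the one below, with the proof left open,
-- and must emit a machine-checkable proof term in its place.
theorem example_problem (n : ℕ) : 2 ∣ (n ^ 2 + n) :=
sorry  -- the AI's job: replace `sorry` with a proof the kernel verifies
```

A submission still containing `sorry` would fail kernel checking, which is what makes the scoring unambiguous: a proof either checks or it does not.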
Other proposed rules:
Credit. Each proof certificate that the AI produces must be checkable by the Lean kernel in 10 minutes (approximately the amount of time it takes a human judge to grade a human competitor's solution). Unlike human competitors, the AI has no opportunity for partial credit.
Resources. The AI has only as much time as a human competitor (4.5 hours for each set of 3 problems), but there are no other limits on the computational resources it may use during that time.
Reproducibility. The AI must be open-source, released publicly before the first day of the IMO, and easily reproducible. The AI cannot query the Internet.
There is no official commitment from the IMO or other AI development teams to compete for this challenge, but it's possible this may happen eventually.
When will an AI win a Gold Medal in the International Math Olympiad?
This question resolves on the date an AI system performs well enough on an IMO test to earn the equivalent of a gold medal. The test must be the most current IMO test at the time the feat is accomplished (previous years' tests do not qualify).
Tentatively, we will hold the same terms as currently proposed by the IMO Grand Challenge:
The AI must receive formal representations of the IMO problems and present formal (machine-checkable) proofs.
The proof certificates produced must be checkable in 10 minutes.
The AI has 4.5 hours per set of 3 problems to compute, but there are no other limitations on computational resources.
The AI must be open-source, publicly released before the IMO begins, and easily reproducible.
The AI cannot have access to the internet during the test.
If the IMO Grand Challenge eventually adopts different constraints than those above, or the IMO test format changes, Metaculus Admins may, at their discretion, modify this question or resolve it ambiguously.
If the IMO no longer holds open Olympiads, and there is no comparable successor for under-18 mathematics competitions, this question will resolve ambiguously.