The International Mathematical Olympiad (IMO) is a mathematics competition for pre-university students under 20 featuring extraordinarily difficult and novel mathematics problems. Contestants attempt a total of 6 problems over 2 days, with 4.5 hours each day to submit their solutions. Each problem is graded by judges on a 0–7 point scale for correctness of the proof, skill demonstrated, and partial results. Gold medals are awarded for total scores of 31 or more (an average per-problem score greater than 5). In the 2020 Olympiad, out of 616 contestants representing 105 countries, 49 gold medals were awarded and only one competitor achieved a perfect score of 42.

The IMO Grand Challenge is a proposed AI challenge: build an AI that can win a gold medal in the Olympiad. The rules have been tentatively described as follows:

To remove ambiguity about the scoring rules, we propose the formal-to-formal (F2F) variant of the IMO: the AI receives a formal representation of the problem (in the Lean Theorem Prover), and is required to emit a formal (i.e. machine-checkable) proof. We are working on a proposal for encoding IMO problems in Lean and will seek broad consensus on the protocol.
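As a concrete illustration of what a formal-to-formal input might look like (this is a hypothetical sketch, not the official encoding protocol, which is still being drafted), a classic problem such as IMO 1959 Problem 1 could be stated in Lean roughly as follows. The AI would receive the theorem statement and be required to replace the `sorry` placeholder with a complete, kernel-checkable proof:

```lean
-- Hypothetical F2F encoding of IMO 1959 Problem 1:
-- prove that the fraction (21n + 4)/(14n + 3) is irreducible
-- for every natural number n, i.e. numerator and denominator
-- are coprime.
theorem imo1959_q1 (n : ℕ) :
    Nat.gcd (21 * n + 4) (14 * n + 3) = 1 := by
  sorry  -- the AI must produce the proof; `sorry` marks the goal
```

Note that the formal statement removes all grading ambiguity: the Lean kernel either accepts the proof term or it does not, which is exactly the all-or-nothing property the scoring rule below relies on.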

Other proposed rules:

Credit. Each proof certificate that the AI produces must be checkable by the Lean kernel within 10 minutes (approximately the amount of time it takes a human judge to grade a human's solution). Unlike human competitors, the AI has no opportunity for partial credit.

Resources. The AI has only as much time as a human competitor (4.5 hours for each set of 3 problems), but there are no other limits on the computational resources it may use during that time.

Reproducibility. The AI must be open-source, released publicly before the first day of the IMO, and easily reproducible. The AI cannot query the Internet.

There is no official commitment from the IMO or from AI development teams to compete in this challenge, but that may change eventually.