Last December, the chess world was shocked when Google announced that a home-grown chess program, Alpha Zero, crushed the world's previously most dominant chess engine, Stockfish, in controlled conditions.
What was especially cool was that Alpha Zero taught itself to play chess at this ridiculously competitive level without referencing any books or human strategy. It just played itself millions of times and learned from its experiences.
Google never released Alpha Zero to the public, but chess programming boffins recently created a similar open-source program called Leela--which learns and improves in the same fashion. Chess commentators have described her as "Alpha Zero's little sister." Within a few weeks, she managed to surge from a rating of around 1800 (a strong club player) to a strength so impressive that she nearly knocked out Grandmaster Andrew Tang in a match.
As of this writing, though, Leela still struggles against the silicon best and brightest--in April, she finished near the bottom of the field in a tournament against other top engines.
When will Leela--or some other self-learning chess engine, along the lines of Alpha Zero--claim the crown and win the official Top Chess Engine Championship (TCEC)?
For positive resolution, the winning program must have been trained almost entirely via self-play. That is, the system, at the start of training, should be rated below 1000 Elo. Resolution defaults to "never" if this does not occur by June 1, 2022. (One can imagine this happening if, for example, the top competitors are systems with enough built-in structure to be decent, >1000 Elo players before they begin any self-play.)
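For context on what these rating thresholds mean, Elo ratings map to expected scores via the standard logistic formula. A minimal sketch (the function name is illustrative, not from any engine's codebase):

```python
def elo_expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score for player A (win = 1, draw = 0.5, loss = 0)
    under the standard Elo logistic model with a 400-point scale."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# An ~1800-rated club player vs a 1000-rated beginner: the 800-point
# gap means the stronger side is expected to score about 99% of the points.
print(round(elo_expected_score(1800, 1000), 2))  # → 0.99
```

So a system that starts below 1000 Elo is, in expectation, nearly helpless against even a strong club player--which is the point of the criterion: almost all of the winning engine's strength must come from self-play.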