Discontinuity in progress occurs when a particular technological advance pushes some progress metric substantially above what extrapolating past progress would predict. If AI progress is unusually lumpy, i.e., arriving in a few large packages rather than the usual many small ones, then future progress might arrive faster than we would expect by simply looking at past progress. Moreover, if one AI team finds a big lump, it might jump far ahead of the other teams. According to AI Impacts, discontinuity on the path to AGI lends itself to:
A previous question did a good job of operationalising human-machine intelligence parity. It proposes a generalised intelligence test that compares machine systems to human experts in each of physics, mathematics, and computer science. Using this, we can define a surprising discontinuity in AI progress as a tripling of the odds (given by p/(1-p)) in both the Metaculus prediction and the community prediction within a 2-month period.
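As a minimal sketch of this resolution criterion, the check below converts probabilities to odds and tests whether the odds at least triple between two snapshots (e.g. taken two months apart). The function names and the example probabilities are illustrative, not part of the question text:

```python
def odds(p):
    """Convert a probability p into odds, p / (1 - p)."""
    return p / (1.0 - p)

def is_surprising_discontinuity(p_before, p_after, factor=3.0):
    """Return True if the odds at least triple (factor=3) between two
    prediction snapshots, e.g. two months apart."""
    return odds(p_after) >= factor * odds(p_before)

# A prediction moving from 20% to 50%: odds go from 0.25 to 1.0,
# a fourfold jump, so it would count as a surprising discontinuity.
print(is_surprising_discontinuity(0.20, 0.50))  # True
```

Under the question's definition, this check would have to pass for both the Metaculus prediction and the community prediction within the same 2-month window.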