What is the likelihood of discontinuous progress around the development of Human Level Machine Intelligence (i.e. machines that can accomplish a wide range of important tasks at least as well as human experts)?
Discontinuity in progress occurs when a particular technological advance pushes some progress metric substantially above what would be expected from extrapolating past progress. If AI progress is unusually lumpy, i.e., arriving in unusually few, larger packages rather than in the usual many smaller packages, then future progress might arrive faster than we would expect by simply looking at past progress. Moreover, if one AI team finds a big lump, it might jump well ahead of the other teams. AI Impacts has investigated how likely such discontinuity is on the path to AGI.
A previous question did a good job operationalising human-machine intelligence parity. It proposes a generalised intelligence test that compares machine systems to human experts in each of physics, mathematics, and computer science. Using this, we can define a surprising discontinuity in AI progress as a tripling of the odds (given by p/(1-p)) in both the Metaculus prediction and the community prediction within a 2-month period.
So, will both the Metaculus prediction odds and the community prediction odds of a positive resolution to our question on human-machine intelligence parity at least triple within any two-month period before its close date?
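As an illustrative sketch (not the official resolution mechanism), the tripling condition could be checked over a dated series of probabilities as follows; the `tripled_within_window` helper and the 61-day reading of "two months" are our assumptions for illustration:

```python
from datetime import date

def odds(p: float) -> float:
    """Convert a probability p to odds, p / (1 - p)."""
    return p / (1.0 - p)

def tripled_within_window(series, window_days=61):
    """Return True if the odds at least triple between any two points of
    `series` (a list of (date, probability) pairs, ascending by date)
    that lie at most `window_days` apart.
    """
    for i, (d0, p0) in enumerate(series):
        for d1, p1 in series[i + 1:]:
            if (d1 - d0).days > window_days:
                break  # dates are ascending, so later points are further away
            if odds(p1) >= 3 * odds(p0):
                return True
    return False
```

To resolve the question as worded, this check would be run separately on the Metaculus prediction series and the community prediction series, and both would have to return True.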
Some examples of a tripling of the odds are 60% becoming at least 81.8%, 70% becoming at least 87.5%, 80% becoming at least 92.3%, 90% becoming at least 96.4%, etc. See AI Impacts' fantastic overview of the issue of discontinuous progress toward AGI.
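The thresholds above follow directly from the odds formula: triple the odds, then convert back to a probability. A minimal sketch (the function name is ours, for illustration):

```python
def tripled_probability(p: float) -> float:
    """Smallest probability whose odds, p / (1 - p), are triple those of p."""
    tripled_odds = 3 * p / (1 - p)          # triple the original odds
    return tripled_odds / (1 + tripled_odds)  # convert odds back to a probability

# e.g. tripled_probability(0.6) ≈ 0.818 and tripled_probability(0.9) ≈ 0.964,
# matching the examples listed above
```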
(Edited 8/29/18 to require the change in *both* Metaculus and community prediction as the source of odds.)