A very interesting analysis by OpenAI shows a dramatic and steady increase in the amount of computation used to train the most sophisticated AI/ML systems, with a doubling time of 3.5 months, compared to the 18 months of Moore's law.
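For intuition, a 3.5-month doubling time corresponds to roughly an 11x increase per year, versus about 1.6x per year for Moore's law. A minimal Python sketch of that arithmetic (the two doubling times are the only inputs):

```python
# Annualized growth factor implied by a given doubling time.
def annual_factor(doubling_months: float) -> float:
    """Return the growth factor over 12 months."""
    return 2 ** (12.0 / doubling_months)

print(f"Moore's law (18-month doubling): {annual_factor(18.0):.2f}x per year")  # ~1.59x
print(f"AI compute (3.5-month doubling): {annual_factor(3.5):.1f}x per year")   # ~10.8x
```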
As of question writing, the largest total training computation is attributed to AlphaGo Zero, at about $2\times 10^{23}$ FLOP. This is within a factor of 3 of Avogadro's number ($6.022\times 10^{23}$), indicating that the computation going into ML systems is starting to become "macroscopic": the number of elementary operations is becoming comparable to the number of molecules in a macroscopic quantity of matter.
When will training computation actually reach Avogadro's number? We'll ask:
By the end of 2019, will a published paper or preprint describe an AI/ML system whose training, as measured by the method OpenAI used in the above study, is attributed more than $6\times 10^{23}$ operations?
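For a rough sense of timing (illustrative only, not part of the resolution criteria): if the 3.5-month doubling trend simply continued from the estimated AlphaGo Zero figure above, the threshold would be crossed in about half a year. A back-of-envelope sketch, assuming the $\sim 2\times 10^{23}$ FLOP estimate is right:

```python
import math

# Illustrative trend extrapolation; the inputs below are estimates
# taken from the question text, not resolution criteria.
alphago_zero_flop = 2e23   # assumed estimate of AlphaGo Zero's training compute
avogadro = 6.022e23        # the question's threshold
doubling_months = 3.5      # OpenAI's observed doubling time

doublings_needed = math.log2(avogadro / alphago_zero_flop)
months_needed = doublings_needed * doubling_months
print(f"{doublings_needed:.2f} doublings, ~{months_needed:.1f} months if the trend holds")
# -> 1.59 doublings, ~5.6 months
```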
The question author will make his best effort to cajole the authors of that study into resolving the question.