Created by: Matthew_Barnett
AI Training and Compute


In the last few years, the size of the largest deep learning models has grown enormously. Within natural language processing, the largest models have grown from 94 million parameters in 2018 to 17 billion parameters in early 2020.

Now, Microsoft has released a new library, DeepSpeed, and created a memory-efficient optimizer, both of which aid in training extremely large models distributed across GPU clusters. From their blog post:

The Zero Redundancy Optimizer (abbreviated ZeRO) is a novel memory optimization technology for large-scale distributed deep learning. ZeRO can train deep learning models with 100 billion parameters on the current generation of GPU clusters at three to five times the throughput of the current best system. It also presents a clear path to training models with trillions of parameters, demonstrating an unprecedented leap in deep learning system technology. [...] With all three stages enabled, ZeRO can train a trillion-parameter model on just 1024 NVIDIA GPUs.
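For readers unfamiliar with how this looks in practice, here is a minimal sketch of switching ZeRO on through a DeepSpeed configuration. It is based on DeepSpeed's public documentation rather than the blog post itself; the tiny model, batch size, and learning rate are placeholders, not a verified recipe.

```python
import torch
import deepspeed

class TinyModel(torch.nn.Module):
    """Stand-in model; a real run would use a large Transformer."""
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(1024, 1024)

    def forward(self, x):
        return self.linear(x)

model = TinyModel()

# ZeRO stage 1 partitions optimizer states across GPUs; stages 2 and 3
# additionally partition gradients and parameters (the "three stages"
# the blog post refers to). Values here are illustrative only.
ds_config = {
    "train_batch_size": 32,
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

# deepspeed.initialize wraps the model and optimizer for distributed training.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
```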

For comparison, Summit, currently the top supercomputer, has 27,648 GPUs, roughly 27 times the 1,024 GPUs cited above for a trillion-parameter model, suggesting that training models with tens of trillions of parameters is already within theoretical reach.
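The "tens of trillions" figure follows from a simple ratio. A back-of-the-envelope sketch, assuming (optimistically) that trainable model size scales linearly with GPU count:

```python
# Naive linear-scaling estimate; real systems will not scale this cleanly.
zero_params = 1e12       # parameters ZeRO reportedly trains on 1,024 GPUs
zero_gpus = 1024
summit_gpus = 27_648     # GPUs in the Summit supercomputer

naive_capacity = zero_params * (summit_gpus / zero_gpus)
print(f"{naive_capacity:.1e} parameters")  # ~2.7e13, i.e. tens of trillions
```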

Also recently, architectural advances such as the new Reformer may make it possible to train large models that use memory far more efficiently.
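As a rough illustration of one of the Reformer's memory-saving ingredients, reversible residual layers (its LSH attention component is not shown), here is a minimal sketch with toy stand-in sublayers. Because the inputs can be recomputed from the outputs, activations need not be stored for every layer during training:

```python
import torch

def F(x):  # toy stand-in for an attention sublayer
    return torch.tanh(x)

def G(x):  # toy stand-in for a feed-forward sublayer
    return torch.relu(x)

def reversible_forward(x1, x2):
    # y1 = x1 + F(x2);  y2 = x2 + G(y1)
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def reversible_inverse(y1, y2):
    # Inputs are recovered from outputs, so activation memory
    # does not grow with network depth.
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

x1, x2 = torch.randn(4, 8), torch.randn(4, 8)
y1, y2 = reversible_forward(x1, x2)
r1, r2 = reversible_inverse(y1, y2)
assert torch.allclose(x1, r1) and torch.allclose(x2, r2)
```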

I have chosen 100 trillion because some consider it a median estimate of the number of synapses in the human neocortex.