
Scalable and Efficient MoE Training for Multitask Multilingual Models

(2109.10465)
Published Sep 22, 2021 in cs.CL, cs.AI, and cs.LG

Abstract

Mixture of Experts (MoE) models are an emerging class of sparsely activated deep learning models whose compute costs grow sublinearly with their parameter counts. In contrast with dense models, the sparse architecture of MoE offers opportunities for drastically growing model size with significant accuracy gains while consuming a much lower compute budget. However, supporting large-scale MoE training also brings its own set of system and modeling challenges. To overcome the challenges and embrace the opportunities of MoE, we first develop a system capable of scaling MoE models efficiently to trillions of parameters. It combines multi-dimensional parallelism and heterogeneous memory technologies harmoniously with MoE to empower 8x larger models on the same hardware compared with existing work. Besides boosting system efficiency, we also present new training methods to improve MoE sample efficiency and leverage an expert pruning strategy to improve inference-time efficiency. By combining the efficient system and training methods, we are able to significantly scale up large multitask multilingual models for language generation, which results in a great improvement in model accuracy. A model trained with 10 billion parameters on 50 languages can achieve state-of-the-art performance in Machine Translation (MT) and multilingual natural language generation tasks. The system support for efficient MoE training has been implemented and open-sourced with the DeepSpeed library.

Overview

  • MoE models can scale in size without proportional increases in computational costs, offering significant accuracy improvements.

  • DeepSpeed MoE addresses scaling challenges by using multi-dimensional parallelism and CPU memory alongside GPU memory.

  • Innovative training techniques like Random Token Selection (RTS) improve token distribution and training efficiency for MoE models.

  • Expert aggregation and pruning strategies presented in the paper improve model convergence and inference-time efficiency.

  • The Z-code M3 multitask multilingual MoE model shows notable performance gains in machine translation and natural language tasks.

Overview of Mixture of Experts (MoE) Models

Mixture of Experts (MoE) models represent a significant shift in machine learning, specifically in how they handle computational resources compared to dense models. With a conventional dense architecture, increasing model size inevitably demands higher computational costs. In contrast, MoE models can grow substantially in size without a directly proportional increase in compute expenses. Consequently, researchers can pursue substantial accuracy improvements without an exorbitant increase in computational requirements. Despite these advantages, large-scale MoE models introduce unique system and training challenges that need to be addressed to fully utilize their potential.
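To make the sparse-activation idea concrete, the sketch below shows a toy top-1 gated MoE feed-forward layer in PyTorch. The class name, layer sizes, and top-1 routing choice are illustrative assumptions rather than the paper's implementation; the point is that each token is processed by only one expert, so per-token compute does not grow with the total number of experts.

```python
# Minimal sketch of a sparsely gated MoE feed-forward layer with top-1 routing.
# All names and sizes are illustrative, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoELayer(nn.Module):
    def __init__(self, d_model: int, d_ff: int, num_experts: int):
        super().__init__()
        self.gate = nn.Linear(d_model, num_experts)  # router producing expert scores
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model)
        probs = F.softmax(self.gate(x), dim=-1)      # (num_tokens, num_experts)
        top_prob, top_idx = probs.max(dim=-1)        # top-1 routing decision per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = top_idx == e                      # tokens routed to expert e
            if mask.any():
                # Each token passes through exactly one expert FFN, so the
                # FLOPs per token stay constant as num_experts grows.
                out[mask] = top_prob[mask, None] * expert(x[mask])
        return out

layer = TinyMoELayer(d_model=16, d_ff=64, num_experts=8)
tokens = torch.randn(32, 16)
print(layer(tokens).shape)  # torch.Size([32, 16])
```

Adding more experts to this layer enlarges its parameter count, but each token still touches only a single expert's weights, which is the source of the sublinear compute scaling described above.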

System Challenges in MoE Training

The central challenge in scaling MoE models derives from how their parameter count is distributed across the base and expert models. While increasing the base model size boosts both the number of parameters and computational cost, adding more experts inflates the parameter count but not the compute cost. Balancing the two is a delicate exercise essential for achieving high accuracy at a controlled computation cost. The proposed system, DeepSpeed MoE, addresses this by incorporating multi-dimensional parallelism and harnessing CPU memory to scale beyond GPU memory constraints, accommodating trillions of parameters.
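The back-of-envelope sketch below illustrates this trade-off for a top-1 routed MoE feed-forward layer. The layer sizes are assumed values for illustration, not figures from the paper: stored parameters grow linearly with the expert count while per-token FLOPs stay flat, which is why the extra capacity must be absorbed by memory-oriented techniques (expert parallelism, CPU offloading of optimizer state) rather than by more compute.

```python
# Back-of-envelope accounting (illustrative assumptions, not the paper's numbers)
# for an MoE feed-forward layer with top-1 routing.
def moe_layer_stats(d_model: int, d_ff: int, num_experts: int):
    ffn_params = 2 * d_model * d_ff          # up- and down-projection weights of one expert
    total_params = num_experts * ffn_params  # stored parameters grow with expert count
    active_params = ffn_params               # top-1 routing: one expert per token
    flops_per_token = 2 * active_params      # ~2 FLOPs per weight (multiply + add)
    return total_params, flops_per_token

for experts in (1, 8, 64, 512):
    params, flops = moe_layer_stats(d_model=1024, d_ff=4096, num_experts=experts)
    print(f"{experts:4d} experts: {params / 1e6:8.1f}M params, {flops / 1e6:6.1f}M FLOPs/token")
```

Running this shows parameters scaling from millions to billions while FLOPs per token remain unchanged, which is exactly the regime in which multi-dimensional parallelism and CPU memory offloading pay off.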

Training and Inference Efficiency

MoE models have unique issues such as expert capacity limits and imbalanced usage of experts, which can hinder their learning potential. A novel training method introduced in this paper is Random Token Selection (RTS), which enhances token distribution and regularizes training for MoE models, leading to more efficient convergence. Furthermore, the paper presents Aggregation of Experts (AoE) and expert pruning strategies that speed up model convergence and reduce inference time, making the models more practical for real-world applications.
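The snippet below sketches the intuition behind RTS under the assumption of a fixed per-expert capacity: when an expert is oversubscribed, overflow tokens are dropped according to a random priority instead of their position in the batch, removing the systematic bias toward early tokens. It is an illustrative approximation, not the released implementation.

```python
# Illustrative sketch of Random Token Selection (RTS) under a fixed expert capacity.
# Not the paper's implementation; names and shapes are assumptions for the example.
import torch

def select_tokens_for_expert(routed_idx: torch.Tensor, capacity: int,
                             random_selection: bool = True) -> torch.Tensor:
    """routed_idx: positions of tokens routed to one expert; returns the kept positions."""
    if routed_idx.numel() <= capacity:
        return routed_idx
    if random_selection:
        # RTS-style: keep a random subset up to the capacity limit,
        # so no token position is systematically favored.
        keep = torch.randperm(routed_idx.numel())[:capacity]
        return routed_idx[keep]
    # Baseline behavior: keep the first tokens in order of appearance (position-biased).
    return routed_idx[:capacity]

routed = torch.tensor([3, 7, 11, 12, 20, 25, 31])  # token positions sent to one expert
print(select_tokens_for_expert(routed, capacity=4))
```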

Multitask Multilingual MoE Model Performance

The paper also explores the multitask multilingual MoE model called Z-code M3, noting sizeable improvements in machine translation and multilingual natural language generation tasks. The Z-code M3 model, when pre-trained on a mix of tasks and languages, shows remarkable performance enhancements on downstream tasks. The ability to jointly leverage the inductive biases from multiple tasks and languages in an MoE framework proves to be a significant advantage over single-task-oriented models.

The results of this research demonstrate the promise of MoE models in creating more efficient and capable AI systems. Despite their complexity, the strategies for overcoming the challenges related to scale, training, and inference efficiency make MoE models an exciting area for future developments in machine learning. The paper's contributions, including the development of the scalable DeepSpeed MoE system, are likely to influence subsequent efforts in building and optimizing large-scale MoE models.
