THC: Accelerating Distributed Deep Learning Using Tensor Homomorphic Compression (2302.08545v2)

Published 16 Feb 2023 in cs.LG, cs.AI, and cs.NI

Abstract: Deep neural networks (DNNs) are the de facto standard for essential use cases, such as image classification, computer vision, and natural language processing. As DNNs and datasets get larger, they require distributed training on increasingly larger clusters. A main bottleneck is the resulting communication overhead, where workers exchange model updates (i.e., gradients) on a per-round basis. To address this bottleneck and accelerate training, a widely-deployed approach is compression. However, previous deployments often apply bi-directional compression schemes by simply using a uni-directional gradient compression scheme in each direction. This results in significant computational overheads at the parameter server and increased compression error, leading to longer training and lower accuracy. We introduce Tensor Homomorphic Compression (THC), a novel bi-directional compression framework that enables the direct aggregation of compressed values, thus eliminating the aforementioned computational overheads. Moreover, THC is compatible with in-network aggregation (INA), which allows for further acceleration. Our evaluation shows that training representative vision and language models with THC reaches target accuracy 1.40x to 1.47x faster using INA and 1.28x to 1.33x faster using a software PS compared with state-of-the-art systems.
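
The property the abstract highlights is that compressed gradients can be summed directly at the parameter server (or in the network), with no decompress-aggregate-recompress step. Below is a minimal sketch of that idea, assuming a shared-scale stochastic uniform quantizer; all names, parameters, and the quantizer itself are illustrative assumptions, not the paper's actual THC scheme.

```python
# Toy sketch (hypothetical, not THC itself): a quantizer whose integer codes
# can be summed directly by the parameter server because all workers share
# the same quantization scale.
import numpy as np

SCALE = 1e-3          # shared quantization step, agreed on by all workers
NUM_WORKERS = 4

def quantize(grad: np.ndarray) -> np.ndarray:
    """Unbiased stochastic uniform quantization to integer codes."""
    scaled = grad / SCALE
    low = np.floor(scaled)
    # Round up with probability equal to the fractional part (unbiased).
    codes = low + (np.random.rand(*grad.shape) < (scaled - low))
    return codes.astype(np.int32)

def dequantize(codes: np.ndarray) -> np.ndarray:
    return codes.astype(np.float32) * SCALE

# Each worker compresses its local gradient.
rng = np.random.default_rng(0)
grads = [rng.standard_normal(8).astype(np.float32) for _ in range(NUM_WORKERS)]
compressed = [quantize(g) for g in grads]

# Parameter server: sum the integer codes directly -- no decompression needed.
aggregated_codes = np.sum(compressed, axis=0)

# Workers recover the (approximate) aggregated gradient from the summed codes.
approx_sum = dequantize(aggregated_codes)
exact_sum = np.sum(grads, axis=0)
print("max aggregation error:", np.max(np.abs(approx_sum - exact_sum)))
```

Because every worker uses the same scale, summing the integer codes is equivalent, up to unbiased rounding noise, to quantizing the sum of the gradients; that is the kind of "homomorphic" property that makes aggregation of compressed values at a server or switch possible.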

Authors (7)
  1. Minghao Li (44 papers)
  2. Ran Ben Basat (31 papers)
  3. Shay Vargaftik (25 papers)
  4. ChonLam Lao (6 papers)
  5. Kevin Xu (21 papers)
  6. Michael Mitzenmacher (99 papers)
  7. Minlan Yu (24 papers)
Citations (9)
