Gradient Coding (arXiv:1612.03301)
Published Dec 10, 2016 in stat.ML, cs.DC, cs.IT, cs.LG, math.IT, and stat.CO
Abstract
We propose a novel coding-theoretic framework for mitigating stragglers in distributed learning. We show how carefully replicating data blocks and coding across gradients can provide tolerance to failures and stragglers for Synchronous Gradient Descent. We implement our schemes in Python (using MPI) to run on Amazon EC2, and compare against baseline approaches in terms of running time and generalization error.
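The simplest instance of this idea in the paper is its fractional repetition scheme: with n workers and at most s stragglers (where s + 1 divides n), each block of s + 1 data partitions is replicated across s + 1 workers, so every block is guaranteed at least one surviving worker and the master can always reassemble the full gradient. Below is a minimal single-machine sketch of one synchronous round of this scheme in NumPy; it is not the authors' MPI implementation, and the function names, the toy squared-loss gradient, and the particular worker-to-block assignment are illustrative assumptions.

```python
import numpy as np

def replica_groups(n, s):
    """Split n workers into n // (s + 1) groups of s + 1 workers.
    All workers in group g hold the same block of s + 1 partitions,
    so any s stragglers leave at least one live worker per group."""
    assert n % (s + 1) == 0, "scheme requires (s + 1) to divide n"
    return [list(range(g * (s + 1), (g + 1) * (s + 1)))
            for g in range(n // (s + 1))]

def partial_gradient(X, y, w):
    """Gradient of the squared loss 0.5 * ||Xw - y||^2 on one partition
    (a stand-in for whatever loss the training job actually uses)."""
    return X.T @ (X @ w - y)

def coded_gradient_round(parts, w, n, s, stragglers):
    """Simulate one round of Synchronous Gradient Descent: each worker
    sends the summed gradient of its block; the master decodes the full
    gradient from whichever workers respond."""
    results = {}
    for g, workers in enumerate(replica_groups(n, s)):
        block = range(g * (s + 1), (g + 1) * (s + 1))  # partitions in block g
        coded = sum(partial_gradient(*parts[p], w) for p in block)
        for wk in workers:
            if wk not in stragglers:       # straggler results never arrive
                results[wk] = coded
    # Decode: any one surviving replica per group suffices; sum the blocks.
    return sum(next(results[wk] for wk in workers if wk in results)
               for workers in replica_groups(n, s))

# Demo: 6 workers tolerate any 2 stragglers with zero gradient error.
rng = np.random.default_rng(0)
n, s, d = 6, 2, 4
parts = [(rng.normal(size=(5, d)), rng.normal(size=5)) for _ in range(n)]
w = np.zeros(d)
g_coded = coded_gradient_round(parts, w, n, s, stragglers={1, 2})
g_exact = sum(partial_gradient(X, y, w) for X, y in parts)
assert np.allclose(g_coded, g_exact)     # full gradient despite stragglers
```

Each partition here is replicated on s + 1 workers, matching the paper's lower bound that any scheme robust to s stragglers must replicate every partition at least s + 1 times; the straggler tolerance is thus bought with an (s + 1)-fold increase in total computation.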