Emergent Mind

Abstract

We discuss a simple, binary tree-based algorithm for the collective allreduce (reduction-to-all, MPI_Allreduce) operation for parallel systems consisting of $p$ suitably interconnected processors. The algorithm can be doubly pipelined to exploit bidirectional (telephone-like) communication capabilities of the communication system. In order to make the algorithm more symmetric, the processors are organized into two rooted trees with communication between the two roots. For each pipeline block, each non-leaf processor takes three communication steps, consisting of receiving from and sending to the two children, and sending to and receiving from the parent. In a round-based, uniform, linear-cost communication model in which simultaneously sending and receiving $n$ data elements takes time $\alpha+\beta n$ for system-dependent constants $\alpha$ (communication start-up latency) and $\beta$ (time per element), the time for the allreduce operation on vectors of $m$ elements is $O(\log p+\sqrt{m\log p})+3\beta m$ by suitable choice of the pipeline block size. We compare the performance of an implementation in MPI to similar reduce-followed-by-broadcast algorithms, and to the native MPI_Allreduce collective, on a modern, small $36\times 32$ processor cluster. With proper choice of the number of pipeline blocks, it is possible to achieve better performance than pipelined algorithms that do not exploit bidirectional communication.
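The stated bound can be made concrete with a back-of-the-envelope cost model. The sketch below is an illustration under assumed simplifications (it is not the paper's analysis): with block size $b$ there are $k=\lceil m/b\rceil$ blocks, each non-leaf processor spends three rounds of cost $\alpha+\beta b$ per block, and roughly $\log_2 p$ extra rounds fill and drain the pipeline. Balancing the $3(m/b)\alpha$ latency term against the $(\log_2 p)\beta b$ pipeline-fill term yields $b^* \approx \sqrt{3\alpha m/(\beta\log_2 p)}$, which recovers the $O(\log p+\sqrt{m\log p})+3\beta m$ shape of the bound.

```python
import math

def allreduce_time(m, p, b, alpha, beta):
    """Hypothetical model time for the doubly pipelined tree allreduce:
    k = ceil(m/b) pipeline blocks, 3 rounds per block at a non-leaf
    processor, plus ~log2(p) rounds of pipeline fill/drain, each round
    costing alpha + beta*b (simultaneous send/receive of b elements)."""
    k = math.ceil(m / b)
    depth = math.ceil(math.log2(p))
    rounds = depth + 3 * k
    return rounds * (alpha + beta * b)

def best_block_size(m, p, alpha, beta):
    """Block size balancing the 3*(m/b)*alpha latency term against the
    depth*beta*b pipeline-fill term: b* ~ sqrt(3*alpha*m/(beta*log2 p))."""
    depth = max(1, math.ceil(math.log2(p)))
    return max(1, round(math.sqrt(3 * alpha * m / (beta * depth))))

# Example: m = 10^6 elements, p = 1024 processors, alpha = 1.0, beta = 0.01.
# The total time stays close to the 3*beta*m bandwidth term, with the
# remaining overhead on the order of sqrt(m * log p), as in the bound.
b_star = best_block_size(10**6, 1024, 1.0, 0.01)
t_star = allreduce_time(10**6, 1024, b_star, 1.0, 0.01)
```

Note that without pipelining ($b=m$, a single block), the same model gives roughly $3\beta m$ per tree level rather than overall, so the bandwidth term would grow with the tree depth; pipelining is what brings it down to $3\beta m$ total.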
