
Abstract

In this paper we consider parallel implementations of approximate multiplication of large matrices with exponential decay of elements. Such matrices arise in computations related to electronic structure calculations and some other fields of computational science. Commonly, sparsity is introduced by dropping small entries of the input matrices (truncation). Another approach, the sparse approximate multiplication algorithm (SpAMM) [M. Challacombe and N. Bock, arXiv preprint 1011.3534, 2010], performs truncation of sub-matrix products. We consider these two methods and their combination, i.e. truncation of both input matrices and sub-matrix products. Implementations using the Chunks and Tasks programming model and library [E. H. Rubensson and E. Rudberg, Parallel Comput., 40:328-343, 2014] are presented and discussed. We show that for all three methods the absolute error in the Frobenius norm behaves as $O\left(n^{1/2}\right)$ as $n \longrightarrow \infty$ and as $O\left(\tau^{p/2}\right)$ as $\tau \longrightarrow 0$ for all $p < 2$, where $n$ is the matrix size and $\tau$ is the truncation threshold. We compare the methods on a model problem and show that the combined method outperforms the original two. The methods are also applied to matrices coming from large chemical systems with $\sim 10^6$ atoms. We show that the combination of the two methods achieves better weak scaling by reducing the amount of communication by a factor of $\approx 2$.
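As a minimal illustration of how these two truncation schemes operate and combine (a serial sketch, not the paper's distributed Chunks and Tasks implementation), the following NumPy code pairs entry-wise truncation of the input matrices with SpAMM-style skipping of sub-matrix products whose product of Frobenius norms falls below the threshold $\tau$. The function names `spamm` and `truncate`, the power-of-two blocking, and the leaf size are illustrative assumptions, not details from the paper.

```python
import numpy as np

def truncate(M, tau):
    """Input-matrix truncation: zero out entries smaller than tau (illustrative)."""
    M = M.copy()
    M[np.abs(M) < tau] = 0.0
    return M

def spamm(A, B, tau, leaf=2):
    """SpAMM-style recursive multiply (sketch).

    A sub-matrix product A_ik @ B_kj is skipped whenever
    ||A_ik||_F * ||B_kj||_F < tau. For simplicity, matrices are
    assumed square with dimension a power of two.
    """
    n = A.shape[0]
    if n <= leaf:
        return A @ B  # small enough: multiply exactly
    h = n // 2
    C = np.zeros((n, n))
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                Aik = A[i*h:(i+1)*h, k*h:(k+1)*h]
                Bkj = B[k*h:(k+1)*h, j*h:(j+1)*h]
                # truncation of sub-matrix products: skip negligible contributions
                if np.linalg.norm(Aik) * np.linalg.norm(Bkj) >= tau:
                    C[i*h:(i+1)*h, j*h:(j+1)*h] += spamm(Aik, Bkj, tau, leaf)
    return C

if __name__ == "__main__":
    n = 64
    idx = np.arange(n)
    # model matrix with exponential decay of elements away from the diagonal
    A = np.exp(-np.abs(idx[:, None] - idx[None, :]))
    B = A.copy()
    tau = 1e-6
    C_exact = A @ B
    # combined method: truncate the inputs, then truncate sub-matrix products
    C_approx = spamm(truncate(A, tau), truncate(B, tau), tau)
    print(np.linalg.norm(C_exact - C_approx))  # absolute Frobenius-norm error
```

Applying `truncate` to both inputs before calling `spamm` mimics the combined method described in the abstract: input truncation reduces the data that must be stored and communicated, while the norm-based skip avoids computing sub-products that contribute negligibly to the result.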
