Emergent Mind

The Inverse Fast Multipole Method

(1407.1572)
Published Jul 7, 2014 in math.NA and cs.NA

Abstract

This article introduces a new fast direct solver for linear systems arising out of a wide range of applications: integral equations, multivariate statistics, and radial basis interpolation, to name a few. \emph{The highlight of this new fast direct solver is that it scales linearly in the number of unknowns in all dimensions.} The solver, termed the Inverse Fast Multipole Method (abbreviated as IFMM), works on the same data structure as the Fast Multipole Method (abbreviated as FMM). More generally, the solver extends immediately to the class of hierarchical matrices denoted $\mathcal{H}^2$ matrices with the strong admissibility criterion (weak low-rank structure), i.e., \emph{the interaction between neighboring clusters of particles is full-rank, whereas the interaction between particles in well-separated clusters can be efficiently represented as a low-rank matrix}. The algorithm departs from existing approaches in that, throughout, the interactions corresponding to neighboring clusters are always treated as full-rank. Our approach relies on two major ideas: (i) the $N \times N$ matrix arising out of the FMM (henceforth termed the FMM matrix) can be represented as an extended sparse matrix of size $M \times M$, where $M \approx 3N$; (ii) while solving the larger extended sparse matrix, \emph{the fill-ins that arise in the matrix blocks corresponding to well-separated clusters are hierarchically compressed}. The ordering of the equations and the unknowns in the extended sparse matrix is strongly related to the local and multipole coefficients in the FMM~\cite{greengard1987fast}, and \emph{the order of elimination is different from the usual nested dissection approach}. Numerical benchmarks on a $2$D manifold confirm the linear scaling of the algorithm.
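To make idea (i) concrete, here is a minimal single-level sketch in Python/NumPy. It is an illustration under assumptions that are not from the paper: a toy 1D point set, a smooth kernel, a fixed rank r, SVD-based low-rank factors, and a dense representation of the extended matrix for clarity (the actual IFMM is multi-level, reuses the FMM multipole and local operators, and stores the extended matrix sparsely). The sketch builds the extended system in the unknowns $(x, y, z)$, where $y$ and $z$ play the role of multipole and local coefficients, and checks that solving it reproduces a dense solve up to the truncation error.

import numpy as np

rng = np.random.default_rng(0)
n_clusters, m = 8, 32                         # clusters and points per cluster
N = n_clusters * m
pts = np.sort(rng.uniform(0.0, 1.0, N))       # toy 1D geometry (assumption)
kernel = lambda x, y: 1.0 / (1.0 + np.abs(x[:, None] - y[None, :]))
A = kernel(pts, pts)                          # dense reference "FMM matrix"
b = rng.standard_normal(N)

r = 6                                         # fixed rank for well-separated blocks (assumption)
idx = [slice(c * m, (c + 1) * m) for c in range(n_clusters)]
near = lambda i, j: abs(i - j) <= 1           # strong admissibility: neighbors stay full-rank

# Per-cluster bases so that A_ij ~ U_i K_ij V_j^T for well-separated (i, j).
# Truncated SVDs are used here; the IFMM would reuse the FMM operators instead.
U, V = [], []
for i in range(n_clusters):
    far_cols = np.hstack([A[idx[i], idx[j]] for j in range(n_clusters) if not near(i, j)])
    U.append(np.linalg.svd(far_cols, full_matrices=False)[0][:, :r])
    far_rows = np.vstack([A[idx[j], idx[i]] for j in range(n_clusters) if not near(i, j)])
    V.append(np.linalg.svd(far_rows, full_matrices=False)[2][:r, :].T)

# Extended system in the unknowns (x, y, z):
#   sum_{j near i} A_ij x_j + U_i z_i = b_i    (original equations, neighbors kept dense)
#   y_i = V_i^T x_i                            (multipole-like coefficients)
#   z_i = sum_{j far from i} K_ij y_j          (local-like coefficients)
M_ext = N + 2 * n_clusters * r                # stored densely here only for clarity
E = np.zeros((M_ext, M_ext))
rhs = np.zeros(M_ext)
oy, oz = N, N + n_clusters * r                # offsets of the y and z blocks
for i in range(n_clusters):
    xi = slice(i * m, (i + 1) * m)
    yi = slice(oy + i * r, oy + (i + 1) * r)
    zi = slice(oz + i * r, oz + (i + 1) * r)
    rhs[xi] = b[idx[i]]
    E[xi, zi] = U[i]
    E[yi, yi] = np.eye(r)
    E[yi, xi] = -V[i].T
    E[zi, zi] = np.eye(r)
    for j in range(n_clusters):
        if near(i, j):
            E[xi, slice(j * m, (j + 1) * m)] = A[idx[i], idx[j]]
        else:
            K_ij = U[i].T @ A[idx[i], idx[j]] @ V[j]          # r x r coupling (M2L-like)
            E[zi, slice(oy + j * r, oy + (j + 1) * r)] = -K_ij

x_ext = np.linalg.solve(E, rhs)[:N]           # solve the extended system
x_ref = np.linalg.solve(A, b)                 # dense reference solve
print("extended size:", M_ext, "vs dense size:", N)
print("relative error:", np.linalg.norm(x_ext - x_ref) / np.linalg.norm(x_ref))

Per idea (ii), the full algorithm then eliminates the extended sparse matrix in an order tied to the local and multipole coefficients of the FMM tree (rather than nested dissection) and hierarchically recompresses the fill-ins that appear in well-separated blocks, which is what keeps the overall cost linear.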
