Pseudospectral Shattering, the Sign Function, and Diagonalization in Nearly Matrix Multiplication Time (1912.08805v5)

Published 18 Dec 2019 in math.NA, cs.CC, cs.DS, cs.NA, math.FA, and math.PR

Abstract: We exhibit a randomized algorithm which given a matrix $A\in \mathbb{C}^{n\times n}$ with $\|A\|\le 1$ and $\delta>0$, computes with high probability an invertible $V$ and diagonal $D$ such that $\|A-VDV^{-1}\|\le \delta$ using $O(T_{MM}(n)\log^2(n/\delta))$ arithmetic operations, in finite arithmetic with $O(\log^4(n/\delta)\log n)$ bits of precision. Here $T_{MM}(n)$ is the number of arithmetic operations required to multiply two $n\times n$ complex matrices numerically stably, known to satisfy $T_{MM}(n)=O(n^{\omega+\eta})$ for every $\eta>0$, where $\omega$ is the exponent of matrix multiplication (Demmel et al., Numer. Math., 2007). Our result significantly improves the previously best known provable running times of $O(n^{10}/\delta^2)$ arithmetic operations for diagonalization of general matrices (Armentano et al., J. Eur. Math. Soc., 2018), and (with regard to the dependence on $n$) $O(n^3)$ arithmetic operations for Hermitian matrices (Dekker and Traub, Lin. Alg. Appl., 1971). It is the first algorithm to achieve nearly matrix multiplication time for diagonalization in any model of computation (real arithmetic, rational arithmetic, or finite arithmetic), thereby matching the complexity of other dense linear algebra operations such as inversion and $QR$ factorization up to polylogarithmic factors. The proof rests on two new ingredients. (1) We show that adding a small complex Gaussian perturbation to any matrix splits its pseudospectrum into $n$ small well-separated components. In particular, this implies that the eigenvalues of the perturbed matrix have a large minimum gap, a property of independent interest in random matrix theory. (2) We give a rigorous analysis of Roberts' Newton iteration method (Roberts, Int. J. Control, 1980) for computing the sign function of a matrix in finite arithmetic, itself an open problem in numerical analysis since at least 1986.
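The two ingredients named in the abstract admit a compact illustration. The following NumPy sketch is an illustrative assumption, not the paper's finite-arithmetic algorithm: the function names (perturb, matrix_sign, split_spectrum) and parameter choices (gamma, shift, iters, tol) are hypothetical. It adds a small complex Ginibre perturbation to a matrix and then runs Roberts' Newton iteration for the matrix sign function to obtain approximate spectral projectors, the basic divide-and-conquer step behind such a diagonalization scheme.

import numpy as np

def perturb(A, gamma, seed=0):
    # Add a complex Ginibre perturbation of operator norm O(gamma),
    # mimicking the "pseudospectral shattering" regularization step.
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    G = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2 * n)
    return A + gamma * G

def matrix_sign(M, iters=100, tol=1e-12):
    # Roberts' Newton iteration X_{k+1} = (X_k + X_k^{-1}) / 2, which converges
    # to sgn(M) provided M has no purely imaginary eigenvalues.
    X = M.astype(complex)
    for _ in range(iters):
        X_next = 0.5 * (X + np.linalg.inv(X))
        if np.linalg.norm(X_next - X) <= tol * np.linalg.norm(X):
            return X_next
        X = X_next
    return X

def split_spectrum(A, shift=0.0):
    # Spectral projectors onto the invariant subspaces for eigenvalues with
    # real part greater / less than `shift` (one divide-and-conquer step).
    n = A.shape[0]
    S = matrix_sign(A - shift * np.eye(n))
    P_plus = 0.5 * (np.eye(n) + S)
    P_minus = 0.5 * (np.eye(n) - S)
    return P_plus, P_minus

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 8
    A = rng.standard_normal((n, n)) / np.sqrt(n)        # ||A|| roughly of order 1
    A_tilde = perturb(A, gamma=1e-3)                     # regularize before splitting
    P_plus, P_minus = split_spectrum(A_tilde, shift=0.0)
    print("eigenvalues with Re > 0:", np.linalg.matrix_rank(P_plus))
    print("eigenvalues with Re < 0:", np.linalg.matrix_rank(P_minus))

In the paper, the perturbation size, the splitting lines, and the working precision are chosen so that the pseudospectrum remains shattered throughout the recursion; the sketch above works in ordinary floating point and omits that analysis.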

Citations (3)