
Approximating Sparse Matrices and Their Functions Using Matrix-Vector Products (2310.05625v2)

Published 9 Oct 2023 in math.NA and cs.NA

Abstract: The computation of a matrix function $f(A)$ is an important task in scientific computing, arising in machine learning, network analysis, and the solution of partial differential equations. In this work, we use only matrix-vector products $x \mapsto Ax$ to approximate functions of sparse matrices and of matrices with related structure, such as sparse matrices $A$ themselves or matrices whose entries exhibit decay similar to that of matrix functions. We show that when $A$ is a sparse matrix with an unknown sparsity pattern, techniques from compressed sensing can be used under natural assumptions. Moreover, if $A$ is a banded matrix, then certain deterministic matrix-vector products can efficiently recover the large entries of $f(A)$. We describe an algorithm for each of the two cases and give an error analysis based on decay bounds for the entries of $f(A)$. We conclude with numerical experiments demonstrating the accuracy of our algorithms.
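To make the two recovery regimes described in the abstract concrete, here is a minimal NumPy sketch of both ideas. It is not the paper's implementation, and the function names, parameter choices, and test setup are illustrative assumptions: the banded routine uses the classical colouring argument (columns whose indices agree modulo $2b+1$ have disjoint row supports, so $2b+1$ probe vectors suffice), while the sparse routine decodes each row of $A$ from random Gaussian matrix-vector products with orthogonal matching pursuit, one standard compressed-sensing decoder among several possible choices.

```python
import numpy as np

def probe_banded(matvec, n, b):
    """Recover an n x n matrix of bandwidth b from 2b+1 matvecs.

    Columns whose indices agree mod (2b+1) have disjoint row supports,
    so one product with the sum of their unit vectors reveals them all
    (the classical graph-colouring probing idea)."""
    s = 2 * b + 1
    A = np.zeros((n, n))
    for k in range(s):
        cols = np.arange(k, n, s)           # one colour class of columns
        v = np.zeros(n)
        v[cols] = 1.0
        y = matvec(v)                       # sum of the selected columns
        for j in cols:
            lo, hi = max(0, j - b), min(n, j + b + 1)
            A[lo:hi, j] = y[lo:hi]          # disjoint supports: exact split
    return A

def omp(Phi, y, s, tol=1e-10):
    """Orthogonal matching pursuit: recover an s-sparse x from y = Phi @ x."""
    support, r = [], y.copy()
    for _ in range(s):
        j = int(np.argmax(np.abs(Phi.T @ r)))   # best-correlated column
        if j not in support:
            support.append(j)
        x_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ x_s           # update the residual
        if np.linalg.norm(r) <= tol:
            break
    x = np.zeros(Phi.shape[1])
    x[support] = x_s
    return x

def recover_sparse(matvec, n, s, m, rng):
    """Recover a matrix with <= s nonzeros per row from m random matvecs.

    Y = A @ G holds m Gaussian measurements of every row of A at once,
    since Y[i] = G.T @ A[i]; each row is then decoded independently."""
    G = rng.standard_normal((n, m))
    Y = np.column_stack([matvec(G[:, k]) for k in range(m)])
    return np.vstack([omp(G.T, Y[i], s) for i in range(n)])

# quick sanity checks on random test matrices
rng = np.random.default_rng(0)
n, b = 60, 2
B = np.triu(np.tril(rng.standard_normal((n, n)), b), -b)  # bandwidth-b matrix
print(np.linalg.norm(probe_banded(lambda v: B @ v, n, b) - B))  # exact: 0.0

s, m = 4, 40        # recovery succeeds w.h.p. once m is around s*log(n)
S = np.zeros((n, n))
for i in range(n):  # plant s nonzeros per row at random positions
    S[i, rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
print(np.linalg.norm(recover_sparse(lambda v: S @ v, n, s, m, rng) - S))
```

In the paper's setting the analogous probing is applied to $f(A)$ itself: products $x \mapsto f(A)x$ can be computed without forming $f(A)$ (for example via Krylov methods), and the decay bounds for the entries of $f(A)$ control the error incurred by recovering only its large entries.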
