Optimal Solutions of Well-Posed Linear Systems via Low-Precision Right-Preconditioned GMRES with Forward and Backward Stabilization (2303.04251v1)

Published 7 Mar 2023 in math.NA and cs.NA

Abstract: Linear systems in applications are typically well-posed, and yet the coefficient matrices may be nearly singular in that the condition number $\kappa(\boldsymbol{A})$ may be close to $1/\varepsilon_{w}$, where $\varepsilon_{w}$ denotes the unit roundoff of the working precision. It is well known that iterative refinement (IR) can make the forward error independent of $\kappa(\boldsymbol{A})$ if $\kappa(\boldsymbol{A})$ is sufficiently smaller than $1/\varepsilon_{w}$ and the residual is computed in higher precision. We propose a new iterative method, called Forward-and-Backward Stabilized Minimal Residual or FBSMR, by conceptually hybridizing right-preconditioned GMRES (RP-GMRES) with quasi-minimization. We develop FBSMR based on a new theoretical framework of essential-forward-and-backward stability (EFBS), which extends the backward error analysis to consider the intrinsic condition number of a well-posed problem. We stabilize the forward and backward errors in RP-GMRES to achieve EFBS by evaluating a small portion of the algorithm in higher precision while evaluating the preconditioner in lower precision. FBSMR can achieve optimal accuracy in terms of both forward and backward errors for well-posed problems with unpolluted matrices, independently of $\kappa(\boldsymbol{A})$. With low-precision preconditioning, FBSMR can reduce the computational, memory, and energy requirements over direct methods with or without IR. FBSMR can also leverage parallelization-friendly classical Gram-Schmidt in Arnoldi iterations without compromising EFBS. We demonstrate the effectiveness of FBSMR using both random and realistic linear systems.
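The abstract contrasts FBSMR with classical iterative refinement (IR), where a low-precision factorization is reused while the residual is evaluated in higher precision. As a point of reference, here is a minimal sketch of that classical mixed-precision IR scheme (this is the baseline the abstract alludes to, not the paper's FBSMR algorithm; the function name `ir_solve` and the iteration count are illustrative choices):

```python
# Mixed-precision iterative refinement sketch: factorize once in single
# precision, then refine using residuals computed in double precision.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def ir_solve(A, b, iters=5):
    # Low-precision LU factorization -- the expensive O(n^3) step.
    lu, piv = lu_factor(A.astype(np.float32))
    # Initial low-precision solve, promoted to working precision.
    x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        # Residual in higher (double) precision -- the key to making the
        # forward error independent of kappa(A) when kappa(A) << 1/eps_w.
        r = b - A @ x
        # Correction via the cheap low-precision factors.
        d = lu_solve((lu, piv), r.astype(np.float32)).astype(np.float64)
        x += d
    return x
```

For a well-conditioned system, a handful of refinement steps typically recovers a forward error near double-precision roundoff despite the single-precision factorization; the abstract's point is that FBSMR achieves comparable accuracy within a right-preconditioned GMRES framework instead.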


Authors (1)