An Adaptive Sampling Approach for the Reduced Basis Method (1910.00298v2)

Published 1 Oct 2019 in math.NA and cs.NA

Abstract: The offline time of the reduced basis method can be very long given a large training set of parameter samples. This typically happens when the system has more than two independent parameters. On the other hand, if the training set includes fewer parameter samples, the greedy algorithm might produce a reduced-order model with large errors at samples outside the training set. We introduce a method based on a surrogate error model to efficiently sample the parameter domain, such that the training set is adaptively updated starting from a coarse set with a small number of parameter samples. A sharp a posteriori error estimator is evaluated on the coarse training set. Radial basis functions are used to interpolate the error estimator over a separate fine training set. At every iteration, points from the fine training set are added to the coarse training set based on a user-defined criterion. In parallel, parameter samples satisfying the defined tolerance are adaptively removed from the coarse training set. The approach is shown to avoid high computational costs by using a small training set and to provide a reduced-order model with guaranteed accuracy over the fine training set. Further, we show numerical evidence that the reduced-order model meets the defined tolerance over an independently sampled test set from the parameter domain.
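
The abstract describes an iterative loop: evaluate a sharp a posteriori error estimator on a coarse training set, interpolate it over a fine training set with radial basis functions, then add and remove parameter samples. The sketch below is a minimal illustration of that loop, not the paper's implementation: the error_estimator stand-in, the selection criterion (largest predicted error), the tolerance, and the set sizes are all assumptions made here for demonstration; only scipy's RBFInterpolator is a real API.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Hypothetical stand-in for the sharp a posteriori error estimator: in the
# actual method this queries the current reduced-order model, and its values
# shrink as the greedy algorithm enriches the reduced basis.
def error_estimator(samples):
    return 1e-2 * np.exp(-np.sum((samples - 0.3) ** 2, axis=1))

tol = 1e-4                                # user-defined tolerance (assumed)
coarse = rng.uniform(0.0, 1.0, (10, 3))   # coarse training set, 3 parameters
fine = rng.uniform(0.0, 1.0, (2000, 3))   # separate fine training set

for iteration in range(20):
    # Evaluate the error estimator only on the small coarse set; this is the
    # expensive step the method avoids performing on the fine set.
    estimates = error_estimator(coarse)

    # Interpolate the estimator over the fine set with radial basis functions.
    surrogate = RBFInterpolator(coarse, estimates)
    predicted = surrogate(fine)

    # Stop once the surrogate predicts the tolerance is met everywhere.
    if predicted.max() < tol:
        break

    # Enlarge: move the fine-set sample with the largest predicted error into
    # the coarse set (one possible user-defined criterion).
    worst = np.argmax(predicted)
    coarse = np.vstack([coarse, fine[worst]])
    fine = np.delete(fine, worst, axis=0)

    # Shrink: drop coarse samples already below tolerance, keeping at least
    # the 4 points the default thin-plate-spline RBF needs in 3 dimensions.
    keep = error_estimator(coarse) >= tol
    if keep.sum() >= coarse.shape[1] + 1:
        coarse = coarse[keep]
```

In the paper's setting the estimator is tied to the current reduced-order model, so each pass of this loop would be interleaved with a greedy basis-enrichment step that drives the estimated errors down; the sketch omits that step to stay self-contained.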

Authors (3)
  1. Sridhar Chellappa (10 papers)
  2. Lihong Feng (24 papers)
  3. Peter Benner (167 papers)
Citations (17)
