Using Gradient to Boost the Generalization Performance of Deep Learning Models for Fluid Dynamics (2212.00716v1)

Published 9 Oct 2022 in physics.flu-dyn, cs.LG, cs.NA, and math.NA

Abstract: Nowadays, Computational Fluid Dynamics (CFD) is a fundamental tool for industrial design. However, the computational cost of such simulations is high and can be prohibitive in real-world use cases where many simulations are necessary, such as shape optimization. Recently, Deep Learning (DL) has achieved a significant leap across a wide spectrum of applications and has become a strong candidate for modeling physical systems, opening new perspectives for CFD. To circumvent the computational bottleneck of CFD, DL models have been used to learn on Euclidean data and, more recently, on non-Euclidean data such as unstructured grids and manifolds, enabling much faster and more efficient (in memory and hardware) surrogate models. Nevertheless, DL has an intrinsic limitation in extrapolating (generalizing) outside the training data distribution (design space). In this study, we present a novel approach to increase the generalization capabilities of deep learning models. To do so, we incorporate the physical gradients (derivatives of the outputs with respect to the inputs) into the DL models. Our strategy yields better generalization of DL networks, and our methodological/theoretical study is corroborated by empirical validation, including an ablation study.
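
The abstract describes supervising a surrogate model with the physical gradients of its outputs with respect to its inputs, but the paper's implementation is not reproduced on this page. The snippet below is only a minimal sketch of one common way to realize such gradient supervision, a Sobolev-style training loss in PyTorch; the names (`sobolev_loss`, `lambda_grad`) and the toy MLP, tensor shapes, and weighting are illustrative assumptions, not the authors' code.

```python
import torch

def sobolev_loss(model, x, y_true, dy_dx_true, lambda_grad=0.1):
    """Data term plus a gradient-matching penalty (Sobolev-style training).

    Hypothetical sketch: the surrogate is trained to match both the
    reference CFD outputs and their derivatives w.r.t. the inputs.
    """
    x = x.detach().requires_grad_(True)  # track derivatives w.r.t. inputs
    y_pred = model(x)

    # Standard supervised loss on the outputs.
    data_term = torch.mean((y_pred - y_true) ** 2)

    # Derivatives of the predicted outputs w.r.t. the inputs via autodiff.
    dy_dx_pred = torch.autograd.grad(
        y_pred,
        x,
        grad_outputs=torch.ones_like(y_pred),
        create_graph=True,  # keep the graph so the penalty is trainable
    )[0]

    # Penalize mismatch with the reference (physical) gradients.
    grad_term = torch.mean((dy_dx_pred - dy_dx_true) ** 2)

    return data_term + lambda_grad * grad_term

# Illustrative usage with a toy MLP surrogate (shapes are hypothetical).
net = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1)
)
x = torch.randn(32, 3)        # design parameters
y_true = torch.randn(32, 1)   # reference CFD outputs
dy_dx_true = torch.randn(32, 3)  # reference output gradients
loss = sobolev_loss(net, x, y_true, dy_dx_true)
loss.backward()               # trains both terms end to end
```

In this kind of scheme the gradient term acts as a regularizer: the weight `lambda_grad` trades off fitting the outputs against fitting their derivatives, which is the general mechanism by which gradient information can improve extrapolation outside the training design space.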

Authors (1)
