Rethinking materials simulations: Blending direct numerical simulations with neural operators (2312.05410v1)

Published 8 Dec 2023 in cs.LG and physics.comp-ph

Abstract: Direct numerical simulations (DNS) are accurate but computationally expensive for predicting materials evolution across timescales, due to the complexity of the underlying evolution equations, the nature of multiscale spatio-temporal interactions, and the need to reach long-time integration. We develop a new method that blends numerical solvers with neural operators to accelerate such simulations. This methodology is based on the integration of a community numerical solver with a U-Net neural operator, enhanced by a temporal-conditioning mechanism that enables accurate extrapolation and efficient time-to-solution predictions of the dynamics. We demonstrate the effectiveness of this framework on simulations of microstructure evolution during physical vapor deposition modeled via the phase-field method. Such simulations exhibit high spatial gradients due to the co-evolution of different material phases with simultaneous slow and fast materials dynamics. We establish accurate extrapolation of the coupled solver with up to a 16.5× speed-up compared to DNS. This methodology is generalizable to a broad range of evolutionary models, from solid mechanics to fluid dynamics, geophysics, climate, and more.
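
To make the blending pattern concrete, below is a minimal sketch in PyTorch of the kind of hybrid time-stepping loop the abstract describes: short bursts of a trusted numerical solver alternated with a temporally conditioned U-Net that leaps over many solver steps at once. Everything here is illustrative rather than the authors' implementation: `dns_step` stands in for the community phase-field solver (replaced by plain diffusion for brevity), `TimeConditionedUNet` is a toy stand-in for the paper's U-Net operator, and the names `hybrid_rollout`, `n_dns`, and `n_surrogate` are hypothetical.

```python
# Schematic sketch of a hybrid DNS / neural-operator rollout.
# All names and structural choices here are illustrative assumptions,
# not the paper's actual implementation.
import torch
import torch.nn as nn


class TimeConditionedUNet(nn.Module):
    """Toy U-Net-style surrogate conditioned on the lead time dt.

    A real temporal-conditioning mechanism would inject dt into the
    network more carefully; here we simply concatenate dt as an extra
    constant input channel.
    """

    def __init__(self, channels: int = 1, width: int = 16):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(channels + 1, width, 3, stride=2, padding=1), nn.GELU()
        )
        self.up = nn.ConvTranspose2d(width, channels, 4, stride=2, padding=1)

    def forward(self, phi: torch.Tensor, dt: float) -> torch.Tensor:
        # Broadcast dt to a constant channel so the operator sees the lead time.
        t = torch.full_like(phi[:, :1], dt)
        return phi + self.up(self.down(torch.cat([phi, t], dim=1)))


def dns_step(phi: torch.Tensor, dt: float) -> torch.Tensor:
    """Placeholder for one step of the numerical solver: an explicit
    diffusion update via a 5-point Laplacian stencil on a periodic grid."""
    lap = (
        torch.roll(phi, 1, -1) + torch.roll(phi, -1, -1)
        + torch.roll(phi, 1, -2) + torch.roll(phi, -1, -2) - 4 * phi
    )
    return phi + dt * lap


def hybrid_rollout(phi, model, dt, n_dns=10, n_surrogate=40, cycles=5):
    """Alternate short DNS bursts (accuracy anchor) with longer
    surrogate extrapolations (speed): the blending pattern the
    abstract describes, in schematic form."""
    with torch.no_grad():
        for _ in range(cycles):
            for _ in range(n_dns):          # trusted numerical steps
                phi = dns_step(phi, dt)
            # one conditioned operator call spans n_surrogate solver steps
            phi = model(phi, n_surrogate * dt)
    return phi


phi0 = torch.rand(1, 1, 64, 64)             # initial microstructure field
print(hybrid_rollout(phi0, TimeConditionedUNet(), dt=0.1).shape)
```

The design intuition suggested by the abstract is that the operator, once trained on DNS trajectories, can replace many expensive solver steps with a single conditioned forward pass, while the interleaved solver bursts keep the trajectory anchored to the governing equations; that trade is where a speed-up over pure DNS would come from.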
