
Multifidelity, domain decomposition, and stacking for improving training for physics-informed networks

Dr Amanda Howard
Pacific Northwest National Laboratory

Physics-informed neural networks and operator networks have shown promise for effectively solving equations that model physical systems. However, these networks can be difficult or impossible to train accurately for some systems of equations. One way to improve training is to use a small amount of data; however, such data is expensive to produce. We will introduce our novel multifidelity framework for stacking physics-informed neural networks and operator networks, which facilitates training by progressively reducing prediction errors when no data is available. In stacking, we successively build a chain of networks in which the output at one step acts as a low-fidelity input for training the next step, gradually increasing the expressivity of the learned model. Finally, we will discuss the extension to domain decomposition using the finite basis method, including applications to the newly developed Kolmogorov-Arnold networks, and applications to accelerating computational fluid dynamics simulations.
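
To make the stacking idea concrete, below is a minimal sketch, not the speaker's implementation: it assumes PyTorch, a toy ODE u'(x) = cos(2*pi*x) with u(0) = 0, and a three-stage chain, all of which are illustrative choices. Each stage's network receives the previous (frozen) stage's prediction as an extra low-fidelity input feature, and only the newest stage is trained against the physics residual, so no data is required.

import math
import torch
import torch.nn as nn

torch.manual_seed(0)

def mlp(in_dim, width=32, hidden=2):
    layers, d = [], in_dim
    for _ in range(hidden):
        layers += [nn.Linear(d, width), nn.Tanh()]
        d = width
    return nn.Sequential(*layers, nn.Linear(d, 1))

def chain(nets):
    # u_k(x) = net_k([x, u_{k-1}(x)]): each stage sees the previous
    # stage's prediction as an additional low-fidelity input feature.
    def u(x):
        out = None
        for net in nets:
            inp = x if out is None else torch.cat([x, out], dim=1)
            out = net(inp)
        return out
    return u

def pinn_loss(u, x):
    # Physics residual for u'(x) = cos(2*pi*x) plus the condition u(0) = 0.
    # Differentiating through the frozen earlier stages keeps the residual
    # consistent with the composed prediction.
    x = x.clone().requires_grad_(True)
    ux = u(x)
    du = torch.autograd.grad(ux, x, torch.ones_like(ux), create_graph=True)[0]
    pde = ((du - torch.cos(2 * math.pi * x)) ** 2).mean()
    bc = u(torch.zeros(1, 1)).pow(2).mean()
    return pde + bc

x_train = torch.rand(256, 1)
stages = []
for k in range(3):                       # three stacked stages (illustrative)
    for prev in stages:                  # freeze all earlier stages
        for p in prev.parameters():
            p.requires_grad_(False)
    net = mlp(in_dim=1 if k == 0 else 2)  # later stages also see u_{k-1}(x)
    u = chain(stages + [net])
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(2000):                # train only the newest stage
        opt.zero_grad()
        loss = pinn_loss(u, x_train)
        loss.backward()
        opt.step()
    stages.append(net)
    print(f"stage {k}: residual loss = {loss.item():.3e}")

Each stage is cheap to train on its own, and because the earlier stages are frozen, each new stage only has to learn a correction to an already reasonable low-fidelity prediction.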

Data-Driven Science and Engineering Seminars
AI Institute in Dynamic Systems
Cite as
A. Howard (2025, April 25), Multifidelity, domain decomposition, and stacking for improving training for physics-informed networks, Data-Driven Science and Engineering Seminars, AI Institute in Dynamic Systems.
Details
Listed seminar: open to all
Recorded: available to all
Video length: 56:02
Q&A: now closed