Symmetry’s made to be broken: Learning how to break symmetry with symmetry-preserving neural networks

Asst. Prof. Tess Smidt
Massachusetts Institute of Technology

Symmetry-preserving (equivariant) neural networks are extremely data-efficient and generalize well across diverse domains (e.g. computer vision and atomic systems). But are there circumstances where equivariance is too strict or otherwise undesirable? In this talk, I’ll discuss how symmetry-preserving neural networks can learn symmetry-breaking information in order to fit a dataset that, unbeknownst to the researcher, is missing information. Furthermore, due to the mathematical guarantees of equivariant neural networks, these learned parameters are guaranteed to be minimally symmetry-breaking. I’ll describe network architectures that can learn these symmetry-breaking parameters in two distinct settings: 1) symmetry-breaking parameters are learned for an entire dataset, capturing global asymmetries in the data, and 2) symmetry-breaking parameters are predicted in an equivariant manner for individual examples. Finally, I’ll demonstrate these networks on several prototypical examples and apply them to predicting structural distortions of crystalline materials.
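To make the first setting concrete, here is a minimal toy sketch (not from the talk; all names and the setup are illustrative assumptions). A rotation-invariant model in 2D can only depend on |x|, so it cannot fit labels that depend on a hidden preferred direction. Adding a single learnable vector `b` restores the fit while keeping the model form equivariant under a joint rotation of input and parameter, and the learned `b` carries exactly the symmetry-breaking information:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset whose labels depend on a hidden preferred direction v.
# A rotation-invariant model y = f(|x|) cannot fit this data.
v = np.array([0.6, -0.8])        # hidden symmetry-breaking direction
X = rng.normal(size=(256, 2))    # 2D inputs
y = X @ v                        # labels break rotational symmetry

# Model: y_hat = x . b, with b a learnable "symmetry-breaking" vector
# shared across the whole dataset. The dot product is invariant under a
# joint rotation of x and b, so all asymmetry is carried by b alone.
b = np.zeros(2)
lr = 0.1
for _ in range(200):
    grad = 2 * X.T @ (X @ b - y) / len(X)   # gradient of the MSE loss
    b -= lr * grad

print(b)   # converges toward the hidden direction v
```

Fitting `b` by gradient descent recovers the dataset's global asymmetry, analogous to learning dataset-wide symmetry-breaking parameters; the per-example setting would instead predict such a vector equivariantly for each input.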

Data-Driven Science and Engineering Seminars
AI Institute in Dynamic Systems
Cite as
T. Smidt (2023, January 6), Symmetry’s made to be broken: Learning how to break symmetry with symmetry-preserving neural networks
Details
Listed seminar: This seminar is open to all
Recorded: Available to all
Video length: 1:04:14