AI/ML+Physics Part 3: Designing an Architecture - presented by Prof. Steve Brunton

Slide at 12:08: "WHAT IS PHYSICS?" Interpretability / Generalizability; Parsimony / Simplicity; Symmetries / Invariances / Conservation. YES!
Summary (AI generated)

The takeaway is that we want our machine learning models to be interpretable, generalizable, simple, and parsimonious, while enforcing the known symmetries, invariances, and conservation laws of the physical world. We should build thousands of years of accumulated human experience in learning physics into our models.

For example, consider a pendulum in a lab as a physical system. The data is a time series of high-dimensional pixel vectors from a video of the pendulum. Although each measurement is high-dimensional, the system itself is low-dimensional: its state is fully described by the pendulum's angle and angular velocity, as illustrated in the sketch below.
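The following minimal sketch (not from the lecture) makes this concrete: it simulates a frictionless pendulum and rasterizes each state into a 32x32 grayscale frame, so the "measured" data is a 1024-dimensional pixel vector per time step even though the true state is just (theta, theta_dot). The physical constants, image size, and the `render_frame` helper are assumptions made purely for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy illustration: simulate a frictionless pendulum and render each state
# as a small grayscale image, so the "data" is a high-dimensional time series
# of pixels while the underlying state is just (theta, theta_dot).

G, L = 9.81, 1.0          # gravity (m/s^2) and pendulum length (m), assumed values
N_PIX = 32                # each frame is N_PIX x N_PIX, i.e. 1024 pixels

def pendulum_rhs(t, state):
    theta, theta_dot = state
    return [theta_dot, -(G / L) * np.sin(theta)]

def render_frame(theta, n_pix=N_PIX, blob_width=0.08):
    """Rasterize the pendulum bob as a Gaussian blob on an n_pix x n_pix grid."""
    x_bob, y_bob = L * np.sin(theta), -L * np.cos(theta)
    xs = np.linspace(-1.2 * L, 1.2 * L, n_pix)
    X, Y = np.meshgrid(xs, xs)
    frame = np.exp(-((X - x_bob) ** 2 + (Y - y_bob) ** 2) / (2 * blob_width ** 2))
    return frame.ravel()  # flatten to a 1024-dimensional pixel vector

t_eval = np.linspace(0, 10, 500)
sol = solve_ivp(pendulum_rhs, (0, 10), [np.pi / 3, 0.0], t_eval=t_eval)
video = np.stack([render_frame(th) for th in sol.y[0]])  # shape (500, 1024)

print(video.shape)    # high-dimensional measurements: 500 frames x 1024 pixels
print(sol.y.T.shape)  # underlying low-dimensional state: 500 x 2 (theta, theta_dot)
```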

As humans, we can extract key features and patterns from high-dimensional data and identify the important variables, such as angle and angular velocity. We may choose a machine learning architecture, such as an autoencoder network, to compress the data and learn a low-dimensional representation of those variables; a sketch follows.
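Below is a minimal autoencoder sketch in PyTorch, assuming the 1024-pixel frames produced by the previous snippet (the `video` array). The layer sizes, two-dimensional latent space, and training hyperparameters are illustrative choices, not the specific architecture discussed in the lecture.

```python
import torch
import torch.nn as nn

# Minimal autoencoder: compress 1024-pixel frames to a 2-dimensional latent code,
# where a coordinate pair like (theta, theta_dot) could be discovered.

class Autoencoder(nn.Module):
    def __init__(self, n_pixels=1024, latent_dim=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_pixels, 128), nn.ReLU(),
            nn.Linear(128, 32), nn.ReLU(),
            nn.Linear(32, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(),
            nn.Linear(32, 128), nn.ReLU(),
            nn.Linear(128, n_pixels),
        )

    def forward(self, x):
        z = self.encoder(x)           # low-dimensional latent representation
        return self.decoder(z), z     # reconstruction and latent code

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

frames = torch.tensor(video, dtype=torch.float32)  # (500, 1024) pixel data from above
for epoch in range(200):
    recon, z = model(frames)
    loss = loss_fn(recon, frames)     # reconstruction loss drives the compression
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The reconstruction loss alone does not guarantee that the latent coordinates equal angle and angular velocity, only that two latent variables suffice to describe the data; extra structure (for example, a dynamics model in the latent space) is what nudges them toward physically meaningful coordinates.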

We can also learn the differential equations governing the evolution of those variables, such as the dynamics of the pendulum, by choosing an architecture that is well suited to learning dynamics, like the sparse identification of nonlinear dynamics (SINDy) algorithm; a bare-bones sketch is given below.
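The pysindy package provides a full implementation of SINDy; the sketch below only illustrates the core idea, sequentially thresholded least squares over a library of candidate functions, applied to the simulated pendulum states (`sol.y` and `t_eval`) from the first snippet. The candidate library and threshold are assumptions chosen for this toy example.

```python
import numpy as np

# Bare-bones SINDy-style sparse regression: try to recover
# theta' = theta_dot and theta_dot' = -(g/l) sin(theta) from data.

def library(X):
    """Candidate functions of the state: constant, linear, and trig terms."""
    theta, theta_dot = X[:, 0], X[:, 1]
    return np.column_stack([
        np.ones_like(theta), theta, theta_dot,
        np.sin(theta), np.cos(theta),
        theta * theta_dot, np.sin(theta_dot),
    ])

def stlsq(Theta, dXdt, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares for a sparse coefficient matrix Xi."""
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(n_iter):
        Xi[np.abs(Xi) < threshold] = 0.0           # zero out small coefficients
        for k in range(dXdt.shape[1]):             # refit on the surviving terms
            big = np.abs(Xi[:, k]) >= threshold
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dXdt[:, k], rcond=None)[0]
    return Xi

X = sol.y.T                                # states (theta, theta_dot) from the simulation
dXdt = np.gradient(X, t_eval, axis=0)      # finite-difference time derivatives
Xi = stlsq(library(X), dXdt)
print(Xi)  # ideally only theta_dot and -(g/l) sin(theta) survive as active terms
```

In a full pipeline, the same sparse regression would be applied to the autoencoder's latent coordinates rather than the true states, so that the coordinate discovery and the equation discovery are learned together.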