AI/ML+Physics Part 3: Designing an Architecture - presented by Prof. Steve Brunton

[Slide at 05:40 — "Architectures": diagrams of a ResNet block (two 3×3 convolutions with stride 1 and padding 1, each followed by batch norm and ReLU, plus a skip connection), a U-Net (3×3 conv + ReLU stages, 2×2 max pool downsampling, 2×2 up-conv upsampling, copy-and-crop skip connections, final 1×1 conv mapping an input image tile to an output segmentation), a Fourier Neural Operator (input a(x) passed through stacked Fourier layers 1…T to produce u(x)), and a DeepONet (branch net and trunk net).]
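The ResNet block on the slide computes y = ReLU(x + F(x)), where F is the learned two-stage transform and the skip connection adds the input back. A minimal NumPy sketch of that skip-connection idea, using dense layers as stand-ins for the slide's conv + batch-norm stages (all weights and dimensions here are illustrative, not from the lecture):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, W1, b1, W2, b2):
    # Two affine + nonlinear stages standing in for the two
    # conv/batch-norm/ReLU stages on the slide, then the skip
    # connection adds the input back before the final ReLU.
    h = relu(x @ W1 + b1)
    f = h @ W2 + b2          # residual branch F(x)
    return relu(x + f)       # y = ReLU(x + F(x))

rng = np.random.default_rng(0)
d = 8
x = rng.standard_normal(d)
W1, b1 = 0.1 * rng.standard_normal((d, d)), np.zeros(d)
W2, b2 = np.zeros((d, d)), np.zeros(d)   # zeroed residual branch

y = residual_block(x, W1, b1, W2, b2)
# With the residual branch zeroed, the block reduces to ReLU(x):
# it passes the input straight through. This is why deep residual
# networks are easy to train -- each block starts near the identity.
assert np.allclose(y, relu(x))
```

The zero-initialized second layer makes the near-identity behavior explicit: the block only has to learn a *correction* to its input, which is the inductive bias that lets very deep stacks train stably.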
Summary (AI generated)

We will cover each topic in depth with code, examples, and case studies, dedicating at least half an hour to an hour to each. There is a wealth of existing material, including roughly five hours focused solely on SINDy, so those interested can take a deep dive into equation discovery. Today's discussion centers on architectures that are specifically beneficial for physics: by building inductive biases into the machine learning architecture, they add structure and physics, yielding models that are more physical and require less data.

Physics plays a crucial role in machine learning, but the term itself needs clarification. While the Wikipedia definition involves matter, energy, and change, I prefer to define physics in terms of the capabilities we want our machine learning models to have. Historically, physics has been characterized by simple, interpretable principles such as F = ma and E = mc². These fundamental laws are easy to understand and they generalize, which is exactly what makes them valuable as a template for machine learning models.