SINDy-RL: Interpretable and Efficient Model-Based Reinforcement Learning

Nicholas Zolman
University of Washington


Associated pre-print

Nicholas Zolman et al. (2024) SINDy-RL: Interpretable and Efficient Model-Based Reinforcement Learning.

Deep reinforcement learning (DRL) has shown significant promise for uncovering sophisticated control policies that interact in environments with complicated dynamics, such as stabilizing the magnetohydrodynamics of a tokamak fusion reactor or minimizing the drag force exerted on an object in a fluid flow. However, these algorithms require an abundance of training examples and may become prohibitively expensive for many applications. In addition, the reliance on deep neural networks often results in an uninterpretable, black-box policy that may be too computationally expensive to use with certain embedded systems. Recent advances in sparse dictionary learning, such as the sparse identification of nonlinear dynamics (SINDy), have shown promise for creating efficient and interpretable data-driven models in the low-data regime. In this work we introduce SINDy-RL, a unifying framework for combining SINDy and DRL to create efficient, interpretable, and trustworthy representations of the dynamics model, reward function, and control policy. We demonstrate the effectiveness of our approaches on benchmark control environments and challenging fluids problems. SINDy-RL achieves comparable performance to state-of-the-art DRL algorithms using significantly fewer interactions in the environment and results in an interpretable control policy orders of magnitude smaller than a deep neural network policy.
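
The abstract refers to sparse dictionary learning via SINDy. As a rough illustration only (not code from the talk or the pre-print), the sketch below fits a polynomial dictionary to trajectory data using sequentially thresholded least squares, the core regression behind SINDy. The oscillator system, library terms, threshold, and NumPy-only implementation are illustrative assumptions.

```python
# Minimal SINDy-style sketch: fit sparse coefficients Xi so that dX/dt ≈ Theta(X) @ Xi.
import numpy as np

def library(X):
    """Polynomial dictionary up to degree 2 for a 2D state [x1, x2]."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x1 * x2, x2**2])

def stlsq(Theta, dXdt, threshold=0.1, iters=10):
    """Sequentially thresholded least squares: prune small coefficients, refit the rest."""
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for k in range(dXdt.shape[1]):          # refit each state dimension on its active terms
            big = ~small[:, k]
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dXdt[:, k], rcond=None)[0]
    return Xi

# Illustrative data: damped oscillator dx1/dt = -0.5*x1 + 2*x2, dx2/dt = -2*x1 - 0.5*x2.
dt, steps = 0.01, 1000
A = np.array([[-0.5, 2.0], [-2.0, -0.5]])
X = np.zeros((steps, 2))
X[0] = [2.0, 0.0]
for i in range(steps - 1):
    X[i + 1] = X[i] + dt * (A @ X[i])           # forward-Euler rollout

dXdt = np.gradient(X, dt, axis=0)               # finite-difference derivative estimate
Xi = stlsq(library(X), dXdt)
print(Xi)  # sparse coefficient matrix; nonzero rows indicate the recovered dictionary terms
```

In SINDy-RL this kind of sparse dictionary model stands in for the neural-network components (dynamics, reward, and policy surrogates), which is what yields the reported sample efficiency and the compact, interpretable policies.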

References
  1. Nicholas Zolman et al. (2024) SINDy-RL: Interpretable and Efficient Model-Based Reinforcement Learning.
Research Abstracts from Brunton Lab
Brunton Lab (University of Washington)
Cite as
N. Zolman (2024, May 16), SINDy-RL: Interpretable and Efficient Model-Based Reinforcement Learning
Details
  • Listed seminar: This seminar is open to all
  • Recorded: Available to all
  • Video length: 21:27