Actor-Critic Method for Solving High Dimensional Hamilton-Jacobi-Bellman type PDEs
Jianfeng Lu
Duke University

In this talk, we will discuss a numerical approach for solving high-dimensional Hamilton-Jacobi-Bellman (HJB) type elliptic partial differential equations (PDEs). The HJB PDEs, reformulated as optimal control problems, are tackled with an actor-critic framework inspired by reinforcement learning, based on neural network parametrizations of the value and control functions. Within the actor-critic framework, we employ a policy gradient approach to improve the control, while for the value function we derive a variance-reduced least-squares temporal difference method using stochastic calculus. We will also discuss convergence analysis for the actor-critic method, in particular for the policy gradient method for solving stochastic optimal control problems. Joint work with Jiequn Han (Flatiron Institute) and Mo Zhou (Duke University).
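
To make the ingredients above concrete, here is a minimal, self-contained actor-critic sketch in PyTorch for a discounted stochastic control problem of this flavor. It is not the speakers' implementation: the controlled dynamics dX = u dt + sigma dW, the quadratic running cost, the discount rate, the network sizes, and the training schedule are illustrative assumptions, and the actor update differentiates the simulated cost pathwise through a short rollout as a simplified stand-in for the policy gradient estimator discussed in the talk. The critic update shows one natural way to realize the variance reduction mentioned in the abstract: Ito's formula identifies a martingale increment sigma * grad V . dW inside the temporal difference, and subtracting it as a control variate removes the leading-order simulation noise from the least-squares residual.

```python
# Minimal actor-critic sketch for a discounted stochastic control problem whose
# value function solves an elliptic HJB-type PDE.  NOT the speakers' code: the
# dynamics dX = u dt + sigma dW, the quadratic running cost, the discount rate,
# the network sizes, and the pathwise actor gradient are illustrative assumptions.
import math
import torch
import torch.nn as nn

torch.manual_seed(0)
dim, batch = 10, 256             # state dimension, number of sampled paths
dt, n_steps = 0.01, 20           # Euler-Maruyama step, rollout length
beta, sigma = 1.0, math.sqrt(2)  # discount rate, constant diffusion coefficient
gamma = math.exp(-beta * dt)     # one-step discount factor


def mlp(out_dim):
    return nn.Sequential(nn.Linear(dim, 64), nn.Tanh(),
                         nn.Linear(64, 64), nn.Tanh(),
                         nn.Linear(64, out_dim))


value = mlp(1)      # critic: V_theta(x), approximates the PDE solution
policy = mlp(dim)   # actor:  feedback control u_phi(x)
opt_v = torch.optim.Adam(value.parameters(), lr=1e-3)
opt_u = torch.optim.Adam(policy.parameters(), lr=1e-3)


def running_cost(x, u):
    # Illustrative running cost: control penalty plus a confining potential.
    return 0.5 * (u ** 2).sum(-1, keepdim=True) + (x ** 2).sum(-1, keepdim=True)


for it in range(201):
    # ---- critic: variance-reduced TD residual over one Euler-Maruyama step ----
    x = torch.randn(batch, dim).requires_grad_(True)
    u = policy(x).detach()
    dw = math.sqrt(dt) * torch.randn(batch, dim)
    x_next = (x + u * dt + sigma * dw).detach()   # dX = u dt + sigma dW

    v = value(x)
    grad_v = torch.autograd.grad(v.sum(), x, create_graph=True)[0]
    # Ito's formula: the increment of V along the path contains the martingale
    # term sigma * grad V . dW; subtracting it from the TD residual acts as a
    # control variate and removes the O(sqrt(dt)) simulation noise.
    martingale = sigma * (grad_v * dw).sum(-1, keepdim=True)
    td = running_cost(x, u) * dt + gamma * value(x_next).detach() - v - martingale
    critic_loss = (td ** 2).mean()
    opt_v.zero_grad()
    critic_loss.backward()
    opt_v.step()

    # ---- actor: pathwise gradient of a short rollout, bootstrapped by the critic ----
    x_a = torch.randn(batch, dim)
    cost, disc = torch.zeros(batch, 1), 1.0
    for _ in range(n_steps):
        u_a = policy(x_a)
        cost = cost + disc * running_cost(x_a, u_a) * dt
        x_a = x_a + u_a * dt + sigma * math.sqrt(dt) * torch.randn(batch, dim)
        disc *= gamma
    cost = cost + disc * value(x_a).detach()      # terminal bootstrap by the critic
    actor_loss = cost.mean()
    opt_u.zero_grad()
    actor_loss.backward()
    opt_u.step()

    if it % 50 == 0:
        print(f"iter {it:4d}  critic loss {critic_loss.item():.3e}  "
              f"actor cost {actor_loss.item():.3e}")
```

Bootstrapping the actor's rollout with the learned value function keeps the simulated horizon short, which is the usual reason for pairing a critic with a policy-gradient-style update in this setting.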

References
  1. J. Han et al. (2020) Solving high-dimensional eigenvalue problems using deep neural networks: A diffusion Monte Carlo like approach. Journal of Computational Physics. (Associated Journal of Computational Physics article of record for this seminar.)
Journal of Computational Physics Seminar Series
Cite as
J. Lu (2023, April 3), Actor-Critic Method for Solving High Dimensional Hamilton-Jacobi-Bellman type PDEs
Details
  Listed seminar: This seminar is open to all
  Recorded: Available to all
  Video length: 58:48
  Q&A: Now closed
  Disclaimer: The views expressed in this seminar are those of the speaker and not necessarily those of the journal