 
TAU Seminar

Tuesday, 16 April 2019

14:30, room R2014, building 660

Michele Alessandro Bucci

(LIMSI)

Title: Control of chaotic dynamical systems with a Deep Reinforcement Learning approach


Abstract

Control of flows is a very active research field, fueled by its potential impact on current environmental issues and the resulting socio-political implications. For instance, the ability to control fluid motions would allow saving fuel, thus reducing the pollutants emitted by cargo ships and airliners.
Optimal control theory has already been successfully applied to academic flow configurations, such as boundary layers, cavity flows, or flows past bluff bodies. However, the success of these applications often relies on rather restrictive hypotheses, such as a low fluid velocity (i.e., a low Reynolds number) or the possibility of identifying the physical mechanisms at work; in such configurations, a standard Linear Quadratic Gaussian (LQG) controller coupled with an estimator (a Kalman filter) achieves satisfactory results. Another limitation is that, despite progress in computer hardware, direct numerical simulations of the Navier-Stokes equations for industrial set-ups remain computationally prohibitive, which makes the direct application of control theory impossible.
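As an illustration of the LQG setup mentioned above, here is a minimal sketch in Python for a generic linear time-invariant model; the matrices A, B, C and the cost and noise weights are hypothetical placeholders, not the flow models discussed in the talk.

```python
# Minimal LQG sketch for dx/dt = A x + B u + process noise, y = C x + sensor
# noise. All matrices below are illustrative placeholders.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-1.0, -0.1]])  # hypothetical linearized dynamics
B = np.array([[0.0], [1.0]])              # actuator input matrix
C = np.array([[1.0, 0.0]])                # sensor output matrix
Q = np.eye(2)                             # state weight in the quadratic cost
R = np.eye(1)                             # control weight in the quadratic cost
W = 0.01 * np.eye(2)                      # process-noise covariance
V = 0.01 * np.eye(1)                      # measurement-noise covariance

# LQR gain: minimizes J = integral of (x'Qx + u'Ru) dt, with u = -K x_hat.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# Kalman filter gain (the dual Riccati equation) for the state estimator:
# x_hat' = A x_hat + B u + L (y - C x_hat).
S = solve_continuous_are(A.T, C.T, W, V)
L = S @ C.T @ np.linalg.inv(V)
```

By the separation principle, the regulator gain K and the estimator gain L can be designed independently and then combined into the LQG compensator.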
A possible solution is the introduction of reduced-order models (ROMs) to circumvent the limitations due to the large dimensionality of the system at hand. Reduced-order modeling exploits the effective number of degrees of freedom of a system, thus providing a compact representation of its dynamics. However, this procedure can also render the controller ineffective in off-design operating conditions.
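As a sketch of how such a reduced-order model can be built, the snippet below performs a proper orthogonal decomposition (POD) of a snapshot matrix via the SVD; the snapshot data is random placeholder data, and the 99% energy threshold is an illustrative choice.

```python
# POD sketch: extract a low-dimensional basis from flow snapshots.
# 'snapshots' is a hypothetical (n_states x n_times) data matrix.
import numpy as np

rng = np.random.default_rng(0)
snapshots = rng.standard_normal((1000, 200))   # placeholder data

mean = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)

# Keep the smallest number of modes capturing 99% of the fluctuation energy.
r = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.99) + 1
basis = U[:, :r]                               # POD modes
coeffs = basis.T @ (snapshots - mean)          # reduced coordinates

reconstruction = mean + basis @ coeffs         # low-rank approximation
```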
We propose here a paradigm shift for the control of flows. Indeed, it is rather natural to rethink flow control problems within a machine learning framework. In particular, Deep Reinforcement Learning (DRL) is a suitable strategy for circumventing dimensionality constraints and non-linear optimization issues, while retaining the convenience of a quadratic cost function to assess the optimality of the control law.
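A minimal sketch of that quadratic cost, recast as a reward to be maximized by a DRL agent; the state x, actuation u, and the weights are hypothetical placeholders, not the talk's exact formulation.

```python
# Quadratic reward: the negative of the usual LQ cost.
import numpy as np

def quadratic_reward(x, u, Q_c, R_c):
    """Reward = -(x'Q_c x + u'R_c u): maximizing it drives the state toward
    zero while penalizing control effort, mirroring the LQG objective."""
    return -(x @ Q_c @ x + u @ R_c @ u)
```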
We consider the Kuramoto-Sivashinsky (KS) equation as a case study to test the capability of DRL to control chaotic dynamics. We show that DRL yields successful results, including in regimes where the LQG approach fails. Moreover, applying DRL to dynamical systems opens new fundamental questions about the optimal exploration of the action-state space. The amount of data necessary to represent a chaotic attractor grows exponentially with its correlation dimension. For turbulent flows, this dimension is potentially very large, so an efficient exploration strategy is necessary. Based on arguments about attractor topology, we show that not every trajectory is effective in the learning process and that some are more "informative" than others.
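For reference, a minimal pseudo-spectral integrator of the KS equation, u_t = -u u_x - u_xx - u_xxxx on a periodic domain, is sketched below; the domain length L = 22 (a standard choice exhibiting chaos), the resolution, and the time step are illustrative, and the simple semi-implicit Euler scheme is a sketch, not the method used in the talk.

```python
# Pseudo-spectral time stepper for the Kuramoto-Sivashinsky equation.
import numpy as np

L_dom, n, dt = 22.0, 64, 0.05                   # L = 22 already shows chaos
x = L_dom * np.arange(n) / n
k = 2 * np.pi * np.fft.fftfreq(n, d=L_dom / n)  # wavenumbers
lin = k**2 - k**4                                # linear operator in Fourier space

u = 0.1 * np.cos(2 * np.pi * x / L_dom)          # small initial perturbation
u_hat = np.fft.fft(u)

for _ in range(2000):
    # Nonlinear term -u u_x = -(1/2) d(u^2)/dx, computed pseudo-spectrally.
    nonlin = -0.5j * k * np.fft.fft(np.real(np.fft.ifft(u_hat))**2)
    # Semi-implicit Euler: stiff linear term implicit, nonlinear term explicit.
    u_hat = (u_hat + dt * nonlin) / (1.0 - dt * lin)

u = np.real(np.fft.ifft(u_hat))                  # chaotic state after t = 100
```

A trajectory of this system provides the action-state data a DRL agent would learn from; the observation about "informative" trajectories concerns which such trajectories best sample the attractor.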
Further open questions on the application of DRL to dynamical systems will be discussed, such as robustness issues and multi-objective functions with concurrent gradients.




Contact: guillaume.charpiat at inria.fr

