Accelerating Deep Reinforcement Learning of Active Flow Control strategies through a multi-environment approach

25 Jun 2019 · Jean Rabault, Alexander Kuhnle

Artificial Neural Networks trained through Deep Reinforcement Learning (DRL) have recently been proposed as a methodology to discover complex Active Flow Control (AFC) strategies [Rabault et al., Journal of Fluid Mechanics, 2019]. However, while promising results were obtained on a simple 2D benchmark flow at moderate Reynolds number, considerable speedups will be required to investigate more challenging flow configurations. In the case of AFC trained with Computational Fluid Dynamics (CFD) data, it was found that the CFD simulations, rather than the training of the Artificial Neural Network, were the bottleneck for speed of execution. Therefore, speedups should be obtained through a combination of two approaches. The first, well documented in the literature, consists in parallelizing the numerical simulation itself. The second consists in parallelizing the DRL algorithm, by running several independent simulations in parallel to provide data to the Artificial Neural Network. In the present work, we discuss this second approach to parallelization. We show that the problem is embarrassingly parallel and that perfect speedup can be obtained up to the batch size of the problem, with weaker scaling still taking place for a larger number of simulations. This opens the way to performing efficient, distributed DRL in the context of AFC, an important step towards studying more sophisticated Fluid Mechanics problems through DRL.
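
To make the multi-environment idea concrete, the sketch below shows one possible realization in Python: several independent environments, each standing in for a CFD simulation, collect one episode each in parallel, and their trajectories are gathered into a single training batch. Because the episodes do not communicate during rollout, data collection is embarrassingly parallel, which is the property the abstract exploits. Everything here is a hypothetical illustration (the `run_episode` helper, the toy linear policy, the placeholder dynamics and reward), not the authors' implementation.

```python
import multiprocessing as mp
import numpy as np

def run_episode(args):
    """Hypothetical stand-in for one CFD episode; a real worker would step a
    full flow solver and apply the agent's control actions at each step."""
    seed, policy_weights = args
    rng = np.random.default_rng(seed)
    states, actions, rewards = [], [], []
    state = rng.normal(size=4)                    # placeholder flow observation (e.g. probe values)
    for _ in range(50):                           # placeholder episode length
        action = np.tanh(state @ policy_weights)  # toy linear policy, fixed during rollout
        next_state = state + 0.1 * rng.normal(size=4) + 0.05 * action.mean()
        reward = -np.abs(next_state).sum()        # placeholder reward (e.g. a drag penalty)
        states.append(state)
        actions.append(action)
        rewards.append(reward)
        state = next_state
    return states, actions, rewards

if __name__ == "__main__":
    n_envs = 8                                    # number of parallel simulations
    policy_weights = np.zeros((4, 1))             # toy policy parameters, broadcast to workers
    with mp.Pool(n_envs) as pool:
        # Each environment is independent, so one episode runs per worker and
        # the results are simply concatenated into a single training batch.
        batch = pool.map(run_episode, [(seed, policy_weights) for seed in range(n_envs)])
    total_transitions = sum(len(ep[0]) for ep in batch)
    print(f"collected {total_transitions} transitions from {n_envs} parallel environments")
```

Under this scheme, speedup is linear as long as the number of parallel environments does not exceed the number of episodes needed to fill one training batch; beyond that point, extra simulations only help by overlapping data collection across batches, which matches the weaker scaling regime described above.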
