Thesis (Open Access)

Robust model-based deep reinforcement learning for flow control

Janis Geise

Active flow control, combined with deep reinforcement learning (DRL), has the potential to achieve remarkable drag reductions in fluid-mechanics applications. However, the high computational demand of CFD simulations currently limits the applicability of DRL to rather simple cases, such as the flow past a cylinder, because a large number of simulations has to be carried out over the course of the training. One possible approach to reducing the computational requirements is to partially substitute the simulations with models, e.g. deep neural networks; however, model uncertainties and error propagation may lead to unstable training and deteriorated performance compared to the model-free counterpart. The present thesis modifies the model-free training routine for controlling the flow past a cylinder into a model-based one: the policy training alternates between the CFD environment and environment models, which are re-trained successively over the course of the policy optimization. To reduce uncertainties and thereby improve the prediction accuracy, the CFD environment is represented by two model ensembles, one predicting the states and the lift force, the other the aerodynamic drag. This approach is shown to yield performance comparable to the model-free training routine at a Reynolds number of Re = 100 while reducing the overall runtime by up to 68.91%. The performance and stability of the model-based training, however, depend strongly on the initialization, which needs to be investigated further.
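As a rough illustration of the two-ensemble surrogate described above, the following minimal Python sketch shows how such an environment model could be structured. All class names, layer sizes, state/action dimensions, and the per-step random choice of an ensemble member are hypothetical assumptions made for illustration, not taken from the thesis code.

    # Minimal sketch of a two-ensemble surrogate environment (hypothetical,
    # not the thesis implementation).
    import torch
    import torch.nn as nn

    class EnvModel(nn.Module):
        # Fully connected network mapping (state, action) to a prediction.
        def __init__(self, n_in: int, n_out: int, n_hidden: int = 64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_in, n_hidden), nn.ReLU(),
                nn.Linear(n_hidden, n_hidden), nn.ReLU(),
                nn.Linear(n_hidden, n_out),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.net(x)

    class EnsembleEnvironment:
        # Two ensembles, mirroring the split described in the abstract: one
        # predicts the next state and the lift, the other the drag.
        def __init__(self, n_states: int, n_actions: int, n_models: int = 5):
            n_in = n_states + n_actions
            self.state_lift = [EnvModel(n_in, n_states + 1) for _ in range(n_models)]
            self.drag = [EnvModel(n_in, 1) for _ in range(n_models)]

        def step(self, state: torch.Tensor, action: torch.Tensor):
            # Query a randomly drawn ensemble member per step (one common way
            # to expose model uncertainty to the policy during rollouts).
            x = torch.cat([state, action], dim=-1)
            i = int(torch.randint(len(self.state_lift), (1,)))
            out = self.state_lift[i](x)
            next_state, c_l = out[..., :-1], out[..., -1]
            c_d = self.drag[i](x).squeeze(-1)
            return next_state, c_l, c_d

    # Usage: 12 probe states and a single actuation signal (arbitrary sizes).
    env = EnsembleEnvironment(n_states=12, n_actions=1)
    next_state, c_l, c_d = env.step(torch.zeros(12), torch.zeros(1))

During training, the policy would then alternate between episodes in the actual CFD solver, whose trajectories are also used to re-train the two ensembles, and much cheaper rollouts through such a surrogate; the latter is where the reported runtime reduction comes from.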

Files
robust_MB_DRL_for_flow_control.pdf (13.1 MB)
md5:9dd0e82161e449cb64c47ff19a08252d
