Conference paper Open Access

Active Improvement of Control Policies with Bayesian Gaussian Mixture Model

Girgin, Hakan; Pignat, Emmanuel; Jaquier, Noémie; Calinon, Sylvain

Learning from demonstration (LfD) is an intuitive framework allowing non-expert users to easily (re-)program robots. However, the quality and quantity of demonstrations have a great influence on the generalization performance of LfD approaches. In this paper, we introduce a novel active learning framework to improve the generalization capabilities of control policies. The proposed approach is based on the epistemic uncertainties of Bayesian Gaussian mixture models (BGMMs). We determine the new query point location by optimizing a closed-form information-density cost based on the quadratic Rényi entropy. Furthermore, to better represent uncertain regions and to avoid the problem of local optima, we propose to approximate the active learning cost with a Gaussian mixture model (GMM). We demonstrate our active learning framework in the context of a reaching task in a cluttered environment, with an illustrative toy example and a real experiment on a Panda robot.
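The abstract mentions a closed-form information-density cost based on the quadratic Rényi entropy. While the paper's full cost is not reproduced here, the key enabling fact is standard: for a Gaussian mixture p(x) = Σ_i w_i N(x | μ_i, Σ_i), the quadratic Rényi entropy H₂(p) = −log ∫ p(x)² dx has a closed form, since the integral of a product of two Gaussians is itself a Gaussian density evaluation. A minimal sketch (function name and inputs are illustrative, not the authors' implementation):

```python
import numpy as np
from scipy.stats import multivariate_normal

def renyi2_entropy_gmm(weights, means, covs):
    """Closed-form quadratic Renyi entropy of a GMM.

    Uses the identity  int N(x|m_i,S_i) N(x|m_j,S_j) dx = N(m_i | m_j, S_i + S_j),
    so  int p(x)^2 dx = sum_ij w_i w_j N(m_i | m_j, S_i + S_j).
    """
    weights = np.asarray(weights, dtype=float)
    means = np.asarray(means, dtype=float)
    covs = np.asarray(covs, dtype=float)
    total = 0.0
    for i in range(len(weights)):
        for j in range(len(weights)):
            # Gaussian density of mu_i under N(mu_j, Sigma_i + Sigma_j)
            total += weights[i] * weights[j] * multivariate_normal.pdf(
                means[i], mean=means[j], cov=covs[i] + covs[j]
            )
    return -np.log(total)

# Example: a single standard Gaussian in 1D, for which
# H2 = 0.5 * log(4 * pi) by direct integration.
h2 = renyi2_entropy_gmm([1.0], [[0.0]], [[[1.0]]])
```

Because the entropy of a GMM is available in closed form, an information-density cost built from it can be optimized without Monte Carlo integration, which is presumably what makes the approach practical for selecting query points online.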

Files (3.1 MB)
                  All versions   This version
Views             17             17
Downloads         14             14
Data volume       42.9 MB        42.9 MB
Unique views      16             16
Unique downloads  13             13
