Journal article Open Access

Model-Based Reinforcement Learning for Closed-Loop Dynamic Control of Soft Robotic Manipulators

Thuruthel, Thomas George; Falotico, Egidio; Renda, Federico; Laschi, Cecilia

Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:creator>Thuruthel, Thomas George</dc:creator>
  <dc:creator>Falotico, Egidio</dc:creator>
  <dc:creator>Renda, Federico</dc:creator>
  <dc:creator>Laschi, Cecilia</dc:creator>
  <dc:description>Dynamic control of soft robotic manipulators is an open problem yet to be well explored and analyzed. Most current applications of soft robotic manipulators use static or quasi-dynamic controllers based on kinematic models or on linearity in the joint space. However, such approaches do not truly exploit the rich dynamics of a soft-bodied system. In this paper, we present a model-based policy learning algorithm for closed-loop predictive control of a soft robotic manipulator. The forward dynamic model is represented by a recurrent neural network. The closed-loop policy is derived using trajectory optimization and supervised learning. The approach is verified first on a simulated piecewise constant strain model of a cable-driven, under-actuated soft manipulator. Furthermore, we demonstrate experimentally on a soft pneumatically actuated manipulator how closed-loop control policies can be derived to accommodate variable-frequency control and unmodeled external loads.</dc:description>
  <dc:title>Model-Based Reinforcement Learning for Closed-Loop Dynamic Control of Soft Robotic Manipulators</dc:title>
</oai_dc:dc>
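The abstract describes a three-step pipeline: learn a recurrent forward dynamics model, plan with trajectory optimization through that model, and distill the resulting open-loop solutions into a closed-loop policy by supervised learning. The following is a minimal numpy sketch of that pipeline, not the authors' implementation: a toy recurrent map stands in for the paper's RNN, random-shooting stands in for their trajectory optimizer, and a linear least-squares fit stands in for the supervised policy network. All function and variable names (`forward_model`, `trajectory_optimize`, `policy`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy recurrent forward model h' = tanh(Wh h + Wu u): a stand-in for
# the learned RNN dynamics model (4-D state, 2-D actuation input).
Wh = rng.normal(scale=0.5, size=(4, 4))
Wu = rng.normal(scale=0.5, size=(4, 2))

def forward_model(h, u):
    return np.tanh(Wh @ h + Wu @ u)

def rollout_cost(h0, us, target):
    # Simulate the action sequence through the model; cost is the
    # squared distance of the final state from the target.
    h = h0
    for u in us:
        h = forward_model(h, u)
    return np.sum((h - target) ** 2)

def trajectory_optimize(h0, target, horizon=5, samples=256):
    # Random-shooting trajectory optimization: sample candidate action
    # sequences and keep the lowest-cost one.
    best_us, best_c = None, np.inf
    for _ in range(samples):
        us = rng.uniform(-1.0, 1.0, size=(horizon, 2))
        c = rollout_cost(h0, us, target)
        if c < best_c:
            best_us, best_c = us, c
    return best_us

# Supervised-learning step: collect (state, target) -> first planned
# action pairs, then fit a linear closed-loop policy by least squares.
X, Y = [], []
for _ in range(50):
    h0 = rng.normal(size=4)
    tgt = rng.normal(size=4)
    us = trajectory_optimize(h0, tgt)
    X.append(np.concatenate([h0, tgt]))
    Y.append(us[0])
X, Y = np.array(X), np.array(Y)
K, *_ = np.linalg.lstsq(X, Y, rcond=None)

def policy(h, target):
    # Closed-loop policy: maps the current state and target directly
    # to an action, replanning implicitly at every control step.
    return np.concatenate([h, target]) @ K
```

Because the policy is queried anew at every step from the measured state, it operates closed-loop, which is what lets this class of controller tolerate disturbances such as the unmodeled external loads mentioned in the abstract.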