Journal article Open Access

Model-Based Reinforcement Learning for Closed-Loop Dynamic Control of Soft Robotic Manipulators

Thuruthel, Thomas George; Falotico, Egidio; Renda, Federico; Laschi, Cecilia


MARC21 XML Export

<?xml version='1.0' encoding='UTF-8'?>
<record xmlns="http://www.loc.gov/MARC21/slim">
  <leader>00000nam##2200000uu#4500</leader>
  <datafield tag="540" ind1=" " ind2=" ">
    <subfield code="u">https://creativecommons.org/licenses/by/4.0/legalcode</subfield>
    <subfield code="a">Creative Commons Attribution 4.0 International</subfield>
  </datafield>
  <datafield tag="260" ind1=" " ind2=" ">
    <subfield code="c">2018-11-12</subfield>
  </datafield>
  <controlfield tag="005">20200421202018.0</controlfield>
  <controlfield tag="001">3759636</controlfield>
  <datafield tag="909" ind1="C" ind2="O">
    <subfield code="p">openaire</subfield>
    <subfield code="o">oai:zenodo.org:3759636</subfield>
  </datafield>
  <datafield tag="520" ind1=" " ind2=" ">
    <subfield code="a">&lt;p&gt;Dynamic control of soft robotic manipulators is an open problem yet to be well explored and analyzed. Most of the current applications of soft robotic manipulators utilize static or quasi-dynamic controllers based on kinematic models or linearity in the joint space. However, such approaches are not truly exploiting the rich dynamics of a soft-bodied system. In this paper, we present a model-based policy learning algorithm for closed-loop predictive control of a soft robotic manipulator. The forward dynamic model is represented using a recurrent neural network. The closed-loop policy is derived using trajectory optimization and supervised learning. The approach is verified first on a simulated piecewise constant strain model of a cable driven under-actuated soft manipulator. Furthermore, we experimentally demonstrate on a soft pneumatically actuated manipulator how closed-loop control policies can be derived that can accommodate variable frequency control and unmodeled external loads.&lt;/p&gt;</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, Italy</subfield>
    <subfield code="a">Falotico, Egidio</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">Department of Mechanical Engineering and the Center for Autonomous Robotics Systems, Khalifa University of Science and Technology, Abu Dhabi, United Arab Emirates</subfield>
    <subfield code="a">Renda, Federico</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, Italy</subfield>
    <subfield code="a">Laschi, Cecilia</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">3478695</subfield>
    <subfield code="z">md5:d2beb3db830b6d9801984f472e3b22ac</subfield>
    <subfield code="u">https://zenodo.org/record/3759636/files/thuruthel2018model.pdf</subfield>
  </datafield>
  <datafield tag="542" ind1=" " ind2=" ">
    <subfield code="l">open</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">publication</subfield>
    <subfield code="b">article</subfield>
  </datafield>
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="u">BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, Italy</subfield>
    <subfield code="a">Thuruthel, Thomas George</subfield>
  </datafield>
  <datafield tag="024" ind1=" " ind2=" ">
    <subfield code="a">10.1109/TRO.2018.2878318</subfield>
    <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">Model-Based Reinforcement Learning for Closed-Loop Dynamic Control of Soft Robotic Manipulators</subfield>
  </datafield>
  <datafield tag="650" ind1="1" ind2="7">
    <subfield code="a">cc-by</subfield>
    <subfield code="2">opendefinition.org</subfield>
  </datafield>
</record>
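
The abstract describes a three-stage pipeline: learn a forward dynamics model from exploration data, run trajectory optimization through that learned model, and distill the optimized trajectories into a closed-loop policy via supervised learning. The toy script below is a minimal sketch of that pipeline on a hypothetical one-dimensional system; the linear model, random-shooting optimizer, and affine policy are illustrative stand-ins for the paper's recurrent neural network, trajectory optimizer, and learned closed-loop policy, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_dynamics(x, u):
    """Toy stable first-order system standing in for the soft arm."""
    return 0.9 * x + 0.1 * u

# 1) Learn a forward dynamics model from random-exploration data.
#    (Linear least-squares fit here; the paper uses a recurrent neural network.)
xs, us, xns = [], [], []
x = 0.0
for _ in range(500):
    u = rng.uniform(-1, 1)
    xn = true_dynamics(x, u)
    xs.append(x); us.append(u); xns.append(xn)
    x = xn
Phi = np.column_stack([xs, us])                      # features [x, u]
W, *_ = np.linalg.lstsq(Phi, np.array(xns), rcond=None)

def model(x, u):
    """Learned one-step predictor x' ~ W[0]*x + W[1]*u."""
    return W[0] * x + W[1] * u

# 2) Trajectory optimization through the learned model.
#    (Random shooting, a simple stand-in for the paper's optimizer.)
def optimize_first_action(x0, target, horizon=20, samples=256):
    best_u0, best_cost = 0.0, np.inf
    for _ in range(samples):
        seq = rng.uniform(-1, 1, horizon)
        xk, cost = x0, 0.0
        for u in seq:
            xk = model(xk, u)
            cost += (xk - target) ** 2
        if cost < best_cost:
            best_u0, best_cost = seq[0], cost
    return best_u0

# 3) Distill (state -> optimized action) pairs into a closed-loop policy
#    by supervised learning (an affine fit here).
target = 1.0
S, A = [], []
for _ in range(50):
    x0 = rng.uniform(-1, 1)
    S.append([x0, 1.0])                              # state plus bias feature
    A.append(optimize_first_action(x0, target))
theta, *_ = np.linalg.lstsq(np.array(S), np.array(A), rcond=None)

def policy(x):
    return float(np.clip(theta[0] * x + theta[1], -1.0, 1.0))

# Closed-loop rollout on the "real" system.
x = 0.0
for _ in range(100):
    x = true_dynamics(x, policy(x))
print(f"final state: {x:.2f} (target {target})")
```

Because the distilled policy maps the current state directly to an action, it can be queried at whatever rate the control loop runs and can react to disturbances, which is the property the abstract highlights for variable-frequency control and unmodeled external loads.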