Journal article Open Access

Surgical Hand Gesture Prediction for the Operating Room

Inna Skarga-Bandurova; Rostislav Siriak; Tetiana Biloborodova; Fabio Cuzzolin; Vivek Singh Bawa; Mohamed Ibrahim Mohamed; R Dinesh Jackson


MARC21 XML Export

<?xml version='1.0' encoding='UTF-8'?>
<record xmlns="http://www.loc.gov/MARC21/slim">
  <leader>00000nam##2200000uu#4500</leader>
  <datafield tag="041" ind1=" " ind2=" ">
    <subfield code="a">eng</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">surgical robot</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">GestureConvLSTM</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">ConvLSTM</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">operating room</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">prediction</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">surgeon</subfield>
  </datafield>
  <controlfield tag="005">20210128002727.0</controlfield>
  <controlfield tag="001">4471560</controlfield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">OBU</subfield>
    <subfield code="a">Rostislav Siriak</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">OBU</subfield>
    <subfield code="a">Tetiana Biloborodova</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">OBU</subfield>
    <subfield code="a">Fabio Cuzzolin</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">OBU</subfield>
    <subfield code="a">Vivek Singh Bawa</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">OBU</subfield>
    <subfield code="a">Mohamed Ibrahim Mohamed</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">OBU</subfield>
    <subfield code="a">R Dinesh Jackson</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">880657</subfield>
    <subfield code="z">md5:f88d3491a04fbbec13e6deffc8b40b66</subfield>
    <subfield code="u">https://zenodo.org/record/4471560/files/20-09_Surgical Hand Gesture Prediction for the Operating Room.pdf</subfield>
  </datafield>
  <datafield tag="542" ind1=" " ind2=" ">
    <subfield code="l">open</subfield>
  </datafield>
  <datafield tag="260" ind1=" " ind2=" ">
    <subfield code="c">2020-09-04</subfield>
  </datafield>
  <datafield tag="909" ind1="C" ind2="O">
    <subfield code="p">openaire</subfield>
    <subfield code="p">user-saras-project</subfield>
    <subfield code="o">oai:zenodo.org:4471560</subfield>
  </datafield>
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="u">OBU</subfield>
    <subfield code="a">Inna Skarga-Bandurova</subfield>
  </datafield>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">Surgical Hand Gesture Prediction for the Operating Room</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">user-saras-project</subfield>
  </datafield>
  <datafield tag="536" ind1=" " ind2=" ">
    <subfield code="c">779813</subfield>
    <subfield code="a">Smart Autonomous Robotic Assistant Surgeon</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
    <subfield code="u">https://creativecommons.org/licenses/by/4.0/legalcode</subfield>
    <subfield code="a">Creative Commons Attribution 4.0 International</subfield>
  </datafield>
  <datafield tag="650" ind1="1" ind2="7">
    <subfield code="a">cc-by</subfield>
    <subfield code="2">opendefinition.org</subfield>
  </datafield>
  <datafield tag="520" ind1=" " ind2=" ">
    <subfield code="a">&lt;p&gt;Technological advancements in smart assistive technology enable navigating and manipulating various types of computer-aided devices in the operating room through a contactless gesture interface. Understanding surgeon actions is crucial to natural human-robot interaction in the operating room, since it amounts to predicting human behavior: the robot can foresee the surgeon&amp;#39;s intention, choose an appropriate action early, and reduce waiting time. In this paper, we present a new deep network based on Convolutional Long Short-Term Memory (ConvLSTM) for gesture prediction, configured to provide natural interaction between the surgeon and an assistive robot and to improve operating-room efficiency. The experimental results demonstrate the capability of reliably recognizing unfinished gestures in video. We quantitatively demonstrate this ability and show that GestureConvLSTM improves on the baseline system&amp;#39;s performance on the LSA64 dataset.&lt;/p&gt;</subfield>
  </datafield>
  <datafield tag="024" ind1=" " ind2=" ">
    <subfield code="a">10.3233/SHTI200621</subfield>
    <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">publication</subfield>
    <subfield code="b">article</subfield>
  </datafield>
</record>
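The abstract describes a ConvLSTM-based network for predicting unfinished gestures, but the record itself contains no code. Purely as an illustration, here is a minimal numpy sketch of a single ConvLSTM cell step in the standard Shi et al. (2015) formulation (peephole connections omitted); all function names, parameter names, and shapes are assumptions for the sketch, not the authors' GestureConvLSTM implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv2d(x, w):
    """Naive 'same'-padded 2D convolution. x: (C_in, H, W), w: (C_out, C_in, k, k)."""
    c_out, c_in, k, _ = w.shape
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    H, W = x.shape[1], x.shape[2]
    out = np.empty((c_out, H, W))
    for i in range(H):
        for j in range(W):
            # Contract the (C_in, k, k) patch against every output filter.
            out[:, i, j] = np.tensordot(w, xp[:, i:i + k, j:j + k], axes=3)
    return out

def convlstm_step(x, h, c, p):
    """One ConvLSTM time step: LSTM gates computed by convolutions, not matmuls."""
    i = sigmoid(conv2d(x, p["Wxi"]) + conv2d(h, p["Whi"]) + p["bi"])
    f = sigmoid(conv2d(x, p["Wxf"]) + conv2d(h, p["Whf"]) + p["bf"])
    o = sigmoid(conv2d(x, p["Wxo"]) + conv2d(h, p["Who"]) + p["bo"])
    g = np.tanh(conv2d(x, p["Wxg"]) + conv2d(h, p["Whg"]) + p["bg"])
    c_new = f * c + i * g            # memory cell is a spatial feature map
    h_new = o * np.tanh(c_new)       # hidden state keeps spatial structure too
    return h_new, c_new

def init_params(c_in, hidden, k=3, seed=0):
    """Random small weights for the four gates (i, f, o, g)."""
    rng = np.random.default_rng(seed)
    p = {}
    for gate in "ifog":
        p[f"Wx{gate}"] = 0.1 * rng.standard_normal((hidden, c_in, k, k))
        p[f"Wh{gate}"] = 0.1 * rng.standard_normal((hidden, hidden, k, k))
        p[f"b{gate}"] = np.zeros((hidden, 1, 1))
    return p

# Run a short frame sequence through the cell; in a gesture-prediction setting
# the final hidden map would feed a classifier head that can fire before the
# gesture completes.
rng = np.random.default_rng(1)
params = init_params(c_in=1, hidden=4)
h = np.zeros((4, 8, 8))
c = np.zeros((4, 8, 8))
for t in range(5):                   # 5 toy video frames of size 8x8
    frame = rng.standard_normal((1, 8, 8))
    h, c = convlstm_step(frame, h, c, params)
print(h.shape)  # (4, 8, 8)
```

The design point the abstract relies on is that, unlike a plain LSTM on flattened frames, the convolutional gates preserve the spatial layout of the hand in every state tensor while the recurrence accumulates motion over time.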