Conference paper Open Access

A ROS Framework for Audio-Based Activity Recognition

Theodoros Giannakopoulos; Georgios Siantikos


MARC21 XML Export

<?xml version='1.0' encoding='UTF-8'?>
<record xmlns="http://www.loc.gov/MARC21/slim">
  <leader>00000nam##2200000uu#4500</leader>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">ROS</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">audio analysis</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">open-source</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">audio segmentation</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">audio classification</subfield>
  </datafield>
  <controlfield tag="005">20190410035144.0</controlfield>
  <controlfield tag="001">221421</controlfield>
  <datafield tag="711" ind1=" " ind2=" ">
    <subfield code="g">PETRA 2016</subfield>
    <subfield code="a">9th ACM International Conference on PErvasive Technologies Related to Assistive Environments</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">NCSR Demokritos</subfield>
    <subfield code="a">Georgios Siantikos</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">431876</subfield>
    <subfield code="z">md5:bc7fb57e5773ebafd7ee80f597c903bf</subfield>
    <subfield code="u">https://zenodo.org/record/221421/files/CONF43.pdf</subfield>
  </datafield>
  <datafield tag="542" ind1=" " ind2=" ">
    <subfield code="l">open</subfield>
  </datafield>
  <datafield tag="260" ind1=" " ind2=" ">
    <subfield code="c">2016-07-01</subfield>
  </datafield>
  <datafield tag="909" ind1="C" ind2="O">
    <subfield code="p">openaire</subfield>
    <subfield code="p">user-radio</subfield>
    <subfield code="o">oai:zenodo.org:221421</subfield>
  </datafield>
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="u">NCSR Demokritos</subfield>
    <subfield code="a">Theodoros Giannakopoulos</subfield>
  </datafield>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">A ROS Framework for Audio-Based Activity Recognition</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">user-radio</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
    <subfield code="u">http://creativecommons.org/licenses/by/4.0/legalcode</subfield>
    <subfield code="a">Creative Commons Attribution 4.0 International</subfield>
  </datafield>
  <datafield tag="650" ind1="1" ind2="7">
    <subfield code="a">cc-by</subfield>
    <subfield code="2">opendefinition.org</subfield>
  </datafield>
  <datafield tag="520" ind1=" " ind2=" ">
    <subfield code="a">&lt;p&gt;Research on robot perception mostly focuses on visual information analytics. Audio-based perception is mostly based&lt;br&gt;
on speech-related information. However, non-verbal information of the audio channel can be equally important in the perception procedure, or at least play a complementary role. This paper presents a framework for audio signal analysis that utilizes the ROS architectural principles. Details on the design and implementation issues of this workflow are described, while classification results are also presented in the context of two use-cases motivated by the task of medical monitoring. The proposed audio analysis framework is provided as an open-source library at github (https://github.com/tyiannak/AUROS).&lt;/p&gt;</subfield>
  </datafield>
  <datafield tag="024" ind1=" " ind2=" ">
    <subfield code="a">10.1145/2910674.2935858</subfield>
    <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">publication</subfield>
    <subfield code="b">conferencepaper</subfield>
  </datafield>
</record>
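
The record above follows the MARC21 "slim" XML schema, so the usual bibliographic fields can be pulled out programmatically. The sketch below is only an illustration, assuming the export has been saved locally as `record_221421.xml` (the filename is hypothetical); it reads the title (datafield 245), DOI (024), license (540), and keywords (653) using Python's standard library.

```python
# Minimal sketch: extract a few fields from the MARC21 slim export above.
# Assumption: the XML was saved to "record_221421.xml".
import xml.etree.ElementTree as ET

NS = {"marc": "http://www.loc.gov/MARC21/slim"}

def subfield(root, tag, code):
    """Return the first subfield value for a given datafield tag / subfield code."""
    for field in root.findall(f"marc:datafield[@tag='{tag}']", NS):
        sub = field.find(f"marc:subfield[@code='{code}']", NS)
        if sub is not None and sub.text:
            return sub.text
    return None

root = ET.parse("record_221421.xml").getroot()

title = subfield(root, "245", "a")    # A ROS Framework for Audio-Based Activity Recognition
doi = subfield(root, "024", "a")      # 10.1145/2910674.2935858
licence = subfield(root, "540", "a")  # Creative Commons Attribution 4.0 International
keywords = [
    sub.text
    for field in root.findall("marc:datafield[@tag='653']", NS)
    for sub in field.findall("marc:subfield[@code='a']", NS)
]

print(title, doi, licence, keywords, sep="\n")
```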
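
The abstract describes an audio analysis pipeline built on ROS architectural principles, i.e. audio capture, feature extraction, and classification wired together as nodes exchanging messages over topics. The record does not spell out the node, topic, or message names used by AUROS, so the following rospy sketch only illustrates that general pattern under stated assumptions: raw audio chunks are assumed to arrive as `audio_common_msgs/AudioData` on `/audio`, and a placeholder energy threshold stands in for a trained classifier publishing labels on `/audio_class`.

```python
#!/usr/bin/env python
# Hypothetical sketch of a ROS1 audio-classification node. Topic names,
# message types, and the energy-threshold "classifier" are illustrative
# assumptions, not the AUROS implementation.
import numpy as np
import rospy
from audio_common_msgs.msg import AudioData  # assumed raw-audio source
from std_msgs.msg import String


class AudioClassifierNode(object):
    def __init__(self):
        self.pub = rospy.Publisher("/audio_class", String, queue_size=10)
        rospy.Subscriber("/audio", AudioData, self.on_audio)

    def on_audio(self, msg):
        # Interpret the incoming byte buffer as 16-bit PCM samples (assumption).
        buf = bytes(msg.data)
        buf = buf[: len(buf) - (len(buf) % 2)]
        samples = np.frombuffer(buf, dtype=np.int16).astype(np.float32)
        if samples.size == 0:
            return
        energy = float(np.mean(samples ** 2))
        # Placeholder decision rule standing in for a trained classifier.
        label = "activity" if energy > 1e6 else "silence"
        self.pub.publish(String(data=label))


if __name__ == "__main__":
    rospy.init_node("audio_classifier")
    AudioClassifierNode()
    rospy.spin()
```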
Views 76
Downloads 131
Data volume 56.6 MB
Unique views 75
Unique downloads 125
