Conference paper Open Access
Theodoros Giannakopoulos; Georgios Siantikos
Research on robot perception focuses mostly on visual information analytics, while audio-based perception is largely limited to speech-related information. However, the non-verbal content of the audio channel can be equally important in the perception procedure, or at least play a complementary role. This paper presents a framework for audio signal analysis that follows the ROS architectural principles. Design and implementation details of this workflow are described, and classification results are presented for two use cases motivated by the task of medical monitoring. The proposed audio analysis framework is provided as an open-source library on GitHub (https://github.com/tyiannak/AUROS).
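To illustrate how such a ROS-based audio analysis workflow can be organized, the sketch below shows a minimal classification node. It is not the AUROS API: the topic name `/audio`, the use of the `audio_common_msgs/AudioData` message, the pickled scikit-learn model file `svm_model.pkl`, and the toy feature extractor are all assumptions made for this example; they only demonstrate the pattern of subscribing to an audio stream, segmenting it, and classifying each segment.

```python
#!/usr/bin/env python
# Minimal sketch of a ROS audio-classification node (illustrative only,
# not the actual AUROS implementation).
# Assumptions: 16-bit PCM audio arrives on the topic "/audio" as
# audio_common_msgs/AudioData, and a scikit-learn classifier has been
# trained offline and pickled to "svm_model.pkl".

import pickle

import numpy as np
import rospy
from audio_common_msgs.msg import AudioData

SAMPLE_RATE = 16000           # assumed capture rate (Hz)
SEGMENT = SAMPLE_RATE * 1     # classify one-second segments


class AudioClassifierNode(object):
    def __init__(self, model_path="svm_model.pkl"):
        with open(model_path, "rb") as f:
            self.model = pickle.load(f)   # e.g. a pre-trained sklearn SVC
        self.buffer = np.array([], dtype=np.float32)
        rospy.Subscriber("/audio", AudioData, self.on_audio)

    def on_audio(self, msg):
        # Convert raw 16-bit PCM bytes to floats in [-1, 1] and append.
        samples = np.frombuffer(bytes(msg.data), dtype=np.int16) / 32768.0
        self.buffer = np.concatenate([self.buffer, samples.astype(np.float32)])
        # Classify every full segment that has accumulated in the buffer.
        while self.buffer.size >= SEGMENT:
            segment, self.buffer = self.buffer[:SEGMENT], self.buffer[SEGMENT:]
            label = self.model.predict([self.features(segment)])[0]
            rospy.loginfo("audio segment classified as: %s", label)

    @staticmethod
    def features(x):
        # Two toy short-term features (energy, zero-crossing rate) used as
        # placeholders for the feature vectors computed by a real pipeline.
        energy = float(np.mean(x ** 2))
        zcr = float(np.mean(np.abs(np.diff(np.sign(x)))) / 2.0)
        return [energy, zcr]


if __name__ == "__main__":
    rospy.init_node("audio_classifier")
    AudioClassifierNode()
    rospy.spin()
```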
Name | MD5 | Size
---|---|---
CONF43.pdf | bc7fb57e5773ebafd7ee80f597c903bf | 431.9 kB