Published February 28, 2025 | Version v1 | Publication | Open Access
Enhancing Explainability with Multimodal Context Representations for Smarter Robots
Description
Viswanath, A., Veeramacheneni, L., & Buschmeier, H. (2025). Enhancing Explainability with Multimodal Context Representations for Smarter Robots. Papers of the 3rd Workshop on Explainability in Human-Robot Collaboration at HRI ’25, Melbourne, Australia. https://doi.org/10.5281/zenodo.14930029
Abstract: Artificial Intelligence (AI) has advanced significantly in recent years, driving innovation across many fields, especially robotics. Even though robots can perform complex tasks with increasing autonomy, challenges remain in ensuring explainability and user-centered design for effective interaction. A key issue in Human-Robot Interaction (HRI) is enabling robots to effectively perceive and reason over multimodal inputs, such as audio and vision, to foster trust and seamless collaboration. In this paper, we propose a generalized and explainable multimodal framework for context representation, designed to improve the fusion of speech and vision modalities. We introduce a use case for assessing ‘Relevance’ between the user’s verbal utterances and the robot’s visual scene perception. We present our methodology, comprising a Multimodal Joint Representation module and a Temporal Alignment module, which allow robots to evaluate relevance by temporally aligning multimodal inputs. Finally, we discuss how the proposed framework for context representation can support various aspects of explainability in HRI.
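
As a rough illustration of how such a pipeline might be wired together, the sketch below (PyTorch) projects speech and vision features into a shared embedding space, aligns the vision stream to the speech time axis, and scores relevance as mean cosine similarity. All module names, feature dimensions, and the interpolation-based alignment are assumptions made here for illustration; this is not the implementation described in the paper.

# Minimal illustrative sketch: joint speech-vision representation, temporal
# alignment, and a cosine-similarity "relevance" score. All design choices
# here are assumptions for illustration, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointRepresentation(nn.Module):
    """Projects per-timestep speech and vision features into a shared space."""
    def __init__(self, speech_dim=128, vision_dim=512, joint_dim=256):
        super().__init__()
        self.speech_proj = nn.Linear(speech_dim, joint_dim)
        self.vision_proj = nn.Linear(vision_dim, joint_dim)

    def forward(self, speech, vision):
        # speech: (T_s, speech_dim), vision: (T_v, vision_dim)
        return self.speech_proj(speech), self.vision_proj(vision)

def temporal_alignment(speech_emb, vision_emb):
    """Aligns the vision stream to the speech time axis by linear interpolation."""
    # (T_v, D) -> (1, D, T_v) so interpolation runs over the time dimension
    vision_t = vision_emb.T.unsqueeze(0)
    aligned = F.interpolate(vision_t, size=speech_emb.shape[0],
                            mode="linear", align_corners=False)
    return aligned.squeeze(0).T  # (T_s, D)

def relevance_score(speech_emb, aligned_vision_emb):
    """Mean cosine similarity between temporally aligned embeddings."""
    return F.cosine_similarity(speech_emb, aligned_vision_emb, dim=-1).mean()

# Usage with dummy features (e.g., from a speech encoder and a vision backbone)
joint = JointRepresentation()
speech_feats = torch.randn(50, 128)   # 50 speech frames
vision_feats = torch.randn(30, 512)   # 30 video frames
s_emb, v_emb = joint(speech_feats, vision_feats)
v_aligned = temporal_alignment(s_emb, v_emb)
print(f"relevance: {relevance_score(s_emb, v_aligned).item():.3f}")

The interpolation step is only one simple way to bring the two streams onto a common time axis; a learned alignment (e.g., cross-attention over timestamps) could take its place, and the cosine score is merely a stand-in proxy for the paper’s notion of ‘Relevance’.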
Files
xhri2025-paper-FINAL.pdf (1.5 MB)
md5: f656a5ab609af2fb0968f65391a8dd4a