Published August 14, 2023 | Version 1.0
Dataset | Open Access

TongueTap: Multimodal Tongue Gesture Recognition with Head-Worn Devices

Affiliations

  1. Georgia Institute of Technology
  2. Microsoft Research
  3. Microsoft

Description

Please cite the primary paper at https://doi.org/10.1145/3577190.3614120 when referencing this dataset.

This dataset contains multimodal tongue gesture data collected for "TongueTap: Multimodal Tongue Gesture Recognition with Head-Worn Devices," published at ICMI (International Conference on Multimodal Interfaces) 2023. The data are provided in three formats (XDF, pickle, and NumPy), corresponding to successive stages of pre-processing; a loading sketch follows below. Please review the READMEs included in the archive before working with the files, and see the paper at https://doi.org/10.1145/3577190.3614120 for details on the data and how it was collected.
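The READMEs document the exact contents of each file; as a rough orientation only, the sketch below shows how each of the three formats can be read in Python. The file names are hypothetical placeholders, and reading XDF requires the third-party pyxdf package (pip install pyxdf).

    import pickle
    import numpy as np
    import pyxdf  # third-party: pip install pyxdf

    # XDF: synchronized recordings, one stream per sensor/modality.
    streams, header = pyxdf.load_xdf("session.xdf")  # hypothetical file name
    for stream in streams:
        print(stream["info"]["name"][0], np.asarray(stream["time_series"]).shape)

    # Pickle: intermediate pre-processed Python objects.
    with open("session.pkl", "rb") as f:  # hypothetical file name
        session = pickle.load(f)

    # NumPy: array-form data at the final pre-processing stage.
    trials = np.load("trials.npy")  # hypothetical file name
    print(trials.shape)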

Abstract

Mouth-based interfaces are a promising new approach enabling silent, hands-free and eyes-free interaction with wearable devices. However, interfaces sensing mouth movements are traditionally custom-designed and placed near or within the mouth. TongueTap synchronizes multimodal EEG, PPG, IMU, eye tracking and head tracking data from two commercial headsets to facilitate tongue gesture recognition using only off-the-shelf devices on the upper face. We classified eight closed-mouth tongue gestures with 94% accuracy, offering an invisible and inaudible method for discreet control of head-worn devices. Moreover, we found that the IMU alone differentiates eight gestures with 80% accuracy and a subset of four gestures with 92% accuracy. We built a dataset of 48,000 gesture trials across 16 participants, allowing TongueTap to perform user-independent classification. Our findings suggest tongue gestures can be a viable interaction technique for VR/AR headsets and earables without requiring novel hardware.
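The abstract's user-independent result means the classifier is evaluated on participants it never saw during training. A minimal sketch of that protocol, using leave-one-participant-out cross-validation in scikit-learn, is shown below; the arrays, feature dimensionality, and classifier are placeholders standing in for the features and models described in the paper, not the paper's actual pipeline.

    import numpy as np
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Placeholder data: one feature vector per gesture trial, an 8-class
    # gesture label, and a participant ID per trial (16 participants).
    # The real dataset has 48,000 trials; tiny random arrays keep this fast.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1600, 64))
    y = rng.integers(0, 8, size=1600)
    groups = np.repeat(np.arange(16), 100)

    # Each fold trains on 15 participants and tests on the held-out one,
    # so the score reflects generalization to unseen users.
    clf = make_pipeline(StandardScaler(), SVC())
    scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
    print(f"mean user-independent accuracy: {scores.mean():.2f}")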

Files

TongueTap Dataset.zip (2.8 GB)
md5:d14e4b30060cd74ce4e9bc86e62f3856
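After downloading, the archive can be checked against the md5 listed above before unpacking; a minimal Python sketch:

    import hashlib

    EXPECTED = "d14e4b30060cd74ce4e9bc86e62f3856"  # checksum listed above

    # Hash the archive in 1 MiB chunks to keep memory use low.
    md5 = hashlib.md5()
    with open("TongueTap Dataset.zip", "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            md5.update(chunk)

    assert md5.hexdigest() == EXPECTED, "checksum mismatch: re-download the archive"
    print("checksum OK")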