
Published March 3, 2023 | Version v1
Dataset | Open Access

EarSet: A Multi-Modal In-Ear Dataset

  • 1. Nokia Bell Labs, University of Cambridge
  • 2. Nokia Bell Labs
  • 3. National University of Singapore
  • 4. University of Cambridge

Description

EarSet provides the research community with a novel multi-modal dataset that, for the first time, allows studying the impact of body and head/face movements both on the morphology of the PPG wave captured at the ear and on vital-signs estimation. To accurately collect in-ear PPG data coupled with a 6 degrees-of-freedom (DoF) motion signature, we prototyped and built a flexible research platform for in-ear data collection. The platform is centered on a novel ear-tip design that co-locates a 3-channel PPG sensor (green, red, infrared) and a 6-axis motion sensor (IMU: accelerometer, gyroscope) on the same ear-tip. This allows the simultaneous collection of spatially distant PPG data at multiple wavelengths (one tip in the left ear and one in the right) together with the corresponding motion signature, for a total of 18 data streams.
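The stream count above (two ear-tips, each carrying a 3-wavelength PPG and a 6-axis IMU) can be sketched as follows. The stream names here are illustrative assumptions, not the actual field names used inside EarSet.zip:

```python
# Hypothetical stream naming -- the real identifiers in EarSet.zip may differ.
PPG_WAVELENGTHS = ["green", "red", "infrared"]                        # 3-channel PPG per ear-tip
IMU_AXES = ["acc_x", "acc_y", "acc_z", "gyro_x", "gyro_y", "gyro_z"]  # 6-axis IMU per ear-tip

def enumerate_streams():
    """Enumerate the 18 simultaneous data streams described above:
    2 ear-tips x (3 PPG channels + 6 IMU axes)."""
    streams = []
    for ear in ("left", "right"):
        for wl in PPG_WAVELENGTHS:
            streams.append(f"{ear}_ppg_{wl}")
        for axis in IMU_AXES:
            streams.append(f"{ear}_imu_{axis}")
    return streams

print(len(enumerate_streams()))  # 18
```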
Inspired by the Facial Action Coding System (FACS), we consider a set of potential sources of motion artifacts (MA) caused by natural facial and head movements. Specifically, we gather data on 16 different head and facial motions: head movements (nodding, shaking, tilting), eye movements (vertical eye movements, horizontal eye movements, brow raiser, brow lowerer, right eye wink, left eye wink), and mouth movements (lip puller, chin raiser, mouth stretch, speaking, chewing).
We also collect motion and PPG data during activities of different intensities that involve movement of the entire body (walking and running). Together with in-ear PPG and IMU data, we collect several vital signs, including heart rate, heart rate variability, breathing rate, and raw ECG, from a medical-grade chest device.

With approximately 17 hours of data from 30 participants of mixed gender and ethnicity (mean age: 28.9 years, standard deviation: 6.11 years), our dataset enables the research community to analyze the morphological characteristics of in-ear PPG signals with respect to motion, device positioning (left ear, right ear), and a set of configuration parameters with their corresponding data-quality/power-consumption trade-offs. We envision that such a dataset could open the door to innovative filtering techniques that mitigate, and eventually eliminate, the impact of MA on in-ear PPG. We ran a set of preliminary analyses on the data, considering both handcrafted features and a deep neural network (DNN) approach. We observe statistically significant morphological differences in the PPG signal across different types of motion when compared to a no-motion baseline. We also discuss a 3-class classification task and show how full-body motions and head/face motions can be discriminated from a still baseline (and from each other). These preliminary results represent a first step towards the detection of corrupted PPG segments and show the importance of studying how head/face movements impact PPG signals in the ear.
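The 3-class grouping used in the preliminary analysis (still baseline vs. head/face motions vs. full-body motions) can be sketched as a simple labeling function. The activity identifiers below are hypothetical and may not match the names used in the dataset files:

```python
# Illustrative activity labels -- assumed names, not the dataset's actual identifiers.
HEAD_FACE = {
    "nodding", "shaking", "tilting",
    "vertical_eye", "horizontal_eye", "brow_raiser", "brow_lowerer",
    "right_wink", "left_wink",
    "lip_puller", "chin_raiser", "mouth_stretch", "speaking", "chewing",
}
FULL_BODY = {"walking", "running"}

def motion_class(activity: str) -> str:
    """Map a recorded activity to one of the three classes discussed above."""
    if activity in FULL_BODY:
        return "full_body"
    if activity in HEAD_FACE:
        return "head_face"
    return "still"
```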

To the best of our knowledge, this is the first in-ear PPG dataset that covers a wide range of full-body and head/facial motion artifacts. Being able to study the signal quality and motion artifacts under such circumstances will serve as a reference for future research in the field, acting as a stepping stone to fully enable PPG-equipped earables.

Files

EarSet.zip (325.2 MB)
md5:a02c55d954f62f683a948f487d4321c6