
Dataset Open Access

Written and spoken digits database for multimodal learning

Khacef, Lyes; Rodriguez, Laurent; Miramond, Benoit

Database description:

The written and spoken digits database is not a new database: it is constructed from two existing databases in order to provide a ready-to-use database for multimodal fusion.

The written digits database is the original MNIST handwritten digits database [1] with no additional processing. It consists of 70000 images (60000 for training and 10000 for test) of 28 x 28 = 784 dimensions.

The spoken digits database was extracted from Google Speech Commands [2], an audio dataset of spoken words proposed to train and evaluate keyword spotting systems. It consists of 105829 utterances of 35 words, among which 38908 utterances of the ten digits (34801 for training and 4107 for test). Pre-processing consisted of extracting the Mel Frequency Cepstral Coefficients (MFCC) with a framing window size of 50 ms and a frame shift of 25 ms. Since the speech samples are approximately 1 s long, this yields 39 time slots. For each slot, we extract 12 MFCC coefficients plus an additional energy coefficient, giving a final vector of 39 x 13 = 507 dimensions. Standardization and normalization were applied to the MFCC features.
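As a purely illustrative sketch of this pre-processing, the snippet below computes 13 coefficients per frame (an energy-like 0th coefficient plus 12 cepstral coefficients) with a 50 ms window and 25 ms shift, assuming librosa and 16 kHz, 1-second Speech Commands clips; the authors' actual toolchain and exact standardization/normalization scheme are not specified here, so those details are assumptions.

```python
# Illustrative sketch only: librosa, the 16 kHz sample rate and the
# per-coefficient standardization are assumptions, not the authors' pipeline.
import librosa

def spoken_digit_features(wav_path, sr=16000, duration=1.0):
    # Load the clip and pad/trim it to exactly 1 s (16000 samples at 16 kHz).
    y, _ = librosa.load(wav_path, sr=sr)
    y = librosa.util.fix_length(y, size=int(sr * duration))

    # 50 ms analysis window, 25 ms shift; with center=False a 1 s signal
    # gives 1 + (16000 - 800) // 400 = 39 frames, as in the description.
    n_fft = int(0.050 * sr)   # 800 samples
    hop = int(0.025 * sr)     # 400 samples
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                                n_fft=n_fft, hop_length=hop, center=False)

    # mfcc has shape (13, 39); coefficient 0 serves as the energy-like term.
    # Standardize each coefficient across time (assumed scheme), then flatten
    # time-major to the 39 x 13 = 507-dimensional vector.
    mfcc = (mfcc - mfcc.mean(axis=1, keepdims=True)) / \
           (mfcc.std(axis=1, keepdims=True) + 1e-8)
    return mfcc.T.reshape(-1)   # shape (507,)
```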

To construct the multimodal digits dataset, we associated written and spoken digits of the same class, respecting the initial training/test partitioning of [1] and [2]. Since we have fewer spoken samples, we randomly duplicated some of them to match the number of written digits and obtain a multimodal digits database of 70000 samples (60000 for training and 10000 for test).
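A minimal sketch of this class-wise association is given below; the function name and the use of sampling with replacement to realize the random duplication are illustrative assumptions, not the authors' exact procedure.

```python
# Illustrative sketch: pair each written digit with a spoken digit of the same
# class, duplicating random spoken samples (replace=True) since they are fewer.
import numpy as np

def pair_modalities(wr_data, wr_labels, sp_data, sp_labels, seed=0):
    rng = np.random.default_rng(seed)
    sp_paired = np.empty((len(wr_data), sp_data.shape[1]), dtype=sp_data.dtype)
    for digit in range(10):
        wr_idx = np.flatnonzero(wr_labels == digit)
        sp_idx = np.flatnonzero(sp_labels == digit)
        chosen = rng.choice(sp_idx, size=len(wr_idx), replace=True)
        sp_paired[wr_idx] = sp_data[chosen]
    return wr_data, sp_paired, wr_labels
```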

The dataset is provided in six files, described below. Because the written digits, spoken digits and labels are stored in separate files, any shuffle of the training or test subset must be performed in unison, i.e. with the same permutation applied to the written digits, spoken digits and labels (see the loading and shuffling sketch after the file list).

 

Files:

  • data_wr_train.npy: 60000 samples of 784-dimensional written digits for training;
  • data_sp_train.npy: 60000 samples of 507-dimensional spoken digits for training;
  • labels_train.npy: 60000 labels for the training subset;
  • data_wr_test.npy: 10000 samples of 784-dimensional written digits for test;
  • data_sp_test.npy: 10000 samples of 507-dimensional spoken digits for test;
  • labels_test.npy: 10000 labels for the test subset.
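
Below is a minimal loading and shuffling sketch, assuming the six .npy files sit in the working directory; the array shapes in the comments follow the description above.

```python
# Load the training subset and shuffle it in unison: one permutation applied
# to all three arrays keeps written digits, spoken digits and labels aligned.
import numpy as np

data_wr_train = np.load("data_wr_train.npy")   # (60000, 784)
data_sp_train = np.load("data_sp_train.npy")   # (60000, 507)
labels_train = np.load("labels_train.npy")     # (60000,)

perm = np.random.default_rng(0).permutation(len(labels_train))
data_wr_train = data_wr_train[perm]
data_sp_train = data_sp_train[perm]
labels_train = labels_train[perm]
```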

 

References:

  1. LeCun, Y. & Cortes, C. (1998), “MNIST handwritten digit database”.
  2. Warden, P. (2018), “Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition”.

Files (723.5 MB):

  • data_sp_test.npy: 40.6 MB (md5:ada9d8f678eb6a0450ce29f23e751a18)
  • data_sp_train.npy: 243.4 MB (md5:3d5db0abf36233dcff708fd922695a2a)
  • data_wr_test.npy: 62.7 MB (md5:4b100bccd73e3f97ff4b80c9c47c25bc)
  • data_wr_train.npy: 376.3 MB (md5:a15c809e13d9e084b66222ec08192773)
  • labels_test.npy: 80.1 kB (md5:20340fa279db76c71b53b44be1787da7)
  • labels_train.npy: 480.1 kB (md5:885246e8886e86f0015cc378d413990f)