Dataset Open Access

# Written and spoken digits database for multimodal learning

Khacef, Lyes; Rodriguez, Laurent; Miramond, Benoit

### Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
<dc:creator>Khacef, Lyes</dc:creator>
<dc:creator>Rodriguez, Laurent</dc:creator>
<dc:creator>Miramond, Benoit</dc:creator>
<dc:date>2020-10-01</dc:date>
<dc:description>Database description:

The written and spoken digits database is not a new database but one constructed from existing databases, in order to provide a ready-to-use dataset for multimodal fusion [1].

The written digits database is the original MNIST handwritten digits database [2] with no additional processing. It consists of 70000 images (60000 for training and 10000 for test) of 28 x 28 = 784 dimensions.

The spoken digits database was extracted from Google Speech Commands [3], an audio dataset of spoken words proposed to train and evaluate keyword spotting systems. It consists of 105829 utterances of 35 words, amongst which 38908 utterances of the ten digits (34801 for training and 4107 for test). Pre-processing consisted of extracting the Mel-Frequency Cepstral Coefficients (MFCC) with a framing window size of 50 ms and a frame shift size of 25 ms. Since each speech sample is approximately 1 s long, this yields 39 time slots. For each slot, we extract 12 MFCC coefficients plus one additional energy coefficient, giving a final vector of 39 x 13 = 507 dimensions. Standardization and normalization were then applied to the MFCC features.
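The frame count above follows directly from the framing parameters. A minimal sketch of that arithmetic, assuming a 16 kHz sample rate (the Speech Commands sampling rate; the rate is not stated in this description):

```python
import numpy as np

# Assumed 16 kHz sample rate; window and shift sizes are those given in
# the description: 50 ms framing window, 25 ms frame shift.
sample_rate = 16000
win = int(0.050 * sample_rate)   # 800 samples per frame
hop = int(0.025 * sample_rate)   # 400 samples between frame starts

signal = np.zeros(sample_rate)   # a dummy 1 s utterance

# Number of full frames that fit in the signal.
n_frames = 1 + (len(signal) - win) // hop
n_coeffs = 13                    # 12 MFCC coefficients + 1 energy coefficient

print(n_frames, n_coeffs, n_frames * n_coeffs)  # 39 13 507
```

This reproduces the 39 time slots and the 507-dimensional feature vector quoted above.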

To construct the multimodal digits dataset, we associated written and spoken digits of the same class, respecting the initial partitioning in [2] and [3] for the training and test subsets. Since we have fewer samples for the spoken digits, we duplicated randomly selected samples to match the number of written digits, yielding a multimodal digits database of 70000 samples (60000 for training and 10000 for test).

The dataset is provided in six files, described below. Note that if the training or test subset is shuffled, the written digits, spoken digits and labels must be shuffled in unison, i.e. with the same permutation, so that the pairing between modalities and labels is preserved.

Files:

data_wr_train.npy: 60000 samples of 784-dimensional written digits for training;
data_sp_train.npy: 60000 samples of 507-dimensional spoken digits for training;
labels_train.npy: 60000 labels for the training subset;
data_wr_test.npy: 10000 samples of 784-dimensional written digits for test;
data_sp_test.npy: 10000 samples of 507-dimensional spoken digits for test;
labels_test.npy: 10000 labels for the test subset.
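A minimal sketch of the unison shuffle described above. Dummy arrays stand in for the real files (which would be loaded with np.load("data_wr_train.npy") etc.), and a smaller sample count is used for brevity; applying one shared permutation to all three arrays keeps each written/spoken pair aligned with its label:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Dummy stand-ins for the dataset files (real subsets have 60000 samples).
n = 1000
data_wr = rng.random((n, 784))            # written digits, 784-dimensional
data_sp = rng.random((n, 507))            # spoken digits, 507-dimensional
labels = rng.integers(0, 10, size=n)      # class labels 0..9

# One permutation, applied identically to all three arrays.
perm = rng.permutation(n)
data_wr, data_sp, labels = data_wr[perm], data_sp[perm], labels[perm]
```

Using three independent shuffles instead would silently break the sample-to-label correspondence, which is why the single shared permutation matters.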

References:

[1] Khacef, L. et al. (2020), "Brain-Inspired Self-Organization with Cellular Neuromorphic Computing for Multimodal Unsupervised Learning".
[2] LeCun, Y. &amp; Cortes, C. (1998), "MNIST handwritten digit database".
[3] Warden, P. (2018), "Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition".
</dc:description>
<dc:identifier>https://zenodo.org/record/4452953</dc:identifier>
<dc:identifier>10.5281/zenodo.4452953</dc:identifier>
<dc:identifier>oai:zenodo.org:4452953</dc:identifier>
<dc:relation>doi:10.5281/zenodo.3515934</dc:relation>
<dc:relation>url:https://zenodo.org/communities/multimodality</dc:relation>
<dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
<dc:subject>MNIST</dc:subject>
<dc:subject>written digits</dc:subject>
<dc:subject>spoken digits</dc:subject>
<dc:subject>multimodal learning</dc:subject>
<dc:title>Written and spoken digits database for multimodal learning</dc:title>
<dc:type>info:eu-repo/semantics/other</dc:type>
<dc:type>dataset</dc:type>
</oai_dc:dc>
