Dataset Open Access

NAIST contribution to ZeroSpeech 2017 (Track 1)

Heck, Michael; Sakti, Sakriani; Nakamura, Satoshi

This is the official submission of NAIST for Track 1 of the Zero Resource Speech Challenge 2017 (ZeroSpeech 2017).

Our system uses DPGMM-based clustering on optimized feature vectors for unsupervised subword modeling. The general idea is to learn frame-level class labels in a first, fully unsupervised run of DPGMM-based clustering. These labels are then used to automatically estimate speech feature transformations (LDA, MLLT and (basis) fMLLR) that improve discriminability and reduce speaker variance. The optimized feature vectors are re-clustered, and frame-wise posteriorgrams are extracted to serve as the new speech representation.
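The two-pass idea can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses scikit-learn's truncated Dirichlet-process mixture as a stand-in for the DPGMM sampler, applies only the LDA stage (MLLT and fMLLR are omitted), and runs on synthetic stand-in data instead of MFCC frames.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic stand-in for acoustic feature frames (e.g. MFCCs):
# five well-separated Gaussian clusters, 100 frames each, 13 dims.
rng = np.random.default_rng(0)
frames = np.vstack([rng.normal(loc=3.0 * i, size=(100, 13)) for i in range(5)])

# Pass 1: DPGMM-style clustering yields frame-level pseudo-labels.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    max_iter=200,
    random_state=0,
)
labels = dpgmm.fit_predict(frames)

# Learn an LDA transform from the pseudo-labels; the output
# dimensionality is the one tunable parameter of the pipeline.
n_out = min(8, len(np.unique(labels)) - 1)
lda = LinearDiscriminantAnalysis(n_components=n_out)
transformed = lda.fit_transform(frames, labels)

# Pass 2: re-cluster the transformed features; the frame-wise
# posterior probabilities over the components are the posteriorgrams.
dpgmm2 = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    max_iter=200,
    random_state=0,
)
dpgmm2.fit(transformed)
posteriorgrams = dpgmm2.predict_proba(transformed)  # shape (500, 10)
```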

The system also applies posteriorgram based model combination.
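One simple form of posteriorgram-based combination is a frame-wise weighted average of the posteriorgrams produced by several clustering runs. The helper below is a hypothetical sketch of that scheme, assuming all models share the same number of components; the submission's exact combination method may differ.

```python
import numpy as np

def combine_posteriorgrams(posts, weights=None):
    """Combine posteriorgrams from several models frame by frame.

    posts: list of (T, K) arrays whose rows sum to 1.
    weights: optional per-model weights; defaults to a uniform average.
    Returns a (T, K) array renormalized so each row sums to 1.
    """
    if weights is None:
        weights = np.ones(len(posts)) / len(posts)
    combined = sum(w * p for w, p in zip(weights, posts))
    return combined / combined.sum(axis=1, keepdims=True)
```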

This pipeline is entirely unsupervised and only requires the raw audio recordings as input. No pre-defined segmentation, speaker IDs or other metadata are required.

The only parameter subject to tuning is the LDA output dimensionality, which in this case was optimized on the development data sets (English, French, Mandarin) and tested on the surprise language data sets (LANG1, LANG2).

Since the output consists of posteriorgrams, ABX scoring needs to be done with the Kullback-Leibler (KL) divergence as the frame-level distance.
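As an illustration of the frame-level distance, the sketch below computes a symmetrized KL divergence between two posteriorgram frames; the small floor on the probabilities avoids log(0) for near-zero posteriors. This is a generic sketch, not the challenge's evaluation code.

```python
import numpy as np

def kl_frame_distance(p, q, eps=1e-10):
    """Symmetrized KL divergence between two posteriorgram frames.

    p, q: probability vectors over the same set of clusters.
    The eps floor (followed by renormalization) guards against
    zero posteriors before taking logarithms.
    """
    p = np.clip(np.asarray(p, dtype=float), eps, None)
    q = np.clip(np.asarray(q, dtype=float), eps, None)
    p, q = p / p.sum(), q / q.sum()
    kl_pq = np.sum(p * np.log(p / q))
    kl_qp = np.sum(q * np.log(q / p))
    return 0.5 * (kl_pq + kl_qp)
```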

Files (9.0 GB)

