Published April 25, 2018 | Version v1
Dataset Open

An Empirical Study of Word Embedding Dimensionality Reduction

Creators

  • Peak Labs

Contributors

  • Peak Labs

Description

To analyze how reducing the number of dimensions affects model quality, strictly controlled word embedding trainings were performed on Wikipedia corpora in 170 languages. A specially designed word embedding training tool reuses the processed corpus and intermediate results to accelerate training while keeping negative sampling consistent across runs.
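As a rough illustration of such a controlled sweep (not the authors' tool), the sketch below trains gensim Word2Vec models that differ only in dimensionality; the corpus path, dimension grid, and hyperparameter values are assumptions for illustration.

```python
# Controlled dimensionality sweep: every hyperparameter except
# vector_size is held fixed across runs. gensim does not reuse
# intermediate results the way the paper's specialized tool does,
# so this only approximates the setup described above.
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

corpus = LineSentence("wiki.en.tokenized.txt")  # hypothetical preprocessed dump

for dim in (50, 100, 200, 300, 400):            # assumed dimension grid
    model = Word2Vec(
        sentences=corpus,
        vector_size=dim,   # the only parameter varied between runs
        window=5,
        negative=5,        # negative sampling rate, held constant
        sample=1e-4,
        min_count=5,
        workers=1,         # single worker avoids thread nondeterminism
        seed=42,
        epochs=5,
    )
    model.wv.save(f"vectors-{dim}.kv")
```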

Tests of semantic relatedness show that, except for some corpora of insufficient scale, the marginal gain from extra dimensions decreases significantly above 200.
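A semantic relatedness test of this kind is typically scored as the Spearman correlation between human similarity ratings and the cosine similarity of the trained vectors. A minimal sketch, assuming the vectors saved above and a local copy of a word-pair benchmark such as WordSim-353 (file name hypothetical):

```python
# Score each dimensionality on a word-pair relatedness benchmark.
from gensim.models import KeyedVectors

for dim in (50, 100, 200, 300, 400):
    wv = KeyedVectors.load(f"vectors-{dim}.kv")
    # wordsim353.tsv: tab-separated "word1<TAB>word2<TAB>human_score" rows
    pearson, spearman, oov = wv.evaluate_word_pairs("wordsim353.tsv")
    print(f"dim={dim}  spearman={spearman[0]:.3f}  oov={oov:.1f}%")
```

Plotting the Spearman scores against dimensionality is one way to see the flattening marginal gain reported above.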

Files (12.1 GB)

models-100.zip

MD5 checksum                            Size
md5:41939b1442bf98fca7061e5f453aaf1e    1.2 GB
md5:4231678bb87430787b5e03c9cec34185    2.3 GB
md5:499faeb960521b19661e39bac1e09f47    3.4 GB
md5:56e2838f23536bc1e8b72c672668d3cc    4.6 GB
md5:4c75f9c9b07a189132bd0402cf0ddafd    584.7 MB
md5:2e4f0abf594112f9fb35d86457eb74e9    19.3 MB
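For multi-gigabyte archives it is worth verifying each download against the MD5 checksums listed above. A small sketch; the local file name is a placeholder, and which checksum matches which archive follows the record page:

```python
import hashlib

def md5sum(path: str, chunk_size: int = 1 << 20) -> str:
    # Stream in 1 MiB chunks so multi-GB files never load fully into memory.
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "41939b1442bf98fca7061e5f453aaf1e"       # first checksum in the table above
print(md5sum("downloaded-archive.zip") == expected)  # placeholder file name
```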