Published June 1, 2014 | Version v1
Dataset | Open Access

Don't count, predict! Semantic vectors

  • University of Trento

Description

Semantic vectors associated with the paper "Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors"

Abstract: Context-predicting models (more commonly known as embeddings or neural language models) are the new kids on the distributional semantics block. Despite the buzz surrounding these models, the literature is still lacking a systematic comparison of the predictive models with classic, count-vector-based distributional semantic approaches. In this paper, we perform such an extensive evaluation, on a wide range of lexical semantics tasks and across many parameter settings. The results, to our own surprise, show that the buzz is fully justified, as the context-predicting models obtain a thorough and resounding victory against their count-based counterparts.
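The record itself does not document the file format of the distributed vectors, but semantic vectors of this kind are typically consumed by loading them into memory and comparing words by cosine similarity. The sketch below is a minimal, hedged illustration of that workflow; it assumes a plain-text "word v1 v2 ... vn" one-entry-per-line format and a hypothetical file name, neither of which is confirmed by this page, so consult the accompanying paper and dataset documentation for the actual layouts.

```python
# Minimal sketch (not the authors' tooling): load distributional vectors from a
# plain-text file and compare two words by cosine similarity.
# Assumptions: one "word v1 v2 ... vn" entry per line; "vectors.txt" is a
# hypothetical file name standing in for one of the dataset files.

import numpy as np


def load_vectors(path):
    """Read whitespace-separated 'word v1 v2 ... vn' lines into a dict of arrays."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split()
            if len(parts) < 2:
                continue
            vectors[parts[0]] = np.array(parts[1:], dtype=float)
    return vectors


def cosine(u, v):
    """Cosine similarity, the standard comparison measure for semantic vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))


if __name__ == "__main__":
    vecs = load_vectors("vectors.txt")  # hypothetical file name
    print(cosine(vecs["car"], vecs["automobile"]))
```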

Files

Files (2.8 GB)

MD5 checksum                          Size
md5:18225022479202eedcaa83175a85b69d  6.3 kB
md5:a3152a86b1cf462241a943e26582a849  500.0 MB
md5:05ac800eb85f5017cdaefc2597552e64  500.0 MB
md5:f1c2bd158db8755510edfd0ef90b1861  428.6 MB
md5:213e19cf9c6a5f266650b2ffe0fca0c5  500.0 MB
md5:66b31c5f2032a96bfefd63d32fe2e6ee  331.7 MB
md5:91523ff5c4d31c8e24b0f4c79a541800  562.9 MB

Additional details

Funding

COMPOSES – Compositional Operations in Semantic Space (grant no. 283554)
European Commission