Published October 24, 2016 | Version v1
Dataset (Open Access)

A word2vec model file built from the French Wikipedia XML Dump using gensim.

  • University of Würzburg

Description

A word2vec model file built from the French Wikipedia XML dump using gensim. The dataset published here includes three model files (all three must be kept in the same folder) as well as the Python script used to build the model (for documentation). The Wikipedia dump was downloaded on October 7, 2016 from https://dumps.wikimedia.org/. Before building the model, plain text was extracted from the dump; the extracted corpus comprises about 500 million words, or roughly 3.6 GB of plain text. The principal parameters used to build the model were: no lemmatization, tokenization by splitting on the "\W" regular expression (any non-word character ends a token), and 500 dimensions.
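
For reference, the following is a minimal sketch (not the published script) of how a model with these parameters could be built with gensim. The corpus path, the output file name, and the worker count are placeholders, and the size keyword is called vector_size in gensim 4.x.

    import re
    from gensim.models import Word2Vec

    # Any non-word character splits tokens, matching the "\W" rule described above.
    token_split = re.compile(r"\W+")

    class PlainTextSentences:
        """Stream the extracted plain text line by line so the ~3.6 GB
        corpus never has to be held in memory at once."""
        def __init__(self, path):
            self.path = path

        def __iter__(self):
            with open(self.path, encoding="utf-8") as fh:
                for line in fh:
                    tokens = [t for t in token_split.split(line) if t]
                    if tokens:
                        yield tokens

    # "frwiki-plaintext.txt" is a placeholder for the text extracted from the dump.
    sentences = PlainTextSentences("frwiki-plaintext.txt")
    model = Word2Vec(sentences, size=500, workers=4)  # size= is vector_size= in gensim 4.x
    model.save("frwiki.word2vec.gensim")  # gensim splits large models into several files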

Files

Files (1.9 GB)

  • 2.9 kB (md5:090749a5fc06c35213b0a9dcdbe504b9)
  • 31.3 MB (md5:be587b5dd37141d2d455a8965ef69f1a)
  • 951.0 MB (md5:49f21e6842bdeafc3ac49db08e11ef84)
  • 951.0 MB (md5:56b21cd52cafd79f87aaf6a6361f10ac)