Dataset (Open Access)

A word2vec model file built from the French Wikipedia XML Dump using gensim.

Schöch, Christof

A word2vec model file built from the French Wikipedia XML dump using gensim. The data published here comprises three model files (all three must be kept in the same folder) as well as the Python script used to build the model, included for documentation. The Wikipedia dump was downloaded on October 7, 2016 from https://dumps.wikimedia.org/. Before building the model, plain text was extracted from the dump; that dataset amounts to about 500 million words, or 3.6 GB of plain text. The principal parameters for building the model were the following: no lemmatization was performed, tokenization was done with the "\W" regular expression (any non-word character splits tokens), and the model was built with 500 dimensions.
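
For reference, a minimal usage sketch (this is not the published build_word2vec_model_v020.py script): it assumes all three model files sit in the current working directory, that a gensim release contemporary with the model (pre-4.0 API) is installed, and that "france" is merely an illustrative query token whose exact casing depends on the preprocessing in the published script.

    # Minimal sketch: load the published model and query it with gensim.
    # Assumes frwiki.gensim, frwiki.gensim.syn0.npy and frwiki.gensim.syn1neg.npy
    # are all in the current working directory, and a pre-4.0 gensim release;
    # in gensim 4.x the same queries go through model.wv instead of the model.
    import re
    from gensim.models import Word2Vec

    model = Word2Vec.load("frwiki.gensim")  # resolves the accompanying .npy arrays

    # Tokenize a query string the same way the corpus was tokenized:
    # split on the "\W" regular expression and drop empty strings.
    tokens = [t for t in re.split(r"\W", "Le roi de France") if t]

    # Nearest neighbours in the 500-dimensional vector space (illustrative token).
    print(model.most_similar("france", topn=5))
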

Files (1.9 GB)

Name                           MD5 checksum                       Size
build_word2vec_model_v020.py   090749a5fc06c35213b0a9dcdbe504b9   2.9 kB
frwiki.gensim                  be587b5dd37141d2d455a8965ef69f1a   31.3 MB
frwiki.gensim.syn0.npy         49f21e6842bdeafc3ac49db08e11ef84   951.0 MB
frwiki.gensim.syn1neg.npy      56b21cd52cafd79f87aaf6a6361f10ac   951.0 MB
