This is a version of the original GoogleNews-vectors-negative300 Word2Vec embeddings for English. In addition, we provide the following modified files:
Note that this is not a product of original research, but a derived work, deposited here as a point of permanent reference and as a building block for subsequent research. For such applications, a publication independent of Google is necessary to guarantee stability against changes in their data releases.
The original Word2vec code and data were published via https://code.google.com/archive/p/word2vec/ under the Apache License 2.0. We obtained the Word2vec data from https://drive.google.com/file/d/0B7XkCwpI5KDYNlNUTTlSS21pQmM/edit?usp=sharing on June 3, 2020.
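The vectors are distributed in word2vec's binary format: a text header "<vocab_size> <dim>", followed, for each word, by the UTF-8 word, a space, and <dim> little-endian float32 values. In practice the file is usually loaded with gensim's KeyedVectors.load_word2vec_format(..., binary=True); the minimal sketch below shows the format itself with a toy vocabulary (the word and vector values are invented for illustration). Note that files written by the original C tool may append a newline after each vector, which robust loaders such as gensim tolerate; this sketch only round-trips its own output.

```python
import struct
import tempfile

def write_word2vec_bin(path, vectors):
    """Write {word: [float, ...]} in the word2vec binary format:
    a header line "<vocab_size> <dim>", then per word the UTF-8
    word, a space, and <dim> little-endian float32 values."""
    dim = len(next(iter(vectors.values())))
    with open(path, "wb") as f:
        f.write(f"{len(vectors)} {dim}\n".encode("utf-8"))
        for word, vec in vectors.items():
            f.write(word.encode("utf-8") + b" ")
            f.write(struct.pack(f"<{dim}f", *vec))

def read_word2vec_bin(path):
    """Read a word2vec binary file back into {word: [float, ...]}."""
    out = {}
    with open(path, "rb") as f:
        n_words, dim = (int(x) for x in f.readline().split())
        for _ in range(n_words):
            # The word runs up to the first space byte.
            word = bytearray()
            while (ch := f.read(1)) != b" ":
                word.extend(ch)
            out[word.decode("utf-8")] = list(
                struct.unpack(f"<{dim}f", f.read(4 * dim)))
    return out

# Round-trip a toy vocabulary; the real GoogleNews file holds
# 3,000,000 entries with 300 dimensions each.
toy = {"king": [0.1, 0.2, 0.3], "queen": [0.4, 0.5, 0.6]}
with tempfile.NamedTemporaryFile(suffix=".bin", delete=False) as tmp:
    path = tmp.name
write_word2vec_bin(path, toy)
loaded = read_word2vec_bin(path)
```

Values come back as float32, so comparisons against the original Python floats need a small tolerance.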
The Word2vec documentation included the following references:
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient Estimation of Word Representations in Vector Space. In Proceedings of Workshop at ICLR, 2013.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed Representations of Words and Phrases and their Compositionality. In Proceedings of NIPS, 2013.
Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. Linguistic Regularities in Continuous Space Word Representations. In Proceedings of NAACL HLT, 2013.
The derived data is made available under the same license (Apache License 2.0). However, note that the content derived from WordNet (lemmas) is subject to the Princeton WordNet license as stated in LICENSE.