
Published November 7, 2019 | Version v1
Dataset (Open Access)

Quantity doesn't buy quality syntax with neural language models

  • van Schijndel, Marten (Cornell University)
  • Mueller, Aaron (Johns Hopkins University)
  • Linzen, Tal (Johns Hopkins University)

Description

This repository contains the 125 LSTM models analyzed in van Schijndel, Mueller, and Linzen (2019) "Quantity doesn't buy quality syntax with neural language models". Each archive contains 25 models trained on a specific number of training tokens.

The naming convention for each model is (see the parsing sketch after the note below):
LSTM_[Hidden Units]_[Training Tokens]_[Training Partition]_[Random Seed]-d[Dropout Rate].pt

  • Hidden Units: the number of hidden units per layer (each model has two layers) {100, 200, 400, 800, 1600}
  • Training Tokens: the number of tokens used to train the model {2m, 10m, 20m, 40m, 80m}
  • Training Partition: five distinct training partitions were created for each amount of training data {a, b, c, d, e}
  • Random Seed: the random seed used to train the model*
  • Dropout Rate: all models used a dropout rate of 0.2

*A scripting bug led to a random seed of 0 for all models trained on fewer than 40 million tokens. This does not substantively affect the analyses, since every model is still distinct in its configuration or training data, so we opted not to retrain the models with unique random seeds, saving time and computational resources.
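For convenience, here is a minimal sketch of how the naming convention above could be parsed programmatically. The regular expression and the example filename are assumptions inferred from the convention, not guaranteed matches for the actual archive contents; loading a checkpoint additionally requires the model class used for training to be importable.

```python
import re
from typing import NamedTuple, Optional

class ModelSpec(NamedTuple):
    hidden_units: int     # hidden units per layer (each model has two layers)
    training_tokens: str  # e.g. "2m", "80m"
    partition: str        # training partition: a-e
    seed: int             # random seed (0 for models trained on fewer than 40m tokens)
    dropout: float        # dropout rate (0.2 for all released models)

# Mirrors LSTM_[Hidden Units]_[Training Tokens]_[Partition]_[Seed]-d[Dropout].pt
_NAME = re.compile(
    r"LSTM_(?P<hidden>\d+)_(?P<tokens>\d+m)_(?P<part>[a-e])_(?P<seed>\d+)-d(?P<drop>[\d.]+)\.pt$"
)

def parse_model_name(filename: str) -> Optional[ModelSpec]:
    """Extract the model configuration encoded in a checkpoint filename."""
    m = _NAME.search(filename)
    if m is None:
        return None
    return ModelSpec(
        hidden_units=int(m.group("hidden")),
        training_tokens=m.group("tokens"),
        partition=m.group("part"),
        seed=int(m.group("seed")),
        dropout=float(m.group("drop")),
    )

# Hypothetical filename constructed from the convention above:
print(parse_model_name("LSTM_400_40m_c_157-d0.2.pt"))
```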

Files (13.3 GB)

Size    MD5
2.7 GB  114e1a60f9e4f62f3369f9451a40e075
2.7 GB  71105f301b90694faa68b2a80e3195c1
2.6 GB  ec59aa70584f58e737507980772cb882
2.7 GB  bf23aaa6824fe7c2c387bea3a0b6edff
2.7 GB  3af698b7986b7aee77940127e71a8e76

Additional details

Related works

Is documented by
Conference paper: 10.18653/v1/D19-1592 (DOI)
