
Software · Open Access

Transformers: State-of-the-Art Natural Language Processing

Wolf, Thomas; Debut, Lysandre; Sanh, Victor; Chaumond, Julien; Delangue, Clement; Moi, Anthony; Cistac, Perric; Ma, Clara; Jernite, Yacine; Plu, Julien; Xu, Canwen; Le Scao, Teven; Gugger, Sylvain; Drame, Mariama; Lhoest, Quentin; Rush, Alexander M.


JSON-LD (schema.org) Export

{
  "description": "TrOCR and VisionEncoderDecoderModel\n<p>One new model is released as part of the TrOCR implementation: <code>TrOCRForCausalLM</code>, in PyTorch. It comes along a new <code>VisionEncoderDecoderModel</code> class, which allows to mix-and-match any vision Transformer encoder with any text Transformer as decoder, similar to the existing <code>SpeechEncoderDecoderModel</code> class.</p>\n<p>The TrOCR model was proposed in <a href=\"https://arxiv.org/abs/2109.10282\">TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models</a>, by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.</p>\n<p>The TrOCR model consists of an image transformer encoder and an autoregressive text transformer to perform optical character recognition in an end-to-end manner.</p>\n<ul>\n<li>Add TrOCR + VisionEncoderDecoderModel by @NielsRogge in <a href=\"https://github.com/huggingface/transformers/pull/13874\">https://github.com/huggingface/transformers/pull/13874</a></li>\n</ul>\n<p>Compatible checkpoints can be found on the Hub: <a href=\"https://huggingface.co/models?other=trocr\">https://huggingface.co/models?other=trocr</a></p>\nSEW &amp; SEW-D\n<p>SEW and SEW-D (Squeezed and Efficient Wav2Vec) were proposed in <a href=\"https://arxiv.org/abs/2109.06870\">Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition</a> by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.</p>\n<p>SEW and SEW-D models use a Wav2Vec-style feature encoder and introduce temporal downsampling to reduce the length of the transformer encoder. SEW-D additionally replaces the transformer encoder with a DeBERTa one. Both models achieve significant inference speedups without sacrificing the speech recognition quality.</p>\n<ul>\n<li>Add the SEW and SEW-D speech models by @anton-l in <a href=\"https://github.com/huggingface/transformers/pull/13962\">https://github.com/huggingface/transformers/pull/13962</a></li>\n<li>Add SEW CTC models by @anton-l in <a href=\"https://github.com/huggingface/transformers/pull/14158\">https://github.com/huggingface/transformers/pull/14158</a></li>\n</ul>\n<p>Compatible checkpoints are available on the Hub: <a href=\"https://huggingface.co/models?other=sew\">https://huggingface.co/models?other=sew</a> and <a href=\"https://huggingface.co/models?other=sew-d\">https://huggingface.co/models?other=sew-d</a></p>\nDistilHuBERT\n<p>DistilHuBERT was proposed in <a href=\"https://arxiv.org/abs/2110.01900\">DistilHuBERT: Speech Representation Learning by Layer-wise Distillation of Hidden-unit BERT</a>, by Heng-Jui Chang, Shu-wen Yang, Hung-yi Lee.</p>\n<p>DistilHuBERT is a distilled version of the HuBERT model. 
Using only two transformer layers, the model scores competitively on the SUPERB benchmark tasks.</p>\n<p>A compatible checkpoint is available on the Hub: <a href=\"https://huggingface.co/ntu-spml/distilhubert\">https://huggingface.co/ntu-spml/distilhubert</a></p>\nTensorFlow improvements\n<p>Several bug fixes and UX improvements for TensorFlow</p>\nKeras callback\n<p>Introduction of a Keras callback to push to the hub each epoch, or after a given number of steps:</p>\n<ul>\n<li>Keras callback to push to hub each epoch, or after N steps by @Rocketknight1 in <a href=\"https://github.com/huggingface/transformers/pull/13773\">https://github.com/huggingface/transformers/pull/13773</a></li>\n</ul>\nUpdates on the encoder-decoder framework\n<p>The encoder-decoder framework is now available in TensorFlow, allowing mixing and matching different encoders and decoders together into a single encoder-decoder architecture!</p>\n<ul>\n<li>Add TFEncoderDecoderModel + Add cross-attention to some TF models by @ydshieh in <a href=\"https://github.com/huggingface/transformers/pull/13222\">https://github.com/huggingface/transformers/pull/13222</a></li>\n</ul>\n<p>Besides this, the <code>EncoderDecoderModel</code> classes have been updated to work similarly to models like BART and T5. From now on, users no longer need to pass <code>decoder_input_ids</code> to the model themselves. Instead, they will be created automatically based on the <code>labels</code> (namely by shifting them one position to the right, replacing -100 by the <code>pad_token_id</code> and prepending the <code>decoder_start_token_id</code>). Note that this may result in training discrepancies if fine-tuning a model trained with versions prior to 4.12.0 that set the <code>decoder_input_ids</code> = <code>labels</code>.</p>\n<ul>\n<li>Fix EncoderDecoderModel classes to be more like BART and T5 by @NielsRogge  in <a href=\"https://github.com/huggingface/transformers/pull/14139\">https://github.com/huggingface/transformers/pull/14139</a></li>\n</ul>\nSpeech improvements\n<ul>\n<li>Add DistilHuBERT  by @anton-l in <a href=\"https://github.com/huggingface/transformers/pull/14174\">https://github.com/huggingface/transformers/pull/14174</a></li>\n<li>[Speech Examples] Add pytorch speech pretraining by @patrickvonplaten in <a href=\"https://github.com/huggingface/transformers/pull/13877\">https://github.com/huggingface/transformers/pull/13877</a></li>\n<li>[Speech Examples] Add new audio feature by @patrickvonplaten in <a href=\"https://github.com/huggingface/transformers/pull/14027\">https://github.com/huggingface/transformers/pull/14027</a></li>\n<li>Add ASR colabs by @patrickvonplaten in <a href=\"https://github.com/huggingface/transformers/pull/14067\">https://github.com/huggingface/transformers/pull/14067</a></li>\n<li>[ASR] Make speech recognition example more general to load any tokenizer by @patrickvonplaten in <a href=\"https://github.com/huggingface/transformers/pull/14079\">https://github.com/huggingface/transformers/pull/14079</a></li>\n<li>[Examples] Add an official audio classification example by @anton-l in <a href=\"https://github.com/huggingface/transformers/pull/13722\">https://github.com/huggingface/transformers/pull/13722</a></li>\n<li>[Examples] Use Audio feature in speech classification by @anton-l in <a href=\"https://github.com/huggingface/transformers/pull/14052\">https://github.com/huggingface/transformers/pull/14052</a></li>\n</ul>\nAuto-model API\n<p>To make it easier to extend the Transformers library, every Auto class has 
a new <code>register</code> method, that allows you to register your own custom models, configurations or tokenizers. See more in the <a href=\"https://huggingface.co/transformers/model_doc/auto.html#extending-the-auto-classes\">documentation</a></p>\n<ul>\n<li>Add an API to register objects to Auto classes by @sgugger in <a href=\"https://github.com/huggingface/transformers/pull/13989\">https://github.com/huggingface/transformers/pull/13989</a></li>\n</ul>\nBug fixes and improvements\n<ul>\n<li>Fix filtering in test fetcher utils by @sgugger in <a href=\"https://github.com/huggingface/transformers/pull/13766\">https://github.com/huggingface/transformers/pull/13766</a></li>\n<li>Fix warning for gradient_checkpointing by @sgugger in <a href=\"https://github.com/huggingface/transformers/pull/13767\">https://github.com/huggingface/transformers/pull/13767</a></li>\n<li>Implement len in IterableDatasetShard by @sgugger in <a href=\"https://github.com/huggingface/transformers/pull/13780\">https://github.com/huggingface/transformers/pull/13780</a></li>\n<li>[Wav2Vec2] Better error message by @patrickvonplaten in <a href=\"https://github.com/huggingface/transformers/pull/13777\">https://github.com/huggingface/transformers/pull/13777</a></li>\n<li>Fix LayoutLM ONNX test error by @nishprabhu in <a href=\"https://github.com/huggingface/transformers/pull/13710\">https://github.com/huggingface/transformers/pull/13710</a></li>\n<li>Enable readme link synchronization by @qqaatw in <a href=\"https://github.com/huggingface/transformers/pull/13785\">https://github.com/huggingface/transformers/pull/13785</a></li>\n<li>Fix length of IterableDatasetShard and add test by @sgugger in <a href=\"https://github.com/huggingface/transformers/pull/13792\">https://github.com/huggingface/transformers/pull/13792</a></li>\n<li>[docs/gpt-j] addd instructions for how minimize CPU RAM usage by @patil-suraj in <a href=\"https://github.com/huggingface/transformers/pull/13795\">https://github.com/huggingface/transformers/pull/13795</a></li>\n<li>[examples <code>run_glue.py</code>] missing requirements <code>scipy</code>, <code>sklearn</code> by @stas00 in <a href=\"https://github.com/huggingface/transformers/pull/13768\">https://github.com/huggingface/transformers/pull/13768</a></li>\n<li>[examples/flax] use Repository API for push_to_hub by @patil-suraj in <a href=\"https://github.com/huggingface/transformers/pull/13672\">https://github.com/huggingface/transformers/pull/13672</a></li>\n<li>Fix gather for TPU by @sgugger in <a href=\"https://github.com/huggingface/transformers/pull/13813\">https://github.com/huggingface/transformers/pull/13813</a></li>\n<li>[testing] auto-replay captured streams by @stas00 in <a href=\"https://github.com/huggingface/transformers/pull/13803\">https://github.com/huggingface/transformers/pull/13803</a></li>\n<li>Add MultiBERTs conversion script by @gchhablani in <a href=\"https://github.com/huggingface/transformers/pull/13077\">https://github.com/huggingface/transformers/pull/13077</a></li>\n<li>[Examples] Improve mapping in accelerate examples by @patrickvonplaten in <a href=\"https://github.com/huggingface/transformers/pull/13810\">https://github.com/huggingface/transformers/pull/13810</a></li>\n<li>[DPR] Correct init by @patrickvonplaten in <a href=\"https://github.com/huggingface/transformers/pull/13796\">https://github.com/huggingface/transformers/pull/13796</a></li>\n<li>skip gptj slow generate tests by @patil-suraj in <a 
href=\"https://github.com/huggingface/transformers/pull/13809\">https://github.com/huggingface/transformers/pull/13809</a></li>\n<li>Fix warning situation: UserWarning: max_length is ignored when padding=True\" by @shirayu in <a href=\"https://github.com/huggingface/transformers/pull/13829\">https://github.com/huggingface/transformers/pull/13829</a></li>\n<li>Updating CITATION.cff to fix GitHub citation prompt BibTeX output. by @arfon in <a href=\"https://github.com/huggingface/transformers/pull/13833\">https://github.com/huggingface/transformers/pull/13833</a></li>\n<li>Add TF notebooks by @Rocketknight1 in <a href=\"https://github.com/huggingface/transformers/pull/13793\">https://github.com/huggingface/transformers/pull/13793</a></li>\n<li>Bart: check if decoder_inputs_embeds is set by @silviu-oprea in <a href=\"https://github.com/huggingface/transformers/pull/13800\">https://github.com/huggingface/transformers/pull/13800</a></li>\n<li>include megatron_gpt2 in installed modules by @stas00 in <a href=\"https://github.com/huggingface/transformers/pull/13834\">https://github.com/huggingface/transformers/pull/13834</a></li>\n<li>Delete MultiBERTs conversion script by @gchhablani in <a href=\"https://github.com/huggingface/transformers/pull/13852\">https://github.com/huggingface/transformers/pull/13852</a></li>\n<li>Remove a duplicated bullet point in the GPT-J doc by @yaserabdelaziz in <a href=\"https://github.com/huggingface/transformers/pull/13851\">https://github.com/huggingface/transformers/pull/13851</a></li>\n<li>Add Mistral GPT-2 Stability Tweaks by @siddk in <a href=\"https://github.com/huggingface/transformers/pull/13573\">https://github.com/huggingface/transformers/pull/13573</a></li>\n<li>Fix broken link to distill models in docs by @Randl in <a href=\"https://github.com/huggingface/transformers/pull/13848\">https://github.com/huggingface/transformers/pull/13848</a></li>\n<li>:sparkles: update image classification example by @nateraw in <a href=\"https://github.com/huggingface/transformers/pull/13824\">https://github.com/huggingface/transformers/pull/13824</a></li>\n<li>Update no_* argument (HfArgumentParser) by @BramVanroy in <a href=\"https://github.com/huggingface/transformers/pull/13865\">https://github.com/huggingface/transformers/pull/13865</a></li>\n<li>Update Tatoeba conversion by @Traubert in <a href=\"https://github.com/huggingface/transformers/pull/13757\">https://github.com/huggingface/transformers/pull/13757</a></li>\n<li>Fixing 1-length special tokens cut. by @Narsil in <a href=\"https://github.com/huggingface/transformers/pull/13862\">https://github.com/huggingface/transformers/pull/13862</a></li>\n<li>Fix flax summarization example: save checkpoint after each epoch and push checkpoint to the hub by @ydshieh in <a href=\"https://github.com/huggingface/transformers/pull/13872\">https://github.com/huggingface/transformers/pull/13872</a></li>\n<li>Fixing empty prompts for text-generation when BOS exists. 
by @Narsil in <a href=\"https://github.com/huggingface/transformers/pull/13859\">https://github.com/huggingface/transformers/pull/13859</a></li>\n<li>Improve error message when loading models from Hub by @aphedges in <a href=\"https://github.com/huggingface/transformers/pull/13836\">https://github.com/huggingface/transformers/pull/13836</a></li>\n<li>Initial support for symbolic tracing with torch.fx allowing dynamic axes by @michaelbenayoun in <a href=\"https://github.com/huggingface/transformers/pull/13579\">https://github.com/huggingface/transformers/pull/13579</a></li>\n<li>Allow dataset to be an optional argument for (Distributed)LengthGroupedSampler by @ZhaofengWu in <a href=\"https://github.com/huggingface/transformers/pull/13820\">https://github.com/huggingface/transformers/pull/13820</a></li>\n<li>Fixing question-answering with long contexts  by @Narsil in <a href=\"https://github.com/huggingface/transformers/pull/13873\">https://github.com/huggingface/transformers/pull/13873</a></li>\n<li>fix(integrations): consider test metrics by @borisdayma in <a href=\"https://github.com/huggingface/transformers/pull/13888\">https://github.com/huggingface/transformers/pull/13888</a></li>\n<li>fix: replace asserts by value error by @m5l14i11 in <a href=\"https://github.com/huggingface/transformers/pull/13894\">https://github.com/huggingface/transformers/pull/13894</a></li>\n<li>Update parallelism.md by @hyunwoongko in <a href=\"https://github.com/huggingface/transformers/pull/13892\">https://github.com/huggingface/transformers/pull/13892</a></li>\n<li>Autodocument the list of ONNX-supported models by @sgugger in <a href=\"https://github.com/huggingface/transformers/pull/13884\">https://github.com/huggingface/transformers/pull/13884</a></li>\n<li>Fixing GPU for token-classification in a better way. 
by @Narsil in <a href=\"https://github.com/huggingface/transformers/pull/13856\">https://github.com/huggingface/transformers/pull/13856</a></li>\n<li>Update FSNER code in examples-&gt;research_projects-&gt;fsner by @sayef in <a href=\"https://github.com/huggingface/transformers/pull/13864\">https://github.com/huggingface/transformers/pull/13864</a></li>\n<li>Replace assert statements with exceptions by @ddrm86 in <a href=\"https://github.com/huggingface/transformers/pull/13871\">https://github.com/huggingface/transformers/pull/13871</a></li>\n<li>Fixing Backward compatiblity for zero-shot by @Narsil in <a href=\"https://github.com/huggingface/transformers/pull/13855\">https://github.com/huggingface/transformers/pull/13855</a></li>\n<li>Update run_qa.py - CorrectTypo by @akulagrawal in <a href=\"https://github.com/huggingface/transformers/pull/13857\">https://github.com/huggingface/transformers/pull/13857</a></li>\n<li>T5ForConditionalGeneration: enabling using past_key_values and labels in training by @yssjtu in <a href=\"https://github.com/huggingface/transformers/pull/13805\">https://github.com/huggingface/transformers/pull/13805</a></li>\n<li>Fix trainer logging_nan_inf_filter in torch_xla mode by @ymwangg in <a href=\"https://github.com/huggingface/transformers/pull/13896\">https://github.com/huggingface/transformers/pull/13896</a></li>\n<li>Fix hp search for non sigopt backends by @sgugger in <a href=\"https://github.com/huggingface/transformers/pull/13897\">https://github.com/huggingface/transformers/pull/13897</a></li>\n<li>[Trainer] Fix nan-loss condition by @anton-l in <a href=\"https://github.com/huggingface/transformers/pull/13911\">https://github.com/huggingface/transformers/pull/13911</a></li>\n<li>Raise exceptions instead of asserts in utils/download_glue_data by @hirotasoshu in <a href=\"https://github.com/huggingface/transformers/pull/13907\">https://github.com/huggingface/transformers/pull/13907</a></li>\n<li>Add an example of exporting BartModel + BeamSearch to ONNX module. 
by @fatcat-z in <a href=\"https://github.com/huggingface/transformers/pull/13765\">https://github.com/huggingface/transformers/pull/13765</a></li>\n<li>#12789 Replace assert statements with exceptions by @djroxx2000 in <a href=\"https://github.com/huggingface/transformers/pull/13909\">https://github.com/huggingface/transformers/pull/13909</a></li>\n<li>Add missing whitespace to multiline strings by @aphedges in <a href=\"https://github.com/huggingface/transformers/pull/13916\">https://github.com/huggingface/transformers/pull/13916</a></li>\n<li>[Wav2Vec2] Fix mask_feature_prob by @patrickvonplaten in <a href=\"https://github.com/huggingface/transformers/pull/13921\">https://github.com/huggingface/transformers/pull/13921</a></li>\n<li>Fixes a minor doc issue (missing character) by @mishig25 in <a href=\"https://github.com/huggingface/transformers/pull/13922\">https://github.com/huggingface/transformers/pull/13922</a></li>\n<li>Fix LED by @Rocketknight1 in <a href=\"https://github.com/huggingface/transformers/pull/13882\">https://github.com/huggingface/transformers/pull/13882</a></li>\n<li>Add BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese by @datquocnguyen in <a href=\"https://github.com/huggingface/transformers/pull/13788\">https://github.com/huggingface/transformers/pull/13788</a></li>\n<li>[trainer] memory metrics: add memory at the start report by @stas00 in <a href=\"https://github.com/huggingface/transformers/pull/13915\">https://github.com/huggingface/transformers/pull/13915</a></li>\n<li>Image Segmentation pipeline by @mishig25 in <a href=\"https://github.com/huggingface/transformers/pull/13828\">https://github.com/huggingface/transformers/pull/13828</a></li>\n<li>Adding support for tokens being suffixes or part of each other. 
by @Narsil in <a href=\"https://github.com/huggingface/transformers/pull/13918\">https://github.com/huggingface/transformers/pull/13918</a></li>\n<li>Adds <code>PreTrainedModel.framework</code> attribute by @StellaAthena in <a href=\"https://github.com/huggingface/transformers/pull/13817\">https://github.com/huggingface/transformers/pull/13817</a></li>\n<li>Fixed typo: herBERT -&gt; HerBERT by @adamjankaczmarek in <a href=\"https://github.com/huggingface/transformers/pull/13936\">https://github.com/huggingface/transformers/pull/13936</a></li>\n<li>[Generation] Fix max_new_tokens by @patrickvonplaten in <a href=\"https://github.com/huggingface/transformers/pull/13919\">https://github.com/huggingface/transformers/pull/13919</a></li>\n<li>Fix typo in README.md by @fullyz in <a href=\"https://github.com/huggingface/transformers/pull/13883\">https://github.com/huggingface/transformers/pull/13883</a></li>\n<li>Update bug-report.md by @LysandreJik in <a href=\"https://github.com/huggingface/transformers/pull/13934\">https://github.com/huggingface/transformers/pull/13934</a></li>\n<li>fix issue #13904 -attribute does not exist-  by @oraby8 in <a href=\"https://github.com/huggingface/transformers/pull/13942\">https://github.com/huggingface/transformers/pull/13942</a></li>\n<li>Raise ValueError instead of asserts in src/transformers/benchmark/benchmark.py by @AkechiShiro in <a href=\"https://github.com/huggingface/transformers/pull/13951\">https://github.com/huggingface/transformers/pull/13951</a></li>\n<li>Honor existing attention mask in tokenzier.pad by @sgugger in <a href=\"https://github.com/huggingface/transformers/pull/13926\">https://github.com/huggingface/transformers/pull/13926</a></li>\n<li>[Gradient checkpoining] Correct disabling <code>find_unused_parameters</code> in Trainer when gradient checkpointing is enabled by @patrickvonplaten in <a href=\"https://github.com/huggingface/transformers/pull/13961\">https://github.com/huggingface/transformers/pull/13961</a></li>\n<li>Change DataCollatorForSeq2Seq to pad labels to a multiple of <code>pad_to_multiple_of</code> by @affjljoo3581 in <a href=\"https://github.com/huggingface/transformers/pull/13949\">https://github.com/huggingface/transformers/pull/13949</a></li>\n<li>Replace assert with unittest assertions by @LuisFerTR in <a href=\"https://github.com/huggingface/transformers/pull/13957\">https://github.com/huggingface/transformers/pull/13957</a></li>\n<li>Raise exceptions instead of asserts in  src/transformers/data/processors/xnli.py by @midhun1998 in <a href=\"https://github.com/huggingface/transformers/pull/13945\">https://github.com/huggingface/transformers/pull/13945</a></li>\n<li>Make username optional in hub_model_id by @sgugger in <a href=\"https://github.com/huggingface/transformers/pull/13940\">https://github.com/huggingface/transformers/pull/13940</a></li>\n<li>Raise exceptions instead of asserts in src/transformers/data/processors/utils.py by @killazz67 in <a href=\"https://github.com/huggingface/transformers/pull/13938\">https://github.com/huggingface/transformers/pull/13938</a></li>\n<li>Replace assert by ValueError of src/transformers/models/electra/modeling_{electra,tf_electra}.py and all other models that had copies by @AkechiShiro in <a href=\"https://github.com/huggingface/transformers/pull/13955\">https://github.com/huggingface/transformers/pull/13955</a></li>\n<li>Fix missing tpu variable in benchmark_args_tf.py by @hardianlawi in <a 
href=\"https://github.com/huggingface/transformers/pull/13968\">https://github.com/huggingface/transformers/pull/13968</a></li>\n<li>Specify im-seg mask greyscole mode by @mishig25 in <a href=\"https://github.com/huggingface/transformers/pull/13974\">https://github.com/huggingface/transformers/pull/13974</a></li>\n<li>[Wav2Vec2] Make sure tensors are always bool for mask_indices by @patrickvonplaten in <a href=\"https://github.com/huggingface/transformers/pull/13977\">https://github.com/huggingface/transformers/pull/13977</a></li>\n<li>Fixing the lecture values by making sure defaults are not changed by @Narsil in <a href=\"https://github.com/huggingface/transformers/pull/13976\">https://github.com/huggingface/transformers/pull/13976</a></li>\n<li>[parallel doc] dealing with layers larger than one gpu by @stas00 in <a href=\"https://github.com/huggingface/transformers/pull/13980\">https://github.com/huggingface/transformers/pull/13980</a></li>\n<li>Remove wrong model_args supplied by @qqaatw in <a href=\"https://github.com/huggingface/transformers/pull/13937\">https://github.com/huggingface/transformers/pull/13937</a></li>\n<li>Allow single byte decoding by @patrickvonplaten in <a href=\"https://github.com/huggingface/transformers/pull/13988\">https://github.com/huggingface/transformers/pull/13988</a></li>\n<li>Replace assertion with ValueError exception by @ddrm86 in <a href=\"https://github.com/huggingface/transformers/pull/14006\">https://github.com/huggingface/transformers/pull/14006</a></li>\n<li>Add strong test for configuration attributes by @sgugger in <a href=\"https://github.com/huggingface/transformers/pull/14000\">https://github.com/huggingface/transformers/pull/14000</a></li>\n<li>Fix FNet tokenizer tests by @LysandreJik in <a href=\"https://github.com/huggingface/transformers/pull/13995\">https://github.com/huggingface/transformers/pull/13995</a></li>\n<li>[Testing] Move speech datasets to <code>hf-internal</code> testing ... 
by @patrickvonplaten in <a href=\"https://github.com/huggingface/transformers/pull/14008\">https://github.com/huggingface/transformers/pull/14008</a></li>\n<li>Raise exceptions instead of asserts in src/transformers/models/bart/modeling<em>flax</em>[bart, marian, mbart, pegasus].py by @killazz67 in <a href=\"https://github.com/huggingface/transformers/pull/13939\">https://github.com/huggingface/transformers/pull/13939</a></li>\n<li>Scatter dummies + skip pipeline tests by @LysandreJik in <a href=\"https://github.com/huggingface/transformers/pull/13996\">https://github.com/huggingface/transformers/pull/13996</a></li>\n<li>Fixed horizon_length for PPLM by @jacksukk in <a href=\"https://github.com/huggingface/transformers/pull/13886\">https://github.com/huggingface/transformers/pull/13886</a></li>\n<li>Fix: replace assert statements with exceptions in file src/transformers/models/lxmert/modeling_lxmert.py by @murilo-goncalves in <a href=\"https://github.com/huggingface/transformers/pull/14029\">https://github.com/huggingface/transformers/pull/14029</a></li>\n<li>[Docs] More general docstrings by @patrickvonplaten in <a href=\"https://github.com/huggingface/transformers/pull/14028\">https://github.com/huggingface/transformers/pull/14028</a></li>\n<li>[CLIP] minor fixes by @patil-suraj in <a href=\"https://github.com/huggingface/transformers/pull/14026\">https://github.com/huggingface/transformers/pull/14026</a></li>\n<li>Don't duplicate the elements in dir by @sgugger in <a href=\"https://github.com/huggingface/transformers/pull/14023\">https://github.com/huggingface/transformers/pull/14023</a></li>\n<li>Replace assertions with ValueError exceptions by @ddrm86 in <a href=\"https://github.com/huggingface/transformers/pull/14018\">https://github.com/huggingface/transformers/pull/14018</a></li>\n<li>Fixes typo in <code>modeling_speech_to_text</code> by @mishig25 in <a href=\"https://github.com/huggingface/transformers/pull/14044\">https://github.com/huggingface/transformers/pull/14044</a></li>\n<li>[Speech] Move all examples to new audio feature by @patrickvonplaten in <a href=\"https://github.com/huggingface/transformers/pull/14045\">https://github.com/huggingface/transformers/pull/14045</a></li>\n<li>Update SEW integration test tolerance by @anton-l in <a href=\"https://github.com/huggingface/transformers/pull/14048\">https://github.com/huggingface/transformers/pull/14048</a></li>\n<li>[Flax] Clip fix test by @patrickvonplaten in <a href=\"https://github.com/huggingface/transformers/pull/14046\">https://github.com/huggingface/transformers/pull/14046</a></li>\n<li>Fix save when laod_best_model_at_end=True by @sgugger in <a href=\"https://github.com/huggingface/transformers/pull/14054\">https://github.com/huggingface/transformers/pull/14054</a></li>\n<li>[Speech] Refactor Examples by @patrickvonplaten in <a href=\"https://github.com/huggingface/transformers/pull/14040\">https://github.com/huggingface/transformers/pull/14040</a></li>\n<li>fix typo by @yyy-Apple in <a href=\"https://github.com/huggingface/transformers/pull/14049\">https://github.com/huggingface/transformers/pull/14049</a></li>\n<li>Fix typo by @ihoromi4 in <a href=\"https://github.com/huggingface/transformers/pull/14056\">https://github.com/huggingface/transformers/pull/14056</a></li>\n<li>[FX] Fix passing None as concrete args when tracing by @thomasw21 in <a href=\"https://github.com/huggingface/transformers/pull/14022\">https://github.com/huggingface/transformers/pull/14022</a></li>\n<li>TF Model train and eval step metrics for 
seq2seq models. by @pedro-r-marques in <a href=\"https://github.com/huggingface/transformers/pull/14009\">https://github.com/huggingface/transformers/pull/14009</a></li>\n<li>update to_py_obj to support np.number by @PrettyMeng in <a href=\"https://github.com/huggingface/transformers/pull/14064\">https://github.com/huggingface/transformers/pull/14064</a></li>\n<li>Trainer._load_rng_state() path fix (#14069) by @tlby in <a href=\"https://github.com/huggingface/transformers/pull/14071\">https://github.com/huggingface/transformers/pull/14071</a></li>\n<li>replace assert with exception in src/transformers/utils/model_pararallel_utils.py by @skpig in <a href=\"https://github.com/huggingface/transformers/pull/14072\">https://github.com/huggingface/transformers/pull/14072</a></li>\n<li>Add missing autocast() in Trainer.prediction_step() by @juice500ml in <a href=\"https://github.com/huggingface/transformers/pull/14075\">https://github.com/huggingface/transformers/pull/14075</a></li>\n<li>Fix assert in src/transformers/data/datasets/language_modeling.py by @skpig in <a href=\"https://github.com/huggingface/transformers/pull/14077\">https://github.com/huggingface/transformers/pull/14077</a></li>\n<li>Fix label attribution in token classification examples by @sgugger in <a href=\"https://github.com/huggingface/transformers/pull/14055\">https://github.com/huggingface/transformers/pull/14055</a></li>\n<li>Context managers by @lvwerra in <a href=\"https://github.com/huggingface/transformers/pull/13900\">https://github.com/huggingface/transformers/pull/13900</a></li>\n<li>Fix broken link in the translation section of task summaries by @h4iku in <a href=\"https://github.com/huggingface/transformers/pull/14087\">https://github.com/huggingface/transformers/pull/14087</a></li>\n<li>[ASR] Small fix model card creation by @patrickvonplaten in <a href=\"https://github.com/huggingface/transformers/pull/14093\">https://github.com/huggingface/transformers/pull/14093</a></li>\n<li>Change asserts in src/transformers/models/xlnet/ to raise ValueError by @WestonKing-Leatham in <a href=\"https://github.com/huggingface/transformers/pull/14088\">https://github.com/huggingface/transformers/pull/14088</a></li>\n<li>Replace assertions with ValueError exceptions by @ddrm86 in <a href=\"https://github.com/huggingface/transformers/pull/14061\">https://github.com/huggingface/transformers/pull/14061</a></li>\n<li>[Typo] Replace \"Masked\" with \"Causal\" in TF CLM script by @cakiki in <a href=\"https://github.com/huggingface/transformers/pull/14014\">https://github.com/huggingface/transformers/pull/14014</a></li>\n<li>[Examples] Add audio classification notebooks by @anton-l in <a href=\"https://github.com/huggingface/transformers/pull/14099\">https://github.com/huggingface/transformers/pull/14099</a></li>\n<li>Fix ignore_mismatched_sizes by @qqaatw in <a href=\"https://github.com/huggingface/transformers/pull/14085\">https://github.com/huggingface/transformers/pull/14085</a></li>\n<li>Fix typo in comment by @stalkermustang in <a href=\"https://github.com/huggingface/transformers/pull/14102\">https://github.com/huggingface/transformers/pull/14102</a></li>\n<li>Replace assertion with ValueError exception by @ddrm86 in <a href=\"https://github.com/huggingface/transformers/pull/14098\">https://github.com/huggingface/transformers/pull/14098</a></li>\n<li>fix typo in license docstring by @21jun in <a 
href=\"https://github.com/huggingface/transformers/pull/14094\">https://github.com/huggingface/transformers/pull/14094</a></li>\n<li>Fix a typo in preprocessing docs by @h4iku in <a href=\"https://github.com/huggingface/transformers/pull/14108\">https://github.com/huggingface/transformers/pull/14108</a></li>\n<li>Replace assertions with ValueError exceptions by @iDeepverma in <a href=\"https://github.com/huggingface/transformers/pull/14091\">https://github.com/huggingface/transformers/pull/14091</a></li>\n<li>[tests] fix hubert test sort by @patrickvonplaten in <a href=\"https://github.com/huggingface/transformers/pull/14116\">https://github.com/huggingface/transformers/pull/14116</a></li>\n<li>Replace assert statements with exceptions (#13871) by @ddrm86 in <a href=\"https://github.com/huggingface/transformers/pull/13901\">https://github.com/huggingface/transformers/pull/13901</a></li>\n<li>Translate README.md to Korean by @yeounyi in <a href=\"https://github.com/huggingface/transformers/pull/14015\">https://github.com/huggingface/transformers/pull/14015</a></li>\n<li>Replace assertions with valueError Exeptions by @jyshdewangan in <a href=\"https://github.com/huggingface/transformers/pull/14117\">https://github.com/huggingface/transformers/pull/14117</a></li>\n<li>Fix assertion in models by @skpig in <a href=\"https://github.com/huggingface/transformers/pull/14090\">https://github.com/huggingface/transformers/pull/14090</a></li>\n<li>[wav2vec2] Add missing --validation_split_percentage data arg by @falcaopetri in <a href=\"https://github.com/huggingface/transformers/pull/14119\">https://github.com/huggingface/transformers/pull/14119</a></li>\n<li>Rename variables with unclear naming by @qqaatw in <a href=\"https://github.com/huggingface/transformers/pull/14122\">https://github.com/huggingface/transformers/pull/14122</a></li>\n<li>Update TP parallel GEMM image by @hyunwoongko in <a href=\"https://github.com/huggingface/transformers/pull/14112\">https://github.com/huggingface/transformers/pull/14112</a></li>\n<li>Fix some typos in the docs by @h4iku in <a href=\"https://github.com/huggingface/transformers/pull/14126\">https://github.com/huggingface/transformers/pull/14126</a></li>\n<li>Supporting Seq2Seq model for question answering task by @karthikrangasai in <a href=\"https://github.com/huggingface/transformers/pull/13432\">https://github.com/huggingface/transformers/pull/13432</a></li>\n<li>Fix rendering of examples version links by @h4iku in <a href=\"https://github.com/huggingface/transformers/pull/14134\">https://github.com/huggingface/transformers/pull/14134</a></li>\n<li>Fix some writing issues in the docs by @h4iku in <a href=\"https://github.com/huggingface/transformers/pull/14136\">https://github.com/huggingface/transformers/pull/14136</a></li>\n<li>BartEnocder add set_input_embeddings by @Liangtaiwan in <a href=\"https://github.com/huggingface/transformers/pull/13960\">https://github.com/huggingface/transformers/pull/13960</a></li>\n<li>Remove unneeded <code>to_tensor()</code> in TF inline example by @Rocketknight1 in <a href=\"https://github.com/huggingface/transformers/pull/14140\">https://github.com/huggingface/transformers/pull/14140</a></li>\n<li>Enable DefaultDataCollator class by @Rocketknight1 in <a href=\"https://github.com/huggingface/transformers/pull/14141\">https://github.com/huggingface/transformers/pull/14141</a></li>\n<li>Fix lazy init to stop hiding errors in import by @sgugger in <a 
href=\"https://github.com/huggingface/transformers/pull/14124\">https://github.com/huggingface/transformers/pull/14124</a></li>\n<li>Add TF&lt;&gt;PT and Flax&lt;&gt;PT everywhere by @patrickvonplaten in <a href=\"https://github.com/huggingface/transformers/pull/14047\">https://github.com/huggingface/transformers/pull/14047</a></li>\n<li>Add Camembert to models exportable with ONNX by @ChainYo in <a href=\"https://github.com/huggingface/transformers/pull/14059\">https://github.com/huggingface/transformers/pull/14059</a></li>\n<li>[Speech Recognition CTC] Add auth token to fine-tune private models by @patrickvonplaten in <a href=\"https://github.com/huggingface/transformers/pull/14154\">https://github.com/huggingface/transformers/pull/14154</a></li>\n<li>Add vision_encoder_decoder to models/<strong>init</strong>.py by @ydshieh in <a href=\"https://github.com/huggingface/transformers/pull/14151\">https://github.com/huggingface/transformers/pull/14151</a></li>\n<li>[Speech Recognition] - Distributed training: Make sure vocab file removal and creation don't interfer  by @patrickvonplaten in <a href=\"https://github.com/huggingface/transformers/pull/14161\">https://github.com/huggingface/transformers/pull/14161</a></li>\n<li>Include Keras tensor in the allowed types by @sergiovalmac in <a href=\"https://github.com/huggingface/transformers/pull/14155\">https://github.com/huggingface/transformers/pull/14155</a></li>\n<li>[megatron_gpt2] dynamic gelu, add tokenizer, save config by @stas00 in <a href=\"https://github.com/huggingface/transformers/pull/13928\">https://github.com/huggingface/transformers/pull/13928</a></li>\n<li>Add Unispeech &amp; Unispeech-SAT by @patrickvonplaten in <a href=\"https://github.com/huggingface/transformers/pull/13963\">https://github.com/huggingface/transformers/pull/13963</a></li>\n<li>[ONNX] Add symbolic function for XSoftmax op for exporting to ONNX. 
by @fatcat-z in <a href=\"https://github.com/huggingface/transformers/pull/14013\">https://github.com/huggingface/transformers/pull/14013</a></li>\n<li>Typo on ner accelerate example code by @monologg in <a href=\"https://github.com/huggingface/transformers/pull/14150\">https://github.com/huggingface/transformers/pull/14150</a></li>\n<li>fix typos in error messages in speech recognition example and modelcard.py by @mgoldey in <a href=\"https://github.com/huggingface/transformers/pull/14166\">https://github.com/huggingface/transformers/pull/14166</a></li>\n<li>Replace assertions with ValueError exception by @huberemanuel in <a href=\"https://github.com/huggingface/transformers/pull/14142\">https://github.com/huggingface/transformers/pull/14142</a></li>\n<li>switch to inference_mode from no_gard by @kamalkraj in <a href=\"https://github.com/huggingface/transformers/pull/13667\">https://github.com/huggingface/transformers/pull/13667</a></li>\n<li>Fix gelu test for torch 1.10 by @LysandreJik in <a href=\"https://github.com/huggingface/transformers/pull/14167\">https://github.com/huggingface/transformers/pull/14167</a></li>\n<li>[Gradient checkpointing] Enable for Deberta + DebertaV2 + SEW-D by @patrickvonplaten in <a href=\"https://github.com/huggingface/transformers/pull/14175\">https://github.com/huggingface/transformers/pull/14175</a></li>\n<li>[Pipelines] Fix ASR model types check by @anton-l in <a href=\"https://github.com/huggingface/transformers/pull/14178\">https://github.com/huggingface/transformers/pull/14178</a></li>\n<li>Replace assert of data/data_collator.py by ValueError by @AkechiShiro in <a href=\"https://github.com/huggingface/transformers/pull/14131\">https://github.com/huggingface/transformers/pull/14131</a></li>\n<li>[TPU tests] Enable first TPU examples pytorch by @patrickvonplaten in <a href=\"https://github.com/huggingface/transformers/pull/14121\">https://github.com/huggingface/transformers/pull/14121</a></li>\n<li>[modeling_utils] respect original dtype in _get_resized_lm_head by @stas00 in <a href=\"https://github.com/huggingface/transformers/pull/14181\">https://github.com/huggingface/transformers/pull/14181</a></li>\n</ul>\nNew Contributors\n<ul>\n<li>@arfon made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/13833\">https://github.com/huggingface/transformers/pull/13833</a></li>\n<li>@silviu-oprea made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/13800\">https://github.com/huggingface/transformers/pull/13800</a></li>\n<li>@yaserabdelaziz made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/13851\">https://github.com/huggingface/transformers/pull/13851</a></li>\n<li>@Randl made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/13848\">https://github.com/huggingface/transformers/pull/13848</a></li>\n<li>@Traubert made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/13757\">https://github.com/huggingface/transformers/pull/13757</a></li>\n<li>@ZhaofengWu made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/13820\">https://github.com/huggingface/transformers/pull/13820</a></li>\n<li>@m5l14i11 made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/13894\">https://github.com/huggingface/transformers/pull/13894</a></li>\n<li>@hyunwoongko made their first contribution in <a 
href=\"https://github.com/huggingface/transformers/pull/13892\">https://github.com/huggingface/transformers/pull/13892</a></li>\n<li>@ddrm86 made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/13871\">https://github.com/huggingface/transformers/pull/13871</a></li>\n<li>@akulagrawal made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/13857\">https://github.com/huggingface/transformers/pull/13857</a></li>\n<li>@yssjtu made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/13805\">https://github.com/huggingface/transformers/pull/13805</a></li>\n<li>@ymwangg made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/13896\">https://github.com/huggingface/transformers/pull/13896</a></li>\n<li>@hirotasoshu made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/13907\">https://github.com/huggingface/transformers/pull/13907</a></li>\n<li>@fatcat-z made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/13765\">https://github.com/huggingface/transformers/pull/13765</a></li>\n<li>@djroxx2000 made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/13909\">https://github.com/huggingface/transformers/pull/13909</a></li>\n<li>@adamjankaczmarek made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/13936\">https://github.com/huggingface/transformers/pull/13936</a></li>\n<li>@oraby8 made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/13942\">https://github.com/huggingface/transformers/pull/13942</a></li>\n<li>@AkechiShiro made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/13951\">https://github.com/huggingface/transformers/pull/13951</a></li>\n<li>@affjljoo3581 made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/13949\">https://github.com/huggingface/transformers/pull/13949</a></li>\n<li>@LuisFerTR made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/13957\">https://github.com/huggingface/transformers/pull/13957</a></li>\n<li>@midhun1998 made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/13945\">https://github.com/huggingface/transformers/pull/13945</a></li>\n<li>@killazz67 made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/13938\">https://github.com/huggingface/transformers/pull/13938</a></li>\n<li>@hardianlawi made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/13968\">https://github.com/huggingface/transformers/pull/13968</a></li>\n<li>@jacksukk made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/13886\">https://github.com/huggingface/transformers/pull/13886</a></li>\n<li>@murilo-goncalves made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/14029\">https://github.com/huggingface/transformers/pull/14029</a></li>\n<li>@yyy-Apple made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/14049\">https://github.com/huggingface/transformers/pull/14049</a></li>\n<li>@ihoromi4 made their first contribution in <a 
href=\"https://github.com/huggingface/transformers/pull/14056\">https://github.com/huggingface/transformers/pull/14056</a></li>\n<li>@thomasw21 made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/14022\">https://github.com/huggingface/transformers/pull/14022</a></li>\n<li>@pedro-r-marques made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/14009\">https://github.com/huggingface/transformers/pull/14009</a></li>\n<li>@PrettyMeng made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/14064\">https://github.com/huggingface/transformers/pull/14064</a></li>\n<li>@tlby made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/14071\">https://github.com/huggingface/transformers/pull/14071</a></li>\n<li>@skpig made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/14072\">https://github.com/huggingface/transformers/pull/14072</a></li>\n<li>@juice500ml made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/14075\">https://github.com/huggingface/transformers/pull/14075</a></li>\n<li>@h4iku made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/14087\">https://github.com/huggingface/transformers/pull/14087</a></li>\n<li>@WestonKing-Leatham made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/14088\">https://github.com/huggingface/transformers/pull/14088</a></li>\n<li>@cakiki made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/14014\">https://github.com/huggingface/transformers/pull/14014</a></li>\n<li>@stalkermustang made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/14102\">https://github.com/huggingface/transformers/pull/14102</a></li>\n<li>@iDeepverma made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/14091\">https://github.com/huggingface/transformers/pull/14091</a></li>\n<li>@yeounyi made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/14015\">https://github.com/huggingface/transformers/pull/14015</a></li>\n<li>@jyshdewangan made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/14117\">https://github.com/huggingface/transformers/pull/14117</a></li>\n<li>@karthikrangasai made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/13432\">https://github.com/huggingface/transformers/pull/13432</a></li>\n<li>@ChainYo made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/14059\">https://github.com/huggingface/transformers/pull/14059</a></li>\n<li>@sergiovalmac made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/14155\">https://github.com/huggingface/transformers/pull/14155</a></li>\n<li>@huberemanuel made their first contribution in <a href=\"https://github.com/huggingface/transformers/pull/14142\">https://github.com/huggingface/transformers/pull/14142</a></li>\n</ul>\n<p><strong>Full Changelog</strong>: <a href=\"https://github.com/huggingface/transformers/compare/v4.11.0...v4.12.0\">https://github.com/huggingface/transformers/compare/v4.11.0...v4.12.0</a></p>", 
  "license": "", 
  "creator": [
    {
      "@type": "Person", 
      "name": "Wolf, Thomas"
    }, 
    {
      "@type": "Person", 
      "name": "Debut, Lysandre"
    }, 
    {
      "@type": "Person", 
      "name": "Sanh, Victor"
    }, 
    {
      "@type": "Person", 
      "name": "Chaumond, Julien"
    }, 
    {
      "@type": "Person", 
      "name": "Delangue, Clement"
    }, 
    {
      "@type": "Person", 
      "name": "Moi, Anthony"
    }, 
    {
      "@type": "Person", 
      "name": "Cistac, Perric"
    }, 
    {
      "@type": "Person", 
      "name": "Ma, Clara"
    }, 
    {
      "@type": "Person", 
      "name": "Jernite, Yacine"
    }, 
    {
      "@type": "Person", 
      "name": "Plu, Julien"
    }, 
    {
      "@type": "Person", 
      "name": "Xu, Canwen"
    }, 
    {
      "@type": "Person", 
      "name": "Le Scao, Teven"
    }, 
    {
      "@type": "Person", 
      "name": "Gugger, Sylvain"
    }, 
    {
      "@type": "Person", 
      "name": "Drame, Mariama"
    }, 
    {
      "@type": "Person", 
      "name": "Lhoest, Quentin"
    }, 
    {
      "@type": "Person", 
      "name": "Rush, Alexander M."
    }
  ], 
  "url": "https://zenodo.org/record/5608580", 
  "codeRepository": "https://github.com/huggingface/transformers/tree/v4.12.0", 
  "datePublished": "2020-10-01", 
  "version": "v4.12.0", 
  "@context": "https://schema.org/", 
  "identifier": "https://doi.org/10.5281/zenodo.5608580", 
  "@id": "https://doi.org/10.5281/zenodo.5608580", 
  "@type": "SoftwareSourceCode", 
  "name": "Transformers: State-of-the-Art Natural Language Processing"
}
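
The release notes embedded in the record above introduce VisionEncoderDecoderModel for end-to-end OCR with TrOCR. Below is a minimal inference sketch; the checkpoint name (microsoft/trocr-base-handwritten, one of the TrOCR checkpoints linked from the Hub search above) and the local image path are assumptions for illustration, not part of the release notes.

```python
# Sketch: TrOCR inference with VisionEncoderDecoderModel (transformers >= 4.12).
# "microsoft/trocr-base-handwritten" and "line.png" are assumed placeholders.
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")

image = Image.open("line.png").convert("RGB")              # a single text-line image
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)               # autoregressive text decoding
text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(text)
```

As the notes say, the class is built for mixing and matching: it also exposes a from_encoder_decoder_pretrained helper to pair, for example, a ViT encoder with a BERT decoder, mirroring the existing SpeechEncoderDecoderModel.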
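The Keras callback mentioned in the TensorFlow section (PR 13773) pushes checkpoints to the Hub during model.fit. A minimal sketch follows, assuming a compiled TF classification model and a tokenized tf.data.Dataset; the argument names reflect the transformers.keras_callbacks module around 4.12 and should be verified against the documentation, and the directory name is a placeholder.

```python
# Sketch: push a TF model to the Hub after each epoch using the Keras callback.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
from transformers.keras_callbacks import PushToHubCallback

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
model.compile(
    optimizer=tf.keras.optimizers.Adam(3e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)

push_cb = PushToHubCallback(
    output_dir="my-finetuned-model",   # local dir mirrored to the Hub repository
    save_strategy="epoch",             # or "steps" together with save_steps=N
    tokenizer=tokenizer,               # tokenizer files are pushed alongside the weights
)

# train_dataset is assumed to be a tf.data.Dataset of tokenized, labeled examples:
# model.fit(train_dataset, epochs=3, callbacks=[push_cb])
```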
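The EncoderDecoderModel change described in the notes builds decoder_input_ids from labels by shifting them one position to the right, replacing -100 with the pad_token_id, and prepending the decoder_start_token_id. The sketch below illustrates that described behaviour; it is not a copy of the library code.

```python
import torch

def shift_labels_right(labels: torch.Tensor, pad_token_id: int,
                       decoder_start_token_id: int) -> torch.Tensor:
    """Build decoder_input_ids from labels as described in the v4.12.0 notes."""
    shifted = labels.new_zeros(labels.shape)
    shifted[:, 1:] = labels[:, :-1].clone()              # shift one position to the right
    shifted[:, 0] = decoder_start_token_id               # prepend the decoder start token
    shifted.masked_fill_(shifted == -100, pad_token_id)  # -100 (ignored by the loss) -> pad
    return shifted

labels = torch.tensor([[42, 43, 44, -100, -100]])
print(shift_labels_right(labels, pad_token_id=0, decoder_start_token_id=1))
# tensor([[ 1, 42, 43, 44,  0]])
```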
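The Auto-model API section mentions a new register method on every Auto class (PR 13989). A minimal sketch of registering a custom configuration and model so that AutoConfig and AutoModel can resolve them; ToyConfig and ToyModel are hypothetical classes invented for this example.

```python
import torch
from transformers import AutoConfig, AutoModel, PretrainedConfig, PreTrainedModel

class ToyConfig(PretrainedConfig):
    model_type = "toy"                 # key the Auto classes use to look this up
    def __init__(self, hidden_size=16, **kwargs):
        super().__init__(**kwargs)
        self.hidden_size = hidden_size

class ToyModel(PreTrainedModel):
    config_class = ToyConfig
    def __init__(self, config):
        super().__init__(config)
        self.linear = torch.nn.Linear(config.hidden_size, config.hidden_size)
    def forward(self, x):
        return self.linear(x)

# Register the custom objects with the Auto classes.
AutoConfig.register("toy", ToyConfig)
AutoModel.register(ToyConfig, ToyModel)

config = AutoConfig.for_model("toy", hidden_size=8)
model = AutoModel.from_config(config)
print(type(model).__name__)            # ToyModel
```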
                    All versions    This version
Views                     37,139             187
Downloads                  1,293               6
Data volume              10.0 GB         73.7 MB
Unique views              30,889             151
Unique downloads             667               6
