2024-03-29T00:49:35Z
https://zenodo.org/oai2d
oai:zenodo.org:3765794
2020-06-22T14:09:52Z
user-mlperf
Ramesh Chukka
Srujana Gattupalli
Maxim Shevtsov
2020-04-24
<p>BERT-Large model fine-tuned on the SQuAD v1.1 dataset with whole-word masking (WWM).</p>
https://doi.org/10.5281/zenodo.3765794
oai:zenodo.org:3765794
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.3765793
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
bert_squad_wwm_int8_mixed_qdq.onnx
info:eu-repo/semantics/other
oai:zenodo.org:3353417
2019-07-31T16:45:00Z
user-mlperf
Dilip Sequeira
2019-07-27
<p>A mobilenet model with symmetric per-tensor quantization. Please see the README for how to use the weights. Please note that it is identical to the one at <a href="https://zenodo.org/record/2600560#.XTzNiehKguW">https://zenodo.org/record/2600560#.XTzNiehKguW</a> save that it has information on the provenance of the weights and is released under Apache 2.0 rather than CC 4.0.</p>
https://doi.org/10.5281/zenodo.3353417
oai:zenodo.org:3353417
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.3353416
info:eu-repo/semantics/openAccess
Apache License 2.0
http://www.apache.org/licenses/LICENSE-2.0
MobileNet v1 with Symmetric Per-Tensor quantization.
info:eu-repo/semantics/other
oai:zenodo.org:3239977
2019-06-20T16:52:51Z
user-mlperf
Itay Hubara
2019-06-05
<p><strong>Application: </strong> Single-stage Object Detection</p>
<p><strong>Base model:</strong> MobileNet-v1</p>
<p><strong>Framework:</strong> pytorch1.1</p>
<p><strong>Training Information: </strong>weights were imported from the equivalent TF model</p>
<p><strong>Quality:</strong> The COCO mAP(IoU=0.50:0.95) on 5000 validation images is 23.0%</p>
<p><strong>Precision:</strong> single-precision float</p>
<p><strong>Is Quantized: </strong>No</p>
<p><strong>Dataset: </strong>COCO val-2017</p>
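The COCO mAP(IoU=0.50:0.95) figure quoted in these records averages average precision over ten IoU thresholds from 0.50 to 0.95 in steps of 0.05. A minimal sketch of that averaging, using hypothetical per-threshold AP values (the real values come from the COCO evaluator):

```python
import numpy as np

# Ten IoU thresholds: 0.50, 0.55, ..., 0.95
thresholds = np.linspace(0.50, 0.95, 10)

# Hypothetical AP values at each threshold (AP typically drops as the
# IoU requirement becomes stricter).
ap_per_threshold = np.linspace(0.40, 0.06, num=len(thresholds))

# COCO-style mAP is the mean over the ten thresholds.
coco_map = ap_per_threshold.mean()
print(len(thresholds))  # 10
print(coco_map)
```

With these made-up AP values the mean lands at 0.23, i.e. the 23.0% style of figure reported above.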
https://doi.org/10.5281/zenodo.3239977
oai:zenodo.org:3239977
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.3239976
info:eu-repo/semantics/openAccess
Apache License 2.0
http://www.apache.org/licenses/LICENSE-2.0
mobilenet-v1-ssd300.pytorch model
info:eu-repo/semantics/other
oai:zenodo.org:3401714
2019-09-17T22:38:27Z
user-mlperf
Itay Hubara
2019-09-06
<p><strong>Application: </strong> Single-stage Object Detection</p>
<p><strong>Base model:</strong> MobileNet-v1</p>
<p><strong>Framework:</strong> tensorflow1.1</p>
<p><strong>Training Information: </strong>weights were fine-tuned using TF fake quantization nodes</p>
<p><strong>Quality:</strong> The COCO mAP(IoU=0.50:0.95) on 5000 validation images is 23.4%</p>
<p><strong>Precision:</strong> 8-bit precision</p>
<p><strong>Is Quantized: </strong>Yes, using fake quantization with symmetric=True, i.e., weights appear in float32 but have only 256 unique values and no zero point.</p>
<p><strong>Dataset: </strong>COCO val-2017</p>
Fake quantization with symmetric=True. Weights appear in float32 but have only 256 unique values and no zero point.
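The symmetric fake-quantization scheme described above can be illustrated with a short numpy sketch (hypothetical weights; symmetric 8-bit means a single scale with no zero point, so 0.0 maps exactly to 0.0 and at most 256 distinct values survive):

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.1, size=(3, 3, 64)).astype(np.float32)

# Symmetric 8-bit fake quantization: one scale, no zero point.
scale = np.abs(weights).max() / 127.0
q = np.clip(np.round(weights / scale), -127, 127)  # integer grid
fake_quant = (q * scale).astype(np.float32)        # stored back as float32

# The tensor is still float32, but only a limited set of values remains.
print(fake_quant.dtype)                   # float32
print(len(np.unique(fake_quant)) <= 256)  # True
```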
https://doi.org/10.5281/zenodo.3401714
oai:zenodo.org:3401714
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.3401713
info:eu-repo/semantics/openAccess
Apache License 2.0
http://www.apache.org/licenses/LICENSE-2.0
mobilenet-v1-ssd300.tensorflow (8bit symmetrically quantized and fine-tuned)
info:eu-repo/semantics/other
oai:zenodo.org:3733868
2020-06-22T14:09:55Z
user-mlperf
Huang, Po-Han
Forster, Christopher
2020-03-31
<p>BERT TensorFlow model trained on SQuAD v1.1 for MLPerf Inference. To re-create the model, train on SQuAD v1.1 dataset according to the instructions here: https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow/LanguageModeling/BERT</p>
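The SQuAD v1.1 quality metric these BERT records report is a token-overlap F1 between the predicted and reference answer spans. A simplified sketch (the official evaluation script also normalizes case, articles, and punctuation, which is omitted here):

```python
from collections import Counter

def squad_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1, the core of the SQuAD v1.1 metric."""
    pred_tokens = prediction.split()
    ref_tokens = reference.split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(squad_f1("in the park", "the park"))  # partial overlap, F1 = 0.8
```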
https://doi.org/10.5281/zenodo.3733868
oai:zenodo.org:3733868
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.3733867
info:eu-repo/semantics/openAccess
Apache License 2.0
http://www.apache.org/licenses/LICENSE-2.0
Inference
TensorFlow
BERT
LanguageModeling
MLPerf
SQuAD
Pretrained Model
MLPerf Inference BERT Tensorflow Model on SQuAD v1.1 dataset
info:eu-repo/semantics/other
oai:zenodo.org:3236545
2019-06-20T16:53:26Z
user-mlperf
Itay Hubara
2019-05-29
<p><strong>Application: </strong> Single-stage Object Detection</p>
<p><strong>Base model:</strong> ResNet-34</p>
<p><strong>Framework:</strong> pytorch1.1</p>
<p><strong>Training Information: </strong>based on mlperf/training/single_stage_detector. Details in the mlperf/inference README file</p>
<p><strong>Quality:</strong> The COCO mAP(IoU=0.50:0.95) on 5000 validation images is 20.0%</p>
<p><strong>Precision:</strong> single-precision float</p>
<p><strong>Is Quantized: </strong>No</p>
<p><strong>Dataset: </strong>COCO val-2017</p>
https://doi.org/10.5281/zenodo.3236545
oai:zenodo.org:3236545
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.3235022
info:eu-repo/semantics/openAccess
Apache License 2.0
http://www.apache.org/licenses/LICENSE-2.0
resnet34-ssd1200.pytorch model
info:eu-repo/semantics/other
oai:zenodo.org:2592612
2021-05-03T22:41:56Z
user-mlperf
https://github.com/mlperf/training/tree/master/image_classification
https://github.com/tensorflow/models/tree/master/official/resnet
2019-01-08
<ul>
<li><strong>Application: </strong>Image Classification</li>
<li><strong>ML Task:</strong> ResNet-50</li>
<li><strong>Framework:</strong> ONNX (via tensorflow)</li>
<li><strong>Training Information: </strong></li>
<li><strong>Quality:</strong> 76.53%</li>
<li><strong>Precision:</strong> fp32</li>
<li><strong>Is Quantized: </strong>no</li>
<li><strong>Is ONNX: yes</strong></li>
<li><strong>Dataset: </strong>http://www.image-net.org/challenges/LSVRC/2012/</li>
</ul>
How this model was created:
```
# Download the frozen tensorflow model from https://zenodo.org/deposit/2535873
pip install -U tf2onnx
python -m tf2onnx.convert --input resnet50_v1.pb --inputs input_tensor:0 --inputs-as-nchw input_tensor:0 --outputs ArgMax:0 --verbose --opset 8 --output resnet50_v1.onnx
```
https://doi.org/10.5281/zenodo.2592612
oai:zenodo.org:2592612
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.2535874
info:eu-repo/semantics/openAccess
Apache License 2.0
http://www.apache.org/licenses/LICENSE-2.0
Image Classification, resnet50, tensorflow, ONNX, Inference, Imagenet2012, Pretrained Model
resnet50.onnx model
info:eu-repo/semantics/other
oai:zenodo.org:2541184
2021-05-03T22:41:55Z
user-mlperf
https://github.com/mlperf/training/tree/master/image_classification
https://github.com/tensorflow/models/tree/master/official/resnet
2019-01-08
<ul>
<li><strong>Application: </strong>Image Classification</li>
<li><strong>ML Task:</strong> ResNet-50</li>
<li><strong>Framework:</strong> ONNX (via tensorflow)</li>
<li><strong>Training Information: </strong></li>
<li><strong>Quality:</strong> 76.53%</li>
<li><strong>Precision:</strong> fp32</li>
<li><strong>Is Quantized: </strong>no</li>
<li><strong>Is ONNX: yes</strong></li>
<li><strong>Dataset: </strong>http://www.image-net.org/challenges/LSVRC/2012/</li>
</ul>
How this model was created:
```
# Download the frozen tensorflow model from https://zenodo.org/deposit/2535873
pip install -U tf2onnx
python -m tf2onnx.convert --input resnet50_v1.pb --inputs input_tensor:0 --inputs-as-nchw input_tensor:0 --outputs ArgMax:0 --verbose --opset 8 --output resnet50_v1.onnx
```
https://doi.org/10.5281/zenodo.2541184
oai:zenodo.org:2541184
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.2535874
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Image Classification, resnet50, tensorflow, ONNX, Inference, Imagenet2012, Pretrained Model
Trained resnet50 ONNX Model for MLPerf Cloud Inference
info:eu-repo/semantics/other
oai:zenodo.org:4735651
2021-05-03T22:47:05Z
user-mlperf
https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md
2019-04-10
<ul>
<li><strong>Application: </strong>Image Classification</li>
<li><strong>ML Task:</strong> mobilenetv1</li>
<li><strong>Framework:</strong> ONNX (via tensorflow)</li>
<li><strong>Training Information: </strong></li>
<li><strong>Quality:</strong> 70.9%</li>
<li><strong>Precision:</strong> fp32</li>
<li><strong>Is Quantized: </strong>no</li>
<li><strong>Is ONNX: yes</strong></li>
<li><strong>Dataset: </strong>http://www.image-net.org/challenges/LSVRC/2012/</li>
</ul>
script to convert from tensorflow:
https://github.com/mlcommons/inference/blob/master/vision/classification_and_detection/tools/convert-to-onnx.sh
https://doi.org/10.5281/zenodo.4735651
oai:zenodo.org:4735651
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.2635593
info:eu-repo/semantics/openAccess
Apache License 2.0
http://www.apache.org/licenses/LICENSE-2.0
Image Classification, mobilenetv1, tensorflow, ONNX, Inference, Imagenet2012, Pretrained Model
mobilenet-v1.onnx model
info:eu-repo/semantics/other
oai:zenodo.org:2635594
2021-05-03T22:47:01Z
user-mlperf
https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md
2019-04-10
<ul>
<li><strong>Application: </strong>Image Classification</li>
<li><strong>ML Task:</strong> mobilenetv1</li>
<li><strong>Framework:</strong> ONNX (via tensorflow)</li>
<li><strong>Training Information: </strong></li>
<li><strong>Quality:</strong> 70.9%</li>
<li><strong>Precision:</strong> fp32</li>
<li><strong>Is Quantized: </strong>no</li>
<li><strong>Is ONNX: yes</strong></li>
<li><strong>Dataset: </strong>http://www.image-net.org/challenges/LSVRC/2012/</li>
</ul>
script that generated this model: https://github.com/mlperf/inference/blob/master/cloud/image_classification/tools/mobilenet-to-onnx.sh
https://doi.org/10.5281/zenodo.2635594
oai:zenodo.org:2635594
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.2635593
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Image Classification, mobilenetv1, tensorflow, ONNX, Inference, Imagenet2012, Pretrained Model
Trained mobilenetv1 ONNX Model for MLPerf Cloud Inference
info:eu-repo/semantics/other
oai:zenodo.org:3157894
2021-05-03T22:47:02Z
user-mlperf
https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md
2019-04-10
<ul>
<li><strong>Application: </strong>Image Classification</li>
<li><strong>ML Task:</strong> mobilenetv1</li>
<li><strong>Framework:</strong> ONNX (via tensorflow)</li>
<li><strong>Training Information: </strong></li>
<li><strong>Quality:</strong> 70.9%</li>
<li><strong>Precision:</strong> fp32</li>
<li><strong>Is Quantized: </strong>no</li>
<li><strong>Is ONNX: yes</strong></li>
<li><strong>Dataset: </strong>http://www.image-net.org/challenges/LSVRC/2012/</li>
</ul>
script to convert from tensorflow: https://gist.github.com/guschmue/788ae7f602c1f15ce3998b8d5f56ed2e
https://doi.org/10.5281/zenodo.3157894
oai:zenodo.org:3157894
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.2635593
info:eu-repo/semantics/openAccess
Apache License 2.0
http://www.apache.org/licenses/LICENSE-2.0
Image Classification, mobilenetv1, tensorflow, ONNX, Inference, Imagenet2012, Pretrained Model
mobilenet-v1.onnx model
info:eu-repo/semantics/other
oai:zenodo.org:6605272
2022-06-02T20:30:41Z
user-mlperf
Ahmad Kiswani
2022-06-02
<ul>
<li><strong>Application: </strong>Object Detection</li>
<li><strong>ML Task:</strong> Retinanet-ResNext50</li>
<li><strong>Framework:</strong> Pytorch</li>
<li><strong>Training Information:</strong></li>
<li><strong>Quality:</strong> 0.3755 mAP</li>
<li><strong>Precision:</strong> single-precision float</li>
<li><strong>Is Quantized: </strong>no</li>
<li><strong>Is ONNX: </strong>no</li>
<li><strong>Dataset: OpenImages Object detection dataset</strong></li>
</ul>
https://doi.org/10.5281/zenodo.6605272
oai:zenodo.org:6605272
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.6605271
info:eu-repo/semantics/openAccess
Apache License 2.0
http://www.apache.org/licenses/LICENSE-2.0
Object Detection
Retinanet
Pytorch
Openimages
Inference
Trained Model for Retinanet-ResNext50 for MLPerf Inference
info:eu-repo/semantics/other
oai:zenodo.org:3899507
2020-06-22T14:09:35Z
user-mlperf
Shamporov, Vasily
Shevtsov
2020-06-17
<p>Application: Question Answering</p>
<p>ML Task: MobileBERT</p>
<p>Framework: ONNX</p>
<p>Training information: source <a href="https://github.com/google-research/google-research/tree/master/mobilebert">https://github.com/google-research/google-research/tree/master/mobilebert</a>. The https://storage.googleapis.com/cloud-tpu-checkpoints/mobilebert/uncased_L-24_H-128_B-512_A-4_F-4_OPT.tar.gz checkpoint was imported into PyTorch using Huggingface Transformers. Quantization-aware training was then performed with Huggingface, and the model was saved in ONNX format.</p>
<p>Quality: F1 89.4% (INT8 model)</p>
<p>Precision: INT8</p>
<p>Is Quantized: Yes</p>
<p>Is ONNX: Yes</p>
<p>Dataset: SQuAD v1.1</p>
https://doi.org/10.5281/zenodo.3899507
oai:zenodo.org:3899507
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.3899506
info:eu-repo/semantics/openAccess
Apache License 2.0
http://www.apache.org/licenses/LICENSE-2.0
MobileBERT, ONNX, Squad v1.1, inference, int8 model
Trained model for MobileBERT in ONNX INT8 for MLPerf inference
info:eu-repo/semantics/other
oai:zenodo.org:3252084
2019-09-17T23:06:16Z
user-mlperf
Matan Haroush
2019-06-21
<p><strong>Application: </strong> Single-stage Object Detection</p>
<p><strong>Base model:</strong> MobileNet-v1</p>
<p><strong>Framework:</strong> tensorflow1.1</p>
<p><strong>Training Information: </strong>weights were fine-tuned using TF fake quantization nodes</p>
<p><strong>Quality:</strong> The COCO mAP(IoU=0.50:0.95) on 5000 validation images is 23.0%</p>
<p><strong>Precision:</strong> 8-bit precision</p>
<p><strong>Is Quantized: </strong>Yes, using fake quantization, i.e., weights appear in float32 but have only 256 unique values.</p>
<p><strong>Dataset: </strong>COCO val-2017</p>
https://doi.org/10.5281/zenodo.3252084
oai:zenodo.org:3252084
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.3252083
info:eu-repo/semantics/openAccess
Apache License 2.0
http://www.apache.org/licenses/LICENSE-2.0
mobilenet-v1-ssd300.tensorflow (8bit quantized and fine-tuned)
info:eu-repo/semantics/other
oai:zenodo.org:3345892
2019-07-29T17:52:59Z
user-mlperf
xilinx_ai
2019-07-19
<p>SSD-resnet34 model </p>
<p>© Copyright 2019 Xilinx, Inc.</p>
<p>Application: Object Detection<br>
ML Task: ResNet-34-SSD<br>
Framework: tensorflow 1.12<br>
Training Information:<br>
Quality: mmAP 22.1%<br>
Precision: single-precision float<br>
Is Quantized: no<br>
Is ONNX: no<br>
Dataset: COCO</p>
<p>Please download: <a href="https://zenodo.org/api/files/ad80f486-95ce-4d1a-a29b-1acc58eb66bc/tf_ssd_resnet34_22.1.zip?versionId=7aea39fd-372f-44b2-8be2-f20f7ed01632">tf_ssd_resnet34_22.1.zip</a> <br>
Model filename: resnet34_tf.22.1.pb<br>
Model details: <a href="https://github.com/lji72/inference/tree/tf_ssd_resent34_align_onnx/others/cloud/single_stage_detector/tensorflow">https://github.com/lji72/inference/tree/tf_ssd_resent34_align_onnx/others/cloud/single_stage_detector/tensorflow</a> </p>
https://doi.org/10.5281/zenodo.3345892
oai:zenodo.org:3345892
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.3343063
info:eu-repo/semantics/openAccess
Apache License 2.0
http://www.apache.org/licenses/LICENSE-2.0
SSD-resnet34 model
info:eu-repo/semantics/other
oai:zenodo.org:3343064
2019-07-29T17:52:59Z
user-mlperf
xilinx_ai
2019-07-19
<p>SSD-resnet34 for review</p>
https://doi.org/10.5281/zenodo.3343064
oai:zenodo.org:3343064
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.3343063
info:eu-repo/semantics/openAccess
Apache License 2.0
http://www.apache.org/licenses/LICENSE-2.0
SSD-resnet34 for review
info:eu-repo/semantics/other
oai:zenodo.org:3906875
2020-06-30T05:18:17Z
user-mlperf
Chukka, Ramesh
Talanin, Evgeny
2020-06-24
<p><strong>Application</strong>: Image Classification<br>
<strong>ML Task</strong>: MobileNetEdge<br>
<strong>Framework</strong>: Tensorflow<br>
<strong>Training Information</strong>:<br>
<strong>Quality</strong>: 72.436 (Top 1)<br>
<strong>Precision</strong>: FP32<br>
<strong>Is Quantized</strong>: No<br>
<strong>Is ONNX</strong>: No<br>
<strong>Dataset</strong>: ImageNet</p>
Please refer to README.txt enclosed in the zip file for the details on how the frozen model is created.
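Top-1 accuracy, as quoted above (72.436), is simply the fraction of images whose highest-scoring class matches the ground-truth label. A minimal sketch with hypothetical logits:

```python
import numpy as np

# Hypothetical logits for 4 images over 5 classes, plus ground-truth labels.
logits = np.array([
    [0.1, 2.0, 0.3, 0.1, 0.0],
    [1.5, 0.2, 0.1, 0.0, 0.4],
    [0.0, 0.1, 0.2, 3.0, 0.1],
    [0.2, 0.1, 1.0, 0.3, 2.5],
])
labels = np.array([1, 0, 3, 2])

predictions = logits.argmax(axis=1)          # top-1 class per image
top1 = (predictions == labels).mean() * 100  # accuracy in percent
print(top1)  # 75.0
```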
https://doi.org/10.5281/zenodo.3906875
oai:zenodo.org:3906875
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.3906874
info:eu-repo/semantics/openAccess
Apache License 2.0
http://www.apache.org/licenses/LICENSE-2.0
mobilenetedgetpu
FP32
frozen model
tensorflow
MobileNetEdge Frozen model in Tensorflow with FP32 precision
info:eu-repo/semantics/other
oai:zenodo.org:3726327
2020-06-22T14:09:56Z
user-mlperf
user-ck
Leo Gordon
Anton Lokhmotov
2020-03-24
<p><a href="https://developer.nvidia.com/tensorrt">TensorRT</a> plans generated on a <a href="https://developer.nvidia.com/embedded/jetson-agx-xavier-developer-kit">Jetson AGX Xavier Developer Kit</a> using code and instructions from <a href="https://github.com/mlperf/inference_results_v0.5/tree/master/closed/NVIDIA">NVIDIA's MLPerf Inference v0.5 submission</a> without any modifications.</p>
Also includes libnmsoptplugin.so
https://doi.org/10.5281/zenodo.3726327
oai:zenodo.org:3726327
Zenodo
https://zenodo.org/communities/mlperf
https://zenodo.org/communities/ck
https://doi.org/10.5281/zenodo.3726326
info:eu-repo/semantics/openAccess
Apache License 2.0
http://www.apache.org/licenses/LICENSE-2.0
MLPerf
Inference
TensorRT
Xavier
int8
original
MLPerf Inference v0.5 - TensorRT plans for NVIDIA Jetson AGX Xavier - int8, original
info:eu-repo/semantics/other
oai:zenodo.org:3878955
2020-06-05T22:14:01Z
user-mlperf
Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, Denny Zhou
2020-06-05
<ul>
<li><strong>Application: </strong>Question Answering</li>
<li><strong>ML Task:</strong> MobileBERT</li>
<li><strong>Framework:</strong> TensorFlow (Lite) 2.2</li>
<li><strong>Training Information: </strong>See source for float model @ <a href="https://github.com/google-research/google-research/tree/master/mobilebert">https://github.com/google-research/google-research/tree/master/mobilebert</a>. The quant model source will be made available soon.</li>
<li><strong>Quality:</strong> Float: 90 F1, Quant: 88 F1</li>
<li><strong>Precision:</strong> Float32, Int8</li>
<li><strong>Is Quantized: </strong>Yes</li>
<li><strong>Is ONNX: </strong>No</li>
<li><strong>Dataset: </strong>Squad v1.1</li>
</ul>
<p><strong>Additional Model Details:</strong></p>
<ul>
<li><strong>Model: </strong>Vocab Size: 30k, Sequence Length: 384</li>
<li><strong>Inputs: </strong>input_ids (int32), input_mask (int32), segment_ids (int32)</li>
<li><strong>Outputs: </strong>start_logits, end_logits</li>
<li><strong>NNAPI Compat: </strong>No. Additional work required for the quantized model.</li>
</ul>
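Given the fixed sequence length of 384 and the int32 inputs listed above, feeding this model amounts to building three fixed-shape tensors. A sketch with hypothetical token IDs, assuming batch size 1 (the IDs and segment split are made up for illustration):

```python
import numpy as np

SEQ_LEN = 384  # fixed sequence length stated in the model details

# Hypothetical tokenized question+context, padded out to SEQ_LEN.
token_ids = [101, 2054, 2003, 102, 1996, 3007, 102]  # hypothetical IDs
pad = SEQ_LEN - len(token_ids)

input_ids = np.array([token_ids + [0] * pad], dtype=np.int32)
input_mask = np.array([[1] * len(token_ids) + [0] * pad], dtype=np.int32)
segment_ids = np.array([[0] * 4 + [1] * 3 + [0] * pad], dtype=np.int32)

# Each input is (batch, 384) int32; the model's outputs (start_logits,
# end_logits) would have the same (batch, 384) shape.
for name, t in [("input_ids", input_ids),
                ("input_mask", input_mask),
                ("segment_ids", segment_ids)]:
    print(name, t.shape, t.dtype)
```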
https://doi.org/10.5281/zenodo.3878955
oai:zenodo.org:3878955
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.3878950
info:eu-repo/semantics/openAccess
Apache License 2.0
http://www.apache.org/licenses/LICENSE-2.0
Q&A, MobileBERT, TensorFlow, TensorFlow Lite, Squad v1.1, Inference, Pretrained Model
TFLite Models for MobileBERT for MLPerf Inference
info:eu-repo/semantics/other
oai:zenodo.org:3878951
2020-06-05T22:14:01Z
user-mlperf
Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, Denny Zhou
2020-06-05
<ul>
<li><strong>Application: </strong>Question Answering</li>
<li><strong>ML Task:</strong> MobileBERT</li>
<li><strong>Framework:</strong> TensorFlow (Lite) 2.2</li>
<li><strong>Training Information: </strong>See source for float model @ <a href="https://github.com/google-research/google-research/tree/master/mobilebert">https://github.com/google-research/google-research/tree/master/mobilebert</a>. The quant model source will be made available soon.</li>
<li><strong>Quality:</strong> Float: 90 F1, Quant: 88 F1</li>
<li><strong>Precision:</strong> Float32, Int8</li>
<li><strong>Is Quantized: </strong>Yes</li>
<li><strong>Is ONNX: </strong>No</li>
<li><strong>Dataset: </strong>Squad v1.1</li>
</ul>
<p><strong>Additional Model Details:</strong></p>
<ul>
<li><strong>Model: </strong>Vocab Size: 30k, Sequence Length: 384</li>
<li><strong>Inputs: </strong>input_ids (int32), input_mask (int32), segment_ids (int32)</li>
<li><strong>Outputs: </strong>start_logits, end_logits</li>
<li><strong>NNAPI Compat: </strong>No. Additional work required for the quantized model.</li>
</ul>
https://doi.org/10.5281/zenodo.3878951
oai:zenodo.org:3878951
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.3878950
info:eu-repo/semantics/openAccess
Apache License 2.0
http://www.apache.org/licenses/LICENSE-2.0
Q&A, MobileBERT, TensorFlow, TensorFlow Lite, Squad v1.1, Inference, Pretrained Model
TFLite Models for MobileBERT for MLPerf Inference
info:eu-repo/semantics/other
oai:zenodo.org:3422182
2019-09-17T22:38:27Z
user-mlperf
Itay Hubara
2019-09-06
<p><strong>Application: </strong> Single-stage Object Detection</p>
<p><strong>Base model:</strong> MobileNet-v1</p>
<p><strong>Framework:</strong> tensorflow1.1</p>
<p><strong>Training Information: </strong>weights were fine-tuned using TF fake quantization nodes</p>
<p><strong>Quality:</strong> The COCO mAP(IoU=0.50:0.95) on 5000 validation images is 23.4%</p>
<p><strong>Precision:</strong> 8-bit precision</p>
<p><strong>Is Quantized: </strong>Yes, using fake quantization with symmetric=True, i.e., weights appear in float32 but have only 256 unique values and no zero point.</p>
<p><strong>Dataset: </strong>COCO val-2017</p>
Fake quantization with symmetric=True. Weights appear in float32 but have only 256 unique values and no zero point. Additional information in the README file.
https://doi.org/10.5281/zenodo.3422182
oai:zenodo.org:3422182
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.3401713
info:eu-repo/semantics/openAccess
Apache License 2.0
http://www.apache.org/licenses/LICENSE-2.0
mobilenet-v1-ssd300.tensorflow (8bit symmetrically quantized and fine-tuned)
info:eu-repo/semantics/other
oai:zenodo.org:3235023
2019-06-20T16:53:25Z
user-mlperf
Itay Hubara
2019-05-29
<p><strong>Application: </strong> Single-stage Object Detection</p>
<p><strong>Base model:</strong> ResNet-34</p>
<p><strong>Framework:</strong> pytorch1.0</p>
<p><strong>Training Information: </strong>based on mlperf/training/single_stage_detector. Details in the mlperf/inference README file</p>
<p><strong>Quality:</strong> The COCO mAP(IoU=0.50:0.95) on 5000 validation images is 20.0%</p>
<p><strong>Precision:</strong> single-precision float</p>
<p><strong>Is Quantized: </strong>No</p>
<p><strong>Dataset: </strong>COCO val-2017</p>
https://doi.org/10.5281/zenodo.3235023
oai:zenodo.org:3235023
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.3235022
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
resnet34-ssd1200.pytorch model
info:eu-repo/semantics/other
oai:zenodo.org:4735652
2021-05-03T22:55:18Z
user-mlperf
mlperf
2019-05-22
<ul>
<li><strong>Application: </strong>Object Detection</li>
<li><strong>ML Task:</strong> ssd-mobilenet</li>
<li><strong>Framework:</strong> onnx</li>
<li><strong>Training Information: </strong></li>
<li><strong>Quality: 0.21%</strong></li>
<li><strong>Precision:</strong> fp32</li>
<li><strong>Is Quantized: </strong>no</li>
<li><strong>Is ONNX: yes</strong></li>
<li><strong>Dataset: </strong>http://images.cocodataset.org/zips/val2014.zip</li>
<li><strong>Source Model:</strong> http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_2018_01_28.tar.gz</li>
</ul>
converted via https://github.com/mlperf/inference/cloud/image_classification/tools/ssd-mobilenet-to-onnx.sh
https://doi.org/10.5281/zenodo.4735652
oai:zenodo.org:4735652
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.3163025
info:eu-repo/semantics/openAccess
Apache License 2.0
http://www.apache.org/licenses/LICENSE-2.0
mobilenet-v1-ssd300.onnx model
info:eu-repo/semantics/other
oai:zenodo.org:3733896
2020-06-22T14:09:53Z
user-mlperf
Huang, Po-Han
Forster, Christopher
2020-03-31
<p>This model is converted from the <a href="https://zenodo.org/record/3733868">MLPerf Inference BERT Tensorflow Model on SQuAD v1.1 dataset</a> using the script in MLPerf inference repo: https://github.com/mlperf/inference</p>
https://doi.org/10.5281/zenodo.3733896
oai:zenodo.org:3733896
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.3733895
info:eu-repo/semantics/openAccess
Apache License 2.0
http://www.apache.org/licenses/LICENSE-2.0
MLPerf Inference BERT PyTorch Model on SQuAD v1.1 dataset
info:eu-repo/semantics/other
oai:zenodo.org:4313974
2021-05-03T22:41:57Z
user-mlperf
https://github.com/mlperf/training/tree/master/image_classification
https://github.com/tensorflow/models/tree/master/official/resnet
2019-01-08
<ul>
<li><strong>Application: </strong>Image Classification</li>
<li><strong>ML Task:</strong> ResNet-50</li>
<li><strong>Framework:</strong> ONNX (via tensorflow)</li>
<li><strong>Training Information: </strong></li>
<li><strong>Quality:</strong> 76.53%</li>
<li><strong>Precision:</strong> fp32</li>
<li><strong>Is Quantized: </strong>no</li>
<li><strong>Is ONNX: yes</strong></li>
<li><strong>Dataset: </strong>http://www.image-net.org/challenges/LSVRC/2012/</li>
</ul>
How this model was created:
```
# Download the frozen tensorflow model from https://zenodo.org/deposit/2535873
pip install -U tf2onnx
python -m tf2onnx.convert --input resnet50_v1.pb --inputs input_tensor:0 --inputs-as-nchw input_tensor:0 --outputs ArgMax:0 --opset 12 --output resnet50_v1.onnx
```
https://doi.org/10.5281/zenodo.4313974
oai:zenodo.org:4313974
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.2535874
info:eu-repo/semantics/openAccess
Apache License 2.0
http://www.apache.org/licenses/LICENSE-2.0
Image Classification, resnet50, tensorflow, ONNX, Inference, Imagenet2012, Pretrained Model
resnet50.onnx model
info:eu-repo/semantics/other
oai:zenodo.org:2535873
2019-07-26T16:48:59Z
user-mlperf
https://github.com/mlperf/training/tree/master/image_classification
https://github.com/tensorflow/models/tree/master/official/resnet
2019-01-08
<ul>
<li><strong>Application: </strong>Image Classification</li>
<li><strong>ML Task:</strong> ResNet-50</li>
<li><strong>Framework:</strong> tensorflow</li>
<li><strong>Training Information: </strong></li>
<li><strong>Quality:</strong> 76.53%</li>
<li><strong>Precision:</strong> fp32</li>
<li><strong>Is Quantized: </strong>no</li>
<li><strong>Is ONNX: </strong>no</li>
<li><strong>Dataset: </strong>http://www.image-net.org/challenges/LSVRC/2012/</li>
</ul>
To re-create the model, train https://github.com/mlperf/training/tree/master/image_classification or https://github.com/tensorflow/models/tree/master/official/resnet.
Make sure the data format is NHWC.
Then re-export the model with batch_size=-1 (there is a script in https://github.com/mlperf/inference/cloud/image_classification/tools).
Finally, create a frozen tensorflow model from the exported saved_model.
https://doi.org/10.5281/zenodo.2535873
oai:zenodo.org:2535873
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.2535872
info:eu-repo/semantics/openAccess
Apache License 2.0
http://www.apache.org/licenses/LICENSE-2.0
Image Classification, resnet50, tensorflow, Inference, Imagenet2012, Pretrained Model
resnet50.tensorflow model
info:eu-repo/semantics/other
oai:zenodo.org:4313976
2021-05-03T22:47:03Z
user-mlperf
https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md
2019-04-10
<ul>
<li><strong>Application: </strong>Image Classification</li>
<li><strong>ML Task:</strong> mobilenetv1</li>
<li><strong>Framework:</strong> ONNX (via tensorflow)</li>
<li><strong>Training Information: </strong></li>
<li><strong>Quality:</strong> 70.9%</li>
<li><strong>Precision:</strong> fp32</li>
<li><strong>Is Quantized: </strong>no</li>
<li><strong>Is ONNX: yes</strong></li>
<li><strong>Dataset: </strong>http://www.image-net.org/challenges/LSVRC/2012/</li>
</ul>
script to convert from tensorflow:
https://github.com/mlcommons/inference/blob/master/vision/classification_and_detection/tools/convert-to-onnx.sh
https://doi.org/10.5281/zenodo.4313976
oai:zenodo.org:4313976
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.2635593
info:eu-repo/semantics/openAccess
Apache License 2.0
http://www.apache.org/licenses/LICENSE-2.0
Image Classification, mobilenetv1, tensorflow, ONNX, Inference, Imagenet2012, Pretrained Model
mobilenet-v1.onnx model
info:eu-repo/semantics/other
oai:zenodo.org:2530924
2019-06-25T17:08:34Z
user-mlperf
Cheng
2019-01-03
<p>GitHub link: </p>
<p>https://github.com/mlperf/inference/tree/master/cloud/translation/gnmt/tensorflow</p>
<p>Pre-trained TF GNMT model:</p>
<ul>
<li><strong>Application/ ML Task: </strong>Machine Translation</li>
<li><strong>Framework:</strong> TensorFlow</li>
<li><strong>Training Information: </strong>
<ul>
<li>https://github.com/mlperf/inference/blob/master/cloud/translation/gnmt/tensorflow/train_gnmt.txt</li>
</ul>
</li>
<li><strong>Quality:</strong> BLEU 22.9</li>
<li><strong>Precision:</strong> fp32</li>
<li><strong>Is Quantized: </strong>no</li>
<li><strong>Is ONNX: </strong>no</li>
<li><strong>Dataset: </strong>WMT16 English-German
<ul>
<li>http://www.statmt.org/wmt16/translation-task.html</li>
</ul>
</li>
</ul>
https://doi.org/10.5281/zenodo.2530924
oai:zenodo.org:2530924
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.2530923
info:eu-repo/semantics/openAccess
Apache License 2.0
http://www.apache.org/licenses/LICENSE-2.0
gnmt_4_layer.tensorflow
info:eu-repo/semantics/other
oai:zenodo.org:3361502
2019-08-06T16:41:08Z
user-mlperf
user-ck
The TensorFlow Authors
dividiti
2019-08-06
<p>SSD-MobileNet-v1 models used in MLPerf Inference:</p>
<ul>
<li>A TensorFlow model archived from the <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/README.md">TensorFlow Object Detection model zoo</a>.</li>
<li>A TFLite model obtained by <a href="http://dividiti.com">dividiti</a> from the above by using <a href="https://github.com/ctuning/ck-mlperf/tree/master/package/model-tflite-mlperf-ssd-mobilenet">instructions</a> adapted from <a href="https://medium.com/tensorflow/training-and-serving-a-realtime-mobile-object-detector-in-30-minutes-with-cloud-tpus-b78971cf1193">Google's blog</a>.</li>
</ul>
https://doi.org/10.5281/zenodo.3361502
oai:zenodo.org:3361502
Zenodo
https://zenodo.org/communities/mlperf
https://zenodo.org/communities/ck
https://doi.org/10.5281/zenodo.3361501
info:eu-repo/semantics/openAccess
Apache License 2.0
http://www.apache.org/licenses/LICENSE-2.0
MLPerf
Inference
SSD-MobileNet-v1
TensorFlow
TFLite
TF/TFLite SSD-MobileNet models (used in MLPerf Inference)
info:eu-repo/semantics/other
oai:zenodo.org:3439376
2019-09-17T23:06:17Z
user-mlperf
Matan Haroush
2019-06-21
<p><strong>Application: </strong> Single-stage Object Detection</p>
<p><strong>Base model:</strong> MobileNet-v1</p>
<p><strong>Framework:</strong> tensorflow1.1</p>
<p><strong>Training Information: </strong>weights were fine-tuned using TF fake quantization nodes</p>
<p><strong>Quality:</strong> The COCO mAP(IoU=0.50:0.95) on 5000 validation images is 23.0%</p>
<p><strong>Precision:</strong> 8-bit precision</p>
<p><strong>Is Quantized: </strong>Yes, via fake quantization: the weights are stored as float32 but take at most 256 unique values.</p>
<p><strong>Dataset: </strong>COCO val-2017</p>
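The fake-quantization scheme described above (float32 storage, at most 256 unique values) can be illustrated with a short NumPy sketch. This is an illustrative affine quantize-dequantize round trip, not the exact TensorFlow fake-quantization implementation; the function name is hypothetical.

```python
import numpy as np

def fake_quantize(w, num_bits=8):
    # Quantize to an integer grid, then immediately dequantize back to
    # float32: the weights stay float but take at most 2**num_bits values.
    qmin, qmax = 0, 2**num_bits - 1
    scale = (w.max() - w.min()) / (qmax - qmin)
    zero_point = qmin - w.min() / scale
    q = np.clip(np.round(w / scale + zero_point), qmin, qmax)
    return ((q - zero_point) * scale).astype(np.float32)

w = np.random.randn(3, 3, 64, 64).astype(np.float32)
fq = fake_quantize(w)
# fq is float32 but has at most 256 distinct values.
```

Per-tensor fake quantization like this lets standard float32 inference kernels run the model unchanged while still modeling 8-bit rounding error during fine-tuning.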
https://doi.org/10.5281/zenodo.3439376
oai:zenodo.org:3439376
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.3252083
info:eu-repo/semantics/openAccess
Apache License 2.0
http://www.apache.org/licenses/LICENSE-2.0
mobilenet-v1-ssd300.tensorflow (8bit quantized and fine-tuned)
info:eu-repo/semantics/other
oai:zenodo.org:3899484
2020-06-22T14:09:35Z
user-mlperf
Cheng, Christine
2020-06-17
<p>Please read the readme.txt in the zip file for more information.</p>
<p>There is no accuracy validation done with this model.</p>
<ul>
<li><strong>Application: </strong><em>Question Answering</em></li>
<li><strong>ML Task:</strong> Language</li>
<li><strong>Framework:</strong> Tensorflow</li>
<li><strong>Training Information: </strong>see readme.txt in the zip file</li>
<li><strong>Quality:</strong> NA</li>
<li><strong>Precision:</strong> FP32</li>
<li><strong>Is Quantized: </strong>no</li>
<li><strong>Is ONNX: </strong>no</li>
<li><strong>Dataset: </strong>Squad v1.1</li>
</ul>
<p><strong>Version: </strong>MLPerf v0.7 Inference</p>
<p><strong>Keywords: </strong>Application, Language, Squad v1.1, Tensorflow, Inference, Pretrained Model</p>
https://doi.org/10.5281/zenodo.3899484
oai:zenodo.org:3899484
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.3899483
info:eu-repo/semantics/openAccess
Apache License 2.0
http://www.apache.org/licenses/LICENSE-2.0
Application
Language
Squad v1.1
Tensorflow
Inference
Pretrained Model
MLPerf Inference MobileBERT Tensorflow Model on SQuAD v1.1 dataset
info:eu-repo/semantics/other
oai:zenodo.org:3923445
2020-06-30T14:33:08Z
user-mlperf
Chukka, Ramesh
2020-06-30
<p><strong>Application</strong>: Image Classification<br>
<strong>ML Task</strong>: MobileNetEdge<br>
<strong>Framework</strong>: Tensorflow-lite<br>
<strong>Training Information</strong>:<br>
<strong>Quality</strong>: <br>
<strong>Precision</strong>: FP32<br>
<strong>Is Quantized</strong>: No<br>
<strong>Is ONNX</strong>: No<br>
<strong>Dataset</strong>: ImageNet</p>
https://doi.org/10.5281/zenodo.3923445
oai:zenodo.org:3923445
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.3923444
info:eu-repo/semantics/openAccess
Apache License 2.0
http://www.apache.org/licenses/LICENSE-2.0
mobilenetedge
tensorflow lite
FP32
MobileNetEdge Tensorflow-lite model in FP32 precision
info:eu-repo/semantics/other
oai:zenodo.org:3229031
2020-06-22T14:10:02Z
user-mlperf
Shirron, Dan
2019-05-26
<p>Pre-trained WaveNet model for the MLPerf inference repository</p>
https://doi.org/10.5281/zenodo.3229031
oai:zenodo.org:3229031
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.3229030
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Pre-trained wavenet model
info:eu-repo/semantics/other
oai:zenodo.org:3228411
2019-07-26T16:44:17Z
user-mlperf
mlperf
2019-05-24
<ul>
<li><strong>Application: </strong>Object Detection</li>
<li><strong>ML Task:</strong> ssd-resnet34</li>
<li><strong>Framework:</strong> onnx</li>
<li><strong>Training Information: </strong></li>
<li><strong>Quality:</strong> 0.20 (COCO mAP)</li>
<li><strong>Precision:</strong> fp32</li>
<li><strong>Is Quantized: </strong>no</li>
<li><strong>Is ONNX:</strong> yes</li>
<li><strong>Dataset: </strong>http://images.cocodataset.org/zips/val2014.zip (resized to 1200x1200)</li>
<li><strong>Source Model:</strong> <a href="https://github.com/mlperf/inference/tree/master/cloud/single_stage_detector">https://github.com/mlperf/inference/tree/master/cloud/single_stage_detector</a></li>
</ul>
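The dataset entry above notes that the COCO val2014 images are resized to the fixed 1200x1200 input this ssd-resnet34 model expects. A minimal NumPy-only sketch of such a resize, assuming nearest-neighbour sampling for illustration (the MLPerf reference preprocessing uses a proper image library with bilinear interpolation):

```python
import numpy as np

def resize_nearest(img, out_h=1200, out_w=1200):
    # Nearest-neighbour resize of an HWC uint8 image to a fixed size.
    # Hypothetical helper for illustration only.
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source col for each output col
    return img[rows[:, None], cols[None, :]]

img = np.zeros((480, 640, 3), dtype=np.uint8)  # a typical COCO image shape
out = resize_nearest(img)
# out has shape (1200, 1200, 3), ready for the model's fixed input size.
```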
Created from the PyTorch model using the instructions at https://github.com/BowenBao/inference/tree/master/cloud/single_stage_detector/pytorch#6-onnx
https://doi.org/10.5281/zenodo.3228411
oai:zenodo.org:3228411
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.3228407
info:eu-repo/semantics/openAccess
Apache License 2.0
http://www.apache.org/licenses/LICENSE-2.0
resnet34-ssd1200 onnx model
info:eu-repo/semantics/other
oai:zenodo.org:3163026
2021-05-03T22:55:15Z
user-mlperf
mlperf
2019-05-22
<ul>
<li><strong>Application: </strong>Object Detection</li>
<li><strong>ML Task:</strong> ssd-mobilenet</li>
<li><strong>Framework:</strong> onnx</li>
<li><strong>Training Information: </strong></li>
<li><strong>Quality:</strong> 0.21 (COCO mAP)</li>
<li><strong>Precision:</strong> fp32</li>
<li><strong>Is Quantized: </strong>no</li>
<li><strong>Is ONNX:</strong> yes</li>
<li><strong>Dataset: </strong>http://images.cocodataset.org/zips/val2014.zip</li>
<li><strong>Source Model:</strong> http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_2018_01_28.tar.gz</li>
</ul>
Converted via https://github.com/mlperf/inference/cloud/image_classification/tools/ssd-mobilenet-to-onnx.sh
https://doi.org/10.5281/zenodo.3163026
oai:zenodo.org:3163026
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.3163025
info:eu-repo/semantics/openAccess
Apache License 2.0
http://www.apache.org/licenses/LICENSE-2.0
mobilenet-v1-ssd300.onnx model
info:eu-repo/semantics/other
oai:zenodo.org:3228408
2019-07-26T16:44:16Z
user-mlperf
mlperf
2019-05-24
<ul>
<li><strong>Application: </strong>Object Detection</li>
<li><strong>ML Task:</strong> ssd-resnet34</li>
<li><strong>Framework:</strong> onnx</li>
<li><strong>Training Information: </strong></li>
<li><strong>Quality:</strong> 0.20 (COCO mAP)</li>
<li><strong>Precision:</strong> fp32</li>
<li><strong>Is Quantized: </strong>no</li>
<li><strong>Is ONNX:</strong> yes</li>
<li><strong>Dataset: </strong>http://images.cocodataset.org/zips/val2014.zip (resized to 1200x1200)</li>
<li><strong>Source Model:</strong> <a href="https://github.com/mlperf/inference/tree/master/cloud/single_stage_detector">https://github.com/mlperf/inference/tree/master/cloud/single_stage_detector</a></li>
</ul>
Created from the PyTorch model using the instructions at https://github.com/BowenBao/inference/tree/master/cloud/single_stage_detector/pytorch#6-onnx
https://doi.org/10.5281/zenodo.3228408
oai:zenodo.org:3228408
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.3228407
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Trained ssd-resnet34 onnx model for MLPerf Cloud Inference
info:eu-repo/semantics/other
oai:zenodo.org:3662521
2020-06-22T14:10:03Z
user-mlperf
Ryan Leary
Marek Wawrzos
Sam Davis
2020-02-11
<p>Pre-trained RNN-T model for MLPerf Inference</p>
https://doi.org/10.5281/zenodo.3662521
oai:zenodo.org:3662521
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.3662520
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Pre-trained RNN-T model
info:eu-repo/semantics/other
oai:zenodo.org:3733910
2020-06-22T14:09:53Z
user-mlperf
Huang, Po-Han
Forster, Christopher
2020-03-31
<p>This model is converted from the <a href="https://zenodo.org/record/3733868">MLPerf Inference BERT Tensorflow Model on SQuAD v1.1 dataset</a> using the script in the MLPerf inference repo: https://github.com/mlperf/inference</p>
https://doi.org/10.5281/zenodo.3733910
oai:zenodo.org:3733910
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.3733909
info:eu-repo/semantics/openAccess
Apache License 2.0
http://www.apache.org/licenses/LICENSE-2.0
MLPerf Inference BERT ONNX Model on SQuAD v1.1 dataset
info:eu-repo/semantics/other
oai:zenodo.org:2535875
2021-05-03T22:41:54Z
user-mlperf
https://github.com/mlperf/training/tree/master/image_classification
https://github.com/tensorflow/models/tree/master/official/resnet
2019-01-08
<ul>
<li><strong>Application: </strong>Image Classification</li>
<li><strong>ML Task:</strong> ResNet-50</li>
<li><strong>Framework:</strong> ONNX (via tensorflow)</li>
<li><strong>Training Information: </strong></li>
<li><strong>Quality:</strong> 76.53%</li>
<li><strong>Precision:</strong> fp32</li>
<li><strong>Is Quantized: </strong>no</li>
<li><strong>Is ONNX:</strong> yes</li>
<li><strong>Dataset: </strong>http://www.image-net.org/challenges/LSVRC/2012/</li>
</ul>
Take the frozen TensorFlow model from https://zenodo.org/deposit/2535873 and convert it to ONNX with https://github.com/onnx/tensorflow-onnx
https://doi.org/10.5281/zenodo.2535875
oai:zenodo.org:2535875
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.2535874
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Image Classification, resnet50, tensorflow, ONNX, Inference, Imagenet2012, Pretrained Model
Trained resnet50 ONNX Model for MLPerf Cloud Inference
info:eu-repo/semantics/other
oai:zenodo.org:4735647
2021-05-03T22:42:00Z
user-mlperf
https://github.com/mlperf/training/tree/master/image_classification
https://github.com/tensorflow/models/tree/master/official/resnet
2019-01-08
<ul>
<li><strong>Application: </strong>Image Classification</li>
<li><strong>ML Task:</strong> ResNet-50</li>
<li><strong>Framework:</strong> ONNX (via tensorflow)</li>
<li><strong>Training Information: </strong></li>
<li><strong>Quality:</strong> 76.53%</li>
<li><strong>Precision:</strong> fp32</li>
<li><strong>Is Quantized: </strong>no</li>
<li><strong>Is ONNX:</strong> yes</li>
<li><strong>Dataset: </strong>http://www.image-net.org/challenges/LSVRC/2012/</li>
</ul>
How this model was created:
```
# Download the frozen TensorFlow model from https://zenodo.org/deposit/2535873
pip install -U tf2onnx
pip install -U tf2onnx
python -m tf2onnx.convert --input resnet50_v1.pb --inputs input_tensor:0 --inputs-as-nchw input_tensor:0 --outputs ArgMax:0 --opset 12 --output resnet50_v1.onnx
```
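The `--inputs-as-nchw` flag in the conversion command above exists because TensorFlow frozen graphs typically use NHWC (batch, height, width, channels) layout, while the ONNX convention is NCHW; the flag makes the exported graph accept NCHW input by inserting the equivalent of a transpose. A minimal NumPy illustration of the layout relationship:

```python
import numpy as np

# NHWC tensor as a TensorFlow graph would consume it.
nhwc = np.arange(1 * 4 * 4 * 3, dtype=np.float32).reshape(1, 4, 4, 3)

# The equivalent NCHW tensor, as the exported ONNX model expects.
nchw = nhwc.transpose(0, 3, 1, 2)

# Transposing back recovers the original tensor exactly.
roundtrip = nchw.transpose(0, 2, 3, 1)
```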
https://doi.org/10.5281/zenodo.4735647
oai:zenodo.org:4735647
Zenodo
https://zenodo.org/communities/mlperf
https://doi.org/10.5281/zenodo.2535874
info:eu-repo/semantics/openAccess
Apache License 2.0
http://www.apache.org/licenses/LICENSE-2.0
Image Classification, resnet50, tensorflow, ONNX, Inference, Imagenet2012, Pretrained Model
resnet50.onnx model
info:eu-repo/semantics/other