
Published July 28, 2021 | Version 0.1
Software | Open

OpenCLIP

Description

Welcome to the initial release of open_clip, an open source implementation of OpenAI's CLIP (Contrastive Language-Image Pre-training).

The goal of this repository is to enable training models with contrastive image-text supervision, and to investigate their properties such as robustness to distribution shift. Our starting point is an implementation of CLIP that matches the accuracy of the original CLIP models when trained on the same dataset.
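Contrastive image-text supervision pairs each image in a batch with its caption and trains the model to score matching pairs above all mismatched ones. The core objective can be sketched as a symmetric cross-entropy over cosine-similarity logits. The snippet below is a minimal NumPy illustration of that idea, not open_clip's actual API; the function name and the fixed temperature value are assumptions for the example.

```python
import numpy as np

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric contrastive (InfoNCE-style) loss over paired embeddings.

    Illustrative sketch only: in CLIP the temperature is a learned
    parameter and the loss is computed on GPU with a deep-learning
    framework, not NumPy.
    """
    # L2-normalize so dot products become cosine similarities
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    # Pairwise similarity logits, scaled by the temperature
    logits = image_emb @ text_emb.T / temperature

    n = logits.shape[0]
    labels = np.arange(n)  # the matching text for image i is row/column i

    def cross_entropy(l):
        # Numerically stable log-softmax over each row
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        # Negative log-probability of the correct (diagonal) pairing
        return -log_probs[labels, labels].mean()

    # Average the image-to-text and text-to-image directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

With perfectly aligned pairs (each image embedding identical to its caption embedding and orthogonal to the rest), the loss is near zero; shuffling the captions relative to the images drives it up, which is the signal the training loop minimizes.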

Notes

If you use this software, please cite it using the metadata of this record.

Files (2.5 MB)

mlfoundations/open_clip-v0.1.zip (2.5 MB)
md5:207f84e0654a2cb3da7659f8deb3c6c5
