DeepLabCut: markerless pose estimation of user-defined body parts with deep learning
Creators
- 1. Institute for Theoretical Physics and Werner Reichardt Centre for Integrative Neuroscience, Eberhard Karls Universität Tübingen, Tübingen, Germany; Department of Molecular & Cellular Biology and Center for Brain Science, Harvard University, Cambridge, MA, USA
- 2. Institute for Theoretical Physics and Werner Reichardt Centre for Integrative Neuroscience, Eberhard Karls Universität Tübingen, Tübingen, Germany
- 3. Department of Neuroscience and the Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- 4. Department of Molecular & Cellular Biology and Center for Brain Science, Harvard University, Cambridge, MA, USA
- 5. These authors jointly directed this work: Mackenzie Weygandt Mathis, Matthias Bethge. Institute for Theoretical Physics and Werner Reichardt Centre for Integrative Neuroscience, Eberhard Karls Universität Tübingen, Tübingen, Germany; The Rowland Institute at Harvard, Harvard University, Cambridge, MA, USA
- 6. Institute for Theoretical Physics and Werner Reichardt Centre for Integrative Neuroscience, Eberhard Karls Universität Tübingen, Tübingen, Germany; Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Bernstein Center for Computational Neuroscience, Tübingen, Germany; Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
Description
This data entry contains a public release of annotated mouse data from the DeepLabCut Nature Neuroscience paper. The trail-tracking behavior is part of an investigation into odor-guided navigation, in which one or more wild-type (C57BL/6J) mice run on a paper spool while following odor trails. These experiments were carried out by Alexander Mathis and Mackenzie Mathis in the Murthy lab at Harvard University.
Data were recorded at 30 Hz by two different cameras: a Point Grey Firefly (FMVU-03MTM-CS) at 640×480 pixels, and a Grasshopper 3 4.1 MP Mono USB3 Vision camera (CMOSIS CMV4000-3E12) at approximately 1,700×1,200 pixels. The latter images were cropped around the mice to generate frames of approximately 800×800 pixels.
Here we share 1,066 frames from multiple experimental sessions observing 7 different mice. Pranav Mamidanna labeled the snout, the tips of the left and right ears, and the base of the tail in the example images. The data are organized in the DeepLabCut 2.0 project structure, with images and annotations in the labeled-data folder. The folder names are pseudonymous codes indicating mouse ID and session ID, e.g. m4s1 = mouse 4, session 1.
Code for loading, visualizing, and training deep neural networks is available at https://github.com/DeepLabCut/DeepLabCut.
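As a sketch of how annotation files in this project structure can be inspected: DeepLabCut 2.0 stores labels as CSV/HDF files with a three-level column header (scorer, bodyparts, coords). The snippet below parses a small synthetic CSV in that layout with pandas — the coordinate values and file paths are made up for illustration, not taken from the dataset.

```python
import io
import pandas as pd

# Synthetic text mimicking the DeepLabCut 2.0 annotation CSV layout
# (three header rows: scorer / bodyparts / coords). The numbers and
# image paths here are illustrative only, not actual dataset values.
csv_text = """\
scorer,Pranav,Pranav,Pranav,Pranav
bodyparts,snout,snout,leftear,leftear
coords,x,y,x,y
labeled-data/m4s1/img001.png,183.0,92.5,201.4,88.0
labeled-data/m4s1/img002.png,185.2,95.1,203.0,90.3
"""

# Read with a three-level column MultiIndex; the first column holds
# the image paths and serves as the index.
df = pd.read_csv(io.StringIO(csv_text), header=[0, 1, 2], index_col=0)

# Level 1 of the column MultiIndex lists the labeled body parts.
bodyparts = list(df.columns.get_level_values(1).unique())
print(bodyparts)

# Individual annotations are addressed by (scorer, bodypart, coord).
snout_x_first_frame = df[("Pranav", "snout", "x")].iloc[0]
print(snout_x_first_frame)
```

The same `header=[0, 1, 2]` pattern applies when reading the real `CollectedData_*.csv` files in the labeled-data subfolders, though the exact columns depend on the project.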
Files

Name | Size | Checksum
---|---|---
openfield-Pranav-2018-08-20.zip | 227.0 MB | md5:1d8cb000d5950d89995694313c51bdc8
Additional details
References
- Dataset associated with Mathis, A., Mamidanna, P., Cury, K.M. et al. DeepLabCut: markerless pose estimation of user-defined body parts with deep learning. Nat Neurosci 21, 1281–1289 (2018). https://doi.org/10.1038/s41593-018-0209-y