Dataset Open Access
Tabak, Michael A.; Norouzzadeh, Mohammad Sadegh; Wolfson, David W.; Sweeney, Steven J.; Di Salvo, Paul A.; Miller, Ryan S.; Lewis, Jesse S.; Clune, Jeff; Brook, Ryan K.; Mandeville, Elizabeth G.; Lukacs, Paul M.; Moeller, Anna K.; Boughton, Raoul K.; Wight, Bethany; Beasley, James C.; Schlichting, Peter E.
<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/"
           xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:creator>Tabak, Michael A.</dc:creator>
  <dc:creator>Norouzzadeh, Mohammad Sadegh</dc:creator>
  <dc:creator>Wolfson, David W.</dc:creator>
  <dc:creator>Sweeney, Steven J.</dc:creator>
  <dc:creator>Di Salvo, Paul A.</dc:creator>
  <dc:creator>Miller, Ryan S.</dc:creator>
  <dc:creator>Lewis, Jesse S.</dc:creator>
  <dc:creator>Clune, Jeff</dc:creator>
  <dc:creator>Brook, Ryan K.</dc:creator>
  <dc:creator>Mandeville, Elizabeth G.</dc:creator>
  <dc:creator>Lukacs, Paul M.</dc:creator>
  <dc:creator>Moeller, Anna K.</dc:creator>
  <dc:creator>Boughton, Raoul K.</dc:creator>
  <dc:creator>Wight, Bethany</dc:creator>
  <dc:creator>Beasley, James C.</dc:creator>
  <dc:creator>Schlichting, Peter E.</dc:creator>
  <dc:date>2019-01-02</dc:date>
  <dc:description>Motion‐activated cameras ("camera traps") are increasingly used in ecological and management studies for remotely observing wildlife and are among the most powerful tools for wildlife research. However, studies involving camera traps produce millions of images that must be analysed, typically by visually inspecting each image, to extract data for ecological analyses. We trained machine learning models using convolutional neural networks with the ResNet‐18 architecture and 3,367,383 images to automatically classify wildlife species from camera trap images obtained from five states across the United States. We tested our model on an independent subset of images from the United States not seen during training, and on an out‐of‐sample (or "out‐of‐distribution" in the machine learning literature) dataset of ungulate images from Canada. We also tested the ability of our model to distinguish empty images from those containing animals in another out‐of‐sample dataset from Tanzania, whose faunal community was novel to the model. The trained model classified approximately 2,000 images per minute on a laptop computer with 16 gigabytes of RAM. It achieved 98% accuracy at identifying species in the United States, the highest accuracy of such a model to date. On the out‐of‐sample validation from Canada the model achieved 82% accuracy, and it correctly identified 94% of the images containing an animal in the dataset from Tanzania. We provide an R package (Machine Learning for Wildlife Image Classification) that allows users to (a) use the trained model presented here and (b) train their own model using classified images of wildlife from their studies. The use of machine learning to rapidly and accurately classify wildlife in camera trap images can facilitate non‐invasive sampling designs in ecological studies by reducing the burden of manually analysing images. Our R package makes these methods accessible to ecologists.</dc:description>
  <dc:description>raw_labels</dc:description>
  <dc:identifier>https://zenodo.org/record/5009425</dc:identifier>
  <dc:identifier>10.5061/dryad.st8f5n7</dc:identifier>
  <dc:identifier>oai:zenodo.org:5009425</dc:identifier>
  <dc:relation>doi:10.1111/2041-210x.13120</dc:relation>
  <dc:relation>url:https://zenodo.org/communities/dryad</dc:relation>
  <dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
  <dc:rights>https://creativecommons.org/publicdomain/zero/1.0/legalcode</dc:rights>
  <dc:title>Data from: Machine learning to classify animal species in camera trap images: applications in ecology</dc:title>
  <dc:type>info:eu-repo/semantics/other</dc:type>
  <dc:type>dataset</dc:type>
</oai_dc:dc>
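The record above is standard OAI-DC (Dublin Core wrapped in an `oai_dc:dc` container), so fields such as creators and identifiers can be extracted with ordinary namespace-aware XML tooling. A minimal sketch using Python's standard `xml.etree.ElementTree`; the snippet embeds a trimmed copy of the record for illustration, whereas in practice the XML would come from the repository's OAI-PMH endpoint:

```python
import xml.etree.ElementTree as ET

# Namespace prefixes used by the OAI-DC record.
NS = {
    "dc": "http://purl.org/dc/elements/1.1/",
    "oai_dc": "http://www.openarchives.org/OAI/2.0/oai_dc/",
}

# Trimmed copy of the record above, embedded here for illustration only.
record = """<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/">
  <dc:creator>Tabak, Michael A.</dc:creator>
  <dc:creator>Norouzzadeh, Mohammad Sadegh</dc:creator>
  <dc:identifier>https://zenodo.org/record/5009425</dc:identifier>
  <dc:identifier>10.5061/dryad.st8f5n7</dc:identifier>
  <dc:title>Data from: Machine learning to classify animal species in camera trap images: applications in ecology</dc:title>
</oai_dc:dc>"""

root = ET.fromstring(record)

# All dc:creator values, in document order.
creators = [el.text for el in root.findall("dc:creator", NS)]

# The record lists several dc:identifier values (URL, DOI, OAI id);
# pick out the bare DOI by its "10." prefix.
doi = next(el.text for el in root.findall("dc:identifier", NS)
           if el.text.startswith("10."))

print(creators[0])  # Tabak, Michael A.
print(doi)          # 10.5061/dryad.st8f5n7
```

The same `findall` pattern extends to `dc:description`, `dc:rights`, and the other repeated Dublin Core elements, since OAI-DC allows every element to occur multiple times.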
Data volume: 15.9 GB