Dataset Open Access
Sticky Pi -- Machine Learning Data, Configuration and Models

Quentin Geissmann (ORCID: 0000-0001-6546-4306), University of British Columbia
Published by Zenodo, 2021-04-12 | DOI: 10.5281/zenodo.4680119 (a version of 10.5281/zenodo.4680118) | https://zenodo.org/record/4680119
Keywords: insect traps, behavioral ecology
License: Creative Commons Attribution 4.0 International (Open Access)

**Dataset for the Machine Learning section of the Sticky Pi project (https://doc.sticky-pi.com/)**

Contains the datasets for the three algorithms described in the publication: the Universal Insect Detector, the Siamese Insect Matcher and the Insect Tuboid Classifier.

**Universal Insect Detector**

`universal_insect_detector/` contains the training/validation data, the configuration files used to train the model, and the model as trained and used for the publication.

- `data/` – a set of SVG images, each embedding the raw JPEG image and a set of non-intersecting polygons drawn around the labelled insects (a reading sketch is given after the Siamese Insect Matcher section)
- `output/`
  - `model_final.pth` – the model as trained for the publication
- `config/`
  - `config.yaml` – the configuration file defining the hyperparameters used to train the model as well as the taxonomic labels
  - `mask_rcnn_R_101_C4_3x.yaml` – the base configuration file from which `config.yaml` is derived

**Siamese Insect Matcher**

`siamese_insect_matcher/` contains the training/validation data, the configuration files used to train the model, and the model as trained and used for the publication.

- `data/` – a set of SVG images, each embedding two raw JPEG images stacked vertically, corresponding to two frames of a series. Each predicted insect is labelled as a polygon, and insects labelled as the same instance in both frames are grouped (i.e. placed in the same SVG group). The filename of each image is `<device>.<datetime_frame_1>.<datetime_frame_2>.svg` (parsed in the sketch below).
- `output/`
  - `model_final.pth` – the model as trained for the publication
- `config/`
  - `config.yaml` – the configuration file defining the hyperparameters used to train the model
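The record does not describe the internal structure of these annotation SVGs, so the sketch below is a guess based on common SVG conventions: the photograph embedded as a base64 `data:` URI in an `<image>` element and each labelled insect stored as a `<polygon>`. The helper names, and the assumption that the Siamese Insect Matcher datetime strings contain no extra dots, are illustrative rather than part of the dataset.

```python
# Minimal sketch: read one annotation SVG and recover the embedded JPEG and
# the labelled insect polygons. Assumes Inkscape-style embedding (base64
# data URI in an <image> element) and <polygon> annotation elements; the
# actual files may use <path> elements or different attributes.
import base64
import io
import xml.etree.ElementTree as ET

from PIL import Image  # pip install pillow

SVG_NS = "{http://www.w3.org/2000/svg}"
XLINK_NS = "{http://www.w3.org/1999/xlink}"


def read_annotation_svg(path):
    root = ET.parse(path).getroot()

    # Decode the embedded photograph: data:image/jpeg;base64,<payload>
    image_el = root.find(f".//{SVG_NS}image")
    href = image_el.get(f"{XLINK_NS}href") or image_el.get("href")
    photo = Image.open(io.BytesIO(base64.b64decode(href.split(",", 1)[1])))

    # Each insect as a list of (x, y) vertices
    polygons = [
        [tuple(map(float, pt.split(","))) for pt in poly.get("points").split()]
        for poly in root.iter(f"{SVG_NS}polygon")
    ]
    return photo, polygons


def parse_sim_filename(filename):
    # siamese_insect_matcher/data/ files: <device>.<datetime_frame_1>.<datetime_frame_2>.svg
    # (assumes the datetime strings themselves contain no '.')
    device, dt_frame_1, dt_frame_2 = filename.rsplit(".svg", 1)[0].split(".")
    return device, dt_frame_1, dt_frame_2
```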
**Insect Tuboid Classifier**

`insect_tuboid_classifier/` contains images of insect tuboids, a database file describing their taxonomy, a configuration file used to train the model, and the model as trained and used for the publication (a loading sketch is given after this section).

- `data/`
  - `database.db` – an SQLite file with a single table, `ANNOTATIONS`, which maps the unique identifier of each tuboid (`tuboid_id`) to a set of manually annotated taxonomic variables
  - A directory tree of the form `<series_id>/<tuboid_id>/`. Each terminal directory contains:
    - `tuboid.jpg` – a JPEG image made of 224 x 224 tiles representing all the shots in a tuboid, left to right, top to bottom (it may be padded with empty tiles)
    - `metadata.txt` – a CSV text file with the columns:
      - `parrent_image_id` – `<device>.<UTC_datetime>`
      - `X` – the X coordinate of the object centroid
      - `Y` – the Y coordinate of the object centroid
      - `scale` – the scaling factor applied between the original image and the 224 x 224 tile (a value > 1 means the image was enlarged)
    - `context.jpg` – a representation of the first whole image of the series, with a box around the first tuboid shot (for debugging/labelling purposes)
- `output/`
  - `model_final.pth` – the model as trained for the publication
- `config/`
  - `config.yaml` – the configuration file defining the hyperparameters used to train the model as well as the taxonomic labels
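As a rough illustration of how this layout can be consumed, the sketch below reads the `ANNOTATIONS` table from `database.db`, parses one tuboid's `metadata.txt` (assuming one CSV row per shot and no header line, neither of which the record states explicitly), and cuts `tuboid.jpg` back into its 224 x 224 shots. The function names and the `DATA_DIR` path are illustrative, not part of the dataset.

```python
# Minimal sketch: join each tuboid's manual taxonomic annotation with its
# per-shot metadata and the individual 224 x 224 shots from the tile mosaic.
import csv
import sqlite3
from pathlib import Path

from PIL import Image  # pip install pillow

DATA_DIR = Path("insect_tuboid_classifier/data")
TILE = 224  # tile size of the tuboid.jpg mosaics


def load_annotations(db_path=DATA_DIR / "database.db"):
    # One row per tuboid, keyed by tuboid_id; the taxonomic columns are
    # whatever the ANNOTATIONS table defines.
    with sqlite3.connect(db_path) as con:
        con.row_factory = sqlite3.Row
        return {row["tuboid_id"]: dict(row)
                for row in con.execute("SELECT * FROM ANNOTATIONS")}


def load_tuboid(tuboid_dir):
    # metadata.txt: one CSV row per shot (drop the first row here if the
    # files turn out to carry a header line).
    with open(tuboid_dir / "metadata.txt", newline="") as f:
        shots = list(csv.reader(f))

    # Slice the mosaic left to right, top to bottom, keeping only as many
    # tiles as there are metadata rows (the rest is empty padding).
    mosaic = Image.open(tuboid_dir / "tuboid.jpg")
    cols = mosaic.width // TILE
    tiles = [mosaic.crop(((i % cols) * TILE, (i // cols) * TILE,
                          (i % cols + 1) * TILE, (i // cols + 1) * TILE))
             for i in range(len(shots))]
    return shots, tiles
```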
| | All versions | This version |
|---|---|---|
| Views | 210 | 128 |
| Downloads | 505 | 276 |
| Data volume | 1.1 TB | 435.8 GB |
| Unique views | 187 | 118 |
| Unique downloads | 162 | 38 |