Lunar Reconnaissance Orbiter Imagery for LROCNet Moon Classifier
Authors/Creators
- 1. NASA Jet Propulsion Laboratory, California Institute of Technology
Contributors
Data collectors:
Other:
Project manager:
Project members:
- 1. Self
- 2. NASA Jet Propulsion Laboratory, California Institute of Technology
- 3. NASA Jet Propulsion Laboratory
- 4. Swiss Federal Institute of Technology
Description
Summary
We provide the imagery used to train LROCNet, our convolutional neural network classifier for orbital imagery of the Moon. Images are divided into train, validation, and test zip files, each containing class-specific sub-folders. There are three classes: "fresh crater", "old crater", and "none"; the classes are described in detail in the attached labeling guide.
Directory Contents
We include the labeling guide and the training, validation, and testing data. The training data is split across several archives to avoid upload timeouts. A minimal loading sketch follows the directory listing below.
- LROC_Labeling_Intro_for_release.ppt: Labeling guide
- val: Validation images divided into class sub-folders
- ejecta: "fresh crater" class
- oldcrater: "old crater" class
- none: "none" class
- test: Testing images divided into class sub-folders
- ejecta: "fresh crater" class
- oldcrater: "old crater" class
- none: "none" class
- ejecta_train: Training images of "fresh crater" class
- oldcrater_train: Training images of "old crater" class
- none_train1-4: Training images of "none" class (divided into 4 just for uploading)
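For convenience, here is a minimal PyTorch loading sketch. It assumes the val and test archives have been extracted in place, so each split directory contains the ejecta, oldcrater, and none sub-folders; the paths, batch size, and grayscale conversion are illustrative choices, not part of the release.

```python
# Minimal loading sketch (assumes val/ and test/ are extracted with the
# class sub-folders ejecta/, oldcrater/, and none/). Paths are illustrative.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

to_tensor = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),  # collapse to one channel; NAC imagery is panchromatic
    transforms.ToTensor(),
])

val_set = datasets.ImageFolder("val", transform=to_tensor)
test_set = datasets.ImageFolder("test", transform=to_tensor)

val_loader = DataLoader(val_set, batch_size=64, shuffle=False)
test_loader = DataLoader(test_set, batch_size=64, shuffle=False)

print(val_set.classes)  # ['ejecta', 'none', 'oldcrater'] (ImageFolder sorts folder names alphabetically)
```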
Data Description
We use CDR (Calibrated Data Record) browse imagery (50% resolution) from the Lunar Reconnaissance Orbiter's Narrow Angle Cameras (NACs). The data we get from the NACs are 5-km swaths at nominal orbit, so we perform a saliency detection step to find surface features of interest. A detector developed for Mars HiRISE imagery (Wagstaff et al., 2021; see References) worked well for our purposes after being updated for the LROC NAC image resolution. We use this detector to create a set of image chipouts (small 227x227 cutouts) from each larger image, sampling across the lunar globe.
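The saliency detection itself follows the Wagstaff et al. detector and is not re-released here. Purely as an illustration, the sketch below shows how fixed-size chips could be cut around detected feature centers; the extract_chips helper and its centers argument are hypothetical and not part of the actual pipeline.

```python
import numpy as np

CHIP = 227  # chip edge length in pixels

def extract_chips(image: np.ndarray, centers):
    """Cut CHIP x CHIP cutouts around (row, col) feature centers.

    `centers` stands in for the output of a saliency detector; this helper
    is illustrative only and skips candidates too close to the image edge.
    """
    half = CHIP // 2
    chips = []
    for r, c in centers:
        r0, c0 = r - half, c - half
        if r0 < 0 or c0 < 0 or r0 + CHIP > image.shape[0] or c0 + CHIP > image.shape[1]:
            continue  # a full-size chip would run off the image
        chips.append(image[r0:r0 + CHIP, c0:c0 + CHIP])
    return chips
```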
Class Labeling
We select classes of interest based on what is visible at NAC resolution, consulting with scientists and performing a literature review. Initially, we define seven classes: "fresh crater", "old crater", "overlapping craters", "irregular mare patches", "rockfalls and landslides", "of scientific interest", and "none".
Using the Zooniverse platform, we set up a labeling tool and labeled 5,000 images. We found that "fresh crater" makes up 11% of the data and "old crater" 18%, with the vast majority labeled "none". Due to the limited number of examples of the other classes, we reduce our initial class set to "fresh crater" (with impact ejecta), "old crater", and "none".
We divide the images into train/validation/test sets, ensuring that no image swath spans multiple sets.
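The exact split procedure is not specified here beyond the swath constraint; the sketch below shows one way to enforce it by assigning whole swaths to a single split. The swath_of callable and the 80/10/10 fractions are assumptions for illustration only.

```python
import random
from collections import defaultdict

def split_by_swath(chip_names, swath_of, fractions=(0.8, 0.1, 0.1), seed=0):
    """Assign whole swaths to train/val/test so no swath spans two sets.

    `swath_of` maps a chip filename to its parent NAC swath/product ID
    (e.g., parsed from the filename; the naming convention is hypothetical).
    """
    by_swath = defaultdict(list)
    for name in chip_names:
        by_swath[swath_of(name)].append(name)

    swaths = sorted(by_swath)
    random.Random(seed).shuffle(swaths)
    n_train = int(fractions[0] * len(swaths))
    n_val = int(fractions[1] * len(swaths))

    splits = {"train": [], "val": [], "test": []}
    for i, swath in enumerate(swaths):
        key = "train" if i < n_train else ("val" if i < n_train + n_val else "test")
        splits[key].extend(by_swath[swath])
    return splits
```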
Data Augmentation
Using PyTorch, we apply the following augmentations to the training set only: horizontal flip, vertical flip, rotation by 90/180/270 degrees, and brightness adjustment (factor between 0.5 and 2). In addition, we use weighted sampling so that each class is sampled with equal probability. The training set included here does not contain augmented images, since augmentation was performed on the fly within PyTorch.
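Below is a minimal sketch of this kind of augmentation and class-balanced sampling in PyTorch, assuming the training archives have been merged into a single train/ directory with class sub-folders; the batch size and exact transform composition are assumptions, not the released training configuration.

```python
import torch
import torchvision.transforms.functional as TF
from torch.utils.data import DataLoader, WeightedRandomSampler
from torchvision import datasets, transforms

# Flips, discrete 90/180/270-degree rotations, and brightness in [0.5, 2],
# applied on the fly to the training split only.
train_tf = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomChoice([
        transforms.Lambda(lambda img: img),                  # no rotation
        transforms.Lambda(lambda img: TF.rotate(img, 90)),
        transforms.Lambda(lambda img: TF.rotate(img, 180)),
        transforms.Lambda(lambda img: TF.rotate(img, 270)),
    ]),
    transforms.ColorJitter(brightness=(0.5, 2.0)),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("train", transform=train_tf)

# Weighted sampling so each class is drawn with equal probability.
targets = torch.tensor(train_set.targets)
class_counts = torch.bincount(targets)
sample_weights = (1.0 / class_counts.float())[targets]
sampler = WeightedRandomSampler(sample_weights, num_samples=len(sample_weights), replacement=True)
train_loader = DataLoader(train_set, batch_size=64, sampler=sampler)
```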
Acknowledgements
We thank the volunteers who provided annotations for this data set, as well as the others who contributed to this work (see the Contributor list). We also thank the PDS Imaging Node for its support of this work.
The research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004).
CL#22-4763
© 2022 California Institute of Technology. Government sponsorship acknowledged.
Files
Total size: 105.9 MB (9 files)

| MD5 checksum | Size |
|---|---|
| 2286098569380a4c4824dbe5905051fd | 5.4 MB |
| 070777369d853f375174f2f4f2a83556 | 19.5 MB |
| 8eb7379f92adbdb473c8b773e68ac6aa | 7.4 MB |
| 7ce47fdab4fcf4cc9660c9bc1f200fea | 14.5 MB |
| 0e93d39b86a3e24c72b3c21693fd3083 | 11.2 MB |
| 592bd7d2021d6e82e78ef2afa92806b3 | 13.6 MB |
| f90110f0f8ae964f7ca0bb8c0cb158bb | 10.1 MB |
| 901aad8a0ab5004212cffac41724d459 | 13.8 MB |
| 4e965c0033ba82916c61b00b90131b07 | 10.4 MB |
Additional details
Related works
- References: Dataset, DOI 10.5281/zenodo.4002935
References
- Kiri Wagstaff, Steven Lu, Emily Dunkel, Kevin Grimes, Brandon Zhao, Jesse Cai, Shoshanna B. Cole, Gary Doran, Raymond Francis, Jake Lee, and Lukas Mandrake. Mars Image Content Classification: Three Years of NASA Deployment and Recent Advances. Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence, 2021.
- https://pds-imaging.jpl.nasa.gov/documentation/documentation.html#(LRO)