Dataset Open Access

A Convolutional Neural Network classifier identifies tree species in mixed-conifer forest from hyperspectral imagery

Geoffrey A Fricker; Jonathan Daniel Ventura; Jeffrey Wolf; Malcolm P. North; Frank W. Davis; Janet Franklin


Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:creator>Geoffrey A Fricker</dc:creator>
  <dc:creator>Jonathan Daniel Ventura</dc:creator>
  <dc:creator>Jeffrey Wolf</dc:creator>
  <dc:creator>Malcolm P. North</dc:creator>
  <dc:creator>Frank W. Davis</dc:creator>
  <dc:creator>Janet Franklin</dc:creator>
  <dc:date>2019-09-27</dc:date>
  <dc:description>Published online: https://www.mdpi.com/2072-4292/11/19/2326

DOI: 10.3390/rs11192326

Abstract:

In this study, we automate tree species classification and mapping using field-based training data, high spatial resolution airborne hyperspectral imagery, and a convolutional neural network (CNN) classifier. We tested our methods by identifying seven dominant tree species as well as dead standing trees in a mixed-conifer forest in the Southern Sierra Nevada Mountains, CA (USA) using training, validation, and testing datasets composed of spatially-explicit transects and plots sampled across a single strip of imaging spectroscopy. We also used a three-band ‘Red-Green-Blue’ pseudo true-color subset of the hyperspectral imagery strip to test the classification accuracy of a CNN model without the additional non-visible spectral data provided in the hyperspectral imagery. Our classifier is pixel-based rather than object-based, although we use three-dimensional structural information from airborne Light Detection and Ranging (LiDAR) to identify trees (points &gt; 5 m above the ground), and the classifier was applied only to image pixels thus identified as tree crowns. By training a CNN classifier using field data and hyperspectral imagery, we were able to accurately identify tree species and predict their distribution, as well as the distribution of tree mortality, across the landscape. Using a window size of 15 pixels and eight hidden convolutional layers, a CNN model classified the species of 713 individual trees from hyperspectral imagery with an average F-score of 0.87 and F-scores ranging from 0.67 to 0.95 depending on species. The CNN classification model performance increased from a combined F-score of 0.64 for the Red-Green-Blue model to a combined F-score of 0.87 for the hyperspectral model. The hyperspectral CNN model captures the species composition changes across ~700 m (1935 to 2630 m) of elevation from a lower-elevation mixed oak-conifer forest to a higher-elevation fir-dominated coniferous forest. High-resolution tree species maps can support forest ecosystem monitoring and management, and identifying dead trees aids landscape assessment of forest mortality resulting from drought, insects, and pathogens. We publicly provide our code to apply deep learning classifiers to tree species identification from geospatial imagery and field training data.

Digital publication of the training data polygons and hyperspectral imagery used in the manuscript "A Convolutional Neural Network classifier identifies tree species in mixed-conifer forest from hyperspectral imagery".

Code is available in a Jupyter Notebook and can be found here: https://github.com/jonathanventura/canopy

National Ecological Observatory Network. 2018. Provisional data downloaded from http://data.neonscience.org on 22 June 2018. Battelle, Boulder, CO, USA</dc:description>
  <dc:description>The data needed to replicate the experiment are available for download as two zipped files:
"NEON_D17_TEAK_DP1QA_20170627_181333_RGB_Reflectance.zip" (Imagery)
"CNN_LABELS_2019.zip" (Training Label Shapefiles)

* Note: The imagery is 5.5 GB (zipped).

All code used to run the analysis is located in a repository here:
https://github.com/jonathanventura/canopy

The only flightline needed to repeat our results is "NEON_D17_TEAK_DP1_20170627_181333".
If you download your own NEON data, the raw HDF5 files can be converted to a GeoTIFF using the R code found here: 
http://neonscience.github.io/neon-data-institute-2016//R/open-NEON-hdf5-functions/

Contact the National Ecological Observatory Network (NEON) to download the comparable imagery data files for all sites and collections: https://data.neonscience.org/home.</dc:description>
  <dc:identifier>https://zenodo.org/record/3470250</dc:identifier>
  <dc:identifier>10.5281/zenodo.3470250</dc:identifier>
  <dc:identifier>oai:zenodo.org:3470250</dc:identifier>
  <dc:language>eng</dc:language>
  <dc:relation>doi:10.5281/zenodo.3463588</dc:relation>
  <dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
  <dc:rights>https://creativecommons.org/licenses/by/4.0/legalcode</dc:rights>
  <dc:source>Remote Sensing 11(19)</dc:source>
  <dc:subject>shapefile</dc:subject>
  <dc:subject>tree species</dc:subject>
  <dc:subject>convolutional neural network</dc:subject>
  <dc:subject>hyperspectral imagery</dc:subject>
  <dc:subject>NEON</dc:subject>
  <dc:title>A Convolutional Neural Network classifier identifies tree species in mixed-conifer forest from hyperspectral imagery</dc:title>
  <dc:type>info:eu-repo/semantics/other</dc:type>
  <dc:type>dataset</dc:type>
</oai_dc:dc>
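
To give a concrete sense of the pixel-based classifier described in the abstract, the following is a minimal, hypothetical Keras sketch, not the authors' implementation (which is in the linked repository at https://github.com/jonathanventura/canopy): a 15 x 15 pixel window of hyperspectral bands is passed through eight hidden convolutional layers and classified into one of eight classes (seven species plus standing dead trees). The band count (426, the NEON AOP reflectance band count) and the filter widths are assumptions.

import tensorflow as tf
from tensorflow.keras import layers, models

N_BANDS = 426     # NEON AOP reflectance band count (assumption; not stated in this record)
WINDOW = 15       # window size reported in the abstract
N_CLASSES = 8     # seven tree species plus standing dead trees

def build_cnn():
    # Eight hidden convolutional layers, as reported in the abstract;
    # the filter counts below are illustrative only.
    model = models.Sequential([layers.Input(shape=(WINDOW, WINDOW, N_BANDS))])
    for filters in (32, 32, 64, 64, 128, 128, 256, 256):
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
    model.add(layers.GlobalAveragePooling2D())
    model.add(layers.Dense(N_CLASSES, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn()
model.summary()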
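
The abstract also notes that the classifier is applied only to pixels identified as tree crowns from LiDAR (points more than 5 m above the ground). A hedged sketch of that masking step is below, assuming a canopy height model co-registered with the hyperspectral strip; the filename "chm.tif" is hypothetical.

import numpy as np
import rasterio

# Read a canopy height model co-registered with the hyperspectral strip
# ("chm.tif" is a hypothetical filename; NEON distributes CHM products separately).
with rasterio.open("chm.tif") as src:
    chm = src.read(1)

# Keep only pixels more than 5 m above the ground, per the abstract,
# so the classifier is evaluated only on likely tree-crown pixels.
tree_mask = chm > 5.0
rows, cols = np.nonzero(tree_mask)
print(f"{tree_mask.sum()} candidate tree-crown pixels out of {tree_mask.size}")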
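
The record points to NEON's R tutorial for converting the raw reflectance HDF5 files to GeoTIFF. For Python users, a rough equivalent sketch using h5py and rasterio is given below; the filename, internal dataset path, corner coordinates, and CRS are assumptions about the NEON file layout, so check the file contents (or the linked R code) before relying on them.

import h5py
import numpy as np
import rasterio
from rasterio.transform import from_origin

H5_PATH = "NEON_D17_TEAK_DP1_20170627_181333_reflectance.h5"   # hypothetical filename
REFL_KEY = "TEAK/Reflectance/Reflectance_Data"                  # assumed internal dataset path

with h5py.File(H5_PATH, "r") as f:
    refl = f[REFL_KEY][:]                      # assumed shape: (rows, cols, bands)

rows, cols, bands = refl.shape
# The upper-left corner and 1 m pixel size are placeholders; read the real values
# from the file's map-info metadata before using the output for analysis.
transform = from_origin(west=300000.0, north=4100000.0, xsize=1.0, ysize=1.0)

with rasterio.open("reflectance.tif", "w", driver="GTiff",
                   height=rows, width=cols, count=bands,
                   dtype=str(refl.dtype), crs="EPSG:32611",     # UTM zone 11N (assumption)
                   transform=transform) as dst:
    dst.write(np.moveaxis(refl, -1, 0))        # rasterio expects (bands, rows, cols)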