Published October 26, 2023 | Version v1
Poster | Open Access

Model-Aware Deep Learning for the Clustering of Hyperspectral Images with Context Preservation

Description

Model-aware deep clustering embeds domain knowledge into a deep neural network; for example, instead of solving a sparse coding problem, a neural network directly outputs the data representation matrix. Deep subspace clustering is an effective model-aware deep clustering approach for high-dimensional data and provides state-of-the-art results in clustering hyperspectral images (HSI). However, these methods typically suffer from the size of the so-called self-representation matrix, which grows quadratically with the number of pixels in the image to be clustered. This results in significant demands on computing power and storage, making it very challenging to apply these methods to large-scale image data. The recently proposed Efficient Deep Embedded Subspace Clustering instead learns the bases of the different subspaces, which requires far fewer training parameters than self-representation-based models. In this work, we extend and generalize this approach to account for both local and non-local structures in HSI data. We propose a structured model-aware deep subspace clustering network for hyperspectral images in which contextual information is captured through appropriately defined loss functions. A self-supervised loss captures the local spatial structure by considering the predictions of spatially neighboring pixels, while the non-local structure is incorporated through a contrastive loss that encourages pixels with small feature distances to share the same prediction and pixels with large feature distances to have distinct predictions. Experiments on real-world hyperspectral datasets demonstrate clear advantages over state-of-the-art subspace clustering methods.
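The two contextual losses described above can be illustrated with a short sketch. The snippet below is not the poster's code: it assumes soft cluster assignments `p` of shape (H, W, K) produced by the clustering head and per-pixel features `z` of shape (H, W, D); the pair-sampling scheme and the thresholds `tau_pos`/`tau_neg` are illustrative choices, not parameters taken from the work.

```python
# Minimal sketch (under the assumptions stated above) of a local spatial
# agreement loss and a non-local contrastive loss for soft clustering.
import torch
import torch.nn.functional as F

def local_spatial_loss(p):
    """Self-supervised loss: each pixel's soft assignment should agree
    with the assignments of its spatial neighbors (right and bottom
    neighbors cover all 4-connected pairs)."""
    eps = 1e-8
    loss = F.kl_div((p[:, :-1] + eps).log(), p[:, 1:], reduction="batchmean")
    loss = loss + F.kl_div((p[:-1, :] + eps).log(), p[1:, :], reduction="batchmean")
    return loss

def nonlocal_contrastive_loss(p, z, tau_pos=0.3, tau_neg=0.7, num_pairs=4096):
    """Contrastive loss: pixels with small feature distance are pushed
    toward the same prediction, pixels with large feature distance toward
    distinct predictions. Thresholds are illustrative hyperparameters."""
    zf = F.normalize(z.reshape(-1, z.shape[-1]), dim=1)   # (N, D) unit features
    pf = p.reshape(-1, p.shape[-1])                        # (N, K) soft assignments
    n = zf.shape[0]
    # sample random pixel pairs to keep the cost linear in the image size
    idx_a = torch.randint(0, n, (num_pairs,))
    idx_b = torch.randint(0, n, (num_pairs,))
    feat_dist = 1.0 - (zf[idx_a] * zf[idx_b]).sum(dim=1)   # cosine distance
    pred_sim = (pf[idx_a] * pf[idx_b]).sum(dim=1)          # assignment agreement in [0, 1]
    pos = (feat_dist < tau_pos).float()                    # non-local "same cluster" pairs
    neg = (feat_dist > tau_neg).float()                    # non-local "different cluster" pairs
    loss = -(pos * torch.log(pred_sim + 1e-8)
             + neg * torch.log(1.0 - pred_sim + 1e-8)).sum()
    return loss / (pos + neg).sum().clamp(min=1)
```

In a training loop these two terms would simply be weighted and added to the main clustering objective, with the weights treated as tunable hyperparameters.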

Files (1.6 MB)

fears_2023_poster.pdf (1.6 MB)
md5:cf3569353717c93bba1eebb0607d7c75