Published August 11, 2021 | Version v1
Poster | Open Access

Dynamic Spectra Sequence Modelling with Transformers

  • 1. SUNY Fredonia
  • 2. UC Berkeley

Description

Breakthrough Listen has an abundance of data containing many hidden signals. This vast amount of (unlabeled) data lends itself to exciting possibilities for unsupervised and/or self-supervised learning. In the past, we have mostly leveraged Computer Vision-based approaches for classification, feature extraction, and clustering, motivated by the fact that Computer Vision techniques are built to pick out visual features such as straight lines (narrowband drifting signals) and curves (fast radio bursts). However, these attempts overlook the fact that dynamic spectra are inherently sequential, and that the two axes of the data (time and frequency) are not interchangeable: the spectrum at one timestep is strongly correlated with the spectra from the timesteps immediately preceding it. With that in mind, we present our results from training one such sequence model, the Transformer, on dynamic spectra, and compare its performance on various downstream tasks to that of previous models.
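To make the sequential framing concrete, the sketch below illustrates one way a dynamic spectrum can be fed to a Transformer: each timestep's spectrum (one row of the waterfall) is treated as a token, and a causal encoder predicts the next timestep's spectrum as a self-supervised objective. This is a minimal, hypothetical PyTorch example; the model dimensions, the class and variable names, and the next-step-prediction objective are illustrative assumptions, not the architecture or training setup described on the poster.

```python
# Minimal sketch (not the poster's code): a dynamic spectrum of shape
# (time, frequency) is modelled as a sequence of per-timestep spectra.
# All hyperparameters below are assumptions chosen for illustration.

import torch
import torch.nn as nn


class SpectraTransformer(nn.Module):
    """Causal Transformer over a (time, frequency) dynamic spectrum."""

    def __init__(self, n_freq_bins: int = 256, d_model: int = 128,
                 n_heads: int = 4, n_layers: int = 4, max_time: int = 1024):
        super().__init__()
        # Project each timestep's spectrum into the model dimension.
        self.embed = nn.Linear(n_freq_bins, d_model)
        # Learned positional embeddings along the time axis.
        self.pos = nn.Parameter(torch.zeros(1, max_time, d_model))
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Self-supervised head: predict the next timestep's spectrum.
        self.head = nn.Linear(d_model, n_freq_bins)

    def forward(self, spectra: torch.Tensor) -> torch.Tensor:
        # spectra: (batch, time, n_freq_bins)
        t = spectra.shape[1]
        x = self.embed(spectra) + self.pos[:, :t]
        # Causal mask so each timestep attends only to preceding timesteps.
        mask = nn.Transformer.generate_square_subsequent_mask(t)
        h = self.encoder(x, mask=mask)
        return self.head(h)


# Toy training step on random data standing in for real waterfalls
# (batch of 8 spectrograms, 64 timesteps, 256 frequency channels).
model = SpectraTransformer()
optim = torch.optim.Adam(model.parameters(), lr=1e-4)
x = torch.randn(8, 64, 256)
pred = model(x[:, :-1])                       # predict timestep t+1 from timesteps <= t
loss = nn.functional.mse_loss(pred, x[:, 1:])
loss.backward()
optim.step()
```

Representations taken from such an encoder could then be reused for the downstream tasks mentioned above (classification, feature extraction, clustering), though the poster itself is the authoritative source for which tasks and baselines were compared.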

Files

2021 WVU Conference Poster.pdf (1.2 MB)
