Published July 29, 2021 | Version 2
Poster | Open Access

Dynamic Spectra Sequence Modelling with Transformers

  • 1. SUNY Fredonia
  • 2. UC Berkeley

Description

Breakthrough Listen collects an enormous volume of data containing many hidden signals. This vast amount of (unlabeled) data makes it well suited to unsupervised and/or self-supervised learning. In the past, we have mostly leveraged Computer Vision-based approaches for classification, feature extraction, and clustering. These were motivated by the fact that Computer Vision techniques are designed to pick out visual features such as straight lines (narrowband drifting signals) and curves (fast radio bursts). However, these attempts overlook the fact that dynamic spectra are inherently sequential, and that the two axes of the data (time and frequency) are not interchangeable. In other words, the spectrum at one timestep is strongly correlated with the spectra from the timesteps immediately preceding it. With that in mind, we present our results from training one such sequence model, the Transformer, on dynamic spectra, and compare its performance on various downstream tasks to previous models.
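To make the idea concrete, the sketch below shows one way a Transformer can be applied to dynamic spectra: each timestep's spectrum is treated as one token in a sequence along the time axis, and the model is trained with a self-supervised masked-reconstruction objective. This is an illustrative PyTorch sketch only, not the architecture from the poster; the layer sizes, sequence length, and masking scheme are assumptions chosen for readability.

```python
# Minimal sketch (PyTorch): a Transformer encoder over a dynamic spectrum,
# where each timestep's spectrum is one token. Hyperparameters below are
# illustrative assumptions, not values from the poster.
import torch
import torch.nn as nn

class SpectraTransformer(nn.Module):
    def __init__(self, n_freq_bins=256, d_model=128, n_heads=4, n_layers=4, max_len=512):
        super().__init__()
        self.embed = nn.Linear(n_freq_bins, d_model)        # per-timestep spectrum -> token
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))  # learned positional embedding
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.reconstruct = nn.Linear(d_model, n_freq_bins)  # predict the original spectrum

    def forward(self, x):
        # x: (batch, n_timesteps, n_freq_bins) -- a batch of dynamic spectra
        batch, n_time, _ = x.shape
        tokens = self.embed(x) + self.pos[:, :n_time]
        encoded = self.encoder(tokens)
        return self.reconstruct(encoded)

# Self-supervised objective: mask a fraction of timesteps and reconstruct them
# from their temporal context (a BERT-style masked-prediction task).
def masked_reconstruction_loss(model, spectra, mask_ratio=0.15):
    batch, n_time, _ = spectra.shape
    mask = torch.rand(batch, n_time, device=spectra.device) < mask_ratio
    corrupted = spectra.clone()
    corrupted[mask] = 0.0                                   # zero out masked timesteps
    pred = model(corrupted)
    return nn.functional.mse_loss(pred[mask], spectra[mask])

model = SpectraTransformer()
dummy = torch.randn(8, 64, 256)   # 8 dynamic spectra, 64 timesteps, 256 frequency channels
loss = masked_reconstruction_loss(model, dummy)
loss.backward()
```

Because the objective needs no labels, a model like this can be pretrained on the full unlabeled archive and its learned representations then reused for downstream classification, feature extraction, or clustering.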

Files

2021 OOTO Conference Poster.pdf (302.5 kB)
md5:bc0e947934f9e57c0f9c2e0d5e0d0915