Published June 14, 2021 | Version 1.0
Conference paper | Open Access

In space image processing using AI embedded on system on module: example of OPS-SAT cloud segmentation

  • 1. IRT Saint Exupery / Thales Alenia Space
  • 2. IRT Saint Exupery
  • 3. Thales Research Technology / Univ. Côte d'Azur / LEAT / CNRS
  • 4. IRT Saint Exupery / Inria
  • 5. IRT Saint Exupery / MyDataModels
  • 6. Elsys Design
  • 7. IRT Saint Exupery / ActiveEon

Description

During the OBPDC-2020 conference held last year, we presented the paper “Onboard image processing using AI to reduce data transmission: example of OPS-SAT cloud segmentation”.

In that paper, we explained how we implemented three Artificial Neural Networks (ANNs) on the OPS-SAT FPGA to perform cloud segmentation based on:
- A classical LeNet-5 architecture,
- A fully convolutional architecture,
- A hybrid convolutional / spiking architecture.
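For illustration, the second family of architectures, a fully convolutional network producing a per-pixel cloud probability map, can be sketched in plain NumPy. This is a hypothetical miniature design with random weights, not the flight architecture:

```python
import numpy as np

def conv2d(x, w, b):
    """'Same'-padded 2-D convolution: x (H, W, Cin), w (kh, kw, Cin, Cout), b (Cout,)."""
    kh, kw = w.shape[:2]
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2), (0, 0)))
    out = np.zeros(x.shape[:2] + (w.shape[3],))
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            # Contract the (kh, kw, Cin) patch against the kernel.
            out[i, j] = np.tensordot(xp[i:i + kh, j:j + kw], w, axes=3) + b
    return out

def tiny_fcn(img, rng):
    """Hypothetical two-layer fully convolutional net: per-pixel cloud probability."""
    w1, b1 = rng.normal(0, 0.1, (3, 3, 1, 4)), np.zeros(4)   # 3x3 conv, 4 filters
    w2, b2 = rng.normal(0, 0.1, (1, 1, 4, 1)), np.zeros(1)   # 1x1 conv to 1 output map
    h = np.maximum(conv2d(img, w1, b1), 0.0)                 # ReLU
    return 1.0 / (1.0 + np.exp(-conv2d(h, w2, b2)[..., 0]))  # sigmoid mask

rng = np.random.default_rng(0)
mask = tiny_fcn(rng.random((8, 8, 1)), rng)   # (8, 8) map of cloud probabilities
```

Because every layer is convolutional, the same weights apply to any image size, which suits a design constrained to very few parameters.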

Cloud segmentation is a useful onboard service to filter out unnecessary data and to preserve the limited storage and bandwidth of nanosats. This service is also compatible with OPS-SAT's spatial resolution and with the number of logic cells available in its Cyclone V FPGA.

In the OBPDC-2020 paper, we detailed several challenges we had to tackle to achieve the OPS-SAT implementation, specifically:
- Dataset engineering, made difficult by the fact that no actual OPS-SAT images were available at the time of ANN training,
- ANN architecture selection, which was almost entirely driven by the capabilities of the execution target and required very small designs,
- Hardware acceleration of the trained ANNs, using a VHDL-based solution specifically developed to target the OPS-SAT FPGA on its Cyclone V System on Chip.

In continuity with the OBPDC-2020 paper, we propose for the OBDP-2021 conference to report on the in-flight inference of our ANNs on the FPGA, which is probably a world first. We will discuss the different parameters affecting the overall performance measured onboard OPS-SAT, and present, at least for one reference ANN, the impact of each deployment step on the inference metrics (in full precision on CPU, quantized on the validation board, and in-flight). We will then propose relevant improvements.
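As a concrete example of such a metric, the per-image intersection-over-union (IoU) between a predicted cloud mask and a reference mask can be computed as follows. This is an illustrative choice; the abstract does not specify which segmentation metrics were used:

```python
import numpy as np

def iou(pred, ref):
    """Intersection-over-union between two boolean cloud masks."""
    pred, ref = np.asarray(pred, bool), np.asarray(ref, bool)
    union = np.logical_or(pred, ref).sum()
    if union == 0:            # both masks empty: perfect agreement by convention
        return 1.0
    return np.logical_and(pred, ref).sum() / union

# Comparing, e.g., a full-precision mask against a quantized one:
full = np.array([[1, 1], [0, 0]], dtype=bool)
quant = np.array([[1, 0], [0, 0]], dtype=bool)
score = iou(full, quant)      # 1 common pixel / 2 pixels in the union = 0.5
```

Evaluating the same metric at each deployment stage (CPU full precision, quantized on the validation board, in-flight) isolates how much each step degrades the segmentation.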

We will especially analyze the generalization capability of the trained ANNs on real OPS-SAT images. Since these images display a wide variety of solar irradiance conditions and acquisition geometries, we tested different kinds of pre-processing to handle this sensor behavior. We will therefore explain how we used the first OPS-SAT images to create the new training dataset.
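One simple family of such pre-processing is per-image radiometric normalization; a percentile-based contrast stretch, for instance, makes frames acquired under very different irradiance comparable. The sketch below shows one plausible option, not necessarily the pre-processing actually flown:

```python
import numpy as np

def percentile_stretch(img, lo=2.0, hi=98.0):
    """Map the [lo, hi] percentile range of the image to [0, 1], clipping outliers.
    This brings under- and over-exposed frames onto a common radiometric scale."""
    p_lo, p_hi = np.percentile(img, [lo, hi])
    return np.clip((img - p_lo) / (p_hi - p_lo + 1e-8), 0.0, 1.0)

rng = np.random.default_rng(1)
dark = 0.1 * rng.random((16, 16))          # under-exposed frame
bright = 0.6 + 0.3 * rng.random((16, 16))  # well-lit frame
norm_dark, norm_bright = percentile_stretch(dark), percentile_stretch(bright)
```

After the stretch, both frames span roughly the same [0, 1] range, so a network trained on one illumination regime has a better chance of generalizing to the other.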

We will also discuss the challenge of ANN quantization for inference on the FPGA, with potential overflow or underflow depending on the selected arithmetic, applied not only to the weights and biases but also to all feature-map computations. We will report the results of the solutions we tested, and we will propose further improvements.
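The overflow problem can be illustrated with a signed fixed-point quantizer that saturates out-of-range values instead of wrapping. This is a generic sketch; the actual on-board number formats and bit widths are assumptions here:

```python
import numpy as np

def fixed_point(x, int_bits=3, frac_bits=4):
    """Quantize to signed fixed-point with 1 sign bit, int_bits integer and
    frac_bits fractional bits, saturating (not wrapping) on overflow."""
    scale = 2 ** frac_bits
    qmin, qmax = -(2 ** (int_bits + frac_bits)), 2 ** (int_bits + frac_bits) - 1
    return np.clip(np.round(np.asarray(x, dtype=float) * scale), qmin, qmax) / scale

# A small weight only suffers rounding error, but a large feature-map
# activation exceeds the representable range and saturates:
w_q = fixed_point(0.1)     # 0.1  -> 2/16   = 0.125
a_q = fixed_point(10.0)    # 10.0 -> 127/16 = 7.9375 (saturated, not wrapped)
```

Whether weights, biases, and each feature map fit a given format depends on their dynamic range, which is why the arithmetic must be chosen per tensor rather than once for the whole network.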

The resulting pre- and post-processing times on the HPS, and the inference time on the FPGA, will be measured in-flight and compared to the on-ground results.
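On the HPS side, such per-stage timings can be collected with a simple wall-clock harness of the following kind. This is a generic sketch; the real experiment's instrumentation is not described in this abstract:

```python
import time
import statistics

def time_stage(fn, repeats=20):
    """Median wall-clock duration of one processing stage, in milliseconds.
    The median is robust to occasional scheduling jitter on a busy CPU."""
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1e3)
    return statistics.median(samples)

# E.g. timing a dummy pre-processing stage:
pre_ms = time_stage(lambda: sum(i * i for i in range(10_000)))
```

Running the same harness on the ground and in-flight around each stage (pre-processing, FPGA inference, post-processing) gives directly comparable per-stage figures.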

Finally, some interesting information will be provided on the process that allowed, in coordination with the ESOC team, uploading the full experiment onboard OPS-SAT: code for the Hard Processor System (HPS) part, and bitstreams for the FPGA.

Depending on OPS-SAT availability before the conference, we may also present some improvements we intend to implement. In particular, we want to test an evolutionary algorithm as a complementary or competing solution to the tiny Artificial Neural Networks.

Files

12.01 OBDP2021_Feresin_PPT.pdf
