Conference paper Open Access

CloudLSTM: A Recurrent Neural Model for Spatiotemporal Point-cloud Stream Forecasting

Chaoyun Zhang; Marco Fiore; Iain Murray; Paul Patras

This paper introduces CloudLSTM, a new branch of recurrent neural models tailored to forecasting over data streams generated by geospatial point-cloud sources. We design a Dynamic Point-cloud Convolution (DConv) operator as the core component of CloudLSTMs, which performs convolution directly over point-clouds and extracts local spatial features from sets of neighboring points that surround different elements of the input. This operator maintains the permutation invariance of sequence-to-sequence learning frameworks, while representing neighboring correlations at each time step -- an important aspect in spatiotemporal predictive learning. The DConv operator resolves the grid-structural data requirements of existing spatiotemporal forecasting models and can be easily plugged into traditional LSTM architectures with sequence-to-sequence learning and attention mechanisms. We apply our proposed architecture to two representative, practical use cases that involve point-cloud streams, i.e., mobile service traffic forecasting and air quality indicator forecasting. Our results, obtained with real-world datasets collected in diverse scenarios for each use case, show that CloudLSTM delivers accurate long-term predictions, outperforming a variety of competitor neural network models.
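The abstract describes DConv as a convolution applied directly over point-clouds, aggregating features from the neighbors of each input element while keeping the operation permutation-invariant. As a rough illustration only (not the authors' implementation, whose details are in the paper), the following NumPy sketch gathers each point's k nearest neighbors and applies a shared linear map; the function name `dconv` and the parameter `k` are illustrative assumptions.

```python
import numpy as np

def dconv(points, features, weights, k=3):
    """Toy neighbor-gathering convolution over a 2-D point cloud.

    points:   (N, 2) array of point coordinates
    features: (N, C_in) array of per-point features
    weights:  (k * C_in, C_out) weight matrix shared across all points
    Returns:  (N, C_out) array of output features
    """
    n = points.shape[0]
    # Pairwise squared Euclidean distances between all points
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    # Indices of the k nearest neighbors of each point (self included)
    nbr = np.argsort(d2, axis=1)[:, :k]
    # Concatenate each point's neighbor features, then apply shared weights
    gathered = features[nbr].reshape(n, -1)  # (N, k * C_in)
    return gathered @ weights                # (N, C_out)
```

Because neighbors are selected by distance rather than by position in the input sequence, reordering the input points simply reorders the output rows, which is the permutation-equivariance property the abstract highlights.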

This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement no. 101017109 "DAEMON", and from the Cisco University Research Program Fund (grant no. 2019-197006).
Files (618.8 kB)
aaai21_cloud-lstm.pdf (md5:9f12efb8cc29e9cb95295aa2e47f6ba9), 618.8 kB
                  All versions   This version
Views                       88             88
Downloads                   32             32
Data volume            19.8 MB        19.8 MB
Unique views                78             78
Unique downloads            30             30
