Published October 23, 2017 | Version v1
Conference paper · Open Access

Multi-modal Deep Learning Approach for Flood Detection


In this paper we propose a multi-modal deep learning approach to detect floods in social media posts. Social media posts typically contain metadata and/or visual information, both of which we exploit for flood detection. The model combines a Convolutional Neural Network, which extracts visual features from the image, with a bidirectional Long Short-Term Memory network, which extracts semantic features from the textual metadata. We validate the method on images extracted from Flickr that contain both visual information and metadata, and compare the results obtained using both modalities, visual information only, and metadata only. This work has been done in the context of the MediaEval Multimedia Satellite Task.
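The fusion described in the abstract can be sketched as follows: a CNN branch encodes the image, a bidirectional LSTM encodes the metadata tokens, and the two feature vectors are concatenated before a binary flood / no-flood classifier. This is a minimal PyTorch sketch; the layer sizes, vocabulary size, and the small CNN stand-in are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MultiModalFloodNet(nn.Module):
    """Hypothetical sketch: CNN for visual features, BiLSTM for textual
    metadata, fused by concatenation (dimensions are illustrative)."""

    def __init__(self, vocab_size=1000, embed_dim=32, hidden_dim=64):
        super().__init__()
        # Visual branch: tiny stand-in for a CNN feature extractor
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, hidden_dim),
        )
        # Textual branch: embedding + bidirectional LSTM over metadata tokens
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim // 2,
                              batch_first=True, bidirectional=True)
        # Fusion + binary flood / no-flood classifier
        self.classifier = nn.Linear(hidden_dim * 2, 2)

    def forward(self, image, tokens):
        vis = self.cnn(image)                 # (B, hidden_dim)
        emb = self.embed(tokens)              # (B, T, embed_dim)
        _, (h, _) = self.bilstm(emb)          # h: (2, B, hidden_dim // 2)
        txt = torch.cat([h[0], h[1]], dim=1)  # (B, hidden_dim)
        fused = torch.cat([vis, txt], dim=1)  # (B, 2 * hidden_dim)
        return self.classifier(fused)

# Forward pass on a dummy batch of 2 images and 12-token metadata sequences
model = MultiModalFloodNet()
logits = model(torch.randn(2, 3, 64, 64), torch.randint(0, 1000, (2, 12)))
print(logits.shape)  # torch.Size([2, 2])
```

Concatenation is the simplest late-fusion strategy; it lets either modality be dropped at inference time by zeroing its branch, which matches the paper's comparison of both-modality, visual-only, and metadata-only results.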




Additional details


Funding: European Commission – I-REACT: Improving Resilience to Emergencies through Advanced Cyber Technologies (grant no. 700256)