Messina Nicola
Falchi Fabrizio
Gennaro Claudio
Amato Giuseppe
2021-08-05
<p>This paper describes the system used by the AIMH Team to approach SemEval-2021 Task 6. We propose an approach that relies on an architecture based on the transformer model to process multimodal content (text and images) in memes. Our architecture, called DVTT (Double Visual Textual Transformer), approaches Subtasks 1 and 3 of Task 6 as multi-label classification problems, where the text and/or images of the meme are processed, and the probabilities of the presence of each possible persuasion technique are returned as a result. DVTT uses two complete transformer networks that operate on text and images and are mutually conditioned. One modality acts as the main one, while the other intervenes to enrich it, yielding two distinct modes of operation. The outputs of the two transformers are merged by averaging the inferred probabilities for each possible label, and the overall network is trained end-to-end with a binary cross-entropy loss.</p>
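The late-fusion step described in the abstract (averaging the per-label probabilities of the two transformers, trained with binary cross-entropy) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names are hypothetical, and each branch is assumed to output one logit per persuasion-technique label.

```python
import math

def sigmoid(x):
    """Map a logit to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def fuse_probabilities(text_logits, image_logits):
    """Average the per-label probabilities inferred by the two branches,
    as in the DVTT late-fusion scheme described in the abstract."""
    p_text = [sigmoid(z) for z in text_logits]
    p_image = [sigmoid(z) for z in image_logits]
    return [(a + b) / 2.0 for a, b in zip(p_text, p_image)]

def binary_cross_entropy(probs, targets, eps=1e-7):
    """Multi-label BCE, averaged over labels; targets are 0/1 per label."""
    total = 0.0
    for p, t in zip(probs, targets):
        p = min(max(p, eps), 1.0 - eps)  # clamp for numerical stability
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(probs)

# Toy example: three candidate persuasion techniques.
fused = fuse_probabilities([2.0, -1.0, 0.0], [1.5, -2.0, 0.0])
loss = binary_cross_entropy(fused, [1, 0, 1])
```

In practice each list of logits would come from a full transformer branch, and the loss would be backpropagated through both branches jointly (end-to-end training), which this sketch does not show.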
https://doi.org/10.18653/v1/2021.semeval-1.140
oai:zenodo.org:5575854
eng
Zenodo
https://zenodo.org/communities/ai4eu
https://zenodo.org/communities/ai4media
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), August 5–6, 2021
Artificial Intelligence
multimodal classification
AIMH at SemEval-2021 - Task 6: multimodal classification using an ensemble of transformer models
info:eu-repo/semantics/conferencePaper