Published June 25, 2022 | Version v1
Journal article | Open Access

Single-Layer Transformers for More Accurate Early Exits with Less Overhead

  • 1. DIGIT, Department of Electrical and Computer Engineering, Aarhus University, Denmark

Description

Deploying deep learning models in time-critical applications with limited computational resources, for instance in edge computing systems and IoT networks, is a challenging task that often relies on dynamic inference methods such as early exiting. In this paper, we introduce a novel early-exit architecture based on vision transformers, as well as a fine-tuning strategy that significantly increases the accuracy of early exit branches compared to conventional approaches while introducing less overhead. Through extensive experiments on image and audio classification as well as audiovisual crowd counting, we show that our method works for both classification and regression problems, and in both single- and multi-modal settings. Additionally, we introduce a novel method for integrating audio and visual modalities within early exits in audiovisual data analysis, which can lead to more fine-grained dynamic inference.
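The dynamic inference idea underlying early exiting can be illustrated with a minimal sketch: intermediate exit branches produce predictions, and inference stops at the first branch whose confidence clears a threshold. The function name, the toy stages, and the threshold below are hypothetical illustrations, not the paper's actual architecture or fine-tuning strategy.

```python
# Minimal sketch of confidence-based early exiting (illustrative only;
# `early_exit_predict`, the toy stages, and the 0.9 threshold are
# hypothetical, not taken from the paper).

def early_exit_predict(x, stages, threshold=0.9):
    """Run exit branches in order; return at the first branch whose
    top class probability reaches `threshold`, skipping later layers."""
    probs = None
    for depth, stage in enumerate(stages, start=1):
        probs = stage(x)                # class-probability list from this exit
        if max(probs) >= threshold:     # confident enough: exit early
            return probs, depth
    return probs, len(stages)           # fall through to the final exit

# Toy stages standing in for exits at increasing depth; the second one
# is already confident, so the third is never evaluated for this input.
stages = [
    lambda x: [0.5, 0.5],
    lambda x: [0.95, 0.05],
    lambda x: [0.99, 0.01],
]

probs, depth_used = early_exit_predict(None, stages)
print(depth_used)  # → 2
```

Easier inputs thus exit after fewer layers, which is what trades accuracy against computation in time-critical deployments.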

Notes

This work was partly funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No 957337, and by the Danish Council for Independent Research under Grant No. 9131-00119B. This publication reflects the authors' views only. The European Commission and the Danish Council for Independent Research are not responsible for any use that may be made of the information it contains.

Files (4.5 MB)

SinleLayer_ViT.pdf — 4.5 MB
md5:672e3e3115abab46783b7db3852580cf

Additional details

Related works

Is published in
Journal article: 10.1016/j.neunet.2022.06.038 (DOI)
Is supplemented by
Software: https://gitlab.au.dk/maleci/sl_vit (URL)

Funding

MARVEL – Multimodal Extreme Scale Data Analytics for Smart Cities Environments 957337
European Commission