Published August 14, 2023 | Version v1
Conference paper | Open Access

Domain Generalisation with Bidirectional Encoder Representations from Vision Transformers

  • 1. Dublin City University, Glasnevin, Dublin 9, Ireland

Description

Domain generalisation involves pooling knowledge from source domain(s) into a single model that can generalise to unseen target domain(s). Recent research in domain generalisation has faced challenges because deep learning models degrade when they encounter data distributions that differ from those they were trained on. Here we perform domain generalisation on out-of-distribution (OOD) vision benchmarks using vision transformers. We first examine four vision transformer architectures, namely ViT, LeViT, DeiT, and BEiT, on out-of-distribution data. As the Bidirectional Encoder representation from Image Transformers (BEiT) architecture performs best, we use it in further experiments on three benchmarks: PACS, Office-Home, and DomainNet. Our results show significant improvements in validation and test accuracy, and our implementation substantially narrows the gap between within-distribution and OOD performance.
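To illustrate the kind of fine-tuning setup the description refers to, the following is a minimal sketch (not the authors' released code) of adapting a pretrained BEiT classifier to PACS source domains and evaluating it on a held-out target domain. The HuggingFace checkpoint name, the local PACS directory layout, and all hyperparameters are illustrative assumptions rather than details taken from the paper.

# Minimal sketch, assuming PACS is stored locally as PACS/<domain>/<class>/<image>
# and that the HuggingFace checkpoint below is an acceptable stand-in for the
# pretrained BEiT weights used in the paper.
import torch
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms
from transformers import BeitForImageClassification

SOURCE_DOMAINS = ["art_painting", "cartoon", "photo"]   # train on these
TARGET_DOMAIN = "sketch"                                 # unseen OOD domain
NUM_CLASSES = 7                                          # PACS has 7 classes

# Preprocessing matching BEiT's default image processor (224x224, mean/std 0.5).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])

train_set = ConcatDataset(
    [datasets.ImageFolder(f"PACS/{d}", transform=preprocess) for d in SOURCE_DOMAINS]
)
test_set = datasets.ImageFolder(f"PACS/{TARGET_DOMAIN}", transform=preprocess)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = BeitForImageClassification.from_pretrained(
    "microsoft/beit-base-patch16-224-pt22k-ft22k",
    num_labels=NUM_CLASSES,
    ignore_mismatched_sizes=True,   # swap the pretrained head for a 7-class head
).to(device)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)  # illustrative settings
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
test_loader = DataLoader(test_set, batch_size=32)

# Fine-tune on the pooled source domains.
model.train()
for epoch in range(3):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        outputs = model(pixel_values=images, labels=labels)
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Evaluate on the unseen target domain (leave-one-domain-out protocol).
model.eval()
correct = total = 0
with torch.no_grad():
    for images, labels in test_loader:
        logits = model(pixel_values=images.to(device)).logits
        correct += (logits.argmax(dim=-1).cpu() == labels).sum().item()
        total += labels.size(0)
print(f"OOD accuracy on {TARGET_DOMAIN}: {correct / total:.3f}")

The leave-one-domain-out split (train on three PACS domains, test on the fourth) is the standard protocol for measuring the within-distribution versus OOD gap mentioned in the description.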

Notes

HR is funded under the ML-Labs SFI Centre for Research Training in Machine Learning (18/CRT/6183), and AS is part-funded by SFI [12/RC/2289_P2] at Insight, the SFI Research Centre for Data Analytics, at DCU.

Files (8.7 MB)

IMVIP_2023___Hamza.pdf
md5:349a4eb28535e08f86796ef20e3608be (8.6 MB)
md5:ded81d494074cea64fd4d96c4787d456 (83.9 kB)