Style-Hallucinated Dual Consistency Learning for Domain Generalized Semantic Segmentation
Description
In this paper, we study the task of synthetic-to-real domain
generalized semantic segmentation, which aims to learn a model that
is robust to unseen real-world scenes using only synthetic data. The
large domain shift between synthetic and real-world data, which stems from
both the limited environmental variation of the synthetic source and the
distribution gap to real scenes, significantly hinders model
performance on unseen real-world scenes. In this work, we propose the
Style-HAllucinated Dual consistEncy learning (SHADE) framework to
handle such domain shift. Specifically, SHADE is constructed based on
two consistency constraints, Style Consistency (SC) and Retrospection
Consistency (RC). SC enriches the source situations and encourages the
model to learn consistent representation across style-diversified samples.
RC leverages real-world knowledge to prevent the model from overfitting
to synthetic data and thus largely keeps the representation consistent
between the synthetic and real-world models. Furthermore, we present
a novel style hallucination module (SHM) to generate style-diversified
samples that are essential to consistency learning. SHM selects basis
styles from the source distribution, enabling the model to dynamically
generate diverse and realistic samples during training. Experiments show
that our SHADE yields significant improvement and outperforms
state-of-the-art methods by 5.05% and 8.35% in average mIoU over three
real-world datasets under the single- and multi-source settings, respectively.
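To make the style hallucination idea concrete, below is a minimal sketch of AdaIN-style feature re-stylization with basis styles. It assumes the basis styles are given as precomputed channel-wise means and standard deviations, and combines them with random convex (Dirichlet) weights; the function name, the Dirichlet sampler, and the way basis statistics are obtained are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def hallucinate_style(feat, basis_mu, basis_sigma, rng):
    """Re-stylize features with a random convex combination of basis styles.

    feat: (B, C, H, W) feature maps.
    basis_mu, basis_sigma: (K, C) channel statistics of K basis styles,
    assumed precomputed from source-domain features.
    """
    B, C = feat.shape[:2]
    # per-sample channel statistics of the input features
    mu = feat.mean(axis=(2, 3), keepdims=True)            # (B, C, 1, 1)
    sigma = feat.std(axis=(2, 3), keepdims=True) + 1e-6   # (B, C, 1, 1)
    # random convex weights over the K basis styles (illustrative choice)
    w = rng.dirichlet(np.ones(basis_mu.shape[0]), size=B)  # (B, K)
    new_mu = (w @ basis_mu).reshape(B, C, 1, 1)
    new_sigma = (w @ basis_sigma).reshape(B, C, 1, 1)
    # AdaIN-style renormalization: swap in the hallucinated statistics
    return new_sigma * (feat - mu) / sigma + new_mu

# usage: hypothetical backbone features, K=8 basis styles over C=64 channels
rng = np.random.default_rng(0)
feat = rng.standard_normal((2, 64, 16, 16))
out = hallucinate_style(feat, rng.standard_normal((8, 64)),
                        rng.random((8, 64)), rng)
```

In a consistency-learning setup, the original and the stylized features would then be passed through the same segmentation head, with a consistency term encouraging matching predictions across the style-diversified views.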
Files
136880530 (1).pdf (1.5 MB, md5:0c153ba88964a7a8d66a2b341db2543e)