Cascaded Cross MLP-Mixer GANs for Cross-View Image Translation
Description
Previous cross-view image translation methods that directly adopt a simple encoder-decoder or U-Net structure struggle to generate a high-quality image at the target view, especially when the views differ drastically or the scene undergoes severe deformation. To ease this problem, we propose a novel two-stage framework with a new Cascaded Cross MLP-Mixer (CrossMLP) sub-network in the first stage and a refined pixel-level loss in the second stage. In the first stage, the CrossMLP sub-network learns the latent transformation cues between the image code and the semantic map code via our novel CrossMLP blocks; coarse results are then generated progressively under the guidance of those cues. In the second stage, we design a refined pixel-level loss that eases the noisy semantic label problem with more reasonable regularization in a more compact fashion, leading to better optimization. Extensive experiments on the Dayton [40] and CVUSA [42] datasets show that our method generates significantly better results than state-of-the-art methods. The source code and trained models are available at
https://github.com/Amazingren/CrossMLP.
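The CrossMLP block described above can be pictured as an MLP-Mixer-style unit operating jointly on two latent codes. Below is a minimal, hypothetical NumPy sketch, not the authors' implementation: all names (`cross_mlp_block`, `two_layer_mlp`, the parameter layout) are assumptions. Each code is token-mixed and channel-mixed by two-layer MLPs with residual connections, and the semantic cue is injected into the image code by residual addition:

```python
import numpy as np

def two_layer_mlp(x, w1, b1, w2, b2):
    """Two-layer MLP; tanh stands in for GELU for brevity."""
    return np.tanh(x @ w1 + b1) @ w2 + b2

def init_mlp(d_in, d_hidden, rng):
    """Random weights for a two-layer MLP mapping d_in -> d_hidden -> d_in."""
    return (rng.standard_normal((d_in, d_hidden)) * 0.02, np.zeros(d_hidden),
            rng.standard_normal((d_hidden, d_in)) * 0.02, np.zeros(d_in))

def cross_mlp_block(img, sem, p):
    """One illustrative CrossMLP-style block (hypothetical sketch).

    img, sem: (tokens, channels) latent codes for the image and the
    semantic map. Each code is token-mixed (MLP across the token axis)
    and channel-mixed (MLP across the channel axis); the semantic cue
    then guides the image code via a residual connection.
    """
    # Token mixing: transpose so the MLP mixes information across tokens.
    img = img + two_layer_mlp(img.T, *p["img_tok"]).T
    sem = sem + two_layer_mlp(sem.T, *p["sem_tok"]).T
    # Channel mixing: MLP applied along the channel axis.
    img = img + two_layer_mlp(img, *p["img_ch"])
    sem = sem + two_layer_mlp(sem, *p["sem_ch"])
    # Cross cue: inject the semantic code into the image code.
    return img + sem, sem

# Toy usage: 16 tokens, 8 channels per code.
rng = np.random.default_rng(0)
T, C = 16, 8
p = {"img_tok": init_mlp(T, 2 * T, rng), "sem_tok": init_mlp(T, 2 * T, rng),
     "img_ch": init_mlp(C, 2 * C, rng), "sem_ch": init_mlp(C, 2 * C, rng)}
img = rng.standard_normal((T, C))
sem = rng.standard_normal((T, C))
out, sem_out = cross_mlp_block(img, sem, p)
print(out.shape)  # (16, 8)
```

In the full framework such blocks would be cascaded, with the output of one block feeding the next so that the transformation cues are refined progressively.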
Files
BinBMVC21.pdf (5.6 MB)
md5:4cc76a35674d04d5926306ff40b885ea