Published January 1, 2024 | Version v3
Conference paper · Open Access

SpectralCLIP: Preventing Artifacts in Text-Guided Style Transfer from a Spectral Perspective

  • 1. University of Trento
  • 2. University of Modena and Reggio Emilia

Description

Owing to the power of vision-language foundation models, e.g., CLIP, the area of image synthesis has seen important recent advances. In particular, for style transfer, CLIP enables transferring more general and abstract styles without collecting style images in advance: the style can be described efficiently in natural language, and the result is optimized by maximizing the CLIP similarity between the text description and the stylized image. However, directly using CLIP to guide style transfer leads to undesirable artifacts (mainly written words and unrelated visual entities) spread over the image. In this paper, we propose SpectralCLIP, which is based on a spectral representation of the CLIP embedding sequence, where most of the common artifacts occupy specific frequencies. By masking the band that includes these frequencies, we can condition the generation process to adhere to the target style properties (e.g., color, texture, paint strokes) while excluding the generation of larger-scale structures corresponding to the artifacts. Experimental results show that SpectralCLIP effectively prevents the generation of artifacts, in both quantitative and qualitative terms, without impairing stylization quality. We also apply SpectralCLIP to text-conditioned image generation and show that it prevents written words from appearing in the generated images. Our code is available at this https URL.
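To make the band-masking idea above concrete, the following is a minimal PyTorch sketch of filtering a CLIP token-embedding sequence in the frequency domain and scoring it against a text embedding. It is an illustration under stated assumptions, not the paper's implementation: the function names, the mean pooling, and the band indices (band_lo, band_hi) are all hypothetical; the actual filtering layer and band selection are described in the paper and the linked repository.

    # Minimal sketch of spectral band-masking over a CLIP token sequence.
    # All names and the band (4, 12) are illustrative assumptions, not the
    # authors' implementation.
    import torch

    def band_mask_tokens(tokens: torch.Tensor, band_lo: int, band_hi: int) -> torch.Tensor:
        """Suppress a frequency band along the token-sequence axis.

        tokens: (batch, seq_len, dim) CLIP token embeddings.
        band_lo/band_hi: rFFT frequency indices to zero out.
        """
        spec = torch.fft.rfft(tokens, dim=1)            # (batch, seq_len//2+1, dim)
        spec[:, band_lo:band_hi, :] = 0                 # mask the artifact band
        return torch.fft.irfft(spec, n=tokens.shape[1], dim=1)

    def filtered_clip_score(image_tokens: torch.Tensor, text_emb: torch.Tensor,
                            band=(4, 12)) -> torch.Tensor:
        """Cosine similarity between band-filtered image tokens and a text embedding.

        Mean pooling over tokens is a simplifying assumption for this sketch.
        """
        filt = band_mask_tokens(image_tokens, *band)
        img_emb = filt.mean(dim=1)
        img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
        text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
        return (img_emb * text_emb).sum(dim=-1)

In a CLIP-guided optimization loop, one would maximize this filtered score so that the gradient carries the style cues (low-level color and texture statistics) while the masked band suppresses the larger-scale structures that manifest as written words or spurious objects.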

Files

2303.09270.pdf (16.6 MB)
md5:390961d26425573930747f115bbe43d2

Additional details

Funding

ELIAS – European Lighthouse of AI for Sustainability (Grant 101120237)
European Commission