AffectGAN: Affect-Based Generative Art Driven by Semantics
Description
This paper introduces a novel method for generating artistic images that express particular affective states. Leveraging state-of-the-art deep learning methods for visual generation (generative adversarial networks), semantic models from OpenAI, and the annotated dataset of the visual art encyclopedia WikiArt, our AffectGAN model generates images from specific or broad semantic prompts and intended affective outcomes. A small dataset of 32 images generated by AffectGAN is annotated by 50 participants in terms of the particular emotion they elicit, as well as their quality and novelty. Results show that, in most instances, the intended emotion used as a prompt for image generation matches the participants' responses. This small-scale study points toward blending affective computing with computational creativity, endowing generative systems with intentionality regarding the emotions they wish their output to elicit.
Files
affectgan_affect-based_generative_art_driven_by_semantics.pdf (5.0 MB)
md5:56345e143b89ad810cbc9f45e6a2c6b7