Published January 10, 2022 | Version: author pre-print
Conference paper | Open

AffectGAN: Affect-Based Generative Art Driven by Semantics

  1. University of Malta

Description

This paper introduces a novel method for generating artistic images that express particular affective states. Leveraging state-of-the-art deep learning methods for visual generation (through generative adversarial networks), semantic models from OpenAI, and the annotated dataset of the visual art encyclopedia WikiArt, our AffectGAN model is able to generate images based on specific or broad semantic prompts and intended affective outcomes. A small dataset of 32 images generated by AffectGAN was annotated by 50 participants in terms of the particular emotion each image elicits, as well as its quality and novelty. Results show that, in most instances, the intended emotion used as a prompt for image generation matches the participants' responses. This small-scale study brings forth a new vision for blending affective computing with computational creativity, endowing generative systems with intentionality regarding the emotions they wish their output to elicit.
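The abstract describes coupling a GAN generator with a semantic model so that generated images align with a text prompt and an intended emotion. The sketch below is a hypothetical, self-contained illustration of that core idea, not the paper's actual implementation: a latent vector is optimized by gradient ascent so that the embedding of the generated output maximizes cosine similarity with a target "affect prompt" embedding. The real system uses trained networks; here the generator `G` and embedder `E` are random linear maps standing in for them.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, image_dim, embed_dim = 16, 64, 32

G = rng.normal(size=(image_dim, latent_dim))   # stand-in for the GAN generator
E = rng.normal(size=(embed_dim, image_dim))    # stand-in for the semantic embedder
target = rng.normal(size=embed_dim)            # stand-in for the prompt embedding
target /= np.linalg.norm(target)

def affect_score(z):
    """Cosine similarity between the embedded 'image' and the prompt."""
    v = E @ (G @ z)
    return float(v @ target / np.linalg.norm(v))

z = rng.normal(size=latent_dim)
start = affect_score(z)

lr, eps = 0.5, 1e-4
basis = np.eye(latent_dim)
for _ in range(200):
    # finite-difference gradient ascent on the similarity score
    grad = np.array([(affect_score(z + eps * basis[i]) - affect_score(z)) / eps
                     for i in range(latent_dim)])
    z += lr * grad
```

In the actual pipeline the analogous step would backpropagate a semantic-model similarity loss through the generator, but the latent-search structure is the same: search the generator's input space for an output the semantic model rates as matching the intended affect.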

Files (5.0 MB)

affectgan_affect-based_generative_art_driven_by_semantics.pdf

Additional details

Funding

AI4Media – A European Excellence Centre for Media, Society and Democracy (grant agreement 951911), European Commission