Intriguing Properties of Text-guided Diffusion Models

Qihao Liu 1 Adam Kortylewski 2,3 Yutong Bai 1 Song Bai 4 Alan Yuille 1

 1 Johns Hopkins University  2 University of Freiburg  3 Max-Planck-Institute for Informatics  4 ByteDance Inc.
 
Abstract

Text-guided diffusion models (TDMs) are widely applied but can fail unexpectedly. Common failures include: (i) natural-looking text prompts generating images with the wrong content, or (ii) different random samples of the latent variables generating vastly different, and even unrelated, outputs despite being conditioned on the same text prompt. In this work, we aim to study and understand the failure modes of TDMs in more detail. To achieve this, we propose SAGE, an adversarial attack on TDMs that uses image classifiers as surrogate loss functions to search over the discrete prompt space and the high-dimensional latent space of TDMs and automatically discover unexpected behaviors and failure cases in image generation. We make several technical contributions to ensure that SAGE finds failure cases of the diffusion model, rather than the classifier, and verify this in a human study. Our study reveals four intriguing properties of TDMs that have not been systematically studied before: (1) We find a variety of natural text prompts producing images that fail to capture the semantics of the input text. We categorize these failures into ten distinct types based on the underlying causes. (2) We find samples in the latent space (which are not outliers) that lead to distorted images independent of the text prompt, suggesting that parts of the latent space are not well-structured. (3) We also find latent samples that lead to natural-looking images unrelated to the text prompt, implying a potential misalignment between the latent and prompt spaces. (4) By appending a single adversarial token embedding to an input prompt, we can generate a variety of specified target objects while only minimally affecting the CLIP score. This demonstrates the fragility of language representations and raises potential safety concerns. By shedding light on the failure modes and weaknesses of TDMs, our work aims to foster responsible development and deployment of generative AI systems.
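
To make the search procedure concrete, here is a minimal sketch of the core idea in heavily simplified form: perturb the initial latent of a text-guided diffusion model and use a pretrained image classifier as a surrogate loss, keeping perturbations that lower the classifier's confidence in the prompted object. This is not the SAGE implementation (SAGE optimizes through the denoising process with gradients and includes safeguards against attacking the classifier itself); the model names, prompt, target class, and search budget below are illustrative assumptions.

import torch
from diffusers import StableDiffusionPipeline
from torchvision.models import resnet50, ResNet50_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(device)

weights = ResNet50_Weights.IMAGENET1K_V2
classifier = resnet50(weights=weights).eval().to(device)
preprocess = weights.transforms()

prompt = "a photo of a goldfish"   # assumed prompt
target_class = 1                   # ImageNet index for "goldfish"

def surrogate_confidence(latents):
    # Generate an image from this latent and return the classifier's
    # confidence in the prompted class (low confidence = failure candidate).
    image = pipe(prompt, latents=latents, num_inference_steps=30).images[0]
    x = preprocess(image).unsqueeze(0).to(device)
    with torch.no_grad():
        return classifier(x).softmax(dim=-1)[0, target_class].item()

# Gradient-free random search over the latent space; SAGE itself uses
# gradient-based optimization through the sampler rather than this stand-in.
shape = (1, pipe.unet.config.in_channels, 64, 64)
best = torch.randn(shape, device=device)
best_conf = surrogate_confidence(best)
for _ in range(20):
    candidate = best + 0.3 * torch.randn_like(best)
    conf = surrogate_confidence(candidate)
    if conf < best_conf:
        best, best_conf = candidate, conf
print(f"lowest surrogate confidence found: {best_conf:.3f}")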


I. Natural Text Prompts that Are Unintelligible to Diffusion Models

We find a large number of concise text prompts (at most 15 words) that are natural and easily understood by humans, but not by state-of-the-art (SOTA) diffusion models. Current models still struggle to grasp certain linguistic constructions and depict them correctly. We manually summarize the key features of these failures and categorize them into ten distinct types based on the underlying causes.

 

II. Latent Samples that Lead to Distorted Images

We find samples/regions in the latent space that consistently lead to distorted images under various commonly used prompts, implying that parts of the latent space are not well-structured. We also show that both generative models and discriminative models have representation biases that differ from those of human vision.
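
As a concrete illustration of how this can be probed, the snippet below fixes a single latent sample and reuses it with several common prompts; for a latent drawn from a poorly structured region, the distortions persist regardless of the prompt. The model name and prompts are assumptions for illustration, not the exact setup used in the paper.

import torch
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(device)

# One fixed latent sample, reused verbatim for every prompt below.
fixed_latent = torch.randn(1, pipe.unet.config.in_channels, 64, 64, device=device)

for i, prompt in enumerate(["a photo of a dog", "a photo of a car", "a photo of a pizza"]):
    image = pipe(prompt, latents=fixed_latent, num_inference_steps=30).images[0]
    image.save(f"fixed_latent_{i}.png")   # inspect for distortions shared across prompts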

 

III. Latent Samples that Depict Correlated Background Instead of the Key Object

We find latent samples for which the model generates objects commonly associated with the key object, rather than the key object itself. This indicates a partial misalignment between the latent space and the prompt space. Furthermore, we demonstrate a correlation between this misalignment and the stability of diffusion models.

 

IV. Universal Token Embeddings that Overwrite the Input Prompts

We also find adversarial token embeddings that only slightly change the CLIP score yet cause the diffusion model to produce irrelevant images. These adversarial embeddings are universal in the sense that they cause failures across a variety of input prompts, which also reveals potential safety concerns.
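
The snippet below sketches only the mechanism involved, under simplifying assumptions: one extra embedding is written into the first padding slot of the prompt's encoded token embeddings and passed to the pipeline via prompt_embeds. The vector is randomly initialized here purely for illustration; the actual attack optimizes it (while constraining the change in CLIP score) and operates on a token embedding appended to the prompt rather than on the encoder output as done here.

import torch
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(device)

prompt = "a photo of a dog in a park"   # assumed input prompt
tok = pipe.tokenizer(prompt, padding="max_length",
                     max_length=pipe.tokenizer.model_max_length,
                     truncation=True, return_tensors="pt").to(device)
with torch.no_grad():
    prompt_embeds = pipe.text_encoder(tok.input_ids)[0]   # (1, 77, 768) for SD 1.5

# Write one extra embedding into the first padding slot after the EOS token.
# Here it is random; an attacker would optimize it so the generated image
# depicts a chosen target object while the prompt text itself is unchanged.
n_real = int(tok.attention_mask[0].sum())
adv_embedding = 0.02 * torch.randn(prompt_embeds.shape[-1], device=device)
attacked = prompt_embeds.clone()
attacked[0, n_real] = adv_embedding

image = pipe(prompt_embeds=attacked, num_inference_steps=30).images[0]
image.save("attacked_prompt.png")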


More Results

We provide additional representative text prompts that diffusion models cannot understand.

 

Bibtex

@article{liu2023intriguing,
  title={Intriguing Properties of Text-guided Diffusion Models},
  author={Liu, Qihao and Kortylewski, Adam and Bai, Yutong and Bai, Song and Yuille, Alan},
  journal={arXiv preprint arXiv:2306.00974},
  year={2023}
}