=Paper=
{{Paper
|id=Vol-3888/paper4
|storemode=property
|title=How to Blend Concepts in Diffusion Models
|pdfUrl=https://ceur-ws.org/Vol-3888/Paper_4.pdf
|volume=Vol-3888
|authors=Lorenzo Olearo,Giorgio Longari,Simone Melzi,Alessandro Raganato,Rafael Peñaloza
|dblpUrl=https://dblp.org/rec/conf/isd2/OlearoLMRP24
}}
==How to Blend Concepts in Diffusion Models==
Lorenzo Olearo†, Giorgio Longari†, Simone Melzi, Alessandro Raganato and Rafael Peñaloza
University of Milano-Bicocca, Milan, Italy
Abstract
For the last decade, there has been a push to use multi-dimensional (latent) spaces to represent concepts; and yet
how to manipulate these concepts or reason with them remains largely unclear. Some recent methods exploit
multiple latent representations and their connection, making this research question even more entangled. Our
goal is to understand how operations in the latent space affect the underlying concepts. We hence explore the
task of concept blending through diffusion models. Diffusion models are based on a connection between a latent
representation of textual prompts and a latent space that enables image reconstruction and generation. This task
allows us to try different text-based combination strategies, and evaluate them visually. Our conclusion is that
concept blending through space manipulation is possible, although the best strategy depends on the context.
Keywords
Concept blending, Generative AI, Diffusion models
Figure 1: A visualization of the proposed analysis. From left to right: (a) given two input textual concepts (“dog”,
“rabbit”), (b) four different techniques are applied to explore multiple ways to blend them together through stable
diffusion and (c) the obtained outputs are compared with qualitative analysis and a user study.
1. Introduction
The field of knowledge representation deals with the task of representing the knowledge of a domain in
a manner that can be used for intelligent applications [1]. Over the decades, most of the progress in the
field has focused on logic-based knowledge representation languages, and their reasoning capabilities. In
this setting, concepts—the first-class citizens of any domain representation—are formalised by limiting
the interpretations that they can be assigned to, and their connections with other concepts. A different,
more implicit approach represents concepts as points (or sometimes volumes) in a multidimensional so-
called latent space. This representation (or embedding) is built considering the semantic similarities and
differences between concepts. Although at an abstract level this representation is similar to Gärdenfors's
Proceedings of The Eighth Image Schema Day (ISD8), November 27-28, 2024, Bozen-Bolzano, Italy
† These authors contributed equally.
lorenzo.olearo@unimib.it (L. Olearo); giorgio.longari@unimib.it (G. Longari); simone.melzi@unimib.it (S. Melzi); alessandro.raganato@unimib.it (A. Raganato); rafael.penaloza@unimib.it (R. Peñaloza)
https://lorenzo.olearo.com (L. Olearo); https://sites.google.com/site/melzismn/ (S. Melzi); https://raganato.github.io/ (A. Raganato); https://rpenalozan.github.io/ (R. Peñaloza)
ORCID: 0009-0000-7290-3549 (L. Olearo); 0000-0002-2086-9091 (G. Longari); 0000-0003-2790-9591 (S. Melzi); 0000-0002-7018-7515 (A. Raganato); 0000-0002-2693-5790 (R. Peñaloza)
© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
conceptual spaces [2], there are essential differences between the two; most notably, concept composition
cannot be achieved through simple set operations. Hence, while the use of the latent space is becoming
more common, it remains unclear how to navigate it and how to reason within this representation.
Our overarching goal is to understand the properties of the latent space and how different operations
within it affect the underlying concepts. It is usually understood that every point in the latent space
represents a concept, and thus, navigating it has the potential of creating new concepts. In this paper,
we focus on the question of concept blending; briefly, the task of creating new concepts combining
(“blending”) the properties of two or more concepts [3] (see Section 2 for more details). We explore the
possibility of constructing such blends through (text-to-image) diffusion models starting from textual
prompts describing the concepts. This choice is motivated by, first, the easy access to the latent space
through the textual prompts and, second, the ability to evaluate the quality of the results visually.
We study different strategies for concept blending which exploit the overall architecture of Stable
Diffusion (SD) [4]. None of these methods relies on further training or fine-tuning, but they all focus on
the topology of the latent space and the SD architecture. In general, visual depictions of concept blends
can be automatically generated through these techniques, although their quality may vary. An empirical
study was used to evaluate the relative performance of each method. The results suggest that there is no
absolute best method, but the choice of blending approach depends on the combined concepts. The task
of concept blending considered here is just a milestone towards our general goal of understanding the
properties of the latent space and how navigating it affects the underlying concepts. This understanding
will be useful towards an explainable use of latent spaces and embeddings in general.
2. Preliminaries
Human creativity has always been key to the process of innovation, giving us the ability to imagine
things which are yet to be discovered, and diverging scenarios to explore. In recent years, the field of
artificial intelligence (AI) has been revolutionized by generative models, which are capable of creating
new and original content by exploiting the countless examples on which these models have been trained.
Among the multiple variants and possibilities to exploit generative AI, diffusion models like Stable
Diffusion [5], Dall-E, or Midjourney produce as output original images based on textual prompts or
images given as input for the model. To provide clarity for this work, we introduce the fundamental
concepts and components of Stable Diffusion and the notions of concept blending.
2.1. Stable Diffusion
Stable Diffusion (SD) is a text-to-image generative model developed by Rombach et al. in 2022 [4],
which follows the typical architecture of diffusion models [6] comprising a forward and a backward
process. In the forward process, a clean sample from the data (in this case, an image) is sequentially
corrupted by random noise, reaching, after a defined number of steps, pure random noise.
[Diagram: the denoising U-Net with encoder 𝐸 and decoder 𝐷, conditioned by the prompt encoder on the input prompt 𝑝.]
In the backward process, a neural network is trained to
sequentially remove the noise, thereby restoring the clean data distribution; this is the main phase
involved in image generation. The Stable Diffusion network architecture utilized during the
backward phase is principally made up of (i) a Variational Autoencoder (VAE) [7], (ii) a U-Net [8],
and (iii) an optional text encoder. The VAE characterizes SD as a Latent Diffusion Model, mapping
images into a lower-dimensional space through an encoder, followed by a diffusion model to craft the
distribution over encoded images. The images are then represented as points in the latent space (R𝑛 ).
Afterwards, a decoder is needed to convert a point back into an image.
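The forward process described above admits a closed-form expression for the corrupted sample at any timestep. The following toy numpy sketch (with an illustrative linear noise schedule and a stand-in "image"; this is not SD's actual schedule or resolution) shows how the signal coefficient vanishes as the number of corruption steps grows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear beta schedule (SD itself uses a scaled-linear schedule).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)

def forward_diffuse(x0, t):
    """Sample x_t ~ q(x_t | x_0) in closed form."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

x0 = np.ones((8, 8))                  # a stand-in "clean image"
x_early = forward_diffuse(x0, 10)     # still dominated by the signal
x_late = forward_diffuse(x0, T - 1)   # essentially pure noise

# The signal coefficient shrinks towards zero as t grows.
assert alphas_bar[10] > 0.99 and alphas_bar[T - 1] < 1e-4
```

The backward process trains a network to invert these steps one at a time, which is where the U-Net and its prompt conditioning enter.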
The U-Net is composed of an encoder-decoder pair, where the bottleneck contains the compact
embedding representation of the images. The encoder 𝐸 maps the input samples according to the given
prompt embedding into this latent embedding, then the decoder 𝐷 processes this latent embedding
together with its prompt embedding to reconstruct a sample that is as close as possible to the original
one. The U-Net and text embedding are crucial in conditioning the output generated by the model. At
each step of the denoising process, the prompt embedding is injected into the three blocks of the U-Net
via a cross-attention mechanism. In this way, the textual prompt conditions the denoising process and in
turn the generation of an image. The prompt embedding is generated by the text encoder, following the
pipeline of SD 1.4. In our experiments, as text encoder, we adopt a pre-trained CLIP ViT-L/14.
With these details we can now establish the focus of this study through the research question: can
diffusion models produce visual blends of two concepts? Identifying each concept through a word, we
want to create a new image that simultaneously represents a combination of both, simulating the human
capacity for associative thinking. To address this problem, we present various methodologies leveraging
SD as the backbone of our experiments. But first, we explain the notion of concept blending.
2.2. Concept Blending
Blending represents a cognitive mechanism that has been innately exploited to create new abstractions
from familiar concepts [9]. This process is often experienced in our daily interactions, even during a
casual conversation. This conceptual framework has been studied over the past three decades [10],
offering a model that incorporates mental spaces and conceptual projection. It examines the dynamic
formation of interconnected domains as discourse unfolds, aiming to discover fundamental principles
that underlie the intricate logic of everyday interactions. In this context, a mental space is a temporary
knowledge structure, which is dynamically created, for the purpose of local understanding, during
a social interaction. It is composed of elements (concepts) and their interconnections. It is context-
dependent and not necessarily a description of reality [11]. This general notion can be specified in
different notions. For our purpose, we are interested in visual conceptual blending, which combines
aspects of conceptual blending and visual blending.
Conceptual Blending constructs a partial match between two or more input mental spaces, and projects
them into a new “blended” one [11]. This blended space has common characteristics of the input
spaces, allowing a mapping between its elements and their counterparts in each input space. Yet, it also
generates a new emergent conceptual structure, which is unpredictable from the input spaces and not
originally present in them. Therefore, blending occurs at the conceptual level. Representations of these
blends are valuable and frequently employed in advertising [12] and other domains [13].
The Visual Blending process, instead, is essential to generate new visual representations, such as
images, through the fusion of at least two existing ones. There are two primary options for visual
blending, according to the rendering style employed: photo-realistic rendering and non-photo-realistic
techniques, like drawings. Approaches that focus on text-to-image generation have as their main goal the
visual representation of concepts, and, in the case of blending, the process can be summarized
as a set of visual operations, as analyzed by Phillips and McQuarrie [14]. One of these operators,
called fusion, partially depicts and merges the different inputs to create a hybrid image, allowing for
a higher coherence between the parts of the object(s), and helping the viewers in perceiving
the hybrid object as a unified whole. In replacement, one input concept is present and its sole function
is to occupy the usual environment of the other concept, or have its shape adapted to resemble the
other input. Juxtaposition is a technique that involves placing two different elements side by side, to
create a harmonious or provoking whole. Good examples of Visual Blending, with different approaches to
the operations described (and others), can be found in [15]. Importantly, high-quality blends between
concepts require that only some of the main characteristics of the input concepts are taken into
account [16]. Exploiting the three main visual properties of color, silhouette, and internal details helps
the creator obtain a good resulting blend. An image resulting from blending can be evaluated by taking
into account the number of dimensions (or visual properties) over which the blend has been applied.
Visual Conceptual Blending introduces a model for creating visually blended images grounded in
strong conceptual reasoning. Cunha et al. [17] argue that visual conceptual blending goes beyond
simply merging two images: it emphasizes the importance of conceptual reasoning as the foundation of
the blending process, resulting in an image and accompanying conceptual elaborations. These blends
have context, are grounded in justifications, and can be named independently of the original concepts.
In contrast, standard Visual Blending focuses solely on image merging, and typically involves mapping
concepts to objects and integrating them while maintaining recognisability and inferential association.
We now rephrase our research question as: can Stable Diffusion models merge two semantically
distant concepts into a new image, practically performing a Visual (Conceptual) Blending operation? We
investigate the efficacy of diffusion models, which should in principle be able to recreate any image
that can be imagined, in generating high-quality blended images. We assess existing approaches to perform
blending with stable diffusion, and propose novel methods. To the best of our knowledge, this is the first
investigation that evaluates the performance of different blending techniques with diffusion models
using only textual prompts. We initially operate on the latent space where the textual prompts are
embedded, and then explore alternative methods by directly manipulating the specific architecture of
the diffusion model; more precisely, the U-Net conditioning phase is manipulated to edit the textual
prompt that is injected (Section 3). To evaluate the results, we conducted a user survey where the
subjects were asked to rank the outcomes of different blending tasks, divided into multiple categories.
3. Blending Methods with Stable Diffusion
In this section, we briefly review some of the existing approaches for blending concepts with diffusion
models. Some of these methods were already published in previous work [18], while others are available
on public implementations, but without a full description of their details. We mention explicitly
whenever we are unsure if our implementation matches exactly the one proposed in the reference.
Experimental setup We fix the generative network 𝒢 as Stable Diffusion v1.4 [4] with the
UniPCMultistepScheduler [19] set at 25 steps. This version uses a fixed pretrained text encoder (CLIP ViT-L/14 [20]).
All images are generated at 512×512 pixels, with the diffusion process carried out in FP16 precision in a
space downscaled by a factor of 4. The conditioning signal is provided only in the form of textual
prompts, and the guidance scale is set to 7.5. We focused on Stable Diffusion as a good trade-off between
quality and computational cost; however, the blending methods analyzed can be implemented in other
diffusion models with no latent downscale. Our entire implementation of the blending methods in their
respective pipelines together with some of the generated samples is openly available.1
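For reference, a setup of this kind can be reproduced in a few lines with the Hugging Face diffusers library. This is a sketch under the assumption that diffusers is used; the paper's own pipelines live in the linked repository and may differ in detail:

```python
# Sketch of the experimental setup with Hugging Face diffusers (our
# assumption of the tooling, not necessarily the authors' exact code).
import torch
from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
)
# Swap in the UniPC scheduler at 25 steps, as in the paper's setup.
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

image = pipe(
    "a dog",
    num_inference_steps=25,
    guidance_scale=7.5,
    height=512,
    width=512,
    generator=torch.Generator("cuda").manual_seed(0),  # the seed s
).images[0]
```

Fixing the generator seed gives the reproducible mapping 𝐼𝑠,𝑝 = 𝒢(𝑠, 𝑝) used in the notation below.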
An important feature of many generative methods, which allows them to produce varying outputs
on the same prompt, is the use of a pseudo-random number generator (and pseudo-random noise)
which can be established through a seed. Given an input textual prompt 𝑝, and a seed 𝑠, we denote as
𝐼𝑠,𝑝 = 𝒢(𝑠, 𝑝) the image generated by the model 𝒢 given the input prompt 𝑝 and the seed 𝑠. Prompts
will be usually denoted with the letter 𝑝, sometimes with additional indices to distinguish between
them; e.g., 𝑝1 and 𝑝2 when two different prompts are used simultaneously. Given a prompt 𝑝, 𝑝* denotes
its latent representation; that is, the multi-dimensional vector obtained from the encoding operation.
Similarly, 𝑝*1 and 𝑝*2 denote the latent representations of 𝑝1 and 𝑝2 , respectively.
3.1. Blending in the Prompt Latent Space (TEXTUAL)
The first method examined was recently proposed by Melzi et al. [18]. It exploits the relationship between
conceptual blending and vector operations within the prompt latent space. Given the two input prompts 𝑝1
and 𝑝2 , we first compute their latent representations 𝑝*1 and 𝑝*2 through the prompt encoder. The blended
latent vector is the Euclidean mean of 𝑝*1 and 𝑝*2 . The blended image is generated by conditioning SD
with the blended latent vector.
[Diagram: the two input prompts are encoded, their embeddings averaged, and the mean injected into the denoising U-Net.]
1 Project repository: https://github.com/LorenzoOlearo/blending-diffusion-models
Importantly, blending in the latent space representing the prompts does not correspond to blending
images directly, as in a visual blending process. Instead, it means generating an image representing
a specific fusion of the concepts provided as the input textual prompts. Indeed, the Euclidean mean
between the two representations is a (potentially unexplored) point of the latent space which intuitively
represents the concept that is closest to both input concepts, thus defining an “in-between”
characterisation. Although in this paper we only consider the mean of the two latent representations of the
input prompts, we highlight that Melzi et al. consider also other linear combinations of 𝑝*1 and 𝑝*2 to
avoid fully symmetric constructions. A similar technique was implemented in the Compel open source
library,2 which performs the weighted blend of two textual prompts.
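The TEXTUAL operation itself reduces to a linear interpolation of embedding vectors. The sketch below keeps things self-contained with a deterministic stand-in for the CLIP encoder (the `encode` helper is hypothetical; in the real pipeline it would be the frozen CLIP ViT-L/14 model):

```python
import numpy as np

def encode(prompt: str, dim: int = 768) -> np.ndarray:
    """Stand-in for the CLIP text encoder: a deterministic pseudo-embedding.
    (Illustrative only; not the real model.)"""
    rng = np.random.default_rng(abs(hash(prompt)) % 2**32)
    return rng.standard_normal(dim)

def blend_textual(p1: str, p2: str, alpha: float = 0.5) -> np.ndarray:
    """TEXTUAL: linear combination of the two prompt embeddings.
    alpha = 0.5 gives the Euclidean mean considered in the paper."""
    e1, e2 = encode(p1), encode(p2)
    return (1.0 - alpha) * e1 + alpha * e2

blended = blend_textual("dog", "rabbit")
# The mean is equidistant from both input embeddings.
d1 = np.linalg.norm(blended - encode("dog"))
d2 = np.linalg.norm(blended - encode("rabbit"))
assert np.isclose(d1, d2)
```

Values of alpha other than 0.5 recover the asymmetric weighted blends of Melzi et al. and of Compel.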
3.2. Prompt Switching in the Iterative Diffusion Process (SWITCH)
This blending technique involves switching the textual prompt during the iterative process of the diffusion
model. The inference process first starts with a single prompt 𝑝1 and then, at a certain iteration, the prompt
is switched to 𝑝2 until the end of the diffusion process. The generation is thus conditioned on both prompts,
leading to an image that, when the switch is executed at the right timestep, blends the two concepts. Intuitively,
SWITCH starts by generating the general shape of 𝑝1 , but then fills out the details based on 𝑝2 , thus
producing a visual blend of the two concepts.
[Diagram: the first N denoising iterations are conditioned on 𝑝1 , the last M on 𝑝2 .]
It is crucial to choose the right iteration to switch the prompt. Unfortunately, this is an intrinsic
challenge for each new image and does not depend only on the geometric distance between the 𝑝*1 and
𝑝*2 embeddings. From our experiments, it was observed that the optimal iteration for this switch is
directly related to the spatial similarity between the image generated by the model conditioned only
on 𝑝1 and the one generated by 𝑝2 . This technique was also implemented in the Stable Diffusion web
UI developed by AUTOMATIC1111.3 Among its numerous functionalities, this implementation allows
prompt editing mid-generation of an image.
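The switching schedule can be sketched with a stub in place of the real U-Net; `stub_unet` and the update rule below are illustrative stand-ins, not SD's actual denoising step:

```python
import numpy as np

def stub_unet(latent, t, cond):
    """Stand-in for the conditioned U-Net noise predictor (illustrative only)."""
    return 0.1 * (latent - cond)

def generate_switch(e1, e2, switch_at, steps=25, seed=0):
    """SWITCH: condition on e1 for the first `switch_at` steps, then on e2."""
    rng = np.random.default_rng(seed)
    latent = rng.standard_normal(e1.shape)   # start from pure noise
    used = []
    for t in range(steps):
        cond = e1 if t < switch_at else e2
        used.append("p1" if t < switch_at else "p2")
        latent = latent - stub_unet(latent, t, cond)  # toy denoising update
    return latent, used

e1, e2 = np.zeros(4), np.ones(4)
_, used = generate_switch(e1, e2, switch_at=10)
assert used == ["p1"] * 10 + ["p2"] * 15
```

Choosing `switch_at` corresponds to the critical timestep discussed above: too late and the second prompt never takes hold, too early and the first prompt's shape is lost.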
3.3. Alternating Prompts in the Iterative Diffusion Process (ALTERNATE)
[Diagram: the U-Net is conditioned on 𝑝1 at even timesteps and on 𝑝2 at odd timesteps.]
In general diffusion models, at each timestep defined by the scheduler of the diffusion process, the noise
in the sample is estimated by the U-Net model. This estimation is performed by the model with knowledge
of the timestep and the conditioning signal (i.e., the prompt). The Alternating Prompt technique conditions
the U-Net with a different prompt at each timestep: the prompt 𝑝1 is shown to the U-Net at even timesteps,
while 𝑝2 is shown at odd timesteps. By performing this alternating prompt technique, the diffusion pipeline can
successfully generate an image that blends the two given prompts. Even though at different timesteps,
the U-Net is conditioned by both prompts during the diffusion process. The blending ratio can be
controlled by adjusting the number of iterations in which each prompt is shown to the U-Net. One can
intuitively think of this approach as an alternating superposition of the generation process between 𝑝1
and 𝑝2 . This method is also implemented in the Stable Diffusion web UI developed by AUTOMATIC1111.
2 Compel: https://github.com/damian0815/compel
3 Stable Diffusion web UI: https://github.com/AUTOMATIC1111/stable-diffusion-webui
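The alternating schedule is a simple parity rule over timesteps; the sketch below also shows how changing the modulus skews the blending ratio (a hypothetical variant, for illustration):

```python
def alternate_schedule(steps: int):
    """ALTERNATE: p1 conditions even timesteps, p2 conditions odd ones."""
    return ["p1" if t % 2 == 0 else "p2" for t in range(steps)]

sched = alternate_schedule(6)
assert sched == ["p1", "p2", "p1", "p2", "p1", "p2"]

# The blending ratio can be skewed by changing the modulus, e.g. showing
# p1 on two of every three steps (an illustrative variant).
skewed = ["p1" if t % 3 < 2 else "p2" for t in range(25)]
assert skewed.count("p1") > skewed.count("p2")
```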
4. Method
We now propose a different blending paradigm to visually combine two textual prompts in the diffusion
pipeline. In a standard diffusion architecture, given a single input prompt 𝑝, its corresponding embedding
𝑝* is injected with a cross-attention mechanism in the three main blocks of the U-Net: the encoder,
the bottleneck, and the decoder. During the encoding and bottleneck steps, the 𝑝* embedding is used
to guide the compression of the input sample into a latent representation that accurately maps the
concept 𝑝 that is being generated. Then, during the decoding phase, the 𝑝* embedding is used to guide
the reconstruction of the sample towards the distribution of the concept 𝑝 that is being generated. Our
idea arises from this compression and reconstruction operation and it is described in the following
subsection. To the best of our knowledge, this method has not been proposed before.
Different Prompts in Encoder and Decoder Components of the U-Net (UNET)
We implement our new method using text-based conditioning, but it can theoretically be extended to other
conditioning domains. As described above, the U-Net architecture contains three main blocks: the encoder,
the bottleneck, and the decoder. Each of these blocks receives the prompt embedding 𝑝* as input together
with the sample from which the noise has to be estimated.
[Diagram: the denoising U-Net with encoder 𝐸 conditioned on 𝑝*1 and decoder 𝐷 conditioned on 𝑝*2 .]
The key idea of our method is to guide the compression of the sample into the bottleneck block with the
first prompt embedding 𝑝*1 , and then to guide its reconstruction towards the distribution of the second
prompt 𝑝2 by injecting the embedding 𝑝*2 into the decoder block, as visualized in the figure. This allows
the U-Net to construct a latent representation for
the sample matching the concept described by 𝑝1 and then reconstruct the sample with the features of
the second prompt 𝑝2 .
The expected result from this technique is to obtain an image that globally represents or recalls the
concept described by 𝑝1 and simultaneously shows some of the features that typically describe the
concept of the second prompt 𝑝2 . From our findings, changing the prompt embedding in the bottleneck
block does not significantly affect the final result. Consequently, we keep the prompt 𝑝1 in the encoder
and bottleneck blocks while we inject the prompt 𝑝2 only in the decoder block.
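The routing of the two embeddings can be expressed as a small selector over U-Net block names. The module-name prefixes below mimic the diffusers naming convention (`down_blocks`, `mid_block`, `up_blocks`) and are our assumption, not necessarily the paper's exact implementation:

```python
# Schematic routing of prompt embeddings for the UNET method: the encoder
# (down blocks) and bottleneck (mid block) are conditioned on p1*, the
# decoder (up blocks) on p2*. Module names follow diffusers conventions
# (an assumption for illustration).
def pick_embedding(module_name: str, e1, e2):
    if module_name.startswith(("down_blocks", "mid_block")):
        return e1   # compression guided by the first prompt
    if module_name.startswith("up_blocks"):
        return e2   # reconstruction guided by the second prompt
    raise ValueError(f"unexpected module: {module_name}")

assert pick_embedding("down_blocks.0.attentions.0", "p1*", "p2*") == "p1*"
assert pick_embedding("up_blocks.2.attentions.1", "p1*", "p2*") == "p2*"
```

In practice such a selector could be applied by intercepting the cross-attention calls of each block during the denoising loop.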
5. Validation and Results
We now describe the experimental setting and analysis made to evaluate the four blending approaches
presented in the previous sections, applied over two simple conceptual prompts. The outputs of these
models can be visualized in Figures 2 and 3. The experiments aimed to assess previously proposed
blending methods across four distinct macro-categories, which are visually explained in Figure 2. The
four categories are pair of animals, object and animal, compound words, and real-life scenarios. These were
selected to showcase different kinds of blending of concepts, which are expected to exhibit diverging
properties. For pairs of animals, we expect that the shared characteristics between the concepts will aid
the blending process; the use of object and animal concepts in the second category is expected to widen
the semantic gap between the input prompts, leading to more “creative” artifacts. The third category
considers objects representing compound words, offering a more conceptual blending challenge. Here,
we observed how the methods responded to prompts comprising the compound’s constituent parts,
which are not literal descriptions of the target object but are rather interpretable as a figure of speech
or metaphor. We aimed to investigate whether the models would learn the necessary abstraction to
perform a blending similar to the concept associated with the compound word, or instead reach a new
visual blend that merges the characteristics of the two prompts. The last category draws inspiration from
real-world visual blend examples, regardless of their underlying concepts; we derive prompts from them
to condition the models, allowing us to investigate their adaptability and ability to reconstruct well-known blends.
Figure 2: Samples of two blends per category. Columns, by category: Pair of Animals (Lion-Cat, Fox-Hamster),
Object + Animal (Garlic-Swan, Turtle-Brain), Compound Words (Butter-Fly, Bull-Pit), Real (Kung fu-Panda,
Man-Bat). Rows: TEXTUAL, SWITCH, ALTERNATE, UNET.
User Analysis
To impartially evaluate the quality of the methods, we conducted a survey with 23 participants involving
24 images from the four categories described. The survey was constructed as follows. We first selected
24 concept pairs covering examples from the four macro-categories; each concept in the pair was
described through a simple prompt. Then, the four different blending methods were used to generate
the visual conceptual blend of each pair. All images were generated with the same size and quality, and
presented to the users, with the instruction to rank them according to their blending effectiveness from
best to worst. Our participant pool was carefully selected to ensure they had no prior experience with
blending theory. While the two prompts used to generate the images were provided to the subjects, we
deliberately withheld information regarding the model responsible for each image, eliminating potential
bias. Additionally, to further mitigate bias, the order of images within each question was randomized for
each participant. This approach aimed to discern whether a superior blending method existed among
the four proposed and whether certain methods outperformed others within specific categories.
For each question, the top four images proposed were selected by the authors from a pool generated
using ten different seeds. Given that blending quality across all methods is not entirely independent of
seed choice, we aimed to minimize this dependency by carefully selecting the best results. For a better
understanding of the evaluation approach, Figure 2 shows some of the images that were presented to
the subjects for ranking, along with the methods that produced them.
Table 1 summarises the results of the survey, indicating the mean and mode (i.e., most frequent) rank
given to each method for each prompt pair and summarizing the results by category and globally. In
both cases, a higher value means a lower quality blend perceived by the subjects of the survey. The
goal of this analysis is to understand which blending method performs better in general (for the global
summary) and in a more fine-grained manner by category and by prompt pair. We emphasise that the
mean value should be handled with care, as a few low rankings (value 4) can greatly skew the mean of a
ranking that is typically considered of high quality. Indeed, in the last row of the table we can observe
that the average ranking of all methods throughout the whole experiment is quite similar, even though
SWITCH is most frequently selected as the best method, and UNET as the worst. Worth noticing is
also that the mode does not necessarily provide a full ranking between methods.
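The caveat about mean versus mode can be made concrete with a toy set of rankings (hypothetical data, for illustration only):

```python
from statistics import mean, mode

# Hypothetical rankings (1 = best, 4 = worst) from six participants for one
# prompt pair: a method most often judged best can still have a mediocre
# mean if a few participants rank it last.
ranks = [1, 1, 1, 1, 4, 4]
assert mode(ranks) == 1    # most frequently judged best...
assert mean(ranks) == 2.0  # ...yet the mean suggests mid-pack quality
```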
In the next section we will discuss the merits of the presented blending methods and the results
of the user survey; yet, for the moment we can already see that, at least from the perspective of the
ranking given, there is no clear best blending approach, but quality varies between images, and more
broadly between categories. For instance, UNET was ranked fourth in three categories, but second for
the category of real-life scenarios. Similarly, although UNET’s mode rank in compound words was 4, it
was also the highest ranked in three of the prompt pairs in this category.
6. Discussion
Figure 3 shows the results of the four different blending methods with the prompts Frog-Lizard, Butter-
Fly, Kung fu-Panda, Tortoise-Broccoli, and Tea-Pot. To better understand the behavior of each method, all
images in each row were generated using the same seed and thus starting from the same random noise.
Moreover, the blending ratio between the two prompts was kept constant at 0.5 across all methods.
We measure the visual distance of two concepts by visually evaluating the spatial similarity of the
images generated conditioning the pipeline on them. This is a key aspect to consider when evaluating
the quality of the blend since, with the exception of TEXTUAL, which instead focuses on the semantic
blend, it influences the performance of the blending methods.
When it comes to logical blends, one often considers a main concept which is modified by a secondary
one. That is, the blended concept is primarily an instance of the main concept, but with some
characteristics that recall the secondary concept. With the exception of TEXTUAL, the blending methods
presented in this paper are not symmetric, meaning that the order of the prompts in the blend affects
the final image. This is particularly important when dealing with compound words like pitbull, although
Table 1
Mean and first-mode rank in the results of the survey over 24 concept pairs (with 23 participants).

Prompt               ALTERNATE     SWITCH        UNET          TEXTUAL
                     mean  mode    mean  mode    mean  mode    mean  mode
Pair of Animals
  Elephant-Duck      2.43  2       3.21  4       2.65  3       1.82  1
  Lion-Cat           2.52  2       2.30  3       3.52  4       1.65  1
  Frog-Lizard        3.73  4       1.91  2       1.41  1       2.95  3
  Fox-Hamster        2.68  3       1.82  2       3.77  4       1.82  1
  Rabbit-Dog         2.95  4       2.68  3       2.72  2       1.68  1
  CATEGORY TOTAL     2.86  3       2.39  2       2.82  4       1.98  1
Object + Animal
  Turtle-Brain       2.76  3       1.38  1       3.10  4       2.86  2
  Pig-Cactus         2.38  3       1.52  1       3.43  4       2.76  2
  Garlic-Swan        1.86  1       1.81  1       3.71  4       2.71  2
  Coconut-Monkey     1.62  2       1.67  1       3.81  4       2.95  3
  Tortoise-Broccoli  1.62  1       3.43  4       2.48  3       2.57  2
  Turtle-Wood        2.19  1       2.29  2       2.29  3       3.33  4
  Turtle-Pizza       3.52  4       2.95  3       1.48  1       2.10  2
  CATEGORY TOTAL     2.28  2       2.15  1       2.90  4       2.76  2
Compound Words
  Butter-Fly         2.57  3       2.24  2       2.33  1       3.00  4
  Dragon-Fly         2.62  3       3.43  4       1.67  1       2.43  2
  Bull-Pit           2.81  4       2.24  3       2.86  3       2.23  1
  Blimp-Whale        2.95  3       2.05  1       2.05  2       3.04  4
  Jelly-Fish         2.86  3       1.43  1       3.14  4       2.67  2
  Fire-Fighter       3.00  4       1.48  1       2.81  2       2.90  2
  Tea-Pot            1.62  1       2.48  2       3.81  4       2.29  2
  Snow-Flake         3.33  4       2.62  2       1.48  1       2.62  3
  Cup-Cake           2.71  2       2.85  4       2.62  4       1.90  1
  CATEGORY TOTAL     2.72  4       2.31  1       2.53  4       2.57  2
Real
  Kung fu-Panda      1.95  1       2.76  3       2.05  2       3.43  4
  Man-Bat            3.67  4       1.67  1       1.67  2       3.05  3
  Beaver-Duck        2.24  1       1.95  2       3.43  4       2.48  3
  CATEGORY TOTAL     2.62  4       2.13  2       2.38  2       2.98  3
GLOBAL TOTAL         2.61  3       2.26  1       2.68  4       2.54  3
this word commonly refers to a specific breed of dog, its intrinsic semantic and historical meaning
refers to a bull in a pit. When visually blending the two concepts pit and bull with the methods illustrated
in this paper, it is important to take into account which of the two concepts is the main one and which
is the modifier. By analyzing the results in Figure 3, it is evident that this primary-modifier relationship
is not coherent across all the analyzed methods. In TEXTUAL and ALTERNATE, the main concept of
the blend appears to be the second prompt while its modifier is the first one. The opposite behavior
is instead what characterizes SWITCH and UNET where the main concept of the blend is the first
prompt and the modifier is the second one. This behavior was not expected; to keep the experiments
straightforward all blends were generated considering the first prompt as the main concept and the
second as its modifier. This is the reason why when blending the words that make the compound word
Pitbull, the blend is generated as a Bull-Pit instead of Pit-Bull.
As expected from the work by Melzi et al. [18], performing the blending operation in the latent space
of the prompts, as in the case of TEXTUAL, does not always lead to an image that visually blends the
two concepts. This is particularly evident in the case of Kung fu-Panda, where the generated image is a
conceptual blend, rather than a visual one, of the two prompts. From our findings, TEXTUAL usually
produces inconsistent results: although the conditioning embedding given to the pipeline always remains
the same, the balance between visual and semantic blending changes drastically from one seed to another.

Figure 3: Comparison of the blending methods on the pairs Frog-Lizard, Butter-Fly, Tortoise-Broccoli,
Kung fu-Panda, and Tea-Pot. On the left, the individual prompts (columns prompt 1 and prompt 2); on
the right, the results of the four blending methods (TEXTUAL, SWITCH, ALTERNATE, UNET). All
images are generated starting from the same initial noise.

An instance of this
behavior can be observed in the Kung fu-Panda sample of TEXTUAL in Figure 2. In this case, the model
generated possibly the best visual blend of the four methods; however, among all the other seeds tested,
no other sample achieved the same level of blending.
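Since TEXTUAL operates entirely in the latent space of the prompts, its core operation reduces to interpolating the two conditioning embeddings. The following is a minimal sketch of such an interpolation using NumPy on placeholder arrays; the array shape, the names `emb_dog` and `emb_rabbit`, and the fixed 0.5 weight are illustrative assumptions, not the paper's exact implementation (which conditions Stable Diffusion on CLIP text embeddings):

```python
import numpy as np

def blend_embeddings(emb_a: np.ndarray, emb_b: np.ndarray,
                     alpha: float = 0.5) -> np.ndarray:
    """Linearly interpolate two prompt embeddings.

    alpha = 0.0 returns emb_a unchanged, alpha = 1.0 returns emb_b;
    intermediate values mix the two conditioning signals.
    """
    return (1.0 - alpha) * emb_a + alpha * emb_b

# Placeholder stand-ins for CLIP text embeddings; real ones have shape
# [sequence_length, hidden_dim], e.g. [77, 768] for Stable Diffusion v1.
emb_dog = np.random.default_rng(0).normal(size=(77, 768))
emb_rabbit = np.random.default_rng(1).normal(size=(77, 768))

blended = blend_embeddings(emb_dog, emb_rabbit, alpha=0.5)
```

The blended array keeps the shape of a single prompt embedding, so it can be passed to the pipeline in place of either original conditioning.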
As mentioned already, results from SWITCH vary considerably depending on the timestep at which
the prompt is switched; finding the right timestep is crucial to achieve a good visual blend. This is
evident in the cases of Tea-Pot and Butter-Fly shown in Figure 3: the images generated from the prompts
Butter and Fly are visually distant even though both start from the same initial noise. When the prompt
is switched in the middle of the diffusion process, the model is unable to shift and correct the existing
distribution towards that of the new prompt, and only the first prompt is retained in the blend. Another
undesired behavior of SWITCH is the cartoonification of the produced blend. When unable to shift the
pixel distribution towards the new prompt, the diffusion pipeline corrects the existing noisy image latent
by progressively removing the high-frequency details, resulting in a cartoonish image. This behavior
can be clearly observed in the Kung fu-Panda blend produced by SWITCH in Figure 3. In our
experimental results, this behavior does not affect the other methods.
The ALTERNATE method, which alternates between the two prompts at each timestep, tends to
produce consistent results when the two blended concepts are visually very different. What is arguably
even more interesting is the type of visual blend that this technique produces when the two concepts
are both visually and semantically very different. This is the case of Tea-Pot and Butter-Fly, where
the model creates an image that literally and spatially contains both the first and the second prompt.
This is also evident in the Bull-Pit blend in Figure 2, where ALTERNATE generates what could be
described as a bull in a pit. TEXTUAL also seems to produce similar results but, once again, it is too
inconsistent across the seed space to state this as a general rule.
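Both SWITCH and ALTERNATE can be viewed as schedules that decide which prompt conditions each denoising step. A minimal sketch of the two schedules as pure functions over timesteps (the step count and switch point are illustrative; in the actual pipeline, the selected prompt's embedding would condition the U-Net at that step):

```python
def switch_schedule(num_steps: int, switch_at: int) -> list:
    """SWITCH: condition on prompt 1 up to `switch_at`, then on prompt 2."""
    return ["prompt_1" if t < switch_at else "prompt_2"
            for t in range(num_steps)]

def alternate_schedule(num_steps: int) -> list:
    """ALTERNATE: swap the conditioning prompt at every denoising step."""
    return ["prompt_1" if t % 2 == 0 else "prompt_2"
            for t in range(num_steps)]

# e.g. 10 denoising steps, switching halfway through
print(switch_schedule(10, 5))  # first half prompt_1, second half prompt_2
print(alternate_schedule(4))   # ['prompt_1', 'prompt_2', 'prompt_1', 'prompt_2']
```

Framed this way, SWITCH is a step function over the timesteps while ALTERNATE oscillates between the two prompts, which matches the qualitative difference observed in the blends.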
Compared to the other approaches, the UNET method, which encodes the image latent in the U-Net
conditioned on the first prompt and then decodes it conditioned on the second, produces more subtle
blends but generally consistent results. This might be the reason why it is the blending method that
performs worst in the survey, as the visual blend is not as evident as in the other methods. Interestingly, on the
Kung fu-Panda blend, UNET seems to slightly change the visual representation of the first prompt
while matching the colors of the second one. This subtle blending is also evident in the Bull-Pit blend
of Figure 2, where surprisingly the pipeline creates an image that somewhat resembles a pitbull.
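The split conditioning of UNET can be illustrated with a toy stand-in for the U-Net's two halves: the down-sampling (encoder) path sees the first prompt's embedding and the up-sampling (decoder) path sees the second's. All names below are illustrative placeholders; a real implementation would need to hook into the cross-attention conditioning of Stable Diffusion's U-Net blocks:

```python
def unet_blend_step(latent, emb_1, emb_2, encode, decode):
    """One denoising step of the UNET method: the encoder path is
    conditioned on the first prompt's embedding, the decoder path on
    the second's."""
    features = encode(latent, emb_1)   # down-blocks see prompt 1
    return decode(features, emb_2)     # up-blocks see prompt 2

# Toy stand-ins: a "latent" is a number, conditioning just tags the path.
encode = lambda latent, emb: (latent, emb)
decode = lambda features, emb: {"from_encoder": features,
                                "decoder_cond": emb}

out = unet_blend_step(0.0, "emb_bull", "emb_pit", encode, decode)
```

Because only the decoder half is re-conditioned, the second prompt mostly adjusts details (such as colors) of a structure laid down by the first, consistent with the subtle blends described above.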
The results of the survey, summarized in Table 1, show that the most preferred method is SWITCH;
however, this comes with some caveats. In order to best represent each method, in the survey we
chose the best settings for each method; in the case of SWITCH, this translates into using the optimal
timestep at which to switch the prompt for each blend. Finding this value is a tedious trial-and-error
process, with no clear empirical way to determine it. Although UNET ranked the lowest in the survey,
when comparing its results with those of SWITCH with a fixed switch timestep in the middle of the
diffusion process (Figure 3), it is evident that the visual blends produced by the two methods are
generally similar, if not better in the case of UNET.
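In practice, the trial-and-error search for SWITCH's optimal timestep amounts to a sweep: generating one blend per candidate switch point and inspecting the results side by side. A sketch of such a sweep, where `generate_blend` is a hypothetical stand-in for a call into the SWITCH pipeline:

```python
def sweep_switch_timesteps(generate_blend, num_steps: int, candidates=None):
    """Generate one SWITCH blend per candidate switch timestep, so the
    results can be compared visually side by side."""
    if candidates is None:
        # by default, try switching at every tenth of the diffusion process
        step = num_steps // 10
        candidates = range(step, num_steps, step)
    return {t: generate_blend(switch_at=t) for t in candidates}

# Hypothetical usage with a stub in place of the real pipeline call:
results = sweep_switch_timesteps(lambda switch_at: f"image@{switch_at}",
                                 num_steps=50)
```

Each entry of the returned dictionary maps a candidate switch timestep to the blend it produces, leaving the final choice to visual inspection, as done for the survey.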
7. Conclusions
In this paper we tried to answer a novel research question: is it possible to produce visual
concept blends through diffusion models? We compared different possible solutions to force a diffusion
model (more specifically, Stable Diffusion [4]) to generate contents that represent the blend of two
separate concepts. We collected three different alternatives from existing publications and from the
web. Additionally, we proposed a completely new method, which we call UNET, that exploits the internal
architecture of the adopted diffusion model. We collected the outputs of the different methods on four
categories of tests, namely: pairs of animals, animal and object, compound words, and real-life
scenarios. For each of these categories we produced several pairs of concepts, and generated
all blends (in total, four blended images for each pair of prompts).
The quality of a blend, as with any creative endeavor, has a subjective component to it. Thus, to evaluate
which approach is more adept at this task (in relation to human perception), we devised a user study
with 23 subjects, in which participants were asked to rank the results of the blending methods.
It is worth noting that two participants did not rank all methods, so 21 full surveys were submitted.
We still used the partial surveys to compare those pairs where the ranks were available.
The user study shows that there is no single best blending method; the perceived quality
varies from pair to pair and, more importantly, from category to category. And yet, from a positive
perspective, we can answer our research question in the affirmative: it is possible to produce visual
conceptual blends through diffusion models, and the results are often quite compelling (see the samples
in Figure 2). Indeed, the survey participants expressed surprise at some of them.
An important point to make is that, for this work, we used the latent space of Stable Diffusion
directly, that is, without any kind of fine-tuning or added training. Thus, our approach is robust to
model updates, and does not require significant effort to implement and execute. This is consistent
with our original stated goal of understanding how to manipulate the latent space as a representation
of concepts. This work only scratches the surface of this topic, and we hope that it can inspire new
discussion and further analysis.
For future work, note that our blends are based on very simple (mainly one-word) prompts. This
allows us to better understand the impact of the operations (in contrast to the subtleties of prompt
engineering) but has the disadvantage of working over very general concepts and, in particular, of
being prone to ambiguities and misinterpretations. It would thus be interesting to explore ways
to guarantee a more specific identification of the concepts selected for blending.
Acknowledgments
Work funded by the European Union–Next Generation EU within the project NRPP M4C2, Investment
1.3, DD. 341 of 15 March 2022–FAIR: Future Artificial Intelligence Research–Spoke 4-PE00000013-
D53C22002380006. Part of this work was supported by the MUR for REGAINS, the Department of
Excellence DISCo at the University of Milano-Bicocca, the PRIN project PINPOINT Prot. 2020FNEB27,
CUP H45E21000210001, and by the NVIDIA Corporation with the RTX A5000 GPUs granted through
the Academic Hardware Grant Program to the University of Milano-Bicocca for the project “Learned
representations for implicit binary operations on real-world 2D-3D data.”
References
[1] R. Brachman, H. Levesque, Knowledge representation and reasoning, Morgan Kaufmann, 2004.
[2] P. Gärdenfors, Conceptual Spaces: The Geometry of Thought, A Bradford book, MIT Press, 2004.
URL: https://books.google.it/books?id=FSLFjw1EcBwC.
[3] G. Fauconnier, M. Turner, The Way We Think: Conceptual Blending And The Mind’s Hidden
Complexities, Basic Books, 2008. URL: https://books.google.it/books?id=FdOLriVyzwkC.
[4] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, B. Ommer, High-resolution image synthesis with
latent diffusion models, in: Proc. IEEE/CVF conf. on comp. vision and pattern recog., 2022, pp.
10684–10695.
[5] D. Podell, Z. English, K. Lacey, et al., SDXL: Improving latent diffusion models for high-resolution
image synthesis, arXiv preprint arXiv:2307.01952 (2023).
[6] J. Sohl-Dickstein, E. Weiss, N. Maheswaranathan, S. Ganguli, Deep unsupervised learning using
nonequilibrium thermodynamics, in: Proc. ICML’15, volume 37, PMLR, Lille, France, 2015, pp.
2256–2265. URL: https://proceedings.mlr.press/v37/sohl-dickstein15.html.
[7] D. P. Kingma, M. Welling, Auto-encoding variational Bayes, arXiv preprint arXiv:1312.6114 (2013).
[8] O. Ronneberger, P. Fischer, T. Brox, U-net: Convolutional networks for biomedical image segmen-
tation, in: Proc. MICCAI 2015, Springer, 2015, pp. 234–241.
[9] R. Confalonieri, A. Pease, M. Schorlemmer, et al., Concept Invention: Foundations, Implementation,
Social Aspects and Applications, Computational Synthesis and Creative Systems, Springer, 2018.
[10] F. J. Costello, M. T. Keane, Efficient creativity: Constraint-guided conceptual combination, Cogni-
tive Science 24 (2000) 299–349.
[11] G. Fauconnier, Mental spaces: Aspects of meaning construction in natural language, CUP, 1994.
[12] A. Joy, J. F. Sherry Jr, J. Deschenes, Conceptual blending in advertising, Journal of business
research 62 (2009) 39–49.
[13] O. Kutz, J. Bateman, F. Neuhaus, T. Mossakowski, M. Bhatt, E pluribus unum: Formalisation,
use-cases, and computational support for conceptual blending, in: Computational Creativity
Research: Towards Creative Machines, Springer, 2014, pp. 167–196.
[14] B. J. Phillips, E. F. McQuarrie, Beyond visual metaphor: A new typology of visual rhetoric in
advertising, Marketing theory 4 (2004) 113–136.
[15] P. Xiao, S. Linkola, et al., Vismantic: Meaning-making with images, in: ICCC, 2015, pp. 158–165.
[16] L. B. Chilton, E. J. Ozmen, S. H. Ross, V. Liu, Visifit: Structuring iterative improvement for novice
designers, in: Proc. 2021 CHI Conf. on Human Factors in Computing Systems, 2021, pp. 1–14.
[17] J. Cunha, P. Martins, P. Machado, Let’s figure this out: A roadmap for visual conceptual blending,
in: Proc. of International Conference on Innovative Computing and Cloud Computing, 2020.
[18] S. Melzi, R. Peñaloza, A. Raganato, Does stable diffusion dream of electric sheep?, in: Proc. ISD7,
volume 3511 of CEUR, CEUR-WS.org, 2023. URL: https://ceur-ws.org/Vol-3511/paper_09.pdf.
[19] W. Zhao, L. Bai, Y. Rao, J. Zhou, J. Lu, UniPC: A unified predictor-corrector framework for fast
sampling of diffusion models, 2023. arXiv:2302.04867.
[20] A. Radford, J. W. Kim, C. Hallacy, et al., Learning transferable visual models from natural language
supervision, in: Proc. of International conference on machine learning, PMLR, 2021, pp. 8748–8763.