<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">How to Blend Concepts in Diffusion Models</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Lorenzo</forename><surname>Olearo</surname></persName>
							<email>lorenzo.olearo@unimib.it</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Milano-Bicocca</orgName>
								<address>
									<settlement>Milan</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Giorgio</forename><surname>Longari</surname></persName>
							<email>giorgio.longari@unimib.it</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Milano-Bicocca</orgName>
								<address>
									<settlement>Milan</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Simone</forename><surname>Melzi</surname></persName>
							<email>simone.melzi@unimib.it</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Milano-Bicocca</orgName>
								<address>
									<settlement>Milan</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Alessandro</forename><surname>Raganato</surname></persName>
							<email>alessandro.raganato@unimib.it</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Milano-Bicocca</orgName>
								<address>
									<settlement>Milan</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Rafael</forename><surname>Peñaloza</surname></persName>
							<email>rafael.penaloza@unimib.it</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Milano-Bicocca</orgName>
								<address>
									<settlement>Milan</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">How to Blend Concepts in Diffusion Models</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">ED975A2F26F98E532B1AF0B3A3A196DF</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T16:33+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
<term>Concept blending, Generative AI, Diffusion models</term>
					<term>0009-0000-7290-3549 (L. Olearo)</term>
					<term>0000-0002-2086-9091 (G. Longari)</term>
					<term>0000-0003-2790-9591 (S. Melzi)</term>
					<term>0000-0002-7018-7515 (A. Raganato)</term>
					<term>0000-0002-2693-5790 (R. Peñaloza)</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>For the last decade, there has been a push to use multi-dimensional (latent) spaces to represent concepts; and yet how to manipulate these concepts or reason with them remains largely unclear. Some recent methods exploit multiple latent representations and their connection, making this research question even more entangled. Our goal is to understand how operations in the latent space affect the underlying concepts. We hence explore the task of concept blending through diffusion models. Diffusion models are based on a connection between a latent representation of textual prompts and a latent space that enables image reconstruction and generation. This task allows us to try different text-based combination strategies, and evaluate them visually. Our conclusion is that concept blending through space manipulation is possible, although the best strategy depends on the context.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The field of knowledge representation deals with the task of representing the knowledge of a domain in a manner that can be used for intelligent applications <ref type="bibr" target="#b0">[1]</ref>. Over the decades, most of the progress in the field has focused on logic-based knowledge representation languages and their reasoning capabilities. In this setting, concepts, the first-class citizens of any domain representation, are formalised by limiting the interpretations that they can be assigned to, and their connections with other concepts. A different, more implicit approach represents concepts as points (or sometimes volumes) in a multi-dimensional so-called latent space. This representation (or embedding) is built considering the semantic similarities and differences between concepts. Although at an abstract level this representation is similar to Gärdenfors's conceptual spaces <ref type="bibr" target="#b1">[2]</ref>, there are essential differences between the two; most notably, concept composition cannot be achieved through simple set operations. Hence, while the use of the latent space is becoming more common, it remains unclear how to navigate it and how to reason within this representation. Our overarching goal is to understand the properties of the latent space and how different operations within it affect the underlying concepts. It is usually understood that every point in the latent space represents a concept, and thus, navigating it has the potential of creating new concepts. In this paper, we focus on the question of concept blending; briefly, the task of creating new concepts by combining ("blending") the properties of two or more concepts <ref type="bibr" target="#b2">[3]</ref> (see Section 2 for more details). We explore the possibility of constructing such blends through (text-to-image) diffusion models starting from textual prompts describing the concepts. 
This choice is motivated by, first, the easy access to the latent space through textual prompts and, second, the ability to evaluate the quality of the results visually.</p><p>We study different strategies for concept blending which exploit the overall architecture of Stable Diffusion (SD) <ref type="bibr" target="#b3">[4]</ref>. None of these methods relies on further training or fine-tuning; they all rely only on the topology of the latent space and the SD architecture. In general, visual depictions of concept blends can be automatically generated through these techniques, although their quality may vary. An empirical study was used to evaluate the relative performance of each method. The results suggest that there is no absolute best method; rather, the choice of blending approach depends on the concepts being combined. The task of concept blending considered here is just a milestone towards our general goal of understanding the properties of the latent space and how navigating it affects the underlying concepts. This understanding will be useful for an explainable use of latent spaces and embeddings in general.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Preliminaries</head><p>Human creativity has always been the key to the innovation process, allowing us to imagine things yet to be discovered and divergent scenarios to explore. In recent years, the field of artificial intelligence (AI) has been revolutionized by generative models, which are capable of creating new and original content by exploiting the countless examples these models have been trained on. Among the many ways to exploit generative AI, diffusion models like Stable Diffusion <ref type="bibr" target="#b4">[5]</ref>, Dall-E, or Midjourney produce original images from textual prompts or input images. To provide clarity for this work, we introduce the fundamental components of Stable Diffusion and the notion of concept blending.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Stable Diffusion</head><p>Stable Diffusion (SD) is a latent text-to-image diffusion model <ref type="bibr" target="#b3">[4]</ref>, which follows the typical architecture of diffusion models <ref type="bibr" target="#b5">[6]</ref> comprising a forward and a backward process. In the forward process, a clean sample from the data (in this case, an image) is sequentially corrupted by random noise until, after a defined number of steps, it becomes pure random noise. In the backward process, a neural network is trained to sequentially remove the noise, thereby restoring the clean data distribution; this is the phase used during image generation. The network architecture employed in the backward phase is principally made up of (i) a Variational Autoencoder (VAE) <ref type="bibr" target="#b6">[7]</ref>, (ii) a U-Net <ref type="bibr" target="#b7">[8]</ref>, and (iii) an optional text encoder. The VAE characterizes SD as a Latent Diffusion Model: an encoder maps images into a lower-dimensional space, and the diffusion model operates over the distribution of these encoded images. Images are thus represented as points in the latent space (ℝ^n). Afterwards, a decoder converts a point back into an image. The U-Net is composed of an encoder-decoder pair, where the bottleneck contains the compact embedding representation of the images. The encoder 𝐸 maps the input samples, according to the given prompt embedding, into this latent embedding; the decoder 𝐷 then processes this latent embedding together with its prompt embedding to reconstruct a sample as close as possible to the original one. The U-Net and the text embedding are crucial in conditioning the output generated by the model. At each step of the denoising process, the prompt embedding is injected into the three blocks of the U-Net via a cross-attention mechanism. In this way, the textual prompt conditions the denoising process and, in turn, the generation of an image. The prompt embedding is generated by the text encoder, following the pipeline of SD 1.4; in our experiments, we adopt a pre-trained CLIP ViT-L/14 as text encoder.</p><p>With these details we can now establish the focus of this study through the research question: can diffusion models produce visual blends of two concepts? 
Identifying each concept through a word, we want to create a new image that simultaneously represents a combination of both, simulating the human capacity for associative thinking. To address this problem, we present various methodologies leveraging SD as the backbone of our experiments. But first, we explain the notion of concept blending.</p></div>
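The forward corruption process described above admits a well-known closed form. The sketch below illustrates it with a generic DDPM-style linear noise schedule; the schedule values and the toy "image" are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

def forward_diffuse(x0, t, alphas_cumprod, rng):
    """Closed-form DDPM forward process: corrupt a clean sample x0 to step t.

    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps, with eps ~ N(0, I).
    """
    abar = alphas_cumprod[t]
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * eps

# Illustrative linear beta schedule (assumed values, in the style of DDPM).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_cumprod = np.cumprod(1.0 - betas)

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))          # toy stand-in for an image
x_last = forward_diffuse(x0, T - 1, alphas_cumprod, rng)
# At the final step abar_T is nearly 0, so x_last is (almost) pure Gaussian noise.
```

The backward process trains a network to invert these steps one at a time; at small t the sample is still close to x0, while at t = T essentially no signal remains.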
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Concept Blending</head><p>Blending is a cognitive mechanism that has been innately exploited to create new abstractions from familiar concepts <ref type="bibr" target="#b8">[9]</ref>. This process is often experienced in our daily interactions, even during a casual conversation. This conceptual framework has been studied over the past three decades <ref type="bibr" target="#b9">[10]</ref>, offering a model that incorporates mental spaces and conceptual projection. It examines the dynamic formation of interconnected domains as discourse unfolds, aiming to discover fundamental principles that underlie the intricate logic of everyday interactions. In this context, a mental space is a temporary knowledge structure, dynamically created for the purpose of local understanding during a social interaction. It is composed of elements (concepts) and their interconnections. It is context-dependent and not necessarily a description of reality <ref type="bibr" target="#b10">[11]</ref>. This general notion can be specialised in different ways. For our purposes, we are interested in visual conceptual blending, which combines aspects of conceptual blending and visual blending.</p><p>Conceptual Blending constructs a partial match between two or more input mental spaces, and projects them into a new "blended" one <ref type="bibr" target="#b10">[11]</ref>. This blended space shares common characteristics with the input spaces, allowing a mapping between its elements and their counterparts in each input space. Yet, it also generates a new emergent conceptual structure, which is unpredictable from the input spaces and not originally present in them. Therefore, blending occurs at the conceptual level. 
Representations of these blends are valuable and frequently employed in advertising <ref type="bibr" target="#b11">[12]</ref> and other domains <ref type="bibr" target="#b12">[13]</ref>.</p><p>The Visual Blending process, instead, is essential to generate new visual representations, such as images, through the fusion of at least two existing ones. There are two primary options for visual blending, according to the rendering style employed: photo-realistic rendering and non-photo-realistic techniques, like drawings. Approaches that focus on text-to-image generation have the visual representation of concepts as their main goal; in the case of blending, the process can be summarized as a set of visual operations, as analyzed by Phillips and McQuarrie <ref type="bibr" target="#b13">[14]</ref>. One of these operators, called fusion, partially depicts and merges the different inputs to create a hybrid image, allowing for higher coherence between the parts of the object(s) and helping viewers perceive the hybrid object as a unified whole. In replacement, only one input concept is depicted; its function is to occupy the usual environment of the other concept, or to have its shape adapted to resemble the other input. Juxtaposition is a technique that involves placing two different elements side by side to create a harmonious or provoking whole. Good examples of Visual Blending, and different approaches to the operations described (and others), can be found in <ref type="bibr" target="#b14">[15]</ref>. Importantly, high-quality blends between concepts require that only some of the main characteristics of the input concepts are taken into account <ref type="bibr" target="#b15">[16]</ref>. Exploiting the three main visual properties of color, silhouette, and internal details helps the creator obtain a strong resulting blend. 
A blended image can be evaluated by taking into account the number of dimensions (or visual properties) over which the blend has been applied.</p><p>Visual Conceptual Blending introduces a model for creating visually blended images grounded in strong conceptual reasoning. Cunha et al. <ref type="bibr" target="#b16">[17]</ref> argue that visual conceptual blending goes beyond simply merging two images: it emphasizes the importance of conceptual reasoning as the foundation of the blending process, resulting in an image and accompanying conceptual elaborations. These blends have context, are grounded in justifications, and can be named independently of the original concepts. In contrast, standard Visual Blending focuses solely on image merging, and typically involves mapping concepts to objects and integrating them while maintaining recognisability and inferential associations.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>We now rephrase our research question as: can Stable Diffusion models merge two semantically distant concepts into a new image, practically performing a Visual (Conceptual) Blending operation?</head><p>We investigate the efficacy of diffusion models, which are expected to recreate any image that can be imagined, in generating high-quality blended images. We assess existing approaches to perform blending with Stable Diffusion, and propose novel methods. To the best of our knowledge, this is the first investigation that evaluates the performance of different blending techniques with diffusion models using only textual prompts. We initially operate on the latent space where the textual prompts are embedded, and then explore alternative methods by directly manipulating the specific architecture of the diffusion model; more precisely, the U-Net conditioning phase is manipulated to edit the textual prompt that is injected (Section 3). To evaluate the results, we conducted a user survey where the subjects were asked to rank the outcomes of different blending tasks, divided into multiple categories.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Blending Methods with Stable Diffusion</head><p>In this section, we briefly review some of the existing approaches for blending concepts with diffusion models. Some of these methods were already published in previous work <ref type="bibr" target="#b17">[18]</ref>, while others are available in public implementations, but without a full description of their details. We mention explicitly whenever we are unsure whether our implementation matches exactly the one proposed in the reference.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Experimental setup</head><p>We fix the generative network 𝒢 as Stable Diffusion v1.4 <ref type="bibr" target="#b3">[4]</ref> with the UniPCMultistepScheduler <ref type="bibr" target="#b18">[19]</ref> set at 25 steps. This version uses a fixed pretrained text encoder (CLIP ViT-L/14 <ref type="bibr" target="#b19">[20]</ref>). All images are generated at 512×512 pixels, with the diffusion process carried out in FP16 precision in a space downscaled by a factor of 4. The conditioning signal is provided only in the form of textual prompts, and the guidance scale is set to 7.5. We focused on Stable Diffusion as a good trade-off between quality and computational cost; however, the blending methods analyzed can also be implemented in other diffusion models with no latent downscaling. Our entire implementation of the blending methods in their respective pipelines, together with some of the generated samples, is openly available.<ref type="foot" target="#foot_0">1</ref></p><p>An important feature of many generative methods, which allows them to produce varying outputs on the same prompt, is the use of a pseudo-random number generator (and pseudo-random noise) which can be established through a seed. Given an input textual prompt 𝑝 and a seed 𝑠, we denote as 𝐼ₛ,ₚ = 𝒢(𝑠, 𝑝) the image generated by the model 𝒢 given the input prompt 𝑝 and the seed 𝑠. Prompts will usually be denoted with the letter 𝑝, sometimes with additional indices to distinguish between them; e.g., 𝑝₁ and 𝑝₂ when two different prompts are used simultaneously. Given a prompt 𝑝, 𝑝* denotes its latent representation; that is, the multi-dimensional vector obtained from the encoding operation. Similarly, 𝑝₁* and 𝑝₂* denote the latent representations of 𝑝₁ and 𝑝₂, respectively.</p></div>
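The notation 𝐼ₛ,ₚ = 𝒢(𝑠, 𝑝) can be illustrated with a toy stand-in for 𝒢. The function `toy_generator` below is hypothetical: it only mimics the seed-determinism of the real pipeline (same seed and prompt, same output), not actual image generation.

```python
import hashlib
import numpy as np

def toy_generator(seed: int, prompt: str, size=(4, 4)) -> np.ndarray:
    """Toy stand-in for the generative model G, deterministic in (seed, prompt).

    A real SD pipeline derives its initial latent noise from the seed; here we
    simply hash (seed, prompt) into an RNG state and draw a pseudo-'image'.
    """
    digest = hashlib.sha256(f"{seed}:{prompt}".encode()).digest()
    rng = np.random.default_rng(int.from_bytes(digest[:8], "little"))
    return rng.standard_normal(size)

# I_{s,p} = G(s, p): the same seed and prompt reproduce the same output,
# while a different seed gives a different one.
i1 = toy_generator(42, "a photo of a frog")
i2 = toy_generator(42, "a photo of a frog")
i3 = toy_generator(7, "a photo of a frog")
```

This determinism is what lets the blending methods below be compared fairly: fixing 𝑠 fixes the initial noise, so differences between outputs are attributable to the conditioning alone.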
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Blending in the Prompt Latent Space (TEXTUAL)</head><p>[Figure: the input prompts 𝑝₁ and 𝑝₂ are encoded, their embeddings averaged, and the mean is used to condition the denoising U-Net.]</p><p>The first method examined was recently proposed by Melzi et al. <ref type="bibr" target="#b17">[18]</ref>. It exploits the relationship between conceptual blending and vector operations within the prompt latent space. Given the two input prompts 𝑝₁ and 𝑝₂, we first compute their latent representations 𝑝₁* and 𝑝₂* through the prompt encoder. The blended latent vector is the Euclidean mean between 𝑝₁* and 𝑝₂*. The blended image is generated by conditioning SD with the blended latent vector.</p><p>Importantly, blending in the latent space representing the prompts does not correspond to blending images directly, as in a visual blending process. Instead, it means generating an image representing a specific fusion of the concepts provided as the input textual prompts. Indeed, the Euclidean mean between the two representations is a (potentially unexplored) point of the latent space which intuitively represents the concept that is closest to both input concepts, thus defining an "in-between" characterisation. Although in this paper we only consider the mean of the two latent representations of the input prompts, we highlight that Melzi et al. also consider other linear combinations of 𝑝₁* and 𝑝₂* to avoid fully symmetric constructions. A similar technique is implemented in the Compel open source library,<ref type="foot" target="#foot_1">2</ref> which performs a weighted blend of two textual prompts.</p></div>
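The TEXTUAL blend reduces to a linear combination in the prompt latent space. A minimal numpy sketch, using random arrays of CLIP ViT-L/14 prompt-embedding shape (77 tokens × 768 dimensions) as stand-ins for real embeddings:

```python
import numpy as np

def blend_embeddings(p1_star, p2_star, alpha=0.5):
    """TEXTUAL blend: linear interpolation in the prompt latent space.

    alpha = 0.5 gives the Euclidean mean used in this paper; other values
    yield the asymmetric weighted blends also considered by Melzi et al.
    """
    return alpha * p1_star + (1.0 - alpha) * p2_star

# Stand-in embeddings with the shape a CLIP ViT-L/14 prompt embedding has in SD v1.4.
rng = np.random.default_rng(0)
p1_star = rng.standard_normal((77, 768))
p2_star = rng.standard_normal((77, 768))
blended = blend_embeddings(p1_star, p2_star)   # conditioning vector passed to SD
```

In the real pipeline, `blended` would simply replace the single-prompt embedding as the cross-attention conditioning signal; no other component changes.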
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Prompt Switching in the Iterative Diffusion Process (SWITCH)</head><p>[Figure: the encoded prompt 𝑝₁ conditions the denoising U-Net for the first N denoising iterations, and 𝑝₂ for the last M iterations.]</p><p>This blending technique involves switching the textual prompt during the iterative process of the diffusion model. The inference process starts with a single prompt 𝑝₁ and then, at a certain iteration, the prompt is switched to 𝑝₂ until the end of the diffusion process. The generation is thus conditioned on both prompts, leading to an image that, when the switch is executed at the right timestep, blends the two concepts. Intuitively, SWITCH starts by generating the general shape of 𝑝₁, but then fills out the details based on 𝑝₂, thus producing a visual blend of the two concepts.</p><p>It is crucial to choose the right iteration at which to switch the prompt. Unfortunately, this is an intrinsic challenge for each new image and does not depend only on the geometric distance between the 𝑝₁* and 𝑝₂* embeddings. From our experiments, we observed that the optimal iteration for this switch is directly related to the spatial similarity between the image generated by the model conditioned only on 𝑝₁ and the one generated by 𝑝₂. This technique was also implemented in the Stable Diffusion web UI developed by AUTOMATIC1111.<ref type="foot" target="#foot_2">3</ref> Among its numerous functionalities, this implementation allows prompt editing mid-generation of an image.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Alternating Prompts in the Iterative Diffusion Process (ALTERNATE)</head><p>In diffusion models in general, at each timestep defined by the scheduler of the diffusion process, the noise in the sample is estimated by the U-Net model. This estimation is performed with knowledge of the timestep and the conditioning signal (i.e., the prompt). The ALTERNATE technique conditions the U-Net with a different prompt at each timestep: the prompt 𝑝₁ is shown to the U-Net at even timesteps, while 𝑝₂ is shown at odd timesteps. By alternating the prompts in this way, the diffusion pipeline can successfully generate an image that blends the two given prompts: even though at different timesteps, the U-Net is conditioned by both prompts during the diffusion process. The blending ratio can be controlled by adjusting the number of iterations in which each prompt is shown to the U-Net. One can intuitively think of this approach as an alternating superposition of the generation process between 𝑝₁ and 𝑝₂. This method is also implemented in the Stable Diffusion web UI developed by AUTOMATIC1111.</p></div>
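SWITCH and ALTERNATE differ only in which prompt conditions the U-Net at each denoising step. The helper below (`prompt_schedule` is our illustrative name, not taken from the paper's code) makes the two schedules explicit:

```python
def prompt_schedule(method: str, num_steps: int, switch_at: int = 0):
    """Return which prompt (1 or 2) conditions the U-Net at each timestep.

    SWITCH:    prompt 1 for the first `switch_at` steps, then prompt 2.
    ALTERNATE: prompt 1 at even timesteps, prompt 2 at odd timesteps.
    """
    if method == "SWITCH":
        return [1 if t < switch_at else 2 for t in range(num_steps)]
    if method == "ALTERNATE":
        return [1 if t % 2 == 0 else 2 for t in range(num_steps)]
    raise ValueError(f"unknown method: {method}")

# With the 25 denoising steps used in the experimental setup:
switch = prompt_schedule("SWITCH", 25, switch_at=10)   # N = 10 steps on p1, M = 15 on p2
alternate = prompt_schedule("ALTERNATE", 25)
```

In an actual pipeline, the schedule entry at step t selects which prompt embedding (𝑝₁* or 𝑝₂*) is passed to the U-Net's cross-attention at that step; varying `switch_at` (or the even/odd ratio) controls the blending ratio.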
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Method</head><p>We now propose a different blending paradigm to visually combine two textual prompts in the diffusion pipeline. In a standard diffusion architecture, given a single input prompt 𝑝, its corresponding embedding 𝑝* is injected with a cross-attention mechanism in the three main blocks of the U-Net: the encoder, the bottleneck, and the decoder. During the encoding and bottleneck steps, the 𝑝* embedding is used to guide the compression of the input sample into a latent representation that accurately maps the concept 𝑝 being generated. Then, during the decoding phase, the 𝑝* embedding is used to guide the reconstruction of the sample towards the distribution of that concept. Our idea arises from this compression and reconstruction operation and is described in the following subsection. To the best of our knowledge, this method has not been proposed before.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Different Prompts in Encoder and Decoder Components of the U-Net (UNET)</head><p>[Figure: the encoded prompt 𝑝₁ conditions the encoder of the denoising U-Net, while 𝑝₂ conditions its decoder.]</p><p>We implement our new method using text-based conditioning, but it can theoretically be extended to other conditioning domains. As described above, the U-Net architecture contains three main blocks: the encoder, the bottleneck, and the decoder. Each of these blocks receives the prompt embedding 𝑝* as input, together with the sample from which the noise has to be estimated.</p><p>The key idea of our method is to guide the compression of the sample into the bottleneck block with a first prompt embedding 𝑝₁*, and then to guide its reconstruction towards the distribution of the second prompt 𝑝₂ by injecting the embedding 𝑝₂* into the decoder block, as visualized in the figure. This allows the U-Net to construct a latent representation of the sample matching the concept described by 𝑝₁ and then reconstruct the sample with the features of the second prompt 𝑝₂.</p><p>The expected result of this technique is an image that globally represents or recalls the concept described by 𝑝₁ while simultaneously showing some of the features that typically describe the concept of the second prompt 𝑝₂. From our findings, changing the prompt embedding in the bottleneck block does not significantly affect the final result. Consequently, we keep the prompt 𝑝₁ in the encoder and bottleneck blocks while we inject the prompt 𝑝₂ in the decoder block.</p></div>
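Under the description above, the UNET method amounts to routing a different prompt embedding to each U-Net block. A minimal sketch of that routing, with strings standing in for real embeddings and the bottleneck grouped with the encoder (a choice we assume here, since the bottleneck's conditioning is reported to have little effect):

```python
def unet_prompt_routing(p1_star, p2_star):
    """UNET blend: decide which prompt embedding conditions each U-Net block.

    The compression path (encoder and bottleneck) is guided by p1*, while the
    decoder reconstructs the sample towards p2*. In a real pipeline, this
    routing would be applied at every cross-attention call inside each block.
    """
    return {"encoder": p1_star, "bottleneck": p1_star, "decoder": p2_star}

# Stand-in strings in place of real CLIP prompt embeddings:
routing = unet_prompt_routing("tortoise*", "broccoli*")
```

Implementing this on a real U-Net means intercepting the `encoder_hidden_states` passed to the cross-attention layers of the down blocks, mid block, and up blocks separately; the dictionary above only captures the routing decision itself.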
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Validation and Results</head><p>We now describe the experimental setting and the analysis made to evaluate the four blending approaches presented in the previous sections, applied over two simple conceptual prompts. The outputs of these models can be visualized in Figures <ref type="figure" target="#fig_3">2 and 3</ref>. The experiments aimed to assess the previously proposed blending methods across four distinct macro-categories, which are visually explained in Figure <ref type="figure" target="#fig_2">2</ref>. The four categories are pairs of animals, object and animal, compound words, and real-life scenarios. These were selected to showcase different kinds of concept blending, which are expected to exhibit divergent properties. For pairs of animals, we expect that the shared characteristics between the concepts will aid the blending process; the use of object and animal concepts in the second category is expected to widen the semantic gap between the input prompts, leading to more "creative" artifacts. The third category considers objects representing compound words, offering a more conceptual blending challenge. Here, we observed how the methods responded to prompts comprising the compound's constituent parts, which are not literal descriptions of the target object but are rather interpretable as a figure of speech or metaphor. We aimed to investigate whether the models would learn the necessary abstraction to perform a blending similar to the concept associated with the compound word, or reach a new visual blend that merges the characteristics of the two prompts. The last category draws inspiration from real-world visual blend examples, regardless of their underlying concepts, deriving prompts to condition the models and allowing us to investigate their adaptability and ability to reconstruct well-known blends. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>User Analysis</head><p>To impartially evaluate the quality of the methods, we conducted a survey with 23 participants involving 24 questions from the four categories described. The survey was constructed as follows. We first selected 24 concept pairs covering examples from the four macro-categories; each concept in the pair was described through a simple prompt. Then, the four different blending methods were used to generate the visual conceptual blend of each pair. All images were generated with the same size and quality, and presented to the users with the instruction to rank them according to their blending effectiveness, from best to worst. Our participant pool was carefully selected to ensure they had no prior experience with blending theory. While the two prompts used to generate the images were provided to the subjects, we deliberately withheld information regarding the method responsible for each image, eliminating potential bias. Additionally, to further mitigate bias, the order of images within each question was randomized for each participant. This approach aimed to discern whether a superior blending method existed among the four proposed and whether certain methods outperformed others within specific categories.</p><p>For each question, the four images presented were selected by the authors as the best results from a pool generated using ten different seeds. Given that blending quality across all methods is not entirely independent of seed choice, we aimed to minimize this dependency by carefully selecting the best results. 
For a better understanding of the evaluation approach, Figure <ref type="figure" target="#fig_2">2</ref> shows some of the images that were presented to the subjects for ranking, along with the methods that produced them.</p><p>Table <ref type="table" target="#tab_1">1</ref> summarises the results of the survey, indicating the mean and mode (i.e., most frequent) rank given to each method for each prompt pair, and summarising the results by category and globally. In both cases, a higher value means a lower quality blend as perceived by the subjects of the survey. The goal of this analysis is to understand which blending method performs better in general (for the global summary) and in a more fine-grained manner, by category and by prompt pair. We emphasise that the mean value should be handled with care, as a few low rankings (value 4) can greatly skew the mean of a method that is otherwise considered of high quality. Indeed, in the last row of the table we can observe that the average ranking of all methods throughout the whole experiment is quite similar, even though SWITCH is most frequently selected as the best method, and UNET as the worst. It is also worth noting that the mode does not necessarily provide a full ranking between methods.</p><p>In the next section we discuss the merits of the presented blending methods and the results of the user survey; yet, for the moment we can already see that, at least from the perspective of the rankings given, there is no clear best blending approach: quality varies between images, and more broadly between categories. For instance, UNET was ranked fourth in three categories, but second for the category of real-life scenarios. Similarly, although UNET's mode rank in compound words was 4, it was also the highest ranked in three of the prompt pairs in this category.</p></div>
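The mean/mode summary can be reproduced with the standard library. The ranks below are hypothetical, chosen only to show how a handful of rank-4 votes inflates the mean of a method whose mode is 1:

```python
from statistics import mean, mode

def summarize_ranks(ranks):
    """Mean and mode of the ranks (1 = best, 4 = worst) a method received."""
    return mean(ranks), mode(ranks)

# Hypothetical ranks from 23 participants for one method on one prompt pair:
ranks = [1, 1, 2, 1, 4, 1, 3, 1, 2, 1, 4, 1, 2, 1, 1, 3, 1, 4, 2, 1, 1, 2, 4]
m, md = summarize_ranks(ranks)
# The mode is 1 (most participants ranked the method best), yet the four
# rank-4 votes pull the mean well above 1.9, illustrating why the mean
# alone should be handled with care.
```

This is exactly the skew effect discussed above: the mean and the mode can tell quite different stories about the same method.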
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Discussion</head><p>Figure <ref type="figure" target="#fig_3">3</ref> shows the results of the four different blending methods with the prompts Frog-Lizard, Butter-Fly, Kung fu-Panda, Tortoise-Broccoli, and Tea-Pot. To better understand the behavior of each method, all images in each row were generated using the same seed and thus starting from the same random noise. Moreover, the blending ratio between the two prompts was kept constant at 0.5 across all methods.</p><p>We measure the visual distance of two concepts by visually evaluating the spatial similarity of the images generated by conditioning the pipeline on each of them. This is a key aspect to consider when evaluating the quality of the blend as, with the exception of TEXTUAL, which instead focuses on the semantic blend, it influences the performance of the blending methods.</p><p>When it comes to logical blends, one often considers a main concept which is modified by a secondary one. That is, the blended concept is primarily an instance of the main concept, but with some characteristics that recall the secondary concept. With the exception of TEXTUAL, the blending methods presented in this paper are not symmetric, meaning that the order of the prompts in the blend affects the final image. This is particularly important when dealing with compound words like pitbull: although this word commonly refers to a specific breed of dog, its intrinsic semantic and historical meaning refers to a bull in a pit. When visually blending the two concepts pit and bull with the methods illustrated in this paper, it is important to take into account which of the two concepts is the main one and which is the modifier. By analyzing the results in Figure <ref type="figure" target="#fig_3">3</ref>, it is evident that this primary-modifier relationship is not coherent across all the analyzed methods. 
In TEXTUAL and ALTERNATE, the main concept of the blend appears to be the second prompt, while its modifier is the first. The opposite holds for SWITCH and UNET, where the main concept of the blend is the first prompt and the modifier is the second. This behavior was not expected; to keep the experiments straightforward, all blends were generated considering the first prompt as the main concept and the second as its modifier. This is why, when blending the words that make up the compound word Pitbull, the blend is generated as Bull-Pit instead of Pit-Bull.</p><p>As expected from the work by Melzi et al. <ref type="bibr" target="#b17">[18]</ref>, performing the blending operation in the latent space of the prompts, as in the case of TEXTUAL, does not always lead to an image that visually blends the two concepts. This is particularly evident in the case of Kung fu-Panda, where the generated image is a conceptual blend of the two prompts. In our experiments, TEXTUAL usually produces inconsistent results: although the conditioning embedding given to the pipeline always remains the same, the balance between visual and semantic blending changes drastically from one seed to another. An instance of this behavior can be observed in its Kung fu-Panda sample in Figure <ref type="figure" target="#fig_2">2</ref>. In this case, the model generated possibly the best visual blend out of the four methods; however, across all the other seeds tested, no other sample was able to achieve the same level of blending. As mentioned already, results from SWITCH vary considerably depending on the timestep at which the prompt is switched; finding the right timestep is crucial to achieve a good visual blend. This is evident in the cases of Tea-Pot and Butter-Fly shown in Figure <ref type="figure" target="#fig_3">3</ref>: the images generated from the prompts Butter and Fly are visually distant even though both of them start from the same initial noise. 
When the prompt is switched in the middle of the diffusion process, the model is unable to shift and correct the existing distribution towards that of the new prompt, and only the first prompt is retained in the blend. Another undesired behavior of SWITCH is the cartoonification of the produced blend. The diffusion pipeline, when unable to shift the pixel distribution towards the new prompt, corrects the existing noisy image latent by progressively removing the high-frequency details, resulting in a cartoonish image. This behavior can be clearly observed in the Kung fu-Panda blend produced by SWITCH in Figure <ref type="figure" target="#fig_3">3</ref>. From our experimental results, this behavior does not affect the other methods.</p><p>The ALTERNATE method, which alternates between the two prompts at each timestep, tends to produce consistent results when the two blended concepts are visually very different. What is arguably even more interesting is the type of visual blend that this technique produces when the two concepts are both visually and semantically very different. This is the case for Tea-Pot and Butter-Fly, where the model creates an image that literally and spatially contains both the first and the second prompt. This is also evident in the Bull-Pit blend in Figure <ref type="figure" target="#fig_2">2</ref>, where ALTERNATE generates what could be described as a bull in a pit. TEXTUAL also seems to produce similar results but, once again, it is too inconsistent across the seed space to state this as a general rule.</p><p>Compared to the other approaches, the UNET method, which encodes the image latent in the U-Net conditioned on the first prompt and then decodes it conditioned on the second, produces more subtle blends but generally consistent results. This might be why it is the method that performs worst in the survey, as its visual blends are not as evident as those of the other methods. 
Interestingly, on the Kung fu-Panda blend, UNET seems to slightly change the visual representation of the first prompt while matching the colors of the second one. This subtle blending is also evident in the Bull-Pit blend of Figure <ref type="figure" target="#fig_2">2</ref>, where, surprisingly, the pipeline creates an image that somewhat resembles a pitbull.</p><p>The results of the survey summarized in Table <ref type="table" target="#tab_1">1</ref> show that the most preferred method is SWITCH; however, this comes with some caveats. To better represent each method, in the survey we chose its best settings; in the case of SWITCH, this means using, for each blend, the optimal timestep at which to switch the prompt. Finding this value is a tedious trial-and-error process, with no clear empirical way to determine it. Although UNET ranked the lowest in the survey, when comparing its results with those of SWITCH using a fixed switch timestep in the middle of the diffusion process (Figure <ref type="figure" target="#fig_3">3</ref>), it is evident that the visual blends produced by these two methods are generally similar, if not better in the case of UNET.</p></div>
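The four approaches discussed above differ mainly in how the text conditioning is scheduled across the denoising steps. A minimal sketch of these schedules, with plain Python lists standing in for CLIP text embeddings and function names that are illustrative only (a real implementation would feed these embeddings to a Stable Diffusion pipeline):

```python
# Sketch of the conditioning schedules behind the four blending methods.
# e1 and e2 stand in for the text embeddings of the first and second prompt.

def textual(e1, e2, ratio=0.5):
    """TEXTUAL: a single interpolated embedding, used at every timestep."""
    return [(1 - ratio) * a + ratio * b for a, b in zip(e1, e2)]

def switch_schedule(e1, e2, steps, switch_at):
    """SWITCH: condition on e1 up to switch_at, then on e2.
    The choice of switch_at is the trial-and-error parameter discussed above."""
    return [e1 if t < switch_at else e2 for t in range(steps)]

def alternate_schedule(e1, e2, steps):
    """ALTERNATE: swap the conditioning embedding at every timestep."""
    return [e1 if t % 2 == 0 else e2 for t in range(steps)]

def unet_conditioning(e1, e2):
    """UNET: within a single U-Net forward pass, the cross-attention layers
    of the encoder (down) blocks see e1 while the decoder (up) blocks see e2."""
    return {"down_blocks": e1, "up_blocks": e2}
```

Note how only `textual` is symmetric in its two arguments (up to the ratio), which mirrors the primary-modifier asymmetry observed for the other three methods.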
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">Conclusions</head><p>Through this paper we tried to answer a novel research question: is it possible to produce visual concept blends through diffusion models? We compared different possible solutions to force a diffusion model (more specifically Stable Diffusion <ref type="bibr" target="#b3">[4]</ref>) to generate contents that represent the blend of two separated concepts. We collected three different alternatives from existing publications and from the web. Additionally, we propose a completely new method, which we call UNET that exploits the internal architecture of the adopted diffusion model. We collected the outputs of the different methods on 4 different categories of test; namely, pairs of animals, animal and object, compound words, and real life scenarios. For each of these categories we produced various different pairs of concepts, and generated all blends (in total, four blended images for each pair of prompts).</p><p>The quality of a blend, as any creative endeavor, has a subjective component on it. Thus, to evaluate which approach is more adept at this task (in relation to human perception) we devised a user study that was run by 23 subjects. In it, participants were asked to rank the results of the blending methods. It is worth noting that two participants did not rank all methods, but 21 full surveys were submitted. We still used the partial surveys to compare those pairs where the ranks were available.</p><p>From the user study it results that there is no single best blending method, but the perceived quality varies from pair to pair and, more importantly, from category to category. And yet, from a positive perspective, we can answer our research question on the affirmative: is it possible to produce visual conceptual blends through diffusion models, and the results are often quite compelling (see the samples in Figure <ref type="figure" target="#fig_2">2</ref>. 
Indeed, the survey participants expressed surprise at some of them.</p><p>An important point to make is that, for this work, we used the latent space of Stable Diffusion directly, that is, without any kind of fine-tuning or added training. Thus, our results are less fragile to model updates and do not require significant effort to implement and execute. This is consistent with our original stated goal of understanding how to manipulate the latent space as a representation of concepts. This work only scratches the surface of this topic, and we hope that it can inspire new discussion and further analysis.</p><p>For future work, note that our blends are based on very simple (mainly one-word) prompts. This allows us to better understand the impact of the operations (in contrast to the subtleties of prompt engineering), but has the disadvantage of working with very general concepts, which are prone to ambiguities and misinterpretations. It would thus be interesting to explore ways to guarantee a more specific identification of the concepts selected for blending.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: A visualization of the proposed analysis. From left to right: (a) given two input textual concepts ("dog", "rabbit"), (b) four different techniques are applied to explore multiple ways to blend them together through stable diffusion and (c) the obtained outputs are compared with qualitative analysis and a user study.</figDesc><graphic coords="1,120.40,304.10,352.00,144.25" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Samples of two blends per category.</figDesc><graphic coords="7,141.91,65.61,315.90,622.90" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Comparison of the blending methods. On the left, the individual prompts, and on the right, the results of the blending methods. All the images are generated starting from the same identical initial noise.</figDesc><graphic coords="10,83.28,61.44,428.70,377.15" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Denoising U-Net 𝐸 𝐷 input prompt 𝑝</head><label></label><figDesc></figDesc><table /><note>prompt encoder Stable Diffusion (SD) is a text-to-image generative model developed by Rombach et al. in 2022</note></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 1</head><label>1</label><figDesc>Mean and first-mode rank in the results of the survey over 24 concept pairs (with 23 participants).</figDesc><table><row><cell></cell><cell>Prompt</cell><cell cols="8">ALTERNATE mean mode mean mode mean mode mean mode SWITCH UNET TEXTUAL</cell></row><row><cell>PairOfAnimals</cell><cell>Elephant-Duck Lion-Cat Frog-Lizard Fox-Hamster Rabbit-Dog</cell><cell>2.43 2.52 3.73 2.68 2.95</cell><cell>2 2 4 3 4</cell><cell>3.21 2.30 1.91 1.82 2.68</cell><cell>4 3 2 2 3</cell><cell>2.65 3.52 1.41 3.77 2.72</cell><cell>3 4 1 4 2</cell><cell>1.82 1.65 2.95 1.82 1.68</cell><cell>1 1 3 1 1</cell></row><row><cell></cell><cell>CATEGORY TOTAL</cell><cell>2.86</cell><cell>3</cell><cell>2.39</cell><cell>2</cell><cell>2.82</cell><cell>4</cell><cell>1.98</cell><cell>1</cell></row><row><cell></cell><cell>Turtle-Brain</cell><cell>2.76</cell><cell>3</cell><cell>1.38</cell><cell>1</cell><cell>3.10</cell><cell>4</cell><cell>2.86</cell><cell>2</cell></row><row><cell>Object+Animal</cell><cell>Pig-Cactus Garlic-Swan Coconut-Monkey Tortoise-Broccoli Turtle-Wood Turtle-Pizza</cell><cell>2.38 1.86 1.62 1.62 2.19 3.52</cell><cell>3 1 2 1 1 4</cell><cell>1.52 1.81 1.67 3.43 2.29 2.95</cell><cell>1 1 1 4 2 3</cell><cell>3.43 3.71 3.81 2.48 2.29 1.48</cell><cell>4 4 4 3 3 1</cell><cell>2.76 2.71 2.95 2.57 3.33 2.10</cell><cell>2 2 3 2 4 2</cell></row><row><cell></cell><cell>CATEGORY TOTAL</cell><cell>2.28</cell><cell>2</cell><cell>2.15</cell><cell>1</cell><cell>2.90</cell><cell>4</cell><cell>2.76</cell><cell>2</cell></row><row><cell></cell><cell>Butter-Fly</cell><cell>2.57</cell><cell>3</cell><cell>2.24</cell><cell>2</cell><cell>2.33</cell><cell>1</cell><cell>3.00</cell><cell>4</cell></row><row><cell>CompoundWords</cell><cell>Dragon-Fly Bull-Pit Blimp-Whale Jelly-Fish Fire-Fighter Tea-Pot Snow-Flake</cell><cell>2.62 2.81 2.95 2.86 3.00 1.62 3.33</cell><cell>3 4 3 3 4 1 4</cell><cell>3.43 
2.24 2.05 1.43 1.48 2.48 2.62</cell><cell>4 3 1 1 1 2 2</cell><cell>1.67 2.86 2.05 3.14 2.81 3.81 1.48</cell><cell>1 3 2 4 2 4 1</cell><cell>2.43 2.23 3.04 2.67 2.90 2.29 2.62</cell><cell>2 1 4 2 2 2 3</cell></row><row><cell></cell><cell>Cup-Cake</cell><cell>2.71</cell><cell>2</cell><cell>2.85</cell><cell>4</cell><cell>2.62</cell><cell>4</cell><cell>1.90</cell><cell>1</cell></row><row><cell></cell><cell>CATEGORY TOTAL</cell><cell>2.72</cell><cell>4</cell><cell>2.31</cell><cell>1</cell><cell>2.53</cell><cell>4</cell><cell>2.57</cell><cell>2</cell></row><row><cell></cell><cell>Kung fu-Panda</cell><cell>1.95</cell><cell>1</cell><cell>2.76</cell><cell>3</cell><cell>2.05</cell><cell>2</cell><cell>3.43</cell><cell>4</cell></row><row><cell>Real</cell><cell>Man-Bat Beaver-Duck</cell><cell>3.67 2.24</cell><cell>4 1</cell><cell>1.67 1.95</cell><cell>1 2</cell><cell>1.67 3.43</cell><cell>2 4</cell><cell>3.05 2.48</cell><cell>3 3</cell></row><row><cell></cell><cell>CATEGORY TOTAL</cell><cell>2.62</cell><cell>4</cell><cell>2.13</cell><cell>2</cell><cell>2.38</cell><cell>2</cell><cell>2.98</cell><cell>3</cell></row><row><cell></cell><cell>GLOBAL TOTAL</cell><cell>2.61</cell><cell>3</cell><cell>2.26</cell><cell>1</cell><cell>2.68</cell><cell>4</cell><cell>2.54</cell><cell>3</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">Project repository: https://github.com/LorenzoOlearo/blending-diffusion-models</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">Compel: https://github.com/damian0815/compel</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">Stable Diffusion web UI: https://github.com/AUTOMATIC1111/stable-diffusion-webui</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>Work funded by the European Union-Next Generation EU within the project NRPP M4C2, Investment 1.,3 DD. 341-15 march 2022-FAIR; Future Artificial Intelligence Research -Spoke 4-PE00000013-D53C22002380006. Part of this work was supported by the MUR for REGAINS, the Department of Excellence DISCo at the University of Milano-Bicocca, the PRIN project PINPOINT Prot. 2020FNEB27, CUP H45E21000210001, and by the NVIDIA Corporation with the RTX A5000 GPUs granted through the Academic Hardware Grant Program to the University of Milano-Bicocca for the project "Learned representations for implicit binary operations on real-world 2D-3D data. "</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">Knowledge representation and reasoning</title>
		<author>
			<persName><forename type="first">R</forename><surname>Brachman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Levesque</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2004">2004</date>
			<publisher>Morgan Kaufmann</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">Conceptual Spaces: The Geometry of Thought, A Bradford book</title>
		<author>
			<persName><forename type="first">P</forename><surname>Gardenfors</surname></persName>
		</author>
		<ptr target="https://books.google.it/books?id=FSLFjw1EcBwC" />
		<imprint>
			<date type="published" when="2004">2004</date>
			<publisher>MIT Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<author>
			<persName><forename type="first">G</forename><surname>Fauconnier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Turner</surname></persName>
		</author>
		<ptr target="https://books.google.it/books?id=FdOLriVyzwkC" />
		<title level="m">The Way We Think: Conceptual Blending And The Mind&apos;s Hidden Complexities</title>
				<imprint>
			<publisher>Basic Books</publisher>
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">High-resolution image synthesis with latent diffusion models</title>
		<author>
			<persName><forename type="first">R</forename><surname>Rombach</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Blattmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Lorenz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Esser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Ommer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. IEEE/CVF conf. on comp. vision and pattern recog</title>
				<meeting>IEEE/CVF conf. on comp. vision and pattern recog</meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="10684" to="10695" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<author>
			<persName><forename type="first">D</forename><surname>Podell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>English</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Lacey</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2307.01952</idno>
		<title level="m">Sdxl: Improving latent diffusion models for high-resolution image synthesis</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Deep unsupervised learning using nonequilibrium thermodynamics</title>
		<author>
			<persName><forename type="first">J</forename><surname>Sohl-Dickstein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Weiss</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Maheswaranathan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ganguli</surname></persName>
		</author>
		<ptr target="https://proceedings.mlr.press/v37/sohl-dickstein15.html" />
	</analytic>
	<monogr>
		<title level="m">Proc. ICML&apos;15</title>
				<meeting>ICML&apos;15<address><addrLine>PMLR, Lille, France</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2015">2015</date>
			<biblScope unit="volume">37</biblScope>
			<biblScope unit="page" from="2256" to="2265" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">P</forename><surname>Kingma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Welling</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1312.6114</idno>
		<title level="m">Auto-encoding variational bayes</title>
				<imprint>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">U-net: Convolutional networks for biomedical image segmentation</title>
		<author>
			<persName><forename type="first">O</forename><surname>Ronneberger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Fischer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Brox</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. MICCAI 2015</title>
				<meeting>MICCAI 2015</meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="234" to="241" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<author>
			<persName><forename type="first">R</forename><surname>Confalonieri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Pease</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Schorlemmer</surname></persName>
		</author>
		<title level="m">Concept Invention: Foundations, Implementation, Social Aspects and Applications, Computational Synthesis and Creative Systems</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Efficient creativity: Constraint-guided conceptual combination</title>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">J</forename><surname>Costello</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">T</forename><surname>Keane</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Cognitive Science</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="page" from="299" to="349" />
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<author>
			<persName><forename type="first">G</forename><surname>Fauconnier</surname></persName>
		</author>
		<title level="m">Mental spaces: Aspects of meaning construction in natural language</title>
				<imprint>
			<publisher>CUP</publisher>
			<date type="published" when="1994">1994</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Conceptual blending in advertising</title>
		<author>
			<persName><forename type="first">A</forename><surname>Joy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">F</forename><surname>Sherry</surname><genName>Jr</genName></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Deschenes</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of business research</title>
		<imprint>
			<biblScope unit="volume">62</biblScope>
			<biblScope unit="page" from="39" to="49" />
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">E pluribus unum: Formalisation, use-cases, and computational support for conceptual blending</title>
		<author>
			<persName><forename type="first">O</forename><surname>Kutz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Bateman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Neuhaus</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Mossakowski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Bhatt</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Computational Creativity Research: Towards Creative Machines</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="167" to="196" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Beyond visual metaphor: A new typology of visual rhetoric in advertising</title>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">J</forename><surname>Phillips</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">F</forename><surname>Mcquarrie</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Marketing theory</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="page" from="113" to="136" />
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Vismantic: Meaning-making with images</title>
		<author>
			<persName><forename type="first">P</forename><surname>Xiao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Linkola</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ICCC</title>
		<imprint>
			<biblScope unit="page" from="158" to="165" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Visifit: Structuring iterative improvement for novice designers</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">B</forename><surname>Chilton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">J</forename><surname>Ozmen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">H</forename><surname>Ross</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Liu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. 2021 CHI Conf. on Human Factors in Computing Systems</title>
				<meeting>2021 CHI Conf. on Human Factors in Computing Systems</meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="1" to="14" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Let&apos;s figure this out: A roadmap for visual conceptual blending</title>
		<author>
			<persName><forename type="first">J</forename><surname>Cunha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Martins</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Machado</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of International Conference on Innovative Computing and Cloud Computing</title>
				<meeting>of International Conference on Innovative Computing and Cloud Computing</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Does stable diffusion dream of electric sheep?</title>
		<author>
			<persName><forename type="first">S</forename><surname>Melzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Peñaloza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Raganato</surname></persName>
		</author>
		<ptr target="https://ceur-ws.org/Vol-3511/paper_09.pdf" />
	</analytic>
	<monogr>
		<title level="m">Proc. ISD7</title>
				<meeting>ISD7</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="volume">3511</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<author>
			<persName><forename type="first">W</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Bai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Rao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Lu</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2302.04867</idno>
		<title level="m">Unipc: A unified predictor-corrector framework for fast sampling of diffusion models</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Learning transferable visual models from natural language supervision</title>
		<author>
			<persName><forename type="first">A</forename><surname>Radford</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">W</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Hallacy</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of International conference on machine learning</title>
				<meeting>of International conference on machine learning<address><addrLine>PMLR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="8748" to="8763" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
