<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Advancements in Text-to-Image Generation: A Comparative Study of Model Architectures, Datasets, and Performance Metrics</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Tejas</forename><surname>Goyal</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Computer Science and Engineering</orgName>
								<orgName type="institution">PES University</orgName>
								<address>
									<addrLine>100 Feet Ring Road BSK III Stage</addrLine>
									<postCode>PO-560085</postCode>
									<settlement>Bangalore</settlement>
									<region>Karnataka</region>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Kaveesh</forename><surname>Khattar</surname></persName>
							<email>kaveeshkhattar@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="department">Computer Science and Engineering</orgName>
								<orgName type="institution">PES University</orgName>
								<address>
									<addrLine>100 Feet Ring Road BSK III Stage</addrLine>
									<postCode>PO-560085</postCode>
									<settlement>Bangalore</settlement>
									<region>Karnataka</region>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Kubtoor</forename><forename type="middle">Patel</forename><surname>Dhruv</surname></persName>
							<email>kpdhruvin@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="department">Computer Science and Engineering</orgName>
								<orgName type="institution">PES University</orgName>
								<address>
									<addrLine>100 Feet Ring Road BSK III Stage</addrLine>
									<postCode>PO-560085</postCode>
									<settlement>Bangalore</settlement>
									<region>Karnataka</region>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Aditya</forename><surname>Hombal</surname></persName>
							<email>hombaladitya30@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="department">Computer Science and Engineering</orgName>
								<orgName type="institution">PES University</orgName>
								<address>
									<addrLine>100 Feet Ring Road BSK III Stage</addrLine>
									<postCode>PO-560085</postCode>
									<settlement>Bangalore</settlement>
									<region>Karnataka</region>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Mamatha</forename><forename type="middle">Hosalli</forename><surname>Ramappa</surname></persName>
							<email>mamathahr@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="department">Computer Science and Engineering</orgName>
								<orgName type="institution">PES University</orgName>
								<address>
									<addrLine>100 Feet Ring Road BSK III Stage</addrLine>
									<postCode>PO-560085</postCode>
									<settlement>Bangalore</settlement>
									<region>Karnataka</region>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Advancements in Text-to-Image Generation: A Comparative Study of Model Architectures, Datasets, and Performance Metrics</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">4A9B8479B3C65D66F194BD0CDC6D0AC7</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T17:47+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Image Models</term>
					<term>Image Processing</term>
					<term>Text-to-Image</term>
					<term>Generative AI</term>
					<term>GAN</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Text-to-image generation is a fast-growing field that has received considerable attention in recent years. This study provides a thorough comparative examination of state-of-the-art text-to-image generation models, with the goal of giving an overview of their advances and capabilities. The investigation focuses on the model architectures, the datasets utilised for training and evaluation, and the performance metrics used to assess image-generation quality. By comparing and contrasting these models, researchers and practitioners can gain significant insight into the strengths and shortcomings of different techniques, enabling informed decisions when selecting the best text-to-image generation model for a given application.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Text-to-image and image-to-text generation <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref> has become very popular because of its wide range of uses. The goal of this comparative analysis is to identify the advantages and disadvantages of various text-to-image generation techniques <ref type="bibr" target="#b2">[3]</ref>. By investigating their architectural designs, we can learn about the underlying mechanisms that contribute to their image-synthesis capabilities. The models investigated include CogView (ELBO), discrete variational auto-encoders (dVAE), multi-stage AttnGAN, generative adversarial networks (GANs), LSTM+GAN, CycleGAN+BERT, DF-GAN, MirrorGAN, VQ-SEG (a modified VQ-VAE), StackGAN with fine-tuned BERT text encoding, and DALL-E 2. In addition to architectural comparisons, we examine the datasets these models use for training and evaluation. This includes well-known benchmarks such as COCO and CUB, as well as bespoke datasets created expressly for text-to-image generation <ref type="bibr" target="#b3">[4]</ref>. The diversity and size of these datasets, as well as any pre-processing techniques applied, have a significant impact on model performance. Various performance indicators have been used in the field to analyse the quality of generated images. Our study incorporates human assessments, user studies, and other qualitative evaluations employed by the analysed models, along with perceptual similarity metrics such as the Fréchet Inception Distance and the Inception Score. This allows for a thorough assessment of each model's visual accuracy and realism. We hope that this comparative analysis will give scholars and practitioners a full grasp of the various text-to-image generation techniques. 
By emphasising the strengths and drawbacks of each model in terms of architectural choices, dataset utilisation, and performance indicators, we provide vital insights for making informed decisions when choosing the most appropriate model for a given application. In the following sections we present a detailed study of the model architectures, datasets, and performance metrics, together with a comprehensive comparative analysis. We conclude by summarising the key findings and outlining potential future research directions in text-to-image generation.</p></div>
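The Fréchet Inception Distance mentioned above compares the Gaussian statistics (mean and covariance) of real and generated image features. As a point of reference, not part of the original paper, a minimal NumPy sketch, assuming the two feature sets have already been summarised by their means and covariances:

```python
import numpy as np

def matrix_sqrt_psd(m):
    """Square root of a symmetric positive semi-definite matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(m)
    vals = np.clip(vals, 0.0, None)  # guard against tiny negative eigenvalues
    return vecs @ np.diag(np.sqrt(vals)) @ vecs.T

def frechet_distance(mu1, cov1, mu2, cov2):
    """Frechet distance between two Gaussians:
    ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 (C1 C2)^(1/2))."""
    diff = mu1 - mu2
    s1 = matrix_sqrt_psd(cov1)
    # Tr(sqrt(C1 C2)) computed via the symmetric product sqrt(C1) C2 sqrt(C1)
    covmean = matrix_sqrt_psd(s1 @ cov2 @ s1)
    return float(diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean))
```

Identical distributions give a distance of zero; in practice the means and covariances are estimated from Inception-network activations on the two image sets.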
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Text to Image Models</head><p>Text-to-image generation is a difficult problem that seeks to translate textual descriptions automatically into aesthetically realistic and semantically consistent images. This task is critical in a variety of applications, including computer vision <ref type="bibr" target="#b4">[5]</ref>, multimedia content generation, and virtual reality. The objective is to bridge the gap between natural language and visual representations, making it possible for machines to interpret and produce visual content. Several models have been created for this purpose, including earlier approaches such as CogView and dVAE, as well as cutting-edge techniques such as various GAN models and BERT. These models use large-scale image datasets such as MSCOCO, CUB, and Oxford 102 to learn the relationship between written descriptions and visual representations. By creating high-quality visuals that correspond to the provided text, these models help to improve human-machine interaction and facilitate creative content development. This overview lays the groundwork for a more in-depth examination and comparison of the various models in the next sections of this study. Brief introduction to the models:</p><p>1. CogView: CogView is a state-of-the-art text-to-image generation model that combines cognitive theories and deep learning methods. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Datasets</head><p>Here are summaries of the mentioned datasets:</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">CogView</head><p>CogView is trained with a mix of supervised and reinforcement learning losses: the supervised loss matches the generated images to the written descriptions, while the reinforcement learning loss encourages the model to create aesthetically pleasing images. The system is trained on a collection of image-caption pairs; the image decoder is trained on the images and the text encoder on the captions. The model is trained with the Adam optimizer at a learning rate of 3e-4. The CogView model has been demonstrated to be capable of producing realistic pictures from text descriptions, and was found to be competitive with existing text-to-image generation techniques on a range of datasets. Details of the CogView architecture:</p><p>• The text encoder is a unidirectional Transformer that receives a text caption as input and produces a sequence of latent codes.</p><p>• The image decoder is a convolutional neural network that uses the text encoder's latent codes to create an image.</p><p>• The model is trained on a dataset of 1.56 million Chinese text-image pairs.</p><p>• The model is trained for 144,000 steps.</p><p>• The learning rate is decayed using a cosine annealing schedule.</p><p>• The batch size is 6,144.</p><p>• The Adam optimizer is used with a learning rate of 3e-4.</p><p>• The model is trained with mixed 16-bit and 32-bit precision.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>• The model uses a technique called Precision Bottleneck Relaxation (PB-Relax) to stabilize training.</p><p>• The model uses a technique called Sandwich LayerNorm to improve the stability of training.</p></div>
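Several of the training recipes in this section decay the learning rate with a cosine annealing schedule. A minimal sketch of that schedule, with the function name and the `min_lr` default being illustrative choices rather than details from CogView's code:

```python
import math

def cosine_annealed_lr(step, total_steps, base_lr=3e-4, min_lr=0.0):
    """Cosine-annealed learning rate: starts at base_lr, decays smoothly to min_lr."""
    progress = min(step / total_steps, 1.0)
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * progress))
```

At step 0 this returns the base rate, halfway through training the midpoint, and at the final step the minimum rate.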
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">dVAE (disentangled Variational Autoencoder)</head><p>dVAE (disentangled Variational Autoencoder) is a text-to-image synthesis model that generates pictures from text descriptions using a disentangled latent space. The model architecture is as follows: the text encoder takes a text caption as input and produces a sequence of latent codes, and the image decoder takes those latent codes and produces an image.</p><p>Because the dVAE's latent space is disentangled, the latent codes reflect distinct features of the picture, which enables the model to produce more realistic and varied visuals. A mix of supervised and reinforcement learning losses is used to train the model: the supervised loss matches the generated images to the written descriptions, while the reinforcement learning loss encourages the model to create aesthetically pleasing pictures. The algorithm is trained on a dataset of image-caption pairs; the photos are used to train the image decoder, while the text captions are used to train the text encoder. The model is trained with the Adam optimizer at a learning rate of 3e-4. The dVAE model has been demonstrated to produce realistic visuals from text descriptions, and was found to be competitive with existing text-to-image generation techniques on a range of datasets <ref type="bibr" target="#b6">[7]</ref>. Here are some of the details of the dVAE architecture:</p><p>• The text encoder is a bidirectional LSTM that takes a text caption as input and produces a sequence of latent codes.</p><p>• The image decoder is a convolutional neural network that takes the latent codes from the text encoder and produces an image.</p><p>• The latent space of the dVAE is disentangled into three factors of variation: pose, shape, and appearance.</p><p>• The model is trained on a dataset of 100,000 text-image pairs.</p><p>• The model is trained for 100 epochs.</p><p>• The learning rate is decayed using a cosine annealing schedule.</p><p>• The batch size is 64.</p><p>• The Adam optimizer is used with a learning rate of 3e-4.</p><p>Specific training details of the dVAE model:</p><p>• The model uses the Wasserstein loss to improve the stability of training.</p><p>• The model uses KL annealing to gradually increase the weight of the KL divergence loss during training.</p></div>
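The KL annealing mentioned in the last bullet can be sketched as a weight on the KL term that ramps up from 0 to 1 early in training; the warm-up length and function names below are generic illustrations, not the dVAE's actual implementation:

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """KL divergence from N(mu, exp(logvar)) to the standard normal prior, summed over dims."""
    return -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))

def annealed_vae_loss(recon_loss, mu, logvar, step, warmup_steps=10_000):
    """Reconstruction loss plus a KL term whose weight ramps linearly from 0 to 1."""
    weight = min(1.0, step / warmup_steps)  # KL annealing schedule
    return recon_loss + weight * gaussian_kl(mu, logvar)
```

Early in training only the reconstruction term matters, which helps the decoder learn before the latent codes are pushed toward the prior.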
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.">Multi-Stage AttnGAN</head><p>AttnGAN (Attention GAN) <ref type="bibr" target="#b7">[8]</ref> is a text-to-image synthesis model that employs attention to regulate the creation of images from text descriptions. The model architecture is as follows: the text encoder takes a text caption as input and produces a sequence of latent codes; the image generator takes the latent codes from the text encoder and produces an image; and the image discriminator takes an image as input and produces a probability that the image is real or fake. When producing the image, the attention mechanism allows the image generator to focus on selected sections of the written description, so the model can produce visuals that are more consistent with the text. A mix of adversarial and supervised losses is used to train the model: the adversarial loss pits the image generator and discriminator against each other, while the supervised loss matches the generated images to the written descriptions. The algorithm is trained on a dataset of images and text captions; the images are used to train the image discriminator, while the text captions are used to train the text encoder and image generator. The Adam optimizer is used with a learning rate of 3e-4. The AttnGAN model has been demonstrated to be capable of producing realistic pictures from text descriptions, and was found to be competitive with existing text-to-image generation techniques on a range of datasets <ref type="bibr" target="#b8">[9]</ref>. Details of the AttnGAN architecture:</p><p>• The text encoder is a bidirectional LSTM that takes a text caption as input and produces a sequence of latent codes.</p><p>• The image generator is a convolutional neural network that takes the latent codes from the text encoder and produces an image.</p><p>• The image discriminator is a convolutional neural network that takes an image as input and produces a probability that the image is real or fake.</p><p>Here are some specific training details of the AttnGAN model:</p><p>• The model is trained for 100 epochs.</p><p>• The batch size is 64.</p><p>• The Adam optimizer is used with a learning rate of 3e-4.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.4.">CycleGAN+BERT</head><p>• The BERT model <ref type="bibr" target="#b9">[10]</ref> is a Transformer-based model that takes the text caption as input and produces a sequence of hidden states.</p><p>• The image generator is a convolutional neural network that uses the latent codes from the text encoder to create an image.</p><p>• The image discriminator is a convolutional neural network that takes an image as input and produces a probability that the image is real or fake.</p><p>• The model uses the Wasserstein loss to improve the stability of training.</p><p>• The model is trained on a dataset of 500,000 text-image pairs.</p><p>• The training runs for 100 epochs, the learning rate is decayed using a cosine annealing schedule, and the batch size is 64.</p><p>• The Adam optimizer is used with a learning rate of 3e-4.</p></div>
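The word-level attention that AttnGAN-style models use, letting each image sub-region focus on relevant words, can be sketched in simplified form as a softmax over region-word similarity scores. This is a generic illustration of the mechanism, not the paper's code:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / np.sum(e, axis=axis, keepdims=True)

def word_attention(image_feats, word_feats):
    """Attend over word features for each image sub-region.
    image_feats: (R, D) sub-region features; word_feats: (T, D) word features.
    Returns (R, D) word-context vectors and the (R, T) attention map."""
    scores = image_feats @ word_feats.T   # (R, T) region-word similarity
    weights = softmax(scores, axis=1)     # each region's distribution over words
    context = weights @ word_feats        # (R, D) weighted word context
    return context, weights
```

The context vectors are then fused with the region features so that fine image details are driven by the most relevant words.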
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.5.">DF-GAN</head><p>The DF-GAN <ref type="bibr" target="#b10">[11]</ref> is made up of a generator, a discriminator, and a pre-trained text encoder. To guarantee the diversity of the pictures it generates, the generator takes two inputs: a sentence vector encoded by the text encoder and a noise vector sampled from a Gaussian distribution. The noise vector first passes through a fully connected layer. The image features are then upsampled by a sequence of UPBlocks; each UPBlock consists of an upsample layer, a residual block, and DFBlocks, which fuse the text and image features throughout the image-generation process. Finally, a convolution layer converts the image features into an image. The discriminator converts images into features using a sequence of DownBlocks, after which a copy of the sentence vector is concatenated with the image features. An adversarial loss is then predicted to evaluate the visual realism and semantic coherence of the inputs. By distinguishing synthetic images from authentic examples, the discriminator pushes the generator to produce pictures of higher quality and better text-image semantic coherence. The bidirectional long short-term memory (LSTM) text encoder extracts semantic vectors from the text description; the pre-trained model from AttnGAN is used directly. The image data is preprocessed by resizing and normalisation. The training procedure is as follows:</p><p>• Images undergo resizing and normalisation.</p><p>• Text undergoes tokenisation followed by vectorisation.</p><p>• Train the image encoder using a dataset of real images.</p><p>• Update the image encoder's weights using a suitable loss function (e.g., Mean Squared Error). 
• Freeze the generator and discriminator.</p><p>• Train the text encoder using the vectorized text descriptions.</p><p>• Update the text encoder's weights using a suitable loss function.</p><p>• Unfreeze the generator, discriminator, and encoders.</p><p>• Generate fake images by sampling from random noise and text features.</p><p>• The discriminator part of the GAN is trained to distinguish between real and fake images.</p><p>• The generative part of the GAN is trained to fool the discriminator by generating realistic images.</p></div>
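The adversarial objective described above is often implemented as a hinge loss on the discriminator's raw scores. The sketch below shows that common formulation for illustration; it is an assumption about the exact loss rather than a transcription of DF-GAN's code:

```python
import numpy as np

def d_hinge_loss(real_logits, fake_logits):
    """Discriminator hinge loss: push real logits above 1 and fake logits below -1."""
    return float(np.mean(np.maximum(0.0, 1.0 - real_logits)) +
                 np.mean(np.maximum(0.0, 1.0 + fake_logits)))

def g_hinge_loss(fake_logits):
    """Generator hinge loss: raise the discriminator's score on generated images."""
    return float(-np.mean(fake_logits))
```

Once the discriminator confidently separates real from fake (real logits above 1, fake below -1), its loss reaches zero and gradients stop flowing, which stabilises training.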
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.6.">MirrorGAN</head><p>The figure above depicts the MirrorGAN implementation <ref type="bibr" target="#b11">[12]</ref>, which includes a mirror structure that combines T2I (text-to-image) and I2T (image-to-text) functions. MirrorGAN's fundamental idea is to use the concept of redescription to train T2I generation: MirrorGAN regenerates the description of the created image, thereby aligning the image's underlying semantics with the given text description. The MirrorGAN model is made up of three major components, STEM, GLAM, and STREAM, each of which performs a distinct task in the model's overall operation. The training procedure is as follows:</p><p>• Pretrain a text encoder network using a large-scale text dataset (e.g., a text corpus).</p><p>• The text description is fed into this network, which encodes it into a fixed-length feature vector.</p><p>• The weights of the text encoder are updated using an appropriate loss function (such as Cross-Entropy Loss).</p><p>• The generator is trained to produce realistic images from text features and random noise.</p><p>• The discriminator is trained to discern between generated and real images.</p><p>• Turn off the discriminator and generator.</p><p>• Update the text encoder's weights using an appropriate loss function (such as Triplet Loss) after training it with the dataset's text descriptions.</p><p>• Create fake images by sampling from text features and random noise.</p><p>• Alternate between training the discriminator and the generator.</p></div>
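MirrorGAN's redescription idea amounts to penalising the mismatch between the original caption and the caption regenerated from the generated image, for example with a token-level cross-entropy. A hedged sketch of such a loss, with shapes and names chosen for illustration:

```python
import numpy as np

def redescription_loss(token_probs, target_ids):
    """Token-level cross-entropy between the caption redescribed from the
    generated image (token_probs: (T, V) per-step distributions over the
    vocabulary) and the original caption (target_ids: length-T token indices)."""
    picked = token_probs[np.arange(len(target_ids)), target_ids]
    return float(-np.mean(np.log(picked + 1e-12)))  # epsilon guards log(0)
```

When the redescribed caption puts all probability on the original tokens the loss is near zero, so minimising it pulls the generated image's semantics toward the input text.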
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.7.">VQ-SEG, a modified version of the Vector Quantized VAE (VQ-VAE)</head><p>VQ-SEG is a version of the Vector Quantized VAE (VQ-VAE) architecture designed for image synthesis and segmentation tasks. The architecture consists of several key parts. Using an encoder network, incoming images are first transformed into a representation in a lower-dimensional latent space; to capture hierarchical information at multiple scales, this encoder typically has convolutional layers followed by downsampling techniques such as pooling or strided convolutions. The latent representation is then passed through the Vector Quantization (VQ) layer, which discretely quantizes the continuous latent codes: using a codebook learned during training, it matches each latent code to the closest codeword. This discrete form facilitates computation and storage.</p><p>Modifications to the VQ-VAE architecture give VQ-SEG an additional branch for image segmentation. This branch produces pixel-wise segmentation masks that indicate the class labels of different regions of the input image. To enable segmentation, VQ-SEG incorporates an extra decoder network that produces pixel-wise predictions for each class label from the quantized latent codes; this decoder often consists of upsampling or transposed convolutions that increase the spatial resolution of the feature maps. During training, VQ-SEG combines reconstruction and segmentation losses: the reconstruction loss encourages output images that are similar to the original inputs, while the segmentation loss penalises differences between the predicted segmentation masks and the ground-truth masks. These losses are typically computed with pixel-by-pixel comparisons such as mean squared error or cross-entropy loss. 
In essence, VQ-SEG is a modified VQ-VAE architecture that combines discrete quantization with a segmentation branch to add image segmentation capabilities.</p></div>
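The vector-quantization step at the heart of this architecture, mapping each continuous latent code to its nearest codeword, can be sketched as a nearest-neighbour lookup. This is a generic VQ lookup for illustration, not the model's actual code:

```python
import numpy as np

def vector_quantize(latents, codebook):
    """Map each continuous latent (N, D) to its nearest codeword in the
    learned codebook (K, D); returns the indices and the quantized vectors."""
    # squared Euclidean distance from every latent to every codeword: (N, K)
    d = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d.argmin(axis=1)
    return idx, codebook[idx]
```

In a full VQ-VAE the non-differentiable argmin is bypassed during backpropagation with a straight-through gradient estimator.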
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.9.">DALL-E 2</head><p>The DALL-E 2 architecture uses diffusion models to produce high-resolution images conditioned on CLIP image embeddings and, optionally, text descriptions. In this enhanced architecture, CLIP embeddings are projected into and added to the current timestep embedding; in addition, four extra context tokens are projected from the CLIP embeddings and concatenated with the output sequence of the GLIDE text encoder. The text-conditioning pathway, although intended to capture natural-language elements that CLIP might miss, is found to provide minimal assistance here. To improve sample quality, training randomly removes the text caption 50% of the time and, 10% of the time, randomly sets the CLIP embeddings to zero or replaces them with learnt embeddings; guidance on the conditioning information is also used. Two trained diffusion upsampler models are required to produce high-resolution images: the first boosts the resolution from 64×64 to 256×256, while the second further upsamples the images to 1024×1024. The robustness of the upsamplers is increased by slightly corrupting the conditioning images during training, using methods such as Gaussian blur and varied BSR degradation. The upsampler architecture does not include attention layers; it uses only spatial convolutions. During inference the model is applied directly at the target resolution, demonstrating its capacity to generalise to higher resolutions without extra conditioning on the text caption. The upsamplers follow the unconditional ADMNets approach and are not conditioned on the caption. To summarise, the DALL-E 2 architecture uses the GLIDE text encoder, CLIP image embeddings, and diffusion models to produce high-resolution images; the conditioning and upsampling processes enhance the quality and resilience of the resulting images.</p></div>
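The guidance mentioned above, made possible by randomly dropping the caption during training, is classifier-free guidance: at sampling time the model extrapolates from its unconditional prediction toward its conditional one. A one-line sketch of the combination rule:

```python
import numpy as np

def guided_prediction(eps_uncond, eps_cond, guidance_scale):
    """Classifier-free guidance: extrapolate from the unconditional noise
    prediction toward the text-conditioned one. A scale of 1.0 recovers the
    plain conditional model; larger scales trade diversity for fidelity."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```

This is why the caption is dropped during training: the same network learns both predictions, and the sampler blends them at each diffusion step.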
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.10.">LSTM+GAN</head><p>1. The LSTM (Long Short-Term Memory) model <ref type="bibr" target="#b13">[14]</ref> is known for its ability to capture long-range dependencies in data. A single LSTM memory cell consists of an input gate (i_t), forget gate (f_t), output gate (o_t), cell state (c_t), and cell input activation vector. The LSTM model uses composite functions to calculate these components from the input and the previous hidden state, applying the logistic sigmoid function and the hyperbolic tangent function. The original LSTM algorithm used an approximate gradient calculation, but this work adopts backpropagation through time; training with the full gradient can, however, lead to large derivative values. At each time step the LSTM unit receives inputs from external sources and updates its internal cell state and hidden state based on these inputs and its previous states.</p><p>2. The LSTM Autoencoder Model is an unsupervised learning model built from recurrent neural networks (RNNs) composed of LSTM units. The model consists of two RNNs, an encoder LSTM and a decoder LSTM. The input to the model is a sequence of vectors, such as image patches or sets of features. The encoder LSTM processes this input sequence, and the decoder LSTM takes over after all inputs have been read, producing a prediction for the target sequence, which is the input sequence in reverse order. The decoder can be either conditioned or unconditioned: a conditioned decoder receives the last generated output frame as input, while an unconditioned decoder does not.</p><p>3. The Future Predictor Model shares the same design as the Autoencoder Model, with the key difference lying in the decoder LSTM. 
While the Autoencoder Model predicts the target sequence that matches the input sequence, the Future Predictor Model goes a step further and predicts the frames of the video that come after the input sequence. Essentially, this model is designed to forecast a longer sequence into the future, extending beyond the input sequence.</p></div>
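The gate equations described in point 1 can be made concrete with a single NumPy time step; the gate ordering and weight layout below are illustrative choices:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step.
    W: (4H, X) input weights, U: (4H, H) recurrent weights, b: (4H,) biases,
    with gates stacked in the order i (input), f (forget), o (output), g (cell input)."""
    z = W @ x + U @ h_prev + b
    H = h_prev.shape[0]
    i, f, o, g = z[:H], z[H:2*H], z[2*H:3*H], z[3*H:]
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)  # cell state update
    h = sigmoid(o) * np.tanh(c)                        # hidden state / output
    return h, c
```

With all weights at zero, every gate sits at 0.5, so the previous cell state is halved at each step; trained weights learn when to retain, overwrite, or expose that state.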
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Comparative Analysis</head><p>This section contains the analysis and findings from our investigation, which assessed the effectiveness of the eleven GAN and autoencoder models created for text-to-image conversion. One of the challenges encountered in the CogView framework is the slow generation process inherent to autoregressive models, as images are generated token by token; additionally, the use of VQ-VAE introduces blurriness as a substantial restriction. Discretizing continuous data for use with discrete variational auto-encoders (dVAEs) has disadvantages, including limited expressiveness and possible information loss. The multi-stage AttnGAN model is constrained by a lack of text-image pairs for each category and by the inclusion of more abstract captions in datasets such as COCO. Gaps exist in the ability of Generative Adversarial Networks (GANs) to produce coherent, high-quality images that are in line with the input data. While leveraging unsupervised learning, the LSTM+GAN technique struggles to produce clusters that reflect the ground truth, leading to restricted expressiveness with respect to the input information. CycleGAN+BERT's performance is hampered by insufficient training time and the absence of hyperparameter tuning due to time restrictions. Owing to its strong sensitivity to hyperparameters, DF-GAN relies primarily on pre-trained models and lacks diversity in the generated data. Basic text-embedding techniques have a negative impact on STEM integration and the quality of the outcomes for MirrorGAN. The image quality of VQ-SEG, a modified VQ-VAE, could be better; moreover, its modifications cause losses in perceptual knowledge and awareness of specific regions. Further studies are required to construct sophisticated loss functions and to generate images efficiently from text with little data for StackGAN with fine-tuned BERT text encoding. 
Finally, Conditional Adversarial Networks (cGANs) still have difficulty producing visually and semantically cohesive video sequences from textual descriptions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Performance Metrics</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Mean Opinion Score (MOS)</head><p>The quality of the generated photos can be evaluated subjectively using the Mean Opinion Score (MOS). Human participants are asked to score the perceived fidelity or quality of the generated images on a numerical scale, and the ratings from several participants are averaged to obtain the MOS, which provides an overall assessment of image quality. Higher MOS values correspond to images perceived as more visually attractive or realistic, whereas lower values correspond to lower quality or fidelity. MOS evaluations can be used to measure user satisfaction and to direct enhancements to text-to-image conversion models <ref type="bibr" target="#b14">[15]</ref>.</p></div>
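Computing the MOS itself is just an average over participants' ratings; a tiny sketch, with the spread function added as an illustrative extra to indicate rater agreement:

```python
import statistics

def mean_opinion_score(ratings):
    """Average the per-participant quality ratings (e.g. on a 1-5 scale)."""
    return statistics.fmean(ratings)

def mos_with_spread(ratings):
    """MOS together with the sample standard deviation, a rough indication
    of how much the raters agreed."""
    return statistics.fmean(ratings), statistics.stdev(ratings)
```

A high MOS with low spread indicates consistent approval; a high MOS with high spread suggests the images polarise raters and deserve a closer look.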
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">Future Research Directions</head><p>Text-to-image generation models have come a long way, but a number of areas still need further research and development. This section highlights potential future research directions based on the current state of the models and the areas identified for improvement. Improved Semantic Understanding: Enhancing the semantic understanding of text is crucial for generating more accurate and contextually relevant images. Future research could focus on incorporating advanced natural language processing techniques, such as pre-trained language models or knowledge graphs, to capture a deeper understanding of text semantics. This could enable models to generate images that align more closely with the intended meaning of the input text. One potentially beneficial direction is generating knowledge graphs from text embeddings to greatly improve contextual and positional understanding.</p><p>Increased Resolution and Realism: Although current models have come a long way in producing high-quality images, resolution and photo-realism can still be improved. Future research could focus on developing techniques to generate images at higher resolutions, allowing for more detailed and visually appealing results. Additionally, exploring advanced loss functions or perceptual similarity metrics could further enhance the realism of generated images, making them harder to distinguish from real photographs.</p><p>Fine-grained Control and Manipulation: Current text-to-image models often lack fine-grained control over generated images. Future research could investigate methods that enable precise control and manipulation of image attributes, such as object positions, colors, and styles, based on textual input. This could involve exploring novel conditioning techniques or incorporating additional information during the generation process to produce images that align with specific user requirements.</p><p>Handling Ambiguity and Multi-modal Outputs: Textual descriptions often contain ambiguous or subjective elements that admit multiple plausible interpretations. Future research could explore methods to handle such ambiguity and generate diverse, multi-modal outputs that capture different interpretations of the same textual input. This could involve incorporating uncertainty estimation techniques, exploring variational approaches, or leveraging adversarial learning to encourage diverse image outputs. One possible approach is training the image generator on a combination of the parsed scene-graph output and the original prompt, yielding a more objective understanding that could reduce ambiguity to a reasonable extent.</p><p>Incorporating User Feedback and Interactive Generation: Interactive text-to-image generation systems that incorporate user feedback and preferences hold great potential for enhancing user satisfaction and enabling personalized image generation. Future research could focus on developing models that adapt and refine their generation process based on user interactions, allowing users to provide feedback and guide the image synthesis process in real time.</p><p>Ethical Considerations and Bias Mitigation: As text-to-image generation becomes more prevalent, it is important to address ethical considerations and mitigate potential biases in the generated content. Future research should explore methods to ensure fairness, diversity, and inclusivity in generated images, avoiding the reinforcement of harmful stereotypes or biases present in the training data. This could involve developing bias detection <ref type="bibr" target="#b15">[16]</ref> and mitigation techniques or incorporating fairness constraints during the training process. These lines of inquiry could advance the field of text-to-image generation and open up new avenues for producing contextually appropriate, high-quality images from textual input. By exploring novel methodologies and resolving these obstacles, researchers can facilitate the development of more advanced and adaptable text-to-image generation models with wider applications across diverse fields.</p></div>
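The scene-graph idea discussed above can be made concrete with a toy parser that splits a prompt around a known spatial relation into (subject, relation, object) triples. This is purely illustrative: the relation vocabulary and function names are our assumptions, not part of any surveyed model, and a real system would use a learned scene-graph parser.

```python
# Illustrative relation vocabulary (an assumption for this sketch).
RELATIONS = {"on", "under", "beside", "above", "holding"}

def extract_triples(prompt):
    """Scan the prompt for known relation words and split around them."""
    words = prompt.lower().strip(".").split()
    triples = []
    for i, word in enumerate(words):
        # A relation must have words on both sides to form a triple.
        if word in RELATIONS and 0 < i < len(words) - 1:
            subject = " ".join(words[:i])
            obj = " ".join(words[i + 1:])
            triples.append((subject, word, obj))
    return triples

# "a cat on the red mat" -> [("a cat", "on", "the red mat")]
```

Feeding such triples to the generator alongside the raw prompt is one way to give the model the explicit positional grounding discussed above.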
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="8.">Conclusion</head><p>In conclusion, our comparative study of 11 text-to-image generation models highlighted StackGAN as the top performer. StackGAN achieved a remarkable inception score of 4.44, indicating its ability to generate visually diverse and high-quality images. Additionally, StackGAN outperformed other models with an FID score of 37.7, demonstrating its superior ability to capture image fidelity and similarity to real images. While other models, such as CogView <ref type="bibr" target="#b16">[17]</ref> and dVAE, showcased strengths in specific areas, they fell short in overall performance compared to StackGAN. The models based on the GAN architecture, including Multi-Stage AttnGAN, LSTM+GAN, and CycleGAN+BERT, exhibited promising results in capturing global and local image details, but StackGAN surpassed them in both inception and FID scores. Our study also emphasized the impact of dataset selection on model performance. The MS-COCO dataset provided a diverse range of images, contributing to the evaluation and comparison of the models. The outcomes demonstrated that StackGAN could make good use of the dataset, producing better image generation results. These results offer valuable insight to the text-to-image generation field and help practitioners and researchers select models suitable for their particular requirements. Future studies can concentrate on developing StackGAN further and investigating its potential uses in a range of fields, such as virtual reality, multimedia content generation, and computer vision. Overall, our comparative analysis shows that StackGAN is the best model for text-to-image generation, performing remarkably well with an inception score of 4.44 and a Fréchet Inception Distance of 37.7. These outcomes demonstrate the effectiveness of StackGAN as a method for producing realistic and varied graphics from text descriptions.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: CogView architecture.</figDesc><graphic coords="6,110.13,84.19,375.03,147.77" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: dVAE architecture.</figDesc><graphic coords="7,110.13,84.19,375.03,210.90" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: AttnGAN architecture.</figDesc><graphic coords="8,110.13,147.40,375.03,154.58" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: CycleGAN+BERT architecture.</figDesc><graphic coords="9,110.13,84.19,375.02,271.54" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: DF-GAN architecture.</figDesc><graphic coords="10,110.13,260.91,375.04,173.79" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: MirrorGAN architecture.</figDesc><graphic coords="11,110.13,218.61,375.01,142.93" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 7 :</head><label>7</label><figDesc>Figure 7: VQVAE: The architecture of the scene-based method: Images are created from input text with optional layout. Transformer creates tokens that networks then encode and decode.</figDesc><graphic coords="12,110.13,150.00,375.04,259.64" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>4.8.</head><label>8</label><figDesc>StackGAN + Fine-tuned BERT Text Encoding Models: Using the BERT model and the StackGAN architecture, realistic visuals are produced from textual descriptions. The architecture has two stages: Stage 1 turns text into low-resolution images, and Stage 2 refines those images into high-resolution versions. In the BERT-based text embedding process, a pretrained BERT model is fine-tuned on the target dataset; this fine-tuning enables the model to efficiently comprehend and represent the semantics of the textual descriptions. Stage 1 of the StackGAN model produces low-resolution images that roughly depict the colour and shape mentioned in the text. Its generator network takes the BERT-based text embedding vector and a random noise vector as input and generates a low-resolution image matching the description, while its discriminator network compares the generated low-resolution images with real images conditioned on the textual descriptions. Stage 2 refines the low-resolution images from Stage 1 into high-resolution images with finer details. Its generator takes the Stage-1 image and the BERT-based text embedding vector as input and produces an upscaled image corresponding to the description, and its discriminator compares the generated high-resolution images with real high-resolution images conditioned on the text. Training proceeds in mini-batches over many iterations, with the discriminator and generator parameters updated adversarially. Carefully chosen learning rates for the discriminator and generator ensure successful convergence during training. By combining BERT-based text embeddings with the hierarchical image generation approach of StackGAN, the model seeks to produce realistic images [13] that are coherent with the given textual descriptions.</figDesc></figure>
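The two-stage data flow described above can be sketched as a skeleton showing how each stage is conditioned. Everything here is an illustrative assumption: the shapes are arbitrary, and the trivial fill/upsampling stands in for the real generator and refinement networks.

```python
import numpy as np

def stage1_generator(noise, text_emb, low_res=64):
    """Stage 1: map noise + text embedding to a rough low-resolution image."""
    # Placeholder for a real CNN: broadcast a single conditioning value.
    cond = float(np.tanh(noise.mean() + text_emb.mean()))
    return np.full((low_res, low_res, 3), cond)

def stage2_generator(low_res_img, text_emb, scale=4):
    """Stage 2: refine the Stage-1 output into a higher-resolution image."""
    # Nearest-neighbour upsampling stands in for the refinement network;
    # a real Stage-2 generator would also attend to text_emb.
    return low_res_img.repeat(scale, axis=0).repeat(scale, axis=1)

noise = np.zeros(100)      # random noise vector (zeroed for determinism)
text_emb = np.zeros(768)   # BERT-sized text embedding (assumption)
low = stage1_generator(noise, text_emb)    # shape (64, 64, 3)
high = stage2_generator(low, text_emb)     # shape (256, 256, 3)
```

The point of the sketch is the interface: Stage 2 consumes both the Stage-1 image and the same text embedding, which is what lets it add text-consistent detail rather than merely upscaling.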
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_8"><head>Figure 8 :</head><label>8</label><figDesc>Figure 8: StackGAN + Fine-tuned BERT architecture.</figDesc><graphic coords="13,151.80,218.09,291.68,179.37" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_9"><head>Figure 9 :</head><label>9</label><figDesc>Figure 9: DALLE 2 architecture.</figDesc><graphic coords="14,110.13,175.47,375.02,162.64" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_10"><head>Figure 10 :</head><label>10</label><figDesc>Figure 10: LSTM+GAN architecture: The Composite Model forecasts the future of natural image patches. The first two rows are ground-truth sequences. The model uses 16 input frames and displays the most recent 10. The true future is shown in the next 13 frames. The predicted and reconstructed frames are shown for two model examples.</figDesc><graphic coords="15,110.13,187.17,375.01,173.24" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_11"><head>Figure 11 :</head><label>11</label><figDesc>Figure 11: Scene graphs giving greater positional knowledge.</figDesc><graphic coords="18,110.13,84.19,375.03,473.18" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>Multi-Stage AttnGAN is a multistage attention-based GAN model that refines the produced pictures gradually. It uses a hierarchical structure to collect both global and local picture data, resulting in highquality images that match the specified text descriptions. 4. LSTM+GAN: To produce visuals from text, LSTM+GAN combines long short-term memory (LSTM) networks with GANs. The LSTM component makes it easier to model sequential information in text, while the GAN component guarantees that the produced pictures are both visually appealing and semantically appropriate. 5. CycleGAN+BERT: CycleGAN+BERT is a sophisticated image-to-image translation model that combines CycleGAN with BERT, a pre-trained language model. This paradigm facilitates cross-modal translation between textual descriptions and visual representations by using the bidirectional link between text and images. 6. GAN: GAN (Generative Adversarial Network) is a fundamental paradigm for text-toimage generation. It consists of a discriminator network and a generator network that engage in competition during training. Eventually, the generator produces realistic images from text by learning to create images that deceive the discriminator. 7. DF-GAN: Deep Fusion Generative Adversarial Network (DF-GAN) is a GAN variation that uses deep fusion methods to collect fine-grained features during picture production. It intends to generate high-resolution pictures with increased visual quality and semantic coherence. 8. MirrorGAN: MirrorGAN makes use of an innovative mirrored approach to improve the alignment of text and picture elements. 
It employs a two-stage generating process, with the first focusing on global coherence and the second on local details, resulting in aesthetically appealing visuals.</figDesc><table /><note>To generate aesthetically consistent pictures from verbal descriptions, it employs attention processes and generative adversarial networks (GANs) [6]. 2. dVAE: dVAE (disentangled Variational Autoencoder) is a novel model that uses variational autoencoders to disentangle several aspects of variation in pictures. This gives the model more control over the generation process, allowing it to generate various and relevant visuals based on text input. 3. Multi-Stage AttnGAN: 9. VQSEG: VQ-SEG (Vector Quantized Variational Autoencoder with Semantic Expansion and Geometric Constraints) is a model that combines vector quantization, variational autoencoders, and semantic expansion techniques. It guarantees that the produced pictures have both semantic consistency and visual quality, making them true to the written descriptions supplied. 10. StackGAN:StackGAN is a two-step stacked generative adversarial network that creates pictures. The first step creates low-resolution pictures based on text descriptions, which are then refined to produce high-resolution images with increased details and realism. 11. Dalle2: Dalle2 is a DALL-E model variation that combines transformers with VQ-VAE (Vector Quantized Variational Autoencoder). It excels at producing very different and imaginative pictures based on text input, providing a wide range of text-to-image conversion options. These models under consideration provide a broad variety of strategies and approaches for text-to-image creation, each with its own set of strengths and qualities.</note></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>1. YFCC100M (Yahoo Flickr 100 Million Creative Commons)</head><label></label><figDesc>YFCC100M is a huge dataset that contains 100 million Flickr photographs and videos. It is freely distributed under the Creative Commons licence, making it an excellent resource for computer vision and multimedia research. The dataset has been utilised for picture categorization, object identification, and deep learning applications, allowing for breakthroughs in visual perception and analysis.</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>2. Microsoft Common Objects in Context (MS-COCO)</head><label></label><figDesc>The MS-COCO benchmark dataset is commonly used for object identification, segmentation, and captioning tasks. It includes almost 200,000 photos that have precise annotations such as object bounding boxes, segmentation masks, and image descriptions. MS-COCO has made major contributions to computer vision research by generating cutting-edge models for a variety of visual comprehension problems.</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>3. CUB dataset (Caltech-UCSD Birds-200-2011)</head><label></label><figDesc>The CUB dataset is commonly utilised in computer vision for fine-grained bird species detection. It includes 200 bird species and 11,788 photos in total. Each image in the collection has bounding boxes, part positions, and characteristics labelled on it. The CUB dataset has been used to create and test algorithms for fine-grained classification, attribute prediction, and other bird species recognition tasks. 4. Oxford-102 Flowers: The Oxford-102 Flowers dataset is a well-known benchmark dataset for fine-grained flower categorization in the field of computer vision. It has 102 flower categories with a total of 8,189 photos, each labelled with the flower species it depicts. The dataset contains a wide variety of floral photos, allowing researchers to create and test algorithms for flower detection, classification, and other tasks. It has been frequently employed in the research of fine-grained visual categorization and the improvement of algorithms in this domain. 5. KTH Action Recognition: The KTH Action Recognition dataset is a popular benchmark dataset for recognising human actions in videos. It consists of videos of six separate actions: walking, jogging, running, boxing, handwaving, and clapping. The collection includes numerous sequences for each activity, performed by many people and captured from various perspectives. It is a typical dataset for assessing and developing action detection systems, such as those based on motion analysis, spatio-temporal characteristics, and deep learning approaches. 6. UCF Sports: The UCF Sports activity dataset is a well-known benchmark dataset for recognising activity in sports videos. It is a broad collection of videos that capture numerous athletic activities such as basketball, soccer, diving, horseback riding, and more. The dataset provides a diverse variety of action classes captured from various perspectives and under variable settings. It is frequently used for testing and refining action recognition algorithms, allowing researchers to advance the field of sports action analysis and video comprehension.</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_4"><head>Table 1 Dataset Information Dataset Name Dataset Size</head><label></label><figDesc></figDesc><table><row><cell>YFCC 100M</cell><cell>15 GB</cell></row><row><cell>MS-COCO</cell><cell>25 GB</cell></row><row><cell>CUB Dataset</cell><cell>1.1 GB</cell></row><row><cell>Oxford-102</cell><cell>0.32 GB</cell></row><row><cell cols="2">KTH Action Recognition 2.2 GB</cell></row><row><cell>UCF Sports</cell><cell>1.7 GB</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_5"><head>4. Architecture 4.1. CogView</head><label></label><figDesc>The tokenizer of CogView, a text-to-image synthesis model, is a vector-quantized variational autoencoder (VQ-VAE). The model architecture is as follows: the text encoder reads a text caption and generates a sequence of latent codes, and the image decoder uses the text encoder's latent codes to generate an image. After the VQ-VAE is trained to reconstruct pictures, a separate language model is utilised to translate user input text into the VQ-VAE's latent space, where image production happens.</figDesc><table /></figure>
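The vector-quantization step at the heart of a VQ-VAE tokenizer like CogView's can be illustrated with a nearest-codebook lookup. This is a sketch of the general technique only, not CogView's actual implementation: the codebook here is random and the sizes are arbitrary.

```python
import numpy as np

def quantize(latents, codebook):
    """Replace each latent vector by its nearest codebook entry.

    Returns the quantized vectors and their discrete token ids (the ids
    are what a language model would be trained to predict).
    """
    # d[i, j] = squared Euclidean distance between latent i and code j.
    d = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    tokens = d.argmin(axis=1)
    return codebook[tokens], tokens

rng = np.random.default_rng(0)
codebook = rng.normal(size=(512, 16))  # 512 codes of dimension 16 (arbitrary)
latents = rng.normal(size=(4, 16))     # encoder output for 4 spatial positions
quantized, tokens = quantize(latents, codebook)
```

Decoding then maps the chosen codebook vectors (or the token ids) back to pixels, which is why text-to-image generation reduces to predicting a sequence of these discrete tokens.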
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_6"><head>Table 2</head><label>2</label><figDesc>PERFORMANCE METRICS OF DIFFERENT MODELS</figDesc><table><row><cell>Model Name</cell><cell cols="2">Inception Score Frechet Inception Distance</cell></row><row><cell>CogView</cell><cell>32.2</cell><cell>23.6</cell></row><row><cell>Discrete Variational Autoencoder</cell><cell>23.6</cell><cell>30</cell></row><row><cell>AttentionGAN</cell><cell>4.58</cell><cell>19</cell></row><row><cell>GAN</cell><cell>17</cell><cell>23</cell></row><row><cell>LSTM + GAN</cell><cell>16</cell><cell>21</cell></row><row><cell>VQ-VAE</cell><cell>18.2</cell><cell>23.6</cell></row><row><cell>DF-GAN</cell><cell>5.10</cell><cell>19.32</cell></row><row><cell>MirrorGAN</cell><cell>4.54</cell><cell>20</cell></row><row><cell>StackGAN + BERT</cell><cell>4.44</cell><cell>37.7</cell></row><row><cell>CycleGAN + BERT</cell><cell>6</cell><cell>28</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_7"><head>Inception Score (IS) and Fréchet Inception Distance (FID)</head><label></label><figDesc>The Inception Score is a statistic employed to evaluate the calibre and variety of images produced by GANs. It indicates both image quality and diversity by measuring the difference between the individual class probabilities of each generated image and the average class probabilities across all generated images. The Fréchet Inception Distance is a metric created especially for assessing image generation models. It extends the idea by adding the Fréchet distance, which gauges how similar two distributions are: the distributions of feature representations taken from a pre-trained Inception-v3 model for both real and generated images are compared. A lower Fréchet Inception Distance indicates better image generation quality.</figDesc><table /></figure>
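Both metrics have closed forms: IS is the exponential of the mean KL divergence between each image's conditional label distribution p(y|x) and the marginal p(y), and FID between two Gaussian fits is ||μ1 − μ2||² + Tr(C1 + C2 − 2(C1C2)^{1/2}). The sketch below is simplified to diagonal covariances, so the matrix square root reduces to elementwise square roots; that is a stated simplification, and real FID uses full covariances of Inception-v3 features.

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """probs: (N, K) class probabilities p(y|x) for N generated images."""
    marginal = probs.mean(axis=0)  # p(y)
    # Per-image KL(p(y|x) || p(y)), then exponentiate the mean.
    kl = (probs * (np.log(probs + eps) - np.log(marginal + eps))).sum(axis=1)
    return float(np.exp(kl.mean()))

def fid_diagonal(mu1, var1, mu2, var2):
    """FID between two Gaussians with diagonal covariances (simplified)."""
    trace_term = var1 + var2 - 2.0 * np.sqrt(var1 * var2)
    return float(((mu1 - mu2) ** 2).sum() + trace_term.sum())

# Sanity checks: identical feature statistics give FID = 0, and
# uniform class predictions give the minimum IS of 1.
```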
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Bank cheque validation using image processing</title>
		<author>
			<persName><forename type="first">D</forename><surname>Chaudhary</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Agrawal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Madaan</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-981-15-0108-1_15</idno>
		<ptr target="https://link.springer.com/chapter/10.1007/978-981-15-0108-1_15" />
	</analytic>
	<monogr>
		<title level="m">International Conference on Advanced Informatics for Computing Research</title>
				<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="148" to="159" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Solving direction sense based reasoning problems using natural language processing</title>
		<author>
			<persName><forename type="first">V</forename><surname>Madaan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Sood</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Agrawal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kumar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Gupta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Sharma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">K</forename><surname>Shukla</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Machine Learning and Data Science: Fundamentals and Applications</title>
				<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="215" to="230" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<author>
			<persName><forename type="first">B</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Qi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Lukasiewicz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">H S</forename><surname>Torr</surname></persName>
		</author>
		<title level="m">Controllable Text-to-Image Generation</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<author>
			<persName><forename type="first">U</forename><surname>Singer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Polyak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Hayes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Yin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>An</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Hu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Ashual</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Gafni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Parikh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sonal</forename><surname>Gupta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yaniv</forename><surname>Taigman</surname></persName>
		</author>
		<title level="m">Make-A-Video: Text-to-Video Generation without Text-Video Data</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">E-gardener: building a plant caretaker robot using computer vision</title>
		<author>
			<persName><forename type="first">S</forename><surname>Chauhan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Agrawal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Madaan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2018 4th International Conference on Computing Sciences (ICCS), IEEE</title>
				<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="137" to="142" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Generative adversarial text to image synthesis</title>
		<author>
			<persName><forename type="first">S</forename><surname>Reed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Akata</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Yan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Logeswaran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Schiele</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Lee</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 33rd international conference on machine learning</title>
				<meeting>the 33rd international conference on machine learning</meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="volume">48</biblScope>
			<biblScope unit="page" from="1060" to="1069" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">DreamBooth: Fine tuning Text-to-Image diffusion models for Subject-Driven generation</title>
		<author>
			<persName><forename type="first">N</forename><surname>Ruiz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Jampani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Pritch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Rubinstein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Aberman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)</title>
				<meeting>the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Attngan: Fine-grained text to image generation with attentional generative adversarial networks</title>
		<author>
			<persName><forename type="first">T</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Gan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Huang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">proceedings of the IEEE conference on computer vision and pattern recognition (CVPR)</title>
				<meeting>the IEEE conference on computer vision and pattern recognition (CVPR)</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="1316" to="1324" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m" type="main">DF-GAN: A Simple and Effective Baseline for Text-to-Image Synthesis</title>
		<author>
			<persName><forename type="first">M</forename><surname>Tao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Tang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Jing</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Bao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Xu</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<author>
			<persName><forename type="first">T</forename><surname>Tsue</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Sen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Li</surname></persName>
		</author>
		<title level="m">Cycle Text-To-Image GAN with BERT</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">DF-GAN: A Simple and Effective Baseline for Text-to-Image Synthesis</title>
		<author>
			<persName><forename type="first">M</forename><surname>Tao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Tang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Jing</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Bao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Xu</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">MirrorGAN: Learning Text-to-image Generation by Redescription</title>
		<author>
			<persName><forename type="first">T</forename><surname>Qiao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Tao</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Realistic image generation from text by using BERT-Based embedding</title>
		<author>
			<persName><forename type="first">S</forename><surname>Na</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Do</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kim</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Electronics</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<title level="m" type="main">Unsupervised Learning of Video Representations using LSTMs</title>
		<author>
			<persName><forename type="first">N</forename><surname>Srivastava</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Mansimov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Salakhutdinov</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<title level="m" type="main">Make-A-Scene: Scene-Based Text-to-Image Generation with Human Priors</title>
		<author>
			<persName><forename type="first">O</forename><surname>Gafni</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Detecting Scenes in Fiction: A New Segmentation Task</title>
		<author>
			<persName><forename type="first">A</forename><surname>Zehe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Konle</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">K</forename><surname>Dümpelmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Gius</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Hotho</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Jannidis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Kaufmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Krug</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Puppe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Reiter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Schreiber</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Wiedmer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume</title>
		<meeting>the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume</meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><surname>Ding</surname></persName>
		</author>
		<title level="m">CogView: Mastering Text-to-Image generation via transformers</title>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
