<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
<title level="a" type="main">Synthesis of biomedical images based on generative intelligence tools</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Oleh</forename><surname>Berezsky</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">West Ukrainian National University</orgName>
								<address>
									<addrLine>11 Lvivska st</addrLine>
									<postCode>46001</postCode>
									<settlement>Ternopil</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Petro</forename><surname>Liashchynskyi</surname></persName>
							<email>p.liashchynskyi@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="institution">West Ukrainian National University</orgName>
								<address>
									<addrLine>11 Lvivska st</addrLine>
									<postCode>46001</postCode>
									<settlement>Ternopil</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Grygoriy</forename><surname>Melnyk</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">West Ukrainian National University</orgName>
								<address>
									<addrLine>11 Lvivska st</addrLine>
									<postCode>46001</postCode>
									<settlement>Ternopil</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Maksym</forename><surname>Dombrovskyi</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">West Ukrainian National University</orgName>
								<address>
									<addrLine>11 Lvivska st</addrLine>
									<postCode>46001</postCode>
									<settlement>Ternopil</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Mykola</forename><surname>Berezkyi</surname></persName>
							<email>mykolaberezkyy@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="institution">West Ukrainian National University</orgName>
								<address>
									<addrLine>11 Lvivska st</addrLine>
									<postCode>46001</postCode>
									<settlement>Ternopil</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
<title level="a" type="main">Synthesis of biomedical images based on generative intelligence tools</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">7B2B922AF9C247B53E5F15450D8391C2</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T17:10+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
<term>cytological images</term>
					<term>generative intelligence</term>
					<term>image generation</term>
					<term>generative adversarial networks</term>
					<term>data sets</term>
					<term>diffusion model</term>
					<term>IS metric</term>
					<term>FID metric</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The paper substantiates the use of generative intelligence tools for generating biomedical images. A literature analysis of image generation methods and techniques based on GANs and diffusion models is conducted. A new GAN architecture and an algorithm for synthesizing cytological images based on a diffusion model have been developed. Established datasets used for training deep neural networks are analyzed, as are the widely recognized metrics for evaluating the quality of synthetic images: IS and FID. Computer experiments on the synthesis of cytological images based on GAN and Stable Diffusion were conducted, with the following results: diffusion model, FID 0.63 and IS 3.99; GAN, FID 3.39 and IS 3.95.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Generative intelligence has become the pinnacle of artificial intelligence research. Generative intelligence systems make it possible to generate texts, images, sounds, etc. They are based on deep neural network models trained on large data samples.</p><p>Consequently, a variety of generative intelligence systems have emerged that transform text into image, image into image, image into text, sound into text, text into sound, and sound into sound. Text-to-image transformation takes place on a fixed set of data; for this purpose, a transformer is used that autoregressively models text and image tokens <ref type="bibr">[1]</ref>. The Codex GPT language model, trained on GitHub code, makes it possible to write code in Python <ref type="bibr">[2]</ref>. The paper <ref type="bibr">[3]</ref> analyzes the opportunities and risks of foundation models for language, vision, and reasoning; it also analyzes their technical principles (model architectures, learning algorithms, data) and studies the impact of generative intelligence on society.</p><p>Another paper <ref type="bibr">[4]</ref> investigated LaMDA, a family of neural language models for dialogue applications. The model generates responses based on learning from known sources, and the authors investigated the LaMDA system in education.</p><p>Generative intelligence has also found applications in medicine. This paper investigates the use of generative intelligence in oncology, in particular for generating cytological images of breast cancer.</p><p>Breast cancer is one of the most common cancers among women worldwide. Early diagnosis and accurate determination of the stage of disease development are key factors for successful treatment and reduced mortality. Cytological, histological, and immunohistochemical images, a class of biomedical images, are used to detect pathologies. Cytological analysis of images of cell preparations is a diagnostic method that allows pathological changes to be detected at the cellular level <ref type="bibr">[5]</ref>.</p><p>To train automatic systems for diagnosing breast cancer, large, high-quality datasets are needed that reflect the variety of possible pathological changes. Datasets of cytological images of breast cancer have the following features: diversity of cell structures (normal cells, different types of atypical and malignant cells); image variability (changes in color, lighting, focus, etc.); and annotations and markup (availability of expert labels for supervised learning).</p><p>The available datasets of real images are limited and poorly annotated. Therefore, the generation of biomedical images in oncology is a pressing problem, as it underpins the accuracy of biomedical image classification. To solve this problem, the paper uses generative intelligence tools: GANs and diffusion models.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Literature review</head><p>Researchers have developed a number of approaches to generating biomedical images. In particular, one article discusses the problem of creating medically significant fine-grained images of pulmonary adenocarcinomas using Stable Diffusion models <ref type="bibr" target="#b14">[6]</ref>. The authors show how these models can be used to generate images from a limited number of samples, which is important for medical research, where data can be scarce.</p><p>Other work presents an analysis of diffusion models in medical imaging <ref type="bibr" target="#b15">[7]</ref>. The authors review modern methods and approaches to processing medical images with deep learning, in particular diffusion models, which can significantly improve the quality of diagnostics.</p><p>The paper <ref type="bibr" target="#b16">[8]</ref> presents a novel generative model that uses Langevin dynamics to generate samples by estimating gradients of the data distribution, with Gaussian noise added to the data. This avoids problems with low-dimensional manifolds and improves sample quality.</p><p>Another paper explores how computer vision models trained on large sets of images from the Internet automatically learn human social biases, such as racism and sexism <ref type="bibr" target="#b17">[9]</ref>. This question becomes important in the context of the ethical use of generative models.</p><p>The article <ref type="bibr" target="#b18">[10]</ref> describes the process of synthetic data generation in digital pathology using diffusion models. The authors present a comprehensive approach to assessing the quality of the generated images, which can be useful for educational purposes.</p><p>An article by A. Radford, J.W. Kim, C. Hallacy, and other authors describes the CLIP model, which is trained on large datasets of images and texts to perform a variety of computer vision tasks without task-specific training <ref type="bibr" target="#b19">[11]</ref>. The model demonstrates zero-shot transfer to many datasets, which opens up new application possibilities.</p><p>The authors R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer describe latent diffusion models for high-resolution image generation <ref type="bibr" target="#b20">[12]</ref>. They use autoencoders to reduce the dimensionality of the data, which reduces computational costs without losing image quality.</p><p>The article <ref type="bibr" target="#b21">[13]</ref> presents the Imagen model, a text-to-image diffusion model with a high level of photorealism. The model uses large language models to encode the text, which greatly improves sample quality.</p><p>A paper by other researchers describes the use of diffusion probabilistic models for the synthesis of histopathological images, which is important for pathology research <ref type="bibr" target="#b22">[14]</ref>.</p><p>The paper <ref type="bibr" target="#b23">[15]</ref> presents denoising diffusion probabilistic models used to generate high-quality images. These models produce high-quality samples on various datasets, such as CIFAR10 and LSUN. Thus, the analysis of literature sources indicates significant progress in the development of image synthesis methods, in particular through the use of diffusion models and GANs. This opens up new possibilities for improving the quality and diversity of synthesized images in medical imaging.</p><p>In the paper <ref type="bibr" target="#b24">[16]</ref>, researchers consider a deep learning approach based on non-equilibrium thermodynamics. They present diffusion probabilistic models that gradually destroy structure in the data through a diffusion process and then train a reverse process to reconstruct the structure, creating a flexible and computationally efficient generative model.</p><p>Other authors investigate diffusion models that outperform generative adversarial networks (GANs) in image synthesis tasks <ref type="bibr" target="#b25">[17]</ref>. They demonstrate that diffusion models can achieve high-quality image samples, surpassing current generative models.</p><p>The paper <ref type="bibr" target="#b26">[18]</ref> presents the use of cascaded diffusion models to generate high-quality images. A cascaded diffusion model consists of several stages, where each subsequent stage increases the resolution of the image.</p><p>The authors T. Karras, S. Laine, and T. Aila describe a new generator architecture for generative adversarial networks (GANs) that borrows ideas from style transfer <ref type="bibr" target="#b27">[19]</ref>. This architecture allows automatic, unsupervised separation of high-level attributes from stochastic variations in generated images.</p><p>Another paper describes a new approach to variational autoencoders (VAEs) for image generation <ref type="bibr" target="#b28">[20]</ref>. The NVAE network uses depthwise separable convolutions and batch normalization to improve the quality of generated images.</p><p>The paper <ref type="bibr" target="#b29">[21]</ref> describes a novel approach to generative modeling that uses stochastic differential equations (SDEs) to transform a data distribution into a simple noise distribution and vice versa. The model achieves high results in image generation and demonstrates capabilities for solving inverse problems.</p><p>The authors of another paper <ref type="bibr" target="#b30">[22]</ref> developed a method for image inpainting using denoising diffusion probabilistic models (DDPM). Based on this method, diverse and semantically meaningful images can be generated, surpassing current GAN-based methods.</p><p>In <ref type="bibr" target="#b31">[23]</ref>, the authors describe improvements to denoising diffusion probabilistic models for image generation. They use precision and recall metrics to compare images. Experiments have shown that diffusion models achieve higher recall at similar values of the FID metric.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Analysis of image datasets</head><p>When creating datasets of cytological images, it is important to standardize the annotation, as it ensures high quality, reliability, and compatibility of the data for further use in machine learning and diagnostic processes. In addition, proper annotation increases the efficiency of training AI models: well-defined labels reduce error rates during learning and help algorithms better recognize cell features and pathological changes. When segmenting and annotating objects in cytological images, it is important to adhere to the image annotation formats used in the PASCAL VOC <ref type="bibr" target="#b40">[32]</ref> and COCO <ref type="bibr" target="#b41">[33]</ref> datasets.</p><p>The APCData dataset <ref type="bibr" target="#b42">[34]</ref> consists of cytological images of the cervix, developed in collaboration with the laboratory of anatomical pathology and cytology located in Rivera, Uruguay. The set includes 425 images of 2048 × 1532 pixels, divided into 6 classes and corresponding to 73 cases diagnosed with the Papanicolaou test. The cells are labeled using bounding boxes and the centers of the nuclei; a total of 3619 cells were annotated. The images were taken using an Olympus CX40RF100 microscope and an Olympus LC30 optical microscope camera and processed using Olympus L.Cmicro software. Bounding boxes were created for cells in a format suitable for use with the YOLO convolutional neural network architecture.</p><p>The UFSC OCPap dataset <ref type="bibr" target="#b43">[35]</ref> contains 9797 annotated images of 1200 × 1600 pixels, obtained from 5 slides with diagnosed oral tissue cancer and 3 healthy samples. The slides were provided by the Hospital Dental Center of the University Hospital of the Federal University of Santa Catarina. The dataset contains binary nucleus masks and cell annotations in JSON format. The images are divided into training, validation, and test subsets. The images were taken using an Axio Scan.Z1 microscope and a Hitachi HV-F202SCL camera. Dataset images are derived from virtual slides measuring 214,000 × 161,000 pixels (0.111 μm × 0.111 μm per pixel). For annotation, medical specialists used the LabelMe and LabelBox tools.</p><p>The authors have developed a database of cytological images of breast cancer <ref type="bibr" target="#b44">[36]</ref>. The images were obtained using a laboratory setup that includes a Delta Optical microscope and an 8-megapixel Tucsen digital CMOS camera. Microscope slides and diagnostic information were provided by the Department of Pathological Anatomy with the Sectional Course of Forensic Medicine of Ternopil National Medical University. The database consists of 14 related tables. The table of studies includes basic information about each study: its title, the object of the study, and references to the patient and doctor associated with the study.</p><p>All images of cytological samples are divided into 4 classes. The database supports several user roles: physician, expert, and administrator. The database also records the segmentation algorithm used. For each cell, the following features are stored: area, perimeter, contour height, contour width, contour circularity, center coordinates, major axis of inertia, minor axis of inertia, angle of inclination of the major axis, Feret diameter, coordinates of the bounding rectangle, roundness, and compactness (an illustrative sketch of computing several of these features follows below).</p></div>
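To illustrate the kind of per-cell descriptors listed above, here is a minimal sketch (not the authors' code) that computes several of them from a binary segmentation mask with OpenCV; the function name and the feature subset shown are illustrative assumptions.

```python
# Illustrative sketch: a subset of the per-cell features listed above,
# computed from a binary (uint8) segmentation mask with OpenCV.
import cv2
import numpy as np

def cell_features(mask: np.ndarray) -> list[dict]:
    """Extract simple contour descriptors for each segmented cell."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    features = []
    for c in contours:
        area = cv2.contourArea(c)
        perimeter = cv2.arcLength(c, True)
        x, y, w, h = cv2.boundingRect(c)         # bounding rectangle, contour width/height
        (cx, cy), _ = cv2.minEnclosingCircle(c)  # approximate center coordinates
        # Circularity is 1.0 for a perfect circle and smaller for elongated shapes.
        circularity = 4 * np.pi * area / perimeter**2 if perimeter > 0 else 0.0
        compactness = perimeter**2 / area if area > 0 else 0.0
        features.append({
            "area": area, "perimeter": perimeter,
            "bbox": (x, y, w, h), "center": (cx, cy),
            "circularity": circularity, "compactness": compactness,
        })
    return features
```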
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">GAN-Based Artificial Image Synthesis</head><p>The architecture of modern GANs consists of a generator and a discriminator <ref type="bibr" target="#b45">[37]</ref>.</p><p>The generator and discriminator architectures are based on cells. A cell consists of nodes that perform an addition operation and of operations on the edges between them.</p><p>The following operations are used in the generator cell: convolution with 1×1, 3×3, and 5×5 kernels; separable convolution with a 3×3 kernel; zero; and skip connection. The cell architecture remains the same throughout the generator model. In contrast to the generator, the set of operations in the discriminator cell is extended by two operations: max pooling with a 3×3 kernel and average pooling with a 3×3 kernel. The architecture of the generator is shown in Figure <ref type="figure" target="#fig_0">1</ref> and described in Table <ref type="table">1 and Table 2</ref>. The discriminator architecture is shown in Figure <ref type="figure" target="#fig_2">2</ref> and described in Table <ref type="table" target="#tab_1">3</ref> and Table <ref type="table">4</ref>. The generator takes a 1×128 noise vector sampled from a Gaussian distribution as input and outputs a 64×64×3 image.</p><p>The number of nodes in the generator and discriminator cells is 4 and 5, respectively. There are two skip connection operations in the generator cell and three in the discriminator cell. There is also a zero operation in the discriminator cell, which is not present in the generator. The Self-Attention operation is applied twice in both the generator and the discriminator; however, in the generator this operation is placed towards the end of the network, while in the discriminator it is closer to the beginning. A minimal code sketch of such a cell is given below.</p></div>
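As a rough illustration of the cell concept described in Section 5, the following sketch builds a small generator cell in PyTorch with the operation set named in the text (1×1/3×3/5×5 convolutions, 3×3 separable convolution, skip connection) and nodes that sum their inputs. The concrete wiring between nodes is an assumption made for illustration; the paper's own cells are not reproduced here.

```python
# Minimal sketch of a generator cell: nodes sum ("add") incoming edges; edge
# operations are drawn from the set named in the text. The wiring is illustrative.
import torch
import torch.nn as nn

def conv(c: int, k: int) -> nn.Module:
    # k x k convolution keeping spatial size, followed by ELU and batch norm.
    return nn.Sequential(nn.Conv2d(c, c, k, padding=k // 2), nn.ELU(), nn.BatchNorm2d(c))

def sep_conv(c: int) -> nn.Module:
    # 3x3 separable convolution: depthwise 3x3 then pointwise 1x1.
    return nn.Sequential(
        nn.Conv2d(c, c, 3, padding=1, groups=c),
        nn.Conv2d(c, c, 1), nn.ELU(), nn.BatchNorm2d(c),
    )

class GeneratorCell(nn.Module):
    """Cell with addition nodes and conv / separable-conv / skip edges."""
    def __init__(self, channels: int):
        super().__init__()
        self.op1 = conv(channels, 3)
        self.op2 = conv(channels, 5)
        self.op3 = sep_conv(channels)
        self.skip = nn.Identity()  # skip-connection edge

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n1 = self.op1(x)
        n2 = self.op2(n1) + self.skip(x)  # addition node
        n3 = self.op3(n2) + n1            # addition node
        return n3
```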
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 2</head><p>Generator CELLG Cell and Upsample Block Structure</p><formula xml:id="formula_0">L0: Input: H × W × C; L1: Upsample (scale = 2, mode = nearest): (H × 2) × (W × 2) × C; L2: Convolution (kernel = 3, stride = 1, padding = 1): (H × 2) × (W × 2) × C; L3: Conditional Batch Norm (number of classes = 4): (H × 2) × (W × 2) × C; L4: Gated Linear Unit (GLU, dimension = 1): (H × 2) × (W × 2) × (C / 2)</formula><p>The algorithm for synthesizing cytological images based on a diffusion model consists of the following steps: 1. training on the authors' dataset of images in the Hypernetwork environment; 2. noising the initial dataset 𝐼_𝐶; 3. the noise reduction (denoising) process.</p><p>Let us detail these steps. The initial dataset is transformed into a latent space: 𝐼_𝐶 → 𝑍_0𝐶. Based on 𝑍_0𝐶, we calculate the noised value at each step t as follows:</p><formula xml:id="formula_1">𝑍_𝑡 = √𝛼_𝑡 · 𝑍_0𝐶 + √(1 − 𝛼_𝑡) · 𝜀_𝑡,</formula><p>where 𝛼_𝑡 is the coefficient that determines the noise rate at step t; the value of t is selected from the range t ∈ [0, T], where T is the number of steps; and 𝜀_𝑡 is the value of random Gaussian noise at step t, drawn according to the expression:</p><formula xml:id="formula_2">𝜀_𝑡 ∼ 𝑁(𝐸, 𝐷),</formula><p>where 𝑁 is a normal distribution with expected value E = 0 and variance D = 1. The denoised value is calculated according to the expression:</p><formula xml:id="formula_3">𝑍_{𝑡−1} = (1 / √𝛼_𝑡) · (𝑍_𝑡 − (𝛽_𝑡 / √(1 − 𝛼_𝑡)) · 𝜀_𝑡),</formula><p>where 𝜀_𝑡 is the noise value estimated at step t; 𝛼_𝑡 is the coefficient that determines the noise level at step t; and 𝛽_𝑡 is a coefficient that controls the level of noise reduction.</p><p>After the denoising process is complete (after traversing t = T steps), a vector 𝑍_1𝐶 is formed in the latent space. The decoder then transforms 𝑍_1𝐶 into a set of images 𝐼_𝐶𝐷, with 𝐼_𝐶𝐷 ≫ 𝐼_𝐶. The quality of the generated images is checked using the IS and FID metrics.</p></div>
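The two expressions above translate directly into code. The sketch below is a minimal illustration, assuming the standard DDPM-style square-root scaling of the coefficients (the radical signs restored in the formulas above); the argument eps_hat stands in for the output of the trained noise estimator.

```python
# Sketch of the noising and denoising steps above (standard DDPM-style scaling
# assumed; not the authors' implementation).
import torch

def noise_step(z0: torch.Tensor, alpha_t: float) -> tuple[torch.Tensor, torch.Tensor]:
    """Forward step: Z_t = sqrt(a_t) * Z_0C + sqrt(1 - a_t) * eps_t."""
    eps = torch.randn_like(z0)  # eps_t ~ N(0, 1)
    zt = alpha_t**0.5 * z0 + (1 - alpha_t)**0.5 * eps
    return zt, eps

def denoise_step(zt: torch.Tensor, eps_hat: torch.Tensor,
                 alpha_t: float, beta_t: float) -> torch.Tensor:
    """Reverse step: Z_{t-1} = (Z_t - beta_t / sqrt(1 - a_t) * eps_hat) / sqrt(a_t),
    where eps_hat is the noise estimated by the trained model at step t."""
    return (zt - beta_t / (1 - alpha_t)**0.5 * eps_hat) / alpha_t**0.5
```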
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">Metrics for Synthesized Image Evaluation</head><p>Two main metrics are used to assess the quality of synthesized images: the IS metric and the FID metric.</p><p>The IS metric is based on the Google Inception V3 neural network model for color image classification, trained on the ImageNet dataset of 1.2 million RGB images divided into 1000 classes.</p><p>The analytic expression for the metric is as follows:</p><formula>IS = exp(𝐸_{𝑥∼𝑝_𝑔} 𝐷_𝐾𝐿(𝑝(𝑦|𝑥) ∥ 𝑝(𝑦))),</formula></div>
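A minimal sketch of computing IS from this expression follows, using torchvision's pretrained Inception V3 as the classifier (an implementation choice of ours, not specified by the paper):

```python
# Sketch: IS = exp(E_x KL(p(y|x) || p(y))) with a pretrained Inception V3.
import torch
import torch.nn.functional as F
from torchvision.models import inception_v3, Inception_V3_Weights

@torch.no_grad()
def inception_score(images: torch.Tensor, eps: float = 1e-10) -> float:
    """images: (N, 3, 299, 299) tensor, preprocessed for Inception V3."""
    model = inception_v3(weights=Inception_V3_Weights.DEFAULT).eval()
    p_yx = F.softmax(model(images), dim=1)  # p(y | x) for each image
    p_y = p_yx.mean(dim=0, keepdim=True)    # marginal class distribution p(y)
    kl = (p_yx * (torch.log(p_yx + eps) - torch.log(p_y + eps))).sum(dim=1)
    return torch.exp(kl.mean()).item()      # exponentiated average KL distance
```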
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="9.">Discussions</head><p>Let us analyze the computer experiments conducted with GAN and Stable Diffusion. The results of comparing the quality of cytological images synthesized with the developed GAN architecture and with other known architectures are given in Table <ref type="table" target="#tab_2">5</ref>. The advantages of GANs are as follows:</p><p>1. The ability to generate high-quality, realistic images, video, and audio. 2. The ability to control the synthesis process (from the smallest details to common features in the image). 3. Relatively high speed of image synthesis, since an image is synthesized in one forward pass of the neural network.</p><p>The disadvantages of GANs are as follows:</p><p>1. Significant computing resources and expertise are needed for effective training, making GANs less accessible. 2. Mode collapse, where the generator begins to produce a limited number of images, reducing the variety of synthetic images. 3. The training process is complex and lengthy because a GAN consists of two neural networks competing with each other.</p><p>The advantages of diffusion models are as follows:</p><p>1. The ability to produce high-quality images that often surpass GANs in realism and variety. 2. The ability to work with complex data distributions, which makes diffusion models universal across domains. 3. A simpler training process than GANs, which avoids the problem of mode collapse.</p><p>The disadvantages of diffusion models are as follows:</p><p>1. Significant computing resources for training and generation, which may limit their accessibility. 2. Data generation through an iterative process, which is quite resource-intensive compared to the single forward pass used by GANs.</p><p>Diffusion models transform a noise distribution into the data distribution through a diffusion process, gradually improving the generated image. This process provides a high degree of control over generation, as the model can be stopped at any point to obtain different levels of detail.</p><p>GANs, in contrast, generate data in a single step: the generator creates the image and the discriminator evaluates it. Although this process is faster, it can lead to mode collapse, where the generator produces a limited number of images.</p><p>Consequently, GANs are built on the concept of competition between a generator and a discriminator to create realistic images, while diffusion models transform noise into images through an iterative diffusion (denoising) process. Diffusion models require careful tuning of hyperparameters and longer training times. In addition, both approaches require a large amount of training data to perform optimally.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="10.">Conclusions</head><p>In this work, tools for synthesizing cytological images have been developed and compared.</p><p>The following results were obtained:</p><p>1. A new GAN architecture has been developed which, unlike existing architectures, uses the Self-Attention mechanism in both the generator and the discriminator, improving the quality of synthesized images. The developed architecture also supports image synthesis by labels (conditional generation), which the architectures and approaches listed above do not provide. 2. A new algorithm for the synthesis of cytological images based on diffusion models has been developed. In the Stable Diffusion environment, this algorithm was implemented and made it possible to synthesize a sample of images sufficient for CNN training. Generation based on the diffusion model in the Stable Diffusion environment showed better results than generation based on GAN.</p><p>Further research will focus on developing new diffusion models for generating histological and immunohistochemical images.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Generator Architecture</figDesc><graphic coords="5,76.56,62.40,447.60,140.40" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>CELLG cell structure: L0: Input; L1: Conv → ELU → Batch Norm (kernel = 3, stride = 1, padding = 1); L2: L1 + (Conv 3×3 → Conv 1×1 → ELU → Batch Norm), where Conv 3×3 = (kernel = 3, stride = 1, padding = 1) and Conv 1×1 = (kernel = 1, stride = 1, padding = 0); L3: L2 + Conv(L1) + Conv(L0), kernel = 3, stride = 1, padding = 1. Upsample block structure: L0: Input; L1: Conv → ELU → Batch Norm (kernel = 3, stride = 1, padding = 1).</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Discriminator architecture</figDesc><graphic coords="6,76.56,385.92,447.60,144.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Examples of synthesized images</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 4 and Figure 5:</head><label>45</label><figDesc>Figure 4: Example of real images. Figure 5: Example of synthetic images.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>3 .</head><label>3</label><figDesc>Computer experiments based on the diffusion model in the Stable Diffusion environment were carried out, with the following results: the value of the FID metric is 0.63 (class 1: 0.54; class 2: 0.6; class 3: 0.7; class 4: 0.68), and the value of the IS metric is 3.99. Generation based on GAN provided the following results: FID 3.39 (class 1: 3.42; class 2: 3.42; class 3: 3.35; class 4: 3.37), IS 3.95.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc></figDesc><table><row><cell>Table 1</cell><cell></cell><cell></cell></row><row><cell>Generator Architecture</cell><cell></cell><cell></cell></row><row><cell>Layer</cell><cell>Options</cell><cell>Output Form</cell></row><row><cell>L1: Input</cell><cell>Gaussian noise</cell><cell>1×128</cell></row><row><cell>L2: Transposed Conv + ELU activation</cell><cell>Kernel = 4, stride = 1, padding = 0</cell><cell>4×4×1024</cell></row><row><cell>L3: CELLG</cell><cell>Nodes = 4</cell><cell>4×4×1024</cell></row><row><cell>L4: L2 + L3</cell><cell></cell><cell>4×4×1024</cell></row><row><cell>L5: Upsample</cell><cell>Scale = 2</cell><cell>8×8×1024</cell></row><row><cell>L6: CELLG</cell><cell>Nodes = 4</cell><cell>8×8×1024</cell></row><row><cell>L7: L5 + L6</cell><cell></cell><cell>8×8×1024</cell></row><row><cell>L8: Upsample</cell><cell>Scale = 2</cell><cell>16×16×512</cell></row><row><cell>L9: CELLG</cell><cell>Nodes = 4</cell><cell>16×16×512</cell></row><row><cell>L10: Self Attention</cell><cell>Input channels = 512</cell><cell>16×16×512</cell></row><row><cell>L11: L8 + L10 + L9</cell><cell></cell><cell>16×16×512</cell></row><row><cell>L11: Upsample</cell><cell>Scale = 2</cell><cell>32×32×256</cell></row><row><cell>L12: CELLG</cell><cell>Nodes = 4</cell><cell>32×32×256</cell></row><row><cell>L13: Self Attention</cell><cell>Input channels = 256</cell><cell>32×32×256</cell></row><row><cell>L14: L11 + L13 + L12</cell><cell></cell><cell>32×32×256</cell></row><row><cell>L15: Upsample</cell><cell>Scale = 2</cell><cell>64×64×128</cell></row><row><cell>L16: Convolution</cell><cell>Kernel = 3, stride = 1, padding = 1</cell><cell>64×64×128</cell></row><row><cell>L17: Convolution</cell><cell>Kernel = 3, stride = 1, padding = 1</cell><cell>64×64×3</cell></row><row><cell>L18: Output</cell><cell></cell><cell>64×64×3</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 3</head><label>3</label><figDesc></figDesc><table><row><cell>Discriminator architecture</cell><cell></cell><cell></cell></row><row><cell>Layer</cell><cell>Options</cell><cell>Output Form</cell></row><row><cell>L1: Input</cell><cell>Image</cell><cell>64×64×3</cell></row><row><cell>L2: Conv + ELU activation</cell><cell>Kernel = 3, stride = 1, padding = 1</cell><cell>64×64×64</cell></row><row><cell>L3: CELLD</cell><cell>Nodes = 5</cell><cell>64×64×64</cell></row><row><cell>L4: Self Attention</cell><cell>Input channels = 64</cell><cell>64×64×64</cell></row><row><cell>L5: L2 + L4 + L3</cell><cell></cell><cell>64×64×64</cell></row><row><cell>L6: Downsample</cell><cell>Scale = 2</cell><cell>32×32×128</cell></row><row><cell>L7: CELLD</cell><cell>Nodes = 5</cell><cell>32×32×128</cell></row><row><cell>L8: Self Attention</cell><cell>Input channels = 64</cell><cell>32×32×128</cell></row><row><cell>L9: L6 + L8 + L7</cell><cell></cell><cell>32×32×128</cell></row><row><cell>L10: Downsample</cell><cell>Scale = 2</cell><cell>16×16×256</cell></row><row><cell>L11: CELLD</cell><cell>Nodes = 5</cell><cell>16×16×256</cell></row><row><cell>L12: L10 + L11</cell><cell></cell><cell>16×16×256</cell></row><row><cell>L13: Downsample</cell><cell>Scale = 2</cell><cell>8×8×512</cell></row><row><cell>L14: CELLD</cell><cell>Nodes = 5</cell><cell>8×8×512</cell></row><row><cell>L15: L13 + L14</cell><cell></cell><cell>8×8×512</cell></row><row><cell>L16: Downsample</cell><cell>Scale = 2</cell><cell>4×4×1024</cell></row><row><cell>L17: Linear(Sum(L16))</cell><cell></cell><cell>1×1</cell></row><row><cell>L18: Sum(Multiply(Sum(L16), Embedding))</cell><cell>Number of classes = 4</cell><cell>1×1</cell></row><row><cell>L19: L17 + L18</cell><cell></cell><cell>1×1</cell></row><row><cell>L20: Output</cell><cell></cell><cell>1×1</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 5</head><label>5</label><figDesc>Results of comparison with other GAN architectures</figDesc><table><row><cell>Method</cell><cell>FID</cell></row><row><cell>DCGAN</cell><cell>12.67</cell></row><row><cell>WGAN</cell><cell>12.72</cell></row><row><cell>WGAN-GP</cell><cell>19.09</cell></row><row><cell>BGAN</cell><cell>10.03</cell></row><row><cell>BEGAN</cell><cell>15.32</cell></row><row><cell>Developed architecture</cell><cell>3.39</cell></row><row><cell cols="2">Consequently, the developed GAN architecture provided better results in terms of the FID metric than other well-known architectures.</cell></row><row><cell cols="2">Let us analyze the advantages and disadvantages of generating images based on GANs and on diffusion models.</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The authors of another publication <ref type="bibr" target="#b32">[24]</ref> developed an algorithm for stochastic variational Bayesian inference. This approach allows model parameters to be trained without iterative inference schemes.</p><p>The authors of this publication have been analyzing biomedical images for over twenty years under the guidance of Professor Oleh Berezsky. A number of publications reflect methods, algorithms, and software tools for analyzing cytological, histological, and immunohistochemical images <ref type="bibr" target="#b33">[25]</ref><ref type="bibr" target="#b34">[26]</ref><ref type="bibr" target="#b35">[27]</ref><ref type="bibr" target="#b36">[28]</ref><ref type="bibr" target="#b37">[29]</ref><ref type="bibr" target="#b38">[30]</ref><ref type="bibr" target="#b39">[31]</ref>. This is the result of a creative collaboration between researchers from West Ukrainian National University and Ivan Horbachevsky Ternopil National Medical University.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Problem statement</head><p>Given: a set of real cytological images 𝐼_𝐶. Image synthesis is carried out on the basis of GANs and of networks built on diffusion models (DMN). After generation by means of GAN we obtain a set of images 𝐼_𝐶𝐺; using DMN we obtain a set of images 𝐼_𝐶𝐷. In addition, two metrics are given: IS and FID.</p><p>It is necessary to find the 𝑀_𝐼𝑆 and 𝑀_𝐹𝐼𝐷 distances between the set of real cytological images 𝐼_𝐶 and the sets of synthetic images 𝐼_𝐶𝐺 and 𝐼_𝐶𝐷 using the IS and FID metrics, i.e.:</p><formula>𝑀_𝐼𝑆(𝐼_𝐶, 𝐼_𝐶𝐺) and 𝑀_𝐹𝐼𝐷(𝐼_𝐶, 𝐼_𝐶𝐺); 𝑀_𝐼𝑆(𝐼_𝐶, 𝐼_𝐶𝐷) and 𝑀_𝐹𝐼𝐷(𝐼_𝐶, 𝐼_𝐶𝐷).</formula></div>
			</div>

			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>where 𝐸 is the mathematical expectation; 𝑥 ∼ 𝑝_𝑔 indicates that 𝑥 is an image synthesized from the generator distribution 𝑝_𝑔; and 𝐷_𝐾𝐿 is the Kullback–Leibler distance between the conditional probability distribution 𝑝(𝑦|𝑥) and the marginal distribution 𝑝(𝑦) <ref type="bibr" target="#b46">[38]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 4</head><p>Discriminator CELLD Cell and Downsample block structure</p><p>The IS metric measures the average Kullback–Leibler distance between the conditional distribution 𝑝(𝑦|𝑥) and the marginal class distribution 𝑝(𝑦). The minimum value of the metric is 1, and the maximum value is the number of classes.</p><p>The FID metric compares the distributions of original and synthetic data. Based on this metric, the distance between image sets is calculated as follows:</p><formula>FID = ‖𝑚_𝑟 − 𝑚_𝑔‖² + Tr(𝐶_𝑟 + 𝐶_𝑔 − 2(𝐶_𝑟𝐶_𝑔)^{1/2}),</formula><p>where (𝑚_𝑟, 𝐶_𝑟) and (𝑚_𝑔, 𝐶_𝑔) are the mean and covariance of the real and synthesized data distributions, respectively, and Tr is the sum of the diagonal elements of a matrix. The smaller the value of the metric, the smaller the distance between the distributions, that is, the more similar the images are to each other <ref type="bibr" target="#b47">[39]</ref>. The FID metric is sensitive to distortions in images (shift, noise, etc.).</p></div>
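The FID expression above can be computed directly from Inception feature statistics. The sketch below is illustrative: feature extraction is omitted, and scipy's sqrtm is used for the matrix square root (implementation choices of ours, not the paper's code).

```python
# Sketch of the FID formula above, given (N, D) arrays of Inception activations.
import numpy as np
from scipy.linalg import sqrtm

def fid(feat_real: np.ndarray, feat_gen: np.ndarray) -> float:
    m_r, m_g = feat_real.mean(axis=0), feat_gen.mean(axis=0)
    c_r = np.cov(feat_real, rowvar=False)
    c_g = np.cov(feat_gen, rowvar=False)
    cov_sqrt = sqrtm(c_r @ c_g)                 # (C_r C_g)^(1/2)
    if np.iscomplexobj(cov_sqrt):               # discard numerical imaginary parts
        cov_sqrt = cov_sqrt.real
    return float(np.sum((m_r - m_g) ** 2) + np.trace(c_r + c_g - 2.0 * cov_sqrt))
```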
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="8.">Computer experiments</head><p>Computer experiments on the synthesis of cytological images were carried out using GAN and Stable Diffusion.</p><p>To conduct computational experiments, a training set of cytological images was used, which was published on the Zenodo platform <ref type="bibr" target="#b48">[40]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="8.1.">Computer experiments with GAN</head><p>Images from the training dataset were downscaled to a resolution of 64×64 pixels (the original resolution is 3264×2448). The initial number of images is around 100, which is not enough; therefore, the dataset was expanded to 800 images by applying affine transformations and balanced so that it contains the same number of images (200) for each class. To extend the training dataset, the authors' own library, Rudi, was used with default parameters <ref type="bibr" target="#b49">[41]</ref>. Images were randomly rotated, flipped, and scaled; each operation was applied with a probability of 50%.</p><p>Hardware. The Python programming language and the PyTorch framework were used to write the code. A virtual machine with the following configuration was used for the experiments: 16 GB RAM, 10 vCPU x 2.2 GHz, Nvidia Tesla V100 GPU 16 GB (13.2 TFLOPS).</p><p>Training options. Hinge loss was used as the loss function together with the Adam optimizer (betas = 0.5, 0.999). The Two Time-scale Update Rule was also used, which applies different learning rates to the generator and the discriminator: the learning rate of the generator is 0.0001, and that of the discriminator is 0.0004. Spectral normalization was applied to all convolutional, deconvolutional, and linear layers in both models, which stabilizes the training process. The batch size was 128 and the number of iterations 100,000; training took ~13.6 GPU hours. A code sketch of this configuration is given after this section.</p><p>Experiment results. The FID metric value is 3.39 (class 1: 3.42; class 2: 3.42; class 3: 3.35; class 4: 3.37), and the IS metric value is 3.95.</p><p>Examples of synthesized images are shown in Figure <ref type="figure">3</ref>.</p></div>
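The training configuration of Section 8.1 (hinge loss, Adam with betas (0.5, 0.999), the two time-scale update rule, and spectral normalization) can be set up roughly as follows; G and D stand for the generator and discriminator models and are placeholders, not the paper's code.

```python
# Sketch of the GAN training setup described in Section 8.1.
import torch
import torch.nn as nn
import torch.nn.utils as utils

def hinge_d_loss(real_logits: torch.Tensor, fake_logits: torch.Tensor) -> torch.Tensor:
    # Hinge loss for the discriminator.
    return torch.relu(1.0 - real_logits).mean() + torch.relu(1.0 + fake_logits).mean()

def hinge_g_loss(fake_logits: torch.Tensor) -> torch.Tensor:
    # Hinge (non-saturating) loss for the generator.
    return -fake_logits.mean()

def apply_spectral_norm(model: nn.Module) -> None:
    """Wrap every conv / deconv / linear layer with spectral normalization."""
    for name, module in model.named_children():
        if isinstance(module, (nn.Conv2d, nn.ConvTranspose2d, nn.Linear)):
            setattr(model, name, utils.spectral_norm(module))
        else:
            apply_spectral_norm(module)

# Two time-scale update rule: different learning rates for G and D, e.g.
#   opt_g = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.5, 0.999))
#   opt_d = torch.optim.Adam(D.parameters(), lr=4e-4, betas=(0.5, 0.999))
```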
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="8.2.">Computer experiments in Stable Diffusion environment</head><p>Hardware. For the experiments, the infrastructure from Jarvis Labs was used, with the following computing resources: GPU: 1 x A6000 Ampere (48 GB VRAM); CPUs: 7; RAM: 32 GB; CUDA 12.3; Linux version 22.4.</p><p>Training options. The Linear loss function and the Adam optimizer were used to train the model. Layers of sizes 768, 1024, 320, 640, and 1280 with linear activation and Normal weight initialization were chosen as the hypernetwork structure. The batch size was set to 1 and gradient accumulation steps to 1. Gradient clipping with a value of 0.1 was used to stabilize training. Training was carried out with a learning rate of 0.00001 for the hypernetwork. The total number of iterations was 20,000 steps, and the image size was fixed at 512x512 pixels. Training used text prompts based on a style_filewords.txt template, and intermediate image results were saved to the log directory every 100 steps.</p><p>This configuration provides high performance for creating AI-generated images, making effective use of the Stable Diffusion model's capabilities to generate high-quality results.</p><p>Experiment results. The FID metric value is 0.63 (class 1: 0.54; class 2: 0.6; class 3: 0.7; class 4: 0.68). The value of the IS metric is 3.99.</p><p>An example of real images is shown in Figure <ref type="figure">4</ref>. An example of synthetic images is shown in Figure <ref type="figure">5</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Declaration on Generative AI</head><p>The authors have not employed any Generative AI tools.</p></div>			</div>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m">𝑀 𝐼𝑆 (𝐼 𝐶 , 𝐼 𝐶𝐺 ) and 𝑀 𝐹𝐼𝐷</title>
				<imprint/>
	</monogr>
	<note>𝐼 𝐶. 𝐼 𝐶𝐺</note>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m">𝐼 𝐶 , 𝐼 𝐶𝐷 ) and 𝑀 𝐹𝐼𝐷</title>
				<editor>
			<persName><surname>𝑀 𝐼𝑆</surname></persName>
		</editor>
		<imprint/>
	</monogr>
	<note>𝐼 𝐶. 𝐼 𝐶𝐷</note>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m">𝑀 𝐹𝐼𝐷 (𝐼 𝐶 , 𝐼 𝐶𝐷 ) and 𝑀 𝐹𝐼𝐷</title>
				<imprint/>
	</monogr>
	<note>𝐼 𝐶. 𝐼 𝐶𝐺</note>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m">𝐼 𝐶 , 𝐼 𝐶𝐷 ) and 𝑀 𝐼𝑆</title>
				<editor>
			<persName><surname>𝑀 𝐼𝑆</surname></persName>
		</editor>
		<imprint/>
	</monogr>
	<note>𝐼 𝐶. 𝐼 𝐶𝐺</note>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m">GPU: 1 x A6000 Ampere</title>
				<imprint/>
	</monogr>
	<note>CUDA 12.3</note>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">CPUs</title>
		<imprint>
			<biblScope unit="page">7</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<author>
			<persName><surname>Ram</surname></persName>
		</author>
		<title level="m">GB RAM</title>
				<imprint>
			<biblScope unit="page">32</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m">Video memory</title>
				<imprint>
			<biblScope unit="page">48</biblScope>
		</imprint>
	</monogr>
	<note>GB VRAM</note>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m">Linux system version</title>
				<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="page">4</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">Zero-Shot Text-to-Image Generation</title>
		<author>
			<persName><forename type="first">A</forename><surname>Ramesh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pavlov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Goh</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2102.12092</idno>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">Evaluating Large Language Models Trained on Code</title>
		<author>
			<persName><forename type="first">M</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Tworek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Jun</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2107.03374</idno>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">On the Opportunities and Risks of Foundation Models</title>
		<author>
			<persName><forename type="first">R</forename><surname>Bommasani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">A</forename><surname>Hudson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Adeli</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2108.07258</idno>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<author>
			<persName><forename type="first">R</forename><surname>Thoppilan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">De</forename><surname>Freitas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Hall</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2201.08239</idno>
		<title level="m">LaMDA: Language Models for Dialog Applications</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Global cancer statistics 2022: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries</title>
		<author>
			<persName><forename type="first">F</forename><surname>Bray</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Laversanne</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Sung</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Ferlay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">L</forename><surname>Siegel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Soerjomataram</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename></persName>
		</author>
		<idno type="DOI">10.3322/caac.21834</idno>
	</analytic>
	<monogr>
		<title level="j">CA Cancer J Clin</title>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">TDASD: Generating Medically Significant Fine-Grained Lung Adenocarcinoma Nodule CT Images Based on Stable Diffusion Models with Limited Sample Size</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Liang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhuo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Xiao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zhou</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.cmpb.2024.108103</idno>
	</analytic>
	<monogr>
		<title level="j">Computer Methods and Programs in Biomedicine</title>
		<imprint>
			<biblScope unit="volume">248</biblScope>
			<biblScope unit="page">108103</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Diffusion Models in Medical Imaging: A Comprehensive Survey</title>
		<author>
			<persName><forename type="first">A</forename><surname>Kazerouni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Khodapanah Aghdam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Heidari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Azad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Fayyaz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Hacihaliloglu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Merhof</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.media.2023.102846</idno>
	</analytic>
	<monogr>
		<title level="j">Medical Image Analysis</title>
		<imprint>
			<biblScope unit="volume">88</biblScope>
			<biblScope unit="page">102846</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Generative Modeling by Estimating Gradients of the Data Distribution</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Song</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ermon</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.1907.05600</idno>
	</analytic>
	<monogr>
		<title level="j">NeurIPS</title>
		<imprint>
			<date type="published" when="2019">2019. 2019</date>
		</imprint>
	</monogr>
	<note>Oral</note>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Image Representations Learned With Unsupervised Pre-Training Contain Human-like Biases</title>
		<author>
			<persName><forename type="first">R</forename><surname>Steed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Caliskan</surname></persName>
		</author>
		<idno type="DOI">10.1145/3442188.3445932</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT &apos;21)</title>
				<meeting>the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT &apos;21)</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="701" to="713" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<title level="m" type="main">Generating Synthetic Data in Digital Pathology Through Diffusion Models: A Multifaceted Approach to Evaluation</title>
		<author>
			<persName><forename type="first">M</forename><surname>Pozzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Noei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Robbi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Cima</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Moroni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Munari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Torresani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Jurman</surname></persName>
		</author>
		<idno type="DOI">10.1101/2023.11.21.23298808</idno>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note>bioRxiv preprint</note>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<title level="m" type="main">Learning Transferable Visual Models From Natural Language Supervision</title>
		<author>
			<persName><forename type="first">A</forename><surname>Radford</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">W</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Hallacy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ramesh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Goh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Agarwal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sastry</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Askell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Mishkin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Clark</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Krueger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Sutskever</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2103.00020</idno>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<author>
			<persName><forename type="first">R</forename><surname>Rombach</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Blattmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Lorenz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Esser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Ommer</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2112.10752</idno>
		<title level="m">High-Resolution Image Synthesis with Latent Diffusion Models</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b21">
	<monogr>
		<title level="m" type="main">Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding</title>
		<author>
			<persName><forename type="first">C</forename><surname>Saharia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Chan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Saxena</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Whang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Denton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">K</forename><surname>Seyed Ghasemipour</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Karagol Ayan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">S</forename><surname>Mahdavi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Lopes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Salimans</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Ho</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">J</forename><surname>Fleet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Norouzi</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2205.11487</idno>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b22">
	<monogr>
		<title level="m" type="main">A Morphology Focused Diffusion Probabilistic Model for Synthesis of Histopathology Images</title>
		<author>
			<persName><forename type="first">P</forename><surname>Moghadam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Van Dalen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">C</forename><surname>Martin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Lennerz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Yip</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Farahani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bashashati</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2209.13167</idno>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b23">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Ho</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Jain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Abbeel</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2006.11239</idno>
		<title level="m">Denoising Diffusion Probabilistic Models</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b24">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Sohl-Dickstein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">A</forename><surname>Weiss</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Maheswaranathan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ganguli</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.1503.03585</idno>
		<title level="m">Deep Unsupervised Learning Using Nonequilibrium Thermodynamics</title>
		<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b25">
	<monogr>
		<title level="m" type="main">Diffusion Models Beat GANs on Image Synthesis</title>
		<author>
			<persName><forename type="first">P</forename><surname>Dhariwal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Nichol</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2105.05233</idno>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b26">
	<monogr>
		<title level="m" type="main">Cascaded Diffusion Models for High Fidelity Image Generation</title>
		<author>
			<persName><forename type="first">J</forename><surname>Ho</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Saharia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Chan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">J</forename><surname>Fleet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Norouzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Salimans</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2106.15282</idno>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b27">
	<monogr>
		<title level="m" type="main">A Style-Based Generator Architecture for Generative Adversarial Networks</title>
		<author>
			<persName><forename type="first">T</forename><surname>Karras</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Laine</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Aila</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.1812.04948</idno>
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b28">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Vahdat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kautz</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2007.03898</idno>
		<title level="m">NVAE: A Deep Hierarchical Variational Autoencoder</title>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b29">
	<monogr>
		<title level="m" type="main">Score-Based Generative Modeling through Stochastic Differential Equations</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Song</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Sohl-Dickstein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">P</forename><surname>Kingma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kumar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ermon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Poole</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2011.13456</idno>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b30">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Lugmayr</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Danelljan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Romero</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Timofte</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Van Gool</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2201.09865</idno>
		<title level="m">RePaint: Inpainting using Denoising Diffusion Probabilistic Models</title>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b31">
	<monogr>
		<title level="m" type="main">Improved Denoising Diffusion Probabilistic Models</title>
		<author>
			<persName><forename type="first">A</forename><surname>Nichol</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2102.09672</idno>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b32">
	<monogr>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">P</forename><surname>Kingma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Welling</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.1312.6114</idno>
		<title level="m">Auto-Encoding Variational Bayes</title>
		<imprint>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">Synthesis of Convolutional Neural Network architectures for biomedical image classification</title>
		<author>
			<persName><forename type="first">O</forename><surname>Berezsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Liashchynskyi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Pitsun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Izonin</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.bspc.2024.106325</idno>
	</analytic>
	<monogr>
		<title level="j">Biomedical Signal Processing and Control</title>
		<imprint>
			<biblScope unit="volume">95</biblScope>
			<biblScope unit="page">106325</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<analytic>
		<title level="a" type="main">Method and Software Tool for Generating Artificial Databases of Biomedical Images Based on Deep Neural Networks</title>
		<author>
			<persName><forename type="first">O</forename><surname>Berezsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Liashchynskyi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Pitsun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Melnyk</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CEUR Workshop Proceedings</title>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="15" to="26" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b35">
	<analytic>
		<title level="a" type="main">An Approach toward Automatic Specifics Diagnosis of Breast Cancer Based on an Immunohistochemical Image</title>
		<author>
			<persName><forename type="first">O</forename><surname>Berezsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Pitsun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Melnyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Datsko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Izonin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Derysh</surname></persName>
		</author>
		<idno type="DOI">10.3390/jimaging9010012</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Imaging</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page">12</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b36">
	<analytic>
		<title level="a" type="main">Computational Intelligence in Medicine</title>
		<author>
			<persName><forename type="first">O</forename><surname>Berezsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Pitsun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Liashchynskyi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Derysh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Batryn</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-031-16203-9_28</idno>
	</analytic>
	<monogr>
		<title level="m">Lecture Notes in Data Engineering, Computational Intelligence, and Decision Making. ISDMCI 2022</title>
		<title level="s">Lecture Notes on Data Engineering and Communications Technologies</title>
		<editor>
			<persName><forename type="first">S</forename><surname>Babichev</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">V</forename><surname>Lytvynenko</surname></persName>
		</editor>
		<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="volume">149</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b37">
	<analytic>
		<title level="a" type="main">Comparison of Deep Neural Network Learning Algorithms for Biomedical Image Processing</title>
		<author>
			<persName><forename type="first">O</forename><surname>Berezsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Liashchynskyi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Pitsun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Berezkyy</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CEUR Workshop Proceedings</title>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="135" to="145" />
		</imprint>
	</monogr>
	<note>IDDM-2022</note>
</biblStruct>

<biblStruct xml:id="b38">
	<analytic>
		<title level="a" type="main">Segmentation of cytological and histological images of breast cancer cells</title>
		<author>
			<persName><forename type="first">O</forename><surname>Berezsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Batko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Melnyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Verbovyy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Haida</surname></persName>
		</author>
		<idno type="DOI">10.1109/IDAACS.2015.7340745</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE 8th International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS)</title>
		<imprint>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="287" to="292" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b39">
	<analytic>
		<title level="a" type="main">The intelligent system for diagnosing breast cancers based on image analysis</title>
		<author>
			<persName><forename type="first">O</forename><surname>Berezsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Verbovyy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Datsko</surname></persName>
		</author>
		<idno type="DOI">10.1109/ITIB.2015.7355067</idno>
	</analytic>
	<monogr>
		<title level="m">Information Technologies in Innovation Business Conference (ITIB)</title>
		<meeting><address><addrLine>Kharkiv, Ukraine</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="27" to="30" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b40">
	<monogr>
		<title level="m" type="main">The PASCAL Visual Object Classes Homepage</title>
		<ptr target="http://host.robots.ox.ac.uk/pascal/VOC/" />
		<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b41">
	<monogr>
		<title level="m" type="main">Common Objects in Context dataset</title>
		<ptr target="https://cocodataset.org" />
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b42">
	<monogr>
		<title level="m" type="main">APCData cervical cytology cells</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">Cuña</forename><surname>Cabrera</surname></persName>
		</author>
		<idno type="DOI">10.17632/YTD568RH3P.1</idno>
		<ptr target="https://data.mendeley.com/datasets/ytd568rh3p/1.doi:10.17632/YTD568RH3P.1" />
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b43">
	<monogr>
		<author>
			<persName><forename type="first">André</forename><forename type="middle">Victória</forename><surname>Matias</surname></persName>
		</author>
		<idno type="DOI">10.17632/DR7YDY9XBK.1</idno>
		<ptr target="https://data.mendeley.com/datasets/dr7ydy9xbk/1.doi:10.17632/DR7YDY9XBK.1" />
		<title level="m">Papanicolaou Stained Oral Cytology Dataset</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
	<note>UFSC OCPap</note>
</biblStruct>

<biblStruct xml:id="b44">
	<analytic>
		<title level="a" type="main">Database of Digital Histological and Cytological Images &quot;ВРСІ2100</title>
		<author>
			<persName><forename type="first">O</forename><surname>Berezsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Datsko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Melnyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Nykoliuk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Pitsun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Verbovyy</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Database. Copyright registration certificate number 75359</title>
		<imprint>
			<date type="published" when="2017-12-14">December 14, 2017; January 26, 2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b45">
	<analytic>
		<title level="a" type="main">Generative adversarial networks</title>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">J</forename><surname>Goodfellow</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Pouget-Abadie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Mirza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Warde-Farley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ozair</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">C</forename><surname>Courville</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Bengio</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Communications of the ACM</title>
		<imprint>
			<biblScope unit="volume">63</biblScope>
			<biblScope unit="page" from="139" to="144" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b46">
	<monogr>
		<title level="m" type="main">A Note on the Inception Score</title>
		<author>
			<persName><forename type="first">S</forename><surname>Barratt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Sharma</surname></persName>
		</author>
		<idno type="DOI">10.48550/ARXIV.1801.01973</idno>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b47">
	<analytic>
		<title level="a" type="main">Pros and cons of GAN evaluation measures</title>
		<author>
			<persName><forename type="first">A</forename><surname>Borji</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.cviu.2018.10.009</idno>
	</analytic>
	<monogr>
		<title level="j">Comput. Vis. Image Underst</title>
		<imprint>
			<biblScope unit="volume">179</biblScope>
			<biblScope unit="page" from="41" to="65" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b48">
	<monogr>
		<title level="m" type="main">Cytological and histological images of breast cancer</title>
		<author>
			<persName><forename type="first">O</forename><surname>Berezsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Datsko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Melnyk</surname></persName>
		</author>
		<idno type="DOI">10.5281/zenodo.7890874</idno>
		<ptr target="https://doi.org/10.5281/zenodo.7890874.doi:10.5281/zenodo.7890874" />
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b49">
	<monogr>
		<title level="m" type="main">Rudi</title>
		<ptr target="https://github.com/liashchynskyi/rudi" />
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
