<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Application of neural network platforms for text-based image generation</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Oleh Yasniy</string-name>
          <email>oleh.yasniy@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Abdellah Menou</string-name>
          <email>abdellahmenou1@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andriy Mykytyshyn</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vitalii Kubashok</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Iryna Didych</string-name>
          <email>iryna.didych1101@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>D School, Mohammed VI Polytechnic University</institution>
          ,
          <addr-line>Benguérir</addr-line>
          ,
          <country country="MA">Morocco</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Ternopil Ivan Puluj National Technical University</institution>
          ,
          <addr-line>56, Ruska Street, Ternopil, 46001</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This article takes a closer look at various aspects of neural networks and their use in the modern world. Neural networks are a powerful artificial intelligence tool with advantages in numerous areas, from image processing to content generation. A small experiment was performed on specific neural networks, namely DALL-E 3, Midjourney, ImageFX, Adobe Firefly, and Leonardo, comparing the results of their work in response to the same query. The Copilot neural network on the DALL-E 3 platform showed excellent results in generating images and text, making it one of the most successful neural networks among those tested.</p>
      </abstract>
      <kwd-group>
        <kwd>artificial intelligence</kwd>
        <kwd>neural networks</kwd>
        <kwd>platforms</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Artificial neural networks, or simply neural networks, are one of the most exciting and dynamic areas in information technology. Over the past few decades, they have evolved from concept to reality, creating incredible opportunities for development and innovation in various industries.</p>
      <p>The article then discusses practical cases of using neural networks to create content, such as images, from text. Finally, the main directions of development and the challenges faced by researchers in this area are analysed.</p>
      <p>The aim of this paper is to compare different neural networks, such as DALL-E 3, Midjourney, ImageFX, Adobe Firefly, and Leonardo, based on their performance in response to a specific query.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Application of neural networks</title>
      <p>
        Neural networks are among the most innovative technologies affecting many aspects of our lives. They are widely used across industries and solve tasks ranging from pattern recognition to forecasting market trends. Let's take a closer look at some of the most important application areas of neural networks and provide specific examples [
        <xref ref-type="bibr" rid="ref7 ref8">7-8</xref>
        ]:
      </p>
      <p>Medicine:</p>
      <p>Disease diagnosis: Neural networks are employed to analyse medical images, such as X-ray or magnetic resonance imaging (MRI) scans, to detect signs of diseases such as cancer or heart disease. For example, neural networks can help detect cancer in X-ray images of the gastrointestinal tract.</p>
      <p>Treatment prediction: Neural networks can analyse patients' medical data and medical history to predict the effectiveness of different treatments. For example, they can help doctors choose the most appropriate cancer treatment for a particular patient.</p>
      <p>Finance:</p>
      <p>Financial market prediction: Neural networks are utilised to analyse financial data and predict trends in stock and currency markets. They can analyse large amounts of data, considering factors such as economic indicators, political events, and social trends.</p>
      <p>Fraud detection: Financial institutions use neural networks to detect fraud and anomalous transactions. They can analyse large amounts of transactional data and detect unusual patterns that may indicate fraud.</p>
      <p>Technology:</p>
      <p>Pattern recognition: Neural networks can recognise faces, driver's licences, car licence plates, etc. For example, they can automatically recognise and identify faces in images or videos.</p>
      <p>Autonomous systems: In autonomous cars, neural networks analyse data from sensors, such as radars, cameras, and lidars, and make real-time decisions. They help the car react to its environment and avoid accidents.</p>
      <p>Education:</p>
      <p>
        Personalised learning: Neural networks help to create individualised learning programmes and materials that take into account the characteristics of each student. They analyse data on student performance, personality, and learning style to provide an optimal learning experience. For example, AI platforms can recommend individualised tasks and materials for each learner according to their needs and abilities [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>Adaptive testing: Neural networks can create tests that adapt to the learner's level of knowledge. They analyse the learner's answers to previous questions and, based on this, determine the difficulty level of subsequent questions. This helps assess students' knowledge effectively and offer them appropriate tasks, increasing motivation and learning outcomes.</p>
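      <p>As a toy illustration of the adaptive idea (a plain heuristic in Python, not a neural network; the level range and step size are assumed values), the difficulty of the next question can be driven by the correctness of the previous answer:</p>
      <preformat>
def next_difficulty(current, answered_correctly, min_level=1, max_level=10):
    # Raise the difficulty after a correct answer, lower it after an
    # incorrect one, and stay within the available range of levels.
    step = 1 if answered_correctly else -1
    return max(min_level, min(max_level, current + step))

level = 5
for correct in [True, True, False, True]:
    level = next_difficulty(level, correct)
print(level)  # 5 -> 6 -> 7 -> 6 -> 7
      </preformat>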
      <sec id="sec-2-1">
        <title>2.1. The principle of neural networks</title>
        <p>
          Neural networks are complex mathematical models designed on the basis of the properties of biological neural networks in the human brain. Their operating principles include the following aspects [
          <xref ref-type="bibr" rid="ref10 ref11 ref12 ref13">10-13</xref>
          ]:
        </p>
        <p>1. Architecture of the neural network:</p>
        <p>Layers of neurons: Neural networks typically have several layers, including an input layer, internal (hidden) layers, and an output layer. The input layer receives data, the hidden layers process this data, and the output layer generates the final result.</p>
        <p>Connections between neurons: Every neuron in one layer is connected to every neuron in the next. These connections carry weights that determine how strongly the input signals influence the neuron's activation.</p>
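        <p>A minimal sketch of a forward pass through such a layered network (Python with NumPy; the layer sizes and random placeholder weights are illustrative assumptions, since a trained network learns its weights from data):</p>
        <preformat>
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))     # weights: input layer to hidden layer
W2 = rng.normal(size=(4, 2))     # weights: hidden layer to output layer

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([0.5, -1.0, 2.0])   # the input layer receives the data
hidden = sigmoid(x @ W1)         # the hidden layer processes the data
output = sigmoid(hidden @ W2)    # the output layer gives the result
print(output)
        </preformat>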
        <p>2. Activation function:</p>
        <p>Linear or nonlinear: The activation function determines how the signal is transmitted through the neuron. Neurons can have a linear or a nonlinear activation function. Nonlinear activation functions, such as ReLU or sigmoid, allow neural networks to model more complex, nonlinear relationships in the input data.</p>
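        <p>As a minimal sketch (Python with NumPy; the sample input values are arbitrary), the two nonlinear activation functions named above can be written as:</p>
        <preformat>
import numpy as np

def relu(x):
    # ReLU passes positive values through unchanged and zeroes negatives
    return np.maximum(0.0, x)

def sigmoid(x):
    # Sigmoid squashes any real input into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(x))      # [0.   0.   0.   1.5]
print(sigmoid(x))   # values strictly between 0 and 1
        </preformat>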
        <p>3. Training of neural networks:</p>
        <p>Backpropagation: This is the basic neural network training method, in which the neurons' weights are adjusted according to the error between the predicted and expected outcomes. The weights are updated in a way that reduces this error.</p>
        <p>Method of changing weights: Various methods for adjusting weights, such as the momentum method, Adam optimisation, and others, help neural networks learn faster and more efficiently.</p>
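        <p>A minimal sketch of this error-driven weight update, for a single linear neuron trained by plain gradient descent (the data, learning rate, and iteration count are illustrative assumptions; momentum and Adam refine this same basic update rule):</p>
        <preformat>
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))      # 100 samples, 3 features
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w                     # targets the neuron should learn

w = np.zeros(3)                    # initial weights
lr = 0.1                           # learning rate
for _ in range(200):
    y_pred = X @ w                 # forward pass: predicted outcome
    error = y_pred - y             # difference from the expected outcome
    grad = X.T @ error / len(X)    # gradient of the mean squared error
    w = w - lr * grad              # adjust weights to reduce the error

print(w)                           # approaches [1.0, -2.0, 0.5]
        </preformat>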
        <p>4. Regularisation:</p>
        <p>Reducing overfitting: When the neural network becomes overly adapted to the training data, regularisation techniques such as dropout and L1 or L2 regularisation are applied to prevent overfitting.</p>
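        <p>Two of the named techniques can be sketched as follows (illustrative Python; the penalty strength and dropout rate are assumed values):</p>
        <preformat>
import numpy as np

def l2_gradient(X, y, w, lam=0.01):
    # Mean-squared-error gradient plus an L2 penalty term; the lam * w
    # term shrinks weights toward zero, discouraging overfitting.
    return X.T @ (X @ w - y) / len(X) + lam * w

def dropout(activations, p, rng):
    # Inverted dropout: randomly zero a fraction p of activations during
    # training, rescaling the survivors to keep the expected value.
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

rng = np.random.default_rng(0)
a = np.ones(10)
print(dropout(a, 0.5, rng))   # roughly half the entries zeroed
        </preformat>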
        <p>5. Output layer and loss function:</p>
        <p>Loss function: This function determines how closely the neural network's prediction matches the actual data. It can be, for example, a squared error for regression or cross-entropy for classification.</p>
        <p>Activation of the output layer: The activation function of the output layer may differ depending on the task. For example, a sigmoid function is used for binary classification, while a softmax function is utilised for multi-class classification.</p>
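        <p>A minimal sketch of the softmax output activation together with the cross-entropy loss (the logit values are arbitrary):</p>
        <preformat>
import numpy as np

def softmax(z):
    # Subtracting the max keeps the exponentials numerically stable
    e = np.exp(z - np.max(z))
    return e / e.sum()

def cross_entropy(probs, target_index):
    # Negative log-probability assigned to the correct class
    return -np.log(probs[target_index])

logits = np.array([2.0, 0.5, -1.0])   # raw output-layer scores
probs = softmax(logits)               # probabilities summing to 1
print(probs, cross_entropy(probs, 0))
        </preformat>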
        <p>Understanding these principles is key to designing and optimising neural networks for various machine learning and artificial intelligence tasks. Neural networks can solve real-world problems and form the basis of innovative applications.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Generative artificial neural networks</title>
        <p>
          Neural networks have a wide range of applications in various industries, including generating content from text, whether images, video, or sound. Let's take a closer look at each type of application [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]:
        </p>
        <p>1. Generation from text to image (Midjourney, Leonardo.ai, DALL-E 3, ImageFX):</p>
        <p>Principle of operation: Such neural networks are trained to associate textual descriptions with images and to generate corresponding images. The neural network first receives an input text description, which it converts into a vector representation. This vector is then passed through a generative model that creates the corresponding image.</p>
        <p>Application: This method is widely used in art, design, and creative projects. For example, it can automatically generate illustrations for articles or books based on descriptive text.</p>
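        <p>A schematic sketch of this text-to-vector-to-image pipeline (all classes here are hypothetical stand-ins for illustration, not the API of any real model; the embedding size and image shape are assumptions):</p>
        <preformat>
import numpy as np

class TextEncoder:
    def encode(self, prompt):
        # A real system maps the prompt to a learned embedding vector;
        # here we derive a fixed-size placeholder vector from the text.
        rng = np.random.default_rng(sum(prompt.encode()))
        return rng.normal(size=512)

class ImageGenerator:
    def generate(self, embedding):
        # A real generative model decodes the embedding into pixels;
        # here we return a placeholder 64x64 RGB array.
        rng = np.random.default_rng(0)
        return rng.random(size=(64, 64, 3))

embedding = TextEncoder().encode("a cat wearing a hat")
image = ImageGenerator().generate(embedding)
print(embedding.shape, image.shape)   # (512,) (64, 64, 3)
        </preformat>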
        <p>2. Generation from text to video (Synthesia, Pika):</p>
        <p>Principle of operation: In this case, the neural network learns correlations between text descriptions and video scenes. It receives an input text describing a plot or sequence of events and generates a corresponding video. This requires processing a large amount of video and text data to train the model.</p>
        <p>Applications: This method can find applications in film, advertising, and social media content production. For example, it can automatically generate commercials or short films based on scripts.</p>
        <p>3. Generation from text to audio (Sound of Text):</p>
        <p>Principle of operation: In this case, the neural network converts a text description into an audio file. It can learn the relationship between the text and the acoustic properties of the sound, such as voice timbre, intonation, and speech rate. This can include speech synthesis or music generation from text.</p>
        <p>Applications: This method is useful for tasks including audiobooks, podcasts, audio ads, and speech synthesis for apps and services. For example, it can be used to automatically create audio ads from a textual description of a product.</p>
        <p>These applications of neural networks have significant potential for creating different types of content from textual data. Although they require substantial computing resources and training data, they open up new opportunities for automating the creative process and creating innovative content.</p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Neural networks Midjourney, Leonardo, DALL-E, ImageFX</title>
        <p>Midjourney neural network: Midjourney is an advanced neural network for image and graphics processing. It offers a wide range of capabilities, from image editing and restoration to creating graphic effects and filters. Midjourney allows for the automation of image-processing workflows, making the tasks of graphic designers and photographers easier.</p>
        <p>Leonardo.ai: Leonardo.ai is a platform that uses neural networks and artificial intelligence
to create various graphic solutions. It provides tools for creating artwork, logo design, image
processing, and more. Leonardo.ai allows users to quickly and efficiently create impressive
graphic elements using artificial intelligence.</p>
        <p>
          DALL-E 3, ImageFX: DALL-E 3 and ImageFX are innovative neural networks designed to
generate images from text descriptions and apply various graphic effects to images,
respectively. They can create complex, realistic graphic objects using artificial intelligence
algorithms [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ].
        </p>
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Advantages and disadvantages of neural networks</title>
        <p>Neural networks have several advantages that make them a powerful tool in artificial intelligence. First, they are known for their high accuracy, solving complex problems accurately and efficiently. For example, neural networks are successfully used for pattern recognition, text classification, and data analysis.</p>
        <p>Second, neural networks can automate many processes that previously required significant human involvement. They can quickly and efficiently perform routine tasks, such as data processing, image editing, or speech synthesis.</p>
        <p>In addition, neural networks can learn from large amounts of data, allowing them to improve over time. They can adapt to new conditions and tasks, which makes them versatile tools for various industries and fields of activity. It is also important to note the flexibility of neural networks. They can be applied to both simple and complex tasks and adapt to different data types. This makes them useful for various applications, from medicine to finance, advertising, and art.</p>
        <p>Despite their advantages, neural networks also have several disadvantages. For example,
they require a large amount of data to train, and insufficient data can lead to poor model
accuracy. In addition, training neural networks can be time-consuming and require significant
computing resources.</p>
        <p>Another drawback is the difficulty of interpreting the results. Neural networks are often
considered "black boxes" because their decisions can be difficult to understand and explain.
This can complicate decision-making processes and the implementation of models in practice.
In addition, as the complexity of neural networks increases, so does the need for computing
resources. Large and complex neural networks can require significant computing power to
train and use, which can be difficult for many organisations and researchers.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Operation of neural networks in practice and comparison</title>
      <p>To analyse the effectiveness of neural networks in improving individualised learning for students, a small experiment was conducted to compare different neural networks based on the results of their work in response to a specific query. This experiment became the basis for analysing the effectiveness and capabilities of neural networks in the context of the learning process.</p>
      <p>The experiment used the following prompt, generated in a chat with the GPT artificial intelligence language model:</p>
      <p>"Create a high quality, realistic image of Lesya Ukrainka wearing a traditional Ukrainian
embroidered shirt, reflecting her modern spirit and cultural identity. She should be standing in
a busy place in modern Ukraine, against the backdrop of Khreshchatyk. In the background,
the inscription "Ukraine" should be visible, embedded in the cityscape, emphasising the
connection between the figure and the place. Every detail should be carefully reproduced."</p>
      <p>Figure 1 shows the image built by Google's neural network, ImageFX.</p>
      <p>ImageFX, a neural network from Google, provided quite beautiful images with good
human detail, including Lesya Ukrainka in a Ukrainian embroidered shirt. However, there was
a problem with the clarity of the background and the generation of text in the background,
which made it difficult to create a realistic image.</p>
      <p>Figure 2 shows the result of the Adobe Firefly neural network.</p>
      <p>Firefly, a neural network from Adobe, also provided high-quality images. Still, it was not always able to generate the city background and text, which significantly affected the realism of the created image.</p>
      <p>Figure 3 shows the image generated by the Leonardo neural network.</p>
      <p>The Leonardo neural network did not provide the expected results, which may be due to its
technical features and limitations.</p>
      <p>Figure 4 shows the image built by the Midjourney neural network.</p>
      <p>Midjourney also failed to meet expectations, which indicates that this neural network
likely has limitations in creating realistic images.</p>
      <p>Figure 5 shows the image built by the Copilot neural network on the DALL-E 3 platform.</p>
      <p>The Copilot neural network on the DALL-E 3 platform demonstrated excellent results in
generating images and text, making it one of the most successful neural networks among the
tested ones.</p>
      <p>The results show that neural networks can be an effective tool for improving personalised learning, but they have their limitations. Some neural networks, such as Copilot on the DALL-E 3 platform, have shown good results in generating images and text, but they are imperfect and require further improvement.</p>
      <p>Table 1 compares different neural networks such as DALL-E 3, Midjourney, ImageFX,
Adobe Firefly, and Leonardo based on their performance on a specific query.</p>
      <sec id="sec-3-1">
        <title>Clarity of the background</title>
      </sec>
      <sec id="sec-3-2">
        <title>Generation of text in the background + +</title>
      </sec>
      <sec id="sec-3-3">
        <title>Adobe Firefly + +</title>
        <p>+
+
+
+
+</p>
      </sec>
      <sec id="sec-3-4">
        <title>DALL-E 3</title>
        <p>+
+
+
+</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusions</title>
      <p>The article describes different types of neural networks, including deep neural networks,
delving into their structure and principles of operation. Popular neural network architectures
and their application in solving various tasks are also considered. In addition, the advantages
and disadvantages of using neural networks are discussed. In particular, the Copilot neural
network on the DALL-E 3 platform demonstrated excellent results in generating images and
text, making it one of the most successful neural networks among the tested ones.</p>
      <p>In conclusion, it is important to emphasise that neural networks have great potential in
many fields and continue to evolve and improve daily. They have the potential to change how
we work, learn, and communicate, and it is important to continue exploring their capabilities
to maximise their usage in the future.</p>
      <sec id="sec-4-1">
        <title>Text. Retrieved from</title>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>LeCun</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bengio</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Hinton</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          (
          <year>2015</year>
          ).
          <article-title>Deep learning</article-title>
          .
          <source>Nature</source>
          ,
          <volume>521</volume>
          (
          <issue>7553</issue>
          ),
          <fpage>436</fpage>
          -
          <lpage>444</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Goodfellow</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bengio</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Courville</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>Deep Learning</article-title>
          . MIT Press.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Schmidhuber</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2015</year>
          ).
          <article-title>Deep learning in neural networks: An overview</article-title>
          .
          <source>Neural Networks</source>
          ,
          <volume>61</volume>
          ,
          <fpage>85</fpage>
          -
          <lpage>117</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Deng</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Yu</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          (
          <year>2014</year>
          ).
          <article-title>Deep learning: methods and applications</article-title>
          .
          <source>Foundations and Trends® in Signal Processing</source>
          ,
          <volume>7</volume>
          (
          <issue>3-4</issue>
          ),
          <fpage>197</fpage>
          -
          <lpage>387</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Karpathy</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Fei-Fei</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          (
          <year>2015</year>
          ).
          <article-title>Deep visual-semantic alignments for generating image descriptions</article-title>
          .
          <source>In Proceedings of the IEEE conference on computer vision and pattern recognition</source>
          (pp.
          <fpage>3128</fpage>
          -
          <lpage>3137</lpage>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Radford</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wu</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Child</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Luan</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Amodei</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Sutskever</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          (
          <year>2019</year>
          ).
          <article-title>Language models are unsupervised multitask learners</article-title>
          .
          <source>OpenAI blog</source>
          ,
          <volume>1</volume>
          (
          <issue>8</issue>
          ),
          <fpage>9</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Vaswani</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shazeer</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Parmar</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Uszkoreit</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jones</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gomez</surname>
            ,
            <given-names>A. N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kaiser</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Polosukhin</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>Attention is all you need</article-title>
          .
          <source>In Advances in neural information processing systems</source>
          (pp.
          <fpage>5998</fpage>
          -
          <lpage>6008</lpage>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Brock</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Donahue</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Simonyan</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          (
          <year>2019</year>
          ).
          <article-title>Large scale GAN training for high fidelity natural image synthesis</article-title>
          .
          <source>In International Conference on Learning Representations.</source>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Yasniy</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mykytyshyn</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Didych</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kubashok</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Boiko</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2023</year>
          ).
          <article-title>Application of artificial intelligence to improve the work of educational platforms</article-title>
          .
          <source>In ITTAP</source>
          (pp.
          <fpage>605</fpage>
          -
          <lpage>609</lpage>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Haykin</surname>
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Neural Networks - A Comprehensive Foundation - Simon Haykin</surname>
          </string-name>
          .pdf. McMaster University, Hamilton, Ontario, Canada,
          <year>2006</year>
          . P.
          <volume>823</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11] N. Richard: Applied regression analysis, third ed., John Wiley &amp; Sons, New York,
          <year>1998</year>
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Philip</surname>
            <given-names>D.</given-names>
          </string-name>
          <string-name>
            <surname>Wasserman</surname>
          </string-name>
          .
          <source>Neural Computing: Theory and Practice</source>
          , New York: Coriolis Group (Sd),
          <fpage>1989</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>K.</given-names>
            <surname>Gurney</surname>
          </string-name>
          :
          <article-title>An introduction to neural networks</article-title>
          , first ed., Taylor &amp; Francis Group, London,
          <year>1997</year>
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Karras</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Aila</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Laine</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Lehtinen</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2019</year>
          ).
          <article-title>A style-based generator architecture for generative adversarial networks</article-title>
          .
          <source>In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</source>
          (pp.
          <fpage>4401</fpage>
          -
          <lpage>4410</lpage>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>OpenAI.</surname>
          </string-name>
          (
          <year>2023</year>
          ).
          <article-title>DALL-E 3: Creating Images from https://openai</article-title>
          .com/dall-e-3
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>