<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Dynamic User-Guided Evolutionary System for Generative Music</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ievgen Fedorchenko</string-name>
          <email>evg.fedorchenko@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrii Oliinyk</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tetiana Fedoronchak</string-name>
          <email>t.fedoronchak@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Maksym Chornobuk</string-name>
          <email>chornobuk.maksym@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>National University Zaporizhzhia Polytechnic</institution>
          ,
          <addr-line>Zaporizhzhia 69011</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2026</year>
      </pub-date>
      <volume>000</volume>
      <fpage>0</fpage>
      <lpage>0002</lpage>
      <abstract>
        <p>This article presents a music generation system based on a modified interactive genetic algorithm (IGA) with dynamic user engagement tracking. The developed approach allows the creation of monophonic MIDI compositions based on user feedback. The proposed model automatically adjusts the probabilities of mutations and injections in the population depending on the user engagement level, thereby reducing user fatigue and accelerating convergence toward acceptable results. Experiments conducted with five volunteers showed that the system is able to generate compositions rated by users as attractive in an average of 4.6 iterations, while a new generation is computed in less than 1 ms. The system has a simple interface based on .NET MAUI and allows exporting results in the MIDI format. The proposed solution combines high speed with the possibility of fine-tuning the generation parameters, which makes it promising for creating interactive music products in real time.</p>
      </abstract>
      <kwd-group>
        <kwd>genetic algorithm</kwd>
        <kwd>music generation</kwd>
        <kwd>MIDI</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Dynamic music generation has been known and used in various digital products since the end of the
20th century. Interactive genetic algorithms (IGAs) are widely used in the field of music generation, where the subjective attractiveness of the same track
depends on the aesthetic preferences of a particular user or group of users [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>
        Several existing systems demonstrate the potential of IGAs for music generation. For example,
GenJam [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] uses user evaluation for generating jazz solos; DarwinTunes [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] uses similar
evolutionary approaches for music generation. These systems typically encode musical properties
(e.g. pitch, rhythm, harmony) as genomes, apply selection and mutation, and rely on user feedback
to evolve increasingly attractive compositions. However, such models often suffer from limitations
such as high user fatigue, slow convergence, or fixed mutation strategies that do not adapt to listener
engagement.
      </p>
      <p>This paper examines an improved adaptive interactive genetic algorithm designed to generate
music in the MIDI format. The developed model is characterized by simplicity and high speed but
differs from a number of other models. In particular, the developed system monitors user feedback
in different generations and evaluates user engagement based on statistical indicators calculated
from the flow of user ratings. The calculated indicator dynamically adjusts the probability of
mutations and injections in the population in order to reduce user fatigue, accelerate convergence,
and maintain interest throughout the entire process of using the system.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Overview of existing systems</title>
      <p>
        The article [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] describes GenJam, one of the early systems for generating music using an interactive
genetic algorithm. GenJam was created for generating jazz solos. The system can work in one of
three modes:
      </p>
      <p>Learning mode. Random compositions are generated and the user evaluates them; the
evolutionary process has not yet started.</p>
      <p>Demo mode. The best of the previously generated compositions is played.</p>
      <p>Evolution mode. Genetic operations are used for the dynamic generation of a new population
of musical compositions.</p>
      <p>GenJam uses a two-level genetic coding scheme to represent compositions. Musical content is
hierarchically structured into phrases and measures, each encoded as binary chromosomes of fixed
length. This representation combines rhythm and pitch. Each bar corresponds to a 32-bit chromosome
representing eight consecutive eighth-note positions in a 4/4 time signature. Each position is
encoded by a 4-bit event: Rest, Hold, or New Note.</p>
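Such an eighth-note event stream can be decoded as follows (a minimal sketch; the exact value mapping — 0 for rest, 15 for hold, 1–14 for a new note — is an assumption based on GenJam's published scheme, not taken from this article):

```python
def decode_measure(chromosome: int) -> list:
    """Decode a 32-bit GenJam-style measure chromosome into eight 4-bit events.

    Assumed event mapping: 0 = rest, 15 = hold the previous note,
    1..14 = index of a new note in the current scale.
    """
    events = []
    for pos in range(8):
        # Extract the 4-bit event for each of the eight eighth-note
        # positions, most significant nibble first.
        nibble = (chromosome >> (28 - 4 * pos)) & 0xF
        if nibble == 0:
            events.append("rest")
        elif nibble == 15:
            events.append("hold")
        else:
            events.append(f"note-{nibble}")
    return events
```

For example, the chromosome 0x10F00000 decodes to a new note, a rest, a hold, and five further rests.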
      <p>The system is quite limited: the compositions do not vary in tempo (BPM value), and they use
predetermined chord sequences.</p>
      <p>
        The GP-Music system described in the article [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] is more recent. This system has several
distinctive features. Compositions are represented as program trees: the leaves of the trees represent individual musical notes, pseudo-chords, or
pauses. Internal nodes represent musical transformations or operations applied to sequences. Thus,
each node of the tree returns a sequence of notes, and the final composition is returned by the root
of the tree. Such a structure has its advantages: it is possible to carry out mutation and crossover
operations conveniently. Another important feature is the use of a simple neural network that learns
during the phase of feedback from the user and then is able to give its own evaluations to
compositions, reducing the burden on the user. This method is called the Surrogate Fitness Function
and has great potential, although in real conditions it faces the problem of not having enough
resources for a complex model capable of qualitatively simulating user evaluations. Nevertheless, the
system is quite limited and allows the creation of only compact, monophonic musical compositions.
A big drawback is the fact that all the notes in the generated compositions are of the same length:
this limits the system significantly, preventing the creation of complex compositions similar to real
music.
      </p>
      <p>
        Another approach is described in the article [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. The authors developed the DarwinTunes system
based on an interactive genetic algorithm that did not use a surrogate fitness function, but aggregated
feedback from more than 6,000 users, using their aggregated ratings as a fitness function for small
pieces of music. The proposed system used a population of tree-like digital genomes, each of which
described a program that generated a small, looping piece of music 8 seconds long.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Problem statement</title>
      <p>The Musical Instrument Digital Interface (MIDI) is a standard developed in 1983 as a result of a collaboration between
leading manufacturers of electronic instruments. The standard ensures interoperability between
equipment from different manufacturers and provides a digital abstraction layer over analog and
digital sound generation processes.</p>
      <p>The Standard MIDI File (SMF) format allows compact storage of data about musical compositions.
Unlike classic audio files that store the data about sound directly, MIDI files store sequences of
discrete musical events that are then converted into music using software or hardware solutions.
Each MIDI file consists of a header and one or more tracks. The header defines global parameters
such as file type, number of tracks, and tempo. Each track consists of a sequence of time-ordered
events, each of which has a timestamp relative to the previous event. This allows for precise
placement of notes and other musical events throughout the track.</p>
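The header chunk described above can be assembled byte for byte (an illustrative sketch following the SMF specification; the chosen values are examples):

```python
import struct

def smf_header(fmt: int, ntracks: int, division: int) -> bytes:
    """Build a Standard MIDI File header chunk.

    The chunk is the ASCII tag 'MThd', a 4-byte big-endian length
    (always 6), then three 16-bit big-endian fields: file format (0-2),
    number of tracks, and time division (ticks per quarter note).
    """
    return b"MThd" + struct.pack(">IHHH", 6, fmt, ntracks, division)

# A format-0 file with one track and 480 ticks per quarter note:
header = smf_header(0, 1, 480)
```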
      <p>
        The most common are the Note On and Note Off event types, which together determine when a
note starts and ends. Each note playback event is characterized by a MIDI channel (0–15), note pitch
(0–127), and note velocity (0–127). These parameters describe both the horizontal (time-based) and
vertical (pitch-based) structure of a musical composition [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
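As an illustration (not part of the described system), the three parameters map directly onto the raw message bytes, with the channel in the low nibble of the status byte:

```python
def note_on(channel: int, pitch: int, velocity: int) -> bytes:
    """Encode a MIDI Note On message: status byte 0x9n (n = channel 0-15),
    followed by pitch (0-127) and velocity (0-127) data bytes."""
    assert 0 <= channel <= 15 and 0 <= pitch <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, pitch, velocity])

def note_off(channel: int, pitch: int, velocity: int = 0) -> bytes:
    """Encode a MIDI Note Off message: status byte 0x8n."""
    assert 0 <= channel <= 15 and 0 <= pitch <= 127 and 0 <= velocity <= 127
    return bytes([0x80 | channel, pitch, velocity])

# Middle C (pitch 60) at velocity 100 on channel 0:
msg = note_on(0, 60, 100)
```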
      <p>The simplicity and widespread nature of the MIDI format justify its choice as the basis for the
system under development. Considering the need to create simple monophonic compositions, as well
as the absence of the need to change the velocity of notes, it is possible to formally describe a musical
composition as follows:</p>
      <p>t = (b, {e_1, e_2, …, e_n}), (1)</p>
      <p>e_i = (p_i, d_i) or e_i = (∅, d_i), (2)</p>
      <p>p_i ∈ P, d_i ∈ D, (3)</p>
      <p>where P is the set of allowed pitch values; D is the set of allowed values of the length of the note;
b is the tempo of the composition; and e_i is the i-th musical event, either a note with pitch p_i and
duration d_i or a rest (∅) of duration d_i.</p>
      <p>Then, formally, the task of developing a music generation system is reduced to finding such a
function M:</p>
      <p>S_{n+1} = M(S_n, Y_n), (4)</p>
      <p>S_n = {s_1, s_2, …, s_x}, (5)</p>
      <p>Y_n = {y_1, y_2, …, y_x}, (6)</p>
      <p>s_i ∈ T, (7)</p>
      <p>where T is the set of all possible musical compositions, considering the limitations of the system;
S_n is the n-th generation (population) of compositions; and Y_n is the set of user ratings for the
compositions of generation n.</p>
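This formal description can be mirrored in a small data model (an illustrative sketch; the concrete sets P and D shown here are example values, not the article's):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Illustrative allowed sets: P (pitches) and D (durations in quarter notes).
P = set(range(48, 72))          # pitches of one chosen register
D = {0.25, 0.5, 1.0, 2.0}       # sixteenth note up to half note

@dataclass
class Event:
    pitch: Optional[int]        # None encodes a rest (the empty-set case)
    duration: float

@dataclass
class Composition:
    bpm: int                    # tempo of the composition
    events: Tuple[Event, ...]   # ordered monophonic event sequence

melody = Composition(bpm=120,
                     events=(Event(60, 1.0), Event(None, 0.5), Event(64, 0.5)))
```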
      <p>The most difficult challenge during system development is finding such a function M, which is
able to quickly generate compositions that will receive a high rating from the user.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Development of a modified interactive genetic algorithm</title>
      <p>A system was developed based on a modified interactive genetic algorithm capable of generating and
playing small musical compositions using the MIDI technology described above. After the generation
of the next generation Sn, the system asks the user to rate each of the compositions on a 10-point
scale. The obtained values Yn are used as the fitness function of the compositions.</p>
      <p>The system encodes each composition using a genome consisting of discrete, quantitative, and
sequential genes. Discrete genes encode the scale and the root note. Quantitative genes encode the
number of beats per minute. Sequential genes encode harmonic and rhythmic sequences and
arpeggio types in bars. Such data is enough to encode compositions that are not limited to one genre
of music, as in the GenJam system. The system is flexible and customizable, because it allows the
configuration of limit values for quantitative genes and of possible values for discrete genes.</p>
      <sec id="sec-4-1">
        <title>Workflow</title>
        <p>The system has the following operating cycle (the value x, the generation size, is established
empirically as 5):</p>
        <p>1. Generate the initial population S1 of x random musical compositions (individuals).</p>
        <p>2. Show the user the current population. Get the rating value yi for each individual si from the
user.</p>
        <p>3. Generate a new population of x individuals. Each individual in the population Sn will be the
offspring of two individuals from the previous population S(n-1).</p>
        <p>4. Carry out injections in the new population Sn: with the injection probability, replace an
individual with a newly generated random one.</p>
        <p>5. Carry out mutations in the new population Sn.</p>
        <p>6. Return to step 2.</p>
      </sec>
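One pass of such an interactive cycle can be sketched as follows (names and stub operators are illustrative assumptions, not the article's code):

```python
def evolution_step(population, get_ratings, breed, mutate):
    """One pass of the interactive cycle: rate the current population,
    breed a same-sized new population from it, then mutate the offspring."""
    ratings = get_ratings(population)                 # user feedback y_i
    offspring = [breed(population, ratings) for _ in population]
    return [mutate(child) for child in offspring]

# Stub operators showing the expected call shapes:
new_pop = evolution_step(
    ["c1", "c2", "c3"],
    get_ratings=lambda pop: [0.5] * len(pop),  # user rates every composition
    breed=lambda pop, r: pop[0],               # trivial stand-in for crossover
    mutate=lambda child: child + "*",          # trivial stand-in for mutation
)
```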
      <sec id="sec-4-2">
        <title>Reproduction</title>
        <p>During reproduction, the offspring receives a random combination of the genes of its parents.
Each of the genes is independently generated on the basis of parental genes. Genes encoding
quantitative parameters are chosen randomly between parental values. Genes encoding discrete
parameters randomly take one of the parental values. Genes encoding sequence parameters are
generated based on the random combination of the sequences of the parameters of the parents.</p>
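The three gene-specific rules could look like the following sketch (the way sequence length and elements are combined here is a loose assumption for illustration):

```python
import random

def crossover_quantitative(a: float, b: float) -> float:
    """A quantitative gene (e.g. BPM) is drawn uniformly between the
    parental values."""
    return random.uniform(min(a, b), max(a, b))

def crossover_discrete(a, b):
    """A discrete gene (e.g. scale or root note) takes one parental value."""
    return random.choice([a, b])

def crossover_sequence(a: list, b: list) -> list:
    """A sequential gene (e.g. a chord sequence) combines parental elements;
    the child length lies between the parental lengths."""
    length = random.randint(min(len(a), len(b)), max(len(a), len(b)))
    child = []
    for i in range(length):
        # Pick element i from whichever parents are long enough.
        options = [seq[i] for seq in (a, b) if i < len(seq)]
        child.append(random.choice(options))
    return child
```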
      </sec>
      <sec id="sec-4-3">
        <title>Parent selection</title>
        <p>Parents are chosen randomly as follows:</p>
        <p>P(i) = y_i / (∑_{j=0}^{x-1} y_j), (8)</p>
        <p>where P(i) is the probability of choosing the i-th individual as a parent, and y_i is the i-th rating
received from the user, y_i ∈ [0, 1].</p>
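A fitness-proportional rule of this kind is commonly implemented as roulette-wheel selection (an illustrative Python sketch, not the article's code):

```python
import random

def choose_parent(population, ratings):
    """Roulette-wheel selection: individual i is picked with probability
    P(i) = y_i / sum(y), where y_i is its user rating in [0, 1]."""
    total = sum(ratings)
    if total == 0:
        return random.choice(population)   # no preference expressed yet
    r = random.uniform(0, total)
    acc = 0.0
    for individual, y in zip(population, ratings):
        acc += y
        if r <= acc:
            return individual
    return population[-1]                  # guard against rounding error
```

An individual rated 0 is never selected, while one holding all of the rating mass is always selected.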
        <p>A feature of the developed modified system is a change in the probability of injections and
mutations based on the user interest parameter. This technique is important for a system performing
a task such as music generation, where a formal evaluation of the quality of the generated population
is not possible. Using this parameter allows dynamic change of the rate of mutations and injections
in the population of music compositions, decreasing the diversity proportionally to user interest:</p>
        <p>p_mut = p_mut^min + (p_mut^max - p_mut^min) ∗ (1 - I), (9)</p>
        <p>p_inj = p_inj^min + (p_inj^max - p_inj^min) ∗ (1 - I), (10)</p>
        <p>I = mean(r_1, r_2, …, r_n), (11)</p>
        <p>where r_i is the i-th rating received from the user, in the range 0 to 1, and I is the user interest
estimated from the values r; p_mut^min, p_mut^max and p_inj^min, p_inj^max are the minimum and
maximum mutation and injection probabilities specified as system parameters.</p>
        <p>During crossover, the genes of the offspring are formed as follows:</p>
        <p>g_k = U(g_k^(1), g_k^(2)), (12)</p>
        <p>g_k = g_k^(1) with probability 0.5, otherwise g_k^(2), (13)</p>
        <p>L_child = U(L_1, L_2), (14)</p>
        <p>g_{k,i} = g_{k,i}^(1) with probability 0.5, otherwise g_{k,i}^(2), for i ≤ min(L_1, L_2), (15)</p>
        <p>g_{k,i} = g_{k,i}^(j), where parent j satisfies i ≤ L_j, for i &gt; min(L_1, L_2), (16)</p>
        <p>where g_k is the value of gene k for the generated individual; g_k^(1) and g_k^(2) are the
values of gene k for the first and second parent individuals respectively; U(x, y) is a random variable
with a uniform distribution on the interval [x, y]; L_child is the length of the generated sequential
gene; L_1 and L_2 are the lengths of this gene for the first and second parent, respectively; and
g_{k,i} is the i-th element of the k-th sequential gene.</p>
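The engagement-driven adjustment can be sketched as follows (the mean-rating interest estimator and the linear interpolation between the configured bounds are assumptions for illustration):

```python
import statistics

def update_probability(ratings, p_min, p_max):
    """Interpolate an operator probability between configured bounds.

    Interest is estimated from the latest user ratings (each in [0, 1]);
    high interest lowers the mutation/injection rate, preserving the
    compositions the user already likes, while low interest raises it
    to restore diversity.
    """
    interest = statistics.mean(ratings)     # assumed interest estimator
    return p_min + (p_max - p_min) * (1.0 - interest)

# High ratings push the mutation rate toward its configured minimum:
p_mut = update_probability([0.9, 0.8, 1.0], p_min=0.05, p_max=0.40)
```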
        <p>This crossover implementation correctly handles different genes according to their nature. For
example, the BPM (tempo) value for the generated individual will lie between the values of this gene
in the parent individuals, and the chord sequence will be a combination of the chords used in the
parent individuals.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Developed software</title>
      <p>
        A simple and concise user interface based on the popular cross-platform .NET MAUI library was
developed for the system. This allows using the developed system on the Windows and macOS platforms
[
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>The system allows users to listen to and evaluate each of the compositions generated in the current
generation multiple times in an arbitrary order. Each of the compositions can be exported as a file in
the MIDI format. The calculated value of the user's interest is shown in the form of a graphic indicator
at the bottom of the window. Additional functionality is also provided in the form of statistics export
containing user ratings for each of the compositions in each generation.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Experiments</title>
      <p>The developed system was tested with the help of five volunteers. The volunteers were instructed
to use the system until one of the generated short compositions was subjectively pleasing to the
volunteer. Fig. 1 shows a composition which the user rated with the minimum possible rating: it
sounds like a random set of sounds and is musically unappealing. Fig. 2 shows the composition that
was generated by the system at generation 5 and rated with the maximum number of points. The first
composition is characterized by a sharp transition to the use of short sixteenth notes and pauses,
multiple repetitions of individual notes in the middle of the composition, and a lack of harmonic
movement; the second composition does not have such problems. In the second composition, there
is harmonic movement that alternates rises and falls several times, and there is no sharp and
unnatural use of short notes and pauses. The same notes are not repeated several times in a row;
instead, simple but effective techniques of creating tension for the listener and relieving it are used.
In general, the second composition is perceived as more aesthetically pleasing due to greater
musicality and structural coherence.</p>
      <p>The results of the interaction of the 5 volunteers with the system are shown in the graph in Figure 3.
The graph shows the value of the maximum rating for compositions from each generation. The test
results indicate the following properties of the system. Its disadvantage is the unpredictability of
the results. However, the system also demonstrated the ability to generate an acceptable result quite
quickly: the largest number of iterations any of the 5 volunteers needed to reach an acceptable result
was 9, and the average was 4.6 iterations.</p>
      <p>Testing of the developed software system was carried out on a computer running the Windows 11
operating system and equipped with an Intel Core i7-12650H central processor. Table 1 and Figure 4
show the results of testing over 10 generations. Based on the results of the tests, it was established
that the average time for computing the next generation on this hardware is less than 1 millisecond,
so generating the next generation is imperceptible to real users of the system.</p>
      <table-wrap id="tbl1">
        <label>Table 1</label>
        <caption>
          <p>System performance test results</p>
        </caption>
        <table>
          <thead>
            <tr><th>Generation</th><th>Time spent, milliseconds</th></tr>
          </thead>
          <tbody>
            <tr><td>1</td><td>2.3868</td></tr>
            <tr><td>2</td><td>0.3068</td></tr>
            <tr><td>3</td><td>0.1194</td></tr>
            <tr><td>4</td><td>0.119</td></tr>
            <tr><td>5</td><td>0.0923</td></tr>
            <tr><td>6</td><td>0.0747</td></tr>
            <tr><td>7</td><td>0.0646</td></tr>
            <tr><td>8</td><td>0.1081</td></tr>
            <tr><td>9</td><td>0.0931</td></tr>
            <tr><td>10</td><td>0.1471</td></tr>
          </tbody>
        </table>
      </table-wrap>
    </sec>
    <sec id="sec-6-1">
      <title>7. Discussion</title>
      <p>While the developed system, like most models based on genetic encoding, is still restricted to
generating only monophonic compositions, this limitation should be viewed in the broader context
of its performance characteristics. For many practical applications (background music for games,
simple generative soundscapes, educational tools, or interactive art installations), monophonic
output can be sufficient. More importantly, the system demonstrates several notable advantages that
distinguish it from its analogues.</p>
      <p>First, its dynamic tracking of user engagement directly addresses one of the key drawbacks of
interactive genetic algorithms: user fatigue. By continuously adapting mutation and injection
probabilities, the system reduces repetitive or unproductive iterations, enabling users to reach
satisfying results in fewer cycles. This approach can be seen as a lightweight alternative to surrogate
fitness functions, which are computationally demanding and often difficult to generalize across
users.</p>
      <p>Second, the computational efficiency of the model makes it practical for real-time applications.
Tests confirm that new generations can be produced in less than one millisecond, even on low-end
hardware. Thus, the system can be used in software where performance is critical, such as video
games or live interactive performances, without requiring cloud-based processing or expensive
hardware.</p>
      <p>Third, the system allows users to configure stylistic parameters such as tempo, scale, and chord
sets, thereby ensuring that the generated music aligns more closely with the intended genre or style.
This configurability makes the system more adaptable than solutions like GenJam or DarwinTunes,
which are narrowly specialized in their genre or structural constraints.</p>
      <p>In summary, while the lack of polyphonic capability represents a structural limitation of the
system, its generation speed, configurability, and effectiveness in reducing user fatigue make it a
promising and practical tool. These strengths collectively suggest that the proposed solution may
serve not only as a research prototype but also as a basis for a real-world interactive music
generation system.</p>
    </sec>
    <sec id="sec-7">
      <title>8. Conclusions</title>
      <p>Existing methods and systems of music generation based on genetic algorithms were reviewed. In
particular, systems that use interactive genetic algorithms, including those that use surrogate fitness
functions in the process of evolution, were considered.</p>
      <p>
        A system is proposed that generates music using a modified interactive genetic algorithm and
also uses dynamic user engagement tracking. The proposed system also differs from analogues in
the ability to generate music in different styles, as well as more subtle settings available to users.
In the future, it is possible to improve the system by integrating a surrogate fitness function similar
to the one used in the GP-Music system [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
    </sec>
    <sec id="sec-8">
      <title>Acknowledgements</title>
      <p>The article was prepared as part of the research work "Information Technologies of Intelligent
Computing".</p>
    </sec>
    <sec id="sec-9">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] K. Collins, An introduction to procedural music in video games, Contemp. Music Rev. 28.1 (2009) 5–15. doi:10.1080/07494460802663983.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] MuseNet, 2019. URL: https://openai.com/index/musenet/.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] MusicLM - AI model for music generation, 2023. URL: https://musiclm.com.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] S. Ji, J. Luo, X. Yang, A comprehensive survey on deep music generation: multi-level representations, algorithms, evaluations, and future directions, Preprint, 2020. arXiv. doi:10.48550/arXiv.2011.06801.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] H. Takagi, Interactive evolutionary computation: fusion of the capabilities of EC optimization and human evaluation, Proc. IEEE 89.9 (2001) 1275–1296. doi:10.1109/5.949485.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] J. A. Biles, Life with GenJam: interacting with a musical IGA, in: IEEE SMC'99 conference proceedings. 1999 IEEE international conference on systems, man, and cybernetics, IEEE, 1999. doi:10.1109/icsmc.1999.823290.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] R. M. MacCallum, M. Mauch, A. Burt, A. M. Leroi, Evolution of music by public choice, Proc. Natl. Acad. Sci. 109.30 (2012) 12081–12086. doi:10.1073/pnas.1203182109.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] B. Johanson, R. Poli, GP-Music: an interactive genetic programming system for music generation with automated fitness raters, in: Genetic programming 1998: proceedings of the third annual conference, 1998, pp. 181–186. doi:10.1109/TEVC.1999.771172.</mixed-citation>
      </ref>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] MIDI. URL: https://midi.org.</mixed-citation>
      </ref>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] .NET Multi-platform App UI documentation - .NET MAUI. URL: https://learn.microsoft.com/en-us/dotnet/maui.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] I. Fedorchenko, A. Oliinyk, A. Stepanenko, T. Zaiko, S. Shylo, A. Svyrydenko, Development of the modified methods to train a neural network to solve the task on recognition of road users, Eastern-European J. Enterp. Technol. 2.9 (98) (2019) 46–55. doi:10.15587/1729-4061.2019.164789.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] N. A. Afifie, A. W. Y. Khang, A. S. Bin Ja'afar, A. F. B. M. Amin, J. A. J. Alsayaydehahmad, W. A. Indra, S. G. Herawan, A. B. Ramli, Evaluation Method of Mesh Protocol over ESP32 and ESP8266, Baghdad Sci. J. 18.4 (Suppl.) (2021) 1397. doi:10.21123/bsj.2021.18.4(suppl.).1397.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>