<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Automatic Creation of Masks for Marking Histological Images of the Epithelium of the Paranasal Sinuses</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Alina Nechyporenko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yevhen Hubarenko</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Victoriia Alekseeva</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vitaliy Gargin</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Maryna Hubarenko</string-name>
          <email>maryna.gubarenko@nure.ua</email>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Violeta Kalnytska</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Division Molecular Biotechnology and Functional Genomics Technical University of Applied Sciences</institution>
          ,
          <addr-line>Hochschulring 1, Wildau, 15745</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Kharkiv International Medical University</institution>
          ,
          <addr-line>38 Molochna str., Kharkiv, 61001</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Kharkiv National Medical University</institution>
          ,
          <addr-line>4 Nauky Avenue, Kharkiv, 61000</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Kharkiv National University of Radio Electronics</institution>
          ,
          <addr-line>14 Nauky Avenue, Kharkiv, 61116</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The article discusses an approach to reducing the time spent on preparing medical images for training neural networks by reducing the time needed to create masks for images. The task is considered on the example of processing images of the mucous membrane of the paranasal sinus. The specifics of the task did not allow existing software solutions to be used effectively. In the course of the study, a software solution was proposed that made it possible to radically reduce the time of creating masks for images. The article also analyzes the shortcomings of automated mask creation, as well as directions for addressing them. With an improved interface, the time lost to adjusting the color palette can be reduced further, to 1-2 minutes, with an average deviation of 7.61%.</p>
      </abstract>
      <kwd-group>
        <kwd>Neural networks</kwd>
        <kwd>masks</kwd>
        <kwd>microscopic images</kwd>
        <kwd>epithelium</kwd>
        <kwd>inflammatory changes</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The results of the study in many sectors of medicine are based on a detailed study of images [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
Moreover, the correct assessment of the obtained data often depends on a large number of indicators
[
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. The need to evaluate medical images today is one of the priority tasks in radiology,
pathomorphology, dentistry, otolaryngology and many other medical specialties [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ]. Such
assessment and interpretation of the results are most often carried out manually. The load on
medical personnel is known to increase daily, which can underlie errors at all stages of
research and therefore lead to misdiagnosis and the selection of inadequate therapy or unsuitable
physical training [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ]. One of the most important and difficult tasks is the processing of
histological samples, which vary widely in structure and complexity of configuration.
It is often in the study of histological samples that medical workers encounter doubts and obtain erroneous
results. In this regard, the unification of data processing is a crucial task for doctors of any specialty
[
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ]. This investigation is devoted to the study of the mucous membrane of the paranasal sinuses,
which is motivated by the increasing number of diseases of this anatomical region.
      </p>
      <p>
        It is fundamentally important to detect the signs of inflammatory changes, characterized by the
presence of both focal and diffuse clusters of inflammatory cells, with predominance of lymphocytes.
It plays a significant role, especially in pathological conditions, in the presence of concomitant
diseases [
        <xref ref-type="bibr" rid="ref10 ref9">9, 10</xref>
        ]. There is surface damage to epithelial cells, up to the appearance of single erosion
defects. Most epithelial cells of the surface layer are characterized by pronounced vacuolization of
the cytoplasm, which is usually regarded as a manifestation of hydropic dystrophy. The basal
epitheliocytes are tall and narrow, which is apparently a consequence of their proliferative activity.
In addition, moderate edema is noted in all layers of the epithelium. The morphofunctional state of the
microcirculatory channel (MCC) shows a pronounced violation observed in the vessels [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ]. The
vascular system is characterized by uneven blood supply against the background of devastated vessels
with collapsed lumens and the presence of sharply dilated capillaries filled with blood. Mucoid and
fibrinoid swellings are observed; endotheliocytes are more often flattened, with signs of peeling. At
the same time, signs of a sclerotic process are noted in the perivascular space of the lamina propria.
      </p>
      <p>Manual determination of these indicators is sometimes problematic and may be associated with
errors in the interpretation of data.</p>
      <p>
        One of the promising areas that should reduce errors due to the human factor is the use of
knowledge bases, specialized decision support systems (SDSS), neural networks and other
technologies related to artificial intelligence. Thanks to the rapid development of machine
learning technologies, modern SDSSs are now able to take on quite large volumes of
medical image processing and to identify suspected abnormalities. However, in order to fully train a neural
network, it is necessary to compile a relevant DataSet that contains a sufficient number of
variations of abnormalities. Preliminary preparation of such a DataSet implies processing each
image in order to create masks, namely, manually delineating each cell; in one image, the number of
such cells can reach from several tens to several hundred. The routine, uniform, painstaking nature and sheer
volume of such work sharply reduce the quality of the DataSet prepared for training the neural
network, and also reduce the variety of possible variations, which can affect the universality of the
trained network in the future. This study could be useful in other fields of medicine and could be
supplemented [
        <xref ref-type="bibr" rid="ref13 ref14">13, 14</xref>
        ] with other scientific approaches [
        <xref ref-type="bibr" rid="ref15 ref16">15, 16</xref>
        ].
      </p>
      <p>In view of the foregoing, the purpose of the study can be formulated as follows: to develop
a software tool for processing images of the mucous membrane of the paranasal
sinus (stained with hematoxylin-eosin, x400) with automated mask detection, for the subsequent training of
a neural network. The result of such a software tool will be a mask for each image of the
paranasal sinus membrane in the DataSet, which should reduce the overall training
time of the neural network by reducing the time needed for image preparation and improving the quality of
masking. Thus, the goal of our study was to develop an algorithm for
automatically creating masks for evaluating microscopic images of the epithelium of the paranasal
sinuses.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Material and Methods</title>
      <p>For a neural network to perform well, its training must be organized correctly. One
of the ways of learning is to select an object in the image by using a mask or marking areas. Image
Annotation is one of the main tasks in computer vision technology, and, consequently, in the
development of artificial intelligence elements. Annotated images are needed as input for training
neural networks: object recognition in images allows computers to perceive data coming from video
cameras not as a set of pixels, but as a collection of objects and processes.</p>
      <p>Manual labeling of objects in images is a time-consuming and rather costly task, especially if it is
necessary to label large data sets. To train a neural network, the minimum DataSet should contain
several dozen unique images; for comfortable work, the DataSet size should be hundreds or
thousands of images.</p>
      <p>Automatic Image Annotation is a process in which a computer automatically assigns metadata to a
digital image.</p>
    </sec>
    <sec id="sec-3">
      <title>2.1. Overview of software for image labelling</title>
      <p>Neural network-based markup tools are used to select objects much faster and more efficiently,
process a much larger number of images, automate the bulk of manual tasks, and they can be
additionally trained to recognize new images more accurately.</p>
    </sec>
    <sec id="sec-4">
      <title>2.1.1. Open source tools for Image Annotation</title>
      <p>LabelImg is a free graphical image labeling tool written in Python that is used to highlight objects
in an image. Annotations can be saved as XML files in PASCAL VOC/YOLO format. LabelImg can
be used to create bounding boxes for labeling objects in the Qt GUI.</p>
      <p>CVAT is a free and open source tool for labeling digital images and videos and easily preparing
datasets for computer vision algorithms. It marks up data for several machine learning tasks: object
recognition, image classification, and image segmentation. CVAT supports a number of additional
components: Deep Learning Deployment Toolkit (a component of OpenVINO), NVIDIA CUDA
Toolkit, TensorFlow Object Detection API, and others.</p>
      <p>Auto_Annotate is a 2D markup tool based on a neural network. It is an open source solution for automated
image tagging. A Python class called “generate XML” runs images through pretrained model inference
to determine the positions of the bounding boxes. The script also uses the TensorFlow repository for
training. The resulting images (with markup in the form of bounding boxes) and XML can then be
opened in LabelImg.</p>
      <p>Labelme is a GUI labeling software. It is written in Python and the GUI uses Qt (PyQt). Labelme
may label image data in various forms. Labelme stores label information in JSON files. Labelme
software designates an image in the form of rectangles, circles, polygons, line segments and points. It
can be used for target detection, image segmentation, image classification, video annotation, VOC
and COCO data set generation.</p>
    </sec>
    <sec id="sec-5">
      <title>2.1.2. Commercial tools for Image Annotation</title>
      <p>Hasty.ai is automatic data labeling with Artificial Intelligence (AI). The platform offers several
AI-based annotation tools (DEXTR, classification prediction, object detection and segmentation
assistant, etc.) along with manual markup tools. The automatically drawn contours of objects can be
corrected manually to improve the accuracy and quality of the markup.</p>
      <p>V7 Darwin is a per-pixel markup of images based on a neural network. It is an automated
AI-based markup tool that works with all data and automatically generates polygonal and pixel-by-pixel
masks. It is possible to set the area for recognition – the deep learning algorithm will determine the
most noticeable object or its visible part and apply markup.</p>
      <p>Dataloop is a markup of large data arrays. It is a cloud-based annotation platform consisting of a
variety of applications to automate the data preparation process for retail, robotics, autonomous
vehicles, precision agriculture, and more. Dataloop markup tools work with all kinds of images
(pictures, videos). It is possible to integrate deep learning models and automate the markup process
using pre-trained classes. The data markup specialist then only checks the accuracy of the contours
and makes the necessary changes, which speeds up the annotation process.</p>
    </sec>
    <sec id="sec-6">
      <title>2.2. Descriptions of images that populate the Dataset</title>
      <p>
        The study involved 25 male and female subjects who were distributed by sex and age according to
the recommendations of the World Health Organization (WHO). The age of patients ranged from 20
to 74 years, due to the maximum prevalence of chronic diseases of the nasal cavity in this age group.
All subjects were patients of the department of otorhinolaryngology of Municipal Non-Profit
Enterprise of Kharkiv Regional Council “Center of Emergency Medical Care and Disaster Medicine”.
The participants were diagnosed with chronic polypous rhinosinusitis and underwent surgical
treatment in the scope of functional endoscopic rhinosurgery, which, according to the latest EPOS
recommendations, is the gold standard for the treatment of chronic polypous rhinosinusitis. The 25
samples covered all possible cases of chronic polypous rhinosinusitis. In the course of surgical
treatment, polyposis formations were removed and the natural anastomosis was expanded, which
made it possible to obtain and examine histological samples. The study was approved by the Bioethics
Committee of Kharkiv National University in accordance with the Helsinki Declaration [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ],
European Convention for the protection of vertebrate animals (18.03.1986), European Economic
Society Council Directive on the Protection of Vertebrate Animals (24.11.1986). All patients signed a
voluntary informed consent to participate in the study.
      </p>
      <p>The specimens of soft tissues of the paranasal sinuses were stained with hematoxylin and eosin
after routine processing. The microscopic study was performed on an “Olympus BX-41”
microscope with subsequent processing by “Olympus DP-soft version 3.2” software. Morphometric
studies were performed in the zone of the ostiomeatal complex, which was chosen for morphological
interpretation.</p>
      <p>The lamina propria of the mucous membrane of the paranasal sinuses consists of a papillary layer
located under the epithelium, which is represented by loose connective tissue, and a deeper reticular
layer, with coarser connective tissue fibers. Surface fibers are thin, delicate and sinuous. They form
the basement membrane and stroma network. Between the fibers, single cellular elements are
detected, among which plasmocytes, macrophages, and tissue basophils predominate. Fibroblasts,
histiocytes, lymphocytes are rare. Vascular bed has uniform blood supply, endotheliocytes with
hyperchromic nuclei, are large (see Fig. 1).</p>
    </sec>
    <sec id="sec-7">
      <title>2.3. Mask generation</title>
      <sec id="sec-7-1">
        <title>Existing image markup algorithms can be divided into two categories:</title>
        <p>– model-based learning methods explore the correlation between visual features and their
semantic meaning to discover display features, using machine learning or knowledge representation
models for image labeling;
– database-driven models immediately produce a sequence of likely labels in accordance with
the images already annotated in the database.</p>
        <p>As can be seen from the description of alternative software products and analysis of an example
image that will be assessed, it is necessary either to pre-train the neural network to determine the
masks, or to use the neural network to determine the universal contours or boundaries of objects with
further retraining of the network with reference to new objects and user adjustment.</p>
        <p>Pre-training a neural network defeats the purpose of the training procedure itself, because such a
network has, in fact, already been trained, and there is no need to train a network that duplicates its
properties. In addition, to train a preliminary neural network, it is necessary to prepare the same
images and create masks for them.</p>
        <p>The second approach makes it necessary to select each object. It saves a lot of time, since there is no
need to create an elliptical cell outline manually; however, each cell still has to be selected and, if
necessary, each object adjusted. Unfortunately, the objects are very small, and their placement density
causes many difficulties, which can reduce the advantage of such software to zero, although under
other conditions the use of such tools is highly justified.</p>
        <p>Another feature of the task of recognizing a cell mask is that it is necessary to
detect a homogeneous convex ellipse-like object of varying size, which may differ in shape –
although for the most part it is still an ellipse – and in saturation: a cell can have a more or less
saturated color.</p>
      </sec>
    </sec>
    <sec id="sec-8">
      <title>2.4. Methods and algorithm</title>
      <p>The image processing and mask creation method can be represented as a sequence of the following
steps:
1. Determination of the preliminary characteristics of the image: resolution, color palette;
2. Setting the conditions for matching a pixel to a mask;
3. Iterating over all pixels of the image;
4. If the pixel meets the specified conditions, then it belongs to the mask, otherwise it is excluded
from consideration (see Fig. 2).</p>
      <p>In other words, to create masks for incoming images, it is necessary to set a color interval in
RGB format. Next, the program compares the color of each pixel with the specified interval. If
necessary, the accuracy can be improved by correcting the specified interval. In this way, we
obtain a simple method for creating a mask that requires minimal time compared to manual
marking.</p>
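      <p>The interval check described above can be sketched in a few lines of Python (an illustrative sketch, not the authors' code; the interval bounds here are hypothetical and in practice are set by the user for each image):</p>

```python
def in_interval(pixel, lo, hi):
    """True if every RGB channel of `pixel` lies inside [lo, hi]."""
    return all(lo[c] <= pixel[c] <= hi[c] for c in range(3))

def make_mask(pixels, lo, hi):
    """pixels: a 2-D list of (R, G, B) tuples.
    Returns a 2-D list of 0/1, where 1 marks pixels that belong to the mask."""
    return [[1 if in_interval(p, lo, hi) else 0 for p in row]
            for row in pixels]
```

      <p>For hematoxylin-eosin staining, the interval would be chosen to select the violet shades of the cell nuclei; if the mask is inaccurate, the bounds are simply corrected and the mask is rebuilt.</p>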
      <p>This method also has an improvement that makes the cell markings
smoother and closer to their native shape. It is obtained through the possibility of drawing a proper
ellipse (circle). The center of the ellipse lies at the current pixel, the color of the ellipse satisfies the
specified interval, and the radius of the ellipse is the minimum that does not create a large error (in
our case it was only 2 pixels).</p>
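      <p>If the refinement is read as stamping a small disc onto the mask around every matching pixel (one plausible reading of the description above, not necessarily the authors' exact procedure), it can be sketched as:</p>

```python
def stamp_discs(mask, radius=2):
    """Smooth a binary mask (2-D list of 0/1) by drawing a filled
    circle of the given radius around every pixel already in the mask."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                # mark every pixel within `radius` of (x, y)
                for dy in range(-radius, radius + 1):
                    for dx in range(-radius, radius + 1):
                        if dx * dx + dy * dy <= radius * radius:
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < h and 0 <= nx < w:
                                out[ny][nx] = 1
    return out
```

      <p>A radius of 2 pixels, as recommended above, closes single-pixel gaps along a cell contour without noticeably inflating the outline.</p>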
      <p>By changing these parameters (color interval and radius), it was possible to achieve more
effective results.</p>
      <p>The disadvantage of this algorithm is that it is applicable only when the object has a distinct
color signature, so the possibility of other objects of the same color appearing is excluded.
Consequently, the scope of application of the developed software is mostly the labeling of cell
images.</p>
      <sec id="sec-8-1">
        <title>Software implementation</title>
        <p>The software was developed in the Python language, which offers a number of libraries that
simplify image processing and the analysis of the obtained results. Images were loaded
and processed using the Pillow library. The program iterates over all the pixels of the
image; the aim of this traversal is to find the pixels that satisfy the condition and to draw the
corresponding ellipses. The program can also output the final binary mask, or apply the resulting
mask to the source image in order to analyze the result and check the effectiveness of the
parameter selection.</p>
        <p>This tool does not require prior training, which greatly simplifies its use and eliminates the need
for a large number of images. For example, more than 500 labeled images are usually required for
image segmentation, and their manual labeling demands a significant expenditure of time and
human resources. With the help of this application, labeling an image takes no more than a few
minutes. The recommended ellipse radius is 2 pixels; this radius was determined experimentally and may
differ for different tasks.</p>
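        <p>The overlay step mentioned above might look as follows with Pillow (a sketch under the assumption that the mask is stored as a 1-bit Pillow image; the highlight color is arbitrary):</p>

```python
from PIL import Image

def overlay_mask(image, mask, color=(255, 0, 0)):
    """Apply a binary mask (mode "1") to the source image so that the
    effectiveness of the parameter selection can be inspected visually."""
    rgb = image.convert("RGB")
    tint = Image.new("RGB", rgb.size, color)
    # Image.composite takes `tint` where the mask is set, `rgb` elsewhere.
    return Image.composite(tint, rgb, mask)
```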
      </sec>
    </sec>
    <sec id="sec-9">
      <title>3. Results</title>
      <p>Figure 5 shows the process of compiling an image mask for training a neural network using the
developed software tool with the modernized algorithm: a – the original image, b – labeling by a
specialist, c – the resulting mask.
Masks were made for 25 images. The results for the various methods of obtaining image masks are shown
in Fig. 6.</p>
      <p>The diagram of the algorithm for creating an image mask (Fig. 2) shows the algorithm of the
software tool, however, the block “Does the pixel meet the condition?” needs further clarification.
Based on the characteristics of the task and the images that form the DataSet, restrictions are imposed
on the values that the RGB (red, green, blue) channel parameters can take.</p>
      <p>Therefore, if the pixel corresponds to the interval for each of the channels, then the pixel belongs
to the mask, otherwise it is ignored. The intervals for each RGB channel are user-defined. For each
image they are different. However, due to the peculiarities of the task and images, the differences are
insignificant, and in fact the deviations are less than 15-20% for each of the channels.</p>
      <p>
        This study is promising for the detection of cell elements in the different physiological [
        <xref ref-type="bibr" rid="ref17">17, 18</xref>
        ]
and pathological conditions [19, 20]. In the future, it can probably be combined with new scientific
approaches [21-23] for medical specialists in the described area [24, 25].
      </p>
      <p>Table 2 summarizes the results of the masking process study. Column 1 indicates the methods for
obtaining masks: two programs that are usually used to create masks, the developed
software, and its modernized version. The second column displays the total time it took to create masks for
25 images, the time is given in minutes. Column 3 provides the average time spent processing one image,
time given in minutes. Column 4 is the share of the reference value, in % (the result of working with
Labelme, listed in the first position, is taken as the reference), clearly showing the savings in time and
resources. Column 5 is the average discrepancy between the masks: the masks are compared pixel by
pixel, and in case of a mismatch such a pixel is counted as a discrepancy pixel; the
total number of discrepancy pixels is then divided by the total number of image pixels. The result
was averaged over all 25 images and is given as a percentage. However, there were controversial
situations when the masks differed.</p>
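      <p>The discrepancy measure used in Column 5 can be written out directly (a sketch; representing each mask as a 2-D list of 0/1 is an assumption of this illustration):</p>

```python
def discrepancy(mask_a, mask_b):
    """Pixel-by-pixel comparison of two binary masks of equal size:
    the share of mismatching pixels in the total number of pixels, in %."""
    total = sum(len(row) for row in mask_a)
    mismatch = sum(a != b
                   for row_a, row_b in zip(mask_a, mask_b)
                   for a, b in zip(row_a, row_b))
    return 100.0 * mismatch / total
```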
      <p>Table 2
Comparing the results of creating masks for the images</p>
      <p>In such cases, the mask produced by the proposed software was of better quality, or it was not
possible to unequivocally classify the pixel as erroneous. Also, for the programmatic creation of masks,
an effect of lone pixels is observed. This shortcoming can be overcome by additional image processing
that discards pixels that have no other mask pixels next to them.</p>
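      <p>The proposed filtering of lone pixels could be implemented as follows (a sketch assuming an 8-connected neighborhood; not the authors' implementation):</p>

```python
def drop_lone_pixels(mask):
    """Remove isolated pixels from a binary mask (2-D list of 0/1): a
    pixel is kept only if at least one of its 8 neighbours is also set."""
    h, w = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                has_neighbour = any(
                    mask[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))
                    if (ny, nx) != (y, x))
                if not has_neighbour:
                    out[y][x] = 0
    return out
```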
      <p>The operation of this algorithm will increase the duration of image processing by 2-10 seconds,
depending on the image size. Examples of such situations are shown in Fig. 7.</p>
      <p>Therefore, it was decided to add a 6th column, which indicates unambiguously negative
discrepancies: pixels that were unequivocally determined by the specialist to be errors. The
results are given as an average value. There were 7 images in which the specialist found no
errors, and only 2 images gave a rather high error rate, of 5.9% and 7.8%; the rest gave less than
1%, which most likely indicates an incorrectly selected color palette for those images. Approximately
the same situation was observed with the modernized algorithm: only 3 images exceeded the
threshold of 1% discrepancy.</p>
      <p>For comparison, in manual mode one image is processed for about 5 hours, whereas with the
software, taking into account the manual adjustment of the color palette, it took 3-5 minutes, given that
there was no interface and changes had to be made directly in the code. If the interface is improved,
the time lost to adjusting the color palette can be reduced further, to 1-2
minutes, with an average deviation of 7.61%.</p>
    </sec>
    <sec id="sec-10">
      <title>4. Conclusions</title>
      <p>In the course of the study, a basic algorithm was developed and put into practice for automatically
creating masks for marking microscopic images of the epithelium of the paranasal sinuses. This
method is distinguished by its information content and accuracy, and it can greatly reduce the time it
takes to process medical images. With an improved interface, the time lost to adjusting the color
palette can be reduced further, to 1-2 minutes, with an average deviation of 7.61%.</p>
    </sec>
    <sec id="sec-11">
      <title>5. References</title>
      <p>[18] A. S. Nechyporenko, V. V. Alekseeva, L. V. Sychova, V. M. Cheverda, N. O. Yurevych and V. V.
Gargin, "Anatomical prerequisites for the development of rhinosinusitis", Lekarsky Obzor, vol. 6, no. 10,
pp. 334-338, 2020.
[19] R. Radutniy, A. Nechyporenko, V. Alekseeva, G. Titova, D. Bibik and V. Gargin, "Automated
Measurement of Bone Thickness on SCT Sections and Other Images", 2020 IEEE Third International
Conference on Data Stream Mining &amp; Processing (DSMP), 2020. doi:
10.1109/dsmp47368.2020.9204289.
[20] A. Nechyporenko et al., "Comparative Characteristics of the Anatomical Structures of the Ostiomeatal
Complex Obtained by 3D Modeling", 2020 IEEE International Conference on Problems of
Infocommunications. Science and Technology (PIC S&amp;T), 2020. doi:
10.1109/picst51311.2020.9468111.
[21] V. Kovtun, I. Izonin and M. Gregus, "Formalization of the metric of parameters for quality evaluation
of the subject-system interaction session in the 5G-IoT ecosystem", Alexandria Engineering Journal,
vol. 61, no. 10, pp. 7941-7952, 2022. doi: 10.1016/j.aej.2022.01.054.
[22] I. Izonin, R. Tkachenko, Z. Duriagina, N. Shakhovska, V. Kovtun and N. Lotoshynska, "Smart Web
Service of Ti-Based Alloy’s Quality Evaluation for Medical Implants Manufacturing", Applied
Sciences, vol. 12, no. 10, p. 5238, 2022. doi: 10.3390/app12105238.
[23] D. Chumachenko, "On Intelligent Multiagent Approach to Viral Hepatitis B Epidemic Processes
Simulation", 2018 IEEE Second International Conference on Data Stream Mining &amp; Processing
(DSMP), 2018. doi: 10.1109/dsmp.2018.8478602.
[24] Y. Kuzenko, O. Mykhno, V. Sikora, V. Bida and O. Bida, "Dental terminology 'discoloration' or
'pigment dystrophy' - a review and practical recommendations", Pol Merkur Lekarski, vol. 50, no. 295,
pp. 65-67, 2022.
[25] Y. Kuzenko, A. Romanyuk, A. Politun and L. Karpenko, "S100, bcl2 and myeloperoxidase protein
expressions during periodontal inflammation", BMC Oral Health, vol. 15, p. 93, 2015. doi:
10.1186/s12903-015-0077-8.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>O.</given-names>
            <surname>Elemento</surname>
          </string-name>
          , “
          <article-title>The future of precision medicine: towards a more predictive personalized medicine</article-title>
          ,”
          <source>Emerging Topics in Life Sciences</source>
          , vol.
          <volume>4</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>175</fpage>
          -
          <lpage>177</lpage>
          ,
          <year>2020</year>
          , doi: 10.1042/etls20190197.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>H.</given-names>
            <surname>Abdelhalim</surname>
          </string-name>
          et al.,
          <article-title>"Artificial Intelligence, Healthcare, Clinical Genomics, and Pharmacogenomics Approaches in Precision Medicine"</article-title>
          ,
          <source>Frontiers in Genetics</source>
          , vol.
          <volume>13</volume>
          ,
          <year>2022</year>
          , doi: 10.3389/fgene.2022.929736.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>C.</given-names>
            <surname>Bales</surname>
          </string-name>
          et al., “
          <article-title>Can machine learning be used to recognize and diagnose coughs?</article-title>
          ” In: 2020 8th E-Health and Bioengineering Conference (EHB),
          <year>2020</year>
          , doi: 10.1109/EHB50910.2020.92801.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>C.</given-names>
            <surname>Zhang</surname>
          </string-name>
          et al.,
          <article-title>"Correction of out-of-focus microscopic images by deep learning"</article-title>
          ,
          <source>Computational and Structural Biotechnology Journal</source>
          , vol.
          <volume>20</volume>
          , pp.
          <fpage>1957</fpage>
          -
          <lpage>1966</lpage>
          ,
          <year>2022</year>
          , doi: 10.1016/j.csbj.2022.04.003.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A. E.</given-names>
            <surname>Listyarini</surname>
          </string-name>
          , “
          <article-title>The Relations of Using Digital Media and Physical Activity with the Physical Fitness of 4th and 5th Grade Primary School Students</article-title>
          ”,
          <source>Physical Education Theory and Methodology</source>
          , vol.
          <volume>21</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>281</fpage>
          -
          <lpage>287</lpage>
          ,
          <year>2021</year>
          , doi: 10.17309/tmfv.2021.3.12.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>W. S. A.</given-names>
            <surname>Al Attar</surname>
          </string-name>
          , “
          <article-title>The Current Implementation of an Evidence-Based Hamstring Injury Prevention Exercise (Nordic Hamstring Exercise) among Athletes Globally</article-title>
          ”,
          <source>Physical Education Theory and Methodology</source>
          , vol.
          <volume>21</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>273</fpage>
          -
          <lpage>280</lpage>
          ,
          <year>2021</year>
          , doi: 10.17309/tmfv.2021.3.11.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>R.</given-names>
            <surname>Nazaryan</surname>
          </string-name>
          and
          <string-name>
            <given-names>L.</given-names>
            <surname>Kryvenko</surname>
          </string-name>
          , “
          <article-title>Salivary oxidative analysis and periodontal status in children with atopy</article-title>
          ”,
          <source>Interventional Medicine and Applied Science</source>
          , vol.
          <volume>9</volume>
          , no.
          <issue>4</issue>
          , pp.
          <fpage>199</fpage>
          -
          <lpage>203</lpage>
          ,
          <year>2017</year>
          , doi: 10.1556/1646.9.2017.32.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>N.</given-names>
            <surname>Gutarova</surname>
          </string-name>
          et al.,
          <article-title>“Features of the morphological state of bone tissue of the lower wall of the maxillary sinus with the use of fixed orthodontic appliances”</article-title>
          ,
          <source>Pol Merkur Lekarski</source>
          , vol.
          <volume>49</volume>
          , no.
          <issue>286</issue>
          , pp.
          <fpage>232</fpage>
          -
          <lpage>235</lpage>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>L.</given-names>
            <surname>Shepherd</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Á.</given-names>
            <surname>Borges</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Ledergerber</surname>
          </string-name>
          , et al., “
          <article-title>Infection-related and -unrelated malignancies, HIV and the aging population</article-title>
          ”,
          <source>HIV Med</source>
          , vol.
          <volume>17</volume>
          , no.
          <issue>8</issue>
          , pp.
          <fpage>590</fpage>
          -
          <lpage>600</lpage>
          ,
          <year>2016</year>
          , doi: 10.1111/hiv.12359.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>P.</given-names>
            <surname>Myronov</surname>
          </string-name>
          , “
          <article-title>Low-frequency ultrasound increase effectiveness of silver nanoparticles in a purulent wound model</article-title>
          ”,
          <source>Biomedical Engineering Letters</source>
          , vol.
          <volume>10</volume>
          , no.
          <issue>4</issue>
          , pp.
          <fpage>621</fpage>
          -
          <lpage>631</lpage>
          ,
          <year>2020</year>
          , doi: 10.1007/s13534-020-00174-5.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>V.</given-names>
            <surname>Shevchuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Odushkina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Mikulinska-Rudich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Mys</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Nazaryan</surname>
          </string-name>
          , “
          <article-title>A method of increasing the effectiveness of antibacterial therapy with ceftriaxone in the complex treatment of inflammatory diseases of the maxillofacial area in children”</article-title>
          ,
          <source>Pharmacologyonline</source>
          , vol.
          <volume>3</volume>
          , pp.
          <fpage>652</fpage>
          -
          <lpage>62</lpage>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yaroslavska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Mikhailenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Kuzina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Sychova</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Nazaryan</surname>
          </string-name>
          , “
          <article-title>Antibiotic therapy in the complex pathogenic treatment of patients with sialolithiasis in the stage of exacerbation of chronic sialoadenitis”</article-title>
          ,
          <source>Pharmacologyonline</source>
          , vol.
          <volume>3</volume>
          , pp.
          <fpage>624</fpage>
          -
          <lpage>31</lpage>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A.</given-names>
            <surname>Nechyporenko</surname>
          </string-name>
          et al.,
          <article-title>“Application of spiral computed tomography for determination of the minimal bone density variability of the maxillary sinus walls in chronic odontogenic and rhinogenic sinusitis”</article-title>
          ,
          <source>Ukr J Radiol Oncol</source>
          , vol.
          <volume>29</volume>
          , no.
          <issue>4</issue>
          , pp.
          <fpage>65</fpage>
          -
          <lpage>75</lpage>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Polyvianna</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Chumachenko</surname>
          </string-name>
          and
          <string-name>
            <given-names>T.</given-names>
            <surname>Chumachenko</surname>
          </string-name>
          , “
          <article-title>Computer aided system of time series analysis methods for forecasting the epidemics outbreaks</article-title>
          ”,
          <source>2019 15th International Conference on the Experience of Designing and Application of CAD Systems (CADSM)</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>4</lpage>
          , doi: 10.1109/CADSM.2019.8779344
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>S.</given-names>
            <surname>Yakovlev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Bazilevych</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Chumachenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Chumachenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Hulianytskyi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Meniailov</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Tkachenko</surname>
          </string-name>
          , “
          <article-title>The concept of developing a decision support system epidemic morbidity control</article-title>
          ”,
          <source>CEUR Workshop Proceedings</source>
          , vol.
          <volume>2753</volume>
          ,
          <year>2020</year>
          , pp.
          <fpage>265</fpage>
          -
          <lpage>274</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>D.</given-names>
            <surname>Chumachenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Balitskii</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Chumachenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Makarova</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Railian</surname>
          </string-name>
          , “
          <article-title>Intelligent expert system of knowledge examination of medical staff regarding infections associated with the provision of medical care</article-title>
          ”,
          <source>CEUR Workshop Proceedings</source>
          , vol.
          <volume>2386</volume>
          ,
          <year>2019</year>
          , pp.
          <fpage>321</fpage>
          -
          <lpage>330</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>V.</given-names>
            <surname>Gargin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Radutny</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Titova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Bibik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kirichenko</surname>
          </string-name>
          and
          <string-name>
            <given-names>O.</given-names>
            <surname>Bazhenov</surname>
          </string-name>
          ,
          <article-title>"Application of the computer vision system for evaluation of pathomorphological images"</article-title>
          ,
          <source>2020 IEEE 40th International Conference on Electronics and Nanotechnology (ELNANO)</source>
          ,
          <year>2020</year>
          , doi: 10.1109/elnano50318.2020.9088898.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>