Automatic Creation of Masks for Marking Histological Images of
the Epithelium of the Paranasal Sinuses.
Alina Nechyporenko a,b, Yevhen Hubarenko b, Maryna Hubarenko b, Violeta Kalnytska c,
Victoriia Alekseeva c,d and Vitaliy Gargin c,d
a Division Molecular Biotechnology and Functional Genomics, Technical University of Applied Sciences, Hochschulring 1, Wildau, 15745, Germany
b Kharkiv National University of Radio Electronics, 14 Nauky Avenue, Kharkiv, 61116, Ukraine
c Kharkiv National Medical University, 4 Nauky Avenue, Kharkiv, 61000, Ukraine
d Kharkiv International Medical University, 38 Molochna str., Kharkiv, 61001, Ukraine

              Abstract
              The article discusses an approach to reducing the time spent on preparing medical images for
              training neural networks by shortening the time needed to create image masks. The task is
              considered on the example of processing images of the mucous membrane of the paranasal
              sinus. The specifics of the task did not allow existing software solutions to be used effectively.
              In the course of the study, a software solution was proposed that radically reduces the time of
              creating masks for images. The article also analyzes the shortcomings of automated mask
              creation, as well as possible ways to address them. With an improved interface, the time lost
              on adjusting the color palette can be reduced further, to 1-2 minutes per image, with an
              average deviation of 7.61%.

              Keywords
              Neural networks, masks, microscopic images, epithelium, inflammatory changes.

1. Introduction

    Research results in many branches of medicine are based on a detailed study of images [1].
Moreover, the correct assessment of the obtained data often depends on a large number of indicators
[2]. The need to evaluate medical images is today one of the priority tasks in radiology,
pathomorphology, dentistry, otolaryngology and many other medical specialties [3, 4]. Such
assessment and interpretation of the results are most often carried out manually. The load on
medical personnel is known to increase on a daily basis, which can underlie errors at all stages of
research and therefore lead to misdiagnosis and the selection of inadequate therapy or an
inappropriate physical training regimen [5, 6]. One of the most important and difficult tasks is the
processing of histological samples, which vary widely in structure and in the complexity of their
configuration. It is precisely in the study of histological samples that medical workers often have
doubts and obtain erroneous results. In this regard, the unification of data processing is a crucial task
for doctors of any specialty [7, 8]. This investigation is devoted to the study of the mucous membrane
of the paranasal sinuses, a choice motivated by the increasing number of diseases of this anatomical region.
    It is fundamentally important to detect the signs of inflammatory changes, characterized by the
presence of both focal and diffuse clusters of inflammatory cells with a predominance of lymphocytes.
This plays a significant role especially in pathological conditions in the presence of concomitant diseases [9, 10].


IDDM-2022: 5th International Conference on Informatics & Data-Driven Medicine, November 18–20, 2022,
Lyon, France
EMAIL: alinanechyporenko@mail.com; eugen.gubarenko@nure.ua; maryna.gubarenko@nure.ua;
violetta.kalnitskaya@gmail.com; vik13052130@i.ua; vitgarg@ukr.net
ORCID: 0000-0002-4501-7426; 0000-0001-8564-8487; 0000-0001-8719-7915; 0000-0003-4221-2610;
0000-0001-5272-8704; 0000-0001-8194-4019
         © 2022 Copyright for this paper by its authors.
         Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
         CEUR Workshop Proceedings (CEUR-WS.org)
There is surface damage to epithelial cells, up to the appearance of single erosive defects. Most
epithelial cells of the surface layer are characterized by pronounced vacuolization of the cytoplasm,
which is usually regarded as a manifestation of hydropic dystrophy. The basal epitheliocytes are tall
and narrow, which is apparently a consequence of their proliferative activity. In addition, moderate
edema is noted in all layers of the epithelium. Pronounced disturbances of the morphofunctional state
of the microcirculatory channel (MCC) are observed in the vessels [11, 12]. The vascular system is
characterized by uneven blood supply against the background of emptied vessels with collapsed
lumens and the presence of sharply dilated, blood-filled capillaries. Mucoid and fibrinoid swellings
are observed; endotheliocytes are more often flattened, with signs of desquamation. At the same time,
signs of a sclerotic process are noted in the perivascular space of the lamina propria.
    Manual determination of these indicators is sometimes problematic and may be associated with
errors in the interpretation of data.
    One of the promising areas that should reduce errors due to the human factor is the use of
knowledge bases, specialized decision support systems (SDSS), neural networks and other
technologies related to artificial intelligence. Thanks to the rapid development of machine learning
technologies, modern SDSSs are able to process quite large volumes of medical images and identify
suspected abnormalities. However, in order to fully train a neural network, it is necessary to assemble
a relevant DataSet that contains a sufficient number of variants of abnormalities. Preliminary
preparation of such a DataSet implies processing each image in order to create a mask, namely,
manually delineating each cell; in one image, the number of such cells can reach from several tens to
several hundred. The routine, monotonous and painstaking nature of such work, and its sheer volume,
sharply reduce the quality of the DataSet prepared for training the neural network, and also reduce the
variety of possible variations, which can affect the universality of the trained network in the future.
This study could be useful in other fields of medicine and could be complemented [13, 14] by other
scientific approaches [15, 16].
    In connection with the foregoing, the purpose of the study can be formulated as follows: to
develop a software tool for processing images of the mucous membrane of the paranasal sinus
(stained with hematoxylin-eosin, x400) with automated mask detection, for further training of a
neural network. The result of such a software tool will be a mask for each image of the paranasal
sinus membrane in the DataSet, which should reduce the overall training time of the neural network
by reducing the time spent on image preparation and by improving the quality of masking.
Considering all of the above, the goal of our study was to develop an algorithm for automatically
creating masks for evaluating microscopic images of the epithelium of the paranasal sinuses.


2. Material and Methods
   For a neural network to perform correctly, its training must be organized correctly. One of the
ways of training is to select objects in the image by using a mask or by marking areas. Image
Annotation is one of the main tasks in computer vision technology and, consequently, in the
development of artificial intelligence elements. Annotated images are needed as input for training
neural networks: object recognition in images allows computers to perceive data coming from video
cameras not as a set of pixels, but as a collection of objects and processes.
   Manual labeling of objects in images is a time-consuming and rather costly task, especially when
large data sets have to be labeled. To train a neural network, the DataSet should contain at least
several dozen unique images; for comfortable work, its size should reach hundreds or thousands of
images.
   Automatic Image Annotation is a process in which a computer automatically assigns metadata to a
digital image.

   2.1.Overview of software for image labelling
   Neural network-based markup tools are used to select objects much faster and more efficiently,
process a much larger number of images, automate the bulk of manual tasks, and they can be
additionally trained to recognize new images more accurately.


        2.1.1. Open source tools for Image Annotation

  LabelImg is a free graphical image labeling tool written in Python that is used to highlight objects
in an image. Annotations can be saved as XML files in PASCAL VOC/YOLO format. LabelImg can
be used to create bounding boxes for labeling objects in the Qt GUI.
  CVAT is a free and open source tool for labeling digital images and videos and easily preparing
datasets for computer vision algorithms. It marks up data for several machine learning tasks: object
recognition, image classification, and image segmentation. CVAT supports a number of additional
components: Deep Learning Deployment Toolkit (a component of OpenVINO), NVIDIA CUDA
Toolkit, TensorFlow Object Detection API, and others.
  Auto_Annotate is a neural-network-based 2D markup tool and an open source solution for
automated image tagging. A Python class called “generate XML” runs images through pretrained
model inference to determine the positions of the bounding boxes. The script also uses the
TensorFlow repository for training. The resulting images (with markup in the form of bounding
boxes) and the XML files can then be opened in LabelImg.
  Labelme is a GUI labeling tool written in Python; its interface uses Qt (PyQt). Labelme can label
image data in various forms and stores label information in JSON files. It annotates objects in an
image as rectangles, circles, polygons, line segments and points, and can be used for target detection,
image segmentation, image classification, video annotation, and VOC and COCO data set generation.

        2.1.2. Commercial tools for Image Annotation
   Hasty.ai provides automatic data labeling with artificial intelligence (AI). The platform offers several
AI-based annotation tools (DEXTR, classification prediction, object detection and segmentation
assistant, etc.) along with manual markup tools. The automatically drawn contours of objects can be
corrected manually to improve the accuracy and quality of the markup.
   V7 Darwin performs per-pixel markup of images based on a neural network. It is an automated AI-
based markup tool that works with all data and automatically generates polygonal and pixel-by-pixel
masks. It is possible to set the area for recognition – the deep learning algorithm will determine the
most noticeable object or its visible part and apply markup.
   Dataloop is designed for marking up large data arrays. It is a cloud-based annotation platform consisting of a
variety of applications to automate the data preparation process for retail, robotics, autonomous
vehicles, precision agriculture, and more. Dataloop markup tools work with all kinds of images
(pictures, videos). It is possible to integrate deep learning models and automate the markup process
using pre-trained classes. The data markup specialist then only checks the accuracy of the contours
and makes the necessary changes, which speeds up the annotation process.


   2.2.Descriptions of images that populate the Dataset
    The study involved 25 male and female subjects who were distributed by sex and age according to
the recommendations of the World Health Organization (WHO). The age of patients ranged from 20
to 74 years, due to the maximum prevalence of chronic diseases of the nasal cavity in this age group.
All subjects were patients of the department of otorhinolaryngology of Municipal Non-Profit
Enterprise of Kharkiv Regional Council “Center of Emergency Medical Care and Disaster Medicine”.
The participants were diagnosed with chronic polypous rhinosinusitis and underwent surgical
treatment in the scope of functional endoscopic rhinosurgery, which, according to the latest EPOS
recommendations, is the gold standard for the treatment of chronic polypous rhinosinusitis. The 25
samples covered all possible cases of chronic polypous rhinosinusitis. In the course of surgical
treatment, polypous formations were removed and the natural anastomosis was expanded, which
made it possible to obtain and examine histological samples. The study was approved by the Bioethics
Committee of Kharkiv National University in accordance with the Helsinki Declaration [17], the
European Convention for the Protection of Vertebrate Animals (18.03.1986), and the European
Economic Community Council Directive on the Protection of Vertebrate Animals (24.11.1986). All patients signed a
voluntary informed consent to participate in the study.
    The specimens of soft tissues of the paranasal sinuses were stained with hematoxylin and eosin
after routine processing. The microscopic study was performed on an “Olympus BX-41”
microscope with subsequent processing by “Olympus DP-soft version 3.2” software. Morphometric
studies were performed in the zone of the ostiomeatal complex, which was chosen for morphological
interpretation.

Table 1
Distribution of patients by gender and age according to the WHO classification
           Age group (years)                    Female                             Male
              20-44                               3                                2
              45-59                               7                                5
              60-74                               5                                3




Figure 1: Mucous membrane of the paranasal sinus. H&E stain, x400.

   The lamina propria of the mucous membrane of the paranasal sinuses consists of a papillary layer
located under the epithelium, which is represented by loose connective tissue, and a deeper reticular
layer, with coarser connective tissue fibers. Surface fibers are thin, delicate and sinuous. They form
the basement membrane and stroma network. Between the fibers, single cellular elements are
detected, among which plasmocytes, macrophages, and tissue basophils predominate. Fibroblasts,
histiocytes and lymphocytes are rare. The vascular bed has a uniform blood supply; endotheliocytes,
which have hyperchromic nuclei, are large (see Fig. 1).

   2.3.Mask generation

    Existing image markup algorithms can be divided into two categories:
        model-based learning methods explore the correlation between visual features and their
    semantic meaning and use machine learning or knowledge representation models to derive the
    mapping used for image labeling;
        database-driven models immediately produce a sequence of likely labels in accordance with
    the images already annotated in the database.
    As can be seen from the description of alternative software products and analysis of an example
image that will be assessed, it is necessary either to pre-train the neural network to determine the
masks, or to use the neural network to determine the universal contours or boundaries of objects with
further retraining of the network with reference to new objects and user adjustment.
    Pre-training a neural network defeats the purpose of the training procedure itself, because such a
network has, in fact, already been trained, and there is no need to train another network that duplicates
its properties. In addition, to pre-train such a network, it would still be necessary to prepare the same
images and create masks for them.
    The second approach still requires selecting each object. It saves a lot of time, since there is no
need to draw an elliptical cell outline; however, each cell still has to be selected and, if necessary,
each object adjusted. Unfortunately, the objects are very small and their placement density causes
many difficulties, which can reduce the advantage of such software to zero, although under other
conditions the use of such tools is highly justified.
    Another feature of the task of recognizing a cell mask is that it is necessary to detect a
homogeneous convex ellipse-like object of various sizes, which may differ in shape (although for the
most part it is still an ellipse) and in saturation (a cell can have a more or less saturated color).


   2.4.Methods and algorithm
    The image processing and mask creation method can be represented as a sequence of the following
steps:
    1. Determination of the preliminary characteristics of the image: resolution, color palette;
    2. Setting the conditions for matching a pixel to a mask;
    3. Iterating over all pixels of the image;
    4. If the pixel meets the specified conditions, then it belongs to the mask, otherwise it is excluded
from consideration (see Fig. 2).
    In other words, to create masks for incoming images, it is necessary to set a color interval in RGB
format. The program then compares the color of each pixel with the specified interval. If necessary,
the accuracy can be improved by correcting the specified interval. In this way, we obtain a simple
method of producing a mask that requires a minimal amount of time compared to manual marking.
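    As an illustration, a minimal Python sketch of this basic step is given below. It relies on the Pillow
library used by the developed tool; the specific interval values, function names and file names are
hypothetical and serve only as an example of the pixel-by-pixel check, not as the published implementation.

from PIL import Image

# Hypothetical RGB interval (min, max) per channel for "cell-colored" pixels;
# in the study these intervals are set by the user and differ between images.
R_RANGE, G_RANGE, B_RANGE = (90, 170), (40, 120), (110, 190)

def pixel_matches(rgb):
    """Return True if the pixel color falls inside the configured RGB interval."""
    r, g, b = rgb
    return (R_RANGE[0] <= r <= R_RANGE[1]
            and G_RANGE[0] <= g <= G_RANGE[1]
            and B_RANGE[0] <= b <= B_RANGE[1])

def build_mask(path):
    """Iterate over all pixels line by line (as in Fig. 2) and build a binary mask."""
    img = Image.open(path).convert("RGB")
    mask = Image.new("L", img.size, 0)          # 0 = background, 255 = mask
    src, dst = img.load(), mask.load()
    width, height = img.size
    for y in range(height):                     # next line in the image
        for x in range(width):                  # next pixel in the line
            if pixel_matches(src[x, y]):
                dst[x, y] = 255
    return mask

# Hypothetical usage:
# build_mask("sinus_epithelium_001.png").save("sinus_epithelium_001_mask.png")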
    This method also includes an improvement that makes the cell markings smoother and closer to
their native shape. It is achieved by drawing a proper ellipse (circle) whose center lies at the current
pixel, whose color satisfies the specified interval, and whose radius is the smallest one that does not
introduce a large error (in our case only 2 pixels).
    By changing these parameters (the color interval and the radius), it was possible to achieve more
effective results.
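    A possible sketch of this modification, under the same assumptions as above (hypothetical interval
values and names), is shown below: every pixel that satisfies the color condition contributes a small
filled circle to the mask, which smooths the cell contours.

from PIL import Image, ImageDraw

def pixel_matches(rgb, lo=(90, 40, 110), hi=(170, 120, 190)):
    """Hypothetical per-channel interval check (same idea as in the previous sketch)."""
    return all(l <= c <= h for c, l, h in zip(rgb, lo, hi))

def build_mask_with_radius(path, radius=2):
    """Every matching pixel contributes a filled circle of the given radius to the mask."""
    img = Image.open(path).convert("RGB")
    mask = Image.new("L", img.size, 0)
    draw = ImageDraw.Draw(mask)
    src = img.load()
    width, height = img.size
    for y in range(height):
        for x in range(width):
            if pixel_matches(src[x, y]):
                # Filled circle centred on the matching pixel; Pillow clips it
                # automatically at the image borders.
                draw.ellipse((x - radius, y - radius, x + radius, y + radius), fill=255)
    return mask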
    A disadvantage of this algorithm is that it can be used only when the object of interest is clearly
distinguishable by color, that is, when the appearance of other objects of the same color is excluded.
Consequently, the scope of application of the developed software is mostly the labeling of cell
images.

[Flowchart of Figure 2: START → get the image and the algorithm parameters → while there are more
lines in the image, go to the next line; while there is another pixel in the line, check whether the pixel
matches the condition — if yes, the pixel is part of the mask, if no, it is not — then go to the next
pixel → save the mask → END]
Figure 2: Algorithm of mask generation.

   The software was developed in the Python language, which offers a number of libraries that
simplify image processing and the analysis of the obtained results. The images were loaded and
processed using the Pillow library. A scan over all pixels of the image was organized: each pixel of
the current image is analyzed in order to find those that satisfy the condition, and ellipses are drawn
for them. The program also allows drawing the final binary mask, or applying the resulting mask to
the starting image in order to analyze the result and check the effectiveness of the parameter selection.
   This tool does not require prior training, which greatly simplifies its use and eliminates the need
for a large number of images. For example, more than 500 marked images are usually required for
image segmentation, and marking them manually normally demands a significant expenditure of time
and human resources. With the help of this application, processing 500 images takes no more than a
few hours.
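   The following sketch illustrates how such batch processing and visual checking could look. The
folder names are hypothetical, and build_mask() stands for the masking routine sketched in Section
2.4; this is an illustration rather than the exact code of the tool.

import os
from PIL import Image

def overlay(image_path, mask):
    """Tint the masked pixels red on top of the original image for visual checking."""
    img = Image.open(image_path).convert("RGB")
    red = Image.new("RGB", img.size, (255, 0, 0))
    return Image.composite(red, img, mask)      # mask is an "L" image, 255 = masked

def process_folder(src_dir="images", dst_dir="masks"):
    """Build a mask and an overlay for every image in a folder (names are hypothetical)."""
    os.makedirs(dst_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        if not name.lower().endswith((".png", ".jpg", ".tif")):
            continue
        path = os.path.join(src_dir, name)
        mask = build_mask(path)                 # masking routine sketched in Section 2.4
        mask.save(os.path.join(dst_dir, name))
        overlay(path, mask).save(os.path.join(dst_dir, "overlay_" + name))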





Figure 3: Image mask for training dataset.

    Figure 3 shows: a – original image, b – labeling by a specialist, c – resulting mask.





Figure 4: Image mask for training dataset.

   Figure 4 shows the process of compiling an image mask for training a neural network using the
developed software tool: a – the original image, b – labeling by a specialist, c – the resulting mask.
   It is proposed to include in the mask not only the pixel that was identified as suitable, but also the
surrounding pixels. The pixel inclusion radius can be adjusted by the user individually for each image;
however, the recommended radius is 2 pixels. This radius was determined experimentally and may
differ for different tasks.





Figure 5: Comparison of obtained masks using the developed software tool with modernized
algorithm.

   Figure 5 shows the result of obtaining an image mask for training a neural network using the
developed software tool with the modernized algorithm.

3. Results




Figure 6: Comparison of masks: a – mask made manually, b – mask made with the help of the
software, c – mask made with the help of the upgraded software.

    Figure 5 shows the process of compiling an image mask for training a neural network using the
developed software tool with the modernized algorithm: a – the original image, b – labeling by a
specialist, c – the resulting mask.
    Masks were made for 25 images. The results for the various methods of obtaining image masks
are shown in Fig. 6.
    The diagram of the mask-creation algorithm (Fig. 2) shows how the software tool operates;
however, the block “Does the pixel match the condition?” needs further clarification. Based on the
characteristics of the task and of the images that form the DataSet, restrictions are imposed on the
values that the RGB (red, green, blue) channel parameters can take.
    Therefore, if a pixel falls within the interval for each of the channels, it belongs to the mask;
otherwise it is ignored. The intervals for each RGB channel are user-defined and differ from image to
image. However, due to the peculiarities of the task and of the images, the differences are
insignificant; in practice the deviations are less than 15-20% for each of the channels.
    This study is promising for the detection of cellular elements in different physiological [17, 18]
and pathological conditions [19, 20]. In the future it can probably be combined with new scientific
approaches [21-23] for medical specialists in the described area [24, 25].
    Table 2 summarizes the results of the masking study. Column 1 indicates the method for obtaining
the masks: manual marking with a program that is usually used to create masks (Labelme, performed
by two specialists), the developed software, and its modernized version. Column 2 displays the total
time it took to create the masks for the 25 images, in minutes. Column 3 gives the average time spent
processing one image, in minutes. Column 4 is the share of the reference value, % (the result of the
work of specialist 1 with Labelme, given in the first row, is taken as the reference), clearly showing
the savings in time and resources. Column 5 is the average discrepancy between the masks: the masks
are compared pixel by pixel, each mismatching pixel is counted as a discrepancy pixel, and the total
number of discrepancy pixels is divided by the total number of image pixels; the result was averaged
over all 25 images and is given as a percentage. However, there were controversial situations in
which the masks differed.
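    For reference, the divergence measure of column 5 could be computed as in the following minimal
sketch (the file names are hypothetical):

from PIL import Image

def mask_divergence(path_a, path_b):
    """Percentage of pixels on which two binary masks of equal size disagree."""
    a = Image.open(path_a).convert("1")
    b = Image.open(path_b).convert("1")
    assert a.size == b.size, "masks must have the same resolution"
    pa, pb = a.load(), b.load()
    width, height = a.size
    mismatches = sum(
        1 for y in range(height) for x in range(width) if pa[x, y] != pb[x, y]
    )
    return 100.0 * mismatches / (width * height)

# Hypothetical usage:
# mask_divergence("mask_manual_01.png", "mask_software_01.png")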
Table 2
Comparison of the results of creating masks for the images
  Method for obtaining masks        Total for 25   Average for     Share of the       Average          Average
                                    images, min    1 image, min    reference time,    divergence of    negative
                                                                   % (specialist 1,   masks, %         discrepancy, %
                                                                   Labelme = 100%)
              1                          2               3                4                 5                6
  Mask application by
  specialist 1, Labelme software       7020           280.8            100%
  Mask application by
  specialist 2, Labelme software       8950            358            127.49%             0.69%            0.08%
  Software mask application             95             3.8             1.35%              7.61%             1.3%
  Software mask application
  with an upgrade                       120             4.8             1.7%              6.89%             1.16%

  In such cases the mask produced by the proposed software was of better quality, or it was not
possible to unequivocally classify the pixel as erroneous. In addition, with programmatic creation of
masks, the effect of lone pixels is observed. This shortcoming can be overcome by additional image
processing that discards pixels that have no other mask pixels next to them.
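  A minimal sketch of such a clean-up step, assuming an 8-neighbourhood test on the binary mask,
might look as follows; it is an illustration of the suggested post-processing rather than part of the
published tool.

from PIL import Image

def drop_lone_pixels(mask):
    """Return a copy of a binary mask ("L" mode, 255 = mask) without isolated pixels."""
    cleaned = mask.copy()
    src, dst = mask.load(), cleaned.load()
    width, height = mask.size
    for y in range(height):
        for x in range(width):
            if src[x, y] == 0:
                continue
            has_neighbour = any(
                src[x + dx, y + dy] != 0
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0)
                and 0 <= x + dx < width and 0 <= y + dy < height
            )
            if not has_neighbour:
                dst[x, y] = 0       # drop the lone pixel
    return cleaned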





Figure 7: Variants of mask divergence: a – the mask was applied using software, b – the mask was
applied manually

   The operation of such an additional processing step would increase the duration of image
processing by 2-10 seconds, depending on the image size. Examples of divergence situations are
shown in Fig. 7.
  Therefore, it was decided to add a sixth column, which indicates unambiguously negative
discrepancies, i.e., pixels that were unequivocally identified by the specialist as errors; the results are
given as average values. There were 7 images in which the specialist did not notice any errors, and
only 2 images with a rather high error rate of 5.9% and 7.8%; the rest gave less than 1%, which most
likely indicates an incorrectly selected color palette for those images. Approximately the same
situation was observed with the modernized algorithm; only 3 images exceeded the 1% discrepancy
threshold.
  For comparison, in manual mode one image is processed for about 5 hours, while with the
software, taking into account manual adjustment of the color palette, it took 3-5 minutes, given that
there was no user interface and changes had to be made directly in the code. If the interface is
improved, the time lost on adjusting the color palette can be reduced further, to 1-2 minutes per
image, with an average deviation of 7.61%.


4. Conclusions
   In the course of the study, a basic algorithm for automatically creating masks for marking
microscopic images of the epithelium of the paranasal sinuses was developed and put into practice.
The method is distinguished by its information content and accuracy, and it can greatly reduce the
time needed to process medical images. With an improved interface, the time lost on adjusting the
color palette can be reduced further, to 1-2 minutes per image, with an average deviation of 7.61%.
    5. References

[1] O. Elemento, “The future of precision medicine: towards a more predictive personalized
     medicine.” Emerging Topics in Life Sciences, vol. 4, no. 2, pp. 175-177, 2020, doi:
     10.1042/etls20190197.
[2] H. Abdelhalim et al., "Artificial Intelligence, Healthcare, Clinical Genomics, and Pharmacogenomics
     Approaches in Precision Medicine", Frontiers in Genetics, vol. 13, 2022, doi:
     10.3389/fgene.2022.929736.
[3] C. Bales et al., “Can machine learning be used to recognize and diagnose coughs?” In: 2020 8th E-
     Health and Bioengineering Conference, EHB 2020; 2020, doi: 10.1109/EHB50910.2020.92801.
[4] C. Zhang et al., "Correction of out-of-focus microscopic images by deep learning", Computational
     and Structural Biotechnology Journal, vol. 20, pp. 1957-1966, 2022, doi: 10.1016/j.csbj.2022.04.003.
[5] A. E. Listyarini, “The Relations of Using Digital Media and Physical Activity with the Physical
     Fitness of 4th and 5th Grade Primary School Students.” Physical Education Theory and Methodology,
     vol. 21, no. 3, pp. 281-287, 2021, doi: 10.17309/tmfv.2021.3.12.
[6] W. S. A. Al Attar, “The Current Implementation of an Evidence-Based Hamstring Injury Prevention
     Exercise (Nordic Hamstring Exercise) among Athletes Globally.” Physical Education Theory and
     Methodology, vol. 21, no. 3, pp. 273-280, 2021, doi: 10.17309/tmfv.2021.3.11.
[7] R. Nazaryan and L. Kryvenko, “Salivary oxidative analysis and periodontal status in children with
     atopy.” Interventional Medicine and Applied Science, vol. 9, no. 4, pp. 199-203, 2017, doi:
     10.1556/1646.9.2017.32.
[8] N. Gutarova et al., “Features of the morphological state of bone tissue of the lower wall of the
     maxillary sinus with the use of fixed orthodontic appliances”, Pol Merkur Lekarski, vol. 49, no. 286,
     pp.232-235, 2020.
[9] L. Shepherd, Á. Borges, B. Ledergerber, et al.”Infection-related and -unrelated malignancies, HIV
     and the aging population”, HIV Med, vol. 17, no 8, pp. 590-600, 2016, doi:10.1111/hiv.12359.
[10] P. Myronov, “Low-frequency ultrasound increase effectiveness of silver nanoparticles in a purulent
     wound model.” Biomedical Engineering Letters, vol. 10, no. 4, pp. 621-631, 2020, doi:
     10.1007/s13534-020-00174-5.
[11] V. Shevchuk, N. Odushkina, Y. Mikulinska-Rudich, V. Mys, R. Nazaryan, “A method of increasing
     the effectiveness of antibacterial therapy with ceftriaxone in the complex treatment of inflammatory
     diseases of the maxillofacial area in children”, Pharmacologyonline,vol. 3, pp. 652-62, 2021.
[12] Y. Yaroslavska, N. Mikhailenko, V. Kuzina, L. Sychova, R. Nazaryan, “Antibiotic therapy in the
     complex pathogenic treatment of patients with sialolithiasis in the stage of exacerbation of chronic
     sialoadenitis”, Pharmacologyonline, vol. 3, pp. 624-31, 2021.
[13] A. Nechyporenko et al., “Application of spiral computed tomography for determination of the
     minimal bone density variability of the maxillary sinus walls in chronic odontogenic and rhinogenic
     sinusitis”, Ukr J Radiol Oncol, vol. 29, no. 4, pp. 65-75, 2021.
[14] Polyvianna Y, Chumachenko D, Chumachenko T. Computer aided system of time series analysis
     methods for forecasting the epidemics outbreaks. 2019 15th International Conference on the
     Experience of Designing and Application of CAD Systems, CADSM 2019:1-4. doi:
     10.1109/CADSM.2019.8779344
[15] Yakovlev S., Bazilevych K., Chumachenko D., Chumachenko T., Hulianytskyi L., Meniailov I.,
     Tkachenko A. The concept of developing a decision support system epidemic morbidity control,
     CEUR Workshop Proceedings, 2020, vol. 2753, pp. 265-274.
[16] Chumachenko D., Balitskii V., Chumachenko T., Makarova V., Railian M. Intelligent expert system
     of knowledge examination of medical staff regarding infections associated with the provision of
     medical care, CEUR Workshop Proceedings, 2019, vol. 2386, pp. 321-330.
[17] V. Gargin, R. Radutny, G. Titova, D. Bibik, A. Kirichenko and O. Bazhenov, "Application of the
     computer vision system for evaluation of pathomorphological images", 2020 IEEE 40th International
     Conference        on     Electronics      and       Nanotechnology     (ELNANO),        2020.    doi:
     10.1109/elnano50318.2020.9088898.
[18] A. S. Nechyporenko, V. V. Alekseeva, L. V. Sychova, V. M. Cheverda, N. O. Yurevych, V. V.
     Gargin, “Anatomical prerequisites for the development of rhinosinusitis”, Lekarsky Obzor, vol. 6,
     no. 10, pp. 334-338, 2020.
[19] R. Radutniy, A. Nechyporenko, V. Alekseeva, G. Titova, D. Bibik and V. Gargin, "Automated
     Measurement of Bone Thickness on SCT Sections and Other Images", 2020 IEEE Third International
     Conference on Data Stream Mining & Processing (DSMP), 2020. doi:
     10.1109/dsmp47368.2020.9204289.
[20] A. Nechyporenko et al., "Comparative Characteristics of the Anatomical Structures of the Ostiomeatal
     Complex Obtained by 3D Modeling", 2020 IEEE International Conference on Problems of
     Infocommunications.        Science     and      Technology     (PIC    S&T),       2020.     doi:
     10.1109/picst51311.2020.9468111.
[21] V. Kovtun, I. Izonin and M. Gregus, "Formalization of the metric of parameters for quality evaluation
     of the subject-system interaction session in the 5G-IoT ecosystem", Alexandria Engineering Journal,
     vol. 61, no. 10, pp. 7941-7952, 2022. doi: 10.1016/j.aej.2022.01.054.
[22] I. Izonin, R. Tkachenko, Z. Duriagina, N. Shakhovska, V. Kovtun and N. Lotoshynska, "Smart Web
     Service of Ti-Based Alloy’s Quality Evaluation for Medical Implants Manufacturing", Applied
     Sciences, vol. 12, no. 10, p. 5238, 2022. doi: 10.3390/app12105238.
[23] D. Chumachenko, "On Intelligent Multiagent Approach to Viral Hepatitis B Epidemic Processes
     Simulation", 2018 IEEE Second International Conference on Data Stream Mining & Processing
     (DSMP), 2018. doi: 10.1109/dsmp.2018.8478602.
[24] Y. Kuzenko, O. Mykhno, V. Sikora, V. Bida, O. Bida, “Dental terminology "discoloration" or
     "pigment dystrophy" - a review and practical recommendations”, Pol Merkur Lekarski,
     2022;50(295):65-67.
[25] Y. Kuzenko, A. Romanyuk, A. Politun, L. Karpenko, “S100, bcl2 and myeloperoxid protein
     expirations during periodontal inflammation”, BMC Oral Health, 2015;15:93. doi:10.1186/s12903-
     015-0077-8