Technologies of Object Recognition in Space for Visually
Impaired People
Nataliya Boyko a, Bohdan Mandych a
a
     Lviv Polytechnic National University, Profesorska Street 1, Lviv, 79013, Ukraine

                 Abstract
                 Eyesight is a person's unique ability to visually notice obstacles in their path. Unfortunately,
                 this statement does not apply to everyone. Moving from one place to another causes many
                 challenges for visually impaired people. In the period of the rapid development of artificial
                 intelligence systems, as well as the expansion of the capabilities of mobile devices, there is an
                 excellent opportunity to find an affordable and effective solution to the problems of blind
                 people. The development of applications that detect objects in the user's environment is one of
                 the priority approaches to this problem. A mobile application can warn the user of obstacles in
                 their path and help them move from one place to another, as well as give the user the
                 opportunity to avoid unwanted collisions and stumbling. The target devices for deploying such
                 applications are smartphones running the Android operating system, chosen for the breadth
                 and high availability of these devices: Android smartphones of any price segment are available
                 almost everywhere. Choosing such a popular category of devices also saves time on the
                 development and testing of new special navigation gadgets, which could otherwise serve as an
                 alternative solution. Therefore, this article proposes a model that uses the smartphone, a
                 popular device accessible to everyone, with installed software that can help a visually impaired
                 person detect objects in their environment and navigate to the destination. The user receives
                 all of the processed output in sound form, as navigation instructions or a short description of
                 the object.

                 Keywords
                 detection scheme, convolutional neural network, machine learning, TensorFlow API, Google
                 Cloud Vision.

1. Introduction
   Due to the rapid development of deep learning technologies, engineers have been able to create
complex machine learning models designed to detect objects in images, regardless of their features
and geometric shape. It also facilitated the replacement of existing heuristic-based systems in favor of
machine learning models with better performance and speed. The massive spread of mobile phones
and smartphones among users around the world, as well as the ever-increasing demands and
expectations for greater performance, have challenged the industry to more widely use the latest and
greatest technologies to meet demand. One of the topical solutions is the use of machine learning
algorithms for object detection in space [1]. Training is the phase in which a model, usually a neural
network, learns to behave in a certain way based on given sets of data. This step can be carried out in
the cloud, and the trained models can then be distributed to mobile devices, where they are used for
inference on previously unseen data. When using more advanced technologies and algorithms on a
mobile device, one of the problems is the limited computing power of its hardware. In such a case, it is
important that the operations performed are optimized for mobile devices. With the original mobile
version of TensorFlow, namely TensorFlow Mobile, and the updated mobile library TensorFlow Lite,
developers can run pre-trained models that are optimized for mobile hardware.

IDDM’2020: 3rd International Conference on Informatics & Data-Driven Medicine, November 19–21, 2020, Växjö, Sweden
EMAIL: nataliya.i.boyko@lpnu.ua (N. Boyko); mandych9819@gmail.com (B. Mandych)
ORCID: 0000-0002-6962-9363 (N. Boyko); 0000-0003-4223-3067 (B. Mandych)
            ©️ 2020 Copyright for this paper by its authors.
            Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
            CEUR Workshop Proceedings (CEUR-WS.org)
    The object of study is the means for deploying intelligent object detection and spatial navigation
systems.
    The subject of study is the TensorFlow machine learning technology, capable of building and
training neural networks to identify and decipher patterns and correlations.
    The purpose of this work is to analyze the implementation of machine learning technologies for
object detection and spatial navigation on devices running the Android operating system.
Unfortunately, in the modern world, there are not many available tools that could make life easier for
people with visual impairments. Thanks to the constant development of machine learning technologies
and the growth in the number of platforms for their implementation, there is a good prospect of
finding an optimal solution to this problem. Therefore, the main purpose of this work is the study of
applied tools for detecting objects and for spatial navigation, as well as the analysis of the operation of
convolutional neural networks, which make it possible to introduce an accessible and understandable
software solution for this category of people.

2. Theoretical analysis
2.1. Visual Processing
    The ability to see clearly depends on how well the components of the human eye are functioning.
Light rays are reflected from all objects. The eyes receive light rays and transmit detailed information
to the brain, which interprets them as images. Each part of the eye plays its own role in the
transmission of these images. The sclera is a dense, opaque, protective protein membrane. In front is
the cornea, a transparent window that allows light to enter the eye. Around the cornea is a
thin, transparent membrane called the conjunctiva that helps protect the front of the eye and the inside
of the eyelids [2]. Behind the sclera is the middle layer - the choroid. It is dark in color
to prevent light from reflecting inside the eye and contains primarily the blood vessels that feed the
eye. The front part of the choroid is the iris, which gives the eyes their color. At the center of the iris
is the pupil, a round hole that looks like a black dot. The muscles in the iris control the size of the
pupil by letting in more or less light. In bright light, the pupil shrinks to a small size to prevent too
much light from entering. In low light or darkness, the pupil is enlarged to allow more light to enter
the eye [3]. The task of the retina is to collect light information that the main nerve of the eye (optic
nerve) sends to the brain in the form of nerve impulses [18]. The brain then transforms these messages
into images. The retina has two types of light-sensitive cells - rods and cones, that capture rays of
light. Rods help to see in dim light, while cones help to see details and colors. The transparent and
flexible lens of the eye also plays an important role in the visual process. It focuses light on the retina.
For precise tasks, light is focused at the center of the retina, in an area called the macula. The muscles
around the lens control its shape, allowing the human to see objects at different distances. The cavity
between the lens and the cornea contains a fluid called the aqueous humour. A jelly-like substance
called the vitreous humour fills the cavity behind the lens. The aqueous and vitreous humour give the eyes their
shape. By changing its shape, the lens provides clear vision at different distances. To focus on near
objects, the muscles of the eye contract and the lens becomes rounder. When a person looks at a
distant object, the same musculature relaxes and the lens is flattened [4].
    The retina consists of 10 layers of photoreceptor cells, 6 of which are layers of light-sensitive cells.
The 2 types of photoreceptors have a special shape, which is why they are called cones and rods. The
rods are extremely sensitive to light and provide black-and-white perception and night vision to the
eye. The cones, in turn, are not so sensitive to light, but they are able to distinguish colors - the
optimal performance of the cones is noted during the daytime. Thanks to the work of photoreceptors,
light rays are transformed into complexes of electrical impulses and are sent to the brain at an
incredibly high speed, and these impulses travel along over a million nerve fibers in a
fraction of a second. The communication of photoreceptor cells in the retina is very complex. The
cones and rods are not directly connected to the brain. Having received the signal, they redirect it to
bipolar cells, and they redirect the signals already processed by themselves to the ganglion cells, more
than a million axons (neurites through which nerve impulses are transmitted) of which make up a
single optic nerve through which the data goes to the brain [5]. After the processed visual information
enters the brain, it begins sorting, processing and analyzing it, and also forms a whole image from the
individual data. With the help of two eyes, two "pictures" of the world that surrounds a person are
formed - one for each retina. Both "pictures" are transmitted to the brain, and in reality a person sees
two images at the same time. Image separation and highly complex optical pathways make it possible
for the brain to see with each of its hemispheres separately using each of the eyes. This allows humans
to speed up the processing of the flow of incoming information, and also preserves vision with
one eye if a person for any reason ceases to see with the other. This whole process of
collecting, analyzing and processing visual data by the human brain is called visual processing [6].

2.2.    Human vision and blindness

       Human vision is the process of visually detecting images and the location of objects in the
   surrounding world. The human visual system consists of two separate parts. The eyes act as a
   receiving channel of visual information from the outside world. Their main function is to capture
   and transform light rays into signals transmitted to special areas of the brain that are able to
   process these signals and present them in the form of images [18]. As a result, thanks to the
   information received from the eyes, the brain forms an internal picture of the surrounding space
   visible to a person. During visual information processing, the brain removes "blind" spots and
   distortions caused by the micromovements of the eyes, blinking and a narrow viewing angle, offering
   the person an adequate integral image. Despite the lack of a clear distribution of function between the
   brain and eyes, it is still possible to consider each of the components of the visual system separately [7].
       Limitations of human vision:
       - limited memory: a human cannot remember a quickly flashed image;
       - limited to the visible spectrum;
       - susceptibility to illusions.




Figure 1: Stages of image processing by human organs
   The World Health Organization (WHO) notes an increase in the number of people with moderate
to severe visual impairment. For such categories of people, the use of contact lenses or regular glasses
is neither relevant nor effective. As a result, a person loses the ability to fully perform routine tasks or
completely loses the ability to work. Blindness is a condition characterized by an absolute loss of vision, temporarily or
permanently. Various diseases of the central nervous system, such as meningitis, encephalitis, and
toxic brain damage, can lead to blindness. In the elderly, a sharp decrease in vision is often associated
with degenerative-dystrophic changes in the retina and optic nerve.
Figure 2: Global estimate of visual impairment (World Health Organization data for year 2012)

    According to the 2012 WHO criteria, a global estimate suggests that of the 285 million people
with visual impairments, approximately 39 million are blind. Most often, vision problems occur in
people living in developing countries. In these countries, the main cause of many cases of blindness
(about 48% of cases) is cataract (partial or complete opacity in the lens of the eye). Most often, vision
problems occur among the elderly. Globally, women are at greater risk than men. If the treatment is
started correctly and on time, about 85% of cases of visual impairment can be avoided, and about
75% of cases of blindness can be prevented or cured [8, 13].

2.3.    Computer Vision
   Computer vision is an interdisciplinary field that deals with the extraction of information from
digital images and videos, regardless of their type and format. The purpose of this area is the
automation of processes and tasks in the field of computer technology, which are performed by the
human visual system.




Figure 3: Stages of image processing performed by computer vision

    The task of computer vision is to find optimal methods for analyzing and processing images and
presenting them in the form of multidimensional models, digital and symbolic data or equations. The
information that different algorithms extract from images in computer vision can have different nature
and type. Some algorithms simply split the image into parts corresponding to individual objects or
different parts of objects. The concept of understanding digital images is based on the principle of
distributing symbolic information and image data using models built on the theory of physics,
statistics, geometry and machine learning [18, 15].
    In object classification, the human brain is known to operate in the semantic domain, that is,
semantically significant elements correspond to line segments, their shapes and boundaries. Even with
the use of modern data processing techniques, such components still cannot be accurately recognized
by a computer, so it remains difficult for computer vision to represent visual data the way humans do.
A computer has to represent visual data in an information space formed by significantly disparate but
less important components, for example, shades, textures and others. Thus, the philosophy of working
with visual objects in computer vision is
not at all the same as in humans. That is, if a person is able to perceive the visible picture integrally in
his consciousness, the computer in its logic relies on the differences between the elements of the
image.
Figure 4: Example of computer vision image recognition

    Computer vision is used in the tasks of automatic analysis, processing and understanding of useful
information obtained from an image or sequences of several frames. First, it is necessary to agree on
the theoretical foundations of the approach and prepare the algorithmic basis for achieving automation
of the understanding of visual data. In its work, computer vision relies on the technology of applying
machine learning in the field of identifying objects in images. The effectiveness of computer vision
solutions is constantly growing, regardless of their field of application. Computer vision is widely
used in various sectors of industry and social life, from medicine (medical scanners) and security
equipment (CCTV cameras) to the military industry (missile guidance optics) and the automotive
industry (autopilot and other innovations). The end result of applying machine learning models and
theories in this discipline is the creation of automated computer vision systems.
    The field of machine learning models for visual recognition emerged in the late 2000s and now
dominates the area of computer vision. Due to the large amount of labeled data, complex algorithms
and increasing computing power, these models are able to classify objects without human
intervention. Currently, the most common algorithm for detecting objects in the field of computer
vision are convolutional neural networks, for which it has been proven that their ability to classify
images exceeds the human level [9, 17].
    The computational characteristics of artificial neural networks are similar to those of graphical
computations in real time when rendering objects in video games, where operations such as matrix
transformations are performed per pixel in parallel. Around 2005, researchers realized the
potential benefits of deploying and training artificial neural networks on a graphics processing unit
rather than a central processing unit, resulting in faster computations and better performance. This
allowed the researchers to add more layers in neural networks, also known as deep neural networks,
and use more data while maintaining optimal execution times [10, 20].

3. Architecture diagram

   [Figure 5 layout: several Users exchange requests and responses with the Application; the
Application sends images to the API and receives labels from it; the API comprises the stages of
Segmentation, Feature extraction, and Classification and identification, which operate over the
Dataset.]
Figure 5: System architecture
    The above diagram is a four-layer architectural / conceptual diagram of the system. This
conceptual diagram shows the interaction between different parts of the proposed system by dividing
them into four separate components. This division more accurately represents the relationships between
different elements in the scheme and allows a better understanding of the structure of the system. The outer
layer of this architecture is represented by the user layer and is the system entry point. The figure also
shows the interaction between users and the Android application, which is the middle layer of the
architecture. The Android application contains the user interface of the system, it also receives
requests from users and sends them to the API for further processing. The API uses a dataset that
contains thousands of labelled images to process requests. In this case, the system compares the
current image received from the user with the images in the data set. After classification and
identification, the API sends to the application a label and a detection accuracy parameter of the
current image.
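    To make this interaction more concrete, the contract between the application and the API implied
by the diagram can be sketched as follows. This is only a minimal illustration; the type and method
names below are our assumptions, not the actual interface of the system.

    // Hypothetical contract between the Android application and the recognition API:
    // the application uploads the current camera frame, and the API answers with a
    // label and a detection accuracy parameter, as described above.
    interface RecognitionApi {
        LabelResult sendImage(byte[] jpegBytes);
    }

    final class LabelResult {
        final String label;     // class assigned after segmentation, feature extraction and classification
        final float accuracy;   // detection accuracy parameter returned to the application
        LabelResult(String label, float accuracy) {
            this.label = label;
            this.accuracy = accuracy;
        }
    }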

4. About system and methods
4.1. Classification and object detection
   The process of determining to which of k possible categories some input x belongs is called the
classification problem. It can be described by a function f: R^n → {1, ..., k}. The result can be
returned as a single class, or as a vector representing a distribution of values over all classes. Image
classification is the process of determining the category to which the object shown in the image
belongs [20].
   Object detection is the process of detecting objects in an image; it applies a recognition algorithm
on all sub-windows of the original image, ranging from one to several classes of objects [11]. Object
detection can be used to detect faces, pedestrians or vehicles in an image. For example, detection
in the YOLO network performs the process of dividing the image into subsets [12]. To identify
objects, it is necessary to localize objects within the image before they have been classified. In the
YOLO architecture, the image is first divided into a grid, and this grid is then used to evaluate multiple
image sub-windows.
   If video analysis is performed on a stream of several images (frames) per second, and
detection is performed in real time, then such an algorithm is considered an algorithm for detecting
objects in real time. Analyzing multiple images per second in object detection puts a lot of emphasis
on efficient algorithms as the processing power required increases.
   In machine learning, the training stage is the stage where the parameters of the model θ are
optimized to minimize the cost function; the model essentially learns the mapping function f* from
input to output. In the inference phase, the fully trained model is shown some input value x and
outputs a value y obtained by applying the learned function.
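   To make the distinction between the two tasks concrete, a minimal sketch is given below; the
interfaces and the Detection type are illustrative names of our own, not part of any library. A classifier
maps a feature vector to one of k categories, while a detector returns a list of localized, labelled
objects.

    // Classification: f maps a feature vector x from R^n to one of k categories.
    interface Classifier {
        int classify(float[] x);    // returns a class index in {1, ..., k}
    }

    // Detection: every detected object carries a class label, a confidence score
    // and a bounding box given in image coordinates.
    final class Detection {
        final String label;
        final float confidence;
        final float left, top, right, bottom;
        Detection(String label, float confidence,
                  float left, float top, float right, float bottom) {
            this.label = label;
            this.confidence = confidence;
            this.left = left;
            this.top = top;
            this.right = right;
            this.bottom = bottom;
        }
    }

    interface Detector {
        java.util.List<Detection> detect(float[][] imagePixels);
    }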

4.2.    Mean Average Precision
   The most common metric for object detection performance is the mean Average Precision (mAP),
as defined by the PASCAL VOC. Best performance is reported as a higher mAP value based on the
ideal expected result field and class data for the object detection task. Before using mAP to detect an
object, all predicted fields and classes are sorted in descending order of probability and aligned with
the fields and classes of the ideal expected result. If the prediction classes and the ideal expected
result are the same, and their intersection over union (IoU, also known as the Jaccard index) is
greater than or equal to 0.5 (0.5 IoU), then the prediction is considered a match. A match is
considered true if and only if it has not been used before to reduce duplicate object detection [13]. The
ranking quality metric is calculated using numerical integration as the area under the
precision-recall curve, and the mAP result is then obtained by averaging this value over all
classes.
   To get mAP, we have to calculate the precision over all the objects in the images. We also
need to consider the precision result for each object detected by the model in the image. There is a
necessity to consider all the assumed limiting fields with a precision result above a certain threshold.
Bounding boxes that are above the threshold are considered to be positive, and any provided bounding
boxes below the threshold are considered negative. So, the higher the precision threshold, the lower
the mAP will be.
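   As an illustrative sketch of the matching rule described above (not the PASCAL VOC reference
implementation), predictions sorted by descending confidence can be counted as true or false positives
as follows; the Box type and the iou() helper are assumptions introduced only for this example.

    // Count true/false positives at a 0.5 IoU threshold; each ground-truth box may be matched once.
    static final class Box {
        String label;
        float left, top, right, bottom;
        float confidence;
    }

    static double[] precisionRecall(java.util.List<Box> sortedPredictions,
                                    java.util.List<Box> groundTruth) {
        int tp = 0, fp = 0;
        boolean[] used = new boolean[groundTruth.size()];
        for (Box pred : sortedPredictions) {                       // sorted by descending confidence
            int best = -1;
            float bestIou = 0f;
            for (int i = 0; i < groundTruth.size(); i++) {
                float overlap = iou(pred, groundTruth.get(i));     // assumed IoU helper
                if (overlap > bestIou) { bestIou = overlap; best = i; }
            }
            if (best >= 0 && bestIou >= 0.5f && !used[best]
                    && pred.label.equals(groundTruth.get(best).label)) {
                tp++;                // first sufficiently overlapping match of the correct class
                used[best] = true;   // later duplicates of the same object count as false positives
            } else {
                fp++;
            }
        }
        double precision = (tp + fp) == 0 ? 0 : tp / (double) (tp + fp);
        double recall = groundTruth.isEmpty() ? 0 : tp / (double) groundTruth.size();
        return new double[] { precision, recall };
    }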

5. Experimental part
     The typical software used for machine learning tasks on Android devices is TensorFlow Lite. It is
widespread because it offers an interface for implementing common machine learning algorithms and
executable model code. Models created in TensorFlow can be ported to heterogeneous systems with
little or no change to devices ranging from mobile phones to distributed servers. This software was
created and maintained by Google and is used internally for machine learning purposes. TensorFlow
performs computation as a data flow graph with states.
     Google designed TensorFlow Lite to be able to run on heterogeneous systems, including mobile
devices. This was motivated by the problems of transferring data between devices and the data centers
where calculations would otherwise be performed. TensorFlow Lite
allowed developers to create interactive applications without the round-trip network latency that
offloading the computation would require [14, 18].
     Since the machine learning task is computationally expensive, model optimization is used to
improve performance. TensorFlow Lite's minimum hardware requirements for random access memory
(RAM) size and processor speed are low, and the primary bottleneck is computation speed, since
low latency is desirable for mobile applications. For example, a mobile device with hardware capable
of 10 billion floating point operations per second (10 GFLOPS) is limited to running a model that
requires 5 GFLOPs per frame at 2 frames per second, which may make the desired program
performance impossible.
     The optimizations in TensorFlow Lite include hardware acceleration through the
silicon layer, interfaces such as the Android Neural Networks API, and optimized mobile ANNs such
as MobileNets [15] and SqueezeNet [16, 19]. Trained TensorFlow models are
converted to the TensorFlow Lite model format.
     Also, an important component of such a system is the use of the Google Cloud Vision API, which
allows a specific object in a digital image to be identified using machine learning models behind a
REST API. It quickly assigns the image to a large number of categories (for example,
"helicopter", "Statue of Liberty", etc.), highlights distinctive features and objects within the image
itself, and finds the printed words contained within. It can be used to build metadata for an image
index, to flag malicious content, or to enable new advertising scenarios by analyzing image content. The
image from the request is analyzed and integrated with the image storage in Google Cloud Storage.
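    For illustration, a label-detection request through the public Cloud Vision Java client might look
roughly like the sketch below; this is not the code of the described system, and authentication and
error handling are omitted.

    import com.google.cloud.vision.v1.AnnotateImageRequest;
    import com.google.cloud.vision.v1.AnnotateImageResponse;
    import com.google.cloud.vision.v1.BatchAnnotateImagesResponse;
    import com.google.cloud.vision.v1.EntityAnnotation;
    import com.google.cloud.vision.v1.Feature;
    import com.google.cloud.vision.v1.Image;
    import com.google.cloud.vision.v1.ImageAnnotatorClient;
    import com.google.protobuf.ByteString;

    import java.io.FileInputStream;
    import java.util.Collections;

    public class VisionLabelExample {
        public static void main(String[] args) throws Exception {
            // Read the picture and wrap it into a Vision API request asking for labels.
            ByteString imgBytes = ByteString.readFrom(new FileInputStream("scene.jpg"));
            Image image = Image.newBuilder().setContent(imgBytes).build();
            Feature labels = Feature.newBuilder().setType(Feature.Type.LABEL_DETECTION).build();
            AnnotateImageRequest request = AnnotateImageRequest.newBuilder()
                    .addFeatures(labels)
                    .setImage(image)
                    .build();

            // Send the request and print the returned categories with their scores.
            try (ImageAnnotatorClient vision = ImageAnnotatorClient.create()) {
                BatchAnnotateImagesResponse response =
                        vision.batchAnnotateImages(Collections.singletonList(request));
                for (AnnotateImageResponse res : response.getResponsesList()) {
                    for (EntityAnnotation annotation : res.getLabelAnnotationsList()) {
                        System.out.println(annotation.getDescription() + ": " + annotation.getScore());
                    }
                }
            }
        }
    }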
     At the initial stage of developing an Android application that uses TensorFlow Lite technology,
we need to import the required software libraries into the application. To do this, we add the
following line to the dependency section of our build.gradle file:
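    A typical form of this dependency (the exact artifact version here is an assumption) is:

    // build.gradle (module level): assumed TensorFlow Lite dependency
    dependencies {
        implementation 'org.tensorflow:tensorflow-lite:+'
    }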

    In order to load the model and configure its start-up, we need to import the TensorFlow Lite
interpreter, which also provides a set of inputs. Then in TensorFlow Lite it will be possible to execute
the model and set the outputs.

   Then we create an instance of the Interpreter and load the model into a MappedByteBuffer.
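    A minimal sketch of these steps in the TensorFlow Lite Java API is shown below; it follows the
usual pattern of memory-mapping the model from the application assets and handing it to the
interpreter, and the class, field and file names are assumptions rather than the authors' exact code.

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    import android.app.Activity;
    import android.content.res.AssetFileDescriptor;

    import org.tensorflow.lite.Interpreter;

    public class ImageClassifier {
        private final Interpreter tflite;

        public ImageClassifier(Activity activity) throws IOException {
            // Memory-map the .tflite model shipped in the APK assets and hand it to the interpreter.
            tflite = new Interpreter(loadModelFile(activity));
        }

        private MappedByteBuffer loadModelFile(Activity activity) throws IOException {
            // getModulePath() returns the model file name inside the assets folder (see below).
            AssetFileDescriptor fd = activity.getAssets().openFd(getModulePath());
            FileInputStream stream = new FileInputStream(fd.getFileDescriptor());
            FileChannel channel = stream.getChannel();
            return channel.map(FileChannel.MapMode.READ_ONLY, fd.getStartOffset(), fd.getDeclaredLength());
        }

        private String getModulePath() {
            return "detect.tflite";   // assumed model file name in assets
        }
    }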

    To load the model, we must also use the getModulePath() function, which returns a string that
points to a file in the assets folder. To classify images, we need to call the run method on the
interpreter, and pass it an array of labels and image data:
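    A possible form of this call, sketched as a fragment of the assumed ImageClassifier class above
(labels, imgData and convertBitmapToByteBuffer() are assumed names), is:

    // labels: list of class names loaded from the label file (assumed field).
    // imgData: the preprocessed image pixels packed into a ByteBuffer.
    // labelProbArray: receives one probability per label after the call.
    ByteBuffer imgData = convertBitmapToByteBuffer(bitmap);    // assumed preprocessing helper
    float[][] labelProbArray = new float[1][labels.size()];
    tflite.run(imgData, labelProbArray);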

   The classifyFrame () method contains the core of the TensorFlow Lite library:
    Then we load a raster map (bitmap) for the classifier and scale it to the required size. After that, to
get a list of the 3 best classes, we need to use the classifyFrame() method, which will return the text of the class
labels and the calculated weights.
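    A sketch of what such a classifyFrame() method might look like in the same assumed class is given
below; INPUT_SIZE and the helper names are again assumptions, not the authors' exact code.

    // Scales the incoming bitmap to the model input size, runs the model and returns the
    // three highest-scoring labels together with their weights.
    // (Uses android.graphics.Bitmap, java.nio.ByteBuffer, java.util.PriorityQueue,
    //  java.util.AbstractMap and java.util.Map.)
    private String classifyFrame(Bitmap bitmap) {
        Bitmap scaled = Bitmap.createScaledBitmap(bitmap, INPUT_SIZE, INPUT_SIZE, true);
        ByteBuffer imgData = convertBitmapToByteBuffer(scaled);
        float[][] labelProbArray = new float[1][labels.size()];
        tflite.run(imgData, labelProbArray);

        // Keep the three best classes in a small min-heap ordered by probability.
        PriorityQueue<Map.Entry<String, Float>> top =
                new PriorityQueue<>(3, (a, b) -> Float.compare(a.getValue(), b.getValue()));
        for (int i = 0; i < labels.size(); i++) {
            top.add(new AbstractMap.SimpleEntry<>(labels.get(i), labelProbArray[0][i]));
            if (top.size() > 3) {
                top.poll();    // drop the weakest candidate
            }
        }

        // Emit the remaining entries from best to worst.
        StringBuilder result = new StringBuilder();
        while (!top.isEmpty()) {
            Map.Entry<String, Float> entry = top.poll();    // polled in ascending order
            result.insert(0, String.format("%s: %.2f%n", entry.getKey(), entry.getValue()));
        }
        return result.toString();
    }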




   Precision predictions are made using the Intersection over Union method. The object detection
system makes predictions in terms of bounding boxes and class labels. IoU measures
the overlap between two boundaries. We use it to measure how much the predicted area of an object
overlaps with its real (ground truth) area. For a given dataset, we pre-define an IoU threshold (say 0.5)
for classifying whether a prediction is a True Positive or a False Positive.
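   A minimal sketch of the IoU computation for two axis-aligned boxes, written as our own helper
(not library code), with each box given as [left, top, right, bottom]:

    // Intersection over Union of two boxes a and b, each given as [left, top, right, bottom].
    static float iou(float[] a, float[] b) {
        float interWidth  = Math.max(0f, Math.min(a[2], b[2]) - Math.max(a[0], b[0]));
        float interHeight = Math.max(0f, Math.min(a[3], b[3]) - Math.max(a[1], b[1]));
        float intersection = interWidth * interHeight;
        float areaA = (a[2] - a[0]) * (a[3] - a[1]);
        float areaB = (b[2] - b[0]) * (b[3] - b[1]);
        float union = areaA + areaB - intersection;
        // The prediction is treated as a positive match when this value reaches the chosen
        // threshold (for example 0.5), as described above.
        return union <= 0f ? 0f : intersection / union;
    }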




Figure 6: Overlapping areas of predicted and ground truth bounds
   Below are the results of identifying objects on a smartphone with an installed experimental
application. In the resulting images, you can observe the overlap of bounding boxes on certain
objects, object labels and the accuracy of the detection in percent. Based on the results, it can be
argued that TensorFlow Lite technology is capable of qualitatively and fairly accurately identifying
objects in the image.




Figure 7: Results of object detection
   To transmit sound information about detected objects from the screen, the functionality of the
TalkBack service can be used; it is often used by people with visual impairments on their
smartphones. This service is a Google screen reader pre-installed on Android devices. TalkBack
provides voice prompts so that the user can use the device without looking at the screen.

6. Conclusions
    The main driving factors in the analysis and testing of this software were the motivation and the
idea to find the optimal solution to the problems of people with visual impairments. This article
reviewed modern technologies for detecting objects in digital images, which include the TensorFlow
Lite library and the Google Cloud Vision API. The main object of the research was the use of the
TensorFlow Lite library, which contains many useful algorithms for processing, analyzing and
classifying images. This technology has demonstrated a wide range of tools for machine learning and
object detection. The Google Cloud Vision API uses a COCO dataset, which contains millions of
pictures, that can be compared with the input image. All Google Cloud images are sent and processed
in the cloud. Also, choosing the Android operating system from Google as the target platform for
distributing the application eliminates the possibility of unpleasant compatibility issues.
    Among the main limitations of this system are the necessity to keep the user's smartphone always
turned on, a stable Internet connection and a sufficient battery level. The user must carry the device
at all times. A useful addition could be a special hook-on case for the smartphone, which would leave
the person's hands free.


7. References
   [1] A.K. Tung, J. Hou, J. Han, “Spatial clustering in the presence of obstacles”, The 17th Intern.
        conf. on data engineering (ICDE’01), Heidelberg, 2001, pp. 359–367.
   [2] C. Boehm, K. Kailing, H. Kriegel, P. Kroeger, “Density connected clustering with local
        subspace preferences” IEEE Computer Society [Proc. of the 4th IEEE Intern. conf. on data
        mining, Los Alamitos, 2004, pp. 27–34].
   [3] D. Guo, D.J. Peuquet, M. Gahegan, “ICEAGE: Interactive clustering and exploration of large
        and high-dimensional geodata”, vol. 3, N. 7, Geoinformatica, 2003, pp. 229–253.
   [4] D. Harel, Y. Koren, “Clustering spatial data using random walks”, Proc. of the 7th ACM
        SIGKDD Intern. conf. on knowledge discovery and data mining, San Francisco, California,
        200, pp. 281–286.
   [5] N. Boyko, M. Kuba, L. Mochurad, S. Montenegro “Fractal Distribution of Medical Data in
        Neural Network”, The 2nd International Workshop on Informatics & Data-Driven Medicine
        (IDDM 2019), Volume 1. Lviv, Ukraine, November 11-13, 2019, pp. 307-318.
   [6] D.J. Peuquet, “Representations of space and time”, N. Y.: Guilford Press, 2002.
   [7] P. Vitynskyi, R. Tkachenko, I. Izonin and H. Kutucu, "Hybridization of the SGTM Neural-
        Like Structure Through Inputs Polynomial Extension," 2018 IEEE Second International
        Conference on Data Stream Mining & Processing (DSMP), Lviv, 2018, pp. 386-391, doi:
        10.1109/DSMP.2018.8478456.
   [8] H.-Y. Kang, B.-J. Lim, K.-J. Li, “P2P Spatial query processing by Delaunay triangulation”,
        Lecture notes in computer science, vol. 3428,Springer/Heidelberg, 2005, pp. 136–150.
   [9] M. Ankerst, M. Ester, H.-P. Kriegel, “Towards an effective cooperation of the user and the
        computer for classification” [Proc. of the 6th ACM SIGKDD Intern. conf. on knowledge
        discovery and data mining, Boston, Massachusetts, USA, 2000, pp. 179–188].
   [10] O. Veres, N. Shakhovska, “Elements of the formal model big date”, The 11th Intern. conf.
        Perspective Technologies and Methods in MEMS Design (MEMSTECH), Polyana, 2015, pp.
        81-83
   [11] N. Boyko, O. Pylypiv, Yu. Peleshchak, Yu. Kryvenchuk, J. Campos “Automated Document
        Analysis for Quick Personal Health Record Creation”, The 2nd International Workshop on
        Informatics & Data-Driven Medicine (IDDM 2019), Volume 1. Lviv, Ukraine, November 11-
        13, 2019, pp. 208-221. C. Zhang, Y. Murayama, “Testing local spatial autocorrelation using”,
        vol. 14, Intern. J. of Geogr. Inform. Science, 2000, pp. 681–692.
[12] R. Agrawal, J. Gehrke, D. Gunopulos, P. Raghavan, “Automatic sub-space clustering of high
     dimensional data”, vol. 11(1), Data mining knowledge discovery, 2005, pp. 5–33.
[13] V. Estivill-Castro, I. Lee, “Amoeba: Hierarchical clustering based on spatial proximity using
     Delaunay diagram” [9th Intern. Symp. on spatial data handling, Beijing, China, 2000, pp. 26–
     41].
[14] N. Boyko, L. Mochurad, I. Andrusiak, Yu. Drevnytskyi, “Organizational and Legal Aspects
     of Managing the Process of Recognition of Objects in the Image”, Proceedings of the
     International Workshop on Cyber Hygiene (CybHyg-2019) co-located with 1st International
     Conference on Cyber Hygiene and Conflict Management in Global Information Networks
     (CyberConf 2019), Kyiv, Ukraine, November 30, 2019, pp. 571-592.
[15] N. Boyko, N. Shakhovska, “Prospects for Using Cloud Data Warehouses in Information
     Systems”, 2018 in IEEE 13th International scientific and technical conference on computer
     sciences and information technologies (CSIT), vol. 2, DOI: 10.1109/STC-
     CSIT.2018.8526745
[16] I. Turton, S. Openshaw, C. Brunsdon “Testing spacetime and more complex hyperspace
     geographical analysis tools”, Innovations in GIS 7, London: Taylor & Francis, 2000, pp. 87–
     100.
[17] C. Aggarwal, P. Yu “Finding generalized projected clusters in high dimensional spaces”,
     ACM SIGMOD Intern. conf. on management of data, 2000, pp. 70–81.
[18] C.M. Procopiuc, M. Jones, P.K. Agarwal, T.M. Murali, “A Monte Carlo algorithm for
     fast projective clustering”, ACM SIGMOD Intern. conf. on management of data, Madison,
     Wisconsin, USA, 2002, pp. 418–427.
[19] A. Mulyak, V. Yakovyna, B. Volochiy “Influence of software reliability models on reliability
     measures of software and hardware systems”, Eastern-European Journal of Enterprise
     Technologies, 2015, Vol. 4(9), pp. 53-57.
[20] N. Shakhovska, S. Fedushko, M. Greguš ml., N. Melnykova, I. Shvorob, & Y. Syerov “Big
     Data analysis in development of personalized medical system”, Procedia Computer Science,
     Vol. 160, pp. 229–234. https://doi.org/10.1016/j.procs.2019.09.461