<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>December</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>An IoT Solution: A Fitness Trainer</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Yaroslav Hladkyi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Myroslava Gladka</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mykola Kostikov</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rostyslav Lisnevskyi</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”</institution>
          ,
          <addr-line>Prosp. Peremohy, 37, Kyiv, 01601</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>National University of Food Technologies</institution>
          ,
          <addr-line>Volodymyrska Street, 68, Kyiv, 01601</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Taras Shevchenko National University of Kyiv</institution>
          ,
          <addr-line>Volodymyrska Street, 60, Kyiv, 01601</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2022</year>
      </pub-date>
      <volume>0</volume>
      <fpage>1</fpage>
      <lpage>03</lpage>
      <abstract>
        <p>An IoT system for motion detection and control of fitness exercises can be valuable for people who do sports at home. Used as a mobile application, such a system gives users an assessment of their technique as they demonstrate exercises in front of the smartphone camera, along with recommendations that may help them improve, thereby increasing the efficiency of their workouts. These functions are implemented via a neural network able to recognize images. In addition, users can see their progress and information on exercises performed with flaws, which helps them correct their execution technique. The IoT system configures business rules and scenarios, the artefacts needed by users, the exercises they perform, the errors and deviations committed while doing the exercises, and the analysis of video files. The IoT solution is built on a neural network capable of recognizing the user's body posture during an exercise from a video file captured on the camera of their own smartphone. Using mathematical calculations, the artificial intelligence returns the result of the video file analysis, where the user can see their flaws in performing a certain task and get recommendations. A special factor in pattern recognition is the individual initial anthropometric data of each user, which must be taken into account in the analysis. Using an IoT system for monitoring and controlling the performance of fitness exercises will positively affect the trend towards a healthy lifestyle in today's world without involving personal fitness trainers.</p>
      </abstract>
      <kwd-group>
        <kwd>IoT system</kwd>
        <kwd>neural network</kwd>
        <kwd>image recognition</kwd>
        <kwd>fitness</kwd>
        <kwd>training</kwd>
        <kwd>technique</kwd>
        <kwd>mobile application</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Nowadays, when most professions involve sedentary work, stressful situations, and performing
tasks remotely on a PC, the question of recovery and of supporting the full functioning of the organism
emerges. Fitness has become widespread due to its positive effect on the human body: the activation
of anabolism, i.e. the accumulation of the plastic substances that form body tissues and of the energy
substances that ensure vital functions. Full realization of this process through fitness leads to
health improvement, when the human body functions in a way that provides complete physical and
mental well-being. Therefore, doing fitness exercises, including workouts on cardio machines and various
other activities, has a positive effect on human health and well-being [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        The supervision of a coach while doing fitness exercises can eliminate the need for deep
knowledge in the field of biomechanics, motion physiology, methods of the training process
organization, theoretical points of physical activities and sports [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. The coach's psychological, methodological,
nutritional, and pedagogical support helps the trainee avoid learning by personal trial and
error; the psychological aspect is no less important. However, constant
supervision of the training process by a fitness trainer limits the time and number of workouts.
Therefore, it is important to develop an image recognition system in which the role of a fitness
trainer is replaced, in part or in full, by a highly developed artificial intelligence [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>Among the mobile applications available in Ukraine, some offer their users an adaptive schedule of certain
exercises. Thus, people who cannot afford a gym membership, or do not have enough
free time, have the opportunity to use such applications and maintain good physical condition.
The users of such applications follow the instructions of a virtual trainer who builds exercise
routines depending on the user's characteristics. However, users have no way to know how
correct their technique in a given exercise is, even though technique is an integral factor in its efficiency.</p>
      <p>However, there are no analogues in the modern market of mobile fitness applications that
offer real-time support for doing exercises guided by artificial intelligence. Therefore, such an IoT
solution can potentially be an achievement in the field of sports mobile applications.</p>
      <p>The aim of the IoT solution under development is the correct arrangement of the relations
between the most important entities, the optimization of query execution, and the
implementation of business processes of various kinds.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Methods for Solving the Problem</title>
      <p>
        Artificial neural networks (ANNs) are mathematical models of the neural networks
found in living organisms, i.e. networks of nerve cells [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ]. Both biological and
artificial neural networks have neurons as their main element. The neurons are interconnected and
form layers; their number depends on the complexity of the network and
the tasks it solves [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Probably the most popular task of neural networks is visual image
recognition [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Nowadays, networks are created in which machines successfully
recognize symbols on paper and bank cards and signatures on official documents, detect objects,
etc. These functions facilitate human work significantly and increase the
reliability and accuracy of various work processes by avoiding mistakes caused by the human
factor [
        <xref ref-type="bibr" rid="ref8 ref9">8, 9</xref>
        ]. A neural network (NN) is a mathematical model, implemented in software and hardware,
which is based on the functioning principles of biological neural networks [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Thus, it is appropriate
to use such a network for image recognition during the training process. A convolutional NN (CNN) has
a special architecture that allows it to recognize images most effectively [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ]. The very idea of
the CNN is based on the alternation of convolutional and subsampling layers, and its structure is
unidirectional. The CNN got its name from the convolution operation, in which each image
fragment is multiplied element by element by the convolution kernel, and the results are summed up
and written to the corresponding position of the output image. This architecture provides invariance of
recognition with regard to object shifts, gradually increasing the "window" which the
convolution "faces", revealing larger and larger structures and patterns in the image [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
    </sec>
    <sec id="sec-3">
      <title>2.1. Positioning for Image Scanning</title>
      <p>
        Anthropometric parameters of a person performing a fitness exercise are the key indicators for
image recognition when performing fitness exercises [
        <xref ref-type="bibr" rid="ref14 ref15">14, 15</xref>
        ]. Therefore, the basic
initial values for the correct calculations of the performance quality should be the following
parameters: weight, height, length of different body parts (Fig. 1: 1x, 2x, 3x, 4x, 5x), circumferences
of the chest, waist, hips, limbs (Fig. 1: 1v, 2v, 3v, 4v, 5v, 6v, 7v, 8v), as well as physiological
indicators of key articular joints locations and distances between them. To do this, it is necessary to
measure and use these indicators as a basis for calculations (Fig. 1, e.g., the length of 2r-4r, 6l-7l, etc.)
[
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. It should be noted that the parameters of all points, except for 1 and 3, are represented by double
values: separately for the right and left sides of the body.
      </p>
    </sec>
    <sec id="sec-4">
      <title>2.2. A Neural Network for User's Motion Recognition</title>
      <p>
        For the correct work of a neural network for motion detection and recognition, the system uses
an LSTM architecture. It is based on anthropometric indicators and reference images of exercise
execution, including sets of body part motions and individual elements recorded while being
performed by 5 fitness coaches and 24 performers who were not professional athletes. The training
of this neural network is based on patterns that depend on indicators of individual body part motion in
correlation to the motion range expected when doing a certain exercise element [
        <xref ref-type="bibr" rid="ref10 ref15">10, 15</xref>
        ]. After this
training, we get an encoder that allows predicting an acceptable motion range for each exercise
according to the given individual parameters of the user. The function of adding new exercises is
implemented on the same principles, with each exercise represented as a set of individual
elements (or parts of elements) of the performer's motion.
      </p>
    </sec>
    <sec id="sec-5">
      <title>2.3. Neural Networks in Mobile Applications</title>
      <p>
        Mobile application development reaches a new level every year. Applying machine learning
algorithms in this field is a task solved more and more often [
        <xref ref-type="bibr" rid="ref17 ref18">17, 18</xref>
        ]. The
main challenge lies in optimizing computing resources so that
such algorithms can be integrated into a mobile application. An alternative solution is to transfer all
the computation to the server; in this case, smartphone users need a stable internet
connection to use the system's functions [
        <xref ref-type="bibr" rid="ref19 ref20">19, 20</xref>
        ]. Nowadays, many services that use the
possibilities of machine learning are implemented according to the latter scenario [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ].
      </p>
    </sec>
    <sec id="sec-6">
      <title>2.4. Business Processes of a Fitness Trainer</title>
      <p>All the functioning of the fitness trainer must be described as a set of rules and algorithms that
allow creating a set of exercises, monitoring their execution, and analyzing the results. To develop
these algorithms, we need to define the business processes shown in Table 1.</p>
      <p>The presented business rules are connected with the following views:
- entities: Video, Smartphone Camera, Neural Network, Message, Errors when Doing Exercise,
Exercise Execution Result, Execution Assessment, Fitness Exercise, IS User;
- activities: User does the exercise, User films the exercise with the smartphone camera, Neural
network analyzes the execution process from the smartphone camera, Neural network counts the
errors made when doing the certain exercise, User gets the message from the neural network.</p>
      <p>The neural network is an actor behind the scenes that is external to the system under development.
Thus, it isn't being modelled. Only the input and output data of this component are described. They
are given in the form of anthropometric indicators (fig. 1).</p>
      <p>[Table 1. Business process: doing the exercise by the user in front of the camera and getting the
information about the execution technique. The User's task is to turn on the camera and to do the
exercise in front of it in a strictly defined position and from the best angle; the system's task is to
return the errors in execution technique for the exercise filmed on the camera to the User.]</p>
      <p>
        The modelled business processes and defined views allow us to form a model of
entity-relationship dependencies (fig. 2). The created diagram includes 6 entities with 14 attributes and
8 relationships between entities. This is the basis for building a class diagram (fig. 3) that is presented
in UML notation [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ].
      </p>
    </sec>
    <sec id="sec-7">
      <title>2.6. Data Elements Specification</title>
      <p>
        The attributes defined for the database under development and their descriptions are given in
Table 2 [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ].
      </p>
      <p>[Table 2 specifies the attributes of the entities User, Exercise, Error, Videofile, Result, and
Message.]</p>
    </sec>
    <sec id="sec-8">
      <title>Views</title>
    </sec>
    <sec id="sec-9">
      <title>Practical Implementation</title>
      <p>
        For views, let's define the key entities that represent the nature of a fitness trainer [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ]. The first
user data type is the video file format, which is described in Table 3. Another view is the data type
that defines the scale of the error made (Table 4). The exercise list that includes certain fitness
activities is described in Table 5.
      </p>
      <p>Queries for creating the user data types:
create type VideoFileFormatType as enum ('mp4', 'avi', 'mov');
create type ErrorGrossType as enum ('blunder', 'average', 'slight');
create type ExerciseGroupType as enum ('legs', 'arms', 'chest', 'back', 'abs');</p>
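      <p>As a quick illustration (a sketch, not taken from the original text), casting a literal verifies that an
enum type accepts only its listed labels:
select 'mp4'::VideoFileFormatType; -- accepted
select 'mkv'::VideoFileFormatType; -- raises: invalid input value for enum videofileformattype</p>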
      <p>Other views are implemented according to the presented types of corresponding entities (fig. 3).</p>
    </sec>
    <sec id="sec-10">
      <title>Functional and Multivalued Dependencies</title>
      <p>The user has a login and a password connected to it. They are defined by a unique user ID:
userid → login(1), passwordhash (2).</p>
      <p>Given the fitness exercise ID, it is possible to determine its name, exercise group, and the
description of its execution: exerciseid → exercisename(3), exercisegroup(4), exercisedescription(5).</p>
      <p>Given the video file ID, it is possible to determine all of its attributes: videofileid → videofilesize
(6), videofileformat (7), videofileduration (8), videofileuri(9).</p>
      <p>The result ID contains the information about the success of the analysis: resultid →
analysisperformed(10).</p>
      <p>In addition, this result determines the user whose video was analyzed, the video file itself, the
exercise performed on it, and the message that will be sent to the user: resultid → userid(11),
videofileid(12), exerciseid(13), resultmessageid(14).</p>
      <p>Besides, the result ID defines errors detected during the analysis: resultid ↠ errorid(15).</p>
      <p>The ID of the message about the result gives the information about the message date and its text:
resultmessageid → resultmessagedate(16), resultmessagetext(17).</p>
      <p>In turn, the error ID can help to determine all of its attributes: errorid → errorgross (18),
errordescription (19).</p>
    </sec>
    <sec id="sec-11">
      <title>3.3. Normalization</title>
      <p>
        The next step will be the decomposition process for normalizing the relation to the 4th normal
form [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ]. The initial relation is as follows:
      </p>
      <p>R (userid login passwordhash exerciseid exercisename exercisegroup exercisedescription
videofileid videofilesize videofileformat videofileduration videofileuri resultid analysisperformed
resultmessageid resultmessagedate resultmessagetext errorid errorgross errordescription)
Key: resultid.</p>
      <p>Step 1: apply FD 6-9 to R:
R1 (videofileid videofilesize videofileformat videofileduration videofileuri).</p>
      <p>Key: videofileid.</p>
      <p>R2 (userid login passwordhash exerciseid exercisename exercisegroup exercisedescription
videofileid resultid analysisperformed resultmessageid resultmessagedate resultmessagetext errorid
errorgross errordescription).</p>
      <p>Key: resultid.</p>
      <p>Step 2: apply FD 18-19 to R2:
R3 (errorid errorgross errordescription).</p>
      <p>Key: errorid.</p>
      <p>R4 (userid login passwordhash exerciseid exercisename exercisegroup exercisedescription
videofileid resultid analysisperformed resultmessageid resultmessagedate resultmessagetext errorid).</p>
      <p>Key: resultid.</p>
      <p>Step 3: apply FD 16-17 to R4:
R5 (resultmessageid resultmessagedate resultmessagetext).</p>
      <p>Key: resultmessageid.</p>
      <p>R6 (userid login passwordhash exerciseid exercisename exercisegroup exercisedescription
videofileid resultid analysisperformed resultmessageid errorid).</p>
      <p>Key: resultid.</p>
      <p>The normalization of other views is made similarly. The obtained relations in the 4th normal form:
- R1 (videofileid videofilesize videofileformat videofileduration videofileuri)
- R3 (errorid errorgross errordescription)
- R5 (resultmessageid resultmessagedate resultmessagetext)
- R7 (userid login passwordhash)
- R9 (exerciseid exercisename exercisegroup exercisedescription)
- R11 (resultid errorid)
- R12 (resultid analysisperformed userid exerciseid videofileid resultmessageid)</p>
    </sec>
    <sec id="sec-12">
      <title>3.4. Tables and Subject Area Constraints</title>
      <p>The table videofile represents the video file entity and corresponds to the relation R1. It has a
unique identifier for each row. All other attributes cannot be null, and the video file address is unique:
create table videofile (
videofileid serial primary key,
videofilesize integer not null,
videofileformat VideoFileFormatType not null,
videofileduration integer not null,
videofileuri varchar(128) unique not null
);</p>
      <p>The table error represents the error entity and corresponds to the relation R3. It has a unique
identifier for each row. The error description and the error scale attribute cannot be null:
create table error (
errorid serial primary key,
errorgross ErrorGrossType not null,
errordescription text not null
);</p>
      <p>The table resultmessage represents the result message entity and corresponds to the relation R5.
It has a unique identifier for each row. All the other attributes cannot be null. Besides, the result date
cannot be later than today:
create table resultmessage (
resultmessageid serial primary key,
resultmessagedate date not null check (resultmessagedate &lt;= current_date),
resultmessagetext text not null
);</p>
      <p>The table user represents the user entity and corresponds to the relation R7. It has a unique
identifier for each individual user, and the login field cannot be null:
create table "user" (
userid serial primary key,
login varchar(16) unique not null,
passwordHash text
);</p>
      <p>Besides, the password must have at least 8 symbols; if it does, its hash is stored in the table:
create or replace function user_password_hashing_trigger()
returns trigger as
$$
begin
if length(new.passwordhash) &lt; 8 then
raise exception 'Too short password';
end if;
new.passwordHash = crypt(new.passwordHash, gen_salt('md5'));
return new;
end;
$$ language plpgsql;
create trigger user_password_hashing
before insert on "user"
for each row
execute procedure user_password_hashing_trigger();</p>
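      <p>As a usage sketch (with one assumption the text does not spell out: the crypt and gen_salt
functions come from the pgcrypto extension, which must be installed first), inserting a user fires the
trigger and stores a hash instead of the plain-text password:
create extension if not exists pgcrypto;
insert into "user" (login, passwordhash) values ('demo_user', 'S3cretPwd');
select login, passwordhash from "user" where login = 'demo_user';
-- passwordhash now holds an MD5-crypt hash, not 'S3cretPwd'</p>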
      <p>The table result represents the result entity and corresponds to the relation R12. It has not-null
foreign keys to the corresponding tables, except for the result message ID: if the attribute responsible
for the analysis (which also cannot be null) takes the 'false' value, there will be no message for this result:
create table result (
resultid serial primary key,
analysisperformed boolean not null,
userid integer not null references "user"(userid),
exerciseid integer not null references exercise(exerciseid),
videofileid integer not null references videofile(videofileid),
resultmessageid integer references resultmessage(resultmessageid)
);</p>
      <p>After creating the result table, in the table errorsonresult it is necessary to perform the restriction
of the foreign key to the former table:
alter table errorsonresult
add constraint resultid_fk
foreign key(resultid) references result(resultid)
on delete cascade;</p>
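      <p>The errorsonresult table itself is not shown in the text; a minimal sketch consistent with the
relation R11 (the column types here are an assumption) could look like this:
create table errorsonresult (
resultid integer not null,
errorid integer not null references error(errorid),
primary key (resultid, errorid)
);</p>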
      <p>Similarly, we form views, triggers and algorithms for other entities.</p>
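      <p>For example, the exercise table referenced by the result table, corresponding to the relation R9,
could be sketched as follows (the column types are an assumption, by analogy with the other tables):
create table exercise (
exerciseid serial primary key,
exercisename varchar(64) not null,
exercisegroup ExerciseGroupType not null,
exercisedescription text not null
);</p>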
    </sec>
    <sec id="sec-13">
      <title>4. Results</title>
      <p>When implementing the fitness trainer mobile application, we analyze the views
for the business processes presented in Table 1.</p>
      <p>The first execution plan responds to a query that returns the number of gross errors made by users:
explain analyze
select count(*) from error e
where errorgross = 'blunder';
The execution plans without and with indices are illustrated in fig. 4 and 5 respectively.</p>
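      <p>The article does not list the indices it used; one plausible sketch (an assumption) that would let
the planner avoid a full scan of the error table for this query is a B-tree index on the error scale
column:
create index error_errorgross_idx on error (errorgross);</p>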
      <p>An execution plan for a query that returns all the users and the exercise group along with the
number of exercises from this group that were done today, sorted by the number of exercises in
descending order:</p>
      <p>The execution plans without and with indices are illustrated in figures 6 and 7.</p>
      <p>An execution plan for a query that returns all the video files larger than 30 MB, in MP4 format,
and containing at least 3 errors:</p>
      <p>select videofileid, count(e.errorid) from videofile
natural join result
natural join errorsonresult e
where videofilesize &gt; 30000000
and videofileformat = 'mp4'
group by videofileid
having count(errorid) &gt; 2
order by 2 desc;</p>
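      <p>Again as a sketch, under the assumption that the text's "with indices" plan relies on something
similar, a partial index matching this query's filter could serve it:
create index videofile_large_mp4_idx on videofile (videofileid)
where videofilesize &gt; 30000000 and videofileformat = 'mp4';</p>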
      <p>Query plans are developed for working with each view of the system.</p>
    </sec>
    <sec id="sec-14">
      <title>5. Conclusions</title>
      <p>The development of the IoT fitness trainer resulted in designing a mobile application that stores
information about exercise types, parameters and execution features, and the anthropometric parameters
of the user. It allows users to monitor the correctness of their fitness exercises in real time thanks to
using neural networks for image recognition. To develop this solution, the PostgreSQL DBMS was used.
All the SQL scripts were written and executed in a free cross-platform tool named
DBeaver and in the terminal of the DBMS itself (psql shell).</p>
      <p>The product is based on the described business processes related to the fitness trainer IoT solution.
An important concept of the presented solution is supporting people's health under quarantine
restrictions or a lack of time or opportunities to attend fitness classes with a coach. Using the presented
IoT solution ensures accessibility for everyone, and the introduction of image recognition
technology implements a mechanism for checking the correctness of the user's performance,
which eliminates the risk of harm from improper fitness technique.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Keane</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>The Fitness Mindset: Eat for energy, Train for tension, Manage your mindset, Reap the results</article-title>
          . Rethink Press Limited, Norfolk (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Swettenham</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          :
          <source>Total Fitness After</source>
          <volume>40</volume>
          :
          <article-title>The 7 Life Changing Foundations You Need for Strength, Health and Motivation in your 40s, 50s, 60s and Beyond</article-title>
          . Independently published (
          <year>2021</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Logothetis</surname>
            ,
            <given-names>N. K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sheinberg</surname>
            ,
            <given-names>D. L.</given-names>
          </string-name>
          :
          <article-title>Visual Object Recognition</article-title>
          .
          <source>Annu. Rev. Neurosci</source>
          .
          <volume>19</volume>
          ,
          <fpage>577</fpage>
          -
          <lpage>621</lpage>
          (
          <year>1996</year>
          ). https://doi.org/10.1146/annurev.ne.19.030196.003045
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Gill</surname>
            ,
            <given-names>N. S.</given-names>
          </string-name>
          :
          <source>Artificial Neural Networks Applications and Algorithms</source>
          (
          <year>2021</year>
          ). https://www.xenonstack.com/blog/artificial-neural-network-applications
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Kriesel</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <source>A Brief Introduction to Neural Networks</source>
          (
          <year>2007</year>
          ). http://www.dkriesel.com/en/science/neural_networks
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <source>Neural Network</source>
          (
          <year>2020</year>
          ). https://www.investopedia.com/terms/n/neuralnetwork.asp
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Forsyth</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ponce</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Computer Vision: A Modern Approach</article-title>
          . Pearson Education, London (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Osowski</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Sieci neuronowe do przetwarzania informacji [Neural Networks for Information Processing]</article-title>
          . Oficyna Wydawnicza PW, Warsaw (
          <year>2000</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Haykin</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <source>Neural Networks: A Comprehensive Foundation</source>
          . Prentice Hall, Hoboken, NJ (
          <year>1998</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>James</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Witten</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hastie</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tibshirani</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <source>An Introduction to Statistical Learning (with Applications in R)</source>
          . Springer, New York, NY (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Saha</surname>
            ,
            <given-names>S.:</given-names>
          </string-name>
          <article-title>A Comprehensive Guide to Convolutional Neural Networks - the ELI5 way</article-title>
          .
          <source>Towards Data Science</source>
          (
          <year>2018</year>
          ). https://towardsdatascience.com/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way-3bd2b1164a53
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Shankar</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Robertson</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ioannou</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Criminisi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cipolla</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Refining Architectures of Deep Convolutional Neural Networks</article-title>
          .
          In:
          <source>Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</source>
          , Las Vegas, NV, pp.
          <fpage>2212</fpage>
          -
          <lpage>2220</lpage>
          (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Shapiro</surname>
            ,
            <given-names>L. G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stockman</surname>
            ,
            <given-names>G. C.</given-names>
          </string-name>
          :
          <source>Computer Vision</source>
          . Pearson, London (
          <year>2001</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Durnin</surname>
            ,
            <given-names>J. V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Womersley</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Body fat assessed from total body density and its estimation from skinfold thickness: measurements on 481 men and women aged from 16 to 72 Years</article-title>
          .
          <source>Br. J. Nutr.</source>
          <volume>32</volume>
          (
          <issue>1</issue>
          ),
          <fpage>77</fpage>
          -
          <lpage>97</lpage>
          (
          <year>1974</year>
          ). https://doi.org/10.1079/bjn19740060
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Bland</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <source>An Introduction to Medical Statistics</source>
          . Oxford University Press, Oxford (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Fedotov</surname>
            ,
            <given-names>N. G.</given-names>
          </string-name>
          :
          <article-title>Theory of Signs of Recognition of Patterns on the Basis of Stochastic Geometry and Functional Analysis</article-title>
          . Fizmatlit, Moscow (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Horton</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Android Programming with Kotlin for Beginners</article-title>
          . Packt Publishing, Birmingham
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Benedetto</surname>
            ,
            <given-names>J. I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sanabria</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Neyem</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Navon</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Poellabauer</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xia</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Deep Neural Networks on Mobile Healthcare Applications: Practical Recommendations</article-title>
          .
          In:
          <source>Proceedings of the 12th International Conference on Ubiquitous Computing and Ambient Intelligence (UCAmI 2018)</source>
          ,
          <volume>2</volume>
          (
          <issue>19</issue>
          ),
          <volume>550</volume>
          (
          <year>2018</year>
          ). https://doi.org/10.3390/proceedings2190550
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <source>Cisco Annual Internet Report</source>
          (
          <year>2020</year>
          ). https://www.cisco.com/c/en/us/solutions/executive-perspectives/annual-internet-report/index.html
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <surname>Angelova</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          :
          <article-title>Mobile Applications for Business</article-title>
          .
          <source>Trakia J. Sci.</source>
          ,
          <volume>17</volume>
          (
          <issue>Suppl</issue>
          .1),
          <fpage>853</fpage>
          -
          <lpage>859</lpage>
          (
          <year>2019</year>
          ). http://dx.doi.org/10.15547/tjs.2019.s.01.140
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <surname>Arovina</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Prospects of Application of Mobile Apps in the Regional Information Market of Ukraine</article-title>
          .
          <source>Skhid</source>
          ,
          <volume>3</volume>
          ,
          <fpage>5</fpage>
          -
          <lpage>10</lpage>
          (
          <year>2016</year>
          ). http://nbuv.gov.ua/UJRN/Skhid_2016_3_2
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <surname>Lopes</surname>
            ,
            <given-names>A. G.</given-names>
          </string-name>
          :
          <article-title>Business Process Modeling with UML</article-title>
          .
          In:
          <source>ICEIS (2)</source>
          , pp.
          <fpage>679</fpage>
          -
          <lpage>685</lpage>
          (
          <year>2001</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Özsu</surname>
            ,
            <given-names>M. T.</given-names>
          </string-name>
          (Eds.):
          <source>Encyclopedia of Database Systems</source>
          . Springer, New York (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <surname>Blockeel</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Calders</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fromont</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Goethals</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Mining Views: Database Views for Data Mining</article-title>
          .
          In:
          <source>Proceedings of the 24th IEEE International Conference on Data Engineering (ICDE'08)</source>
          , Cancún, Mexico, April 7-12, 2008, pp.
          <fpage>1608</fpage>
          -
          <lpage>1611</lpage>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <surname>Garcia-Molina</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ullman</surname>
            ,
            <given-names>J. D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Widom</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <source>Database Systems: The Complete Book</source>
          . Pearson, London (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>