<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Study of the Profitability of the Enterprise based on the Method of Machine Learning without a Teacher</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mariia Nazarkevych</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Hanna Nazarkevych</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Roman Moravskyi</string-name>
          <email>roman.moravskyi.mnpzm2021@lpnu.ua</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Maryna Kostiak</string-name>
          <email>kostiak.maryna@lpnu.ua</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oleksii Shevchuk</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Lviv Ivan Franko National University</institution>
          ,
          <addr-line>1 Universytetska str., Lviv, 79000</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Lviv Polytechnic National University</institution>
          ,
          <addr-line>12 Stepan Bandera str., Lviv, 79013</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Ukrainian Academy of Printing</institution>
          ,
          <addr-line>19 Pid Goloskom str., Lviv, 79020</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <fpage>44</fpage>
      <lpage>54</lpage>
      <abstract>
        <p>Machine learning methods used for predicting solutions in the adaptive management of an enterprise's goods sales have been analyzed. An analysis of input data obtained during the operation of one enterprise was conducted. Using these input data, machine-learning models were trained and simulated with the K-Nearest Neighbors and Support Vector Machines classifiers. The most significant factors influencing the client's decision to purchase goods were identified. The study proposes a business-process scenario for increasing the company's profit based on machine-learning technology. The performance of the proposed methods was verified on a test sample of data.</p>
      </abstract>
      <kwd-group>
        <kwd>Unsupervised learning methods</kwd>
        <kwd>adaptive management</kwd>
        <kwd>enterprise</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        In an era where organizations increasingly
rely on data to run their business, the ability of
different companies to manage and analyze
data will have a significant impact on their
performance. In this article, we develop a
model to measure the current level of maturity
of an enterprise's data and analytics systems to
optimize them to maximize the potential of
their data assets. Based on the three critical
components of people, technology and process,
we take the requirements of the enterprise as a
starting point. After a comprehensive review of
the current situation in the industry and the
needs of the enterprise, a methodology for
determining profitability was built [
        <xref ref-type="bibr" rid="ref2">1</xref>
        ].
      </p>
      <p>
        Profitability [
        <xref ref-type="bibr" rid="ref3">2</xref>
        ] is the ratio of profit to costs, expressed as a percentage. Profitability is a relative indicator, and it is necessary for analyzing the economic activity of any enterprise [
        <xref ref-type="bibr" rid="ref4 ref5 ref6">3–5</xref>
        ].
      </p>
      <p>Let’s consider the main indicators of the company’s profitability:</p>
      <p>
        1. Profitability of invested funds [
        <xref ref-type="bibr" rid="ref7">6</xref>
        ], expressed through the general level of profitability of the enterprise, which equals the ratio of the enterprise’s gross profit to the total production cost.
      </p>
      <p>
        2. Profitability of production assets [
        <xref ref-type="bibr" rid="ref8">7</xref>
        ], which equals the ratio of the enterprise’s gross profit to the value of its production assets.
      </p>
      <p>
        3. Return on total assets [
        <xref ref-type="bibr" rid="ref9">8</xref>
        ], the ratio of the company’s gross profit to the average amount of assets on the company’s balance sheet.
      </p>
      <p>
        4. Return on equity [
        <xref ref-type="bibr" rid="ref10">9</xref>
        ], the ratio of the company’s net profit to the amount of equity.
      </p>
      <p>
        5. Profitability of production [
        <xref ref-type="bibr" rid="ref11">10</xref>
        ], the ratio of the total cost of goods sold to the volume of sales.
      </p>
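      <p>The five indicators above are simple ratios. A minimal sketch in Python (with hypothetical figures, since the paper reports no concrete balance-sheet numbers) shows how they can be computed:</p>

```python
# Helpers for the profitability indicators listed above.
# All input figures below are hypothetical, for illustration only.

def overall_profitability(gross_profit, total_production_cost):
    """Indicator 1: gross profit relative to total production cost, in %."""
    return gross_profit / total_production_cost * 100

def return_on_total_assets(gross_profit, average_assets):
    """Indicator 3: gross profit relative to average balance-sheet assets, in %."""
    return gross_profit / average_assets * 100

def return_on_equity(net_profit, equity):
    """Indicator 4: net profit relative to equity, in %."""
    return net_profit / equity * 100

def profitability_of_production(cost_of_goods_sold, sales_volume):
    """Indicator 5: total cost of goods sold relative to sales volume, in %."""
    return cost_of_goods_sold / sales_volume * 100

print(overall_profitability(250_000, 1_000_000))  # 25.0
print(return_on_equity(120_000, 600_000))         # 20.0
```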
      <p>
        One of the fundamental prerequisites in the
field of artificial intelligence is the assumption
of the possibility of creating machines that are
capable of performing tasks that usually require human intelligence [
        <xref ref-type="bibr" rid="ref12">11</xref>
        ].
      </p>
      <p>
        The artificial neural network is designed to
simulate learning processes in the human brain.
Artificial neural networks are designed in such
a way that they can recognize basic patterns
(regularities, stable relationships hidden in
data) and learn from them [
        <xref ref-type="bibr" rid="ref13">12</xref>
        ]. They can be
used to solve classification, regression, and data
segmentation problems [
        <xref ref-type="bibr" rid="ref14">13</xref>
        ]. Before data are fed to a neural network, they must be converted into numerical form; this applies to data of various natures, including visual and textual data, time series, etc. Decisions must be made about how tasks are presented so that they are understandable to neural networks.
      </p>
      <p>The practical implementation of material objects involves solving the corresponding synthesis problems. At the same time, the mathematical model of the object should not only adequately describe the physical processes that ensure the required initial characteristics of this object, but also allow the optimization process itself to be implemented. The software currently used to solve similar problems is characterized either by narrow specialization or by a universal calculation engine that requires an unacceptably long calculation time for complex objects. One of the options for implementing fast and flexible methods for solving synthesis problems is the use of the approximation properties of artificial neural networks. Most of the research related to artificial neural networks is focused on combinatorial optimization and forecasting problems; therefore, research on the application of neural networks is relevant.</p>
    </sec>
    <sec id="sec-3">
      <title>2. Stages of Developing a Machine Learning Model</title>
      <p>
        The process of developing a machine learning model [
        <xref ref-type="bibr" rid="ref15">14</xref>
        ] consists of the following stages:
      </p>
      <sec id="sec-3-1">
        <title>Process of data preparation and presentation.</title>
      </sec>
      <sec id="sec-3-2">
        <title>Algorithm design process.</title>
      </sec>
      <sec id="sec-3-3">
        <title>Process of training the algorithm on the available data.</title>
      </sec>
      <sec id="sec-3-4">
        <title>Algorithm validation process on test data.</title>
        <p>
          A training sample [
          <xref ref-type="bibr" rid="ref16">15</xref>
          ] is a set of examples
that we show the system so that, based on them,
it uncovers a certain hidden regularity that is
responsible for the distribution of data in the
training sample. Thanks to the discovery of
such a regularity, the system will be able to use
it to effectively predict answers on the test
sample.
        </p>
        <p>
          A neural network [
          <xref ref-type="bibr" rid="ref17">16</xref>
          ] is a complex function with a huge number of parameters. Each of these parameters (called weights) is adapted so that the function approximates the distribution of the data in the training sample. Very often, the number of these parameters is much larger than the size of the data description itself.
        </p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>3. Construction of Neural Networks</title>
    </sec>
    <sec id="sec-5">
      <title>3.1. Mathematical Model</title>
      <p>The perceptron is a mathematical model proposed by F. Rosenblatt, described by a transformation x → y computed by the formula v = Σ w_i x_i, where i = 1, …, n; w_i is a weight of the perceptron and x_i is the value of an input signal. After this sum is obtained, the activation function f is applied to the value v. The resulting value is compared with the trigger threshold θ and a decision is made.</p>
      <p>Perceptron learning consists of finding the weight coefficients. Let there be a set of vector pairs (x_k, y_k), k = 1, …, N, called a training sample. We will call a neural network trained on a given training dataset if, when each vector x_k is applied to the inputs of the network, we obtain the corresponding vector y_k at the outputs each time. Recognition and processing of text, human speech, music, images, 3D objects, and tabular data, object detection in photos and videos, and even the creation of texts, images, and videos—all this and much more is included in the practical application of neural networks.</p>
      <p>
        The
learning
method
proposed
by
Rosenblatt [
        <xref ref-type="bibr" rid="ref18">17</xref>
        ] consists of iteratively adjusting the weight matrix, successively reducing the error in the output vectors. The algorithm has several stages:
      </p>
      <p>Step 1. The initial values of all neuron weights w(t = 0) are set randomly.</p>
      <p>Step 2. An input image x_k is presented to the network; the result is an output image ỹ_k ≠ y_k.</p>
      <p>Step 3. The error vector e_k = (y_k − ỹ_k) produced by the network at the output is calculated. The change in the vector of weighting coefficients in the region of small errors must be proportional to the error at the output and equal to zero if the error is zero.</p>
      <p>Step 4. The vector of weights is modified by the formula w(t + Δt) = w(t) + η · x_k · e_k, where 0 &lt; η &lt; 1 is the learning rate.</p>
      <p>Step 5. Steps 2–4 are repeated for all training vectors. One cycle of sequentially presenting the entire sample is called an epoch. Training ends after several epochs: (a) when the iterations converge, that is, the weight vector stops changing, or (b) when the total absolute error over all vectors becomes smaller than some small value.</p>
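      <p>The training steps above can be sketched for a single perceptron. This is an illustrative NumPy implementation on a toy linearly separable problem (logical AND), not the paper's code:</p>

```python
import numpy as np

# Rosenblatt training for one perceptron on logical AND (linearly separable).
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0, 0, 0, 1])

w = rng.normal(size=2)                 # step 1: random initial weights
b = 0.0
eta = 0.1                              # learning rate, between 0 and 1

for epoch in range(100):               # one epoch = one pass over the sample
    total_error = 0
    for x_k, y_k in zip(X, y):
        y_hat = int(w @ x_k + b > 0)   # step 2: compute the output
        e_k = y_k - y_hat              # step 3: error at the output
        w += eta * e_k * x_k           # step 4: update proportional to the error
        b += eta * e_k
        total_error += abs(e_k)
    if total_error == 0:               # step 5: stop once the error vanishes
        break

print([int(w @ x + b > 0) for x in X])  # [0, 0, 0, 1]
```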
      <p>Deep learning is based on complex systems
consisting of a huge number of neurons. One
neural network can have billions of such
structural units as neurons or perceptrons.
Accordingly, there are many ways to structure
them. And depending on how they will be
connected, different architectures of neural
networks can be distinguished.</p>
    </sec>
    <sec id="sec-7">
      <title>3.2. Optimization of Neural Network Parameters</title>
      <p>One of the simplest gradient descent optimization algorithms looks like this: w(t+1) = w(t) − η∇f(w(t)).</p>
      <p>
        We also argue that the Nesterov algorithm is the optimal algorithm for finding the minimum [
        <xref ref-type="bibr" rid="ref19">18</xref>
        ]: v(t+1) = μ · v(t) + η · ∇f(w(t) − μ · v(t)), w(t+1) = w(t) − v(t+1).
      </p>
      <p>A kind of look-ahead operation is used here to find the gradient; therefore, the method is sometimes called the accelerated Nesterov gradient. However, despite the fact that the proven upper estimate of the convergence time for this algorithm is minimal, it has not found wide practical application. The point is that the upper estimate does not always coincide with the average one, and in the case of this algorithm the average convergence time estimate is close to the upper estimate. Therefore, the algorithm mostly works slowly.</p>
    </sec>
    <sec id="sec-8">
      <title>3.3. Creation of a Neural Network</title>
      <p>Human learning continues hierarchically. In
the neural network of our brain, this process is
carried out in several stages, each of which is
characterized by its own degree of learning. At
some stages, simple things are taught, at
others—more complex.</p>
      <p>In order to simulate the human learning
process, layers of neurons are used in the
construction of artificial neural networks. The
idea of these neurons is suggested by biological
processes. Each layer of an artificial neural
network is a set of independent neurons, each
neuron of a certain layer is connected to the
neurons of an adjacent layer.</p>
    </sec>
    <sec id="sec-9">
      <title>3.4. Neural Network Training</title>
      <p>If we are dealing with n-dimensional input
data, then the input layer will consist of n
neurons. If m different classes are distinguished
among our training data, then the
output layer will consist of m neurons. The
layers nested between the input and output
layers are called hidden. A simple neural
network consists of a pair of layers, and a deep
neural network consists of many layers.</p>
      <p>Consider the case when we want to use a
neural network for data classification. The first
step is to collect relevant training data and label
it. Each neuron acts as a simple function, and
the neural network is trained until the error is
less than a certain set value. The difference
between predicted and actual outputs is mostly
used as an error. Based on how big the error is,
the neural network corrects itself and retrains
until it gets close to the solution.</p>
    </sec>
    <sec id="sec-10">
      <title>3.5. Creating a Classifier based on a Perceptron</title>
      <p>A perceptron is a single neuron that receives
input data, performs calculations on it and
outputs an output signal. The perceptron uses a
simple function to make decisions. Suppose we
are dealing with an N-dimensional input data
point. The perceptron calculates a weighted sum of the N numbers and then adds a constant to obtain the final result. This constant is called the bias of the neuron. It is interesting to note that such simple perceptrons are used to design very complex deep neural networks.</p>
    </sec>
    <sec id="sec-10a">
      <title>3.6. Construction of a Single-Layer Neural Network</title>
      <p>The capabilities of a single perceptron are limited. To achieve a goal, a set of neurons must act together as one. Let’s
create a single-layer neural network consisting
of independent neurons that influence the input
data to obtain the output result.</p>
    </sec>
    <sec id="sec-12">
      <title>3.7. Building a Multilayer Neural Network</title>
      <p>To obtain higher accuracy, we must give
more freedom to the neural network. This
means that the neural network must have more
than one layer to capture the underlying
patterns that exist among the test data. Let’s
create a multilayer neural network that will
ensure this.</p>
      <p>
A neural network can be used as a classifier or as a regressor
[
        <xref ref-type="bibr" rid="ref20 ref21">19, 20</xref>
        ].
      </p>
      <p>Let’s define a multilayer neural network
with two hidden layers. You can design a neural
network in any other way. In this case, we will
have 10 neurons in the first layer and six
neurons in the second layer. Our task is to
predict a single value, so the output layer will
contain only one neuron.</p>
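      <p>One way to define such a network with two hidden layers of 10 and 6 neurons is sketched below with scikit-learn's MLPClassifier; the library choice and the synthetic data are assumptions, since the paper's own listing is not reproduced here:</p>

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Two hidden layers with 10 and 6 neurons and a single (binary) output.
# Synthetic data with 8 features stand in for the paper's dataset.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 8))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # synthetic "profitable" label

model = MLPClassifier(hidden_layer_sizes=(10, 6), max_iter=2000, random_state=1)
model.fit(X, y)
print(model.score(X, y))  # training accuracy
```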
      <p>We will use this model for a forecasting task: predicting the probability that the enterprise will be profitable and will generate profits in the future.</p>
      <p>The dataset used in this study is taken from the file ML_Manufacture_Prifit_Companies.csv. This dataset contains several independent predictors and one target: whether the enterprise is profitable. Its features are as follows: rating, change in rating, income, management assets, market value, percentage change in income, percentage change in profit, and number of employees.</p>
      <p>The dataset contains 1,001 records, and all of the businesses in it are successful and profitable.</p>
    </sec>
    <sec id="sec-13">
      <title>5. Downloading Data</title>
      <p>
        For this example, the dataset was
downloaded locally and named
ML_Manufacture_Prifit_Companies.csv
(Fig. 1). We observe the columns with the
following data (Fig. 2). The first task for
estimating a company’s profit is to clean the
data so that there are no missing or erroneous
values (Fig. 3 and 4) [
        <xref ref-type="bibr" rid="ref22 ref23 ref24">21–23</xref>
        ].
      </p>
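      <p>A minimal cleaning sketch follows. A tiny hypothetical frame stands in for the downloaded ML_Manufacture_Prifit_Companies.csv so the steps are runnable:</p>

```python
import pandas as pd

# In the paper the data come from the locally downloaded file:
# df = pd.read_csv("ML_Manufacture_Prifit_Companies.csv")
# A small hypothetical frame stands in for it here.
df = pd.DataFrame({
    "rank": [1, 2, 3, 3],
    "revenue": [5_000, None, 1_200, 1_200],       # one missing value
    "market_value": [9_000, 4_500, 2_000, 2_000],
})

print(df.isna().sum())        # locate missing values per column
df = df.dropna()              # remove rows with missing values
df = df.drop_duplicates()     # remove duplicated records
print(len(df))                # 2 rows survive
```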
    </sec>
    <sec id="sec-15">
      <title>6. Study of the Relationship Between Features</title>
      <p>The next step is to study how different
independent characteristics affect the result
(which characteristics affect the profitability of
the enterprise).</p>
      <p>The corr() function calculates the pairwise correlation of the columns. For example, the following result shows that the level of management assets has little relationship with the market value of the enterprise (0.127) but a significant relationship with the profitability of the firm (0.49).</p>
      <p>We find out which features significantly affect the result.</p>
    </sec>
    <sec id="sec-17">
      <title>7. Construction of a Graph of the Relationship between Features</title>
      <p>The following code snippet uses the
matshow() function to plot the results returned
by the corr() function as a matrix. At the same
time, various correlation coefficients are also
displayed in the matrix:</p>
      <p>Fig. 5 shows the matrix. Cubes with colors
closest to black represent the highest
correlation coefficients, and those closest to
blue represent the lowest correlation
coefficients.</p>
      <p>Another way to construct a correlation matrix is to use Seaborn’s heatmap() function; Fig. 6 shows a heat map created using the Seaborn library.</p>
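      <p>The Seaborn variant can be sketched in one call; again the data frame is a synthetic stand-in:</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, suitable for scripts
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(100, 4)), columns=list("ABCD"))

# heatmap() draws the correlation matrix and annotates the cells in one call.
sns.heatmap(df.corr(), annot=True, cmap="coolwarm", vmin=-1, vmax=1)
plt.savefig("corr_heatmap.png")
print(df.corr().shape)  # (4, 4)
```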
      <p>Let’s single out four signs at enterprises that
have the highest correlation with the result
(Fig. 7–9).</p>
    </sec>
    <sec id="sec-18">
      <title>8. Evaluation of Algorithms</title>
    </sec>
    <sec id="sec-19">
      <title>8.1. Logistic Regression</title>
      <p>We will evaluate several algorithms to find the one that provides the best performance. We will use the following methods: logistic regression; K-Nearest Neighbors (KNN); and Support Vector Machines (SVM) with linear and RBF kernels.</p>
      <p>For the first method, we will use logistic regression. Instead of splitting the dataset into training and testing sets, we will use 10-fold cross-validation to obtain the average score of the algorithm.</p>
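      <p>This evaluation step can be sketched with scikit-learn's cross_val_score; the data are synthetic stand-ins for the enterprise dataset:</p>

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic binary-classification data stand in for the paper's dataset.
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))
y = (X[:, 0] - X[:, 3] > 0).astype(int)

# 10-fold cross-validation instead of a single train/test split.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=10)
print(scores.mean())  # average accuracy over the 10 folds
```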
    </sec>
    <sec id="sec-20">
      <title>8.2. K-Nearest Neighbours</title>
      <p>The next method we will use is K-Nearest Neighbors (KNN). In addition to using 10-fold cross-validation to average the algorithm’s score, different values of k should be tried to find the optimal one, so that the best accuracy can be obtained.</p>
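      <p>Trying several values of k and keeping the best mean cross-validation accuracy can be sketched as follows (synthetic data, hypothetical search range):</p>

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))
y = (X[:, 0] - X[:, 3] > 0).astype(int)

best_k, best_score = None, -1.0
for k in range(1, 16, 2):  # odd values of k avoid voting ties
    score = cross_val_score(KNeighborsClassifier(n_neighbors=k),
                            X, y, cv=10).mean()
    if score > best_score:
        best_k, best_score = k, score
print(best_k, best_score)  # the k with the best mean 10-fold accuracy
```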
    </sec>
    <sec id="sec-21">
      <title>8.3. Support Vector Machines</title>
      <p>Another method we will use is Support
Vector Machine (SVM). We will use two types
of kernels for SVM: linear and RBF.</p>
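      <p>Evaluating both kernels with the same 10-fold cross-validation can be sketched as follows (synthetic stand-in data):</p>

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))
y = (X[:, 0] - X[:, 3] > 0).astype(int)

results = {}
for kernel in ("linear", "rbf"):
    results[kernel] = cross_val_score(SVC(kernel=kernel), X, y, cv=10).mean()
    print(kernel, results[kernel])  # mean accuracy per kernel
```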
    </sec>
    <sec id="sec-22">
      <title>9. Learning and Saving the Model</title>
      <p>Since the most efficient algorithm for our dataset is KNN with k = 3, we can now train the model on the entire dataset. After training the model, we save it to a file so that it can be retrieved later for prediction.</p>
      <p>After loading the model, let’s make some
predictions:</p>
      <p>The output prints “Profitable Enterprise” if the prediction returns 1; otherwise, “Unprofitable enterprise” is printed. The probability of the forecast is also obtained so that it can be expressed as a percentage (Fig. 10–12).</p>
      <p>The printed probabilities show the probability of outcome 0 and the probability of outcome 1. The final prediction is the class with the highest probability, and we convert that probability into a confidence percentage.</p>
    </sec>
    <sec id="sec-23">
      <title>10. Conclusions</title>
      <p>In this study, a dataset of the top 1,000 profitable companies was selected and unsupervised machine learning was implemented. Companies were divided into three clusters based on revenue, earnings, current assets, market value, and number of employees. Clustering was implemented with the KMeans algorithm. The inertia metric was used to determine the optimal number of clusters. Inertia shows the sum of the distances from the points to each center of mass. If the total distance is large, the points are far apart and may be less similar to each other. In this case, one can continue to evaluate larger values of K to see whether the overall distance can be reduced. However, it is not always wise simply to reduce the distance. Using the elbow (bend) method, we can choose 2 or 3 as the number of clusters (the first or second obvious bend).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref2">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>T.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , et al.,
          <source>Enterprise Digital Transformation and Production Efficiency: Mechanism Analysis and Empirical Research. Economic ResearchEkonomska Istraživanja</source>
          , vol.
          <volume>35</volume>
          , no.
          <issue>1</issue>
          ,
          <issue>2022</issue>
          , pp.
          <fpage>2781</fpage>
          -
          <lpage>2792</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S.</given-names>
            <surname>Jahan</surname>
          </string-name>
          , et al.,
          <article-title>Modeling ProfitabilityInfluencing Risk Factors for Construction Projects: A System Dynamics Approach</article-title>
          , Buildings, vol.
          <volume>12</volume>
          , no.
          <issue>6</issue>
          ,
          <issue>2022</issue>
          , pp.
          <fpage>701</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>H.</given-names>
            <surname>Hulak</surname>
          </string-name>
          , et al.
          <article-title>Formation of Requirements for the Electronic RecordBook in Guaranteed Information Systems of Distance Learning</article-title>
          ,
          <source>in Workshop on Cybersecurity Providing in Information and Telecommunication Systems, CPITS 2021</source>
          , vol.
          <volume>2923</volume>
          ,
          <year>2021</year>
          , pp.
          <fpage>137</fpage>
          -
          <lpage>142</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>V.</given-names>
            <surname>Buriachok</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Sokolov</surname>
          </string-name>
          ,
          <article-title>Implementation of Active Learning in the Master's Program on Cybersecurity</article-title>
          , in Advances in Computer Science for Engineering and Education II,
          <year>2019</year>
          , vol.
          <volume>938</volume>
          ,
          <year>2020</year>
          , pp.
          <fpage>610</fpage>
          -
          <lpage>624</lpage>
          . doi: 10.1007/978-3-030-16621-2_57.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Z. B.</given-names>
            <surname>Hu</surname>
          </string-name>
          , et al.,
          <source>Authentication System by Human Brainwaves Using Machine Learning and Artificial Intelligence</source>
          , in Advances in Computer Science for Engineering and Education IV,
          <year>2021</year>
          , pp.
          <fpage>374</fpage>
          -
          <lpage>388</lpage>
          . doi: 10.1007/978-3-030-80472-5_31.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A. I.</given-names>
            <surname>Kato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. P. E.</given-names>
            <surname>Germinah</surname>
          </string-name>
          ,
          <article-title>Empirical Examination of Relationship between Venture Capital Financing and Profitability of Portfolio Companies in Uganda</article-title>
          ,
          <source>Journal of Innovation and Entrepreneurship</source>
          , vol.
          <volume>11</volume>
          , no.
          <issue>1</issue>
          ,
          <issue>2022</issue>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>18</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>E.</given-names>
            <surname>Filatov</surname>
          </string-name>
          ,
          <article-title>Analysis of Profitability of Production of Enterprises in the Field of Transportation and Storage of the Irkutsk Region</article-title>
          .
          <source>Transportation Research Procedia</source>
          , vol.
          <volume>63</volume>
          ,
          <year>2022</year>
          , pp.
          <fpage>518</fpage>
          -
          <lpage>524</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>H.</given-names>
            <surname>Habibniya</surname>
          </string-name>
          , et al.,
          <article-title>Impact of Capital Structure on Profitability: Panel Data Evidence of the Telecom Industry in the United States, Risks</article-title>
          , vol.
          <volume>10</volume>
          , no.
          <issue>8</issue>
          ,
          <issue>2022</issue>
          , pp.
          <fpage>157</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>V. V.</given-names>
            <surname>Stipic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Ruzic</surname>
          </string-name>
          ,
          <source>Panel VAR Analysis of the Interdependence of Capital Structure and Profitability, Economic and Social Development: Book of Proceedings</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>177</fpage>
          -
          <lpage>187</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>E. A.</given-names>
            <surname>Virtanen</surname>
          </string-name>
          , et al.,
          <source>Balancing Profitability of Energy Production</source>
          ,
          <article-title>Societal Impacts and Biodiversity in Offshore Wind Farm Design, Renewable and Sustainable Energy Reviews</article-title>
          , vol.
          <volume>158</volume>
          ,
          <year>2022</year>
          , pp.
          <fpage>112087</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>X.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Geyer</surname>
          </string-name>
          ,
          <article-title>Machine Assistance in Energy-Efficient Building Design: A Predictive Framework Toward Dynamic Interaction with Human Decision-Making under Uncertainty, Applied Energy</article-title>
          , vol.
          <volume>307</volume>
          ,
          <year>2022</year>
          , pp.
          <fpage>118240</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>G. V. S.</given-names>
            <surname>Bhagya Raj</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. K.</given-names>
            <surname>Dash</surname>
          </string-name>
          ,
          <source>Comprehensive Study on Applications of Artificial Neural Network in Food Process Modeling, Critical Reviews in Food Science and Nutrition</source>
          , vol.
          <volume>62</volume>
          , no.
          <issue>10</issue>
          ,
          <year>2022</year>
          , pp.
          <fpage>2756</fpage>
          -
          <lpage>2783</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>F.</given-names>
            <surname>Moosavi</surname>
          </string-name>
          , et al.,
          <article-title>Application of Machine Learning Tools for Long-Term Diagnostic Feature Data Segmentation</article-title>
          ,
          <source>Applied Sciences</source>
          , vol.
          <volume>12</volume>
          , no.
          <issue>13</issue>
          ,
          <year>2022</year>
          , pp.
          <fpage>6766</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>J.</given-names>
            <surname>Siebert</surname>
          </string-name>
          , et al.,
          <article-title>Construction of a Quality Model for Machine Learning Systems</article-title>
          ,
          <source>Software Quality Journal</source>
          , vol.
          <volume>30</volume>
          , no.
          <issue>2</issue>
          ,
          <year>2022</year>
          , pp.
          <fpage>307</fpage>
          -
          <lpage>335</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>C.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , et al.,
          <article-title>Mapping Irrigated Croplands in China using a Synergetic Training Sample Generating Method, Machine Learning Classifier, and Google Earth Engine</article-title>
          ,
          <source>International Journal of Applied Earth Observation and Geoinformation</source>
          , vol.
          <volume>112</volume>
          ,
          <year>2022</year>
          , pp.
          <fpage>102888</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>O. I.</given-names>
            <surname>Abiodun</surname>
          </string-name>
          , et al.,
          <article-title>State-of-the-Art in Artificial Neural Network Applications: A Survey</article-title>
          ,
          <source>Heliyon</source>
          , vol.
          <volume>4</volume>
          , no.
          <issue>11</issue>
          ,
          <year>2018</year>
          , pp.
          <fpage>e00938</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>E.</given-names>
            <surname>Kussul</surname>
          </string-name>
          , et al.,
          <article-title>Rosenblatt Perceptrons for Handwritten Digit Recognition</article-title>
          , in
          <source>International Joint Conference on Neural Networks</source>
          , vol.
          <volume>2</volume>
          ,
          <year>2001</year>
          , pp.
          <fpage>1516</fpage>
          -
          <lpage>1520</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Lü</surname>
          </string-name>
          , et al.,
          <article-title>A Nesterov-Like Gradient Tracking Algorithm for Distributed Optimization over Directed Networks</article-title>
          ,
          <source>IEEE Transactions on Systems, Man, and Cybernetics: Systems</source>
          , vol.
          <volume>51</volume>
          , no.
          <issue>10</issue>
          ,
          <year>2020</year>
          , pp.
          <fpage>6258</fpage>
          -
          <lpage>6270</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>M.</given-names>
            <surname>Medykovskyy</surname>
          </string-name>
          , et al.,
          <article-title>Methods of Protection Document Formed from Latent Element Located by Fractals</article-title>
          , in
          <source>X International Scientific and Technical Conference “Computer Sciences and Information Technologies”</source>
          ,
          <year>2015</year>
          , pp.
          <fpage>70</fpage>
          -
          <lpage>72</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>M.</given-names>
            <surname>Nazarkevych</surname>
          </string-name>
          , et al.,
          <article-title>Application Perfected Wave Tracing Algorithm</article-title>
          , in
          <source>IEEE First Ukraine Conference on Electrical and Computer Engineering (UKRCON)</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>1011</fpage>
          -
          <lpage>1014</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>M.</given-names>
            <surname>Nazarkevych</surname>
          </string-name>
          ,
          <article-title>The Ateb-Gabor Filter for Fingerprinting</article-title>
          , in
          <source>International Conference on Computer Science and Information Technology</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>247</fpage>
          -
          <lpage>255</lpage>
          . doi: 10.1007/978-3-030-33695-0_18.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>M.</given-names>
            <surname>Logoyda</surname>
          </string-name>
          , et al.,
          <article-title>Identification of Biometric Images using Latent Elements</article-title>
          ,
          <source>CEUR Workshop Proceedings</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>M.</given-names>
            <surname>Nazarkevych</surname>
          </string-name>
          , et al.,
          <article-title>The Ateb-Gabor Filter for Fingerprinting</article-title>
          , in
          <source>Conference on Computer Science and Information Technologies</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>247</fpage>
          -
          <lpage>255</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>