<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Methods for Predicting Failures in a Smart Home</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Viktoriia Zhebka</string-name>
          <email>viktoria_zhebka@ukr.net</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pavlo Skladannyi</string-name>
          <email>p.skladannyi@kubg.edu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yurii Bazak</string-name>
          <email>jura.bazak@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrii Bondarchuk</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kamila Storchak</string-name>
          <email>kpstorchak@ukr.net</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Borys Grinchenko Kyiv University</institution>
          ,
          <addr-line>18/2 Bulvarno-Kudriavska str., Kyiv, 04053</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>State University of Information and Communication Technologies</institution>
          ,
          <addr-line>7 Solomenskaya str., Kyiv, 03110</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <fpage>70</fpage>
      <lpage>78</lpage>
      <abstract>
        <p>Methods for predicting possible failures in smart home systems and analyzing the data required for this have been considered in the study. A study of machine learning methods has been carried out: their features, advantages, and disadvantages have been identified, the metrics of each method have been studied, and the effectiveness of methods for predicting failures in a smart home has been established. It has been found that the Long Short-Term Memory (LSTM) model is distinguished by its ability to work with data sequences and store information for a long time. The characteristics of the LSTM method and its algorithm have been studied in detail. The study emphasizes the importance of collecting and processing various data, such as sensor data, energy consumption, and information about devices and users. The results of the study can be useful for the further development of smart home control systems to improve their reliability and efficiency.</p>
      </abstract>
      <kwd-group>
        <kwd>Long short-term memory</kwd>
        <kwd>LSTM</kwd>
        <kwd>Machine learning</kwd>
        <kwd>data processing</kwd>
        <kwd>forecasting</kwd>
        <kwd>smart home</kwd>
        <kwd>failure</kwd>
        <kwd>information technology</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Smart houses are becoming increasingly
common thanks to the development of the
Internet of Things (IoT) and smart technologies
[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. They provide automation and ease of
control of various systems such as lighting,
heating, security, energy efficiency, and many
others [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ].
      </p>
      <p>
        However, as the complexity of these systems
grows, the likelihood of failures or problems
increases. Network instability, software errors,
and faulty devices can all lead to unpredictable
situations that affect the usability and security
of a smart home [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>Therefore, predicting failures in a smart
home is a relevant issue, and it is appropriate to
conduct such a prediction using machine
learning methods. The use of machine learning
algorithms allows for analyzing large amounts
of data and identifying deviations and patterns
that precede failures. This approach allows
predicting possible problems and taking
measures to prevent them before they occur.</p>
      <p>Today, smart home failure prediction is
mostly based on reactive data analysis. This
means that systems detect anomalous
situations or failures after they occur, which can
make it difficult to avoid potential problems.</p>
      <p>However, using machine learning methods,
such as classification, clustering, and prediction
algorithms, it is possible to develop systems that
can predict failures in a smart home in advance.</p>
      <p>
        Such systems use data analysis from sensors,
IoT devices, control systems, energy
consumption, and other data to identify
patterns and anomalies that may precede
disruptions [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ]. Based on this information,
machine learning systems can build predictive
models that respond to certain signals or
changes in normal operation, warning of
potential problems or taking steps to prevent
them [
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ].
      </p>
      <p>This area of research is still evolving, but it
promises to improve smart home control
systems by enabling them to predict and
prevent possible failures in advance, providing
greater reliability and security for users.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Research Results</title>
      <p>Machine learning techniques can help detect
and avoid some disruptions before they occur
or restore system operations faster after a
failure. They can provide early detection of
anomalies in performance, which will prevent
problems from occurring or respond quickly to
them, which in turn will help reduce the impact
of these failures on the smart home.</p>
      <p>Machine learning algorithms are compared
on four different data representations: original,
balanced, normalized, and standardized. The
original data is unchanged from the selected
data except for the removal of timestamp
values. Balancing is performed by
undersampling the faultless data in the training data.</p>
      <p>Failure-free data records are randomly selected
and removed from the dataset, resulting in an
equal number of failure and failure-free records.</p>
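      <p>The undersampling step described above can be sketched as follows (an illustrative sketch, not the authors' implementation; label 1 is assumed to mark a failure record, 0 a failure-free one):</p>

```python
import random

def undersample(records, labels, seed=0):
    # Randomly remove failure-free records so that both classes
    # end up with the same number of entries
    rng = random.Random(seed)
    fail_idx = [i for i, y in enumerate(labels) if y == 1]
    ok_idx = [i for i, y in enumerate(labels) if y == 0]
    keep_ok = rng.sample(ok_idx, len(fail_idx))  # random subset of failure-free rows
    keep = sorted(fail_idx + keep_ok)            # preserve the original time order
    return [records[i] for i in keep], [labels[i] for i in keep]
```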
      <p>Insufficient sampling can erase important
information from the data, leading to poor
algorithm performance. The main advantage of
data balancing is that it reduces the resources
and time required to train algorithms. Data
normalization refers to the scaling of feature
values in the range from zero to one. Scaling is
performed using (1) separately for each feature
by finding its minimum and maximum values.</p>
      <p>
        The data is also standardized by considering
each feature separately. The mean is subtracted
from each feature value, and the result is divided
by the standard deviation, as shown in (2). This
results in values centered around zero with unit
variance. An additional pre-processing step
that is performed before normalizing and
standardizing the data is the conversion of the
time features of the day of the week and the
hour. To indicate that the difference between
the hours 23 and 0 is the same as the difference
between 22 and 23, the values are converted to
cyclic representations using the Fourier
transform [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. The transformation calculates
the sine and cosine values for each feature, as
shown in equation (3). Thus, each feature is
replaced by the corresponding sine and cosine
feature. The calculation depends on the total
number of different feature values N, which is
24 for hours and 7 for days of the week.
</p>
      <p>xn = (x − min) / (max − min), (1)</p>
      <p>xn = (x − µ) / σ, (2)</p>
      <p>xsin = sin(2πx/N), xcos = cos(2πx/N), (3)</p>
      <p>where x is the feature value, min is the
minimum value of the feature, max is the
maximum value of the feature, µ is the mean
value of the feature, σ is the standard deviation
of the feature, and N is the total number of
different feature values.</p>
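      <p>Formulas (1)-(3) can be illustrated with a short sketch (plain Python, for illustration only):</p>

```python
import math

def normalize(values):
    # Min-max scaling to [0, 1], per formula (1)
    lo, hi = min(values), max(values)
    return [(x - lo) / (hi - lo) for x in values]

def standardize(values):
    # Zero mean and unit variance, per formula (2)
    n = len(values)
    mu = sum(values) / n
    sigma = (sum((x - mu) ** 2 for x in values) / n) ** 0.5
    return [(x - mu) / sigma for x in values]

def cyclic_encode(x, period):
    # Sine/cosine encoding of a cyclic feature, per formula (3);
    # period is N: 24 for hours, 7 for days of the week
    angle = 2 * math.pi * x / period
    return math.sin(angle), math.cos(angle)
```

With this encoding, hours 23 and 0 are as close to each other as hours 22 and 23, which is exactly the property required above.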
      <p>Some of the algorithms may have problems
with dimensionality and run much slower than
other algorithms. The problem can be solved
by reducing the number of dimensions using
principal component analysis. The method is
only applicable to specific algorithms and
specific data representations, depending on
the speed of learning and prediction. In
addition, some of the algorithms work only
with a certain input format.</p>
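      <p>The dimensionality reduction step can be sketched as follows (an illustrative implementation of principal component analysis via the eigendecomposition of the covariance matrix, not the study's code):</p>

```python
import numpy as np

def pca_reduce(X, n_components):
    # Reduce dimensionality by projecting the data onto its
    # top principal components
    Xc = X - X.mean(axis=0)                 # center each feature
    cov = np.cov(Xc, rowvar=False)          # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:n_components]
    return Xc @ eigvecs[:, order]           # projected data
```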
      <p>
        Many machine learning algorithms can be
used to predict device failures. They can differ
in many properties and features [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. They can
be supervised or unsupervised, they can solve
classification, regression, or clustering
problems or they can belong to different
families such as deep learning, tree-based,
probabilistic, or linear. The total number of
algorithms considered for comparison was
limited due to their lengthy setup and training.
      </p>
      <p>The algorithms were selected based on several
criteria:
1. Supervised learning: based on the
assumption that the data to be used have
been labeled.
2. Practical use: some of the algorithms are
used more than others for predictive
maintenance.
3. Diversity: algorithms were chosen to
represent different families, tasks, and
functions, such as online learning or
prediction over time.</p>
      <p>
        Based on these criteria, ten algorithms have
been selected, nine of which are implemented
as classification algorithms and one as a time
series regression algorithm (Tables 1 and 2)
[
        <xref ref-type="bibr" rid="ref10 ref11">10, 11</xref>
        ]. There are representatives of different
types. All algorithms support online learning
either implicitly or through certain variations.
      </p>
      <p>
        Random Forest is a decision tree ensemble in which multiple trees are combined to avoid overlearning and improve accuracy [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. Naive Bayes is a probabilistic method that uses Bayes’ theorem for classification based on the probabilities of features belonging to a class [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ].
      </p>
      <p>
        Support Vector Machine is an algorithm that determines the optimal separation boundary between classes using support vectors [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ].
      </p>
      <p>
        Logistic Regression is a classification method that uses a logistic function to determine the probability of an object belonging to a certain class [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
      <p>
        Stochastic Gradient Descent is an optimization algorithm that uses the gradient to find the minimum of a loss function on a randomly selected subset of the data [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ].
      </p>
      <p>
        The Multi-Layer Perceptron is a neural network with one or more hidden layers, used for classification and regression based on weighting coefficients [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
      <p>
        LSTM is a type of recurrent neural network designed to store and use information over a long period to predict failures or events [
        <xref ref-type="bibr" rid="ref16 ref17">16, 17</xref>
        ].
      </p>
      <p>The main metrics for evaluating machine
learning algorithms in prediction or
classification tasks allow for an objective
comparison of the effectiveness of these
algorithms.</p>
      <p>
        Accuracy represents the percentage of
correctly classified cases in the total number of
cases, which gives a general idea of the
algorithm’s accuracy [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]:
T = (TP + TN) / (TP + TN + FP + FN), (4)
where TP is true positives (correctly classified
positive cases), TN is true negatives (correctly
classified negative cases), FP is false positives
(incorrectly classified positive cases), and FN is
false negatives (incorrectly classified negative
cases).
      </p>
      <p>Tables 1 and 2 summarize, for each of the ten
selected methods (k-Nearest Neighbors, Decision
Tree, Random Forest, Extreme Gradient Boosting,
Naive Bayes, Support Vector Machine, Logistic
Regression, Stochastic Gradient Descent,
Multi-Layer Perceptron, and LSTM), the principle
of operation, the application area (anomaly
detection, failure pattern detection, failure
classification, and failure prediction), and the
main advantages and disadvantages. For example,
k-Nearest Neighbors is easy to implement and
requires no training but is sensitive to outliers
and computationally expensive; Support Vector
Machines are efficient in high-dimensional spaces
but demand careful data preparation and tuning;
LSTM offers strong sequence modeling at the cost
of high computational requirements and setup
complexity.</p>
      <p>The precision determines the percentage of
correctly identified positive classes among all
identified positive classes, which is useful for
working with imbalanced classes.</p>
      <p>Recall displays the percentage of correctly
identified positive classes among all actual
positive classes, which is important for
identifying important cases that have been
missed.</p>
      <p>The F1 score uses the harmonic mean of
precision and recall to show how well the model
solves the classification task:</p>
      <p>F1 = 2 · (Precision · Recall) / (Precision + Recall). (5)</p>
      <p>ROC-AUC measures the area under the ROC
curve and evaluates the model’s performance
depending on different classification thresholds,
helping to determine its ability to make correct
predictions.</p>
      <p>ROC − AUC = ∫₀¹ R(S) dS, (6)</p>
      <p>where the recall (R) and the false positive
rate (S) are determined using a confusion matrix
for binary classification: R is calculated using
formula (7), and S using formula (8). In practice,
numerical approximations such as the trapezoidal
method are used, since the ROC-AUC formula itself
is an integral.</p>
      <p>R = TP / (TP + FN), (7)</p>
      <p>S = FP / (FP + TN). (8)</p>
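      <p>The metrics above follow directly from the confusion-matrix counts; an illustrative sketch:</p>

```python
def confusion_metrics(tp, tn, fp, fn):
    # Metrics computed directly from the binary confusion matrix
    accuracy = (tp + tn) / (tp + tn + fp + fn)   # formula (4)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                      # completeness R
    f1 = 2 * precision * recall / (precision + recall)
    fpr = fp / (fp + tn)                         # false positive rate S
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1, "fpr": fpr}
```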
      <p>The Confusion Matrix provides detailed
information about the real and predicted classes,
which helps to estimate the level of correctness
and errors for each class, which is important
when analyzing the model.</p>
      <p>The Confusion Matrix helps to evaluate the
performance of a classification model by
visualizing real and predicted values. It is the
basis for calculating various metrics, such as
accuracy, sensitivity, specificity, F1 score, and
others.</p>
      <p>
        These metrics are crucial for evaluating the
effectiveness of algorithms and choosing the one
that is most suitable for a particular task,
depending on the requirements and needs [
        <xref ref-type="bibr" rid="ref18 ref19">18,
19</xref>
        ].
      </p>
      <p>Table 3 shows the performance of different
machine learning methods by the main metrics
considered.</p>
      <p>Table 3 and Fig. 1 show the performance of
different machine learning methods in terms of
the main metrics such as accuracy, precision,
recall, F1 score, and ROC-AUC. The score of
“High,” “Medium,” or
“Very High” in the “Performance” column is a
generalized characterization of the methods’
performance based on these metrics.</p>
      <sec id="sec-2-1">
        <title>2.1. The LSTM Model</title>
        <p>The model itself consists of the following
formulas:
1. Forgetting the previous memory state: ft = σ(Wf · [ht−1, xt] + bf).
2. Defining what will be updated in memory: it = σ(Wi · [ht−1, xt] + bi), C̃t = tanh(WC · [ht−1, xt] + bC).
3. Updating the memory state: Ct = ft * Ct−1 + it * C̃t.
4. Updating the output value: ot = σ(Wo · [ht−1, xt] + bo), ht = ot * tanh(Ct).
Here σ denotes the logistic sigmoid function.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. LSTM-Based Failure Prediction in a Smart Home</title>
        <p>
The study has found that the LSTM model is
distinguished by its ability to work with data
sequences and store information for a long
time. This makes LSTM effective for analyzing
time series, such as sensor data in a smart
home, where information is usually sequential
in time.</p>
        <p>The LSTM model is capable of storing
information for a long time, allowing it to
effectively understand and analyze a sequence
of real-time sensor data. By using mechanisms
that ensure that some information is forgotten
and others are retained, the LSTM can take into
account long-term dependencies and the
importance of individual events in time series.
LSTM can adapt to and learn from different
amounts of data, including large amounts of
data from smart home sensors, which allows for
more accurate predictions of failures. The LSTM
model can adapt to changing conditions and
detect changes in time series, which allows for
predicting failures and anomalies in real-time.
LSTM can process a variety of data types (text,
numbers, sequences, etc.), making it versatile
for use in various forecasting and analysis
scenarios.</p>
        <p>That is why it is not surprising that this
algorithm showed the best results for
predicting failures in a smart home.</p>
        <p>The LSTM model has the following elements
at each time step t:
1. Input data xt.
2. Previous output ht−1.
3. Previous memory state Ct−1.
4. Gates: the forget gate ft, the input gate it,
and the output gate ot.
5. New memory state Ct.
6. New output ht.</p>
        <p>ht = ot * tanh(Ct),
where ft is the forget gate, which decides what
part of the previous state should be forgotten;
it is the input gate, which determines how much
of the new input will be added to the memory
state; C̃t is the new candidate for the memory
state; Ct is the memory state; ot is the output
gate, which determines what the next output will
be; xt is the input at time step t; ht−1 is the
previous output at time step t−1; and W and b
are the weights and biases learned during the
training process.</p>
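      <p>One LSTM time step, as described by the formulas above, can be sketched for a scalar input (toy scalar weights; real layers use weight matrices, so this is an illustrative sketch rather than the study's code):</p>

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x_t, h_prev, c_prev, w, b):
    # One LSTM time step for a scalar input; w["f"] = (w_h, w_x) etc.
    # are per-gate weights, b["f"] etc. are per-gate biases
    f_t = sigmoid(w["f"][0] * h_prev + w["f"][1] * x_t + b["f"])        # forget gate
    i_t = sigmoid(w["i"][0] * h_prev + w["i"][1] * x_t + b["i"])        # input gate
    c_tilde = math.tanh(w["c"][0] * h_prev + w["c"][1] * x_t + b["c"])  # candidate state
    c_t = f_t * c_prev + i_t * c_tilde                                  # new memory state
    o_t = sigmoid(w["o"][0] * h_prev + w["o"][1] * x_t + b["o"])        # output gate
    h_t = o_t * math.tanh(c_t)                                          # new output
    return h_t, c_t
```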
        <p>These equations allow the LSTM to
determine what information to forget, what to
keep, and how to use it to generate output
values.</p>
        <p>The step-by-step algorithm for training an
LSTM model includes the following steps:
1. Data preparation:
• Input: receive a set of data containing time
series or sequences.
• Data preparation: normalization, and
conversion of data format to meet model
requirements.
2. Building an LSTM architecture:
• Creating a model: using machine learning
libraries to build an LSTM network.
• Defining parameters: number of layers,
number of neurons in each layer,
activation functions, etc.
3. Data separation:
• Training and testing sets: splitting data
into training and testing sets to evaluate
model performance.
4. Model training:
• Model training: fitting LSTM to training
data using backpropagation.
• Model evaluation: assessing whether the
model’s predictions match the actual data
on the test set.
5. Evaluation and improvement of the model:
• Analyzing the results: reviewing the
forecast results and assessing their
accuracy.
• Improving the model: use optimization
techniques, change hyperparameters, or
modify the architecture to improve
results.
6. Testing and forecasting:
• Failure prediction: applying a trained
model to new data to predict failures in a
smart home.
7. Evaluation of the results:
• Performance evaluation: comparing
forecasts with real data to determine the
accuracy and efficiency of the model.
8. Maintaining and improving the model:
• Continuous learning: collecting new data
and improving the model based on it to
keep forecasts up-to-date.</p>
        <p>This is a general description of the LSTM
learning algorithm for predicting failures in a
smart home.</p>
        <p>The approach presented in this study takes
advantage of LSTM to predict time series. The
LSTM is implemented using Keras (a high-level
neural network interface that simplifies the
process of creating and training artificial neural
networks; it is a machine learning library that
runs on the Tensorflow, Theano, and Microsoft
Cognitive Toolkit frameworks) as a sequential
model with two LSTM layers and a dense output
layer. It receives a sequence of data inputs
(normalized feature values without rejections)
and outputs a sequence of failure values.</p>
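        <p>The described model can be sketched in Keras as follows (the layer sizes here are illustrative assumptions, as the study does not report them):</p>

```python
# Sequential model with two LSTM layers and a dense output layer
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

def build_model(seq_len, n_features):
    model = Sequential([
        Input(shape=(seq_len, n_features)),
        LSTM(64, return_sequences=True),     # first LSTM layer
        LSTM(32),                            # second LSTM layer
        Dense(1, activation="sigmoid"),      # dense output: failure probability
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```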
        <p>It was decided to use a single data input, as
this significantly reduces the training time and is
sufficient for the algorithm to recognize failure
patterns. The length of the source sequence
determines the duration of the runtime.
Prediction performance is tested on three
different input sequences: 1 (1 second), 300 (5
minutes), and 1800 (30 minutes). As a result,
three different LSTM models were built. For
training, we used a dataset with 70% failures,
creating a split into training and test sets in the
proportion of 25–75% without mixing. In
addition, the data was prepared by creating
output sequences for each data record, which
were then used for training.</p>
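        <p>The preparation of input sequences and the unmixed train/test split can be sketched as follows (the labeling convention, taking the failure value of the last record in each window, is an illustrative assumption):</p>

```python
def make_sequences(values, labels, seq_len):
    # Build (input sequence, failure label) pairs from time-ordered records
    xs, ys = [], []
    for i in range(len(values) - seq_len + 1):
        xs.append(values[i:i + seq_len])
        ys.append(labels[i + seq_len - 1])  # label of the window's last record
    return xs, ys

def chrono_split(xs, ys, train_frac):
    # Split into training and test sets without mixing: the first part
    # of the timeline is used for training, the rest for testing
    cut = int(len(xs) * train_frac)
    return (xs[:cut], ys[:cut]), (xs[cut:], ys[cut:])
```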
        <p>Based on the conducted research, a data
prediction platform has been developed. Once
the system has been successfully integrated
into the smart home, the implementation
process takes place, when the system becomes
an active part of the home environment.
However, this is only the beginning: further
support, optimization, and continuous
improvement of the system play a key role in
ensuring its long-term and efficient operation
in a smart home, adapting to changing needs
and conditions (Fig. 2).</p>
        <p>(Fig. 2 contrasts a system without machine learning with a machine learning system in terms of energy efficiency, automation, security, comfort management, resource forecasting and optimization, and responding to changes.)</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Discussion</title>
      <p>Machine learning helps to avoid certain
problems by analyzing previous data and
recognizing patterns, but it cannot predict
absolutely all possible scenarios, especially if
they arise from certain unpredictable factors
or third-party interventions.</p>
      <p>
        A smart home system that uses machine
learning methods proves to be better than a
system without this technology (as the study
results show, the performance of a smart home
using failure prediction methods gives an
average of 22% better result compared to a
similar system without prediction—Fig. 3).
Machine learning allows the system to adapt to
changes in the environment and user
requirements, respond more quickly to new
conditions, and optimize resource use. This
helps to improve the system’s efficiency in
managing energy, comfort, safety, and user
satisfaction [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ].
      </p>
      <p>
        Machine learning allows the system to
predict and avoid failures, which ensures
greater reliability and durability of the system.
This approach also allows for increased
automation, helping the system perform
routine tasks without user intervention [
        <xref ref-type="bibr" rid="ref21 ref22">21,
22</xref>
        ]. Overall, a machine learning system
remains the preferred choice due to its ability
to predict, optimize, and adapt to changes,
enabling it to provide more efficient and
convenient smart home management [
        <xref ref-type="bibr" rid="ref23 ref24">23, 24</xref>
        ].
      </p>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusions</title>
      <p>The study results have shown a wide range of
modern technologies, sensors, and control
systems used in smart homes. The overview
has shown that existing technologies have the
potential to improve convenience, security,
and energy efficiency.</p>
      <p>The analysis of available machine learning
methods indicates their potential in predicting
and managing risks in smart homes. The
considered models have shown high accuracy
in predicting failures.</p>
      <p>The following areas can be considered for
further development of this work:
• Improving machine learning methods to
increase the accuracy of failure prediction.
• In-depth study of the impact of the
introduction of machine learning systems
on the functioning of a smart home.</p>
      <p>Based on the obtained data, it is possible to
build a methodology for predicting failures in a
smart home, which will be the direction of the
authors’ next research.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Hu</surname>
          </string-name>
          , et al.,
          <source>Bandwidth Research of Wireless IoT Switches, in: IEEE 15th International Conference on Advanced Trends in Radioelectronics, Telecommunications and Computer Engineering</source>
          (
          <year>2020</year>
          ). doi: 10.1109/tcset49122.2020.2354922
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>I.</given-names>
            <surname>Kuzminykh</surname>
          </string-name>
          , et al.,
          <article-title>Investigation of the IoT Device Lifetime with Secure Data Transmission, Internet of Things, Smart Spaces, and Next Generation Networks and Systems</article-title>
          , vol.
          <volume>11660</volume>
          (
          <year>2019</year>
          )
          <fpage>16</fpage>
          -
          <lpage>27</lpage>
          . doi: 10.1007/978-3-030-30859-9_2
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>B.</given-names>
            <surname>Zhurakovskyi</surname>
          </string-name>
          , et al.,
          <source>Secured Remote Update Protocol in IoT Data Exchange System, in: Workshop on Cybersecurity Providing in Information and Telecommunication Systems</source>
          , vol.
          <volume>3421</volume>
          (
          <year>2023</year>
          )
          <fpage>67</fpage>
          -
          <lpage>76</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>O.</given-names>
            <surname>Shevchenko</surname>
          </string-name>
          , et al.,
          <article-title>Methods of the Objects Identification and Recognition Research in the Networks with the IoT Concept Support</article-title>
          ,
          <source>in: Cybersecurity Providing in Information and Telecommunication Systems</source>
          , vol.
          <volume>2923</volume>
          (
          <year>2021</year>
          )
          <fpage>277</fpage>
          -
          <lpage>282</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
            <surname>Moshenchenko</surname>
          </string-name>
          , et al.,
          <article-title>Optimization Algorithms of Smart City Wireless Sensor Network Control, in: Cybersecurity Providing in Information and Telecommunication Systems II</article-title>
          , vol.
          <volume>3188</volume>
          (
          <year>2021</year>
          )
          <fpage>32</fpage>
          -
          <lpage>42</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>V.</given-names>
            <surname>Sokolov</surname>
          </string-name>
          , et al.,
          <article-title>Method for Increasing the Various Sources Data Consistency for IoT Sensors</article-title>
          , in: IEEE 9th International Conference on Problems of Infocommunications, Science and Technology (
          <year>2023</year>
          )
          <fpage>522</fpage>
          -
          <lpage>526</lpage>
          . doi: 10.1109/PICST57299.2022.10238518.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <article-title>Smart Home Technology: How AI Creates a Space that is Comfortable for Life</article-title>
          . URL: https://www.everest.ua/tehnologiyarozumnogo-budynku-yak-ai-stvoryuyeprostirkomfortnyj-dlya-zhyttya/
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>T.</given-names>
            <surname>Arsan</surname>
          </string-name>
          ,
          <article-title>Smart Systems: From Design to Implementation of Embedded SMART Systems</article-title>
          (
          <year>2016</year>
          ). doi: 10.1109/HONET.2016.7753420.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>V.</given-names>
            <surname>Zhebka</surname>
          </string-name>
          et al.,
          <article-title>Optimization of Machine Learning Method to Improve the Management Efficiency of Heterogeneous Telecommunication Network</article-title>
          ,
          <source>Cybersecurity Providing in Information and Telecommunication Systems</source>
          Vol.
          <volume>3288</volume>
          (
          <year>2022</year>
          )
          <fpage>149</fpage>
          -
          <lpage>155</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A.</given-names>
            <surname>Boiko</surname>
          </string-name>
          ,
          <article-title>Modeling of the Automated System of Operational Control of the Parameters of the “Smart Home” in the Proteus Environment</article-title>
          ,
          <source>Technologies and Design</source>
          <volume>2</volume>
          (
          <issue>35</issue>
          ) (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>V.</given-names>
            <surname>Zhebka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bazak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Storchak</surname>
          </string-name>
          ,
<article-title>Features of Predicting Failures in a Smart Home based on Machine Learning Methods</article-title>
,
<source>Telecommunications and Information Technologies</source>
          <volume>4</volume>
          (
          <issue>81</issue>
          ) (
          <year>2023</year>
          )
          <fpage>4</fpage>
          -
          <lpage>12</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>I.</given-names>
            <surname>Ruban</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Martovytskyi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Partyka</surname>
          </string-name>
          ,
<article-title>Classification of Methods for Detecting Anomalies in Information Systems</article-title>
,
<source>Systems of Armament and Military Equipment</source>
          <volume>3</volume>
          (
          <issue>47</issue>
          ) (
          <year>2016</year>
          )
          <fpage>47</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>V.</given-names>
            <surname>Kharchenko</surname>
          </string-name>
          ,
          <source>Fundamentals of Machine Learning</source>
          , Sumy State University (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>X.</given-names>
            <surname>Sun</surname>
          </string-name>
          , et al.,
<article-title>System-Level Hardware Failure Prediction using Deep Learning</article-title>
,
<source>56th Annual Design Automation Conference</source>
<volume>20</volume>
(
<year>2019</year>
)
<fpage>1</fpage>
-
<lpage>6</lpage>
. doi:10.1145/3316781.3317918.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>V.</given-names>
            <surname>Malinov</surname>
          </string-name>
          , et al.,
          <article-title>Biomining as an Effective Mechanism for Utilizing the Bioenergy Potential of Processing Enterprises in the Agricultural Sector</article-title>
          ,
in:
<source>Cybersecurity Providing in Information and Telecommunication Systems</source>
          Vol.
          <volume>3421</volume>
          (
          <year>2023</year>
          )
          <fpage>223</fpage>
          -
          <lpage>230</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bazak</surname>
          </string-name>
          ,
<article-title>Comparison of Machine Learning Methods for Predicting Failures in a Smart Home</article-title>
,
<source>Abstracts of the Scientific and Practical Conference “Problems of Computer Engineering”</source>
(
          <year>2023</year>
          )
          <fpage>125</fpage>
          -
          <lpage>127</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>K.</given-names>
            <surname>Kononova</surname>
          </string-name>
          ,
<source>Machine Learning: Methods and Models: a Textbook for Bachelors, Masters and Doctors of Philosophy in Specialty 051 “Economics”</source>
, V. N. Karazin Kharkiv National University (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bondarenko</surname>
          </string-name>
          ,
<source>Manual for the Study of the Discipline “Statistical Analysis of Data”</source>
, Lira (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>K.</given-names>
            <surname>Hureeva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kudin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lisnyak</surname>
          </string-name>
          ,
<article-title>A Review of Machine Learning Methods in the Task of Forecasting Financial Time Series</article-title>
,
<source>Computer Science and Applied Mathematics</source>
(
<issue>2</issue>
)
          (
          <year>2018</year>
          )
          <fpage>18</fpage>
          -
          <lpage>28</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>I.</given-names>
            <surname>Puleko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Yefimenko</surname>
          </string-name>
          ,
<source>Architecture and Technologies of the Internet of Things: a Textbook</source>
, State University “Zhytomyr Polytechnic” (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>B.</given-names>
            <surname>Zhurakovskyi</surname>
          </string-name>
          , et al.,
<article-title>Coding for Information Systems Security and Viability</article-title>
, in:
<source>Information Technologies and Security</source>
          Vol.
          <volume>2859</volume>
          (
          <year>2021</year>
          )
          <fpage>71</fpage>
          -
          <lpage>84</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>M.</given-names>
            <surname>Moshenchenko</surname>
          </string-name>
          , et al.,
<article-title>Optimization Algorithms of Smart City Wireless Sensor Network Control</article-title>
, in:
<source>Cybersecurity Providing in Information and Telecommunication Systems</source>
          Vol.
          <volume>3188</volume>
          (
          <year>2021</year>
          )
          <fpage>32</fpage>
          -
          <lpage>42</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>B.</given-names>
            <surname>Chen</surname>
          </string-name>
          , et al.,
<article-title>Smart Factory of Industry 4.0: Key Technologies, Application Case, and Challenges</article-title>
,
<source>IEEE Access</source>
<volume>6</volume>
(
<year>2018</year>
)
<fpage>6505</fpage>
-
<lpage>6519</lpage>
. doi:10.1109/ACCESS.2017.2783682.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>J.</given-names>
            <surname>Mineraud</surname>
          </string-name>
          , et al.,
<article-title>A Gap Analysis of Internet-of-Things Platforms</article-title>
,
<source>Computer Communications</source>
<volume>89-90</volume>
          (
          <year>2016</year>
          )
          <fpage>5</fpage>
          -
          <lpage>16</lpage>
. doi:10.1016/j.comcom.2016.03.015.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>