<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <article-id pub-id-type="doi">10.1145/3381831</article-id>
      <title-group>
        <article-title>Enhancing training time and sustainability in Intrusion Detection Systems based on Machine Learning</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Isabella Marasco</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Karina Chichifoi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Silvio Russo</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Claudio Zanasi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science and Engineering, University of Bologna</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>22</volume>
      <issue>2023</issue>
      <fpage>54</fpage>
      <lpage>63</lpage>
      <abstract>
        <p>The increasing complexity and volume of network traffic in Internet of Things (IoT) environments, coupled with the rapid evolution of cyber threats, have rendered traditional Intrusion Detection Systems (IDS) less effective. In response, there is an urgent need to develop a more efficient IDS that can not only detect a wider range of attacks but also adapt quickly to new, previously unknown threats. This study addresses the issues of prolonged training times and high computational resource consumption in IDS, with a particular focus on achieving sustainability without compromising performance. We put forth a solution that streamlines the intrusion detection pipeline, with an emphasis on training time, by employing a novel machine learning (ML) model, PerpetualBooster, which has not previously been utilized in cybersecurity. This model is designed to minimize training times and computational resource consumption while maintaining high detection performance. The CIC Modbus dataset was used to evaluate the performance of our approach. PerpetualBooster was trained in a few seconds, demonstrating a reduction in training time compared to other ML algorithms. These results illustrate the potential of the proposed model as a sustainable, high-performance solution for real-time and energy-efficient IDS in IoT environments, addressing critical challenges in both cybersecurity and environmental sustainability.</p>
      </abstract>
      <kwd-group>
        <kwd>Cybersecurity</kwd>
        <kwd>Machine Learning</kwd>
        <kwd>Intrusion Detection System</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The rapid increase in network traffic and the growing sophistication of cyber threats, particularly
in Internet of Things (IoT) environments, have revealed significant limitations in existing intrusion
detection systems (IDS). While traditional IDSs demonstrate efficacy in identifying known threats,
they exhibit difficulty in adapting to novel attacks [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Additionally, they often require substantial
computational resources, raising concerns about their environmental impact.
      </p>
      <p>
        The state of the art on IDS focuses on achieving high performance and reducing the number of
false positives [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] and false negatives [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] generated by these systems. A plethora of research is being
conducted with the objective of enhancing the efficiency and accuracy of IDS [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5, 6</xref>
        ]. However,
it is crucial to prioritize not only the performance of the system but also the reduction of training
time and the minimization of computational resources. A variety of approaches have been
proposed in this regard in the literature [7, 8, 9], yet these solutions continue to
exhibit constraints with respect to training time and the consumption of computing resources. This is
particularly relevant in the context of sustainability: Schwartz et al. [10] highlight that the increasing
complexity of Machine Learning (ML) models amplifies their energy consumption. In cybersecurity,
where new attacks frequently emerge, these models must be retrained to ensure they remain effective
in detecting emerging and previously unseen threats. Long training times present a significant
challenge, as they hinder the rapid deployment of new models adapted to the evolving threat
landscape, thereby limiting the ability to provide real-time protection against emerging attacks.
      </p>
      <p>Our aim is to design a solution that can reduce the detection time of new attacks. To achieve this,
we concentrate on an intrusion detection pipeline that reduces the training
time of the ML model using PerpetualBooster, a model that has never been used in cybersecurity.
In contrast to alternative models, PerpetualBooster does not require hyperparameter optimization,
which enhances both operational efficiency and sustainability by reducing environmental impact.
The proposed methodology was evaluated on the CIC Modbus dataset [11] and compared with
other models often used in this context, including Gradient Boosting, LightGBM, HistGradientBoosting,
Random Forest, Support Vector Machine (SVM), and Multi-Layer Perceptron (MLP), in terms of training
time and performance. The results demonstrate that our proposed approach reduces training
time in comparison to other models while maintaining high performance in the detection of attacks.
This is due to the absence of hyperparameter optimization, which not only accelerates adaptation to
new attacks in networks, but also reduces environmental impact and resource consumption, while
maintaining efficacy in detecting malicious activities.</p>
      <p>The remainder of this paper is organized as follows. Section 2 presents an overview of the state
of the art. Section 3 describes the data set and the pre-processing applied. Section 4 presents our
proposed methodology. Section 5 illustrates the results. Finally, Section 6 summarizes the conclusions
and discusses future work.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related work</title>
      <p>The growing number of cyber-attacks underscores the critical need to prevent and detect intrusions
in networks. The application of AI has emerged as a highly sought-after solution due to its ability to
adapt to evolving threats.</p>
      <p>Shahid et al. [12] proposed a hybrid IDS tailored for IoT networks using the ROUT-4-2023 dataset,
targeting RPL-based routing attacks. Their system combines statistical traffic analysis with ML and
deep learning, achieving 99% accuracy with Random Forest and 97% with a Transformers-based model.
Alazzam et al. [13] presented a dual-subsystem IDS trained with OCSVM on normal and attack packets,
respectively. The system demonstrated superior performance on KDDCUP-99, NSL-KDD, and
UNSWNB15 datasets in detection rate, accuracy, and false alarms. Hu et al. [14] introduced Graph2vec+RF,
leveraging graph embeddings for early detection. By generating flow graphs from initial network
packets and classifying them with Random Forest, this method avoids extensive feature engineering
and dataset size requirements, outperforming benchmarks on CICIDS2017 and CICIDS2018. While the
research in this field aims to enhance the performance of IDS, this is no longer a sufficient goal. The
impact of digital technologies on the environment is no longer negligible and must be considered, along
with performance, as a primary parameter for evaluating a model. Our work prioritizes the reduction of
training time without compromising high performance. This approach addresses a critical bottleneck in
IDS development, enabling faster deployment and adaptability in dynamic cybersecurity environments.</p>
      <p>Given the dynamic nature of events and the continuous stream of data, it is imperative to not only
improve detection accuracy, but also to minimize training time. The authors in [8] proposed LIO-IDS,
combining LSTM with an improved One-vs-One (I-OVO) technique. Evaluated on NSL-KDD, CIDDS-001
and CICIDS2017 datasets, it improved detection accuracy and reduced training time to as low as 153.25
seconds on CICIDS2017. Kim et al. [15] presented a hybrid IDS using a C4.5 decision tree and one-class
SVMs. Tested on NSL-KDD, it showed better detection rates, fewer false positives, and reduced training
time from 76.63 to 56.58 seconds. Gupta et al. [16] introduced CSE-IDS, a cost-sensitive deep learning
and ensemble-based NIDS. It achieved competitive accuracy and reduced training times to 120 seconds
on NSL-KDD and 430 seconds on CICIDS2017, outperforming traditional methods.</p>
      <p>These approaches typically entail the combination of various techniques to reduce training time,
which necessitates the regularization of hyperparameters to optimize performance. This, in turn, results
in an increase in the time required for training. In contrast, our approach employs PerpetualBooster,
a novel method for cybersecurity that has not previously been utilized in this context. This method
eliminates the necessity for hyperparameter optimization, enabling a significant reduction in training
time without compromising accuracy or other performance metrics. Furthermore, our methodology is
designed to reduce the consumption of computational resources, thereby enhancing the sustainability
of IDS.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Pre-processing of the dataset</title>
      <p>This section presents a description of the dataset, then proceeds to detail the pre-processing and feature
selection phases.</p>
      <sec id="sec-3-1">
        <title>3.1. Dataset</title>
        <p>The dataset utilized in this research is the CIC Modbus Dataset [11], which was published by the Canadian
Institute for Cybersecurity. It was generated from traffic captured through Wireshark and
contains benign and malicious packets. The benign traffic represents legitimate Modbus communications
within the substation network. The malicious network traffic emulates various types of Modbus protocol
attacks based on the MITRE ICS ATT&amp;CK framework. The original dataset includes traffic
related to a series of protocols, such as TCP, Modbus, RMI, ARP, DNS, ICMP, and IGMP, but only the traces
related to Modbus contain labeled samples of benign and malicious traffic. For this reason, we decided
to consider only the Modbus network traffic, which offers two research advantages: it is the most recent
publicly accessible dataset on this subject, and there are no previous studies on IDS that utilized this dataset.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Pre-processing and feature selection</title>
        <p>The initial step was the pre-processing phase. Due to the significant imbalance in the dataset, as shown
in Table 1, we achieve a more balanced distribution through a random undersampling technique. This
method randomly removes a subset of samples from the majority class of the original dataset.</p>
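        <p>As an illustration of this undersampling step, the following sketch balances a toy frame table with pandas; the `attack` label column and the row values are assumptions for demonstration, not the authors' exact schema.</p>

```python
import pandas as pd

def random_undersample(df: pd.DataFrame, label_col: str, seed: int = 42) -> pd.DataFrame:
    """Randomly keep, for every class, only as many rows as the smallest
    class has, discarding the surplus majority-class samples."""
    n_min = df[label_col].value_counts().min()
    return (
        df.groupby(label_col, group_keys=False)
          .sample(n=n_min, random_state=seed)
          .reset_index(drop=True)
    )

# Toy imbalance: six benign packets (attack=0) versus two malicious ones.
df = pd.DataFrame({
    "ip_len": [60, 64, 60, 52, 40, 44, 60, 64],
    "attack": [0, 0, 0, 0, 0, 0, 1, 1],
})
balanced = random_undersample(df, "attack")  # 2 benign + 2 malicious rows
```

        <p>`groupby(...).sample` draws rows uniformly at random within each class, matching the random removal described above.</p>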
        <p>To further optimize the pre-processing pipeline, we first encode the categorical features using the
OrdinalEncoder, thereby ensuring that categorical variables are converted into numerical values that
ML algorithms can effectively interpret. Then, to enhance the performance of the ML models, we apply
min-max scaling, which normalizes the numerical features to a common range. The dataset is then split into
training and testing subsets with an 80-20 ratio to validate the model’s performance on unseen data effectively.</p>
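        <p>A minimal sketch of this encoding, scaling, and splitting chain with scikit-learn; the toy columns below are placeholders for the real Modbus features, not the authors' exact pipeline.</p>

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler, OrdinalEncoder

# Toy stand-in for the pre-processed Modbus table; columns are illustrative.
df = pd.DataFrame({
    "flow":   ["a->b", "b->a", "a->b", "c->a", "a->b",
               "b->a", "c->a", "a->b", "b->a", "c->a"],
    "ip_len": [60, 52, 64, 40, 60, 44, 48, 56, 52, 60],
    "attack": [0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
})

# 1) Categorical features -> integers the models can consume.
df[["flow"]] = OrdinalEncoder().fit_transform(df[["flow"]])

# 2) Min-max scaling maps each numeric feature into [0, 1].
df[["ip_len"]] = MinMaxScaler().fit_transform(df[["ip_len"]])

# 3) 80-20 train/test split for evaluation on unseen data.
X, y = df.drop(columns="attack"), df["attack"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)
```

        <p>In a production pipeline the encoder and scaler should be fitted on the training split only, to avoid leaking test-set statistics into the model.</p>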
        <p>In addition to these pre-processing steps, we incorporate NeuroFS [17] for feature selection. NeuroFS
is a sparse neural network that is specifically designed to be resource efficient while maintaining high
performance. This approach, proposed by Atashgahi et al. [17], dynamically updates the input neurons
of the network to identify a set of relevant features from the given input data. Initially, the sparse
connectivity of the network is generated randomly as an Erdos-Renyi random graph. This graph
theory-based approach ensures that the network starts with a sparsely connected structure, which
is critical for maintaining resource efficiency and reducing computational overhead. The dynamic
adjustment of input neurons in NeuroFS allows the network to focus on the most pertinent features,
thereby enhancing the model’s ability to learn and generalize from the data.</p>
        <p>During the training phase, the network undergoes a dynamic modification process in which input
neurons are gradually removed based on their activity levels. Neurons that remain inactive are
systematically pruned from the network, thus reducing the overall complexity. Inactive neurons that are deemed
necessary based on certain criteria are re-added to the network, thus ensuring that the model retains
the ability to adapt and learn effectively. The selection process for reactivating neurons is governed
by the connections with the highest gradient magnitudes. This criterion ensures that only the most
significant neurons, in terms of their contribution to the network’s learning process, are retained and
emphasized. The gradient magnitude serves as a reliable indicator of the importance of each connection,
guiding the dynamic adjustment of the network structure. Upon completion of the training phase, the
network identifies K features corresponding to the active neurons with the highest strength in the input
layer. These features are considered the most informative and relevant within the dataset, representing
the key attributes that significantly contribute to the predictive power of the model. The features in
Table 2 are selected using NeuroFS on our dataset.</p>
        <p>Table 2 lists the features selected by NeuroFS: flow, modbus_pdu, src_port, dst_port, ip_len,
ip_chksum, modbus_len, modbus_start_addr, modbus_output_addr, modbus_output_value,
modbus_byte_count, modbus_coil_status_0, modbus_register_val, modbus_register_addr, and attack.</p>
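        <p>The prune-and-regrow dynamic described above can be sketched, in heavily simplified form, with a single sparse linear neuron. This is an illustrative toy under our own assumptions, not the authors' NeuroFS implementation, which trains a full sparse multi-layer network.</p>

```python
import numpy as np

def neurofs_sketch(X, y, k, epochs=50, density=0.5, lr=0.3, zeta=0.3, seed=0):
    """Toy dynamic sparse feature selection in the spirit of NeuroFS:
    train a sparsely connected linear neuron, periodically prune the
    weakest active input connections, regrow inactive connections with
    the largest gradient magnitude, and return the k inputs with the
    strongest surviving connections."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mask = rng.random(d) < density            # Erdos-Renyi-style random sparsity
    w = rng.normal(0.0, 0.1, d) * mask
    for epoch in range(epochs):
        grad = 2.0 * X.T @ (X @ (w * mask) - y) / n   # MSE gradient w.r.t. all inputs
        w -= lr * grad * mask                         # update only active connections
        if epoch % 10 == 9:                           # periodic topology update
            active = np.flatnonzero(mask)
            n_swap = max(1, int(zeta * active.size))
            prune = active[np.argsort(np.abs(w[active]))[:n_swap]]
            mask[prune], w[prune] = False, 0.0        # drop the weakest connections
            inactive = np.flatnonzero(~mask)
            grow = inactive[np.argsort(-np.abs(grad[inactive]))[:n_swap]]
            mask[grow] = True                         # re-add by gradient magnitude
    strength = np.abs(w) * mask
    return np.argsort(-strength)[:k]
```

        <p>On synthetic data whose target depends on only two inputs, the sketch converges on those inputs because pruned-but-informative connections keep producing large gradients and are regrown.</p>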
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Model</title>
      <p>This section introduces PerpetualBooster [18], a recent ML model that was proposed in 2024. This model
is designed to achieve high performance with minimal usage of resources but it has not been utilized in
cybersecurity. PerpetualBooster is a gradient boosting machine (GBM) algorithm that does not require
tuning of hyperparameters. This distinctive
feature distinguishes it from other GBM algorithms [19], which are supervised learning algorithms that
combine multiple weak learners into an ensemble with good predictive performance. They are developed
through an iterative process where weak learners, typically decision trees [20], are sequentially added
to correct the errors of previous models. During each iteration, a new tree is trained to address the
residual errors of the current ensemble. This optimization is gradient-based, where the gradients of
a loss function guide the adjustments made by each new tree. The final predictions are obtained by
aggregating the outputs of all the trees, with each tree’s contribution moderated by a learning rate. This
learning rate not only controls the influence of each tree, but also acts as a regularization mechanism to
mitigate overfitting. To further prevent overfitting, additional regularization techniques are employed,
such as limiting the depth of the trees, pruning, and imposing penalties on model complexity.</p>
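      <p>The boosting loop just described (trees fitted sequentially to residuals, their contributions moderated by a learning rate) can be condensed into a short sketch. This is a generic squared-loss GBM for illustration, not PerpetualBooster itself.</p>

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

class TinyGBM:
    """Minimal gradient boosting for squared loss: each round fits a small
    tree to the residuals (the negative gradient) of the current ensemble,
    and its output is shrunk by a learning rate."""
    def __init__(self, n_rounds=50, learning_rate=0.1, max_depth=2):
        self.n_rounds, self.lr, self.max_depth = n_rounds, learning_rate, max_depth

    def fit(self, X, y):
        self.base_ = float(np.mean(y))        # base score
        self.trees_ = []
        pred = np.full(len(y), self.base_)
        for _ in range(self.n_rounds):
            residual = y - pred               # negative gradient of 1/2*(y - pred)^2
            tree = DecisionTreeRegressor(max_depth=self.max_depth)
            tree.fit(X, residual)
            pred += self.lr * tree.predict(X)
            self.trees_.append(tree)
        return self

    def predict(self, X):
        pred = np.full(len(X), self.base_)
        for tree in self.trees_:
            pred += self.lr * tree.predict(X)
        return pred
```

      <p>Limiting `max_depth` and keeping `learning_rate` small are exactly the regularization levers against overfitting mentioned above.</p>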
      <p>PerpetualBooster [21] is different from other gradient boosting-based models due to its lack of
requirement for hyperparameter optimization. This property is a consequence of a combined use of
two techniques: step size control and generalization control. These techniques enable the balancing of
data fitting and overfitting avoidance, which is a crucial aspect of decision tree learning.</p>
      <sec id="sec-4-1">
        <title>4.1. Step size control</title>
        <p>The step size control is based on the Armijo–Goldstein condition, also known as backtracking line search, a
strategy used in optimization to ensure that the loss function f decreases sufficiently at each iteration.
In each iteration, the Armijo–Goldstein procedure attempts to determine a sufficiently small step size α that satisfies
the following condition:</p>
        <p>f(x + αp) ≤ f(x) + cαm</p>
        <p>In this equation, x represents the current state, p denotes the descent direction, α is the step size, c controls the
sufficiency of the reduction of the objective function, and m is the directional derivative of the function f relative
to the direction p at the point x.</p>
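        <p>For reference, the classic backtracking (largest-step) variant that PerpetualBooster departs from can be written in a few lines; the functions `f`, `grad_f`, and the quadratic example are illustrative choices of ours.</p>

```python
import numpy as np

def backtracking_line_search(f, grad_f, x, p, c=1e-4, shrink=0.5, alpha0=1.0):
    """Armijo-Goldstein backtracking: start from a large trial step and
    shrink it until f(x + alpha*p) <= f(x) + c*alpha*m, where m is the
    directional derivative of f along the descent direction p at x."""
    fx = f(x)
    m = float(grad_f(x) @ p)          # directional derivative (m < 0 for descent)
    alpha = alpha0
    while f(x + alpha * p) > fx + c * alpha * m:
        alpha *= shrink               # step not sufficient: backtrack
    return alpha

# Example: f(x) = ||x||^2 with the steepest-descent direction p = -grad f(x).
f = lambda v: float(v @ v)
grad_f = lambda v: 2.0 * v
x0 = np.array([3.0, -4.0])
alpha = backtracking_line_search(f, grad_f, x0, -grad_f(x0))
```

        <p>PerpetualBooster inverts this search direction, seeking the smallest sufficient step rather than shrinking from the largest, as described next.</p>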
        <p>In PerpetualBooster, the step size control has three differences with respect to the backtracking line search.
The first is that it tries to find the smallest step that satisfies the condition, instead of the largest. It grows
the tree and checks the loss decrease after each split. If the loss decrease exceeds a certain threshold, it
stops growing the tree and continues with the next boosting round. It tries to take the smallest step that
achieves a target loss decrease at each boosting round. This can be called forward tracking tree search,
where the target loss decrease is calculated from the user-defined budget parameter, which can be a value between 0 and 1.</p>
        <p>The second difference in step size control is the parameter m, which is kept constant instead of being updated
at every step. It is calculated before the fitting process using the base score.</p>
        <p>The last difference in PerpetualBooster during the step size control is the control parameter c, which is
calculated from the budget parameter. Its derivation combines an exponential scaling of the budget,
10^(−budget), with the average loss of the base score, (1/n) Σ l(y, ŷ), and a term built from the
reciprocals of powers (ROF) and their truncated series sum (TSS).</p>
        <p>Step size control is an efficacious strategy from the outset. The initial split significantly reduces the
loss, which is particularly beneficial in the initial boosting rounds.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Generalization control</title>
        <p>Before each node split, the data is divided into training and validation sets. The algorithm calculates not only the
training loss but also the validation loss using the separate validation set.</p>
        <p>loss_train = G_train·w + (1/2)·H_train·w², with w = −G_train / H_train</p>
        <p>loss_valid = G_valid·w + (1/2)·H_valid·w²</p>
        <p>In these equations, G represents the sum of the gradient values, H the sum of the Hessian values, and w the
weight term. The generalization term is then calculated using the training and validation losses.</p>
        <p>Generalization = (loss_parent − loss_train) / (loss_parent − loss_valid)</p>
        <p>It lets the node split if generalization is greater than 1; it stops splitting if generalization is less than
1. In other words, it checks if the validation loss decreases compared to the parent loss when splitting.</p>
        <p>The importance of generalization control increases as the boosting process progresses. As algorithmic
learning progresses, the trees that result tend to become shallower because there is less data to inform
the learning process. The algorithm has a built-in stopping mechanism that is triggered if it encounters
a simple tree with poor generalization, less than 1, three times.</p>
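        <p>Under the loss expressions above, the split decision can be sketched as follows. This is a toy reading of the mechanism under our own assumptions, with G and H the per-partition gradient and Hessian sums, not the actual PerpetualBooster implementation.</p>

```python
def should_split(G_train, H_train, G_valid, H_valid, loss_parent):
    """Sketch of generalization control: the node weight w is fitted on
    the training partition, both losses are evaluated with that same
    weight, and the split is accepted only when generalization > 1."""
    w = -G_train / H_train                                 # optimal training weight
    loss_train = G_train * w + 0.5 * H_train * w * w
    loss_valid = G_valid * w + 0.5 * H_valid * w * w
    generalization = (loss_parent - loss_train) / (loss_parent - loss_valid)
    return generalization > 1.0
```

        <p>When the validation gradients agree with the training gradients, the ratio exceeds 1 and the split proceeds; when the validation loss fails to improve on the parent, the split is rejected.</p>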
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Experimental results</title>
      <p>In this section, we evaluate the effectiveness of PerpetualBooster in comparison to alternative models
based on gradient boosting, including LightGBM [22], HistGradientBoosting [23] and GradientBoosting.
The aim is to ascertain whether the novel gradient boosting-based model, PerpetualBooster, can attain
superior performance with more rapid execution in comparison to the existing models. Furthermore,
we also compare PerpetualBooster with other models, including Random Forest, SVM and MLP, which,
as observed in the related work section, typically achieve good performance. For each model, we utilize
GridSearchCV in order to identify the optimal hyperparameter configuration.</p>
      <p>The results are presented in Table 3. It shows the superiority of PerpetualBooster in terms of
training speed when compared to the other evaluated algorithms. PerpetualBooster completes the
training process in the shortest time, equal to 7.39 seconds. The second fastest model, SVM, requires
191.33 seconds, which is approximately 26 times longer than PerpetualBooster. Other algorithms exhibit
considerably longer processing times. Despite the considerable reduction in training time, PerpetualBooster
maintains high levels of accuracy and precision.</p>
      <p>Table 3 compares the following models: PerpetualBooster, LightGBM, HistGradientBoosting,
GradientBoosting, Random Forest, SVM, and MLP.</p>
      <p>Furthermore, Figure 2 shows that, in the absence of hyperparameter optimization, the training time
of the models decreases. In particular, the time required by LightGBM is only marginally higher than
that of PerpetualBooster. Nevertheless, the use of PerpetualBooster remains advantageous because
of its capacity to achieve high results in a shorter time without the need for hyperparameter
optimization. Thanks to the elimination of hyperparameter optimization, PerpetualBooster achieves
good accuracy in a considerably shorter training time compared to other models.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusions</title>
      <p>This paper proposes a novel IDS designed to address the challenges posed by the evolving nature
of cyber threats in industrial environments. We consider two main objectives: minimizing the time
required for model training and reducing computational resource consumption. In order to achieve
these goals, we propose the use of a novel ML model, PerpetualBooster, which has not previously been
used in cybersecurity.</p>
      <p>The experimental results demonstrate that PerpetualBooster enhances both the efficiency and
adaptability of the IDS. It leads to a substantial reduction in training time when compared to conventional
ML models, thereby enabling the system to rapidly adapt to new and emerging cyber threats.
PerpetualBooster also exhibits a marked reduction in computational resource consumption. The proposed
IDS employs PerpetualBooster to reduce training times, lower resource consumption, and minimize
computational impact while maintaining high efficacy in detecting malicious activities. This work
contributes to the field of sustainable cybersecurity and establishes a foundation for future research
into the optimization of machine learning models for real-time, energy-efficient, and adaptive intrusion
detection systems, even in cyber-physical environments.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>This work was partially supported by the project SERICS (PE00000014) under the MUR National
Recovery and Resilience Plan funded by the European Union - NextGenerationEU and by the project
C4SI funded by the PR-FESR ER 2021-2027.</p>
    </sec>
    <sec id="sec-8">
      <title>Declaration on Generative AI</title>
      <p>The author(s) have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Cantelli-Forti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Colajanni</surname>
          </string-name>
          ,
          <article-title>Adversarial fingerprinting of cyber attacks based on stateful honeypots</article-title>
          ,
          <source>in: 2018 International Conference on Computational Science and Computational Intelligence (CSCI)</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>19</fpage>
          -
          <lpage>24</lpage>
          . doi:10.1109/CSCI46756.2018.00012.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <article-title>A fast network intrusion detection system using adaptive synthetic oversampling and lightgbm</article-title>
          ,
          <source>Computers &amp; Security</source>
          <volume>106</volume>
          (
          <year>2021</year>
          )
          <fpage>102289</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M.</given-names>
            <surname>AlSlaiman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. I.</given-names>
            <surname>Salman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Saleh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Enhancing false negative and positive rates for eficient insider threat detection</article-title>
          ,
          <source>Computers &amp; Security</source>
          <volume>126</volume>
          (
          <year>2023</year>
          )
          <fpage>103066</fpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S0167404822004588. doi:10.1016/j.cose.2022.103066.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Sajid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. R.</given-names>
            <surname>Malik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Almogren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. S.</given-names>
            <surname>Malik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. H.</given-names>
            <surname>Khan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Tanveer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. U.</given-names>
            <surname>Rehman</surname>
          </string-name>
          ,
          <article-title>Enhancing intrusion detection: a hybrid machine and deep learning approach</article-title>
          ,
          <source>Journal of Cloud Computing</source>
          <volume>13</volume>
          (
          <year>2024</year>
          )
          <fpage>123</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S. A.</given-names>
            <surname>Bakhsh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Khan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Ahmed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. S.</given-names>
            <surname>Alshehri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Ali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ahmad</surname>
          </string-name>
          ,
          <article-title>Enhancing iot network security through deep learning-powered intrusion detection system</article-title>
          ,
          <source>Internet of Things</source>
          <volume>24</volume>
          (
          <year>2023</year>
          )
          <fpage>100936</fpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>