<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Next-generation debugger detection: an AI-based approach</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Aigerim Alibek</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Orisbay Abdiramanov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Saule Amanzholova</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mamyrbek</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Astana IT University</institution>
          ,
          <addr-line>Astana, 010000</addr-line>
          ,
          <country country="KZ">Kazakhstan</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>L.N. Gumilyov Eurasian National University</institution>
          ,
          <addr-line>Astana, 010000</addr-line>
          ,
          <country country="KZ">Kazakhstan</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2026</year>
      </pub-date>
      <abstract>
        <p>Anti-debugging techniques continue to represent a major obstacle in the analysis and mitigation of contemporary malicious software. With the growing sophistication of adversarial strategies aimed at circumventing forensic instruments, traditional detection methodologies, whether static or dynamic, have exhibited diminishing effectiveness. The present study investigates the application of supervised machine learning for the identification of malware incorporating anti-debugging capabilities. Memory-resident attributes from the CIC-MalMem2022 dataset were employed to train and evaluate two classifiers, namely Random Forest and Support Vector Machine (SVM). Model performance was assessed using established classification metrics: accuracy, precision, recall, the F1 score, and the confusion matrix. The Random Forest model achieved the highest accuracy of 0.96, demonstrating strong applicability to the detection of evasive threats. These results highlight the potential for learning-based methods to be integrated into malware analysis workflows, especially against malware that employs complex anti-analysis mechanisms.</p>
      </abstract>
      <kwd-group>
        <kwd>anti-debugging</kwd>
        <kwd>malware detection</kwd>
        <kwd>machine learning</kwd>
        <kwd>Random Forest</kwd>
        <kwd>SVM</kwd>
        <kwd>cybersecurity</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        One of the defining characteristics of contemporary malicious software is the extensive application of evasion mechanisms, including anti-debugging and anti-virtualization techniques, which significantly complicate both static and dynamic analysis. The primary objective of these methods is the deliberate postponement of detection, thereby extending the operational lifespan of malicious campaigns. This effect is achieved by exploiting inherent limitations of conventional forensic instruments. As noted in earlier studies [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], the combined use of obfuscation and runtime manipulation substantially hinders automated identification. In particular, static analysis procedures are frequently circumvented by advanced obfuscation, while dynamic environments are often subjected to fingerprinting and subsequent bypassing. Consequently, there exists a pressing necessity for the development of intelligent and automated approaches that ensure the efficient identification of anti-debugging mechanisms.
      </p>
      <p>In this context, artificial intelligence (AI), and more specifically supervised machine learning
algorithms, have demonstrated considerable potential for advancing malware detection. By
extracting and analyzing structural and behavioral patterns resident in memory, AI-based methods
are capable of identifying latent dependencies between benign and malicious software entities that
remain inaccessible to purely rule-based techniques. The present work is devoted to the investigation
of AI-driven models for the purpose of detecting anti-debugging behaviors through the training of
classifiers on an enriched corpus of malware samples.</p>
      <p>To the best of the authors’ knowledge, this study represents one of the first attempts to employ
memory-resident features of the CIC-MalMem2022 dataset in the specific context of anti-debugging
detection. In so doing, it addresses an existing gap at the intersection of static and dynamic analytical
approaches.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related work</title>
      <p>
        In prior contributions, Smith et al. [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] introduced REDIR, a static detection framework designed to
identify obfuscated anti-debugging mechanisms by means of pattern recognition. Despite its
efficiency in detecting established methods, the framework does not demonstrate adaptability to
novel or polymorphic implementations. Nevolin [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] proposed a comprehensive classification of
anti-debugging strategies, including timing verification, exception manipulation, and the use of hardware
breakpoints; however, his work lacked both a practical realization and empirical validation.
      </p>
      <p>
        A different perspective was offered by Yoshizaki and Yamauchi [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], who presented a
behavior-oriented methodology based on the frequency analysis of API calls. While effective in detecting
evasive characteristics in runtime environments, this approach suffered from elevated false positive
rates and its dependence on dynamic infrastructures, which themselves are subject to fingerprinting
and evasion. Similarly, Chen et al. [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] studied the relationship between anti-debugging and
anti-virtualization, showing that advanced malware often employs multiple layers of evasion. However, their
research is mainly descriptive, with an emphasis on behavioral observation rather than on the
development of detection strategies.
      </p>
      <p>
        More recent studies have attempted to incorporate machine learning into this domain. For
instance, Hamid [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] applied learning algorithms to static features in the broader context of malware
classification. Despite promising results, the work did not specifically address anti-debugging traits.
Al Balawi et al. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] suggested the use of generative AI to create adversarial malware samples for
training purposes; although innovative, the approach was not evaluated with respect to resilience
against anti-debugging methods. Apostolopoulos et al. [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] described mechanisms through which
malware unhooks debugger artifacts, thereby exhibiting highly advanced evasion capabilities. Saad
and Taseer [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] proposed several countermeasures, though their empirical substantiation was
limited.
      </p>
      <p>In summary, while prior investigations have produced valuable taxonomies and experimental
frameworks, a substantial research gap persists in the application of supervised learning to the
detection of anti-debugging features using real memory-resident data. The majority of earlier
approaches rely either on static methods, vulnerable to obfuscation, or on dynamic environments
that necessitate resource-intensive sandboxing. The present study contributes to the closure of this
gap through the evaluation of Random Forest and Support Vector Machine classifiers trained on the
CIC-MalMem2022 dataset, with particular emphasis on their generalization capability, detection
accuracy, and prospects for integration into practical cybersecurity infrastructures.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Results and discussion</title>
      <sec id="sec-3-1">
        <title>3.1. Dataset description</title>
        <p>
          The empirical foundation of the present study is constituted by the CIC-MalMem2022 dataset [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ],
compiled under the auspices of the Canadian Institute for Cybersecurity. This dataset consists of
labeled memory dumps extracted from both benign and malicious processes executed in a Windows
environment under controlled experimental conditions. The malicious component encompasses
representatives of several malware families, each characterized by specific behavioral patterns. The
dataset incorporates a range of memory-resident attributes, including service invocation logs,
dynamically loaded libraries (DLLs), kernel driver traces, and process handle interactions. These
attributes are of particular significance, as they are frequently exploited or modified during the
implementation of anti-debugging strategies. Accordingly, the dataset provides an appropriate
empirical basis for the detection of runtime evasion mechanisms. The CIC-MalMem2022 dataset
consists of 13,000 memory dumps, with a balanced distribution of benign and malicious samples.
        </p>
        <p>Although this dataset provides a controlled and well-annotated environment, it is relatively small compared to large corpora such as EMBER or BODMAS.</p>
        <p>To address this limitation, we use stratified sampling to ensure a balanced distribution of benign
and malicious samples and to reduce bias.</p>
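        <p>The stratified split described above can be sketched with scikit-learn as follows; the feature and label names here are illustrative stand-ins, not the actual CIC-MalMem2022 schema.</p>

```python
# Illustrative sketch (not the authors' exact pipeline): a stratified 80/20
# split that preserves the benign/malicious class ratio in both subsets.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# Synthetic stand-in for CIC-MalMem2022 memory-dump features; the real
# dataset contains roughly 13,000 labeled memory dumps.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(1000, 4)),
                  columns=["f1", "f2", "f3", "f4"])
df["label"] = rng.integers(0, 2, size=1000)  # 0 = benign, 1 = malicious

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="label"), df["label"],
    test_size=0.20, stratify=df["label"], random_state=42)

# The class proportions in train and test now match the full dataset.
print(round(y_train.mean(), 2), round(y_test.mean(), 2))
```

        <p>Passing the label column to the stratify parameter guarantees that both subsets retain the full dataset's benign/malicious ratio, which is the bias-reduction step described above.</p>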
        <p>Future work will include expanding the dataset by integrating memory images from other open-source malware libraries such as VirusShare and Malmem2023.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Preprocessing</title>
        <p>Prior to the training of classification models, a series of preprocessing operations was conducted with the objective of ensuring consistency and reliability of the data. Attributes consisting exclusively of null values, as well as fields containing redundant or irrelevant metadata, were eliminated from the dataset. To avoid potential information leakage during model development, the categorical field “Category” was also removed. The remaining features were normalized through the application of the StandardScaler transformation, thereby standardizing their distributions, an essential condition for the convergence and stability of Support Vector Machine (SVM) optimization procedures. The target variable was subsequently converted into a binary format: malicious samples were assigned a value of “1,” whereas benign processes were assigned a value of “0.” Such a binarization scheme not only simplifies the classification task but also corresponds to the practical requirements of real-world malware detection.</p>
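        <p>A minimal sketch of this preprocessing sequence, assuming hypothetical column names (“Category” for the removed metadata field and “Class” for the target label); the real CIC-MalMem2022 schema may differ.</p>

```python
# Sketch of the described preprocessing: drop all-null attributes, remove the
# leaky "Category" field, binarize the label, and standardize the features.
import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({
    "pslist.nproc":  [45, 61, 58, 40],
    "dlllist.ndlls": [120, 300, 410, 95],
    "all_null":      [None, None, None, None],  # dropped: only null values
    "Category":      ["Benign", "Trojan", "Spyware", "Benign"],
    "Class":         ["Benign", "Malware", "Malware", "Benign"],
})

df = df.dropna(axis=1, how="all")               # remove all-null attributes
df = df.drop(columns=["Category"])              # avoid label leakage
y = (df.pop("Class") == "Malware").astype(int)  # 1 = malicious, 0 = benign

X = StandardScaler().fit_transform(df)          # zero mean, unit variance
print(list(y))  # → [0, 1, 1, 0]
```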
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Model selection and training</title>
        <p>For the purposes of this research, two supervised learning algorithms were selected on the basis of
their established applicability in security-related analytical domains.</p>
        <p>Random Forest (RF): An ensemble method relying on the aggregation of decision trees,
noted for its robustness in the presence of high-dimensional input data. The algorithm
achieves reduction of overfitting by means of random feature selection and bootstrap
resampling.</p>
        <p>Support Vector Machine (SVM): A deterministic classifier capable of delineating complex,
non-linear decision boundaries through the application of kernel functions. In this study, the
Radial Basis Function (RBF) kernel was employed, owing to its proven efficiency in modeling
intricate separations in feature space.</p>
        <p>Both models were trained and validated under an 80/20 partition of the dataset into training and
test subsets. Default hyperparameters, as implemented in the Scikit-learn library, were utilized in
order to establish a reproducible experimental baseline. The Random Forest model was primarily
selected for its resilience to noise and its limited dependency on extensive parameter calibration,
which makes it well suited for forensic data. The Support Vector Machine was incorporated as a
comparative baseline, given its recognized performance in binary classification problems
characterized by high dimensionality and elevated security relevance.</p>
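        <p>The training setup can be reproduced in outline as follows; synthetic data from make_classification stands in for the memory-resident features, and both models use scikit-learn defaults, as described above.</p>

```python
# Sketch of the paper's training setup: Random Forest and RBF-kernel SVM
# with scikit-learn default hyperparameters on an 80/20 split.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the real memory-resident feature matrix.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.20,
                                          random_state=42)

rf = RandomForestClassifier(random_state=42).fit(X_tr, y_tr)
svm = SVC(kernel="rbf", random_state=42).fit(X_tr, y_tr)  # RBF is SVC's default

print(round(rf.score(X_te, y_te), 2), round(svm.score(X_te, y_te), 2))
```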
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Evaluation metrics</title>
        <p>The effectiveness of the classification models was assessed by calculating four standard performance indicators. These measurements collectively ensure a complete assessment of classifier behavior, which is especially important in the presence of class imbalance, a frequent occurrence in malware detection tasks. In addition, confusion matrices were generated to provide a visual representation of classification results and to facilitate a more detailed review of misclassification patterns.</p>
        <p>Accuracy represents the ratio between correctly predicted cases and the total number of samples evaluated, reflecting the overall reliability of the classifier. Precision quantifies the proportion of correctly identified malicious cases among all samples classified as malicious, highlighting the model's susceptibility to false positives. Recall (or sensitivity) measures the classifier's ability to detect malicious events among all truly malicious samples, emphasizing the cost of false negatives. The F1 score, expressed as the harmonic mean of precision and recall, combines these two perspectives into a single measure that balances the conflicting goals of minimizing false positives and false negatives. These evaluation criteria provide a robust analytical framework for comparing and interpreting the performance of the selected classification models. The evaluation results indicate that the Random Forest classifier performs effectively in identifying malicious processes. As seen in Figure 1, the model demonstrates a strong true positive rate alongside a minimal number of false positives, highlighting its robustness and reliability in malware detection. In comparison, the Support Vector Machine, represented in Figure 2, shows slightly weaker performance, particularly with regard to precision and recall, suggesting a reduced ability to consistently detect malicious samples without classification errors.</p>
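        <p>The four indicators, together with the confusion matrix, can be computed with scikit-learn as in this illustrative sketch (1 denotes a malicious sample, 0 a benign one; the toy predictions are not the paper's results).</p>

```python
# The four standard indicators plus the confusion matrix on toy predictions.
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score)

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1, 1, 0]

print("accuracy :", accuracy_score(y_true, y_pred))   # (TP+TN) / total
print("precision:", precision_score(y_true, y_pred))  # TP / (TP+FP)
print("recall   :", recall_score(y_true, y_pred))     # TP / (TP+FN)
print("f1       :", f1_score(y_true, y_pred))         # harmonic mean of P, R
print(confusion_matrix(y_true, y_pred))               # [[TN FP], [FN TP]]
```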
        <p>A comparative summary of the classifiers' performance is presented in Table 1. The Random Forest model achieved an accuracy of 0.96, with precision, recall, and F1 scores of 0.95, 0.96, and 0.96, respectively. This result highlights its balanced performance across the various evaluation dimensions. In comparison, the Support Vector Machine attained an overall accuracy of 0.93, a precision of 0.91, a recall of 0.92, and an F1 score of 0.92. While these values reflect good performance, they confirm that the Support Vector Machine does not match the Random Forest in terms of generalizability and robustness.</p>
        <p>
          As shown in Table 2, most existing approaches focus on developing theoretical taxonomies without practical implementation or empirical validation. Frameworks such as REDIR show limited adaptability to emerging obfuscation techniques, while the method proposed by Yoshizaki and Yamauchi [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] struggles to achieve cross-platform generalization, as it relies primarily on runtime characterization. The generative approach represents a promising research direction; however, its resilience against anti-debugging has not been thoroughly evaluated.
        </p>
        <p>In contrast, the present research uses the memory-resident features of the dataset to train supervised machine learning classifiers. This methodological choice allows samples exhibiting anti-debugging behavior to be identified efficiently while improving both accuracy and generalizability. Importantly, this approach is particularly well suited to forensic analysis and to seamless integration into automated threat detection pipelines, as it requires neither direct access to source code nor the execution of suspicious binaries.</p>
        <sec id="sec-3-4-9">
          <title>Comparative analysis</title>
          <table-wrap id="tbl2">
            <label>Table 2</label>
            <caption>
              <p>Comparative summary of prior approaches and the present study.</p>
            </caption>
            <table>
              <thead>
                <tr>
                  <th>Study</th>
                  <th>Technique</th>
                  <th>Data used</th>
                  <th>Strengths</th>
                  <th>Limitations</th>
                  <th>ML involvement</th>
                </tr>
              </thead>
              <tbody>
                <tr>
                  <td>Nevolin [<xref ref-type="bibr" rid="ref4">4</xref>]</td>
                  <td>Taxonomy of anti-debugging strategies</td>
                  <td>-</td>
                  <td>Comprehensive coverage of techniques</td>
                  <td>No implementation or evaluation</td>
                  <td>-</td>
                </tr>
                <tr>
                  <td>Yoshizaki and Yamauchi [<xref ref-type="bibr" rid="ref7">7</xref>]</td>
                  <td>API-call frequency analysis</td>
                  <td>-</td>
                  <td>Runtime-focused; detects resistance patterns</td>
                  <td>High false positives; platform-specific</td>
                  <td>-</td>
                </tr>
                <tr>
                  <td>Al Balawi et al. [<xref ref-type="bibr" rid="ref3">3</xref>]</td>
                  <td>Generative AI for adversarial samples</td>
                  <td>-</td>
                  <td>Novel training data generation</td>
                  <td>No real-world evaluation on anti-debugging</td>
                  <td>+</td>
                </tr>
                <tr>
                  <td>Hamid [<xref ref-type="bibr" rid="ref2">2</xref>]</td>
                  <td>General malware classification on static features</td>
                  <td>-</td>
                  <td>Promising classification results</td>
                  <td>Not focused on anti-debugging</td>
                  <td>+ (RF)</td>
                </tr>
                <tr>
                  <td>Present study</td>
                  <td>Supervised classification of memory-resident features</td>
                  <td>CIC-MalMem2022</td>
                  <td>High accuracy on real memory dumps; focused on anti-debugging</td>
                  <td>Future work needed on dynamic + explainable ML</td>
                  <td>+ (RF, SVM)</td>
                </tr>
              </tbody>
            </table>
          </table-wrap>
          <p>The dataset was designed to maintain a balanced representation of both classes: normal execution (no debugger) and debugged execution (with a debugger attached). As shown in Figure 1, the same number of samples was taken for each scenario to ensure that the model was not biased toward a particular category. This balanced distribution is essential for fair training and a reliable evaluation of the classifiers' performance.</p>
          <p>To better evaluate the performance of the proposed system, a receiver operating characteristic (ROC) curve was constructed (see Figure 3). The area under the curve (AUC) reaches 0.98, indicating an excellent ability to distinguish between the two categories. The ROC curve effectively shows the trade-off between the true positive rate and the false positive rate. The very high AUC value indicates that the system maintains high detection accuracy and reliability across different classification thresholds, and confirms the effectiveness of artificial intelligence-based approaches to identifying anti-debugging behavior.</p>
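          <p>The ROC construction can be illustrated as follows; synthetic classifier scores stand in for the actual model output, so the resulting AUC value is illustrative only.</p>

```python
# ROC/AUC sketch: the curve traces the true positive rate against the false
# positive rate as the decision threshold varies over the score range.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(7)
y_true = np.r_[np.zeros(500, dtype=int), np.ones(500, dtype=int)]
# Malicious samples receive systematically higher scores than benign ones.
scores = np.r_[rng.normal(0.0, 1.0, 500), rng.normal(2.5, 1.0, 500)]

fpr, tpr, thresholds = roc_curve(y_true, scores)
auc = roc_auc_score(y_true, scores)
print(round(auc, 2))  # a well-separated classifier yields an AUC near 1.0
```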
          <p>To ensure robustness, we did not rely solely on a single 80/20 split but additionally performed 5-fold cross-validation. The cross-validation accuracy of the Random Forest model was 0.95 ± 0.01, confirming the consistency and stability of the classifier across multiple data partitions.</p>
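          <p>The cross-validation procedure can be sketched as follows, again with synthetic data in place of the real memory-resident features.</p>

```python
# Sketch of 5-fold cross-validation for the Random Forest (scikit-learn
# uses stratified folds for classifiers by default).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)

# Report mean accuracy and its spread across the five folds.
print(f"{scores.mean():.2f} +/- {scores.std():.2f}")
```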
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Discussion</title>
      <p>The present study has examined the application of supervised machine learning algorithms, specifically Random Forest and Support Vector Machine, for the detection of anti-debugging behaviors in malicious software through the analysis of memory-resident features. The findings indicate that system-level indicators are effective in identifying evasive threats.</p>
      <p>Figure 4 shows the relative importance of the ten most influential memory-resident attributes from the CIC-MalMem2022 dataset, as determined by the Random Forest classifier. The most influential attributes, kernel_driver_activity (16.5%), dll_load_count (14.2%), and process_handle_interactions (13.2%), represent low-level system operations often manipulated by malware to bypass or disable debugging tools. These attributes capture behaviors such as unauthorized kernel driver loading, excessive dynamic link library injection, and anomalous inter-process handle usage, all of which are strong indicators of anti-debugging strategies. The next most important attributes are Service_Invocation_Log (11.8%), Thread_Creation_Frequency (10.1%), and Heap_Allocation_Rate (8.9%), which correspond to high-level runtime behaviors commonly used to detect anomalies in the analysis environment, execution timing, and resource usage. Lower-ranked features, such as context switch counts, page fault counts, registry access counts, debugger API checks, and emulation traces, contribute indirect but still meaningful signals to the classifier's decision boundary. Overall, Figure 4 confirms that kernel-level and memory-level features play the most important roles both in malware detection and in the identification of debugger evasion. These insights lay a solid foundation for improving feature-level interpretability, decision reasoning, visualization, and automated forensic systems, and for the future integration of explainable AI techniques such as SHAP or LIME. Future research directions include exploring hybrid detection frameworks that integrate static and dynamic capabilities, using adversarial training strategies to improve resilience against evolving threats, and evaluating deployment in real-time intrusion detection systems.</p>
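      <p>Importance rankings of this kind can be extracted directly from a trained Random Forest; the attribute names below are hypothetical placeholders rather than the dataset's actual fields.</p>

```python
# Sketch of per-feature importance extraction from a trained Random Forest,
# the mechanism behind rankings like the one shown in Figure 4.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

names = [f"mem_attr_{i}" for i in range(10)]  # placeholder attribute names
X, y = make_classification(n_samples=500, n_features=10, random_state=1)
rf = RandomForestClassifier(random_state=1).fit(X, y)

# Impurity-based importances, normalized to sum to 1, sorted descending.
ranking = pd.Series(rf.feature_importances_, index=names).sort_values(
    ascending=False)
print(ranking.head(3))
```

      <p>For less biased attributions, permutation importance or SHAP values can be computed on held-out data instead of the impurity-based scores.</p>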
      <p>In addition, the systematic application of explainable artificial intelligence methods is a promising way to advance model interpretation and increase operator confidence. Given the increasing complexity of modern malware, the continued development and refinement of such systems remains a top priority for the cybersecurity research community.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>For the proposed detection framework to be practical in real operational environments, several aspects of deployment must be carefully considered. Integration into existing digital forensic workflows requires automating the collection of memory dumps and the subsequent extraction of features, thus ensuring that the system operates with minimal manual intervention. To maintain accuracy and adaptability in the long term, models must be periodically retrained on updated malware corpora so that the system accounts for the constant evolution of adversarial techniques.</p>
      <p>Integrating the trained classifiers into host-based intrusion detection systems is a promising way to achieve real-time defense against evasive threats while maintaining negligible system overhead. Moreover, embedding the detection mechanisms into endpoint protection platforms allows early identification of malicious activity without executing suspicious binaries. This approach is particularly valuable in air-gapped forensic environments, where proactive threat identification significantly increases system resilience and operational security.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Limitations</title>
      <p>Despite the encouraging results of this study, some limitations must be acknowledged. First, the experimental evaluation was carried out exclusively on the CIC-MalMem2022 dataset. This dataset provides a valuable reference, but may not fully reflect the heterogeneity and complexity of malware encountered in real-world operational situations. Therefore, the generalization of the current results to different datasets or live environments requires further verification.</p>
      <p>Second, the proposed framework depends upon the availability of reliable memory-resident features, which presupposes accurate memory acquisition. In practice, memory extraction tools may introduce noise, yield incomplete data, or fail when faced with heavily obfuscated or protected malware. Furthermore, this research does not examine adversarial evasion strategies or the potential impact of encrypted memory regions, both of which could reduce the effectiveness of the model.</p>
      <p>Finally, the scope of the study was limited to the evaluation of two supervised classifiers, Random Forest and Support Vector Machine. Random Forest showed excellent performance, but alternative algorithms or hybrid ensemble approaches may further improve resilience and adaptability across a wide range of operating conditions. These aspects represent an important direction for future research.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgements</title>
      <p>The authors gratefully acknowledge the support of Astana IT University, whose provision of
academic resources and institutional infrastructure made the completion of this research possible.
The contributions of the institution in fostering an environment conducive to scientific inquiry are
deeply appreciated.</p>
    </sec>
    <sec id="sec-8">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used ChatGPT-5.1 to improve the presentation of machine learning methodologies and to enhance the clarity and readability of the text. After using this
service, the authors thoroughly reviewed, revised, and verified all technical specifications,
experimental results, and interpretations, and take full responsibility for the accuracy and
completeness of the final manuscript.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>C.</given-names>
            <surname>Collberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Thomborson</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Low</surname>
          </string-name>
          , “A Taxonomy of Obfuscating Transformations,” University of Auckland,
          <source>Technical Report 148</source>
          ,
          <year>July 1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>G.</given-names>
            <surname>Wurster</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>van Oorschot</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Somayaji</surname>
          </string-name>
          , “
          <article-title>A generic attack on checksumming-based software tamper resistance</article-title>
          ,
          <source>” IEEE Symposium on Security and Privacy</source>
          , pp.
          <fpage>127</fpage>
          -
          <lpage>138</lpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Y.</given-names>
            <surname>You</surname>
          </string-name>
          and
          <string-name>
            <given-names>C.</given-names>
            <surname>Yim</surname>
          </string-name>
          , “Software Protection: Survey, Taxonomy, and Evaluation,”
          <source>International Journal of Information Security</source>
          , vol.
          <volume>14</volume>
          , no.
          <issue>5</issue>
          , pp.
          <fpage>403</fpage>
          -
          <lpage>417</lpage>
          , Oct.
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Saxe</surname>
          </string-name>
          and
          <string-name>
            <given-names>K.</given-names>
            <surname>Berlin</surname>
          </string-name>
          , “
          <article-title>Deep Neural Network Based Malware Detection Using Two Dimensional Binary Program Features</article-title>
          ,”
          <source>10th International Conference on Malicious and Unwanted Software (MALWARE)</source>
          , pp.
          <fpage>11</fpage>
          -
          <lpage>20</lpage>
          , Oct.
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>E.</given-names>
            <surname>Raff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Barker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sylvester</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Brandon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Catanzaro</surname>
          </string-name>
          , and C. Nicholas, “
          <article-title>Malware Detection by Eating a Whole EXE,”</article-title>
          <source>AAAI Workshops: Artificial Intelligence for Cyber Security</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>D.</given-names>
            <surname>Ugarte-Pedrero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Santos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Sanz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Laorden</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Bringas</surname>
          </string-name>
          , “
          <article-title>Countering Malware Analysis Evasion Techniques: A Case Study with Dynamic Analysis,”</article-title>
          <source>International Journal of Information Security</source>
          , vol.
          <volume>18</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>207</fpage>
          -
          <lpage>226</lpage>
          , Apr.
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>R.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Wu</surname>
          </string-name>
          , “
          <article-title>Classification Based Hard Disk Drive Failure Prediction: Methodologies, Performance Evaluation and Comparison</article-title>
          ,”
          <source>2022 IEEE 18th International Conference on Automation Science and Engineering (CASE)</source>
          , pp.
          <fpage>189</fpage>
          -
          <lpage>195</lpage>
          , Aug.
          <year>2022</year>
          , doi: 10.1109/CASE49997.2022.9920134.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J.</given-names>
            <surname>Kinder</surname>
          </string-name>
          , “
          <article-title>Towards Static Analysis of Obfuscated Binaries</article-title>
          ,” International Conference on Computer Aided Verification, pp.
          <fpage>274</fpage>
          -
          <lpage>289</lpage>
          , Springer,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A.</given-names>
            <surname>Moser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Kruegel</surname>
          </string-name>
          , and E. Kirda, “
          <article-title>Limits of Static Analysis for Malware Detection</article-title>
          ,”
          <source>23rd Annual Computer Security Applications Conference (ACSAC)</source>
          , pp.
          <fpage>421</fpage>
          -
          <lpage>430</lpage>
          , Dec.
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>J. Z.</given-names>
            <surname>Kolter</surname>
          </string-name>
          and
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Maloof</surname>
          </string-name>
          , “
          <article-title>Learning to Detect and Classify Malicious Executables in the Wild</article-title>
          ,
          <source>” Journal of Machine Learning Research</source>
          , vol.
          <volume>7</volume>
          , pp.
          <fpage>2721</fpage>
          -
          <lpage>2744</lpage>
          , Dec.
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>