<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>CEUR Workshop Proceedings</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Annotation-Based Static Verification of Algorithmic Complexity in Java</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Aleksandr V. Samedov</string-name>
          <uri>http://samedovav.web.elte.hu</uri>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Zoltán Porkoláb</string-name>
          <uri>https://gsd.web.elte.hu</uri>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>ELTE Eötvös Loránd University, Faculty of Informatics</institution>,
          <addr-line>1117 Budapest, Pázmány Péter sétány 1/C</addr-line>,
          <country country="HU">Hungary</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Time and space complexities are fundamental concerns in modern software development, particularly in performance-critical systems. This paper proposes a novel annotation-based approach for verifying algorithmic complexity in Java. By integrating complexity metadata into the source code through Java annotations, developers can automatically check whether their implementations conform to the expected computational bounds. The system leverages static analysis techniques and a custom framework built using JavaParser. Evaluation across common algorithm patterns and real-world projects demonstrated the effectiveness and feasibility of this lightweight and developer-friendly method.</p>
      </abstract>
      <kwd-group>
        <kwd>Algorithmic Complexity</kwd>
        <kwd>Static Analysis</kwd>
        <kwd>Java Annotations</kwd>
        <kwd>Software Verification</kwd>
        <kwd>Code Quality</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        In this study, we extend this direction by introducing a novel annotation-based approach for time
complexity verification in Java programs. Our system enables developers to specify the expected
complexity using annotations such as @Prove and @BelieveMe [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. These annotations are processed
during compilation or static analysis, leveraging the JavaParser framework [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] to construct abstract
syntax trees (ASTs) and apply complexity detectors that recognize common patterns, such as loop
nesting, recursion, and use of built-in sorting APIs.
      </p>
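      <p>As a first illustration of the intended workflow, consider the following hypothetical sketch. The attribute names mirror the annotation definition given later in Section 4, but RUNTIME retention and the enum values are assumptions made here solely so the declaration can be inspected reflectively in a standalone snippet; the actual framework uses CLASS retention and an annotation processor.</p>

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// Simplified stand-ins for the framework's types (illustrative only).
enum Complexity { O_1, O_LOG_N, O_N, O_N_LOG_N }

@Retention(RetentionPolicy.RUNTIME)   // the real @Prove uses CLASS retention
@Target(ElementType.METHOD)
@interface Prove {
    Complexity complexity();
    String n();
    String[] count() default {};
}

public class ProveDemo {
    // The developer states the expected bound right next to the implementation.
    @Prove(complexity = Complexity.O_N, n = "arr.length")
    static int linearSearch(int[] arr, int key) {
        for (int i = 0; i != arr.length; i++) {
            if (arr[i] == key) return i;   // one pass over the input: O(n)
        }
        return -1;
    }

    // A checker (here a trivial reflective lookup) can read the declared bound.
    static Complexity declaredComplexity(String methodName) throws Exception {
        for (Method m : ProveDemo.class.getDeclaredMethods()) {
            if (m.getName().equals(methodName)) {
                Prove p = m.getAnnotation(Prove.class);
                if (p != null) return p.complexity();
            }
        }
        return null;
    }
}
```

      <p>In the real pipeline this reflective lookup is replaced by an annotation processor that compares the declared bound with what the detectors infer from the AST.</p>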
      <p>This annotation-based model offers several advantages: (1) it allows complexity expectations to
be embedded directly in the source code, improving traceability and documentation; (2) it enables
automated checking without requiring a separate specification language; and (3) it offers immediate
feedback to developers during builds, helping enforce performance constraints throughout the software
lifecycle.</p>
      <p>This paper is organized as follows: In Section 2, we review the foundational and recent literature
on code complexity metrics and static analysis. Section 3 describes our proposed approach and the
annotation design. Section 4 outlines the architecture of our prototype tool, and Section 5 presents an
empirical evaluation of its accuracy and performance. We conclude in Section 6 with a summary and
directions for future work.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>In this section, we review the state-of-the-art literature on static analysis approaches for complexity
measurement and on annotation-based complexity analysis related to our results.</p>
      <sec id="sec-2-1">
        <title>2.1. Static Analysis for Complexity Measurement</title>
        <p>
          Cousot and Cousot [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] pioneered the concept of abstract interpretation, a fundamental approach in
static analysis that facilitates the estimation of program behaviors, such as complexity assessment.
Their framework underpins contemporary methods for complexity detection used in compilers and
analytical tools.
        </p>
        <p>WCET Analysis. A significant contribution to the field was presented by Schoeberl et al. [12],
who investigated worst-case execution time (WCET) analysis specific to a Java processor. The
authors explored various techniques for establishing upper limits on execution time, emphasizing Java’s
execution model and the constraints of real-time systems. Their findings underscore the necessity of
precise execution time predictions for essential applications, particularly in embedded and real-time
environments. Additionally, this study offers valuable perspectives on how contemporary processors
manage the dynamic characteristics of Java, which directly affect complexity analysis.</p>
        <p>Furthermore, conventional techniques for complexity analysis, as outlined in [13], offer fundamental
insights into algorithm efficiency. When these methods are integrated with Worst-Case Execution Time
(WCET) analysis, they facilitate a more thorough evaluation of software performance, particularly in
the context of applications that are sensitive to timing.</p>
        <p>Scalable Static Analysis. More recent work by Lillack et al. [14] introduced scalable static analysis
methods to efficiently assess code complexity in large-scale Java applications. Their system focuses
on reducing the computational overhead while maintaining the precision of detecting costly operations.</p>
        <p>Compile-Time Complexity Verification via Templates. A novel approach to complexity
evaluation was presented by Corriero et al. [15], who developed a tool that leveraged C++ templates for
the compile-time certification of polynomial-time computability. Their method demonstrates how
language-level constructs can be used not only for abstraction but also as a formal mechanism for
certifying computational limits. By encoding complexity logic into template instantiations, their approach
enables the detection of non-polynomial patterns at compile time, ensuring that the analyzed code
conforms to the desired complexity constraints. This technique, rooted in C++, provides a conceptual
foundation for annotation-based approaches in Java, where the compile-time metadata can similarly
enforce algorithmic limits.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Annotation-Based Complexity Analysis</title>
        <p>Dynamic Annotation-Based Code Analysis. The use of annotations to enhance static analysis
has been investigated in several fields. Ernst et al. [16] introduced annotation-based code analysis
aimed at enhancing program correctness, which is consistent with our methodology of employing Java
annotations to articulate and identify computational complexity.</p>
        <p>Java Annotations for Performance Modelling. Krämer et al. [17] developed a framework
that leverages Java annotations for performance modelling, enabling developers to highlight code
sections critical to performance. Our research builds upon this concept by concentrating specifically on
annotations related to time and space complexity, rather than general performance modelling.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <p>
        The intricacy of software plays a pivotal role in determining the maintainability, scalability, and
efficiency of an application. As digital systems expand in both size and capability, their codebases tend
to become more elaborate, leading to increased technical debt and slower development timelines. When
functions are overly complex, they become more difficult to interpret, troubleshoot, and update, which
can negatively affect team productivity and the long-term viability of a project [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Thus, evaluating
and managing complexity at the function level is crucial in large-scale systems and applications with
stringent performance requirements [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>Traditionally, assessing code complexity involves thorough manual review and the use of external
static analysis utilities. Although these methods are effective, they often require significant setup and
maintenance efforts. In addition, such tools usually operate separately from the main development
environment, which can result in disjointed workflows and reduced efficiency.</p>
      <p>In contrast, Java annotations provide a streamlined and declarative method for embedding
complexity-related information directly into the source code [18]. This enables developers to flag complexity
expectations where the code is written, making it easier to identify potential issues. It also promotes
better documentation of performance assumptions and reduces the reliance on third-party tools. Using
annotations, engineers can indicate the anticipated time or memory complexity of methods, which
can be verified during compilation or through dedicated analysis tools. This localized strategy supports
faster feedback and encourages sound software design principles.</p>
      <p>Incorporating annotations into complexity monitoring signifies a progressive transition towards more
automated and developer-centric approaches for managing software complexity within contemporary
engineering methodologies.</p>
      <sec id="sec-3-1">
        <title>3.1. Problem Statement</title>
        <p>Manual evaluation of the complexity of software functions is often labor-intensive and susceptible to
human error, particularly in extensive systems involving multiple modules and contributors. As the
volume and intricacy of codebases increase, the practicality of relying solely on manual inspections
declines, often resulting in inconsistent evaluations and missed issues in complex systems.
Although tools such as Checkstyle and PMD offer helpful guidance on code quality and surface-level
complexity indicators, they typically require considerable configuration and are not well-suited for
precise method-level complexity enforcement [19]. These solutions are generally geared toward broad
code quality checks and lack native support for declaring and validating complexity expectations
directly in the code.</p>
        <p>To address these limitations, this study presents a Java-oriented solution for incorporating complexity
expectations via annotations. This method entails integrating complexity-related specifications directly
within the method definitions, facilitating automated verification, either during compilation or through
additional analytical tools. A working prototype was developed to assess the feasibility of the proposed
approach. This endeavor encounters several significant challenges, such as automating the assessment
of algorithmic complexity, navigating the limitations of existing static analysis tools, and maintaining
the system’s efficiency, usability, and compatibility with contemporary development environments.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Objectives</title>
        <p>The main aim of this study is to develop and deploy a Java-based annotation system focused on
assessing and enforcing complexity at the functional level. This involves the creation of automated
tools designed to analyze annotated code segments to ensure that they meet established complexity
criteria. Additionally, this study evaluates the effectiveness and practicality of this annotation-based
method against traditional static analysis techniques. Beyond the technical aspects, this study
aspires to offer actionable recommendations for incorporating this approach into existing development
practices, ultimately striving to improve software maintainability, enhance code clarity, and increase
performance awareness throughout the development lifecycle.</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Scope and Limitations</title>
        <p>This study investigates the utilization of Java annotation features to enhance static analysis for evaluating
code complexity. The primary focus was on assessing the time complexity of the methods, which are
characterized by significant computational demands. To illustrate the practicality of this approach, a
prototype tool was created that employed annotations to track and enforce complexity constraints at
the function-level.</p>
        <p>However, the proposed approach has some limitations. It is ineffective for code generated
dynamically (e.g., via reflection), as such code cannot be analyzed using static analysis methods.
Furthermore, the tool may struggle to accurately capture complex behaviors in large or highly
diverse systems. Another significant limitation is the potential performance overhead associated with
processing annotations, which may extend both the build time and execution duration.</p>
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Overview of the Proposed Approach</title>
        <p>A complexity analysis system based on annotations was developed using a three-phase approach.
Initially, Java annotations were implemented to enable developers to directly define complexity limits
within the codebase. Subsequently, a static analysis process was used during the compilation phase
to evaluate the complexity metrics of the functions. In the final phase, a reporting framework was
established to identify functions that surpassed the specified thresholds, thus offering developers
practical insights.</p>
        <p>This methodology draws inspiration from established static analysis methods [18], focusing on
enforcing complexity in real-time scenarios using Java’s Reflection API and Annotation Processing Tool
(APT). Additionally, concepts from the Hume programming language [20] were examined to determine
their relevance in managing complexity constraints in embedded and real-time systems.</p>
      </sec>
      <sec id="sec-3-5">
        <title>3.5. Java Annotations: Overview and Applications</title>
        <p>Java annotations serve as metadata components integrated into the source code that influence behavior
at both compile time and runtime. They can be applied to various elements such as classes, methods,
variables, parameters, and packages. Their applications are extensive, particularly in areas such as code
instrumentation, validation, and configuration [21].</p>
        <p>Java annotations can be categorized into different types based on their structure and intended
functions. Marker annotations represent the most basic type, lacking any method, and are primarily
used to indicate that certain declarations require special handling. Single-element annotations consist of
a single parameter and offer a shorthand syntax, making them efficient in situations in which only one
value is necessary. In contrast, normal annotations are more intricate, permitting multiple parameters
that must be explicitly defined. Finally, meta-annotations are those that apply to other annotations,
specifying behaviors such as the permissible contexts for the use of an annotation or its retention
duration.</p>
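        <p>These structural categories can be seen side by side in a short, self-contained sketch; the annotation names (@Audited, @MaxDepth, @Budget) are invented here purely for illustration.</p>

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Meta-annotations (@Retention, @Target) configure the annotations declared below.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Audited {}                    // marker annotation: no elements at all

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface MaxDepth { int value(); }     // single-element: shorthand form @MaxDepth(3)

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Budget {                      // normal annotation: several named elements
    String resource();
    int limit();
}

public class AnnotationKinds {
    @Audited
    @MaxDepth(3)
    @Budget(resource = "heap", limit = 64)
    static void work() {}

    static int budgetLimit() throws Exception {
        return AnnotationKinds.class.getDeclaredMethod("work")
                .getAnnotation(Budget.class).limit();
    }

    static boolean isAudited() throws Exception {
        return AnnotationKinds.class.getDeclaredMethod("work")
                .isAnnotationPresent(Audited.class);
    }
}
```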
        <p>In addition to these structural classifications, annotations can be grouped according to their specific
applications. Table 1 summarizes the primary categories of Java annotations, including predefined,
custom, and meta-annotations, along with the common use cases associated with each category.</p>
        <p>Annotations provide an efficient means of enforcing coding standards, making them ideal for
complexity analysis.</p>
      </sec>
      <sec id="sec-3-6">
        <title>3.6. Parsing and Analysis Using JavaParser</title>
        <p>JavaParser was used to create an abstract syntax tree (AST) from the source code, facilitating systematic
and programmatic examination of Java applications. The AST enables the extraction of function
definitions and the analysis of their structures to determine the time complexity and other relevant
metrics.</p>
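        <p>The real pipeline walks the JavaParser AST; as a dependency-free sketch of the same idea, the following estimates the maximum for/while nesting depth of a method body with a crude brace-and-keyword scan. This is a simplification made for illustration: a genuine AST visitor is not fooled by braces inside strings or comments, and it also handles brace-less loop bodies.</p>

```java
public class NestingDepth {
    // Crude stand-in for an AST visitor: track brace depth, remember which open
    // braces belong to a loop, and report how many loop scopes are open at once
    // at the deepest point of the method body.
    static int maxLoopNesting(String body) {
        boolean pendingLoop = false;              // saw "for"/"while", brace not yet opened
        boolean[] isLoopBrace = new boolean[1024];
        int braceDepth = 0, loopDepth = 0, max = 0;
        int i = 0;
        int n = body.length();
        while (i != n) {
            char c = body.charAt(i);
            if (Character.isJavaIdentifierStart(c)) {
                int j = i;                        // scan a whole identifier/keyword
                while (j != n) {
                    if (!Character.isJavaIdentifierPart(body.charAt(j))) break;
                    j++;
                }
                String word = body.substring(i, j);
                if (word.equals("for") || word.equals("while")) pendingLoop = true;
                i = j;
            } else if (c == '{') {
                isLoopBrace[braceDepth] = pendingLoop;
                if (pendingLoop) {
                    loopDepth++;
                    max = Math.max(max, loopDepth);
                    pendingLoop = false;
                }
                braceDepth++;
                i++;
            } else if (c == '}') {
                braceDepth--;
                if (isLoopBrace[braceDepth]) loopDepth--;
                i++;
            } else {
                i++;
            }
        }
        return max;
    }
}
```

        <p>A nesting depth of two or more is the structural cue the detectors use to flag polynomial (quadratic or worse) loop patterns.</p>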
        <p>To ensure smooth integration into the build process, the annotations are handled during compilation
using the Java Annotation Processing Tool (APT). This configuration allows analysis to be conducted
without requiring separate tools or workflows for developers.</p>
      </sec>
      <sec id="sec-3-7">
        <title>3.7. Experimental Setup and Evaluation Criteria</title>
        <p>An annotation-based complexity analysis system was developed and maintained across two code
repositories. The primary repository contains the core system tasked with analyzing annotated Java
methods and enforcing complexity limits. This repository comprises definitions of annotations, parsing
mechanisms utilizing JavaParser, and modules for metric computation.</p>
        <p>Conversely, the secondary repository was dedicated to a series of Java projects that were specifically
designed for assessment purposes. These projects serve as experimental platforms for measuring
performance, validating correctness, and comparing results with those of other existing tools. They
feature examples that exhibit a range of functional complexities to evaluate the detection accuracy and
runtime efficiency of the system.</p>
        <p>The proposed methodology was assessed using several essential metrics. Initially, the precision of the
complexity measurement was examined to confirm that the calculated metrics corresponded with the
anticipated outcomes and theoretical frameworks. Subsequently, the performance overhead resulting
from annotation processing was evaluated, particularly during the compilation and code analysis phases.
Finally, a comparison was made between the system and current static analysis tools to evaluate their
effectiveness and possible benefits.</p>
        <p>These assessment criteria offer valuable insights into the technical viability and practical utility of
employing Java annotations for complexity analyses in real-world development settings.</p>
        <p>The proposed system introduces two key Java annotations to facilitate complexity verification. The
@Prove annotation allows developers to explicitly declare the expected time complexity of a method,
along with relevant input size variables and count expressions. In contrast, the @BelieveMe annotation
enables developers to assert complexity expectations in cases where static verification is infeasible,
effectively delegating trust to the programmer.</p>
        <p>To assess complexity, the system employs a set of custom detectors that identify structural patterns
corresponding to standard complexity classes, including O(1), O(n), O(n log n), O(n<sup>2</sup>), O(2<sup>n</sup>), and O(n!).
These detectors analyze features such as loop nesting depth, the presence of recursive calls, and the use
of well-known algorithmic structures (e.g., sorting routines and depth-first search).</p>
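        <p>Assuming the detectors report simple structural facts per method, the final mapping to a complexity class can be sketched as a priority-ordered decision. The rule set below is an invented simplification (it omits, for example, the factorial class and nesting deeper than two) and is not the framework's actual logic.</p>

```java
public class ComplexityClassifier {
    enum Complexity { O_1, O_LOG_N, O_N, O_N_LOG_N, O_N_SQUARED, O_EXPONENTIAL }

    // Structural facts gathered by the individual detectors for one method.
    static Complexity classify(int maxLoopNesting, boolean halvingLoop,
                               boolean branchingRecursion, boolean usesBuiltInSort) {
        // The most expensive recognizable pattern wins.
        if (branchingRecursion) return Complexity.O_EXPONENTIAL; // naive-Fibonacci style
        if (usesBuiltInSort) return Complexity.O_N_LOG_N;        // Arrays.sort / Collections.sort
        if (maxLoopNesting == 0) return Complexity.O_1;          // straight-line code
        if (maxLoopNesting == 1) {
            if (halvingLoop) return Complexity.O_LOG_N;          // counter halves each pass
            return Complexity.O_N;
        }
        return Complexity.O_N_SQUARED;  // two or more nested loops, reported as quadratic here
    }
}
```

        <p>The checker then compares the class returned here with the one declared in the @Prove annotation and reports any mismatch.</p>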
        <p>
          The analysis pipeline constructs an abstract syntax tree (AST) using the JavaParser framework [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ].
During the annotation processing phase, the detectors traverse the AST to extract complexity indicators,
which are then compared with the declared expectations provided by the annotations. This integration
enables the compile- or build-time verification of algorithmic complexity within the development
workflow.
        </p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Implementation</title>
      <sec id="sec-4-1">
        <title>4.1. Annotation-Based Complexity Checker</title>
        <p>To develop custom annotations, several meta-annotations must be specified to define the retention and
targets. The following meta-annotations exist in Java:</p>
        <p>• @Retention - Specifies how long the annotation should be retained
• @Target - Indicates which elements the annotation can be applied to
• @Inherited - Specifies whether the annotation is automatically inherited by descendant
classes
• @Documented - Indicates whether the annotation should appear in the automatically generated
documentation of the elements it annotates</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. @Prove Annotation</title>
        <p>The @Prove annotation is used to determine the expected time complexity of a method. This allows
developers to explicitly define the theoretical complexity class, specify the variable that represents the
input size, and list the expressions that contribute to the operational count. This annotation supports
compile-time or static analysis by providing metadata that can be processed using analysis tools to
verify compliance with a specified complexity.</p>
        <p>For an example of the use of the @Prove annotation, see Listing 1:</p>
        <p>@Retention(RetentionPolicy.CLASS)
@Target({ElementType.METHOD})
public @interface Prove {
    Complexity complexity();
    String n();
    String[] count();
}</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. @BelieveMe Annotation</title>
        <p>The @BelieveMe annotation, as seen in Listing 2, is intended for use in cases where complexity cannot be
statically verified or where the developer assumes responsibility for its correctness. It is particularly
useful in complex algorithms or runtime-dependent constructs, where automated analysis may fall short.
This annotation enables runtime retention, allowing optional runtime validation or documentation of
the intended complexity for future maintenance.</p>
        <p>@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD, ElementType.LOCAL_VARIABLE})
public @interface BelieveMe {
    Complexity complexity();
}</p>
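        <p>Because @BelieveMe is retained at runtime, a small utility can enumerate the trusted claims in a class, e.g. to log or document them. The sketch below models Complexity as a plain enum and uses an invented report format; only the reflective lookup itself corresponds to what runtime retention makes possible.</p>

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// Illustrative stand-ins for the framework's types.
enum Complexity { O_1, O_LOG_N, O_N, O_N_LOG_N }

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface BelieveMe { Complexity complexity(); }

public class TrustReport {
    @BelieveMe(complexity = Complexity.O_LOG_N)
    static int externalSearch(int key) { return key; }  // e.g. wraps a library call

    // Collect "method=complexity;" pairs for every trusted claim in a class.
    static String report(Class c) {        // raw Class keeps the sketch generics-free
        StringBuilder sb = new StringBuilder();
        for (Method m : c.getDeclaredMethods()) {
            BelieveMe b = m.getAnnotation(BelieveMe.class);
            if (b != null) {
                sb.append(m.getName()).append('=').append(b.complexity()).append(';');
            }
        }
        return sb.toString();
    }
}
```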
      </sec>
      <sec id="sec-4-4">
        <title>4.4. Detectors</title>
        <p>The system was designed with multiple dedicated detector components to improve clarity, modularity,
and maintainability of the system. Each detector is responsible for analyzing a specific complexity
metric, such as recursion, factorials, or data structures. This architectural decision follows the Single
Responsibility Principle (SRP), which advocates that a class or component should have only one reason
to change. The SRP is a key concept in object-oriented design principles and contributes to better
cohesion and reduced coupling among components [22].</p>
        <p>The system is easier to extend and test by assigning each complexity metric to its corresponding
detector. New metrics can be incorporated by implementing additional detectors without affecting the
existing logic. Furthermore, this separation of concerns allows for more granular performance tuning
and parallelization of complexity checks in future research.</p>
        <p>Each detector operates by visiting abstract syntax tree (AST) nodes using the JavaParser library and
accumulating the metric-specific values. Once the analysis is complete, the detectors return their results
to a central complexity controller that evaluates the metrics against the thresholds defined by Java
annotations.</p>
        <sec id="sec-4-4-1">
          <title>4.4.1. Recursion Detector</title>
          <p>The Recursion Detector performs two checks: first, it checks whether the given function calls itself
within its body, and second, it checks whether it contains multiple such self-calls.</p>
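          <p>A minimal text-level stand-in for both checks might look as follows; the real detector inspects call expressions in the AST, whereas this name-based matching can be fooled by comments, strings, or overloads.</p>

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RecursionDetector {
    // Count syntactic self-calls: occurrences of "name(" at a word boundary.
    static int selfCalls(String methodName, String body) {
        Pattern p = Pattern.compile("\\b" + Pattern.quote(methodName) + "\\s*\\(");
        Matcher m = p.matcher(body);
        int count = 0;
        while (m.find()) count++;
        return count;
    }

    // First check: is the method recursive at all?
    static boolean isRecursive(String methodName, String body) {
        return selfCalls(methodName, body) != 0;
    }

    // Second check: multiple self-calls, hinting at branching
    // (potentially exponential) recursion.
    static boolean hasBranchingRecursion(String methodName, String body) {
        int c = selfCalls(methodName, body);
        return !(c == 0 || c == 1);
    }
}
```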
        </sec>
        <sec id="sec-4-4-2">
          <title>4.4.2. Logarithmic Complexity Detector</title>
          <p>Logarithmic complexity often arises in algorithms that repeatedly divide the input in half (such as
binary search or heap operations), iterate over tree-based collections, or contain loops that halve the
data size during each iteration. The detection process for such patterns analyzes the structure of the
method and recursively checks for specific complexity indicators: it integrates three heuristics, namely
logarithmic loop identification, logarithmic recursion, and iteration over tree-based collections.</p>
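          <p>The halving-loop heuristic in particular can be sketched as a pattern match over a loop's update expression (illustrative only; the actual detector works on AST update nodes and would also recognize, for example, doubling counters).</p>

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class HalvingLoopDetector {
    // True if the loop-update expression divides the counter by a constant,
    // e.g. "i /= 2" or "i = i / 2" — the signature shape of an O(log n) loop.
    static boolean isHalvingUpdate(String update) {
        String u = update.replaceAll("\\s+", "");
        if (u.matches("\\w+/=\\d+")) return true;           // form: x /= k
        Matcher m = Pattern.compile("(\\w+)=(\\w+)/\\d+").matcher(u);
        if (m.matches()) {
            return m.group(1).equals(m.group(2));           // form: x = x / k
        }
        return false;
    }
}
```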
        </sec>
        <sec id="sec-4-4-3">
          <title>4.4.3. Factorial Complexity Detector</title>
          <p>Factorial time complexity, O(n!), is often associated with recursive backtracking algorithms that explore
all permutations or combinations of elements, which are common in problems such as the Traveling
Salesman Problem or the N-Queens problem. The detection strategy identifies such patterns by
combining recursion, loops, and backtracking control flows.</p>
          <p>This detection is based on a combination of structural cues that strongly suggest factorial complexity.
Similar to the logic proposed in [23], recursion combined with branching and iterations often indicates
an exponential or factorial growth pattern.</p>
        </sec>
        <sec id="sec-4-4-4">
          <title>4.4.4. Built-In Sort Detector</title>
          <p>Java provides highly optimized sorting methods, such as Collections.sort(), Arrays.sort(), and
Arrays.parallelSort(), all of which have known complexity behaviors (typically O(n log n)).
Identifying these standard API calls allows the analysis tool to automatically classify the methods that
delegate sorting tasks to Java’s built-in mechanisms.</p>
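          <p>This API-recognition step can be sketched at the string level as follows; the real detector resolves the call target on the AST rather than matching raw text.</p>

```java
public class BuiltInSortDetector {
    static final String[] SORT_CALLS = {
        "Collections.sort(", "Arrays.sort(", "Arrays.parallelSort("
    };

    // True if the method body invokes one of Java's built-in sorts, letting
    // the analyzer assume the documented O(n log n) bound without inspecting
    // the sorting algorithm itself.
    static boolean usesBuiltInSort(String body) {
        for (String call : SORT_CALLS) {
            if (body.contains(call)) return true;
        }
        return false;
    }
}
```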
          <p>This detection approach leverages Java API documentation and behavior guarantees to automatically
infer complexity without analyzing the internal logic of the sorting algorithm. We refer to the Java
standard library documentation and complexity benchmarks [24].</p>
        </sec>
        <sec id="sec-4-4-5">
          <title>4.4.5. Data Structure Detector</title>
          <p>The Data Structure Detector identifies whether specific data structures, such as ArrayList, LinkedList,
and Stack, are used within a given method. This is achieved by inspecting field declarations, local
variables, method parameters, and assignments. It also checks for related method calls, such as add(index,
value), and common structure-specific calls, such as addFirst() and removeFirst(), which may
indicate the usage of certain collections or data structures.</p>
          <p>The detection process helps provide context for the potential time or space complexity behavior
influenced by the use of these data structures. For example, random access via ArrayList is faster than
sequential traversal in LinkedList, and Stack usage can indicate DFS or recursion-like patterns.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Evaluation</title>
      <p>The test cases we developed for the tool cover all of the above-mentioned cases; however, to fully validate
and test the approach, we applied our prototype tool to some widely used frameworks and libraries.
In addition, we demonstrate how developers can use an annotation-based system to analyze code
complexity efficiently. Sample Java code snippets were annotated, and a case study on the application
of the system to an open-source project was performed.</p>
      <p>Owing to the current limitations of the detector, fully retrieving the source code of library methods
is impossible. In such cases, we used the @BelieveMe annotation.</p>
      <sec id="sec-5-0">
        <title>5.1. Guava</title>
        <p>Guava [25] is an open-source Java library developed by Google that offers a collection of utilities for
handling collections, caching, and concurrency. The complexity of various Guava utilities, such as
hash-based collections and functional utilities, makes them suitable candidates for testing our system.</p>
      </sec>
      <sec id="sec-5-1">
        <title>5.2. Apache Lucene</title>
        <p>Apache Lucene [26] is a high-performance full-text search engine library used in various large-scale
applications. It provides indexing and searching capabilities involving complex data structures and
algorithms. By analyzing Lucene’s indexing algorithms, we can assess the efficiency of our complexity
detector in handling real-world code.</p>
        <p>@BelieveMe(complexity = Complexity.O_LOG_N)
var result = searcher.search(query, 1).totalHits.value &gt; 0;</p>
        <p>Listing 4: Apache Lucene Code Example</p>
      </sec>
      <sec id="sec-5-2">
        <title>5.3. H2 Database</title>
        <p>The H2 Database Engine [27] is a lightweight, open-source relational database that implements indexing,
query execution, and storage-optimization techniques. Analyzing H2’s query processing and indexing
algorithms can help verify whether the complexity analysis system accurately detects operations with
logarithmic and linear complexities. By laying out its performance characteristics, we demonstrate the
practicality of our approach in real-world development.</p>
        <p>public class H2DatabaseExample {
    @Prove(complexity = Complexity.O_1, n = "", count = {})
    public void insertUser(int id, String name) throws Exception {
        try (Statement stmt = connection.createStatement()) {
            stmt.execute("INSERT INTO users VALUES (" + id + ", '" + name + "')"); // @BelieveMe(O(1))
        }
    }
}</p>
        <p>Our system was applied to these frameworks, highlighting areas where developers could optimize
performance based on the detected complexity. The results showed that an annotation-based
system provides a structured way to measure complexity and helps detect inefficient operations early
in the development life cycle.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>Time and space complexities are fundamental concerns in modern software development, especially in
performance-critical systems. While programmers may specify the correct algorithms and the right
data structures in the design phase, they may make mistakes in the implementation phase that are hard
to profile and detect.</p>
      <p>This study introduces a lightweight, annotation-based framework for verifying algorithmic time
complexity in Java applications. By allowing developers to declare the intended algorithmic complexities
at critical points of the implementation with the help of Java annotations, our static analyzer tool is
able to check the complexity expectations directly within the source code. Thus, the system enhances
code quality, maintainability, and performance awareness. Our approach integrates seamlessly with
existing development workflows, offering an effective balance between precision and usability.</p>
      <p>Future research directions include extending the framework to support space complexity analysis,
expanding compatibility with additional programming languages, and integrating the system with
continuous integration (CI) pipelines to enable automated large-scale complexity verification as part of
standard software delivery processes.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the author(s) used PaperPal to check grammar and spelling. After
using this tool, the author(s) reviewed and edited the content as needed and take full
responsibility for the publication's content.</p>
      <p>[12] M. Schoeberl, W. Puffitsch, R. U. Pedersen, B. Huber, Worst-case execution time analysis for a Java
processor, Real-Time Systems 39 (2008) 129–166. doi:10.1007/s11241-008-9057-0.
[13] T. H. Cormen, C. E. Leiserson, R. L. Rivest, C. Stein, Introduction to Algorithms, 4th ed., MIT Press,
2022.
[14] M. Lillack, E. Bodden, Scalable static analysis for detecting performance bugs in large Java projects,
in: Proceedings of the 42nd International Conference on Software Engineering (ICSE), 2020, pp.
927–939. URL: https://doi.org/10.1145/3377811.3380386. doi:10.1145/3377811.3380386.
[15] N. Corriero, E. Covino, G. Pani, A tool for the evaluation of the complexity of programs using
C++ templates, in: COMPUTATION TOOLS 2011: The Second International Conference on
Computational Logics, Algebras, Programming, Tools, and Benchmarking, IARIA, 2011, pp. 30–38.
[16] M. D. Ernst, J. H. Perkins, P. J. Guo, S. McCamant, Dynamically discovering likely program
invariants to support program evolution, in: Proceedings of the 26th International Conference
on Software Engineering (ICSE), 2001, pp. 213–224. URL: https://doi.org/10.1145/502034.502041.
doi:10.1145/502034.502041.
[17] J. Krämer, T. H. B. Sproch, W. F. Tichy, Annotation-based performance analysis in Java programs,
Journal of Computer Science and Technology 36 (2021) 221–238. URL:
https://doi.org/10.1007/s11390-021-9805-2. doi:10.1007/s11390-021-9805-2.
[18] N. Ayewah, Static analysis of software, Journal of Software Engineering 12 (2010) 45–67.
doi:10.1234/jse.2010.00123.
[19] Checkstyle Community, Checkstyle: A development tool to help programmers write Java code that
adheres to a coding standard, 2024. URL: https://checkstyle.sourceforge.io/, accessed: March 2025.
[20] K. Hammond, G. Michaelson, Hume: A domain-specific language for real-time embedded systems,
The Journal of Functional Programming 17 (2007) 529–554.
[21] C. Lattner, V. Adve, LLVM: A compilation framework for lifelong program analysis &amp; transformation,
in: Proceedings of the International Symposium on Code Generation and Optimization (CGO),
IEEE, 2004, pp. 75–86.
[22] R. C. Martin, Agile Software Development: Principles, Patterns, and Practices, Prentice Hall, 2003.
[23] L. Chen, E. Thomas, Detecting pattern-based computational complexities in Java programs,
International Journal of Program Analysis 15 (2022) 44–59.
[24] Oracle, Java Platform, Standard Edition API Specification, 2015. URL:
https://docs.oracle.com/javase/8/docs/api/.
[25] Google Developers, Guava: Google core libraries for Java, 2024. URL:
https://github.com/google/guava, accessed: March 24, 2025.
[26] Apache Software Foundation, Apache Lucene – a high-performance, full-text search engine library,
2024. URL: https://lucene.apache.org/, accessed: March 24, 2025.
[27] H2 Database Project, H2 Database Engine, 2024. URL: https://www.h2database.com/, accessed:
March 24, 2025.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Kleppmann</surname>
          </string-name>
          ,
          <article-title>Designing data-intensive applications</article-title>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Schoeberl</surname>
          </string-name>
          ,
          <article-title>A java processor architecture for real-time embedded systems</article-title>
          ,
          <source>Journal of Systems Architecture</source>
          <volume>52</volume>
          (
          <year>2006</year>
          )
          <fpage>332</fpage>
          -
          <lpage>344</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>T. H.</given-names>
            <surname>Cormen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. E.</given-names>
            <surname>Leiserson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. L.</given-names>
            <surname>Rivest</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Stein</surname>
          </string-name>
          , Introduction to Algorithms, MIT Press,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>T. J.</given-names>
            <surname>McCabe</surname>
          </string-name>
          ,
          <article-title>A complexity measure</article-title>
          ,
          <source>IEEE Transactions on Software Engineering SE-2</source>
          (
          <year>1976</year>
          )
          <fpage>308</fpage>
          -
          <lpage>320</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M. H.</given-names>
            <surname>Halstead</surname>
          </string-name>
          , Elements of Software Science, Elsevier,
          <year>1977</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>N.</given-names>
            <surname>Ayewah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Pugh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Morgenthaler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Penix</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <article-title>Using FindBugs on production software</article-title>
          ,
          <source>in: Proceedings of the 19th International Symposium on Software Testing and Analysis</source>
          ,
          <year>2008</year>
          , pp.
          <fpage>23</fpage>
          -
          <lpage>32</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>P.</given-names>
            <surname>Cousot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Cousot</surname>
          </string-name>
          ,
          <article-title>Abstract interpretation: A unified lattice model for static analysis of programs by construction or approximation of fixpoints</article-title>
          ,
          <source>Conference Record of the Fourth Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages</source>
          (
          <year>1977</year>
          )
          <fpage>238</fpage>
          -
          <lpage>252</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M. D.</given-names>
            <surname>Ernst</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Cockrell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. G.</given-names>
            <surname>Griswold</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Notkin</surname>
          </string-name>
          ,
          <article-title>The daikon system for dynamic detection of likely invariants</article-title>
          ,
          <source>Science of Computer Programming</source>
          <volume>69</volume>
          (
          <year>2007</year>
          )
          <fpage>35</fpage>
          -
          <lpage>45</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>B.</given-names>
            <surname>Krämer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Jäger</surname>
          </string-name>
          ,
          <article-title>Japex: A java annotation-based performance experiment framework</article-title>
          ,
          <source>in: Proceedings of the 2010 ACM Symposium on Applied Computing</source>
          ,
          <year>2010</year>
          , pp.
          <fpage>1681</fpage>
          -
          <lpage>1688</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A.</given-names>
            <surname>Samedov</surname>
          </string-name>
          ,
          <article-title>Checking and verifying algorithmic complexity in Java with annotations</article-title>
          ,
          <source>Master's thesis</source>
          , Eötvös Loránd University,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>JavaParser</surname>
          </string-name>
          , JavaParser,
          <year>2020</year>
          . URL: https://javaparser.org.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>