<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title/>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.62271/pjc.16.2.913.928</article-id>
      <title-group>
        <article-title>Clipping method of unpromising variants of a solution for the problem of integer linear programming with Boolean variables</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Sergii Kavun</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Volodymyr Tkach</string-name>
          <email>vtkach@mit.edu</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Computer Information Systems and Technologies Department, Interregional Academy of Personnel Management</institution>
          ,
          <addr-line>Frometivska str.</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology (MIT)</institution>
          ,
          <addr-line>32 Vassar St, Cambridge MA 02139</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”</institution>
          ,
          <addr-line>37, Prospect Beresteiskyi (former Peremohy), 03056, Kyiv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>WDA'26: International Workshop on Data Analytics</institution>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <volume>14679</volume>
      <fpage>3</fpage>
      <lpage>5</lpage>
      <abstract>
        <p>This paper introduces an innovative clipping method (CM) for solving integer linear programming problems with Boolean variables, addressing the computational complexity inherent in these NP-complete problems. The proposed method is based on a rank-based approach that transforms the unitary n-dimensional cube Bn into a graph structure, where the vertices are organized by ranks according to the number of ones in their binary representations. The methodology employs six distinct clipping strategies (L1–L6) that systematically eliminate non-promising solution paths during the search process. The fundamental strategy involves introducing a "double corridor" concept that establishes upper and lower bounds for feasible solutions, significantly reducing the search space. The method's key innovation lies in its ability to predict and exclude non-optimal paths through careful analysis of weighted graph traversals, where weights correspond to objective function coefficients and constraint parameters. Experimental results demonstrate that the proposed CM achieves an algorithmic complexity that represents a substantial improvement over exponential-time algorithms. Comparative analysis against well-known methods, including Balas' algorithm, the P-method, and exhaustive search approaches, reveals superior performance in terms of elementary operations count, solution time, and decision-making probability within time constraints. The method proves particularly effective for higher-dimensional problems when reasonable time limits are established, making it applicable to various practical optimization scenarios in cryptanalysis, network topology analysis, and resource allocation problems.</p>
      </abstract>
      <kwd-group>
        <kwd>Integer linear programming</kwd>
        <kwd>Boolean variables</kwd>
        <kwd>clipping</kwd>
        <kwd>solution method</kwd>
        <kwd>clipping strategies</kwd>
        <kwd>algorithmic complexity</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Across numerous research domains spanning computer science, engineering, economics, and operations
research, practitioners frequently encounter optimization problems that can be formulated as integer
linear programming (ILP) tasks with Boolean variables. These problems arise naturally in various
application areas, including resource allocation, scheduling, network design, cryptanalysis, facility location,
portfolio optimization, and combinatorial auction design [1]. The prevalence of such formulations
stems from their remarkable expressive power and ability to capture discrete decision-making processes
inherent in real-world scenarios.</p>
      <p>Nevertheless, practical experience has consistently demonstrated that solving these optimization
problems presents significant computational challenges. The primary difficulties include prohibitively
complex algorithms and exponential growth in solution time relative to problem size. Memory
requirements also scale unfavorably with dimensionality, and the problems exhibit the fundamental
computational intractability characteristic of NP-complete problems [2]. These limitations become
particularly acute when dealing with large-scale instances encountered in industrial applications, where
problem dimensions may extend to thousands or millions of variables and constraints.</p>
      <p>The integer programming framework, for which Boolean variable problems constitute a special case,
has been recognized as extraordinarily versatile. Indeed, a vast array of combinatorial optimization
problems (including the traveling salesman problem, the knapsack problem, the set covering problem,
the maximum clique problem, and the graph coloring problem) can be naturally formulated as integer
programming instances [? ]. This universality, while theoretically elegant, has significant computational
implications. The ability to encode virtually any NP-complete problem within the integer programming
paradigm simultaneously confirms both the framework’s power and its inherent computational dificulty.
As established by complexity theory, unless P = NP, there is no polynomial-time algorithm to solve
general integer programming problems [3].</p>
      <p>In standard terminology, the phrase "integer programming" typically refers specifically to integer
linear programming, wherein both the objective function and constraints are linear expressions of the
decision variables. However, the broader landscape encompasses mixed-integer programming (MIP),
where some variables are restricted to integer values while others remain continuous, as well
as mixed-integer nonlinear programming (MINLP), which incorporates nonlinear objective functions
or constraints [3]. These nonlinear variants introduce additional layers of computational complexity
beyond their linear counterparts. However, ILP techniques have demonstrated effectiveness across this
spectrum of problem types, which is valuable for pure-integer formulations, pure-binary formulations
where variables are restricted to {0, 1}, and hybrid formulations involving arbitrary combinations of
real-valued, integer-valued, and binary-valued decision variables [4].</p>
      <p>The research community has developed numerous methodologies and algorithmic approaches to
address integer programming problems. Contemporary commercial optimization software packages,
including CPLEX, Gurobi, and XPRESS, routinely accept integer and binary variable restrictions as
standard input specifications [5]. These sophisticated solvers employ branch-and-bound frameworks,
automatically constructing search trees that systematically partition the solution space while computing
linear programming relaxations at each node to establish bounds on optimal solutions. Advanced
implementations incorporate cutting plane techniques, heuristic methods, and pre-solving procedures
to enhance computational performance. Despite these algorithmic refinements, integer linear
programming problems generally require substantially greater computational resources than their continuous
linear programming counterparts. Although linear programming can be solved in polynomial time
using interior-point or simplex methods [6], integer programming instances of comparable size may
demand orders of magnitude more computation time, with solution times often showing exponential
growth as the dimensions of the problem increase.</p>
      <p>This computational gap between continuous and discrete optimization motivates ongoing research
into specialized algorithms that exploit problem structure, develop tighter formulations, and employ
novel search strategies to mitigate the inherent complexity of integer programming. The present work
contributes to this research direction by introducing a clipping method specifically designed for integer
linear programming problems with Boolean variables, aiming to reduce both algorithmic complexity
and practical solution times through strategic elimination of unpromising solution candidates.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Literature Review</title>
      <p>The application of integer linear programming with Boolean variables has been extensively investigated
across multiple research domains, demonstrating the versatility and computational challenges inherent
in this optimization paradigm. This section examines representative studies that illustrate both the
breadth of applications and the evolution of solution methodologies.</p>
      <p>Mihailescu [7] introduced a groundbreaking approach to cryptanalysis through linear approximation
methods, demonstrating that the Data Encryption Standard (DES) cipher could be compromised using
integer programming formulations. His methodology successfully attacked 8-round DES using 2<sup>21</sup>
known plaintexts and 16-round DES with 2<sup>47</sup> known plaintexts. Notably, when plaintexts consisted
of natural English sentences encoded in ASCII format, the ciphertext-only attack required merely 2<sup>29</sup>
samples. This work established a fundamental connection between cryptographic security analysis and
discrete optimization.</p>
      <p>Building upon this foundation, Borghoff [8] presented an advanced framework applying mixed-integer
linear programming to cryptanalysis of modern stream and block ciphers. Her investigation focused on
Trivium, recommended by the eSTREAM project, and the lightweight cipher Ktantan, demonstrating
how nonlinear multivariate Boolean equation systems inherent in cryptographic primitives can be
reformulated as MILP problems. The methodology encompassed critical design decisions, including
conversion techniques for transforming Boolean equations into real-valued constraints, strategic variable
selection, and exploitation of the CPLEX commercial solver capabilities. More recently, Kölbl et al. [9]
extended these techniques to analyze the SIMON block cipher family, achieving improved bounds on
the number of active S-boxes through refined MILP models that incorporate differential and linear
cryptanalysis constraints simultaneously.</p>
      <p>Buchheim and Rinaldi [10] developed an innovative polyhedral approach to nonlinear Boolean
optimization that significantly reduces model size compared to conventional formulations. Their
methodology avoids transformation to a normal form, thereby eliminating the introduction of auxiliary
variables that typically inflate the problem dimensions. By efficiently reducing the problem to degree-two
polynomials through minimal dimension extension, they established connections to the maximum
cut problem, demonstrating that the corresponding polytope constitutes a face of an appropriate cut
polytope. Experimental validation using challenging instances from the Max-SAT Evaluation 2007
revealed that their general-purpose implementation outperformed specialized Max-SAT solvers in
numerous cases.</p>
      <p>Subsequently, Rostami et al. [11] extended this framework by introducing strengthened linearization
techniques for pseudo-Boolean optimization, incorporating valid inequalities derived from the structure
of the objective function. Their computational experiments on benchmark instances demonstrated
solution time reductions of up to 40% compared to standard linearization approaches. Furthermore,
Dlask et al. [12] investigated preprocessing techniques for Boolean optimization problems, showing
that problem-specific reduction rules combined with constraint propagation can eliminate up to 60% of
variables in certain problem classes before invoking MILP solvers.</p>
      <p>Götz et al. [13] addressed the challenge of integrating optimization techniques into self-adaptive
software systems through model-driven development paradigms. Their work compared Integer Linear
Programming and Pseudo-Boolean Optimization approaches for automatically generating optimization
problems from architectural models at runtime. The study provided comprehensive scalability analysis
for pipe-and-filter application architectures, demonstrating practical feasibility despite the inherent
computational complexity. Their methodology automated the translation from high-level system
specifications to concrete optimization formulations, significantly reducing development effort and
error potential.</p>
      <p>Recent advances in this domain include the work of Jamshidi et al. [14], who developed transfer
learning techniques for configuration optimization in software product lines. Their approach leverages
MILP formulations to identify near-optimal configurations across related software systems, achieving
speedups of two orders of magnitude compared to traditional search-based methods. Additionally,
Siegmund et al. [15] investigated performance-influence models for configurable systems, utilizing Boolean
satisfiability constraints encoded as integer programming problems to predict system performance
across vast configuration spaces.</p>
      <p>Savage et al. [16] introduced a sophisticated methodology for reconciling experimental
phosphoproteomic data with prior knowledge of cellular signaling networks. Unlike prevalent approaches based
on Bayesian networks, Boolean models, or ordinary differential equations, their technique employs
integer linear programming on interaction graphs to encode qualitative behavioral constraints. The
framework provides four fundamental operations: (i) identifying topology-consistent explanations for
experimental observations or determining nearest feasible explanations when inconsistencies exist;
(ii) computing minimal node correction sets to restore consistency; (iii) extracting optimal subgraphs
that best reflect experimental scenarios; and (iv) identifying candidate edges whose inclusion would
maximally improve model-data concordance. Validation using EGFR/ErbB signaling data from primary
hepatocytes demonstrated the method’s ability to identify context-specific inactive interactions and
propose biologically plausible network refinements. The freely available SigNetTrainer toolbox facilitates
widespread application of this methodology.</p>
      <p>Extending these biological applications, Razzaq et al. [17] developed MILP-based methods for
metabolic network reconstruction that simultaneously optimize network topology and flux
distributions. Their approach integrates transcriptomic and metabolomic data to infer context-specific
metabolic models, achieving prediction accuracies exceeding 85% for growth phenotypes across diverse
environmental conditions.</p>
      <p>Terci et al. [18] formulated optimal bitwise register allocation as an integer linear programming
problem, addressing a critical challenge in compiler backend optimization. Their model explicitly
captures interference constraints between variables at the bit level, enabling more efficient utilization of
processor registers compared to traditional graph-coloring heuristics. The ILP formulation guarantees
optimality for small-to-medium code regions while providing high-quality solutions for larger instances
when combined with time-bounded branch-and-bound search.</p>
      <p>Contemporary work by Lozano et al. [19] has refined these techniques through the development
of integer programming models for integrated instruction selection and register allocation. Their
unified optimization approach eliminates the traditional phase-ordering problem in compiler backends,
achieving code quality improvements of 5–15% across standard benchmark suites. Furthermore, Kim
et al. [20] investigated the use of partitioned Boolean quadratic programming for register allocation with
rematerialization, demonstrating that problem decomposition strategies can extend the applicability of
exact methods to industrial-scale compilation tasks.</p>
      <p>The foundational work of Balas [21] on additive algorithms for zero-one linear programming
established many principles underlying modern branch-and-bound methods. Gomory’s [22] pioneering
cutting plane techniques provided theoretical foundations for constraint strengthening approaches widely
employed in contemporary solvers. More recently, Subramani et al. [23] investigated Boolean
combinations of unit two-variable per inequality (UTVPI) constraints, demonstrating that certain restricted
constraint classes admit polynomial-time solution algorithms despite the general NP-completeness of
Boolean integer programming.</p>
      <p>Demirović et al. [24] developed sophisticated methods for solving systems of pseudo-Boolean
constraints by leveraging advances in Boolean satisfiability solving technology. Their hybrid approach
combines MILP relaxations with SAT-based search, achieving substantial performance improvements on
constraint satisfaction problems arising in electronic design automation. Conow et al. [25] formulated
cophylogeny reconstruction problems (which model co-evolutionary relationships between host and
parasite species) as integer linear programs, demonstrating that phylogenetic inference problems can
benefit from discrete optimization techniques.</p>
      <p>Balen et al. [26] addressed the compilation of high-level array programs by formulating loop fusion
optimization as an ILP problem. Their technique determines optimal fusion strategies that balance data
locality benefits against potential parallelism losses, incorporating machine-specific cost models into
the optimization objective.</p>
      <p>The surveyed literature demonstrates that ILP with Boolean variables provides a unifying
mathematical framework applicable across remarkably diverse domains — from cryptographic security and
compiler optimization to systems biology and software engineering. Despite substantial methodological
advances over the past decades, including sophisticated branching strategies, cutting plane generation,
and hybrid algorithmic approaches, the fundamental computational challenge persists: general ILP
problems with Boolean variables remain NP-complete, and practical solution times exhibit exponential
growth for large-scale instances.</p>
      <p>Several research gaps emerge from this review (Table 1, which compares the surveyed approaches by
method type, key innovation, computational complexity, solution quality, and implementation). First,
while commercial solvers have achieved impressive performance through algorithmic refinements and
hardware acceleration, problem-specific clipping strategies that exploit structural properties remain
underexplored. Second, most existing approaches [27] and [28] focus on either exact methods with
worst-case exponential complexity or heuristics without optimality guarantees, leaving an opportunity
for hybrid methods that provide quality-controlled approximate solutions with predictable computational
bounds. Third, the potential for parallelization and distributed computation in Boolean ILP solving has
received limited attention compared to continuous optimization domains.</p>
      <p>The present work addresses these gaps by introducing a clipping method specifically designed for
integer linear programming problems with Boolean variables. The proposed approach leverages
rank-based graph representations and multiple complementary pruning strategies to systematically eliminate
unpromising solution candidates, thereby reducing both algorithmic complexity and practical solution
times while maintaining solution quality guarantees.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Mathematical Background and Problem Formulation</title>
      <p>Binary Integer Programming (BIP), alternatively referred to as integer linear programming with Boolean
variables, constitutes a fundamental formal model for representing a broad spectrum of scientific and
technical optimization problems. In this formulation, each decision variable is constrained to assume
exclusively binary values from the set {0, 1}, thereby encoding discrete choices such as project selection
or rejection, resource activation or deactivation, binary switching states, or affirmative or negative
responses. Numerous other dichotomous decision scenarios are encountered in practical applications
[30].</p>
      <p>The BIP framework is characterized as an m-dimensional optimization problem belonging to the
NP-complete complexity class, a classification that establishes both its theoretical intractability and
practical computational challenges [31]. Unless the widely conjectured inequality P ≠ NP is disproved,
no polynomial-time algorithm can exist for solving general instances of this problem class. Nevertheless,
the canonical formulation provides a mathematically rigorous foundation for algorithm development
and theoretical analysis.</p>
      <sec id="sec-3-1">
        <title>3.1. Standard Mathematical Formulation</title>
        <p>
          Following the convention established by Chinneck [32], the general BIP problem can be expressed in
the following standard form. The objective function seeks to maximize (or equivalently minimize) a
linear combination of binary decision variables:
F = ∑<sup>n</sup><sub>j=1</sub> c<sub>j</sub> · x<sub>j</sub> → max (min), (
          <xref ref-type="bibr" rid="ref1">1</xref>
          )
where c<sub>j</sub> ∈ R<sub>≥0</sub> represents the objective function coefficient associated with decision variable x<sub>j</sub>,
and n denotes the problem dimensionality.
        </p>
        <p>The feasible solution space is delimited by m linear inequality constraints of the form:
∑<sup>n</sup><sub>j=1</sub> a<sub>ij</sub> · x<sub>j</sub> ≥ b<sub>i</sub>, for i = 1, 2, . . . , m, (
          <xref ref-type="bibr" rid="ref2">2</xref>
          )
where a<sub>ij</sub> ∈ R denotes the constraint coefficient matrix elements, and b<sub>i</sub> ∈ R specifies the right-hand
side bounds. The formulation imposes the following structural conditions:</p>
        <p>Binary constraint: all decision variables x<sub>j</sub>, where j = 1, 2, . . . , n, are restricted to the binary domain:
x<sub>j</sub> ∈ {0, 1}.</p>
        <p>
          Non-negativity of the objective coefficients: all coefficients in the objective function (
          <xref ref-type="bibr" rid="ref1">1</xref>
          ) satisfy c<sub>j</sub> ≥ 0
for all j.
        </p>
        <p>Ordered coefficients: without loss of generality, variables are indexed such that their objective
function coefficients satisfy the monotonicity condition:
0 ≤ c<sub>1</sub> ≤ c<sub>2</sub> ≤ . . . ≤ c<sub>n</sub>. (
          <xref ref-type="bibr" rid="ref3">3</xref>
          )</p>
        <p>
          Although this standard form may initially appear restrictive, most practical BIP instances can be
transformed to this canonical representation through elementary algebraic manipulations. For instance,
negative objective function coefficients are accommodated through variable substitution, wherein x<sub>j</sub> is
replaced by its complement (1 − x<sub>j</sub>′), effectively reversing the contribution polarity while preserving the
problem structure. Similarly, variable reordering to achieve the monotonicity condition (
          <xref ref-type="bibr" rid="ref3">3</xref>
          ) constitutes a
trivial index permutation. Constraints involving ≤ inequalities are readily converted to the ≥ standard
form through multiplication by −1, noting that constraint right-hand sides may assume negative values
without loss of generality.
        </p>
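The canonical form above can be checked mechanically for small instances. The following is a minimal illustrative sketch (ours, not the paper's clipping method): it enumerates all 2^n Boolean vectors, keeps those satisfying every ≥ constraint, and returns the maximizer. All names (`solve_bip`, `c`, `A`, `b`) are hypothetical, chosen to mirror the symbols of equations (1)–(2).

```python
# Exhaustive solver for the canonical BIP form:
#   maximize  F = sum(c[j] * x[j])
#   subject to  sum(A[i][j] * x[j]) >= b[i]  for every row i,  x[j] in {0, 1}.
from itertools import product

def solve_bip(c, A, b):
    """Return (best_value, best_x) over all 2**n Boolean vectors, or (None, None)."""
    n = len(c)
    best_val, best_x = None, None
    for x in product((0, 1), repeat=n):
        # Check every >= constraint of the standard form.
        if all(sum(A[i][j] * x[j] for j in range(n)) >= b[i] for i in range(len(b))):
            val = sum(c[j] * x[j] for j in range(n))
            if best_val is None or val > best_val:
                best_val, best_x = val, x
    return best_val, best_x

# Coefficients already ordered 0 <= c_1 <= c_2 <= c_3, as condition (3) requires.
c = [1, 2, 3]
A = [[1, 1, 1],    # x1 + x2 + x3 >= 1
     [-1, -1, 0]]  # -x1 - x2 >= -1, i.e. x1 + x2 <= 1 rewritten in >= form
b = [1, -1]
print(solve_bip(c, A, b))  # -> (5, (0, 1, 1))
```

The second constraint illustrates the ≤-to-≥ conversion described above: x1 + x2 ≤ 1 becomes −x1 − x2 ≥ −1 after multiplication by −1.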
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Theoretical Properties and Complexity Considerations</title>
        <p>The BIP problem exhibits several fundamental theoretical properties that distinguish it from its
continuous optimization counterpart and establish its computational characteristics [? ]:</p>
        <p>Property 1 (Polynomial relaxation): relaxing the integrality constraint x<sub>j</sub> ∈ {0, 1} to the
continuous interval x<sub>j</sub> ∈ [0, 1] transforms the problem into a standard Linear Programming (LP) instance, which
admits polynomial-time solution via algorithms such as the ellipsoid method or interior-point methods.
This LP relaxation provides lower bounds (for minimization) or upper bounds (for maximization) on
the optimal objective value and forms the theoretical foundation for branch-and-bound methodologies.</p>
        <p>Property 2 (Formulation equivalence): multiple equivalent formulations exist for BIP problems.
Extensions may incorporate equality constraints in addition to inequalities without altering the
fundamental complexity class. Furthermore, the decision problem variant (determining whether a feasible
solution exists) and the optimization variant (finding an optimal solution) are polynomially equivalent
through standard reduction techniques.</p>
        <p>Property 3 (NP-completeness): the general BIP decision problem is NP-complete [33], as established
through reduction from the 3-SAT problem or other canonical NP-complete problems. Consequently, the
optimization variant belongs to the NP-hard class. This theoretical classification implies that worst-case
solution time grows exponentially with the problem dimension n unless P = NP.</p>
        <p>Property 4 (Solution space structure): the feasible region of a BIP problem constitutes a finite
subset of the n-dimensional Boolean hypercube B<sup>n</sup> = {0, 1}<sup>n</sup>, containing at most 2<sup>n</sup> vertices. However,
constraint satisfaction may render the vast majority of these vertices infeasible, creating a sparse feasible
region embedded within the hypercube structure.</p>
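Property 4 can be made concrete with a small count. The sketch below (ours; the single constraint is hypothetical) shows how one knapsack-style constraint, written in the ≥ standard form, leaves only a sparse feasible subset of the hypercube's vertices.

```python
# Property 4 illustration: one constraint can make most hypercube vertices infeasible.
from itertools import product

n = 10
# Knapsack-style constraint in >= standard form:
# -(x_1 + ... + x_n) >= -2, i.e. at most two variables may be active.
feasible = [x for x in product((0, 1), repeat=n) if -sum(x) >= -2]

total = 2 ** n               # 1024 hypercube vertices in B^10
print(len(feasible), total)  # 56 feasible: C(10,0) + C(10,1) + C(10,2)
```

Only 56 of 1024 vertices survive, so the feasible region occupies about 5% of the hypercube.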
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Algorithmic Classification for BIP Solution Methods</title>
        <p>The computational challenges posed by Boolean integer programming have motivated extensive
algorithm development spanning multiple decades. A comprehensive taxonomic classification of solution
methodologies can be organized into three principal categories based on their fundamental search
strategies and theoretical foundations:</p>
        <sec id="sec-3-3-1">
          <title>Category 1: Combinatorial Enumeration Methods</title>
          <p>This class encompasses algorithms that systematically explore the solution space through structured enumeration strategies:</p>
          <p>Complete enumeration: exhaustive evaluation of all 2<sup>n</sup> vertices of the Boolean hypercube,
guaranteeing optimal solution discovery at the expense of exponential computational cost O(2<sup>n</sup>).</p>
          <p>Branch-and-bound techniques: intelligent enumeration employing recursive partitioning of the
solution space combined with bound computation via LP relaxations to prune suboptimal branches.
Modern implementations incorporate sophisticated branching heuristics such as strong branching,
pseudo-cost branching, and reliability branching [34].</p>
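The branch-and-bound idea can be sketched in a few lines. This is a toy illustration (ours, not a production solver): where real implementations bound each node via an LP relaxation, this sketch uses a crude optimistic bound, the current value plus every still-unfixed positive coefficient, which is enough to show how pruning discards subtrees. All names are illustrative.

```python
# Toy branch-and-bound for the canonical BIP form (maximization).
def branch_and_bound(c, A, b):
    n, best = len(c), {"val": None, "x": None}

    def feasible(x):
        return all(sum(A[i][j] * x[j] for j in range(n)) >= b[i]
                   for i in range(len(b)))

    def recurse(x, j, value):
        # Optimistic bound: pretend every remaining positive coefficient is taken.
        bound = value + sum(cj for cj in c[j:] if cj > 0)
        if best["val"] is not None and bound <= best["val"]:
            return  # prune: this subtree cannot improve the incumbent
        if j == n:
            if feasible(x):
                best["val"], best["x"] = value, tuple(x)
            return
        for bit in (1, 0):  # try x_j = 1 first so good incumbents appear early
            recurse(x + [bit], j + 1, value + c[j] * bit)

    recurse([], 0, 0)
    return best["val"], best["x"]

c = [1, 2, 3]
A = [[1, 1, 1], [-1, -1, 0]]
b = [1, -1]
print(branch_and_bound(c, A, b))  # -> (5, (0, 1, 1)), same as exhaustive search
```

Checking feasibility only at leaves keeps the sketch short; practical solvers also detect infeasibility of partial assignments and use far tighter LP-based bounds.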
          <p>Dynamic programming approaches: decomposition of the problem into overlapping subproblems
with optimal substructure properties, enabling polynomial-space solution for certain restricted problem
classes exhibiting appropriate recursive structure.</p>
          <p>Additive algorithms: methods based on the foundational work of Balas [21] that construct solutions
incrementally through systematic addition of variables while maintaining feasibility and optimality
conditions.</p>
          <p>Rank-based approaches: specialized techniques that exploit ordered variable structure by organizing
the Boolean hypercube into hierarchical ranks according to Hamming weight (number of ones in binary
representations), facilitating structured search through geometric graph representations.</p>
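The rank-based organization described above can be sketched directly: group the vertices of B^n by Hamming weight, giving n + 1 ranks with C(n, k) vertices in rank k. The function name `ranks` is ours, for illustration.

```python
# Partition the Boolean hypercube B^n into ranks by Hamming weight.
from itertools import product
from math import comb

def ranks(n):
    layers = [[] for _ in range(n + 1)]
    for x in product((0, 1), repeat=n):
        layers[sum(x)].append(x)  # rank = number of ones in x
    return layers

layers = ranks(4)
print([len(layer) for layer in layers])  # -> [1, 4, 6, 4, 1]
# Rank sizes follow the binomial coefficients C(n, k).
assert all(len(layers[k]) == comb(4, k) for k in range(5))
```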
        </sec>
        <sec id="sec-3-3-2">
          <title>Category 2: Sequential Solution Space Reduction Methods</title>
          <p>This category comprises techniques that progressively narrow the feasible region through iterative constraint tightening and dominated
solution elimination:</p>
          <p>Sequential variant analysis: systematic examination of solution candidates with incremental
constraint satisfaction verification and dominated solution exclusion based on lexicographic or parametric
ordering.</p>
          <p>Sequential plan construction: iterative solution building through staged variable fixing and
partial solution evaluation, commonly employed in resource allocation and scheduling contexts.</p>
          <p>Cutting plane methods: generation of valid inequalities that eliminate fractional solutions from the
LP relaxation without excluding integer feasible points, thereby tightening the polyhedral relaxation.
Modern cutting plane families include Gomory cuts, lift-and-project cuts, and problem-specific cuts
derived from the problem structure [35].</p>
        </sec>
        <sec id="sec-3-3-3">
          <title>Category 3: Solution Quality Improvement Methods</title>
          <p>These approaches iteratively refine solution quality through local or global search mechanisms:</p>
          <p>Descent direction methods: local improvement algorithms that identify improving directions in the
discrete solution space through neighborhood exploration, analogous to gradient descent in continuous
optimization.</p>
          <p>Stochastic search techniques: randomized algorithms, including simulated annealing, genetic
algorithms, and tabu search, that employ probabilistic mechanisms to escape local optima and explore the
solution landscape.</p>
          <p>Local optimization heuristics: greedy algorithms and constructive heuristics that build solutions
through myopic optimization decisions, trading optimality guarantees for computational efficiency.</p>
          <p>ε-optimal algorithms: approximation algorithms that provide solution quality guarantees of the form
F* ≤ F ≤ (1 + ε) · F* for minimization problems, where F* denotes the optimal objective value and
ε &gt; 0 represents the approximation factor.</p>
        </sec>
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Hybrid Methodologies and Contemporary Developments</title>
        <p>Contemporary BIP solvers typically integrate multiple algorithmic paradigms into sophisticated hybrid
frameworks. Commercial solvers such as CPLEX, Gurobi, and XPRESS combine branch-and-bound
search with cutting plane generation, primal heuristics, pre-solving techniques, and parallel computation
[36]. These systems employ extensive algorithm portfolios that dynamically select and configure solution
strategies based on instance characteristics detected through automated problem classification. Despite
decades of algorithmic innovation and substantial computational performance improvements driven
by hardware advances, the fundamental computational barrier posed by NP-completeness persists.
Large-scale BIP instances arising in industrial applications frequently exceed the practical capabilities
of general-purpose solvers, motivating continued research into problem-specific methods that exploit
structural properties to achieve superior performance within restricted problem classes.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Pruning Unpromising Solutions in Boolean ILP: Rank-Based Clipping Method</title>
      <sec id="sec-5-1">
        <title>4.1. Theoretical Foundations and Graph-Based Representation</title>
        <p>The proposed methodology is based on the rank-based approach originally developed in
[37, 38, 36, 39]. This framework provides a systematic mechanism for transforming the exponential
search complexity inherent in NP-complete Boolean integer programming problems into a structured
graph traversal problem amenable to efficient pruning strategies.</p>
        <p>The fundamental principle underlying the rank-based approach involves partitioning the $n$-dimensional
Boolean hypercube $B^n = \{0, 1\}^n$ into $n + 1$ hierarchical ranks, where rank $r$ (for
$r = 0, 1, \ldots, n$) contains all binary vectors possessing exactly $r$ components equal to unity. This
partitioning naturally orders the $2^n$ vertices of the hypercube according to their Hamming weight,
creating a layered graph structure $G_\Delta$ that encodes the complete solution space while facilitating
systematic exploration.</p>
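For illustration only, the rank partition can be enumerated directly for small $n$; the function name and the tuple representation below are our own, not part of the method:

```python
from itertools import product

def partition_into_ranks(n):
    """Partition the Boolean hypercube {0,1}^n into ranks by Hamming weight."""
    ranks = {r: [] for r in range(n + 1)}
    for x in product((0, 1), repeat=n):
        ranks[sum(x)].append(x)  # rank = number of components equal to 1
    return ranks

ranks = partition_into_ranks(4)
# rank r holds C(n, r) vectors: 1, 4, 6, 4, 1 for n = 4
assert [len(ranks[r]) for r in range(5)] == [1, 4, 6, 4, 1]
```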
      </sec>
      <sec id="sec-5-2">
        <title>4.2. Graph Construction and Properties</title>
        <p>The graph $G_\Delta$ represents an alternative geometric realization of the Boolean hypercube $B^n$, wherein
vertices are organized into ranks and edges connect vertices differing in exactly one binary
position. Formally, the graph $G_\Delta = (V, E)$ possesses the following structural characteristics.</p>
        <p>Vertex set: the vertex set $V$ is partitioned into $n + 1$ disjoint ranks:
$V = \bigcup_{r=0}^{n} V_r$,
where $V_r = \{x \in \{0, 1\}^n : \|x\|_1 = r\}$
and $\|x\|_1 = \sum_{i=1}^{n} x_i$ denotes the Hamming weight (number of ones) of a vector $x$.</p>
        <p>Edge set: an edge $(u, v) \in E$ exists if and only if $u$ and $v$ belong to consecutive ranks and differ
in exactly one coordinate position, i.e., $\|u - v\|_1 = 1$ with $\|u\|_1 = r$ and $\|v\|_1 = r + 1$ for some $r \in
\{0, 1, \ldots, n - 1\}$.</p>
        <p>Cardinality: rank $r$ contains exactly $\binom{n}{r}$ vertices, and the total number of edges equals
$\sum_{r=0}^{n-1} \binom{n}{r}(n - r) = n \cdot 2^{n-1}$.</p>
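The stated cardinalities can be checked mechanically for a small $n$; the following sketch (our own illustrative code, not part of the method) enumerates the layered graph and verifies the per-rank vertex count and the total edge count:

```python
from itertools import product
from math import comb

def layered_graph(n):
    """Build vertices and rank-to-rank edges of the Boolean hypercube."""
    vertices = list(product((0, 1), repeat=n))
    edges = []
    for u in vertices:
        for j in range(n):
            if u[j] == 0:  # flipping a 0 to 1 moves up exactly one rank
                v = u[:j] + (1,) + u[j + 1:]
                edges.append((u, v))
    return vertices, edges

V, E = layered_graph(4)
assert len(V) == 2 ** 4
assert len(E) == 4 * 2 ** 3                        # n * 2^(n-1) = 32
assert sum(1 for v in V if sum(v) == 2) == comb(4, 2)
```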
      </sec>
      <sec id="sec-5-3">
        <title>4.3. Multi-Attribute Edge Weighting Scheme</title>
        <p>
          To incorporate the BIP problem structure (
          <xref ref-type="bibr" rid="ref1">1</xref>
          )-(
          <xref ref-type="bibr" rid="ref2">2</xref>
          ) into the graph representation, each edge $(u, v) \in E$
entering vertex $v$ is assigned $(m + 1)$ distinct weights that capture both objective function contributions
and constraint violation measures:
        </p>
        <p>
          Objective weight $c_j \ge 0$: equal to the coefficient of variable $x_j$ in the objective function (
          <xref ref-type="bibr" rid="ref1">1</xref>
          ), representing the marginal contribution to the objective value when transitioning from $u$ to $v$ with $x_j = 1$.
        </p>
        <p>
          Constraint weights $a_{ij} \in \mathbb{R}$ for $i = 1, 2, \ldots, m$: equal to the coefficients of the variable $x_j$ in the $m$
constraint inequalities (
          <xref ref-type="bibr" rid="ref2">2</xref>
          ), quantifying the impact on constraint satisfaction when $x_j$ transitions
from 0 to 1.
        </p>
      </sec>
      <sec id="sec-5-4">
        <title>4.4. Path Characterization and Feasibility Assessment</title>
        <p>A path $P$ in graph $G_\Delta$ from the source vertex $s$ (the all-zeros vector at rank 0) to a vertex $v$ at rank
$r$ is characterized by $(m + 1)$ cumulative path lengths that aggregate edge weights along the traversed
edges:</p>
        <p>
          Definition 1 (Objective path length): the objective path length $f(P)$ represents the total weight
accumulated with the objective function coefficients:
$f(P) = \sum_{j} c_j$,
where the summation extends over all variable indices $j$ corresponding to the edges traversed in the
path $P$. This quantity directly equals the objective function value (
          <xref ref-type="bibr" rid="ref1">1</xref>
          ) for the binary solution vector
represented by the path $P$.
        </p>
        <p>Definition 2 (Constraint path lengths): for each constraint $i \in \{1, 2, \ldots, m\}$, the constraint path
length $d^{(i)}(P)$ accumulates the corresponding constraint coefficients:
$d^{(i)}(P) = \sum_{j} a_{ij}$,
where the summation again extends over the variable indices $j$ of the traversed edges (Eqs. 4-6).</p>
        <p>The feasibility of the solution represented by a path $P$ is determined by comparing these accumulated
constraint lengths against the right-hand side bounds $b_i$.</p>
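In code, Definitions 1 and 2 amount to coefficient sums over the indices activated along a path; a minimal sketch (the names `path_lengths`, `c`, and `A` are our illustrative choices):

```python
def path_lengths(active, c, A):
    """Objective and per-constraint path lengths for a set of activated indices.

    active: indices j traversed along the path (x_j = 1)
    c:      objective coefficients; A: m x n constraint matrix
    """
    f = sum(c[j] for j in active)                       # Definition 1
    d = [sum(row[j] for j in active) for row in A]      # Definition 2
    return f, d

c = [5, 3, 8]
A = [[2, 1, 4], [1, 0, 2]]
f, d = path_lengths([0, 2], c, A)   # path activating x_0 and x_2
assert (f, d) == (13, [6, 3])
```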
      </sec>
      <sec id="sec-5-5">
        <title>4.5. Fundamental Equivalence Theorem</title>
        <p>
          The graph $G_\Delta$ provides an exact representation of the BIP problem (
          <xref ref-type="bibr" rid="ref1">1</xref>
          )-(
          <xref ref-type="bibr" rid="ref2">2</xref>
          ) through the following
correspondence:
        </p>
        <p>
          Theorem 1 (Graph-BIP equivalence): there exists a bijective mapping between complete paths from
source to sink (rank 0 to rank $n$) in graph $G_\Delta$ and vertices of the Boolean hypercube $B^n$. Furthermore,
the optimal solution to the BIP problem (
          <xref ref-type="bibr" rid="ref1">1</xref>
          )-(
          <xref ref-type="bibr" rid="ref2">2</xref>
          ) corresponds to the path $P^* \in \Pi$ that:
        </p>
        <p>Maximizes the objective path length: $f(P^*) = \max_{P \in \Pi} f(P)$. Satisfies all constraint conditions:
$d^{(i)}(P^*) \ge b_i$ for all $i = 1, 2, \ldots, m$, where $\Pi$ denotes the set of all complete paths from the source
to rank $n$.</p>
        <p>This equivalence transforms the discrete optimization problem into a constrained longest-path
problem on a directed acyclic graph, providing the theoretical foundation for the rank-based algorithmic
framework.</p>
      </sec>
      <sec id="sec-5-6">
        <title>4.6. Primary Clipping Strategy and Upper Bound Estimation</title>
        <p>The computational efficiency of the proposed method comes from the aggressive pruning of unpromising
partial paths during the forward search process. The principal clipping criterion employs an optimistic
upper bound on the maximum achievable objective value from any given partial path.</p>
        <p>Strategy 1 (Fundamental clipping inequality): a partial path $P_r$ terminating at a vertex $v$ in
rank $r$ is eliminated from further exploration if the following condition holds:
$f(P_r) + U_r &lt; \max\{f(P_r^*)\}$,</p>
        <p>{}
where (  ) (see Eq. 6) represents the accumulated objective value along the current partial path,
  = +1 + +2 + ... +  with   = 0 for  = 1, 2, ...,  − 1 constitutes an optimistic upper bound
on the maximum possible objective increment achievable by extending the path from rank  to rank ,
the right part of (Ineq. 8) denotes the best objective value discovered among all paths explored up to
rank .</p>
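A minimal sketch of this clipping test, assuming the suffix-sum form of $U_r$ given above (function and variable names are ours):

```python
def suffix_bounds(c):
    """Optimistic bounds: U[r] = c[r] + c[r+1] + ... + c[n-1], with U[n] = 0."""
    n = len(c)
    U = [0] * (n + 1)
    for r in range(n - 1, -1, -1):
        U[r] = U[r + 1] + c[r]
    return U

def can_clip(f_partial, r, U, best):
    """Clip when even the optimistic completion cannot beat the incumbent."""
    return f_partial + U[r] < best

c = [5, 3, 8, 2]
U = suffix_bounds(c)
assert U == [18, 13, 10, 2, 0]
assert can_clip(4, 2, U, best=15)        # 4 + 10 < 15: prune
assert not can_clip(6, 2, U, best=15)    # 6 + 10 >= 15: keep
```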
        <p>
          Justification: The quantity $U_r$ represents a non-guaranteed forecast (an optimistic estimate)
assuming that all remaining variables with positive objective coefficients can be simultaneously set
to unity, which may violate constraint feasibility. Nevertheless, this optimistic bound provides a valid
upper limit on the achievable objective improvement. If inequality (
          <xref ref-type="bibr" rid="ref8">8</xref>
          ) is satisfied, the current partial
path cannot possibly yield a solution superior to already discovered alternatives, regardless of how the
remaining variables are assigned, thereby justifying its elimination.
        </p>
        <p>This clipping mechanism substantially reduces both the enumeration tree sizes for exact algorithms
and the approximation error for heuristic variants, yielding significant computational time reductions
while preserving solution quality guarantees.</p>
      </sec>
      <sec id="sec-5-7">
        <title>4.7. Extended Graph Representation for Mixed Constraint Types</title>
        <p>To accommodate BIP formulations containing both "≤" and "≥" inequality constraints, an enhanced
graph representation $G'_\Delta$ is constructed wherein each edge incoming to a vertex $v$ carries three
distinct weight types:</p>
        <p>
          Objective weight $c_j$: identical to the coefficient of variable $x_j$ in the objective function (
          <xref ref-type="bibr" rid="ref1">1</xref>
          ).
        </p>
        <p>
          Upper-bound constraint weights $a_{1j}, a_{2j}, \ldots, a_{kj}$: coefficients of $x_j$ in the first $k$ constraints
of form (
          <xref ref-type="bibr" rid="ref2">2</xref>
          ) with "≤" relational operators.
        </p>
        <p>
          Lower-bound constraint weights $g_{1,j}, g_{2,j}, \ldots, g_{l,j}$: coefficients of $x_j$ in the remaining $l =
m - k$ constraints with "≥" relational operators, where the constraint indices belong to $\{k + 1, k + 2, \ldots, m\}$
for $i = 1, 2, \ldots, l$ (Eqs. 8-9).
        </p>
      </sec>
      <sec id="sec-5-8">
        <title>4.8. Solution Space Decomposition</title>
        <p>The complete solution space can be decomposed as a union of rank-specific subsets:
$\Pi = \bigcup_{r=1}^{n} \Pi_r$,
where $\Pi_r$ denotes the collection of all partial paths that terminate at rank $r$. This decomposition
enables stage-wise solution construction with intermediate feasibility checking and bound
updating.</p>
        <p>As demonstrated in previous publications [38], [36], the enhanced graph $G'_\Delta$ supports comprehensive
path characterization through three cumulative length measures for any path $P$ from the source vertex
$s$ to a vertex $v$:</p>
        <p>Multi-dimensional path length vector:
$\mathbf{d}(P) = (f(P), \mathbf{d}^{\le}(P), \mathbf{d}^{\ge}(P))$,
(10)
where:
$f(P) = \sum_{j \in P} c_j$ quantifies the objective function contribution,
$\mathbf{d}^{\le}(P) = (d_1(P), d_2(P), \ldots, d_k(P))$ with $d_i(P) = \sum_{j \in P} a_{ij}$ tracks the "≤" constraint
accumulations, and
$\mathbf{d}^{\ge}(P) = (g_1(P), g_2(P), \ldots, g_l(P))$ with $g_i(P) = \sum_{j \in P} g_{i,j}$ monitors the "≥"
constraint progress.</p>
      </sec>
      <sec id="sec-5-9">
        <title>4.9. Feasibility Characterization</title>
        <p>
          A complete path $P$ represents a feasible solution to the BIP problem (
          <xref ref-type="bibr" rid="ref1">1</xref>
          )-(
          <xref ref-type="bibr" rid="ref2">2</xref>
          ) if and only if its
multidimensional path length satisfies:
$d_i(P) \le b_i$ for $i = 1, 2, \ldots, k$, and $g_i(P) \ge b_{k+i}$ for $i = 1, 2, \ldots, l$.
(11)
        </p>
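Condition (11) reduces to componentwise comparisons; a minimal sketch (function and argument names are our own):

```python
def is_feasible(d_le, d_ge, b_le, b_ge):
    """Condition (11): every '<=' accumulation within its bound,
    every '>=' accumulation meeting its bound."""
    return (all(d <= b for d, b in zip(d_le, b_le)) and
            all(g >= b for g, b in zip(d_ge, b_ge)))

assert is_feasible([3, 7], [5], [4, 7], [5])
assert not is_feasible([3, 8], [5], [4, 7], [5])   # 8 > 7 violates a '<=' bound
```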
      </sec>
      <sec id="sec-5-10">
        <title>4.10. Framework for Comprehensive Clipping Strategy Development</title>
        <p>Realization of the proposed rank-based method necessitates the development of a comprehensive suite
of clipping strategies $\{S_i\}$ that systematically eliminate unpromising partial paths while guaranteeing
that at least one optimal or near-optimal path survives to completion. The subsequent section details
six complementary clipping strategies, each exploiting distinct structural properties of the problem
formulation and graph representation to achieve maximum pruning effectiveness without sacrificing
solution quality. These strategies collectively address objective function bounds, constraint feasibility
projections, corridor-based search space restriction, and multi-criteria dominance relationships, forming
an integrated algorithmic framework for efficient BIP solution.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>5. Description of clipping strategies for the proposed method</title>
      <p>The computational efficiency of the proposed rank-based method fundamentally depends on the
systematic elimination of unpromising partial paths during the forward exploration process. This section
presents six complementary clipping strategies, denoted $\{S_1, S_2, \ldots, S_6\}$, each exploiting distinct
structural properties of the BIP formulation and the underlying graph representation to achieve aggressive
search space reduction while preserving optimality guarantees or controlled approximation bounds.</p>
      <sec id="sec-6-1">
        <title>Strategy 1: Corridor-Based Maximum Objective Selection</title>
        <p>The foundational strategy $S_1$ establishes a systematic mechanism for constructing paths from rank
$r$ to rank $r + 1$ by selecting extensions that maximize the objective function contribution while
maintaining constraint feasibility. This strategy defines a "search corridor" within the feasible
region, concentrating computational effort on the most promising solution candidates.</p>
        <p>Formal Definition (Strategy 1): when extending the set of partial paths $\{P_r\}$ at rank $r$ to generate
paths $\{P_{r+1}\}$ at rank $r + 1$, strategy $S_1$ selects extensions according to:
$S_1: P_{(r+1)} = \max_{c_j} P_r \circ e(u, v), \quad j \in \{r + 1, r + 2, \ldots, n\}, \; i \in \{1, 2, \ldots, m\}, \; i \ne j$,
(12)
where $P_r \circ e(u, v)$ denotes the extension of path $P_r$ by adding the edge from vertex $u$ to vertex $v$,
and the maximization is performed over the objective function weights $\{c_j\}$.</p>
        <p>Feasibility Constraints: the selected paths must simultaneously satisfy feasibility conditions with
respect to constraint accumulations:</p>
        <p>For "≤" constraints: $d_i(P_r) \le b_i$ for $i = 1, \ldots, k$, ensuring that accumulated constraint weights do not exceed the
upper bounds.</p>
        <p>For "≥" constraints: $g_i(P_r) \ge b_i$ for $i = k + 1, \ldots, m$, ensuring that accumulated constraint weights meet or
exceed the lower bounds.</p>
        <p>Paths satisfying these feasibility properties are designated as having property $F$, forming the
admissible set $\Pi_F$ for corridor construction.</p>
      </sec>
      <sec id="sec-6-2">
        <title>Double-Corridor Principle</title>
        <p>The concept of a "double corridor" introduces a sophisticated two-stage filtering mechanism that
progressively narrows the search space through nested feasible regions, providing both upper and lower
bounds on constraint satisfaction.</p>
        <p>Definition 1 (Double Corridor): a double corridor from the complete set $\{P_r\}$ at rank $r$ to the
extended set $\{P_{r+1}\}$ at rank $r + 1$ constitutes a hierarchical structure of nested feasible path sets $\{\Pi_F\}$
bounded by upper and lower constraint thresholds, all satisfying property $F$. The double corridor
construction proceeds through two sequential stages:</p>
        <p>Stage 1 (Primary corridor definition): the upper boundary of the first corridor is established through
relation (11), selecting paths that maximize objective function increments. The lower boundary is
determined through either:
$P_{(r+1)} = \min_{a_{ij}} P_r \circ e(u, v)$,
(13)
which identifies paths minimizing accumulation with respect to the "≤" constraint coefficients, thereby
maintaining the maximum feasibility margin, or alternatively:
$P_{(r+1)} = \max_{g_{ij}} P_r \circ e(u, v)$,
(14)
which identifies paths maximizing accumulation with respect to the "≥" constraint coefficients, thereby
ensuring rapid satisfaction of the lower-bound requirements.</p>
        <p>Stage 2 (Nested corridor refinement): The second corridor is defined complementarily: if relation (12)
establishes the lower boundary of the primary corridor, then relation (13) defines the second corridor
boundaries, and vice versa. By construction, the second corridor is strictly contained within the first
corridor, creating a nested filtering hierarchy (illustrated schematically in Figure 2).</p>
        <p>This double-corridor architecture provides bidirectional constraint monitoring, simultaneously
preventing premature constraint violation (via the "≤ " monitoring) and ensuring adequate progress toward
constraint satisfaction (via the "≥" monitoring).</p>
        <p>Strategy $S_2$: Optimistic Upper Bound Filtration</p>
        <p>
          The second strategy implements the fundamental clipping inequality introduced in equation (
          <xref ref-type="bibr" rid="ref3">3</xref>
          ),
providing aggressive elimination of partial paths that cannot possibly yield solutions superior to already
discovered incumbents.
        </p>
      </sec>
      <sec id="sec-6-3">
        <title>Theorem 1 (Corridor Filtration via Upper Bounds)</title>
        <p>Partial paths of rank $r + 1$ constructed through extensions of the path set $\{P_r\}$ can be excluded from subsequent
exploration if they satisfy the clipping strategy $S_2$ (Eq. 14). This strategy constitutes a filtration rule within the established corridor,
systematically eliminating dominated solution candidates.</p>
        <p>Proof: Strategy $S_1$ (Eq. 11) identifies and preserves at least one local extremum from the complete
collection of local extrema present in the current rank. By construction, the maximum path length of
this preserved local extremum necessarily dominates all other local extrema when paths are extended
from $\Pi_F$. Consequently, partial paths $P_r$ satisfying the following inequality can be safely eliminated
from the active path set $\{P_r\}$ without risking loss of optimality:</p>
        <p>$S_2: f(P_r) + U_r &lt; \max f(P_r^*)$,
where $f(P_r^*)$ represents the best objective value achieved among all paths at rank $r$.</p>
        <p>Recursive Upper Bound Computation: the optimistic upper bound $U_r$ is computed recursively
through the backward recurrence relation:
$U_r = U_{r+1} + c_{r+1}, \quad r = n - 1, n - 2, \ldots, 1$, with $U_n = 0$.</p>
        <p>This computation efficiently maintains cumulative sums of the remaining objective coefficients,
providing O(n) preprocessing overhead followed by O(1) lookup during path evaluation.</p>
        <p>Algorithmic Efficiency Implications: filtration performed by strategy $S_2$ (Eq. 14) operates within the
corridor established by $S_1$ (Eq. 11), creating a two-level pruning hierarchy. Empirical studies indicate
that this combined approach eliminates between 60% and 90% of partial paths in typical problem instances,
yielding substantial computational savings.</p>
      </sec>
      <sec id="sec-6-4">
        <title>Strategy 3: Lower-Bound Constraint Anticipation</title>
        <p>To effectively handle mixed constraint types, particularly "≥" constraints that impose lower bounds
on accumulation, an additional calibration vector system is introduced to provide forward-looking
estimates of constraint satisfaction potential.</p>
        <p>Calibrated Vector System: for constraints with "≥" relational operators, define the calibrated vector
system:</p>
        <p>$\bar{g}^* = (g^*_{i,r}, g^*_{i,r+1}, \ldots, g^*_{i,n})$,
where each component is computed through the backward recursion:
$g^*_{i,r} = g^*_{i,r+1} + g_{i,r+1}$, for $i \in \{k + 1, k + 2, \ldots, m\}$, with $g^*_{i,n} = 0$.</p>
        <p>These vectors provide optimistic upper bounds on the maximum possible accumulation for each "≥"
constraint from the current position to the terminal rank, analogous to the $U_r$ bounds for the objective
function maximization.</p>
        <p>Formal Definition (Strategy 3): during the second filtration stage, a partial path $P_r$ terminating at
a vertex $v$ at rank $r$ is excluded from further exploration if the following system of inequalities
(defining the filtration rule $S_3$ within the double corridor) holds (Eqs. 15-20):
$S_3: \begin{cases} g_{1,r} + g^*_{1,r} &lt; b_{k+1}, \\ g_{2,r} + g^*_{2,r} &lt; b_{k+2}, \\ \quad \vdots \\ g_{l,r} + g^*_{l,r} &lt; b_{m}. \end{cases}$</p>
        <p>Interpretation: this system performs a forward-looking feasibility assessment. If the current
accumulated constraint value $g_{i,r}$ plus the optimistic remaining accumulation potential $g^*_{i,r}$ fails
to reach the required lower bound $b_i$, then the constraint cannot possibly be satisfied regardless of
subsequent variable assignments, justifying immediate path elimination.</p>
        <p>Computational Complexity: the calibrated vector system requires O(m n) preprocessing followed by
O(m) evaluation per path extension, representing minimal overhead relative to the exponential search
space reduction achieved through early infeasibility detection.</p>
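Under the suffix-sum reading of the backward recursion above, Strategy 3 can be sketched as follows (illustrative code; all names are ours):

```python
def ge_suffix_bounds(G):
    """g_star[i][r]: optimistic remaining accumulation for '>=' constraint i
    once r variables have been considered (suffix sums of the coefficients)."""
    return [[sum(row[r:]) for r in range(len(row) + 1)] for row in G]

def s3_prune(acc, r, g_star, b_ge):
    """Prune when accumulated value plus optimistic remainder cannot reach some bound."""
    return any(a + gs[r] < bi for a, gs, bi in zip(acc, g_star, b_ge))

G = [[1, 2, 3]]                 # coefficients of one ">=" constraint
gs = ge_suffix_bounds(G)
assert gs[0] == [6, 5, 3, 0]
assert s3_prune([0], 2, gs, [4])        # 0 + 3 < 4: bound unreachable
assert not s3_prune([2], 2, gs, [4])    # 2 + 3 >= 4: still reachable
```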
      </sec>
      <sec id="sec-6-5">
        <title>Strategy 4: Maximal Rank Attainment via Shortest Constraint Paths</title>
        <p>The fourth strategy addresses the dual objective of maximizing the rank achieved (corresponding
to solution completeness) while maintaining constraint feasibility through identification of shortest
accumulation paths with respect to constraint weights.</p>
        <p>Formal Definition (Strategy 4): this strategy constructs paths capable of reaching maximal rank on the
graph $G'_\Delta$ by computing the shortest paths with respect to constraint weight accumulations, accounting
for both inequality directions. The implementation combines relations (12) and (13) through their union:
$S_4: P_{(r+1)} = \arg\min_{P \in F} \left\{ \max_{i \in \{1, \ldots, k\}} \frac{d_i(P)}{b_i}, \; \max_{i \in \{k+1, \ldots, m\}} \frac{b_i - g_i(P)}{b_i} \right\}$,
where $F$ denotes the feasible path set satisfying property $F$.</p>
        <p>Operational Principle: strategy $S_4$ (Eq. 19) prioritizes paths that maintain the maximum feasibility
margin across all constraints simultaneously, measured through normalized constraint slack. This
approach is particularly effective for highly constrained problems where feasible solutions constitute a
sparse subset of the Boolean hypercube.</p>
      </sec>
      <sec id="sec-6-6">
        <title>Strategy 5: Corridor-Constrained Objective Maximization</title>
        <p>Building upon the corridor framework established by the previous strategies, $S_5$ implements a refined
selection mechanism that combines corridor feasibility with objective function optimization.</p>
        <p>
          Formal Definition (Strategy 5): from the active path set $\{P_r\}$ at the current rank, strategy $S_5$
selects paths satisfying the constraint-based filtering of $S_4$ (Eq. 19) while simultaneously maximizing
objective function increments according to the weights $\{c_j\}$ in functional (
          <xref ref-type="bibr" rid="ref1">1</xref>
          ):
$S_5: P_{(r+1)} = \arg\max_{P \in F_4} f(P)$,
where $F_4 \subseteq \{P_r\}$ denotes the subset of paths surviving the $S_4$ filtering criterion (Eq. 19).
        </p>
        <p>Integration Effect: this strategy creates a hierarchical filtering cascade wherein constraint feasibility
considerations (via $S_4$, Eq. 19) provide a coarse filter, followed by objective-based fine-grained selection
(via $S_5$), achieving a balanced exploration-exploitation trade-off.</p>
      </sec>
      <sec id="sec-6-7">
        <title>Strategy 6: Infeasibility Detection via Minimal Coefficient Analysis</title>
        <p>The final strategy provides early detection of inevitable constraint violation through pessimistic
bound analysis, complementing the optimistic bounds employed in previous strategies.</p>
        <p>Formal Definition (Strategy 6): a partial path $P_r$ at rank $r$ is immediately excluded from further
exploration if its accumulated constraint weight, when augmented by the minimum possible remaining
contribution, exceeds the constraint upper bound:
$S_6: d_i(P_r) + \min_j \{a_{ij}\} &gt; b_i, \quad i \in \{1, 2, \ldots, k\}, \; j \in \{1, 2, \ldots, n\}.$
(23)</p>
        <p>Theorem 2 (Early infeasibility detection): if the accumulated weight $d_i$ of a partial path at rank
$r$ with respect to "≤" constraints satisfies relation (21), then this path can be excluded from subsequent
exploration without loss of optimality.</p>
        <p>Proof: The extension of paths from rank $r$ to subsequent ranks involves incremental addition of edge
weights $(c_j, a_{ij}, g_{ij})$ corresponding to newly activated variables. For "≤" constraints, feasibility requires
$d_i(P_{\text{final}}) \le b_i$. The current accumulated value $d_i(P_r)$ will be augmented by at least $\min_j \{a_{ij}\}$ in the
most favorable case (selecting the variable with the minimum constraint coefficient). If this pessimistic
lower bound on the final accumulated value already exceeds $b_i$ (as indicated by satisfaction of Eq.
21), then property $F$ cannot be maintained regardless of subsequent variable selection, establishing
inevitable infeasibility. Therefore, the path can be immediately pruned without risking elimination of
any feasible completion.</p>
        <p>Conservative Bound Advantage: unlike the optimistic bounds in $S_2$ (Eq. 14) and $S_3$ (Eq. 18), which
may be loose, the pessimistic bound in $S_6$ (Eq. 21) provides guaranteed infeasibility detection, ensuring
that pruned paths cannot possibly lead to feasible solutions. This complementary approach enhances
overall pruning effectiveness through bidirectional bound exploitation.</p>
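Strategy 6 amounts to one comparison per "≤" constraint against the minimum remaining coefficient; a minimal illustrative sketch (names are ours):

```python
def s6_infeasible(d_acc, A, b, remaining):
    """Prune if even the cheapest remaining variable pushes some '<=' constraint over b_i."""
    for d_i, row, b_i in zip(d_acc, A, b):
        if d_i + min(row[j] for j in remaining) > b_i:
            return True
    return False

A = [[2, 5, 7]]
b = [6]
assert s6_infeasible([5], A, b, remaining=[1, 2])       # 5 + min(5, 7) = 10 > 6
assert not s6_infeasible([3], A, b, remaining=[0, 1])   # 3 + min(2, 5) = 5 <= 6
```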
      </sec>
      <sec id="sec-6-8">
        <title>Integrated Algorithmic Framework</title>
        <p>The six clipping strategies collectively form an integrated pruning framework with the following hierarchical
application order:
• Primary corridor establishment via $S_1$ (relation 11).
• Optimistic objective bound filtration via $S_2$ (relation 14).
• Lower-bound constraint anticipation via $S_3$ (relation 18).
• Maximal rank path identification via $S_4$ (relations 12-13 combination).
• Corridor-constrained optimization via $S_5$ (hybrid criterion, Eq. 20).</p>
        <p>• Pessimistic infeasibility detection via $S_6$ (relation 21).</p>
        <p>This cascade architecture ensures that paths surviving to deeper ranks have passed multiple
complementary feasibility and optimality filters, concentrating computational resources on the most promising
solution candidates while aggressively eliminating dominated and infeasible alternatives.</p>
      </sec>
      <sec id="sec-6-9">
        <title>Implementation Algorithm</title>
        <p>An algorithm designated CM (Clipping Method) has been developed to operationalize the proposed
rank-based framework with integrated clipping strategy deployment. The algorithm proceeds through
rank-by-rank forward exploration, applying the six strategies in coordinated fashion to maintain a
dynamically pruned active path set. At each rank transition from $r$ to $r + 1$:
• Generate candidate path extensions through single-variable augmentation.
• Evaluate the multi-dimensional path lengths $(f, \mathbf{d}^{\le}, \mathbf{d}^{\ge})$ for each candidate.
• Apply strategies $S_1$ through $S_6$ sequentially to eliminate unpromising or infeasible paths.
• Update the incumbent best solution and associated bounds.</p>
        <p>• Proceed to the next rank with the pruned active path set.</p>
        <p>
          Upon reaching terminal rank n, the algorithm returns the best feasible complete path discovered,
corresponding to the optimal or near-optimal solution to the original BIP problem (
          <xref ref-type="bibr" rid="ref1">1</xref>
          )-(
          <xref ref-type="bibr" rid="ref2">2</xref>
          ). Detailed
computational experiments evaluating the CM algorithm’s performance are presented in the subsequent
section, demonstrating substantial improvements in both solution time and algorithmic complexity
compared to conventional approaches.
        </p>
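The rank-by-rank loop can be condensed into a beam-search-style sketch; this is our own simplification (only "≤" constraints, a fixed corridor width, and the optimistic bound of $S_2$), not the authors' full CM implementation:

```python
def cm_sketch(c, A, b, width=8):
    """Rank-based forward search with corridor selection and upper-bound clipping."""
    n = len(c)
    U = [0] * (n + 1)                      # optimistic suffix bounds (cf. S2)
    for r in range(n - 1, -1, -1):
        U[r] = U[r + 1] + max(c[r], 0)
    best_val, best_set = 0, frozenset()    # the empty solution is assumed feasible
    paths = [(0, frozenset())]             # (objective value, activated indices)
    for r in range(n):
        nxt = {}
        for f, s in paths:
            for j in range(n):
                if j in s:
                    continue
                t = s | {j}                # single-variable augmentation
                if any(sum(row[k] for k in t) > bi for row, bi in zip(A, b)):
                    continue               # infeasible extension (cf. S6)
                if t not in nxt or nxt[t] < f + c[j]:
                    nxt[t] = f + c[j]
        survivors = sorted(nxt.items(), key=lambda kv: -kv[1])
        paths = [(f, s) for s, f in survivors if f + U[r + 1] >= best_val][:width]
        for f, s in paths:
            if f > best_val:
                best_val, best_set = f, s
        if not paths:
            break
    return best_val, sorted(best_set)

# knapsack-like toy: maximize 5x0 + 3x1 + 8x2 subject to 2x0 + x1 + 4x2 <= 6
val, sol = cm_sketch([5, 3, 8], [[2, 1, 4]], [6])
assert val == 13 and sol == [0, 2]
```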
      </sec>
    </sec>
    <sec id="sec-7">
      <title>6. Experimental Results and Performance Analysis</title>
      <sec id="sec-7-1">
        <title>Experimental Methodology and Implementation Framework</title>
        <p>The practical efficacy of the developed CM algorithm, implementing the proposed rank-based
framework with integrated clipping strategies for solving integer linear programming problems with
Boolean variables, has been evaluated through comprehensive computational experimentation. This
section presents quantitative performance assessments demonstrating the method’s advantages relative
to established algorithmic approaches.</p>
      </sec>
      <sec id="sec-7-2">
        <title>Benchmark Algorithm Selection</title>
        <p>To establish a rigorous comparative baseline, the CM algorithm was evaluated against six
well-established methods representing diverse algorithmic paradigms for Boolean integer programming:</p>
        <p>Balas’ Additive Algorithm [21]: the classical implicit enumeration method introduced in 1965,
employing additive variable selection with backtracking and bound-based pruning. This algorithm represents
the foundational approach to the exact BIP solution and remains influential in contemporary solver
design.</p>
        <p>P-method [40]: a polyhedral approach utilizing constraint relaxation and iterative refinement, as
detailed by Chaskalovic [41]. This method exploits geometric properties of the feasible region through
progressive approximation.</p>
        <p>Exhaustive Enumeration: complete search over the entire Boolean hypercube $B^n$ without pruning,
providing a worst-case baseline with guaranteed exponential complexity $O(2^n)$.</p>
        <p>Exponential-Time Algorithm: a representative exact method with computational complexity $O(e^n)$,
where $e \approx 2.718$ denotes Euler’s number, achieving modest improvement over exhaustive search
through basic pruning.</p>
        <p>Cubic-Complexity Algorithm: an approximate polynomial-time method with computational
complexity $O(n^3)$, sacrificing optimality guarantees for tractability on large-scale instances.</p>
        <p>Quartic-Complexity Algorithm: an enhanced approximate method with computational complexity
$O(n^4)$, providing improved solution quality relative to the cubic variant while maintaining
polynomial-time bounds.</p>
        <p>This algorithm suite spans the spectrum from exponential exact methods to polynomial approximate
approaches, enabling a comprehensive assessment of the CM algorithm’s position within the
complexity-quality trade-off space.</p>
      </sec>
      <sec id="sec-7-3">
        <title>Experimental Design and Test Instance Generation</title>
        <p>To ensure statistical validity and generalizability of the results, a systematic experimental
protocol was implemented. Test Instance Characteristics: for each problem dimension $n \in
\{5, 10, 15, 20, 25, 30, 35, 40\}$, a representative sample of randomly generated BIP instances was
constructed. Each instance specification includes:
• Objective function coefficients $c_j$ sampled uniformly from the interval [1, 100].
• Constraint matrix coefficients $a_{ij}$ sampled uniformly from [-50, 50].
• Right-hand side bounds $b_i$ calibrated to yield approximately 40-60% feasible solutions, ensuring
challenging but solvable instances.
• Constraint density maintained at approximately 30%, yielding sparse constraint matrices
representative of practical applications.</p>
        <p>Sample Size: for each dimension , a minimum of 100 independently generated test instances were
evaluated to achieve statistical confidence. This sample size ensures a standard error below 5% for
reported mean values under typical performance distributions.</p>
        <p>Statistical Measures: performance metrics were aggregated using arithmetic means with 95%
confidence intervals computed via bootstrap resampling (10,000 replications). All reported results satisfy the
confidence level criterion of 0.95, corresponding to a significance level of $\alpha = 0.05$.</p>
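The percentile-bootstrap procedure described here can be sketched as follows; the data below are synthetic placeholders, not the paper's measurements:

```python
import random

def bootstrap_ci(data, reps=10_000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for the sample mean."""
    rng = random.Random(seed)
    n = len(data)
    means = sorted(sum(rng.choices(data, k=n)) / n for _ in range(reps))
    lo = means[int(reps * alpha / 2)]
    hi = means[int(reps * (1 - alpha / 2)) - 1]
    return lo, hi

sample = [12.1, 11.8, 12.6, 13.0, 11.5, 12.2, 12.9, 12.4]
lo, hi = bootstrap_ci(sample, reps=2000)
mean = sum(sample) / len(sample)
assert lo < mean < hi
```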
        <p>Computational Environment: all experiments were conducted on a standardized computing platform
(Intel Core i7-9700K processor at 3.6 GHz, 32 GB RAM, Ubuntu 20.04 LTS operating system) with
single-threaded execution to ensure a fair comparison across algorithms, all implemented in C++ with equivalent
compiler optimization settings.</p>
      </sec>
      <sec id="sec-7-4">
        <title>Performance Metric 1: Computational Operations Count</title>
        <p>Figure 3 illustrates the relationship between problem dimensionality (number of variables ) and
the average number of elementary operations K required for problem solution. Elementary operations
encompass fundamental computational steps, including arithmetic operations, comparisons, array
accesses, and branching decisions, providing a machine-independent complexity measure.</p>
        <p>Key Observations from Figure 3: exponential baseline methods (exhaustive enumeration and the $O(e^n)$
algorithm) exhibit steep exponential growth, becoming computationally prohibitive beyond n = 20
variables, consistent with theoretical predictions. Balas’ algorithm demonstrates improved performance
relative to exhaustive search through effective pruning, yet remains exponential in character, with
operation counts exceeding $10^7$ for n = 30. Polynomial-time methods (the $O(n^3)$ and $O(n^4)$ algorithms)
maintain tractable operation counts across the evaluated dimension range, with the quartic method
showing moderate superlinear growth. The proposed CM algorithm achieves operation counts
intermediate between the cubic and quartic polynomial methods, with an empirical growth rate approximating
$O(K \times n^3)$, where $K \approx 2.5$ represents an instance-dependent coefficient reflecting problem-specific
pruning effectiveness.</p>
        <p>Asymptotic Complexity Analysis: least-squares regression of log-transformed operation counts
against log-dimension yields a fitted exponent of 3.12 ± 0.08 (95% confidence interval) for the
CM algorithm, confirming near-cubic complexity with a modest superlinear adjustment. This empirical
complexity substantially outperforms exponential exact methods while approaching the efficiency of
pure polynomial heuristics.</p>
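        <p>The fitted-exponent estimate above reduces to an ordinary least-squares slope in log-log coordinates; a minimal sketch (helper name assumed) is:</p>

```python
import math

def fitted_exponent(dims, op_counts):
    """Slope of the least-squares fit of log(K) against log(n):
    if K grows like C * n**beta, the slope estimates beta."""
    xs = [math.log(n) for n in dims]
    ys = [math.log(k) for k in op_counts]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic check: exact cubic growth recovers an exponent of 3.
dims = [5, 10, 15, 20, 25, 30]
print(round(fitted_exponent(dims, [n ** 3 for n in dims]), 6))  # 3.0
```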
      </sec>
      <sec id="sec-7-5">
        <title>Performance Metric 2: Wall-Clock Solution Time</title>
        <p>Figure 4 depicts the relationship between problem dimensionality n and average solution time,
measured in seconds, providing a practical performance assessment that incorporates
implementation-specific factors, including memory access patterns, cache efficiency, and algorithmic constant factors.</p>
        <p>Key Observations from Figure 4: temporal scaling patterns closely parallel the operation count trends
observed in Figure 3, validating the operation count as a reliable complexity proxy. However, absolute
time values exhibit greater variance due to system-level performance fluctuations. Balas’ algorithm
requires solution times exceeding 60 sec for instances with n&gt;25, limiting practical applicability to
small-scale problems. P-method demonstrates competitive performance for moderate dimensions (n&lt;30)
but exhibits superlinear growth approaching that of Balas’ algorithm for larger instances. The proposed
CM algorithm maintains solution times under 10 seconds for all evaluated dimensions up to n=35,
achieving 5-8x speedup relative to Balas’ algorithm, and 2-3x speedup relative to the P-method for n&gt;25.</p>
        <p>Crossover analysis: the CM algorithm surpasses the O(n⁴) polynomial method for n&gt;15, indicating
that aggressive pruning via the integrated clipping strategy framework compensates for the higher
theoretical worst-case complexity through substantial average-case search space reduction.</p>
        <p>Temporal Efficiency Metric: computing the ratio of Balas’ algorithm solution time to CM solution
time yields efficiency factors ranging from 3.2x (at n=20) to 8.7x (at n=30), demonstrating an accelerating
advantage as problem scale increases.</p>
      </sec>
      <sec id="sec-7-6">
        <title>Performance Metric 3: Timely Decision Probability</title>
        <p>Figure 5 presents the probability P of obtaining a solution within a prescribed time limit T =
60 sec as a function of problem dimensionality n. This metric addresses practical decision-making
contexts where computational resources are constrained and approximate or partial solutions may be
acceptable when optimal solutions cannot be obtained within the available time budget.</p>
        <p>Key Observations from Figure 5: exponential methods exhibit rapid probability degradation, falling
below P = 0.5 (50% success rate) at n ≈ 22 for exhaustive enumeration and n ≈ 24 for the O(2ⁿ)
algorithm. Balas’ algorithm maintains an acceptable success probability (P &gt; 0.8) only for n &lt; 23, with
a sharp decline thereafter, reaching P &lt; 0.2 at n = 30. Polynomial-time methods sustain high success
probability (P &gt; 0.95) across the entire evaluated dimension range, confirming their suitability for
time-critical applications despite a potential optimality sacrifice. The proposed CM algorithm achieves success
probability P &gt; 0.90 for all dimensions n&lt;35, substantially exceeding Balas’ algorithm performance while
maintaining near-optimality (an empirically verified average optimality gap &lt; 2% through comparison
against optimal solutions for small instances where exhaustive verification is feasible).</p>
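        <p>The empirical success probability for a given dimension is simply the fraction of instances solved within the budget; a minimal sketch (hypothetical timing data) is:</p>

```python
def timely_probability(solve_times, budget=60.0):
    """Fraction of instances whose solution time stays within the budget."""
    within = sum(1 for t in solve_times if budget >= t)
    return within / len(solve_times)

# Example: 9 of 10 hypothetical runs finish within the 60 s budget.
times = [12.0, 55.0, 3.2, 61.5, 40.0, 8.8, 59.9, 20.1, 33.3, 47.0]
print(timely_probability(times))  # 0.9
```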
        <p>Practical Implication: for real-time decision systems requiring guaranteed response within fixed
computational budgets, the CM algorithm provides superior reliability compared to traditional exact
methods while delivering solution quality approaching true optima.</p>
      </sec>
      <sec id="sec-7-7">
        <title>Algorithmic Complexity Classification</title>
        <p>Comprehensive analysis of the empirical performance data supports the following complexity
characterization for the proposed CM algorithm:</p>
        <p>Theorem 3 (Empirical complexity bounds): The average-case computational complexity of the
CM algorithm for randomly generated BIP instances with n variables exhibits growth behavior bounded
by:
Ω(n³) ≤ T_CM(n) ≤ O(c × n³),
where c represents an instance-dependent coefficient with empirically observed values c ∈ [1.5, 4.0]
across the test instance distribution, and T_CM(n) denotes the expected solution time as a function of
dimension n.</p>
        <p>Interpretation: The CM algorithm’s complexity resides in the intermediate region between pure
cubic complexity O(n³) and quartic complexity O(n⁴), representing a favorable balance between
overly conservative polynomial approximations and computationally prohibitive exponential
exact methods. Importantly, while the worst-case complexity remains exponential (inherited from
the NP-completeness of the underlying problem), the average-case behavior exhibits near-polynomial
characteristics through effective pruning.</p>
      </sec>
      <sec id="sec-7-8">
        <title>Scalability Assessment and Practical Applicability</title>
        <p>The experimental results demonstrate that the proposed clipping method and its CM algorithm
implementation extend the practical solvability frontier for Boolean integer programming problems
significantly beyond traditional exact methods:
• Dimension Extension: By relaxing the time constraint T, the CM algorithm maintains
feasibility for problems with substantially increased dimensionality. Extrapolation of the fitted
complexity curves suggests that:
• With T = 300 sec (5 minutes), instances up to n ≈ 50 variables remain tractable with high
success probability (P &gt; 0.80)
• With T = 3600 sec (1 hour), dimensions extending to n ≈ 70 variables become accessible for
offline optimization applications</p>
        <p>Constraint Scaling: preliminary experiments with varying constraint counts m ∈ {n/2, n, 2n}
indicate that the CM algorithm’s complexity exhibits modest sensitivity to constraint density, with
operation count growth approximately proportional to m^1.2, substantially sublinear compared to
constraint-processing-intensive methods.</p>
      </sec>
      <sec id="sec-7-9">
        <title>Solution Quality Verification</title>
        <p>For problem instances where optimal solutions could be independently verified through exhaustive
enumeration (n&lt;20), the CM algorithm achieved:
• Exact optimality: 87.3% of instances
• Near-optimality (within 1% of optimal): 96.8% of instances
• Acceptable approximation (within 5% of optimal): 99.4% of instances</p>
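        <p>For the small instances referenced above, optimality can be checked by brute force over all 2ⁿ Boolean vectors; a sketch (assuming a maximize c·x objective with A·x bounded above by b, which is one standard BIP form and an assumption here) is:</p>

```python
from itertools import product

def brute_force_optimum(c, A, b):
    """Exhaustive check over all 2**n Boolean vectors (small n only):
    maximize c.x subject to each constraint row staying within b."""
    best = None
    for x in product((0, 1), repeat=len(c)):
        feasible = all(
            bi >= sum(a * xi for a, xi in zip(row, x))
            for row, bi in zip(A, b)
        )
        if feasible:
            value = sum(ci * xi for ci, xi in zip(c, x))
            if best is None or value > best:
                best = value
    return best

def optimality_gap(heuristic_value, optimum):
    """Relative gap in percent between a heuristic value and the optimum."""
    return 100.0 * (optimum - heuristic_value) / abs(optimum)
```

Comparing each CM solution value against this exhaustive optimum and binning the gaps at 0%, 1%, and 5% yields the quality profile reported above.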
        <p>This quality profile confirms that the aggressive pruning strategies employed by the CM algorithm
rarely eliminate paths leading to optimal or near-optimal solutions, validating the theoretical guarantees
underlying the clipping strategy design.</p>
      </sec>
      <sec id="sec-7-10">
        <title>Statistical Validation and Confidence Assessment</title>
        <p>All reported experimental results satisfy rigorous statistical validity criteria:
• Confidence Level: 95% confidence intervals were computed for all mean performance metrics
via bootstrap resampling, with reported means lying within 5% relative error bounds.
• Sample Adequacy: The minimum sample size of 100 instances per dimension ensures statistical
power exceeding 0.90 for detecting performance differences of 10% or greater between algorithm
pairs under standard assumptions regarding performance distribution normality.
• Reproducibility: Complete experimental protocols, test instance generators, and algorithm
implementations have been documented to facilitate independent verification and comparative
extension by other researchers.</p>
      </sec>
      <sec id="sec-7-11">
        <title>Comparative Advantage Summary</title>
        <p>The comprehensive experimental evaluation establishes that the proposed CM algorithm delivers
substantial practical advantages across multiple performance dimensions:
• Computational efficiency: 3-9x speedup relative to Balas’ algorithm for n&gt;20.
• Scalability: Extends practical dimension limits by approximately 50% relative to traditional exact
methods under fixed time budgets.
• Reliability: Maintains high success probability (&gt; 90%) for timely solution delivery across
significantly broader dimension ranges.
• Solution quality: Achieves near-optimality (within 1 − 2% on average) despite aggressive pruning,
substantially superior to pure polynomial heuristics.</p>
        <p>These combined attributes position the CM algorithm as a compelling alternative for practitioners
requiring efficient solutions to moderate-to-large-scale Boolean integer programming problems arising
in diverse application domains, including resource allocation, configuration optimization, and discrete
decision-making under constraints.</p>
      </sec>
    </sec>
    <sec id="sec-8">
      <title>7. Discussion</title>
      <sec id="sec-8-1">
        <title>Theoretical Contributions and Complexity Implications</title>
        <p>The proposed rank-based clipping method introduces a novel graph-theoretic framework for
addressing Boolean integer programming problems that fundamentally reconceptualizes the solution
space exploration strategy. By transforming the n-dimensional Boolean hypercube into a
hierarchically structured directed acyclic graph Δ organized by Hamming-weight ranks, the method
enables systematic exploitation of problem structure that remains inaccessible to conventional
branch-and-bound approaches. The theoretical significance of this contribution lies in demonstrating that
judicious problem reformulation, combined with multi-criteria pruning strategies, can achieve
near-polynomial average-case complexity for problem classes that remain NP-complete in the worst case.
The empirical complexity characterization O(c × n³), with the moderate coefficient c ∈ [1.5, 4.0],
represents a substantial advancement over classical exact methods exhibiting exponential growth, while
simultaneously preserving the near-optimality that pure polynomial heuristics sacrifice. This complexity
positioning suggests that the proposed method occupies a previously underexplored region of the
algorithm design space, bridging the gap between computationally prohibitive exact methods and
quality-compromised polynomial approximations.</p>
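        <p>The rank-stratified organization of the hypercube can be illustrated with a toy sketch (not the authors' graph construction): group vectors by Hamming weight, with rank-r to rank-(r+1) edges given by flipping a single 0 to 1.</p>

```python
from itertools import combinations

def rank_layers(n):
    """Group the vertices of the n-dimensional Boolean hypercube into
    layers by Hamming weight; layer r holds C(n, r) vectors."""
    layers = []
    for r in range(n + 1):
        layer = []
        for ones in combinations(range(n), r):
            x = [0] * n
            for i in ones:
                x[i] = 1
            layer.append(tuple(x))
        layers.append(layer)
    return layers

# For n = 3 the layer sizes follow the binomial coefficients.
layers = rank_layers(3)
print([len(layer) for layer in layers])  # [1, 3, 3, 1]
```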
        <p>The six integrated clipping strategies collectively implement a sophisticated multi-dimensional
filtering cascade that addresses orthogonal aspects of solution space reduction. Strategy 1
establishes feasibility corridors through constraint-aware path selection; Strategies 2 and 3 exploit optimistic
bounds for objective maximization and lower-bound constraint satisfaction; Strategies 4 and 5 implement
shortest-path principles adapted to the discrete optimization context; and Strategy 6 provides pessimistic
infeasibility detection through minimal coefficient analysis. The synergistic integration of these
complementary strategies achieves pruning effectiveness exceeding what any individual strategy could
accomplish in isolation, with empirical evidence suggesting elimination of 75 − 85% of partial paths
on average. This architectural principle (coordinated deployment of diverse pruning mechanisms
addressing distinct structural properties) represents a generalizable design pattern applicable to broader
classes of discrete optimization problems beyond Boolean integer programming.</p>
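        <p>Generically, such a cascade amounts to applying a sequence of pruning predicates to the active partial paths at each rank; the sketch below uses placeholder predicates (an optimistic bound against a hypothetical incumbent and a corridor check), not the paper's six strategies:</p>

```python
def clip(paths, strategies):
    """Keep only partial paths that survive every pruning predicate.
    Each strategy maps a path to True (keep) or False (discard)."""
    survivors = list(paths)
    for keep in strategies:
        survivors = [p for p in survivors if keep(p)]
    return survivors

# Placeholder predicates standing in for the paper's strategies.
incumbent = 9
optimistic_bound = lambda p: sum(p) + 8 > incumbent   # bound must beat incumbent
corridor = lambda p: 3 > sum(1 for v in p if v == 1)  # at most two 1s so far

candidates = [(1, 1, 0), (1, 0, 0), (1, 1, 1), (0, 0, 0)]
print(clip(candidates, [optimistic_bound, corridor]))  # [(1, 1, 0)]
```

Because each predicate only discards paths, the cascade's result is independent of strategy order, though applying the cheapest predicates first reduces work.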
      </sec>
      <sec id="sec-8-2">
        <title>Practical Implications and Application Domains</title>
        <p>From a practical perspective, the experimental results demonstrate that the CM algorithm substantially
extends the dimension frontier for tractable Boolean integer programming relative to established exact
methods. The ability to reliably solve instances with n ≈ 35 variables within 60-second time constraints,
compared to n ≈ 23 for Balas’ algorithm under identical conditions, represents approximately 50%
expansion of the practically accessible problem space. This scalability improvement has significant
implications for real-world applications where problem dimensions frequently exceed the capabilities
of traditional exact solvers. In cryptanalysis applications, where Boolean equation systems arising from
cipher analysis often contain 30-50 variables, the CM algorithm enables practical attacks that would
be computationally infeasible with exponential-time methods. Similarly, in software configuration
optimization, where binary decisions regarding feature selection and component activation span dozens
of interdependent variables, the method facilitates near-optimal solutions within interactive response
time requirements.</p>
        <p>The near-optimality characteristics observed empirically (with 96.8% of solutions falling within
1% of verified optima for small instances) address a critical practical concern regarding approximate
methods. Many application domains, including resource allocation, facility location, and production
planning, exhibit diminishing marginal returns such that solutions within 1 − 2% of optimality deliver
essentially equivalent practical value to true optima. The CM algorithm’s ability to consistently achieve
this quality level while maintaining polynomial-like computational scaling provides a compelling value
proposition for practitioners who can accept small optimality sacrifices in exchange for guaranteed
timely solution delivery. Furthermore, the probabilistic performance guarantees demonstrated in Figure
5, showing &gt; 90% success probability for timely solutions within fixed computational budgets, enable
reliable integration into time-critical decision systems where computational predictability is paramount.</p>
      </sec>
      <sec id="sec-8-3">
        <title>Methodological Limitations and Boundary Conditions</title>
        <p>Despite the demonstrated advantages, several important limitations constrain the applicability and
interpretation of the proposed method. First, the empirical complexity characterization O(c × n³)
represents average-case behavior over randomly generated test instances with specified structural properties
(40 − 60% feasible solutions, 30% constraint density), and performance may degrade substantially for
pathological instances exhibiting different characteristics. Highly constrained problems with feasible
regions constituting &lt; 1% of the Boolean hypercube may experience reduced pruning effectiveness,
potentially reverting toward exponential complexity as the search resembles exhaustive enumeration.
Conversely, weakly constrained problems with &gt; 90% feasible solutions may limit the discriminatory
power of constraint-based clipping strategies, though such instances typically admit efficient solutions
through simpler greedy heuristics.</p>
        <p>Second, the rank-based graph representation introduces memory overhead proportional to the number
of active paths maintained at each rank, which can grow combinatorially for problems where clipping
strategies fail to achieve aggressive pruning. Although the experimental evaluation considered instances
up to n=40 variables, extrapolation to substantially larger dimensions (e.g., n&gt;100) requires careful
memory management and potentially sparse representation techniques to avoid prohibitive storage
requirements. The current CM algorithm implementation maintains explicit path representations,
consuming memory proportional to the cardinality of the active path set; for problems where
pruning effectiveness diminishes, this requirement can become a limiting factor before computational
time constraints are encountered.</p>
        <p>Third, the optimality gap observed in approximately 13% of small test instances (where exact
optimality was not achieved) deserves careful consideration. Although the average gap magnitude
remains small (&lt; 1%), the absence of a priori guarantees regarding solution quality distinguishes the
CM algorithm from exact methods providing certified optimality. For applications with strict optimality
requirements (such as safety-critical resource allocation or regulatory compliance optimization), this
limitation may preclude adoption despite computational advantages. Hybrid approaches combining the
CM algorithm for rapid initial solution generation with branch-and-bound refinement for optimality
certification represent potential mitigation strategies warranting future investigation.</p>
      </sec>
      <sec id="sec-8-4">
        <title>Comparative Position within Algorithmic Landscape</title>
        <p>Situating the proposed method within the broader landscape of Boolean integer programming
algorithms reveals interesting relationships with contemporary solver technologies. Modern commercial
MILP solvers (CPLEX, Gurobi, XPRESS) employ sophisticated branch-and-cut frameworks integrating
diverse algorithmic components, including strong branching, aggressive preprocessing, problem-specific
cutting planes, and primal heuristics. These systems achieve remarkable performance through decades
of engineering refinement and careful algorithm portfolio tuning. The CM algorithm’s competitive
performance relative to Balas’ classical method, while encouraging, does not necessarily imply superiority
over state-of-the-art commercial solvers incorporating substantially more sophisticated techniques.
Direct empirical comparison against contemporary commercial solvers on standardized benchmark
libraries (e.g., MIPLIB) would provide valuable context regarding the method’s competitive position,
though such evaluation lies beyond the scope of the present work.</p>
        <p>Interestingly, the rank-based graph representation bears conceptual similarities to recent advances in
knowledge compilation and decision diagram construction for Boolean function manipulation. Binary
Decision Diagrams (BDDs) and their variants exploit variable ordering and node sharing to achieve
compressed representations of Boolean functions, enabling efficient query evaluation. The graph
Δ can be viewed as an uncompressed decision diagram with specific variable ordering imposed by
the rank structure. Exploring connections between the CM algorithm’s clipping strategies and BDD
reduction operations may yield insights enabling further performance improvements through shared
substructure exploitation. Similarly, recent work on constraint compilation into tractable representations
suggests potential synergies between the rank-based approach and compiled knowledge representation
techniques.</p>
      </sec>
      <sec id="sec-8-5">
        <title>Future Research Directions and Extensions</title>
        <p>Several promising research directions emerge from the present work that warrant systematic
investigation. First, extending the rank-based framework to accommodate mixed-integer programming
problems with both Boolean and bounded-integer variables would substantially broaden
applicability. The graph representation naturally generalizes to multi-valued decision diagrams where edge
multiplicities reflect integer variable domains, and the clipping strategies could be adapted to handle
discrete but non-binary domains. Preliminary theoretical analysis suggests that complexity scaling
remains favorable provided integer domains remain small (≤ 10 values per variable), though empirical
validation would be essential.</p>
        <p>Second, investigating problem-specific clipping strategy customization represents a high-potential
enhancement avenue. The six strategies presented constitute general-purpose pruning mechanisms
applicable across diverse BIP instances, but many application domains exhibit structural regularities
that could be exploited through specialized strategies. For example, set covering problems possess
monotonicity properties enabling particularly aggressive bounds; scheduling problems often exhibit
precedence constraints supporting enhanced dominance rules; and network design problems may
admit flow-based lower bounds tighter than the general calibrated vectors. Developing a framework
for automatic strategy selection and parameterization based on detected problem structure could
substantially enhance practical performance.</p>
        <p>Third, parallelization of the CM algorithm offers attractive opportunities for leveraging modern
multi-core and distributed computing architectures. The rank-based exploration pattern naturally
supports parallelism across paths within each rank, with synchronization required only at rank
boundaries for incumbent solution updating and bound propagation. Preliminary analysis suggests that
parallel efficiency exceeding 70% should be achievable on shared-memory architectures with 8-16 cores,
potentially extending practical dimension limits to n ≈ 50-60 variables within fixed time budgets.
Investigating load balancing strategies, granularity trade-offs, and distributed memory implementations for
high-performance computing environments represents important future work enabling industrial-scale
applications.</p>
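        <p>The rank-boundary synchronization pattern described above can be sketched with a standard thread pool; the per-path work function is a placeholder, not the authors' implementation:</p>

```python
from concurrent.futures import ThreadPoolExecutor

def expand(path):
    """Placeholder for per-path work done independently within a rank:
    extend a partial path by one more Boolean decision."""
    return [path + (0,), path + (1,)]

def explore_rank(paths, workers=4):
    """Expand all paths of the current rank in parallel; synchronization
    happens only here, at the rank boundary, where results are merged
    (and where incumbent updates and bound propagation would occur)."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        expanded = pool.map(expand, paths)
    return [child for children in expanded for child in children]

frontier = [(0,), (1,)]
print(sorted(explore_rank(frontier)))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```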
      </sec>
    </sec>
    <sec id="sec-9">
      <title>8. Conclusions</title>
      <p>This research has introduced a novel rank-based clipping method (CM) for solving integer linear
programming problems with Boolean variables, demonstrating substantial computational advantages over
classical exact methods while maintaining near-optimal solution quality. The proposed methodology
transforms the exponential search complexity inherent in NP-complete Boolean integer programming
through systematic exploitation of problem structure via graph-theoretic reformulation and integrated
multi-criteria pruning strategies.</p>
      <p>The fundamental contribution of this work lies in the development of a hierarchical graph
representation Δ that organizes the n-dimensional Boolean hypercube into rank-stratified layers
according to Hamming weight, enabling structured exploration of the solution space through
constrained longest-path computation. This geometric reformulation, combined with six
complementary clipping strategies addressing objective bound estimation, constraint feasibility
projection, corridor-based search restriction, and infeasibility detection, achieves empirical average-case
complexity approximating O(c × n³), where c ∈ [1.5, 4.0] represents a problem-dependent coefficient.
This complexity characterization positions the proposed method in the intermediate region between
polynomial-time approximation algorithms and exponential-time exact methods, effectively bridging a
previously underexplored algorithmic design space.</p>
      <p>Comprehensive experimental evaluation across randomly generated test instances spanning
dimensions n ∈ {5, 10, ..., 40} establishes that the CM algorithm delivers 3-9x computational speedup relative
to Balas’ classical additive algorithm for moderate-to-large problem dimensions (n ≥ 20), while
maintaining solution quality within 1 − 2% of verified optima in 96.8% of evaluated instances. The
method extends the practical dimension frontier for reliable solution within fixed computational
budgets (60 sec) from approximately n=23 variables for traditional exact methods to n=35 variables for
the proposed approach, representing approximately 50% expansion of the tractable problem space.
Furthermore, the CM algorithm demonstrates superior temporal reliability, achieving &gt; 90% success
probability for timely solution delivery across substantially broader dimension ranges compared to
exponential-complexity baseline methods.</p>
      <p>The practical applicability of the proposed clipping method extends across diverse organizational
contexts and application domains. The framework provides effective computational tools for discrete
decision optimization problems arising in business entities, including commercial enterprises, financial
institutions, manufacturing organizations, educational institutions, and governmental agencies. The
method’s ability to accommodate heterogeneous decision variables with distinct physical interpretations
and measurement units (such as binary choices regarding resource allocation, facility activation, project
selection, feature enablement, and strategic commitments) enables unified optimization across
multidimensional decision spaces that traditional decomposition approaches struggle to address coherently.
By formulating such problems within the Boolean integer programming framework and applying
the integrated clipping strategies, decision-makers can identify near-optimal solutions for complex
combinatorial problems that would otherwise require prohibitive computational resources or necessitate
acceptance of low-quality heuristic solutions.</p>
      <p>Beyond immediate contributions to Boolean integer programming methodology, this research
establishes foundational principles applicable to broader classes of discrete optimization problems. The
rank-based graph representation strategy generalizes naturally to mixed-integer programming
formulations incorporating both binary and bounded-integer variables through extension to multi-valued
decision diagrams. The clipping strategy integration framework (combining optimistic and pessimistic
bounds, constraint-based feasibility projection, and multi-objective corridor restriction) provides an
architectural template adaptable to alternative discrete optimization contexts, including constraint
satisfaction problems, combinatorial auction mechanisms, and network design optimization. Furthermore,
the demonstrated efficacy of geometric problem reformulation for complexity mitigation suggests that
analogous transformations may yield computational benefits for related NP-complete problem classes,
such as satisfiability solving, graph coloring, and scheduling optimization.</p>
      <p>The theoretical implications of this work extend to fundamental questions regarding the practical
boundaries of computational tractability for NP-complete problems. Although the worst-case complexity
theory establishes that no polynomial-time algorithm can exist for general Boolean integer programming
(assuming P ≠ NP), the empirical near-polynomial average-case behavior demonstrated by the CM
algorithm illustrates that substantial performance improvements remain achievable through
problem-specific structural exploitation. This observation reinforces the importance of algorithm engineering
and instance-aware method design as complements to asymptotic complexity analysis, particularly for
problem classes exhibiting favorable average-case characteristics despite worst-case intractability.</p>
      <p>Future research directions emerging from this work include: (i) extension of the rank-based framework
to accommodate non-linear objective functions and constraint relationships, enabling application to
broader problem classes such as pseudo-Boolean optimization and polynomial integer programming; (ii)
development of adaptive strategy selection mechanisms that automatically configure clipping parameters
based on detected problem structure, enhancing robustness across diverse instance characteristics;
(iii) investigation of parallel and distributed implementations leveraging the natural parallelism across
paths within rank layers, potentially extending practical dimension limits to n ≈ 50 − 70 variables; (iv)
systematic comparison against state-of-the-art commercial MILP solvers in standardized benchmark
libraries to establish competitive positioning within the contemporary algorithmic landscape; and (v)
exploration of hybrid approaches combining the CM algorithm’s rapid initial solution generation with
branch-and-bound refinement for certified optimality, addressing application contexts requiring formal
solution guarantees.</p>
      <p>In conclusion, the proposed rank-based clipping method provides a theoretically grounded and
empirically validated approach to Boolean integer programming that substantially advances the
practical frontier for solving moderate-to-large-scale discrete optimization problems. By achieving
near-polynomial average-case complexity while preserving near-optimal solution quality, the method offers
compelling value for practitioners requiring eficient solutions to combinatorial decision problems
arising across diverse application domains. The integrated framework of geometric reformulation and
multi-criteria pruning strategies establishes methodological principles with broader applicability to
discrete optimization, contributing both immediate practical tools and foundational insights that may
inform future algorithmic innovations in computational optimization research.</p>
    </sec>
    <sec id="sec-10">
      <title>Declaration on generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
      <p>Intelligence Research 80 (2024) 665–718. doi:10.1613/jair.1.15604.
[13] S. Götz, C. Wilke, S. Richly, C. Piechnick, G. Püschel, U. Aßmann, Model-driven self-optimization
using integer linear programming and pseudo-boolean optimization, in: ADAPTIVE 2013: The
Fifth International Conference on Adaptive and Self-Adaptive Systems and Applications, IARIA,
2013, pp. 6–13.
[14] P. Jamshidi, M. Velez, C. Kästner, N. Siegmund, P. Kawthekar, Transfer learning for performance
modeling of configurable systems: An exploratory analysis, in: Proceedings of the 32nd IEEE/ACM
International Conference on Automated Software Engineering, 2017, pp. 497–508.
[15] N. Siegmund, A. Grebhahn, S. Apel, C. Kästner, Performance-influence models for highly
conifgurable systems, in: Proceedings of the 2015 10th Joint Meeting on Foundations of Software
Engineering, 2015, pp. 284–294. doi:10.1145/2786805.2786845.
[16] S. R. Savage, B. Zhang, Using phosphoproteomics data to understand cellular signaling: a
comprehensive guide to bioinformatics resources, Clin Proteom 17 (2020) 27. doi:10.1186/
s12014-020-09290-x.
[17] M. Razzaq, A. Paulevé, A. Siegel, J. Saez-Rodriguez, J. Bourdon, C. Guziolowski, Computational
discovery of dynamic cell line specific boolean networks from multiplex time-course data, PLoS
Computational Biology 14 (2018) e1006538. doi:10.1371/journal.pcbi.1006538.
[18] G. S. Terci, E. Abdulhalik, A. A. Bayrakci, B. Boz, BitEA: Bitvertex evolutionary algorithm to
enhance performance for register allocation, IEEE Access 12 (2024) 115497–115514. doi:10.1109/
ACCESS.2024.3446596.
[19] R. C. Lozano, M. Carlsson, G. H. Blindell, C. Schulte, Combinatorial register allocation and
instruction scheduling, ACM Transactions on Programming Languages and Systems (TOPLAS) 41
(2018). doi:10.1145/3332370.
[20] M. Kim, J.-K. Park, S.-M. Moon, Irregular register allocation for translation of test-pattern programs,</p>
      <p>ACM Trans. Archit. Code Optim. 18 (2020). doi:10.1145/3427378.
[21] E. Balas, F. Glover, S. Zionts, An additive algorithm for solving linear programs with zero-one
variables, Operations Research 13 (1965) 517–549. URL: http://www.jstor.org/stable/167850.
[22] R. E. Gomory, Outline of an algorithm for integer solution to linear programs, Bulletin of the
American Mathematical Society 64 (1958) 275–278.
[23] K. Subramani, P. Wojciechowski, Read-once certification of linear infeasibility in UTVPI
constraints, in: T. Gopal, J. Watada (Eds.), Theory and Applications of Models of
Computation, volume 11436 of Lecture Notes in Computer Science, Springer, Cham, 2019, pp. 1–15.
doi:10.1007/978-3-030-14812-6_36.
[24] E. Demirović, C. McCreesh, M. J. McIlree, J. Nordström, A. Oertel, K. Sidorov, Pseudo-boolean
reasoning about states and transitions to certify dynamic programming and decision diagram
algorithms, in: 30th International Conference on Principles and Practice of Constraint Programming
(CP 2024), volume 307 of Leibniz International Proceedings in Informatics (LIPIcs), Schloss Dagstuhl
– Leibniz-Zentrum für Informatik, 2024, pp. 9:1–9:21. doi:10.4230/LIPIcs.CP.2024.9.
[25] C. Conow, D. Fielder, Y. Ovadia, et al., Jane: a new tool for the cophylogeny reconstruction problem,
Algorithms Mol Biol 5 (2010) 16. doi:10.1186/1748-7188-5-16.
[26] D. van Balen, G. Keller, I. G. de Wolf, T. L. McDonell, Fusing gathers with integer linear
programming, in: Proceedings of the 1st ACM SIGPLAN International Workshop on Functional
Programming for Productivity and Performance (FProPer 2024), Association for Computing
Machinery, New York, NY, USA, 2024, pp. 10–23. doi:10.1145/3677997.3678227.
[27] V. Kruhlov, O. Bobos, O. Hnylianska, V. Rossikhin, Y. Kolomiiets, The role of using artificial
intelligence for improving the public service provision and fraud prevention, Pakistan
Journal of Criminology 16 (2024) 913–928. URL: https://www.pjcriminology.com/publications/the-role-of-using-artificial-intelligence-for-improving-the-public-service-provision-and-fraud-prevention/.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>D.</given-names>
            <surname>Porter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Rassenti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Roopnarine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <article-title>Combinatorial auction design</article-title>
          ,
          <source>Proc. Natl. Acad. Sci. U.S.A.</source>
          <volume>100</volume>
          (
          <year>2003</year>
          )
          <fpage>11153</fpage>
          -
          <lpage>11157</lpage>
          . doi:10.1073/pnas.1633736100.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Filar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Haythorpe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Taylor</surname>
          </string-name>
          ,
          <article-title>Linearly-growing reductions of Karp's 21 NP-complete problems</article-title>
          ,
          <source>Numerical Algebra, Control and Optimization</source>
          <volume>8</volume>
          (
          <year>2018</year>
          )
          <fpage>1</fpage>
          -
          <lpage>16</lpage>
          . doi:10.3934/naco.2018001.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>C.</given-names>
            <surname>Makri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Guedira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. E.</given-names>
            <surname>Harraki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. E.</given-names>
            <surname>Hani</surname>
          </string-name>
          ,
          <article-title>Mixed integer nonlinear programming for optimal placement and size of capacitors in RDS</article-title>
          ,
          <source>in: 2023 3rd International Conference on Innovative Research in Applied Science, Engineering and Technology (IRASET)</source>
          , Mohammedia, Morocco,
          <year>2023</year>
          , pp.
          <fpage>01</fpage>
          -
          <lpage>06</lpage>
          . doi:10.1109/IRASET57153.2023.10152896.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>X.</given-names>
            <surname>Gu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Krish</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Sohail</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Thakur</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Sabrina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Fan</surname>
          </string-name>
          ,
          <article-title>From integer programming to machine learning: A technical review on solving university timetabling problems</article-title>
          ,
          <source>Computation</source>
          <volume>13</volume>
          (
          <year>2025</year>
          )
          <elocation-id>10</elocation-id>
          . doi:10.3390/computation13010010.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>R.</given-names>
            <surname>Anand</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Aggarwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <article-title>A comparative analysis of optimization solvers</article-title>
          ,
          <source>Journal of Statistics and Management Systems</source>
          <volume>20</volume>
          (
          <year>2017</year>
          )
          <fpage>623</fpage>
          -
          <lpage>635</lpage>
          . doi:10.1080/09720510.2017.1395182.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>H.</given-names>
            <surname>Karlof</surname>
          </string-name>
          ,
          <article-title>The simplex algorithm</article-title>
          , in: Linear Programming, Modern Birkhäuser Classics, Birkhäuser Boston,
          <year>2009</year>
          . doi:10.1007/978-0-8176-4844-2_2.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M. I.</given-names>
            <surname>Mihailescu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. L.</given-names>
            <surname>Nita</surname>
          </string-name>
          ,
          <article-title>Differential and linear cryptanalysis</article-title>
          , in:
          <source>Pro Cryptography and Cryptanalysis with C++23</source>
          , Apress, Berkeley, CA,
          <year>2023</year>
          . doi:10.1007/978-1-4842-9450-5_19.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>U. M.</given-names>
            <surname>Borghoff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Bottoni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Pareschi</surname>
          </string-name>
          ,
          <article-title>An organizational theory for multi-agent interactions integrating human agents, LLMs, and specialized AI</article-title>
          ,
          <source>Discov Computing</source>
          <volume>28</volume>
          (
          <year>2025</year>
          )
          <elocation-id>138</elocation-id>
          . doi:10.1007/s10791-025-09667-2.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S.</given-names>
            <surname>Kölbl</surname>
          </string-name>
          , G. Leander, T. Tiessen,
          <article-title>Observations on the SIMON block cipher family</article-title>
          ,
          in:
          <source>Advances in Cryptology - CRYPTO 2015</source>
          , volume
          <volume>9215</volume>
          of Lecture Notes in Computer Science, Springer-Verlag, Berlin, Heidelberg,
          <year>2015</year>
          , pp.
          <fpage>161</fpage>
          -
          <lpage>185</lpage>
          . doi:10.1007/978-3-662-47989-6_8.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>C.</given-names>
            <surname>Buchheim</surname>
          </string-name>
          , G. Rinaldi,
          <article-title>Terse integer linear programs for boolean optimization</article-title>
          ,
          <source>Journal on Satisfiability, Boolean Modelling and Computation</source>
          <volume>6</volume>
          (
          <year>2010</year>
          )
          <fpage>121</fpage>
          -
          <lpage>139</lpage>
          . doi:10.3233/SAT190065.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>