<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Oksana Pichugina</string-name>
          <email>oksanapichugina1@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Olha Matsiy</string-name>
          <email>olga.matsiy@gmail.com</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yuriy Skob</string-name>
          <email>yuriy.skob@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>National Aerospace University "Kharkiv Aviation Institute"</institution>
          ,
          <addr-line>17 Chkalova Street, Kharkiv, 61070</addr-line>
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Toronto</institution>
          ,
          <addr-line>27 King's College Circle, Toronto, M5S 1A1</addr-line>
          ,
          <country country="CA">Canada</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>V. N. Karazin Kharkiv National University</institution>
          ,
          <addr-line>4 Svobody Sq., Kharkiv, 61022</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This study investigates various linear discrete formulations of the unbounded knapsack problem, including binary and bounded forms. Two variations of binary knapsack problem reformulations are examined, each with distinct linear constraints. The Gurobi and CPLEX solvers are used to compare the performance of these models. The computational experiments are conducted on randomly generated instances of sizes 10-100. The results revealed that, on average, Gurobi was 20% faster than CPLEX. The integer, bounded, and 0/1 knapsack problem formulations had comparable mean run times. Incorporating specific constraints in the 0/1 formulation yielded superior results compared to the basic 0/1 knapsack problem. Overall, this research contributes to understanding efficient and effective formulations for solving the unbounded knapsack problem and choosing a better solver.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Combinatorial optimization (CO) is a branch of optimization that deals with finding the best solution
among a finite set of possible solutions [
        <xref ref-type="bibr" rid="ref1">1,7</xref>
        ]. It has wide-ranging applications in various fields, including Computer Science, Operations
Research, Engineering, Finance, and many others. Overall, CO is highly important because it provides
powerful tools for solving complex problems and making better decisions in a wide range of applications.
      </p>
      <p>2023 Copyright for this paper by its authors.</p>
      <p>Binary optimization (Binary Programming, BP) is a special case of Integer Programming (IP) that deals
with optimization problems where the decision variables are restricted to take on binary values, i.e., 0 or 1. BP problems
arise in many applications, such as network flow problems, scheduling problems, and Boolean function
optimization. Here, the decision variables can represent binary decisions, such as whether to include or
exclude a certain item in a set or schedule a job at a particular moment. Binary optimization problems
have many applications in various fields. They can be solved using various techniques, including IP,
dynamic programming, and heuristic algorithms such as simulated annealing, genetic algorithms, and
tabu search.</p>
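      <p>As a toy illustration of such a binary optimization problem, the sketch below brute-forces a tiny 0/1 knapsack instance over all binary assignments (our own example; this is tractable only for small n, which is why the exact and heuristic methods above are needed in practice):
```python
from itertools import product

def brute_force_bp(values, weights, capacity):
    """Enumerate all 0/1 assignments and keep the best feasible packing."""
    n = len(values)
    best_value, best_x = 0, (0,) * n
    for x in product((0, 1), repeat=n):
        weight = sum(wi * xi for wi, xi in zip(weights, x))
        value = sum(vi * xi for vi, xi in zip(values, x))
        # Keep the assignment if it fits and improves the objective.
        if capacity >= weight and value > best_value:
            best_value, best_x = value, x
    return best_value, best_x

# Tiny 0/1 knapsack instance: item values, item weights, capacity.
best_value, best_x = brute_force_bp([60, 100, 120], [10, 20, 30], 50)
# best_value == 220, achieved by best_x == (0, 1, 1)
```
Each of the 2^n binary vectors here encodes one include/exclude decision per item, which is exactly the kind of decision variable described above.</p>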
      <p>
        Due to the discreteness of the search domain, CO problems allow multiple formulations. The
performance of a solver can depend greatly on the formulation of the problem being solved. Different
problem formulations can result in different solution times, solution quality, and solution methods used
by the solver. This is particularly true for the KP, which can be formulated in various ways [
        <xref ref-type="bibr" rid="ref3">3,10</xref>
        ],
including as an IP problem [18], a dynamic programming problem [11], a constraint satisfaction
problem, the Quadratic Unconstrained Binary Problem (QUBO) [15] etc.
      </p>
      <p>For example, in KP, the choice of the objective function can significantly impact the solver's
performance. A simple objective function that only considers the value of items being packed can result
in a suboptimal solution, particularly when the weights of the items are not uniformly distributed. In
contrast, a more complex objective function that balances the value and weight of the packed items can
result in a better solution but may require more computational resources. Similarly, the choice of solver
used to solve the problem can also affect performance. Different solvers may be better suited to
particular problem formulations or problem instances and can have different strengths and weaknesses.
For example, some solvers may be better at solving linear programming problems, while others may be
better at solving mixed-integer or nonlinear programming problems.</p>
      <p>In general, it is highly important to choose a problem formulation that is well-suited to a specific
problem being solved and a solver that is well-suited to the problem formulation. Experimenting with
different problem formulations and solvers may also be useful to determine which combination yields
the best performance for a particular problem instance.</p>
      <p>
        KP is a well-known problem in computer science and optimization that has important applications
in many fields, including finance, resource allocation, and logistics [
        <xref ref-type="bibr" rid="ref3 ref5">3,5</xref>
        ].
      </p>
      <p>In this paper, we aim to investigate how the solution time of the Unbounded KP, in which each item
placed in the knapsack has an unlimited number of copies, depends on its formulation and on the solver,
where the solvers are limited to the well-known Gurobi and CPLEX.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Knapsack problem (KP)</title>
        <p>
        The knapsack problem (KP) [
        <xref ref-type="bibr" rid="ref5">5,9,17,18</xref>
        ] is a classic problem in CO that involves selecting a subset
of items with the goal of maximizing the total value while satisfying a constraint on the total weight or
volume.
      </p>
    </sec>
    <sec id="sec-3">
      <title>2.1. KP variations</title>
      <p>There are several variants of the knapsack problem, including:
• 0/1 KP (binary KP): This is the classic problem variant, where each item is either included in
the knapsack once or not at all. The goal is to
maximize the total value of the selected items (Objective 1) subject to the constraint on the total
weight (Constraint 1).
• Bounded KP (BKP): In this variant, each item has a limited number of copies available, and
the goal is to select a subset of items to maximize Objective 1 subject to Constraint 1 and the limit
on the number of copies of each item that can be included (Constraint 2).
• Unbounded KP (UKP, integer KP): In this special case, each item has an unlimited number of
copies, and the goal is to choose how many copies of each item to pack to maximize Objective 1 subject
to Constraint 1 and the integrality of the numbers of copies of the items (Constraint 3).
• Fractional knapsack problem (FKP): In this version, items can be included in the knapsack
partially, i.e., fractions of an item can be utilized. The goal is to maximize Objective 1 subject to
Constraint 1.
• Subset sum problem (SSP): This is a special case of the knapsack problem in which each item's
value equals its weight, and the goal is to select a subset that sums to a specific target value.</p>
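      <p>The contrast between the 0/1 and unbounded variants shows up directly in their classic dynamic-programming recurrences; a minimal sketch (our own function names; weights and the capacity are assumed to be non-negative integers):
```python
def knapsack_01(values, weights, capacity):
    """0/1 KP: each item is used at most once."""
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Sweep capacities from high to low so each item is counted once.
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

def knapsack_unbounded(values, weights, capacity):
    """UKP: unlimited copies of every item."""
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Sweep capacities from low to high so an item may be reused.
        for c in range(w, capacity + 1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

value_01 = knapsack_01([60, 100, 120], [10, 20, 30], 50)
value_ukp = knapsack_unbounded([60, 100, 120], [10, 20, 30], 50)
```
On this instance the 0/1 variant attains 220, while the unbounded variant attains 300 by packing five copies of the first item; the only difference between the two routines is the sweep direction over capacities.</p>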
      <p>These knapsack problems possess different properties and may require different algorithms. Also,
formulations of the same problem can differ. Accordingly, the appropriate algorithms vary
depending on the formulation.</p>
    </sec>
    <sec id="sec-4">
      <title>2.2. KP applications</title>
      <p>
        KP has practical applications in many fields [
        <xref ref-type="bibr" rid="ref1 ref2 ref3 ref4">1,2,3,4,8,10,12,13</xref>
        ], including:
• Resource allocation: In manufacturing and logistics, the KP can be used to determine which
orders to fulfill, which items to stock, and how to allocate resources such as vehicles, workers, and
machines.
• Finance: The KP can be used in financial portfolio optimization, where an investor must choose
a set of investments to maximize returns while respecting constraints on available capital and risk
tolerance.
• Project selection: In project management, the KP can be utilized to select a set of projects that
maximize the expected value of the portfolio while respecting constraints such as budget and
resource availability.
• Cutting stock problem: In cutting stock problems, KP is used to find the optimal way to cut
large items, such as sheets of metal, into smaller pieces with minimum waste.
      </p>
      <p>
        Also, the KP has various applications in natural language processing (NLP) [
        <xref ref-type="bibr" rid="ref2">2,12</xref>
        ]. Here are
some examples:
• Named entity recognition: Named entity recognition (NER) identifies named entities (such as
people, organizations, and locations) in text. The problem of selecting the most relevant named
entities can be formulated as a KP, where each named entity is represented by a weight (indicating
its relevance) and a value (indicating its frequency or importance).
• Sentence compression: Sentence compression involves generating shorter sentences that
preserve the meaning of the original sentences. This can be formulated as a KP, where the goal is to
select the most important words or phrases from the original sentence to include in the compressed
sentence.
• Text summarization: KP can be used in text summarization, where the goal is to select a subset
of sentences that represent the most important content of a document. The problem can be
formulated as a KP, where each sentence is represented by a weight (indicating its importance) and
a value (indicating its length).
• Keyword selection: In keyword selection, the goal is to choose a set of keywords that best
represent the content of a document or a set of documents. This can be formulated as a KP, where
each keyword is represented by a weight (importance) and a value (frequency or relevance).
      </p>
      <p>Thus, KP is an important problem in computer science and optimization, and its applications extend
to a wide range of fields.</p>
    </sec>
    <sec id="sec-5">
      <title>2.3. Literature on KP</title>
      <p>
        The KP is a well-studied problem in the CO, and many books, articles, and research papers exist on
the topic. Among them are books:
• Books [
        <xref ref-type="bibr" rid="ref3">3,10</xref>
        ] provide a comprehensive overview of the KP and its variants, including exact and
heuristic solution methods, complexity analysis, and applications in various fields.
• The source [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] is a classic book covering fundamental computer science algorithms, including
the KP and dynamic programming.
• The book [18] provides an in-depth treatment of integer programming, including the KP,
mixed-integer programming, and branch-and-bound methods for solving IP problems.
• The book [9] is dedicated to the dynamic programming approach to solving the KP and other
optimization problems. It covers the theory and application of dynamic programming algorithms
with examples and exercises.
• The book [17] covers approximation algorithms for optimization problems, particularly the KP,
with a focus on algorithm design and analysis.
      </p>
      <p>Many more resources are available, including research papers, conference proceedings, and online
tutorials.</p>
    </sec>
    <sec id="sec-6">
      <title>2.4. KP benchmark libraries</title>
      <p>Several benchmark libraries are available for the KP that can be used to evaluate the performance
of different algorithms and solvers. Here are some of the most commonly used benchmark libraries:
• OR-Library [19]: The OR-Library is a collection of test problems for various optimization
problems, including KP instances. It includes a range of problem sizes and characteristics, such as
varying numbers of items and capacities, and can be used to evaluate the performance of different
algorithms and solvers.
• DIMACS Implementation Challenge [21]: The DIMACS Implementation Challenge
includes a set of benchmark instances for several combinatorial optimization problems, including
the KP. The instances are designed to be challenging and can be used to evaluate the performance
of different algorithms and solvers.
• MIPLIB [20]: The Mixed IP Library (MIPLIB) includes a set of benchmark instances for
mixed IP problems, including the KP. It contains instances of varying sizes and characteristics and
can be used to evaluate the performance of different solvers and algorithms.</p>
      <p>These benchmark libraries can be useful for evaluating the performance of different algorithms and
solvers and comparing the effectiveness of different approaches to solving the KP.</p>
    </sec>
    <sec id="sec-7">
      <title>2.5. KP solution techniques</title>
      <p>
        Various methods are known to solve the KP, ranging from exact methods providing optimal
solutions to heuristic approaches that provide approximate solutions [
        <xref ref-type="bibr" rid="ref3">3,10</xref>
        ]. Among them are the following exact and
approximate methods.
2.5.1. Exact methods
• Linear IP (ILP): This mathematical optimization technique can solve the KP by formulating
the problem as a linear program with integer variables and using an ILP solver to find the optimal
solution.
• Branch and Bound: This method enables solving quite large instances by exploring a search
tree to find the optimal solution. The search tree is constructed by branching on each item and
exploring all possible feasible combinations of items that lead to a better solution.
• Constraint Programming: This declarative programming paradigm can solve the KP by
expressing the problem's constraints as logical constraints and using a constraint solver to find a
feasible solution.
• Dynamic Programming: This exact method can solve the KP efficiently for small problem
sizes. It works by constructing a dynamic programming table to determine the maximum value that
can be obtained for each subproblem of the knapsack packing. Then it uses these subproblems to
solve the larger problem.
2.5.2. Approximate methods
• Greedy Algorithms: These heuristic methods provide approximate solutions by greedily
selecting items based on criteria such as the value-to-weight ratio. Although they do not always
provide optimal solutions, they are computationally efficient and can solve the KP with large
problem sizes.
• Metaheuristic Algorithms: These heuristic methods use search and optimization techniques
to find good solutions to Knapsack problems. Examples include genetic algorithms, simulated
annealing, and tabu search.
      </p>
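      <p>A minimal sketch of the greedy value-to-weight heuristic mentioned above, applied to the 0/1 variant (our own illustration; the rule is fast but not guaranteed to be optimal):
```python
def greedy_knapsack(values, weights, capacity):
    """Take items in non-increasing value-to-weight order while they fit."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total_value, remaining, chosen = 0, capacity, []
    for i in order:
        if remaining >= weights[i]:
            chosen.append(i)
            remaining -= weights[i]
            total_value += values[i]
    return total_value, sorted(chosen)

total_value, chosen = greedy_knapsack([60, 100, 120], [10, 20, 30], 50)
```
On the instance with values (60, 100, 120), weights (10, 20, 30), and capacity 50, the greedy rule returns 160 by taking the first two items, while the optimum is 220; this gap is exactly why greedy methods are classified above as approximate.</p>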
      <p>The choice of method depends on the problem size, the required level of optimality, and the
computational resources available. Exact methods are preferred for small problem sizes, while heuristic
and metaheuristic methods are preferred for larger problem sizes or when an approximate solution is
acceptable. Also, the set of relevant methods depends on the selected formulation of the KP.</p>
    </sec>
    <sec id="sec-8">
      <title>2.6. Gurobi and CPLEX solvers</title>
      <p>Gurobi and CPLEX are both suitable for solving knapsack problems, as they offer powerful
optimization algorithms for integer programming problems, which the KP can be formulated as.</p>
      <p>Solving the KP using Gurobi or CPLEX can be formulated as a binary or integer linear programming
problem, where the binary decision variables represent whether an item is packed in the knapsack or
not. Respectively, the objective function is the sum of the values of the packed items, and the capacity
constraint is represented as a linear inequality.</p>
      <p>Gurobi and CPLEX both offer powerful algorithms for solving MILP problems, including
branch-and-bound, cutting-plane, and heuristics, which can be used to find the optimal or near-optimal
solutions to the KP. Additionally, Gurobi and CPLEX offer some features that can be used to speed up
the optimization process and improve the quality of the solutions.</p>
      <p>In summary, both Gurobi and CPLEX are well-suited for solving the KP and offer powerful
algorithms and features for finding optimal or near-optimal solutions. The choice between them may
depend on factors such as the specific problem formulation, the size and complexity of the problem,
and personal preferences.</p>
      <p>Other solvers capable of solving the KP include another popular integer programming solver, SCIP;
the constraint programming solvers Choco and Gecode; and solvers supporting evolutionary and local
search algorithms.</p>
    </sec>
    <sec id="sec-9">
      <title>3. Unbounded KP formulations</title>
      <p>The UKP is a classic optimization problem in which a set of items, each with a weight wi and a
value vi , must be packed into a knapsack of limited capacity C in such a way as to maximize the total
value of the items packed.</p>
      <p>One way to formulate the unbounded knapsack problem mathematically is as follows.
Let n be the number of items, and let wi and vi denote the weight and value of the i-th item,
respectively. Let C be the maximum capacity of the knapsack. The goal is to find the maximum
possible value that can be packed into the knapsack.</p>
      <p>Let Jn = {1, ..., n}, Jn0 = Jn ∪ {0}, and let the decision variables form a vector (x1, ..., xn),
where xi represents the number of copies of item i included in the knapsack. Then the model is:
maximize Σi∈Jn vi·xi (1)
subject to the constraints
Σi∈Jn wi·xi ≤ C, (2)
xi ∈ Z+, i ∈ Jn. (3)</p>
      <p>We will further refer to the model (1)-(3) as UKP.IP.</p>
      <p>Let us reformulate it as a Bounded KP. For that, we define the maximum number ki ∈ Z+ of
copies of item i that can be used in the knapsack. Clearly,
ki = ⌊C/wi⌋, i ∈ Jn. (4)</p>
      <p>In this notation, we obtain the Bounded KP (1), (2),
xi ∈ Jki0, i ∈ Jn, (5)
where ki is given by (4) (we further refer to this model as UKP.BKP). Evidently, the models UKP.IP
and UKP.BKP are equivalent.</p>
      <p>Now, we can reduce UKP.BKP to a 0/1 KP of higher dimension. For that, we introduce multisets
of weights and values
W = {w1^k1, ..., wn^kn}, V = {v1^k1, ..., vn^kn}, (6)
where the notation wi^ki means that the multiplicity of wi in W is ki for i ∈ Jn. Thus the cardinality
of the multisets (6) is
N = Σi∈Jn ki. (7)</p>
      <p>Enumerating the elements of W, V yields
W' = {w'j}j∈JN = {w1, ..., w1, w2, ..., w2, ..., wn, ..., wn},
V' = {v'j}j∈JN = {v1, ..., v1, v2, ..., v2, ..., vn, ..., vn}. (8)</p>
      <p>In these notations, a binary equivalent of UKP.BKP is:
maximize Σj∈JN v'j·yj (9)
subject to
Σj∈JN w'j·yj ≤ C, (10)
yj ∈ {0, 1}, j ∈ JN, (11)
where N is given by (7), and y = (y1, ..., yN) is another vector of decision variables, in which yj is a
binary decision variable that takes the value 1 if item j is selected for packing, and 0 otherwise.</p>
      <p>We will further refer to the model (9)-(11) as UKP.0/1KP.1.</p>
      <p>Note that, in contrast to the previous model, in UKP.0/1KP.1 some items can be identical, i.e., they
are represented by the same tuple (w, v), w ∈ W, v ∈ V, where W = (wi)i∈Jn, V = (vi)i∈Jn.</p>
      <p>Without loss of generality, we can assume that the tuples (wi, vi), i ∈ Jn, are ordered
lexicographically. More specifically, under this ordering,
wi ≤ wi+1, i ∈ Jn−1, (12)
and if there is i' ∈ Jn−1 such that wi' = wi'+1, then vi' ≥ vi'+1. (13)</p>
      <p>From the ordering (12), (13) of the item tuples, it follows that W', V' are also ordered. Namely,
w'j ≤ w'j+1, j ∈ JN−1, and v'j ≥ v'j+1, j ∈ Ii, (14)
where Ii is the set of indices of the weights in W' equal to wi, i ∈ Jn.</p>
      <p>Now, we can strengthen the model UKP.0/1KP.1 by adding the following constraints:
yj ≥ yj+1, j ∈ Ii, i ∈ Jn. (15)</p>
      <p>We will further refer to the model (9)-(11), (15) as UKP.0/1KP.2.</p>
    </sec>
    <sec id="sec-13">
      <title>4. Computational experiment</title>
      <p>We solved randomly generated KP instances of dimensions 10-100 in the form of the models
UKP.IP, UKP.BKP, UKP.0/1KP.1, and UKP.0/1KP.2 in order to compare their performance in two
selected popular solvers, CPLEX and Gurobi. The software implementation was done in Python; for
CPLEX, the package DOCPLEX was used.</p>
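      <p>The reduction of the UKP to the 0/1 models compared in the experiment can be sketched as follows (our own illustrative code): it computes the copy counts ki per (4) and builds the expanded multisets W', V' of cardinality N per (7), (8).
```python
def expand_ukp_to_01(values, weights, capacity):
    """Build the 0/1 expansion of a UKP instance: ki copies of each item."""
    counts = [capacity // w for w in weights]   # ki = floor(C / wi), as in (4)
    expanded_w, expanded_v = [], []
    for v, w, k in zip(values, weights, counts):
        expanded_w.extend([w] * k)              # ki copies of the weight, as in (8)
        expanded_v.extend([v] * k)
    return expanded_w, expanded_v, counts

# Two items, capacity 50: k1 = 5, k2 = 2, so N = 7 binary variables.
w_prime, v_prime, counts = expand_ukp_to_01([60, 100], [10, 20], 50)
```
The expanded lists are exactly the data handed to the binary models, so the price of the reduction is the growth of the number of variables from n to N.</p>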
      <p>Instance generator outline:
• Set n;
• Set the number prn of problems generated in the series;
• Set the ranges [wmin, wmax], [vmin, vmax] for generating weights and values, respectively;
• Set m, the upper bound on the number of item copies;
• Generate the sets W ~ U(wmin, wmax), V ~ U(vmin, vmax);
• Generate a constant C = m·c·α, α ∈ [0.8, 1], where c = mini∈Jn wi, such that the condition
C ≥ maxi∈Jn wi is fulfilled.</p>
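      <p>The outline can be sketched in code as follows (a hedged reading of the description above; variable names are our own, and raising C to the maximum weight is our interpretation of the last condition):
```python
import random

def generate_instance(n, w_range, v_range, m, rng=None):
    """Generate one random UKP instance following the outline above."""
    rng = rng or random.Random()
    weights = [rng.randint(w_range[0], w_range[1]) for _ in range(n)]
    values = [rng.randint(v_range[0], v_range[1]) for _ in range(n)]
    # C is scaled from c = min weight: C = m * c * alpha, alpha in [0.8, 1],
    # and is raised to max(weights) if needed so that every item can fit.
    capacity = max(int(m * min(weights) * rng.uniform(0.8, 1.0)), max(weights))
    return weights, values, capacity

weights, values, capacity = generate_instance(10, (1, 20), (1, 50), 5, random.Random(7))
```
Seeding the generator, as in the last line, makes a series of prn instances reproducible across solver runs.</p>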
      <p>We generated 100 instances in each dimension 10, 20, ..., 100 and experimented with them.
The main conclusion is that Gurobi performs better: on average, it is nearly 20% faster than
CPLEX. The runtimes of the models UKP.IP, UKP.BKP, and UKP.0/1KP.1 are very similar, and we
cannot single out a top model. At the same time, a comparison of the binary models UKP.0/1KP.1 and
UKP.0/1KP.2 shows that strengthening UKP.0/1KP.1 by introducing the constraint (15) is fruitful, as
the performance of UKP.0/1KP.2 is around 15% better than that of UKP.0/1KP.1.</p>
    </sec>
    <sec id="sec-10">
      <title>5. Further work</title>
      <p>
        The classical KP is a well-studied problem, but there are several of its generalizations and extensions
that are of interest in the research community [
        <xref ref-type="bibr" rid="ref3">3,8,10,13</xref>
        ]:
• Multiple KP: This is a KP generalization where multiple knapsacks are available for packing
items. Each item has a weight and value, and the goal is to pack the items into the knapsacks while
maximizing the total value subject to a capacity constraint on each knapsack.
• Time-dependent KP: This is an extension of the KP where the items have a time-dependent
value. Each item has a value that varies over time, and the goal is to pack the items into the knapsack
at the optimal time to maximize the total value.
• Stochastic KP: In this problem, the item values and/or weights are uncertain. The goal is to
pack the items into the knapsack to maximize the expected value subject to the capacity constraint.
• Multiple-choice knapsack problem: Each item is associated with a set of options. The goal is
to choose one option for each item to maximize the total value subject to the capacity constraint on
the knapsack.
      </p>
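      <p>As an illustration of one of the listed extensions, here is a dynamic-programming sketch for the multiple-choice knapsack problem, in which exactly one option must be chosen from each item group (our own code):
```python
def multiple_choice_knapsack(groups, capacity):
    """groups: list of option lists [(weight, value), ...]; pick one option per group."""
    NEG = float("-inf")
    dp = [0] * (capacity + 1)   # best value before any group is processed
    for group in groups:
        ndp = [NEG] * (capacity + 1)
        for c in range(capacity + 1):
            for w, v in group:
                # Option (w, v) is usable at capacity c if it fits and the
                # previous groups admit a feasible choice at capacity c - w.
                if c >= w and dp[c - w] > NEG:
                    cand = dp[c - w] + v
                    if cand > ndp[c]:
                        ndp[c] = cand
        dp = ndp
    return max(dp)

best = multiple_choice_knapsack([[(1, 2), (2, 3)], [(1, 1), (3, 5)]], 4)
```
For the groups [(1, 2), (2, 3)] and [(1, 1), (3, 5)] with capacity 4, the optimum is 7, obtained by taking the first option of the first group and the second option of the second group; the mandatory one-per-group choice is what distinguishes this variant from the plain 0/1 KP.</p>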
      <p>These generalizations and extensions of the classical KP provide additional complexities to the
problem, making it more challenging to find an optimal solution. Researchers intensively work on
developing general and specialized algorithms and heuristics for each of these variants of the KP, but
still, many open problems remain in this research field.</p>
      <p>The knapsack problems arise as a subproblem in many nonlinear binary optimization problems
maximize f0(y) (16)
subject to the weight constraint (10), the binary constraint (11), and any other linear or nonlinear
constraints:</p>
      <p>fi(y) ≤ 0, i ∈ Jm, (17)
where fi: BN → R1, i ∈ Jm. The functions present in (17) can be incorporated into a penalty
function, and we come to an optimization problem equivalent to (10), (11), (16), (17) having the form
minimize F(y) = −f0(y) + Σi∈Jm λi·fi(y), (18)
where λ = (λ1, ..., λm) ∈ Rm≥0 is a vector of penalty parameters chosen large enough to ensure that
the optimizers of the above problem and of the problem (18) coincide. The function F(y) can always
be convexified with the help of a regularization term, and we come to the problem
minimize Φλ,μ(y) = F(y) + μ·(‖y − (1/2)·e‖² − N/4), (19)
subject to the constraints (10), (11), equivalent to the two problems above. Here e is the all-ones
vector, and μ ≥ 0 is a regularization parameter chosen large enough to guarantee the convexity of
Φλ,μ(y). Also, we can assume that Φλ,μ(y) is differentiable; if not, it can be replaced by its convex
differentiable extension from BN onto RN, since the set BN is vertex-located [16]. Now, the
conditional gradient method can be applied to solving (19) with the constraints (10), (11), where an
auxiliary problem of minimizing a linearization of Φλ,μ(y) over the corresponding polytope is solved
at each iteration. These auxiliary problems are, in fact, binary knapsack problems. In this way, KPs
can be used in nonlinear optimization over the binary domain.</p>
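      <p>The regularization term in (19) vanishes at every binary point: each coordinate of a binary vector y satisfies (yj − 1/2)² = 1/4, so ‖y − (1/2)·e‖² equals a quarter of the dimension. A quick numeric check (our own illustration):
```python
from itertools import product

def reg_term(y, mu=1.0):
    """mu * (||y - e/2||^2 - n/4): the regularization term added in (19)."""
    n = len(y)
    sq_norm = sum((yi - 0.5) ** 2 for yi in y)
    return mu * (sq_norm - n / 4)

# On every binary vector of length 4 the term is exactly zero,
# so the objectives of (18) and (19) coincide on the binary domain.
checks = [reg_term(y) for y in product((0, 1), repeat=4)]
```
This is why the convexified problem (19) keeps the same binary optimizers as (18): the added term only changes the objective off the binary domain.</p>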
      <p>In future, we plan to extend the experimental part onto other formulations of the classical KP, such
as QUBO, and consider nonlinear optimization generalizations and the listed and many more extensions
and generalizations of KP.</p>
    </sec>
    <sec id="sec-11">
      <title>6. Conclusion</title>
      <p>This study explored different linear discrete formulations for solving the unbounded knapsack
problem, including transforming the problem into binary and bounded forms. Two variations of binary
knapsack reformulations were examined, each with distinct linear constraints. The performance of these
models was compared using Gurobi and CPLEX solvers, and the implementation was carried out in
Python. The experiments were conducted on randomly generated instances of sizes ranging from 10 to
100, with the weights and values of items following uniform discrete distributions within specific
ranges. Our findings indicate that on average, Gurobi is approximately 20% faster than CPLEX. The
IKP, BKP, and 0/1 KP formulations had similar mean run times. Additionally, the inclusion of specific
constraints in the 0/1 formulation demonstrated superior performance over the underlying 0/1 KP
model.</p>
    </sec>
    <sec id="sec-12">
      <title>7. References</title>
      <p>[6] L. Koliechkina, O. Pichugina, O. Dvirna, Horizontal method application to multiobjective
combinatorial optimization over permutations, in: 2022 IEEE 3rd International Conference on
System Analysis &amp; Intelligent Computing (SAIC), IEEE, 2022, pp. 1-5.
doi:10.1109/SAIC57818.2022.9923018.
[7] B. Korte, J. Vygen, Combinatorial Optimization: Theory and Algorithms, 6th ed., Springer, 2018.
[8] S. Laabadi, M. Naimi, H. E. Amri, B. Achchab, The 0/1 multidimensional knapsack problem and
its variants: A survey of practical models and heuristic approaches 8 (2018) 395-439.
doi:10.4236/ajor.2018.85023. Publisher: Scientific Research Publishing.
[9] A. Lew, H. Mauch, Dynamic Programming: A Computational Tool, softcover reprint of hardcover
1st ed. 2007, Springer, 2010.
[10] S. Martello, P. Toth, Knapsack Problems: Algorithms and Computer Implementations, revised ed.,
John Wiley &amp; Sons, 1990.
[11] O. B. Matsiy, A. V. Morozov, A. V. Panishev, A recurrent algorithm to solve the weighted
matching problem 52 (2016) 748-757. doi:10.1007/s10559-016-9876-4.
[12] H. Nishikawa, T. Hirao, T. Makino, Y. Matsuo, Text summarization model based on redundancy
constrained knapsack problem, in: Proceedings of COLING 2012: Posters, The COLING 2012
Organizing Committee, 2012, pp. 893-902. URL: https://aclanthology.org/C12-2087.
[13] O. Pichugina, L. Koliechkina, The constrained knapsack problem: Models and the
polyhedral-ellipsoid method, in: A. Strekalovsky, Y. Kochetov, T. Gruzdeva, A. Orlov (Eds.),
Mathematical Optimization Theory and Operations Research: Recent Trends, Communications in
Computer and Information Science, Springer International Publishing, 2021, pp. 233-247.
doi:10.1007/978-3-030-86433-0_16.
[14] O. Pichugina, S. Yakovlev, New classes of unconstrained permutation-based problems and their
solutions, in: 2022 IEEE 3rd International Conference on System Analysis &amp; Intelligent
Computing (SAIC), IEEE, 2022, pp. 1-5. doi:10.1109/SAIC57818.2022.9922919.
[15] A. P. Punnen, The Quadratic Unconstrained Binary Optimization Problem: Theory, Algorithms,
and Applications, 1st ed., Springer Nature, 2022.
[16] Y. G. Stoyan, S. V. Yakovlev, Theory and methods of Euclidian combinatorial optimization:
Current status and prospects 56 (2020) 366-379. doi:10.1007/s10559-020-00253-6.
[17] V. V. Vazirani, Approximation Algorithms, softcover reprint of hardcover 1st ed. 2001,
Springer, 2010.
[18] L. A. Wolsey, Integer Programming, 2nd ed., Wiley, 2020.
[19] OR-LIBRARY. URL: http://people.brunel.ac.uk/~mastjjb/jeb/info.html.
[20] MIPLIB - mixed integer problem library. URL: https://miplib2010.zib.de/.
[21] DIMACS :: Implementation challenges. URL: http://dimacs.rutgers.edu/programs/challenge/.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>N.</given-names>
            <surname>Christofides</surname>
          </string-name>
          , Combinatorial optimization, Wiley,
          <year>1979</year>
          . URL: http://www.gbv.de/dms/hbz/toc/ht000072474.pdf, OCLC: 4495250.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>T.</given-names>
            <surname>Hirao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yoshida</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Nishino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Yasuda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Nagata</surname>
          </string-name>
          ,
          <article-title>Single-document summarization as a tree knapsack problem</article-title>
          ,
          <source>in: Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics</source>
          ,
          <year>2013</year>
          , pp.
          <fpage>1515</fpage>
          -
          <lpage>1520</lpage>
          . URL: https://aclanthology.org/D13-1158.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>H.</given-names>
            <surname>Kellerer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Pferschy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Pisinger</surname>
          </string-name>
          , Knapsack Problems, softcover reprint of hardcover 1st ed. 2004 ed., Springer,
          <year>2010</year>.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>L.</given-names>
            <surname>Kirichenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Radivilova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Ryzhanov</surname>
          </string-name>
          ,
          <article-title>Applying visibility graphs to classify time series</article-title>
          , in: S.
          <string-name>
            <surname>Babichev</surname>
          </string-name>
          , V. Lytvynenko (Eds.),
          <source>Lecture Notes in Computational Intelligence and Decision Making, Lecture Notes on Data Engineering and Communications Technologies</source>
          , Springer International Publishing,
          <year>2022</year>
          , pp.
          <fpage>397</fpage>
          -
          <lpage>409</lpage>
          . doi:10.1007/978-3-030-82014-5_26.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>D.</given-names>
            <surname>Knuth</surname>
          </string-name>
          , Art of Computer Programming,
          <source>The: Fundamental Algorithms</source>
          , Volume
          <volume>1</volume>
          , 3rd ed.,
          Addison-Wesley Professional,
          <year>1997</year>.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>