<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>February</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Preventing disinformation spread across network clusters by strategic edge deletions: a Lagrangian heuristic</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Martina Cerulli</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Manlio Gaudioso</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Domenico Serra</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Carmine Sorgente</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science, University of Salerno</institution>
          ,
          <addr-line>Fisciano</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Mathematics, University of Salerno</institution>
          ,
          <addr-line>Fisciano</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <volume>0</volume>
      <fpage>3</fpage>
      <lpage>8</lpage>
      <abstract>
<p>The rapid spread of fake news through dense clusters in networks jeopardizes trust and stability. Traditional approaches targeting harmful content often overlook the structural network dynamics that drive disinformation. The Cluster Deletion Problem addresses this by removing edges that bridge communities, disrupting disinformation flow while preserving intra-group communication. Indeed, given a graph representing the considered network, the Cluster Deletion Problem aims to transform it into a cluster graph via a minimum number of edge deletions. In this paper, we address this problem with a tailored Lagrangian heuristic based on a dual descent algorithm, incorporating both a subgradient method and an ad-hoc polynomial-time repair heuristic. The proposed approach is tested on existing instances, in most cases providing better lower bounds than those existing in the literature.</p>
      </abstract>
      <kwd-group>
<kwd>cluster deletion</kwd>
        <kwd>cybersecurity</kwd>
        <kwd>fake news</kwd>
        <kwd>Lagrangian relaxation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        clusters indicate genes with correlated expression patterns. Furthermore, <xref ref-type="bibr" rid="ref3">Charikar et al. (2005)</xref> explored
the use of edge deletion for clustering with qualitative constraints, showing its relevance to both social
and biological networks where groupings based on connectivity patterns are of interest. <xref ref-type="bibr" rid="ref4">Natanzon (1999)</xref>
showed that the CDP is NP-complete, and <xref ref-type="bibr" rid="ref1">Shamir et al. (2004)</xref> that there exists some constant ε &gt; 0
such that it is also NP-hard to approximate the CDP within a factor of 1 + ε. The NP-completeness of
the CDP, however, can already be extracted from the work of <xref ref-type="bibr" rid="ref5">Yannakakis (1981)</xref> on hereditary graph
properties. The problem has been proven to be NP-hard even when restricted to graphs with maximum
degree 4 <xref ref-type="bibr" rid="ref6">(Komusiewicz and Uhlmann, 2012)</xref>. Due to its intractability, research has largely focused on
developing approximation algorithms and fixed-parameter tractable approaches <xref ref-type="bibr" rid="ref7">(Crespelle et al., 2023)</xref>.
<xref ref-type="bibr" rid="ref3">Gramm et al. (2005)</xref> addressed the decision version of the CDP, restricting the problem to allow at
most k edge deletions, and provided a fixed-parameter algorithm with a runtime of O(1.77^k + |V|^3).
Subsequent improvements reduced this to O(1.53^k + |V|^3) <xref ref-type="bibr" rid="ref1">(Gramm et al., 2004)</xref>, O(1.47^k + |V|^3)
<xref ref-type="bibr" rid="ref10">(Damaschke, 2009)</xref>, and finally O(1.415^k + |V|^3) <xref ref-type="bibr" rid="ref11">(Böcker and Damaschke, 2011)</xref>. This advancement
leverages the fact that an optimal cluster deletion set of k edges can be computed in polynomial time on
graphs structured as a clique plus a constant number of additional nodes attached to it. As with many
NP-complete problems, the CDP has also been studied on specific graph classes to identify instances
where efficient solutions are achievable. For example, Gao et al. (2013) proposed a polynomial-time
algorithm for solving the CDP on cographs. Regarding approximation algorithms, the first approach
was introduced by Charikar et al. (2005) for complete graphs, using a linear programming relaxation to
achieve a factor-4 approximation guarantee. More recently, <xref ref-type="bibr" rid="ref13">Ailon et al. (2008)</xref> developed a randomized
algorithm with an expected 3-approximation guarantee. Deterministic algorithms have also been
proposed <xref ref-type="bibr" rid="ref14">(van Zuylen and Williamson, 2009)</xref>, culminating in a 2-factor approximation algorithm by
<xref ref-type="bibr" rid="ref15">Veldt et al. (2018)</xref>. A problem with strong connections to the CDP is the Strong Triadic Closure.
<xref ref-type="bibr" rid="ref16">Grüttemeier and Komusiewicz (2020)</xref> showed that both problems are fixed-parameter tractable w.r.t.
|E| − k, but do not admit a polynomial kernel parametrized by |E| − k. As regards exact approaches,
Grötschel and Wakabayashi (1989) proposed the first integer linear formulation for the CDP, which
was then made more compact by Martins (2016) by exploiting the sparsity of the graph. To the best of our
knowledge, the only heuristic procedure to address the CDP has been proposed in <xref ref-type="bibr" rid="ref19">(Ambrosio et al., 2025)</xref>, based on iterative edge contraction operations.
      </p>
      <p>
        In this paper, we tackle the CDP through a Lagrangian relaxation approach, a powerful mathematical
optimization technique, based on the machinery of nondifferentiable optimization
        <xref ref-type="bibr" rid="ref20 ref21">(Gaudioso and Monaco, 1992, Gaudioso et al., 2020)</xref>
        , used to address hard combinatorial problems. We solve it through
a Dual Descent (DD) Algorithm, which explores the solution space, finding valid lower bounds on the
CDP optimal value. Within this algorithm, we incorporate both a subgradient method and an ad-hoc
polynomial-time repair heuristic which, given an infeasible CDP solution, repairs it into a feasible one.
We test our approach on existing instances, showing its effectiveness. The rest of the paper is organized
as follows. In Section 2, we present the CDP edge-formulation, which is relaxed through a Lagrangian
approach in Section 3. The DD Algorithm which is used to solve this relaxation is described in Section 4.
Finally, computational results are discussed in Section 5, and Section 6 concludes the paper.
      </p>
    </sec>
    <sec id="sec-2">
      <title>The CDP edge-formulation</title>
      <p>
        To model the CDP, we consider the edge-formulation introduced in
        <xref ref-type="bibr" rid="ref18">(Martins, 2016)</xref>
        in the binary
variables x_ij which, for each edge {i, j} ∈ E, equal 1 if i and j belong to the same clique. Since minimizing
the number of edges between clusters is equivalent to maximizing the number of edges inside the
clusters
        <xref ref-type="bibr" rid="ref22">(Dessmark et al., 2007)</xref>
        , the CDP is formulated as:
      </p>
      <p>max Σ_{{i,j}∈E} x_ij (1a)
s.t. x_ij + x_ik ≤ x_jk + 1 ∀ i ∈ V, j, k ∈ N_i : j &lt; k, {j, k} ∈ E (1b)
x_ij + x_ik ≤ 1 ∀ i ∈ V, j, k ∈ N_i : j &lt; k, {j, k} ∉ E (1c)
with x ∈ {0, 1}^|E|, and N_i defined as the set of neighbors of node i ∈ V.</p>
      <p>The objective function (1a) aims at maximizing
the number of edges remaining in the graph. Given a triplet of nodes (i, j, k) such that both j and k
belong to the neighborhood of i, constraints (1b) and (1c) impose that at most one between edges {i, j}
and {i, k} stays in the graph, if edge {j, k} is either removed from the graph (x_jk = 0) or does not
exist, respectively.</p>
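      <p>To make the two constraint families concrete, the triplets behind (1b) and (1c) can be enumerated straight from the adjacency lists. The following Python sketch is ours, not the authors' code; the function name and data layout are illustrative.</p>

```python
from itertools import combinations

def cdp_triplet_constraints(nodes, edges):
    """Enumerate the triplets behind constraints (1b)/(1c): for each node i
    and each pair j < k of its neighbours, (1b) applies when {j,k} is an
    edge of the graph, (1c) when it is not."""
    E = {frozenset(e) for e in edges}
    nbrs = {v: set() for v in nodes}
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    type_b, type_c = [], []
    for i in nodes:
        for j, k in combinations(sorted(nbrs[i]), 2):
            (type_b if frozenset((j, k)) in E else type_c).append((i, j, k))
    return type_b, type_c

# Triangle {1,2,3} plus pendant edge {3,4}: node 3 yields one (1b) triplet
# and two (1c) triplets, while nodes 1 and 2 each yield one (1b) triplet.
tb, tc = cdp_triplet_constraints([1, 2, 3, 4], [(1, 2), (1, 3), (2, 3), (3, 4)])
```

On this toy graph the three (1b) triplets come from the triangle, and the two (1c) triplets both involve the pendant node 4.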
    </sec>
    <sec id="sec-3">
      <title>The Lagrangian relaxation</title>
      <p>Many optimization problems include constraints that make finding an exact solution computationally
challenging. Lagrangian relaxation addresses this by relaxing some of these constraints and adding
them to the objective as penalties. This simplifies the problem, which can be solved more efficiently
while providing insights into the structure of the solution space. Specifically, in Lagrangian relaxation,
each relaxed constraint is multiplied by a non-negative parameter, known as Lagrange multiplier, which
can be tuned to iteratively approach feasible solutions. We construct the Lagrangian relaxation of the
CDP formulation (1) by introducing multipliers
λ_ijk, ∀ i ∈ V, j, k ∈ N_i : j &lt; k, {j, k} ∈ E, and
μ_ijk, ∀ i ∈ V, j, k ∈ N_i : j &lt; k, {j, k} ∉ E,
associated to constraints (1b) and (1c), respectively. We come out with the following problem:</p>
      <p>z(λ, μ) = max_{x ∈ {0,1}^|E|} { Σ_{{i,j}∈E} x_ij + Σ_{i∈V, j,k∈N_i : j&lt;k, {j,k}∈E} λ_ijk (1 + x_jk − x_ij − x_ik) + Σ_{i∈V, j,k∈N_i : j&lt;k, {j,k}∉E} μ_ijk (1 − x_ij − x_ik) }.</p>
      <p>Let us define, for each edge {i, j} ∈ E, the sets
T_ij = N_i ∩ N_j, S_ij = N_i ∖ ({j} ∪ T_ij), S_ji = N_j ∖ ({i} ∪ T_ij), (2)
which allow us to write the cost coefficient c_ij associated to each variable x_ij in the objective function
of z(λ, μ) (see Appendix A) as:
c_ij = c_ij(λ, μ) = 1 + Σ_{k∈T_ij} (λ_kij − λ_ikj − λ_jik) − Σ_{k∈S_ij} μ_ikj − Σ_{k∈S_ji} μ_jik. (3)
Finally, isolating the constant term independent of x, problem z(λ, μ) is rewritten as:
z(λ, μ) = ( Σ_{i∈V} Σ_{j,k∈N_i : j&lt;k, {j,k}∈E} λ_ijk + Σ_{i∈V} Σ_{j,k∈N_i : j&lt;k, {j,k}∉E} μ_ijk ) + max_{x ∈ {0,1}^|E|} Σ_{{i,j}∈E} c_ij x_ij. (4)</p>
      <p>Notice that an optimal solution x(λ, μ) of z(λ, μ) is obtained by simple inspection of the sign of the
coefficients c_ij, as
x_ij = 1 if c_ij ≥ 0, and x_ij = 0 otherwise. (5)</p>
      <p>
        It is clear that, even relaxing the integrality constraint on x (i.e., replacing x ∈ {0, 1}^|E| with x ∈
[0, 1]^|E|), the solution of the LR problem obtained as in Eq. (5) is still integer. Hence, the Lagrangian
relaxation z(λ, μ) of problem (1) has the integrality property, which means that the Lagrangian
relaxation cannot provide a better bound on the original problem than the one given by the Linear
Programming (LP) relaxation of the problem
        <xref ref-type="bibr" rid="ref23">(Geoffrion, 1974)</xref>
        .
      </p>
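      <p>As an illustration of Eqs. (3) and (5), the cost coefficients and the sign-inspection optimum can be computed as below. This Python sketch is ours; the dictionary-keyed multiplier storage (missing keys read as zero) is an assumed data layout, not the paper's.</p>

```python
def solve_lagrangian_relaxation(nodes, edges, lam, mu):
    """Eq. (3)/(5): cost coefficients c_ij and the sign-inspection optimum.

    lam[(i, j, k)] / mu[(i, j, k)] store the multipliers of (1b)/(1c), with
    the first index the node and the pair (j, k) sorted; missing keys are 0.
    Returns (x, c) with x[(i, j)] in {0, 1} for each sorted edge tuple."""
    nbrs = {v: set() for v in nodes}
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    x, c = {}, {}
    for u, v in edges:
        i, j = min(u, v), max(u, v)
        T = nbrs[i] & nbrs[j]          # common neighbours of i and j
        S_ij = nbrs[i] - T - {j}       # neighbours of i only
        S_ji = nbrs[j] - T - {i}       # neighbours of j only
        cij = 1.0
        for k in T:                    # triangle terms of Eq. (3)
            cij += lam.get((k, i, j), 0.0)
            cij -= lam.get((i,) + tuple(sorted((j, k))), 0.0)
            cij -= lam.get((j,) + tuple(sorted((i, k))), 0.0)
        for k in S_ij:                 # non-edge terms, side of i
            cij -= mu.get((i,) + tuple(sorted((j, k))), 0.0)
        for k in S_ji:                 # non-edge terms, side of j
            cij -= mu.get((j,) + tuple(sorted((i, k))), 0.0)
        c[(i, j)] = cij
        x[(i, j)] = 1 if cij >= 0 else 0   # Eq. (5)
    return x, c
```

With all multipliers at zero, every coefficient equals 1 and every edge is kept, i.e., the relaxation keeps the whole graph.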
      <p>Furthermore, observe that, once such a solution x(λ, μ) of z(λ, μ) has been found, a subgradient
g(λ, μ) of z(λ, μ) is immediately available. For any triplet (i, j, k), it is given by
g_ijk(λ, μ) = 1 + x_jk − x_ij − x_ik, for i ∈ V, j, k ∈ N_i : j &lt; k, {j, k} ∈ E,
g_ijk(λ, μ) = 1 − x_ij − x_ik, for i ∈ V, j, k ∈ N_i : j &lt; k, {j, k} ∉ E. (6)</p>
      <p>Minimizing the Lagrangian function z(λ, μ) with respect to the Lagrangian multipliers yields the
Lagrangian dual problem. This dual problem provides an upper bound to the original (primal) CDP.
The Lagrangian dual z_LD associated to our Lagrangian relaxation is
z_LD = min_{λ, μ ≥ 0} z(λ, μ). (7)</p>
    </sec>
    <sec id="sec-4">
      <title>The Dual Descent method</title>
      <p>This section describes a dual descent procedure to tackle the LD Problem (7). The DD method is an
iterative approach to solving the Lagrangian dual problem while achieving the best possible bound on
the original problem. As already noticed in Section 3, this bound cannot improve the LP one. However,
the DD method can still be useful in exploring the solution space and giving insights into the structure
of the problem.</p>
      <p>All parameter values at iteration h are associated with the superscript h. In particular, λ^h and μ^h
denote the current multipliers, and x^h = x(λ^h, μ^h) the corresponding relaxation solution. The aim of the method, whose pseudo-code is reported in Algorithm 1, is
creating a sequence of multipliers (λ^h, μ^h) such that z(λ^{h+1}, μ^{h+1}) ≤ z(λ^h, μ^h). This is pursued
by acting on one multiplier at a time. We introduce some possible multiplier updating rules, based on
selecting one multiplier and considering the value of the associated constraint in correspondence with
the current solution of the relaxation x^h.</p>
      <p>The effect of the increase/decrease of any multiplier λ_ijk or μ_ijk on the cost coefficients of the
variables x_ij in problem z(λ, μ) is highlighted by recalling that the dualization of the constraints (1b)
and (1c) gives rise, respectively, to the terms</p>
      <p>λ_ijk (1 + x_jk − x_ij − x_ik) and μ_ijk (1 − x_ij − x_ik). (8)</p>
      <p>Given the starting λ^0, μ^0 (line 1), the cost coefficients c^0 can be immediately found by Eq. (3) (line 2),
and consequently an optimal solution of z(λ^0, μ^0) can be obtained by (5), together with its value z^0
(line 3). This solution is used as a starting point x^h within the while loop in lines 5–24 of the algorithm.
It may be feasible in problem (1) (lines 11–13), or not (lines 6–10). If it is infeasible, at least one of
the constraints of type (1b) or (1c) is violated. In line 7, for each triplet (i, j, k) the different options
are considered by calling Algorithm 3 described in Appendix B. It returns the updated multipliers
together with the corresponding decrease Δc in the cost coefficients. After this multiplier update
through Algorithm 3, given the induced graph corresponding to the infeasible solution x^h, in line 9,
we apply a repair heuristic procedure presented in Section 4.1, to reconstruct the feasibility, obtaining
solution x̃^h. This lets us determine, at each iteration, not only an upper bound z^h, but also a lower
bound to the optimal value of the CDP (line 10).</p>
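      <p>The feasibility test of line 6 amounts to a scan of all triplets against (1b) and (1c). A possible Python sketch follows; it is ours, and the data layout (neighbour sets, edge set, and a 0/1 dictionary keyed by sorted edge tuples) is an assumption.</p>

```python
def violated_triplets(nbrs, in_E, x):
    """Scan all triplets (i, j, k), with j < k neighbours of i, and collect
    those violating (1b) (when {j,k} is an edge) or (1c) (when it is not)
    under the candidate solution x."""
    bad_b, bad_c = [], []
    for i, neigh in nbrs.items():
        ordered = sorted(neigh)
        for a in range(len(ordered)):
            for b in range(a + 1, len(ordered)):
                j, k = ordered[a], ordered[b]
                xij = x[tuple(sorted((i, j)))]
                xik = x[tuple(sorted((i, k)))]
                if (j, k) in in_E:
                    if xij + xik > x[(j, k)] + 1:   # constraint (1b)
                        bad_b.append((i, j, k))
                elif xij + xik > 1:                 # constraint (1c)
                    bad_c.append((i, j, k))
    return bad_b, bad_c
```

On the triangle-plus-pendant graph with all edges kept, only the two (1c) triplets through node 3 are violated, so the all-ones solution is infeasible.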
      <p>Stopping upon finding a feasible solution is often sufficient; however, one can decide to continue
with the descent. In order to proceed, the Lagrangian multipliers must be updated. This is what is done
in line 13, where the algorithm calls multiple iterations of a tailored Subgradient Method (Algorithm 4)
described in Appendix C. It requires two multiplier pairs and a maximum number of iterations s_max in
input. We consider as multiplier pairs: the current pair (λ^h, μ^h) and the first of the previously generated
pairs (λ^g, μ^g) s.t. its distance (in terms of Euclidean norm) is greater than a given threshold τ.</p>
      <p>Clearly, if, at a given iteration h, the Δc returned by Algorithm 3 is too small (smaller than a threshold
ε &gt; 0), the algorithm cannot generate a different solution in the following iterations, remaining blocked
at the solution x^h. For this reason, only if Δc is greater than ε, we set the new objective function value
to z^h − Δc in line 15. Otherwise, we perform some iterations of the Subgradient Algorithm 4 (line 21)
in order to generate different multipliers and a corresponding different z value.</p>
      <p>Algorithm 1: Dual Descent algorithm for the Lagrangian Relaxation of the CDP</p>
      <p>Input: Maximum number of DD iterations h_max, maximum number of subgradient iterations
s_max, tolerance parameters ε, τ &gt; 0, heuristic parameter 0 &lt; α ≤ 1.</p>
      <p>Output: Lower bound LB on the CDP optimal solution.
1 LB ← 0 and (λ^0, μ^0) ← (0, 0);
2 Compute cost coefficients c^0_ij for each edge {i, j} ∈ E according to Eq. (3);
3 Compute x^0_ij for each edge {i, j} ∈ E according to Eq. (5), and z^0 according to Eq. (4);
4 h ← 0;
5 while h &lt; h_max do
6   if x^h is infeasible then
7     {(λ^{h+1}, μ^{h+1}), Δc} ← Algorithm 3 (h, λ^h, μ^h, c^h);
8     Let G̃ be a graph with node set V and edge set Ẽ = {{i, j} ∈ E : x^h_ij = 1};
9     x̃^h ← Algorithm 2 (G̃, h, α);
10    if Σ_{{i,j}∈E} x̃^h_ij &gt; LB then LB ← Σ_{{i,j}∈E} x̃^h_ij;
11  else
12    if Σ_{{i,j}∈E} x^h_ij &gt; LB then LB ← Σ_{{i,j}∈E} x^h_ij;
13    {(λ^{h+1}, μ^{h+1}), z^{h+1}} ← Algorithm 4 (λ^h, μ^h, s_max);
14  if Δc &gt; ε then
15    z^{h+1} ← z^h − Δc;
16  else
17    g ← h and dist ← 0;
18    while g &gt; 0 and dist &lt; τ do
19      g ← g − 1;
20      dist ← ‖(λ^h, μ^h) − (λ^g, μ^g)‖;
21    {(λ^{h+1}, μ^{h+1}), z^{h+1}} ← Algorithm 4 (λ^g, μ^g, s_max);
22  Compute c^{h+1} and x^{h+1} according to Eqs. (3) and (5);
23  h ← h + 1;
24 return LB;</p>
      <p>
In this section, we describe a polynomial-time heuristic, which is called within the DD Algorithm
whenever an infeasible solution x^h is found. It operates on a graph G̃ = (V, Ẽ) obtained by considering
only the edges associated with x^h_ij = 1. The algorithm receives a graph, an iteration
counter h and a threshold parameter α between 0 and 1. Its goal is to transform the infeasible x^h into a
feasible solution x̃^h by iteratively processing the connected components of the graph.</p>
      <p>While the graph is not empty, in line 2, the algorithm computes the connected components of graph
G̃. For each connected component C, the nodes are ordered in descending order of their degree, i.e.,
by the number of neighbors within C (line 4). In the case of multiple nodes with the same number of</p>
      <p>Algorithm 2: Repair heuristic algorithm.</p>
      <p>Input: Graph G̃ = (V, Ẽ), iteration counter h, parameter 0 &lt; α &lt; 1.</p>
      <p>Output: Feasible CDP solution x̃^h.
1 while graph G̃ is not empty do
2   Compute the connected components of G̃;
3   foreach connected component C do
4     Order the nodes i in C by (descending) value of |N_i|;
5     q ← ⌈α |C|⌉;
6     K* ← ∅;
7     for t ∈ 1 . . . q do
8       i ← C[t];
9       K ← {i};
10      if |N_i| &gt; 0 then
11        foreach j ∈ N_i do compute sets T_ij = N_i ∩ N_j and S_ji = N_j ∖ (T_ij ∪ {i});
12        Order the set N_i in non-increasing order of |T_ij|; in the case of ties, order in
          non-decreasing order of |S_ji|;
13        while N_i ≠ ∅ do
14          j ← N_i[0];
15          N_i ← N_i ∖ {j};
16          if K ⊆ N_j then K ← K ∪ {j} else continue;
17      if |K| &gt; |K*| then K* ← K;
18    foreach i, j ∈ C : i ∈ K* and j &gt; i do
19      if {i, j} ∈ Ẽ and j ∈ K* then set x̃^h_ij ← 1 else set x̃^h_ij ← 0;
20    G̃ ← G̃ ∖ K*;
21 return x̃^h;</p>
      <p>
neighbors, they are ordered with respect to their index i. In lines 5-6, parameter q and set K* are
initialized. They are needed because we consider one by one the first q nodes in the ordered set C
and construct q cliques starting from each of these nodes independently. K* is the largest among the
obtained cliques, initialized to the empty set, while q is set to a predefined fraction α of |C| (rounded
up to the next integer).</p>
      <p>After the initialization a for loop starts. At each iteration, the t-th node (with t from 1 to q) is
selected in the ordered set of nodes in C as the starting point (line 8), and a clique K is initialized
with just this node i = C[t] (line 9). If the node i is not a singleton (|N_i| &gt; 0), in line 11, for each
neighbor j of i, the sets T_ij and S_ji are computed as defined in (2). The set N_i is then ordered in
non-increasing order of |T_ij|, breaking ties by |S_ji| in non-decreasing order (line 12). This order will
be used to determine how nodes from N_i should be considered for insertion in the clique K. First of all,
the more neighbors they have in common with i (i.e., the greater |T_ij|), the greater the potential final
dimension of the clique. In the case of ties, we prefer the nodes that are more isolated (i.e., with
smaller |S_ji|) because it is less probable that they will be considered for other cliques in the future. The
algorithm then enters a while loop (lines 13–16): provided there are still nodes left in N_i, the algorithm
iteratively adds nodes j from the ordered set N_i to the clique K, removing them from N_i. A node j is
added to K if all members of K are also neighbors of j. If this condition is satisfied (j can belong to
the clique K), K is updated. In line 17, if clique K is larger than the best clique K* found for C so far,
K* is set to K. As soon as all the q first nodes are considered (the for loop in lines 7–17 ends) the
best clique K* found is considered and the solution x̃^h is updated. For each pair of nodes i, j ∈ K*, the
edge {i, j} is marked as part of the solution if it exists in the graph. Specifically, if {i, j} ∈ Ẽ and
both i and j are in K*, set x̃^h_ij = 1, otherwise set x̃^h_ij = 0. After processing a component, the clique
K* is removed from the graph. This process repeats until the graph becomes empty, and the algorithm
returns the feasible solution x̃^h.</p>
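      <p>A simplified Python reading of this repair procedure is sketched below. It is our illustration, not the authors' implementation: the iteration counter h is omitted and the greedy clique growth is reproduced only in its essentials.</p>

```python
import math
from collections import deque

def repair(nodes, edges, alpha=0.5):
    """Sketch of the repair idea: repeatedly extract a large clique from
    each connected component and keep only its internal edges, so the
    surviving edge set induces a cluster graph (a feasible CDP solution)."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    kept, live = set(), set(nodes)
    while live:
        # Connected components of the remaining (live) graph, via BFS.
        seen, comps = set(), []
        for s in live:
            if s in seen:
                continue
            comp, dq = set(), deque([s])
            seen.add(s)
            while dq:
                v = dq.popleft()
                comp.add(v)
                for w in adj[v]:
                    if w in live and w not in seen:
                        seen.add(w)
                        dq.append(w)
            comps.append(comp)
        for comp in comps:
            # Start a greedy clique from the first ceil(alpha*|C|) nodes,
            # taken in descending degree order (ties by index).
            order = sorted(comp, key=lambda v: (-len(adj[v] & comp), v))
            q = math.ceil(alpha * len(comp))
            best = set()
            for s in order[:q]:
                K = {s}
                # Candidates ranked by common neighbours with s, then by
                # (ascending) degree, mimicking the paper's tie-breaking.
                cand = sorted(adj[s] & comp,
                              key=lambda u: (-len(adj[s] & adj[u] & comp),
                                             len(adj[u] & comp)))
                for u in cand:
                    if K <= adj[u]:        # u adjacent to every member of K
                        K.add(u)
                if len(K) > len(best):
                    best = K
            for u in best:                 # keep the clique's internal edges
                for v in best:
                    if u < v and v in adj[u]:
                        kept.add((u, v))
            live -= best                   # remove the clique from the graph
    return kept
```

On the triangle-plus-pendant example, the heuristic keeps the triangle and deletes the bridge edge {3, 4}, which is indeed a cluster graph.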
    </sec>
    <sec id="sec-5">
      <title>Numerical results</title>
      <p>
        To assess the performance of our proposed approach for the CDP, we conducted experiments using
different datasets of instances: one derived from
        <xref ref-type="bibr" rid="ref19">(Ambrosio et al., 2025)</xref>
        , another from
        <xref ref-type="bibr" rid="ref24">(Cerulli et al.,
2023)</xref>
        , and a third from the DIMACS graph coloring challenge1. The first dataset, from
        <xref ref-type="bibr" rid="ref19">(Ambrosio et al.,
2025)</xref>
        , includes 120 networks generated according to the Barabási-Albert model, with varying sizes and
densities, making them well-suited for representing real-world network scenarios. The second dataset,
from
        <xref ref-type="bibr" rid="ref24">(Cerulli et al., 2023)</xref>
        , consists of 6 different networks selected from the social-networks literature
(out of the 14 considered in the paper), each with fewer than 1000 nodes. Finally, the third dataset
includes 40 instances, each containing fewer than 1000 nodes and 10000 edges. On these benchmark
sets, we compare our approach against the edge contraction heuristic (referred to as “ECH”) described
in
        <xref ref-type="bibr" rid="ref19">(Ambrosio et al., 2025)</xref>
        .
      </p>
      <p>All the experiments were performed on a 12th Gen Intel(R) Core(TM) i7-12700 2.10 GHz CPU with
16.0 GB RAM. For each run, we set a time limit of one hour, with a maximum of 3000 iterations (h_max)
for Algorithm 1 and 50 iterations (s_max) for the inner Subgradient Algorithm 4. Additionally, we
used ε = 10^−4, α = 0.5 and τ = 0.04.</p>
      <p>
        In Table 1, we report the average results obtained on the first dataset
        <xref ref-type="bibr" rid="ref19">(Ambrosio et al., 2025)</xref>
        by
grouping instances based on the number of nodes and edges, with each group representing the mean
performance over instances sharing the same structural characteristics. For each group, we report the
following information: number of nodes |V| and edges |E| of all the instances of the group; average
value of the continuous relaxation (UB); average value of the solution identified by the ECH (LB_ECH);
average value of the solution identified by running Algorithm 1 (LB_DD); average runtime of Algorithm 1
(time_DD); and number of explored solutions with different objective value (#sol_DD). The left part of the table
refers to groups of instances with |V| ∈ [100, 400], while the right part to groups of instances with
|V| ∈ [600, 1000]. In the last line of the table, we report the total averages of all columns, separately for
these two groups.
      </p>
      <p>
        In Table 2, we present the results obtained for the second dataset
        <xref ref-type="bibr" rid="ref24">(Cerulli et al., 2023)</xref>
        . Since it contains only
six instances, the results are not averaged. Instead, each row reports the result of a specific instance,
whose name is shown in the first column. The remaining column headings are the same as in Table 1.
1https://mat.tepper.cmu.edu/COLOR/instances.html
      </p>
      <p>In Table 3 we show the average results obtained for the third dataset coming from the DIMACS graph
coloring challenge. The instances are grouped based on the number of nodes (up to 50, up to 100, up
to 150, up to 300, over 300), with each group representing the mean performance over the considered
instances. The column headings are the same as in Table 1.</p>
      <p>The results in the table emphasize the strong performance of the proposed algorithm in producing
high-quality lower bounds, which are, in most of the cases, superior to the existing heuristic solution
values. As the problem size increases with larger numbers of nodes, the computational time grows
significantly, with runtimes reaching the time limit for the instances with |V| ≥ 600 (right part of Table 1).
Despite the increased computational effort, for those instances the bound found by the DD method is
consistently higher than the one found by the ECH. An important positive indicator of the DD algorithm
performance is the number of solutions explored. Higher values of #sol_DD suggest that the algorithm
is effectively exploring a broad solution space, which is beneficial for identifying strong lower bounds.
This is particularly notable for mid-sized instances, where the algorithm shows high exploration rates.
When considering larger instances, the number of explored solutions decreases, as the algorithm does
not stop for having performed h_max iterations, but because the time limit of one hour is reached.</p>
      <p>The comparison with the continuous relaxation, which provides the best possible benchmark in
terms of upper bounds, further demonstrates the quality of the lower bounds. Even if the method does not reach
the upper bound found by the LP relaxation, the gap is in most of the cases reduced, underlining the
strength of the proposed method. The overall trends across the tables indicate consistent and reliable
performance, with the algorithm adapting well to varying problem structures and scales. In total, the
DD algorithm is at least as good as the ECH for 115 out of 166 instances. Out of these 115 instances, it is
strictly better for 95 of them.</p>
    </sec>
    <sec id="sec-6">
      <title>Conclusion</title>
      <p>Links between dense clusters in networks are highly critical for disinformation spread. In this work, we
study a combinatorial optimization problem, named Cluster Deletion, aimed at partitioning a network
into clusters of fully connected nodes, by identifying the minimum number of bridge connections.
Monitoring these connections allows for disrupting the disinformation flow while preserving
intragroup communication. We introduce a Lagrangian heuristic to solve the problem, consisting of a dual
descent algorithm equipped with a subgradient method and an ad-hoc repair heuristic. Computational
experiments are conducted on benchmark networks from the literature, providing, in most cases, new
best-known solutions.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>The research of M. Cerulli was partially supported by the project “SEcurity and RIghts in the CyberSpace”
SERICS (PE00000014) under the MUR National Recovery and Resilience Plan funded by the European
Union - NextGenerationEU. This support is gratefully acknowledged.</p>
    </sec>
    <sec id="sec-8">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
      <p>A. Linking primal constraints with dual variables</p>
      <p>[Figure 1: an example graph on the nodes 1, . . . , 5, where node 3 ∈ T_12, node 4 ∈ S_12, and node 5 ∈ S_21.]</p>
      <p>In Fig. 1, a graph with 5 nodes is shown, in order to better explain formula (3). Let i be node 1,
j be node 2, and k ∈ {3, 4, 5} (i.e., i &lt; j &lt; k). Observe that, in problem (1), variable x_12 appears
in all constraints associated to the nodes i, j and k ∈ T_12 ∪ S_12 ∪ S_21. Consider first the case k ∈ T_12, that
is {1, k}, {2, k}, {1, 2} ∈ E (node 3 in Fig. 1). We have the following three constraints of type (1b)
associated to the nodes (i, j, k) and involving the variable x_12:
x_12 + x_13 − x_23 ≤ 1, with multiplier λ_123,
x_12 + x_23 − x_13 ≤ 1, with multiplier λ_213,
x_13 + x_23 − x_12 ≤ 1, with multiplier λ_312.</p>
      <p>Consequently, the contribution of the above constraints to the cost coefficient c_12(λ, μ) is:
λ_312 − λ_123 − λ_213.</p>
      <p>We remark that the indices of the variables are always written in such a way that the first index is
smaller than the second one. Therefore, x_12 will always appear before x_13, which, in turn, will always appear
before x_23. This is why, for example, in the third constraint above, instead of having x_31 + x_32 − x_12 ≤ 1,
we have x_13 + x_23 − x_12 ≤ 1. As for the multipliers, the first index is fixed to the node on which the
corresponding constraint is defined, while the remaining two are ordered in such a way that the second
index is smaller than the third one.</p>
      <p>Consider now the case k ∈ S_12 (node 4 in Fig. 1). The constraint to consider is
x_12 + x_14 ≤ 1, with multiplier μ_124.
Instead, for k ∈ S_21 (node 5 in Fig. 1) the constraint to be taken into account is
x_12 + x_25 ≤ 1, with multiplier μ_215.</p>
      <p>The corresponding contributions to c_12(λ, μ) are −μ_124 and −μ_215, respectively.</p>
    </sec>
    <sec id="sec-9">
      <title>Multipliers update in case of infeasibility</title>
      <p>The pseudo-code of the algorithm which is used to update the Lagrangian multipliers in case an
infeasible solution x^h is found is reported in Algorithm 3. It analyzes the different constraint violations and
updates the multipliers accordingly.</p>
      <p>Algorithm 3: Multipliers update when an infeasible solution is found.</p>
      <p>Input: Iteration counter h, multipliers λ^h, μ^h, and cost coefficients c^h.</p>
      <p>Output: Updated multipliers (λ^{h+1}, μ^{h+1}) and Δc.
1 foreach triplet (i, j, k) do
2   if constraint (1b) is violated by (i, j, k) then
3     Δ_ijk ← min{c^h_ij, c^h_ik, −c^h_jk};
4   else if constraint (1c) is violated by (i, j, k) then
5     Δ_ijk ← min{c^h_ij, c^h_ik};
6 λ^{h+1} ← λ^h;
7 μ^{h+1} ← μ^h;
8 Let (ī, j̄, k̄) be a triplet with maximum Δ_īj̄k̄;
9 if (ī, j̄, k̄) violates constraint (1b) then
10   λ^{h+1}_īj̄k̄ ← λ^h_īj̄k̄ + Δ_īj̄k̄;
11 else if (ī, j̄, k̄) violates constraint (1c) then
12   μ^{h+1}_īj̄k̄ ← μ^h_īj̄k̄ + Δ_īj̄k̄;
13 Δc ← Δ_īj̄k̄;
14 return (λ^{h+1}, μ^{h+1}) and Δc</p>
      <p>If, for a certain triplet (i, j, k), constraint (1b) is violated, i.e., if x^h_ij + x^h_ik &gt; x^h_jk + 1, it means that
x^h_ij = x^h_ik = 1 and x^h_jk = 0. From (5), it holds that c^h_jk &lt; 0, c^h_ij ≥ 0 and c^h_ik ≥ 0. We observe that,
taking into account Eq. (8) and the remark in Appendix A, any increase of λ_ijk results in an increase of
c^h_jk and a decrease of both c^h_ij and c^h_ik.</p>
      <p>If instead, for a certain triplet (i, j, k), constraint (1c) is violated, that is x^h_ij + x^h_ik &gt; 1, it means that
x^h_ij = x^h_ik = 1, which implies, from (5), that c^h_ij, c^h_ik ≥ 0. We observe that (8) indicates that any increase
of μ_ijk produces a decrease of both c^h_ij and c^h_ik.</p>
      <p>Algorithm 3 computes Δ_ijk according to (9) for each triplet (i, j, k) violating constraint (1b), and according to (10)
for each triplet (i, j, k) violating constraint (1c). Then, we look for the triplet (ī, j̄, k̄) corresponding
to the greatest Δ value in line 8. If it violates constraint (1b), in line 10, we update the corresponding
multiplier λ_īj̄k̄ as
λ^{h+1}_īj̄k̄ = λ^h_īj̄k̄ + Δ_īj̄k̄,
while the remaining multipliers λ_ijk for (i, j, k) ≠ (ī, j̄, k̄), as well as μ_ijk for all (i, j, k), do not change
(lines 6–7). If, instead, (ī, j̄, k̄) violates constraint (1c), we update the corresponding multiplier μ_īj̄k̄ in
line 12 as:
μ^{h+1}_īj̄k̄ = μ^h_īj̄k̄ + Δ_īj̄k̄,
while not changing the remaining multipliers (lines 6–7).</p>
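      <p>A compact Python sketch of this update follows; it is ours, with illustrative names, and mimics the single-multiplier-per-call behaviour described above.</p>

```python
def update_multipliers(lam, mu, c, viol_b, viol_c):
    """Sketch of the multiplier update: compute a Delta for every violated
    triplet and raise only the multiplier of the triplet with the largest
    Delta.  c maps sorted edge tuples to cost coefficients; viol_b/viol_c
    list the triplets (i, j, k) violating (1b)/(1c)."""
    def e(u, v):
        return tuple(sorted((u, v)))
    deltas = {}
    for (i, j, k) in viol_b:        # Eq. (9): min{c_ij, c_ik, -c_jk}
        deltas[(i, j, k, "b")] = min(c[e(i, j)], c[e(i, k)], -c[e(j, k)])
    for (i, j, k) in viol_c:        # Eq. (10): min{c_ij, c_ik}
        deltas[(i, j, k, "c")] = min(c[e(i, j)], c[e(i, k)])
    if not deltas:
        return lam, mu, 0.0
    (i, j, k, kind), delta = max(deltas.items(), key=lambda kv: kv[1])
    lam, mu = dict(lam), dict(mu)   # copy, then bump the chosen multiplier
    if kind == "b":
        lam[(i, j, k)] = lam.get((i, j, k), 0.0) + delta
    else:
        mu[(i, j, k)] = mu.get((i, j, k), 0.0) + delta
    return lam, mu, delta
```

With the all-ones solution on the triangle-plus-pendant graph (all coefficients equal to 1), the two (1c) violations give Δ = 1, and a single μ multiplier is raised.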
    </sec>
    <sec id="sec-10">
      <title>The subgradient algorithm</title>
      <p>
        More specifically, in line 3 of Algorithm 3 we define, for each triplet (i, j, k) violating constraint (1b),
Δ_ijk = min {c^h_ij, c^h_ik, −c^h_jk} ≥ 0, (9)
and, in line 5, for each triplet (i, j, k) violating constraint (1c),
Δ_ijk = min {c^h_ij, c^h_ik} ≥ 0. (10)
Within the DD algorithm, we call the subgradient method described in this section, which produces a
different multiplier pair which can be used to update the z value. Actually, the subgradient method
itself can be used as an alternative to the DD algorithm to solve the dual problem (7). Indeed, this method,
introduced by
        <xref ref-type="bibr" rid="ref25">Shor (2012)</xref>
        , is aimed at solving the unconstrained optimization problem min_{y ∈ R^n} f(y),
with f : R^n → R convex and not necessarily smooth. The basic iteration scheme is
y_{s+1} = y_s − t_s g_s, (11)
where the subgradient g_s is an element of ∂f(y_s), the subdifferential of f at point y_s, and the stepsize t_s
ensuring the convergence of the sequence {y_s} can be set in different ways.
      </p>
      <p>
        <xref ref-type="bibr" rid="ref26">Following Carrabs et al. (2024</xref>
        ), we adopt a stepsize of the form α_k = β_k / ‖g^{k}‖, commonly used in the literature, where the subgradient g^{k} is an element of ∂f(x^{k}), the subdifferential of f at the point x^{k}, and we set
β_k = ‖d^{k}‖² ‖g^{k}‖ / ( 2 ( f(x^{k−1}) − f(x^{k}) + (g^{k})^⊤ d^{k} ) ),
where d^{k} = x^{k} − x^{k−1}. Such a stepsize choice is an extension of the well-known Barzilai and Borwein approach to convex nonsmooth optimization.
      </p>
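Under the notation above, the stepsize computation can be sketched as follows; the function name and the numpy-based interface are assumptions, and the guard against a vanishing denominator is an added safeguard not taken from the paper.

```python
import numpy as np

def two_point_stepsize(x_prev, x_curr, f_prev, f_curr, g_curr, eps=1e-12):
    """Two-point (Barzilai-Borwein-like) stepsize alpha_k = beta_k / ||g_k||,
    with beta_k = ||d_k||^2 ||g_k|| / (2 (f(x_{k-1}) - f(x_k) + g_k^T d_k))
    and d_k = x_k - x_{k-1}; by convexity the denominator is nonnegative."""
    d = x_curr - x_prev
    g_norm = np.linalg.norm(g_curr)
    denom = 2.0 * (f_prev - f_curr + g_curr @ d)
    if denom < eps or g_norm < eps:
        return 0.0  # degenerate case (assumed safeguard): take no step
    # alpha_k = beta_k / ||g_k|| simplifies to ||d_k||^2 / denom
    return (d @ d) / denom
```

Note that dividing β_k by ‖g^{k}‖ cancels the subgradient norm, so the returned value is simply ‖d^{k}‖² over the denominator.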
      <p>When applying the subgradient method to our problem (7), x represents the pair of multipliers (λ, μ) and f(x) is the function φ(λ, μ). The corresponding pseudocode is shown in Algorithm 4. It starts from two points (λ^{0}, μ^{0}) and (λ^{1}, μ^{1}), which are needed to determine, at the first iteration k = 1, the value of the stepsize α_1. At the beginning, UB is assigned the tighter value between the upper bounds associated with the two starting points (line 1). Then, new points (λ^{k}, μ^{k}) are found at each iteration k &gt; 1, by using the stepsize α_k (lines 4–5) and the subgradient g^{k} (line 9). The subgradient method stops as soon as a point (λ^{k}, μ^{k}) associated with a tighter upper bound than the starting one is found (line 11). If no such solution is produced within the allowed number of iterations, the procedure returns the last point, together with the associated upper bound (line 13). Thanks to this algorithm, we can properly modify the Lagrangian multipliers within the DD algorithm, also when a feasible solution is found.</p>
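The overall iteration just described (two starting points, projected multiplier update, early stop on a tighter upper bound) can be sketched as follows; the `oracle` interface, returning the dual value and a subgradient at a point, is an assumed abstraction, not the paper's implementation.

```python
import numpy as np

def subgradient_search(x0, x1, oracle, stepsize_rule, max_iter=50):
    """Sketch of the subgradient search: oracle(x) returns (phi(x), g(x));
    iterates x^{k+1} = max(0, x^k - alpha_k g^k) and stops as soon as an
    iterate beats the tighter of the two starting upper bounds."""
    f0, _ = oracle(x0)
    f1, g = oracle(x1)
    ub = min(f0, f1)              # tighter of the two starting upper bounds
    x_prev, f_prev, x, f = x0, f0, x1, f1
    for k in range(1, max_iter):
        alpha = stepsize_rule(x_prev, x, f_prev, f, g)
        x_next = np.maximum(0.0, x - alpha * g)   # projection onto x >= 0
        f_next, g_next = oracle(x_next)
        if f_next < ub:           # tighter upper bound found: stop early
            return x_next, f_next
        x_prev, f_prev = x, f
        x, f, g = x_next, f_next, g_next
    return x, f                   # no improvement: return the last point
```

For example, with oracle(x) = (‖x − 1‖², 2(x − 1)) and a constant stepsize rule, the search returns the first iterate whose value improves on the better starting bound.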
      <p>[Algorithm 4 (subgradient method), recoverable steps: at each iteration k, compute the stepsize α_k ← ‖d^{k}‖² ‖g(λ^{k}, μ^{k})‖ / ( 2 ( φ(λ^{k−1}, μ^{k−1}) − φ(λ^{k}, μ^{k}) + g(λ^{k}, μ^{k})^⊤ d^{k} ) ) and safeguard it as α_k ← max{ min{ α_k, 1/log_10(1+k) }, 0 }; then update the multipliers by projection,
λ^{k+1}_{ijh} ← max{ 0, λ^{k}_{ijh} − (α_k/‖g(λ^{k}, μ^{k})‖) g^{λ}_{ijh}(λ^{k}, μ^{k}) }, for i ∈ V, j, h ∈ V : j &lt; h, {j, h} ∈ E,
μ^{k+1}_{ijh} ← max{ 0, μ^{k}_{ijh} − (α_k/‖g(λ^{k}, μ^{k})‖) g^{μ}_{ijh}(λ^{k}, μ^{k}) }, for i ∈ V, j, h ∈ V : j &lt; h, {j, h} ∉ E;
compute φ(λ^{k+1}, μ^{k+1}) according to Eq. (6) and, if φ(λ^{k+1}, μ^{k+1}) &lt; UB, return (λ^{k+1}, μ^{k+1}) with its bound; otherwise, after the last iteration, return the last point (λ^{k}, μ^{k}) (line 13).]</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <given-names>R.</given-names>
            <surname>Shamir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Sharan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Tsur</surname>
          </string-name>
          ,
          <article-title>Cluster graph modification problems</article-title>
          ,
          <source>Discrete Applied Mathematics</source>
          <volume>144</volume>
          (
          <year>2004</year>
          )
          <fpage>173</fpage>
          -
          <lpage>182</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <given-names>R.</given-names>
            <surname>Sharan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Shamir</surname>
          </string-name>
          ,
          <article-title>CLICK: A clustering algorithm with applications to gene expression analysis</article-title>
          ,
          <source>in: Proceedings of the Eighth International Conference on Intelligent Systems for Molecular Biology (ISMB)</source>
          ,
          <year>2000</year>
          , pp.
          <fpage>307</fpage>
          -
          <lpage>316</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <given-names>M.</given-names>
            <surname>Charikar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Guruswami</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Wirth</surname>
          </string-name>
          ,
          <article-title>Clustering with qualitative information</article-title>
          ,
          <source>Journal of Computer and System Sciences</source>
          <volume>71</volume>
          (
          <year>2005</year>
          )
          <fpage>360</fpage>
          -
          <lpage>383</lpage>
          . doi:10.1016/j.jcss.2004.10.012.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <given-names>A.</given-names>
            <surname>Natanzon</surname>
          </string-name>
          ,
          <article-title>Complexity and approximation of some graph modification problems</article-title>
          , University of Tel-Aviv,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name>
            <given-names>M.</given-names>
            <surname>Yannakakis</surname>
          </string-name>
          ,
          <article-title>Edge-deletion problems</article-title>
          ,
          <source>SIAM Journal on Computing</source>
          <volume>10</volume>
          (
          <year>1981</year>
          )
          <fpage>297</fpage>
          -
          <lpage>309</lpage>
          . doi:
          <volume>10</volume>
          .1137/ 0210021.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <string-name>
            <given-names>C.</given-names>
            <surname>Komusiewicz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Uhlmann</surname>
          </string-name>
          ,
          <article-title>Cluster editing with locally bounded modifications</article-title>
          ,
          <source>Discrete Applied Mathematics</source>
          <volume>160</volume>
          (
          <year>2012</year>
          )
          <fpage>2259</fpage>
          -
          <lpage>2270</lpage>
          . doi:
          <volume>10</volume>
          .1016/j.dam.
          <year>2012</year>
          .
          <volume>05</volume>
          .019.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <given-names>C.</given-names>
            <surname>Crespelle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. G.</given-names>
            <surname>Drange</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. V.</given-names>
            <surname>Fomin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Golovach</surname>
          </string-name>
          ,
          <article-title>A survey of parameterized algorithms and the complexity of edge modification</article-title>
          ,
          <source>Computer Science Review</source>
          <volume>48</volume>
          (
          <year>2023</year>
          )
          <article-title>100556</article-title>
          . doi:
          <volume>10</volume>
          .1016/j. cosrev.
          <year>2023</year>
          .
          <volume>100556</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name>
            <given-names>J.</given-names>
            <surname>Gramm</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Hüffner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Niedermeier</surname>
          </string-name>
          ,
          <article-title>Graph-modeled data clustering: Exact algorithms for clique generation</article-title>
          ,
          <source>Theory of Computing Systems</source>
          <volume>38</volume>
          (
          <year>2005</year>
          )
          <fpage>373</fpage>
          -
          <lpage>392</lpage>
          . doi:
          <volume>10</volume>
          .1007/s00224-004-1178-y.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name>
            <given-names>J.</given-names>
            <surname>Gramm</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Hüffner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Niedermeier</surname>
          </string-name>
          ,
          <article-title>Automated generation of search tree algorithms for hard graph modification problems</article-title>
          ,
          <source>Algorithmica</source>
          <volume>39</volume>
          (
          <year>2004</year>
          )
          <fpage>321</fpage>
          -
          <lpage>347</lpage>
          . doi:
          <volume>10</volume>
          .1007/s00453-004-1090-5.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name>
            <given-names>P.</given-names>
            <surname>Damaschke</surname>
          </string-name>
          ,
          <article-title>Bounded-degree techniques accelerate some parameterized graph algorithms</article-title>
          , in:
          <string-name>
            <given-names>J.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. V.</given-names>
            <surname>Fomin</surname>
          </string-name>
          (Eds.),
          <source>Parameterized and Exact Computation</source>
          , Springer Berlin Heidelberg, Berlin, Heidelberg,
          <year>2009</year>
          , pp.
          <fpage>98</fpage>
          -
          <lpage>109</lpage>
          . doi:
          <volume>10</volume>
          .1007/978-3-
          <fpage>642</fpage>
          -11269-0\_8.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <string-name>
            <given-names>S.</given-names>
            <surname>Böcker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Damaschke</surname>
          </string-name>
          ,
          <article-title>Even faster parameterized cluster deletion and cluster editing</article-title>
          ,
          <source>Information Processing Letters</source>
          <volume>111</volume>
          (
          <year>2011</year>
          )
          <fpage>717</fpage>
          -
          <lpage>721</lpage>
          . doi:
          <volume>10</volume>
          .1016/j.ipl.
          <year>2011</year>
          .
          <volume>05</volume>
          .003.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <string-name>
            <given-names>Y.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Hare</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Nastos</surname>
          </string-name>
          ,
          <article-title>The cluster deletion problem for cographs</article-title>
          ,
          <source>Discrete Mathematics</source>
          <volume>313</volume>
          (
          <year>2013</year>
          ). doi:
          <volume>10</volume>
          .1016/j.disc.
          <year>2013</year>
          .
          <volume>08</volume>
          .017.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <string-name>
            <given-names>N.</given-names>
            <surname>Ailon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Charikar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Newman</surname>
          </string-name>
          ,
          <article-title>Aggregating inconsistent information: Ranking and clustering</article-title>
          ,
          <source>Journal of the ACM</source>
          <volume>55</volume>
          (
          <year>2008</year>
          ). doi:
          <volume>10</volume>
          .1145/1411509.1411513.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          <string-name>
            <given-names>A.</given-names>
            <surname>van Zuylen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. P.</given-names>
            <surname>Williamson</surname>
          </string-name>
          ,
          <article-title>Deterministic pivoting algorithms for constrained ranking and clustering problems</article-title>
          ,
          <source>Mathematics of Operations Research</source>
          <volume>34</volume>
          (
          <year>2009</year>
          )
          <fpage>594</fpage>
          -
          <lpage>620</lpage>
          . doi:
          <volume>10</volume>
          .1287/moor. 1090.0385.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          <string-name>
            <given-names>N.</given-names>
            <surname>Veldt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. F.</given-names>
            <surname>Gleich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Wirth</surname>
          </string-name>
          ,
          <article-title>A correlation clustering framework for community detection</article-title>
          ,
          <source>in: Proceedings of the 2018 World Wide Web Conference, International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>439</fpage>
          -
          <lpage>448</lpage>
          . doi:
          <volume>10</volume>
          .1145/ 3178876.3186110.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          <string-name>
            <given-names>N.</given-names>
            <surname>Grüttemeier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Komusiewicz</surname>
          </string-name>
          ,
          <article-title>On the relation of strong triadic closure and cluster deletion</article-title>
          ,
          <source>Algorithmica</source>
          <volume>82</volume>
          (
          <year>2020</year>
          )
          <fpage>853</fpage>
          -
          <lpage>880</lpage>
          . doi:
          <volume>10</volume>
          .1007/s00453-019-00617-1.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          <string-name>
            <given-names>M.</given-names>
            <surname>Grötschel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wakabayashi</surname>
          </string-name>
          ,
          <article-title>A cutting plane algorithm for a clustering problem</article-title>
          ,
          <source>Mathematical Programming</source>
          <volume>45</volume>
          (
          <year>1989</year>
          )
          <fpage>59</fpage>
          -
          <lpage>96</lpage>
          . doi:
          <volume>10</volume>
          .1007/BF01589097.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          <string-name>
            <given-names>P.</given-names>
            <surname>Martins</surname>
          </string-name>
          ,
          <article-title>Modeling the maximum edge-weight k-plex partitioning problem</article-title>
          ,
          <year>2016</year>
          . doi:
          <volume>10</volume>
          .48550/ arXiv.1612.06243. arXiv:
          <volume>1612</volume>
          .
          <fpage>06243</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          <string-name>
            <given-names>G.</given-names>
            <surname>Ambrosio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Cerulli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Serra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Sorgente</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Vaccaro</surname>
          </string-name>
          ,
          <article-title>Exact and heuristic solution approaches for the cluster deletion problem on general graphs</article-title>
          ,
          <source>Networks</source>
          (
          <year>2025</year>
          ). doi:
          <volume>10</volume>
          .1002/net.22267.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          <string-name>
            <given-names>M.</given-names>
            <surname>Gaudioso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Monaco</surname>
          </string-name>
          ,
          <article-title>Variants to the cutting plane approach for convex nondifferentiable optimization</article-title>
          ,
          <source>Optimization</source>
          <volume>25</volume>
          (
          <year>1992</year>
          )
          <fpage>65</fpage>
          -
          <lpage>75</lpage>
          . doi:
          <volume>10</volume>
          .1080/02331939208843808.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          <string-name>
            <given-names>M.</given-names>
            <surname>Gaudioso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Giallombardo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Miglionico</surname>
          </string-name>
          ,
          <article-title>Essentials of numerical nonsmooth optimization</article-title>
          ,
          <source>4OR</source>
          <volume>18</volume>
          (
          <year>2020</year>
          )
          <fpage>1</fpage>
          -
          <lpage>47</lpage>
          . doi:
          <volume>10</volume>
          .1007/s10288-019-00425-x.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          <string-name>
            <given-names>A.</given-names>
            <surname>Dessmark</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Jansson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lingas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.-M.</given-names>
            <surname>Lundell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Persson</surname>
          </string-name>
          ,
          <article-title>On the approximability of maximum and minimum edge clique partition problems</article-title>
          ,
          <source>International Journal of Foundations of Computer Science</source>
          <volume>18</volume>
          (
          <year>2007</year>
          )
          <fpage>217</fpage>
          -
          <lpage>226</lpage>
          . doi:
          <volume>10</volume>
          .1142/S0129054107004656.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Geoffrion</surname>
          </string-name>
          ,
          <article-title>Lagrangean relaxation for integer programming</article-title>
          , in: M. L.
          <string-name>
            <surname>Balinski</surname>
          </string-name>
          (Ed.), Approaches to Integer Programming, Springer Berlin Heidelberg, Berlin, Heidelberg,
          <year>1974</year>
          , pp.
          <fpage>82</fpage>
          -
          <lpage>114</lpage>
          . doi:
          <volume>10</volume>
          . 1007/BFb0120690.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          <string-name>
            <given-names>M.</given-names>
            <surname>Cerulli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Serra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Sorgente</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Archetti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Ljubić</surname>
          </string-name>
          ,
          <article-title>Mathematical programming formulations for the collapsed k-core problem</article-title>
          ,
          <source>European Journal of Operational Research</source>
          <volume>311</volume>
          (
          <year>2023</year>
          )
          <fpage>56</fpage>
          -
          <lpage>72</lpage>
          . doi:
          <volume>10</volume>
          .1016/j.ejor.
          <year>2023</year>
          .
          <volume>04</volume>
          .038.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          <string-name>
            <given-names>N. Z.</given-names>
            <surname>Shor</surname>
          </string-name>
          ,
          <article-title>Minimization methods for non-differentiable functions</article-title>
          , Springer Science &amp; Business Media,
          <year>2012</year>
          . doi:
          <volume>10</volume>
          .1007/978-3-
          <fpage>642</fpage>
          -82118-9.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          <string-name>
            <given-names>F.</given-names>
            <surname>Carrabs</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gaudioso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Miglionico</surname>
          </string-name>
          ,
          <article-title>A two-point heuristic to calculate the stepsize in subgradient method with application to a network design problem</article-title>
          ,
          <source>EURO Journal on Computational Optimization</source>
          (
          <year>2024</year>
          )
          <article-title>100092</article-title>
          . doi:
          <volume>10</volume>
          .1016/j.ejco.
          <year>2024</year>
          .
          <volume>100092</volume>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>