<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Widening for Systems of Two Variables Per Inequality</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Martin Brain</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jacob M. Howe</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science</institution>
          ,
          <addr-line>City</addr-line>
          ,
          <institution>University of London</institution>
          ,
          <addr-line>EC1V 0HB</addr-line>
          ,
          <country country="UK">UK</country>
        </aff>
      </contrib-group>
      <fpage>11</fpage>
      <lpage>28</lpage>
      <kwd-group>
        <kwd>Abstract Interpretation</kwd>
        <kwd>Weakly Relational Domains</kwd>
        <kwd>Widening</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>as independent. This is manifested in the representation, that is, the choice of abstract domain,
then reflected in the way computations are modelled as transformers, as well as in the use of
widening to handle loops. Not tracking relationships in the abstract domain gives scalability
but at the cost of precision; the results are over-approximate, hence safe, but
possibly give false alarms. Too many false alarms make the verification process unusable.</p>
      <p>
        Abstract domains based on polyhedra have been considered since the introduction of abstract
interpretation [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], however, the domain operations for polyhedra are prohibitively expensive
for practical analysis tools. This has led to interest in subclasses of polyhedra, called weakly
relational domains. These restrict the constraints in the abstract domain in some way, typically
allowing at most two variables in each inequality [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] and often also restricting the coefficients of
the variables in the inequalities, for example to +1, 0, −1 to give Octagons [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>
        Weakly relational domains usually maintain their representations in a closed form [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ], where
implied inequalities are made explicit, allowing join, entailment and satisfiability operations to be
reduced to planar calculations. This approach is potentially problematic when considered with
widening. Widening typically throws away inequalities that are in some way unstable across
iterations, moving the abstraction up the lattice with the hope of quickly reaching a fixpoint.
However, it is possible that an inequality discarded was one introduced when calculating the
closed form. This would then be immediately re-introduced, and no progress is made.
      </p>
      <p>This paper is about widening, and especially about the geometric challenges arising when
working with a (weakly) relational numeric domain to handle some of the relationships that
occur between variables.</p>
      <p>The paper makes the following contributions:
• provides a review of widening techniques for numeric domains and illustrates their
application;
• gives case studies to illustrate problems with the application of widening;
• proposes a new way of treating widening for weakly relational domains maintained in a
closed form.</p>
      <p>The rest of the paper is structured as follows: Section 2 surveys related work and introduces
relevant notation, Section 3 contains a worked example demonstrating the application of weakly
relational domains and widening in a program verification context, Section 4 gives problems
illustrating points of interest, and provides some tentative solutions, Section 5 concludes.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Background and Related Work</title>
      <p>This work is concerned with widening and abstract domains consisting of sets of linear
inequalities, or polyhedral domains. This section gives background on polyhedra and their subclasses,
a brief history of widening, details on weakly relational domains and closure, before illustrating
the problematic interaction between closure and widening.</p>
      <sec id="sec-2-1">
        <title>2.1. Polyhedra and Weakly Relational Domains</title>
        <p>The abstract domains of interest in this paper are lattices over sets of inequalities. Several
domains are considered, varying in the form of inequalities that are allowed. The first of these
is Polyhedra, Poly_X, the set of all linear inequalities over a set of variables X. Here (and
throughout this paper), coefficients and constants are rationals, but other choices are available.</p>
        <sec id="sec-2-1-1">
          <title>Definition 1.</title>
          <p>Poly_X = {a_1 x_1 + ... + a_n x_n ≤ c | x_1, ..., x_n ∈ X ∧ a_1, ..., a_n, c ∈ Q}</p>
          <p>Then ⟨℘(Poly_X), ⊑, ⊔, ⊓⟩ is a complete lattice, that is, sets of polyhedral constraints
(implicitly quotiented by equivalence), or rather the solutions to those constraints, are ordered
by inclusion (⊑), with greatest lower bound (meet) being geometric intersection (⊓) and least
upper bound (join) being convex hull (⊔). Observe that this lattice has infinite ascending chains.
The sets of inequalities below lift to lattices in the same way. As well as the lattice operations
above, the projection of a set of constraints onto a set of constraints over a subset of variables is
an important operation in abstract interpretation.</p>
          <p>
            Abstract interpretation using the Poly domain is seen as impractical beyond small programs
as the size of the representation and the cost of performing operations such as convex hull might
become prohibitively large. Therefore abstract domains based on restricted forms of linear
inequalities have been investigated; that is, the relationships between variables that can be
expressed have been restricted. Hence, these are referred to as weakly relational domains. Many
weakly relational domains have been investigated, and those covered below are not intended to
be a complete list. These (typically) restrict inequalities so that they have at most two variables
in them. This then means that operations such as least upper bound can be calculated by a series
of planar operations; since these planar operations have good computational complexity the
intractability of full polyhedra is avoided. The most general version is then the two variables
per inequality (TVPI) abstract domain [
            <xref ref-type="bibr" rid="ref4 ref6">4, 6</xref>
            ].
          </p>
        </sec>
        <sec id="sec-2-1-2">
          <title>Definition 2.</title>
          <p>TVPI_X = {a x + b y ≤ c | x, y ∈ X ∧ a, b, c ∈ Q}</p>
          <p>
            The domain of Logahedra [
            <xref ref-type="bibr" rid="ref7">7</xref>
            ] restricts coefficients to be powers of 2 (potentially with a
maximum power). Structures based on Logahedra have been used as atomic structures for
verification of embedded real-time systems [
            <xref ref-type="bibr" rid="ref8">8</xref>
            ].
          </p>
          <p>Definition 3. Log_X = {a x + b y ≤ c | x, y ∈ X ∧ a, b ∈ {−2^i, 0, 2^i | i ∈ Z} ∧ c ∈ Q}</p>
          <p>
            Octagons [
            <xref ref-type="bibr" rid="ref5">5</xref>
            ], where the inequalities have unit or zero coefficients, have received considerable
attention and have been widely used for abstract interpretation based formal verification. For
example, the Octagon abstract domain is an important component of the ASTRÉE static analyser,
developed to verify the absence of classes of run-time errors in embedded C code [
            <xref ref-type="bibr" rid="ref9">9</xref>
            ]. This
analyser has been used for avionics code by Airbus.
          </p>
        </sec>
        <sec id="sec-2-1-3">
          <title>Definition 4.</title>
          <p>Oct_X = {a x + b y ≤ c | x, y ∈ X ∧ a, b ∈ {−1, 0, 1} ∧ c ∈ Q}</p>
          <p>
            Bounded differences (sometimes also called Zones) have also been popular [
            <xref ref-type="bibr" rid="ref10 ref11">10, 11</xref>
            ], partly
because the constraint systems can naturally be represented as weighted graphs, leading to
efficient algorithms (they are referred to as DBMs, for difference bound matrices). Unary
inequalities for DBMs are typically dealt with by introducing a dummy variable for 0.
          </p>
        </sec>
        <sec id="sec-2-1-4">
          <title>Definition 5.</title>
          <p>DBM_X = {x − y ≤ c | x, y ∈ X ∧ c ∈ Q}</p>
          <p>Finally, using intervals as an abstract domain loses all ability to track relations, but does
result in particularly scalable algorithms.</p>
          <p>Definition 6. Int_X = {a x ≤ c | x ∈ X ∧ a ∈ {−1, 1} ∧ c ∈ Q}</p>
          <p>When considering intervals, x ∈ [a, b] is shorthand for {−x ≤ −a, x ≤ b}, whilst x ∈ [a, ∞]
is shorthand for {−x ≤ −a}.</p>
          <p>
            This paper primarily considers TVPI inequalities. Following [
            <xref ref-type="bibr" rid="ref6">6</xref>
            ], the intention is that
algorithms developed for this domain will be inherited by abstract domains using subclasses of
two variable inequalities, in particular Octagons.
          </p>
        </sec>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Widening</title>
        <p>
          Widening [
          <xref ref-type="bibr" rid="ref1 ref12">1, 12</xref>
          ] is an operation that forces the iterations of an abstract interpretation to reach
a fixpoint, that is, for the analysis to terminate. This is formalised in the following definition.
        </p>
        <sec id="sec-2-2-1">
          <title>Definition 7.</title>
          <p>Where L is a lattice, a widening is then a function ▽ : L × L → L such that:
i) ∀a, b ∈ L. a ⊑ a▽b
ii) ∀a, b ∈ L. b ⊑ a▽b
iii) for all increasing chains x_0 ⊑ x_1 ⊑ ..., the chain given by y_0 = x_0, y_{i+1} = y_i ▽ x_{i+1} is not
strictly increasing.</p>
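<p>For instance, the classic interval widening satisfies Definition 7 by discarding unstable bounds; a minimal sketch (the function name is illustrative, not from any particular library):</p>

```python
# Classic interval widening (Definition 7): a bound that has grown since the
# previous iterate is discarded, i.e. pushed to -inf or +inf.
INF = float("inf")

def widen(a, b):
    """Widen interval a = (lo, hi) with interval b: keep only stable bounds."""
    lo = a[0] if a[0] <= b[0] else -INF   # lower bound moved down: discard it
    hi = a[1] if a[1] >= b[1] else INF    # upper bound moved up: discard it
    return (lo, hi)

# Conditions i) and ii): both arguments are contained in the result.
assert widen((1, 4), (1, 5)) == (1, INF)
# Condition iii): once a bound is infinite it stays infinite, so a widened
# chain cannot increase strictly forever.
assert widen((1, INF), (1, INF)) == (1, INF)
```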
          <p>
            Widening is about sequences of iterates, that is, it is about how the possible values of variables
change at a given program point as control returns to it (for example, in a loop). It is usual to
merge the original abstraction at a widening point with the new value before applying widening
(the definition hints that this is sensible), so a▽b is really a▽(a ⊔ b). Notice that widening is
about more than controlled loss of precision within the representation; it is about enforcing
termination. A particularly crude widening would then be to return the top point of the lattice.
One point of interest is that by considering the topological structure of the abstract program
(in particular strongly connected components), widening need only be applied, and termination
checked, at certain nodes in the call graph [
            <xref ref-type="bibr" rid="ref13">13</xref>
            ].
          </p>
          <p>
            For polyhedral analysis, widening as first introduced [
            <xref ref-type="bibr" rid="ref3">3</xref>
            ] allows the fixpoint calculation to
iterate through a point twice; then on the third pass inequalities that are stable are retained and
others are discarded, allowing a fixpoint to be reached. The paper uses the double description
representation (which contains an implicit assumption of non-redundancy), but variations
on this approach, whatever the representation, remain the classic approach. A number of
heuristics as to how and when to widen are available and have been combined and refined in
[
            <xref ref-type="bibr" rid="ref14">14</xref>
            ]. These include refraining from widening if it can be determined that a chain is finite, and
more sophisticated delays of the application of widening.
          </p>
          <p>
            The danger with the classic approach is that it widens too aggressively, jumping past more
subtle fixpoints leading to an over-approximation that can’t verify a correctness condition.
Various alternatives have been investigated. This includes widening with thresholds [
            <xref ref-type="bibr" rid="ref15">15</xref>
            ], where
the user supplies, before analysis, a series of thresholds to widen to. So, for example, in the
classic approach [1, 4]▽[1, 5] = [1, ∞], but if the user has supplied some threshold t, then
[1, 4]▽[1, 5] = [1, t]. If the interval continues to expand, then once all thresholds are passed
the classic widening applies and analysis will terminate; but, with some wisdom in the up-front
choice of threshold it might be that the [1, t] interval is a fixpoint, improving the calculated
fixpoint and associated analysis.
          </p>
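<p>A minimal sketch of the thresholds idea for intervals (the threshold set and names are illustrative): an unstable upper bound is widened to the next threshold rather than straight to ∞.</p>

```python
# Widening with thresholds: an unstable upper bound is widened to the
# smallest user-supplied threshold that still covers it, not straight to inf.
INF = float("inf")

def widen_thresholds(a, b, thresholds):
    lo = a[0] if a[0] <= b[0] else -INF
    if b[1] <= a[1]:
        hi = a[1]                                   # stable: retain the bound
    else:
        above = [t for t in sorted(thresholds) if t >= b[1]]
        hi = above[0] if above else INF             # thresholds exhausted
    return (lo, hi)

assert widen_thresholds((1, 4), (1, 5), [10, 100]) == (1, 10)  # widen to t = 10
assert widen_thresholds((1, 4), (1, 5), []) == (1, INF)        # classic behaviour
```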
          <p>
            This relies, to some extent, on the expertise and insight of the person conducting the analysis.
In addition, the technique is typically limited to intervals, that is, to bounds on single variables,
and does not capture relational constraints. Another approach is to automate the selection of
thresholds, as suggested in the widening with landmarks approach of [
            <xref ref-type="bibr" rid="ref16 ref17">16, 17</xref>
            ]. The lookahead
widening of [
            <xref ref-type="bibr" rid="ref18">18</xref>
            ] is another attempt to find good values to widen to. Work on widening
continues, with [
            <xref ref-type="bibr" rid="ref19">19</xref>
            ] considering how it can be used when analysing programs containing
non-linear transformations, [
            <xref ref-type="bibr" rid="ref20">20</xref>
            ] tackling loops and how their analysis can be used to aid the
successful application of widening, whilst [
            <xref ref-type="bibr" rid="ref21">21</xref>
            ] provides a more general framework where
abstract interpretation can interact with bounded model checking and k-induction.
          </p>
        </sec>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Weakly Relational Domains and Closure</title>
        <p>When using weakly relational domains based on classes of inequalities such as TVPI or Octagons
for abstract interpretation, some consideration is required as to how to represent and maintain
the sets of inequalities. The choice of representation impacts on the cost of the domain operations,
with one operation's complexity traded off against another's. Weakly relational
domains often use a closed form, where some redundant inequalities are made explicit. In
a closed system all non-redundant inequalities relating any pair of variables are made
explicit. For example, suppose that {x − y ≤ 1, 2y − 3z ≤ 2} is a system of inequalities; then
the closed system also includes the redundant inequality 2x − 3z ≤ 4. The relationship between
variables x, z has been made explicit.</p>
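<p>The arithmetic behind this example can be checked directly; a small sketch, representing each inequality as a coefficient map (a representation chosen here purely for illustration):</p>

```python
from fractions import Fraction as F

# Combine x - y <= 1 (scaled by 2) with 2y - 3z <= 2: the y terms cancel,
# making the implied inequality 2x - 3z <= 4 explicit.
t1 = {"x": F(1), "y": F(-1), "c": F(1)}     # x - y <= 1   ("c" is the bound)
t2 = {"y": F(2), "z": F(-3), "c": F(2)}     # 2y - 3z <= 2
scaled = {k: 2 * v for k, v in t1.items()}  # 2x - 2y <= 2
summed = {k: scaled.get(k, F(0)) + t2.get(k, F(0)) for k in {*scaled, *t2}}
assert summed == {"x": F(2), "y": F(0), "z": F(-3), "c": F(4)}  # 2x - 3z <= 4
```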
        <p>
          A closed system means that a variable can be projected out of a system in constant time by
simply dropping all constraints involving that variable. It also means that least upper bound
computation can be reduced to two dimensional convex hull problems, likewise satisfiability can
be computed at a planar level (plus entailment and equality are straightforward linear operations
with the closed form). These efficient operations are traded off against the cost of maintaining
the closed form. A typical program analysis step is to consider how a line of (abstracted) code
updates the representation. This is done incrementally, so there is interest in how to update a
closed representation upon the addition of a single new inequality (repeated applications of
this step can be used to find a closed system from an initially unclosed system). Incremental
closure for TVPI has been closely investigated in [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] where it is proved that any inequality
made explicit is the result of at most two computation steps.
        </p>
        <p>
          The following reiterates some key definitions from [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. Firstly, the set of variables that occur
in an inequality ax + by ≤ c is denoted vars(ax + by ≤ c) and is defined:
        </p>
        <p>Definition 8. vars(ax + by ≤ c) =
∅ if a = 0 ∧ b = 0,
{x} if a ≠ 0 ∧ b = 0,
{y} if a = 0 ∧ b ≠ 0,
{x, y} if a ≠ 0 ∧ b ≠ 0.</p>
        <p>This allows the definition of syntactic projection of a system of inequalities onto a given set
of variables: simply select those inequalities whose variables all lie in that set.</p>
        <p>Definition 9. The syntactic projection, denoted π_Y for some Y ⊆ X, of a system of inequalities
S ⊆ TVPI_X is defined as π_Y(S) = {t ∈ S | vars(t) ⊆ Y}.</p>
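<p>A small sketch of Definitions 8 and 9 (the tuple encoding of inequalities is an illustrative choice):</p>

```python
# vars (Definition 8) and syntactic projection (Definition 9) for TVPI
# inequalities encoded as (a, x, b, y, c), meaning a*x + b*y <= c.
def variables(t):
    a, x, b, y, _ = t
    return ({x} if a != 0 else set()) | ({y} if b != 0 else set())

def project(S, Y):
    """pi_Y(S): keep the inequalities mentioning only variables in Y."""
    return [t for t in S if variables(t) <= set(Y)]

S = [(1, "x", -1, "y", 1),   # x - y <= 1
     (0, "x", 1, "z", 5),    # z <= 5
     (1, "x", -1, "z", 2)]   # x - z <= 2
assert project(S, {"x", "y"}) == [(1, "x", -1, "y", 1)]
assert project(S, {"x", "z"}) == [(0, "x", 1, "z", 5), (1, "x", -1, "z", 2)]
```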
        <p>This is used in a formal definition of a closed system of inequalities. Notice that a closed system
may include inequalities that are redundant even in a given projection (unary constraints). The
entailment operation, |=, is geometric inclusion. If S_1 |= S_2 and S_2 |= S_1, then S_1 ≡ S_2. The
definition of closed states that syntactic and semantic projection coincide:</p>
        <sec id="sec-2-3-1">
          <title>Definition 10.</title>
          <p>A system S ⊆ TVPI_X is closed if the following predicate holds:
closed(S) ⇐⇒ ∀t ∈ TVPI_X . (S |= t ⇒ π_vars(t)(S) |= t)</p>
          <p>Some inequalities in a projection might not be wanted, since they are redundant with respect
to all projections. The filter operation will be used to remove these, giving a set of inequalities
that is a minimal representation equivalent to the projection (noting that unary constraints are
retained, and that points and lines might have many such representations):</p>
        </sec>
        <sec id="sec-2-3-2">
          <title>Definition 11.</title>
          <p>The mapping filter: ℘(TVPI_X) → ℘(TVPI_X) is defined:</p>
          <p>filter(S) = ∪{filter_Y(π_Y(S)) | Y ⊆ X ∧ |Y| ≤ 2}
where:
• filter_Y(S) ⊆ S
• filter_Y(S) ≡ π_Y(S)
• for every S′ ⊂ filter_Y(S), S′ ̸≡ π_Y(S).</p>
          <p>The process of generating implied inequalities is based on the result operator, which eliminates
a shared variable from a pair of inequalities. In the definition of complete below, result will be
lifted to sets of inequalities, referring to all pairwise applications of result.</p>
        </sec>
        <sec id="sec-2-3-3">
          <title>Definition 12.</title>
          <p>If t_1 ≡ a_1 x + b_1 y ≤ c_1, t_2 ≡ a_2 x + b_2 z ≤ c_2 and a_1 a_2 &lt; 0 then</p>
          <p>result(t_1, t_2, x) = |a_2| b_1 y + |a_1| b_2 z ≤ |a_2| c_1 + |a_1| c_2
otherwise result(t_1, t_2, x) = ⊥.</p>
          <p>At this point the function complete can be defined, which results in a minimal set of inequalities
that is closed.</p>
          <p>Definition 13. S ≅ S′ if for all Y ⊆ X such that |Y| ≤
2, π_Y(S) ≡ π_Y(S′).</p>
          <p>Definition 14. Let S ⊆ TVPI_X. Put S_0 = filter(S) and S_{i+1} = filter(S_i ∪ result(S_i, S_i)). Then
complete : ℘(TVPI_X) → ℘(TVPI_X) is defined
complete(S) = S_n where S_{n+1} ≅ S_n and for every 0 ≤ i &lt; n, S_{i+1} ̸≅ S_i.</p>
          <p>
            The following result (proved in [
            <xref ref-type="bibr" rid="ref6">6</xref>
            ]) states that when adding a single new inequality to a
complete system (a system whose completion is itself) at most two result steps are needed to
generate a new complete system.
          </p>
          <p>Theorem 1. Consider adding t_0 ∈ TVPI_X to S ⊆ TVPI_X where complete(S) = S. If t ∈
complete(S ∪ {t_0}) and t ̸= false, then one of the following holds:
1. t ∈ S ∪ {t_0}
2. t = result(t_0, t_1) where t_1 ∈ S
3. t = result(result(t_0, t_1), t_2) where t_1, t_2 ∈ S.</p>
        </sec>
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Widening and Closure: A Problem</title>
        <p>There is a problem when using widening whilst calculating a fixpoint over weakly relational
domains. Widening attempts to weaken a system of inequalities by discarding inequalities
that are not displaying stability. This is syntactically calculated: considering P and Q as sets of
inequalities, P▽Q = P ∩ Q. By maintaining a closed representation, weakly relational
domains are introducing redundant inequalities into the representation. The interaction of these
two is unclear, but an infinite loop of discarding an inequality that is immediately reintroduced
by closure is possible.</p>
        <p>
          This problem is illustrated for Octagons by Miné [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ], and has been reiterated in [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ]. Consider
the following inequalities over Oct_{x,y,z}, where the transfer function at iteration i is F_i.
        </p>
        <p>F_i = {x − y ≤ i + 1, y − x ≤ i + 1, x − z ≤ i + 1, z − x ≤ i + 1, y − z ≤ 1, z − y ≤ 1}
Suppose an initial abstraction is:</p>
        <p>A = {x − y ≤ 1, y − x ≤ 1, y − z ≤ 1, z − y ≤ 1}
then</p>
        <p>O_0 = complete(A) = {x − y ≤ 1, y − x ≤ 1, x − z ≤ 2, z − x ≤ 2, y − z ≤ 1, z − y ≤ 1}
Now consider the first iteration without using widening:</p>
        <p>
          O_1 = O_0 ⊔ F_0 = {x − y ≤ 1, y − x ≤ 1, x − z ≤ 2, z − x ≤ 2, y − z ≤ 1, z − y ≤ 1}
The next iteration is calculated with widening, where widening drops the unstable constraints
on x, y, but closure using the complete function immediately introduces new constraints on
these variables:
O_1 ⊔ F_1 = {x − y ≤ 2, y − x ≤ 2, x − z ≤ 2, z − x ≤ 2, y − z ≤ 1, z − y ≤ 1}
O_1 ▽ (O_1 ⊔ F_1) = {x − z ≤ 2, z − x ≤ 2, y − z ≤ 1, z − y ≤ 1}
O_2 = complete(O_1 ▽ (O_1 ⊔ F_1)) = {x − y ≤ 3, y − x ≤ 3, x − z ≤ 2, z − x ≤ 2, y − z ≤ 1, z − y ≤ 1}
The next iterate is then the one below, noting that this time it is the constraints on x, z that are
unstable:
O_3 = complete(O_2 ▽ (O_2 ⊔ F_2)) = {x − y ≤ 3, y − x ≤ 3, x − z ≤ 4, z − x ≤ 4, y − z ≤ 1, z − y ≤ 1}
And so on. Closure has not been accounted for in the use of stability in this definition of
widening, and this is not a terminating sequence. Therefore, ▽ is not a widening in this context.
The solution proposed is simple [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. At the widening point retain the widened (▽, not
closed) version of the Octagon and use this as the first argument of the classic widening in the
next iteration. A closed version of this may be used for continuing the abstract computation.
        </p>
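<p>The oscillation can be reproduced with a small difference-bound sketch, where closure is shortest paths over constraints u − v ≤ c (the encoding is an illustration, not any analyser's actual implementation):</p>

```python
from itertools import product

# Reproduce the widen/close oscillation using difference constraints
# u - v <= c stored as {(u, v): c}.
INF = float("inf")
VARS = ["x", "y", "z"]

def complete(S):
    """Closure: tighten u - v <= c via intermediate variables (shortest paths)."""
    m = {(u, v): S.get((u, v), INF) for u in VARS for v in VARS if u != v}
    for w, u, v in product(VARS, VARS, VARS):
        if len({u, v, w}) == 3:
            m[(u, v)] = min(m[(u, v)], m[(u, w)] + m[(w, v)])
    return {k: c for k, c in m.items() if c < INF}

def join(A, B):     # least upper bound of closed systems: pointwise max
    return {k: max(A[k], B[k]) for k in A if k in B}

def widen(A, B):    # classic syntactic widening: keep only the stable bounds of A
    return {k: c for k, c in A.items() if B.get(k, INF) <= c}

def F(i):           # the transfer function at iteration i, as in the text
    return {("x", "y"): i + 1, ("y", "x"): i + 1, ("x", "z"): i + 1,
            ("z", "x"): i + 1, ("y", "z"): 1, ("z", "y"): 1}

O = complete({("x", "y"): 1, ("y", "x"): 1, ("y", "z"): 1, ("z", "y"): 1})
assert O[("x", "z")] == 2                  # closure made x - z <= 2 explicit
for i in range(1, 5):
    O = complete(widen(O, join(O, F(i))))  # widen, then re-close
assert O[("x", "y")] == 5 and O[("x", "z")] == 6   # bounds still growing
```

Each pass drops one pair of bounds as unstable, only for closure to reintroduce them slightly weaker, so the sequence never stabilises.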
        <p>
          Gange et al. [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ] isolate the problem: lattices are semantic, classic widening is syntactic,
and closure has no semantic effect (at least when considering the system as a whole). But this
representation dependence of the correct behaviour is “not fully satisfactory” [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ].
        </p>
        <p>
          The solution presented in [
          <xref ref-type="bibr" rid="ref11 ref22">22, 11</xref>
          ] introduces a new concept of isolated widening operating
on an alternative structure. In [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ] elements of weakly relational domains are considered purely
geometrically [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ], allowing the classic approach to widening to be retained, sidestepping the
syntactic problems of closure. Their work also leads to a closure algorithm for Octagons, so
that other domain operations can still use this form.
        </p>
      </sec>
      <sec id="sec-2-5">
        <title>2.5. Relaxation</title>
        <p>A final point of consideration is relaxation of systems of constraints. A system of constraints
Q is said to be a relaxation of P if P ⊏ Q. Noting again that widening is about enforcing
termination, it should also be noted that changing an element from an abstract domain might
have another motivation, that is, to maintain a tractable size of representation in order that
a fixpoint computation does not grind to a halt. The more expressive the set of constraints
underlying the abstract domain is, the more this becomes an issue. The concept of relaxation is
geometrical, but again interacts with the syntactic maintenance of a closed form.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Worked Example</title>
      <p>
        This example works through the analysis of a small section of code to illustrate the use of
abstract interpretation using weakly relational domains for program verification. For a full
introduction to abstract interpretation see [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>Consider the following implementation of a C standard library function, with line numbers:
( 1)  char *strdup(const char *s) {
( 2)    size_t n = strlen(s);
( 3)    char *result = malloc(n + 1);
( 4)
( 5)    size_t i = 0;
( 6)    size_t j = 0;
( 7)
( 8)    while (i &lt; n) {
( 9)      result[j] = s[i];
(10)      i = i + 1;
(11)      j = j + 1;
(12)    }
(13)    result[j] = '\0';
(14)    return result;
(15)  }</p>
      <p>If it is known that s points to a valid C string and strlen returns its length then this is
clearly memory-safe. When reading from s, i ∈ [0, n − 1] and when writing to result, j ∈ [0, n].
Showing this via abstract interpretation is not as simple as it might appear.</p>
      <p>First consider using the Int_{i,j,n} abstract domain, and assume no hidden dependencies. At
the start of the loop on line (8) observe that i ∈ [0, 0], j ∈ [0, 0], n ∈ [0, ∞], as in Figure 1 a).</p>
      <p>
        Each instruction in the code has an abstract equivalent transforming the lattice point as
passed to that line. For example, on line (10) the increment on the program variable i will
correspondingly shift the polyhedron representing the program state at this point by one in the
i direction. At the end of the first iteration around the loop, at line (12), i ∈ [1, 1], j ∈ [1, 1].
Control returns to the start of line (8) and the new values for i, j need to be merged with the
previous abstraction at this program point. The start of the loop on line (8) can be reached
either as control passes from line (6), or as control returns to the head of the loop from line (12).
The abstract domain value at this point needs to merge these two control paths, summarising
the values that can be taken. This is achieved by computing the least upper bound of the existing
abstract domain value with the values coming from the loop. The merge results in new intervals
for the variables, i ∈ [0, 1], j ∈ [0, 1]. Note that this region includes points like (0, 1, 0) which
are not possible and are only present because the over-approximation is unable to express the
relation between variables.
      </p>
      <p>Repeating this will result in the intervals for the variables at line (8) increasing. However,
this process will not terminate, since the values will not stabilise at a fixpoint. This necessitates
widening. To accelerate (or indeed to reach) a fixpoint, information is discarded or generalised,
until the abstract domain value at the merge point is stable. The classic way to apply widening
is to execute the abstract loop two or three times, then to observe which constraints are stable
from one iteration to the next, retaining only these. The result is a fixpoint. Note that this is
often a syntactic observation, and that two semantically equivalent but syntactically different
points might well result in widening being applied.</p>
      <p>
          Using the interval domain Int_{i,j,n}, for i and j only the lower bounds are stable across iterations,
hence widening applies:
{i ∈ [0, 1], j ∈ [0, 1], n ∈ [0, ∞]} ▽ {i ∈ [0, 2], j ∈ [0, 2], n ∈ [0, ∞]}
      </p>
      <p>= {i ∈ [0, ∞], j ∈ [0, ∞], n ∈ [0, ∞]}
This is a fixpoint (indeed the least fixpoint for this choice of abstract domain), but this does not
verify memory safety at line (13). An analysis using this would throw a spurious warning to
the developer.</p>
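<p>The interval iteration above can be replayed mechanically; a sketch (the helper names are illustrative):</p>

```python
# Replay the interval analysis at line (8): join the loop-entry value with
# the incremented values, then apply classic interval widening.
INF = float("inf")

def join(a, b):
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):
    return (a[0] if a[0] <= b[0] else -INF, a[1] if a[1] >= b[1] else INF)

i = (0, 0)                               # i and j behave identically here
for _ in range(2):                       # two passes around the loop
    i = join(i, (i[0] + 1, i[1] + 1))
assert i == (0, 2)
# Widening the previous iterate [0, 1] with the new [0, 2]: only the lower
# bound is stable, so the upper bound is lost.
assert widen((0, 1), (0, 2)) == (0, INF)
```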
      <p>The Int abstract domain does not track relations between variables. The verification failure
above suggests a need to track (some) relations between variables. Equalities might make a
tempting choice of abstract domain: i = j is a loop invariant at the head of the loop, however
it is not maintained during the loop; at the start of line (11) i = j + 1, so something more
expressive than equalities is needed.</p>
      <p>Consider using the TVPI_{i,j,n} domain. At the first merge point considered above, the same
inequalities are merged. The least upper bound calculation results in {−i ≤ 0, −j ≤ 0, i ≤
n, i − j ≤ 0, j − i ≤ 0, i ≤ 1, j ≤ 1}. This represents a section of plane reaching through
i = j (see Figure 1 b)). Repeating this will result in the plane section increasing in size (as in
Figure 1 c)). At the next iteration widening is applied, retaining only the stable inequalities
{−i ≤ 0, −j ≤ 0, i ≤ n, i − j ≤ 0, j − i ≤ 0}, as illustrated in Figure 1 d). This fixpoint
implies that j ≤ n (indeed in the closed representation used this will be explicit) hence the
array access on line (13) can be verified. This would be the case if unrestricted polyhedra were
used, but also if a less expressive weakly relational domain such as Oct_{i,j,n}, or even bounded
differences, DBM_{i,j,n}, were used.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Results</title>
      <p>This section works through three examples illustrating the use of widening in over-approximating
fixpoints within geometric structures. It discusses the related use of relaxation to throttle the
size of the representation and in particular addresses the problems that can arise whilst applying
widening with weakly relational domains.</p>
      <p>4.1. Problem 1</p>
      <p>Consider a transfer function for sets S_i. This translates a shape (approximation) by one unit in
the y-axis, and takes its join with a point half way between the top right hand corner of the
translated shape and x = 2, and with the original S_i.</p>
      <p>S_{i+1} = S_i ∪ {(x, y + 1) | (x, y) ∈ S_i} ∪ {(1 + max_x(S_i)/2, max_y(S_i) + 1)}
With initial set S_0 = {(x, y) | −x ≤ 0, x ≤ 1, −y ≤ 0, y − x ≤ 0}, this gives the monotonic sequence
of spaces illustrated in Figure 2.</p>
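<p>The corner points generated by this transfer function can be sketched numerically (assuming (1, 1) as the top right corner of S_0):</p>

```python
# Corner points added by the Problem 1 transfer function: the x coordinate
# follows m' = 1 + m/2, approaching (but never reaching) the x = 2 asymptote.
corners = [(1.0, 1.0)]                    # assumed top right corner of S_0
for _ in range(5):
    mx, my = corners[-1]
    corners.append((1 + mx / 2, my + 1))  # halfway towards x = 2, one unit up
assert [round(x, 3) for x, _ in corners] == [1.0, 1.5, 1.75, 1.875, 1.938, 1.969]
assert all(x < 2 for x, _ in corners)     # no iterate reaches the limit bound
```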
      <p>Consider applying widening to this sequence, with abstract domains Int_{x,y}, Oct_{x,y},
TVPI_{x,y}.</p>
      <p>In the first instance, each S_i will be approximated by its bounding box. When classic widening
is applied the increasing upper bounds on x and y will be unstable and the fixpoint {−x ≤
0, −y ≤ 0} will be found. If widening with thresholds were applied and threshold x = 2 were
supplied then a stronger over-approximation would be found, namely {−x ≤ 0, x ≤ 2, −y ≤ 0}.
Repeating with Octagons, with S_i for i ≥ 1, the space cannot be precisely described, and the
end result after widening would add an x − y bound to the fixpoint calculated with
intervals (either with or without thresholds).</p>
      <p>Analysing with TVPI, each space can be precisely described and the set of constraints will
grow as in the picture, {−x ≤ 0, −y ≤ 0, x − y ≤ 0, x − y/2 ≤ 0.5, x − y/4 ≤ 1, x − y/8 ≤ 1.5, ...}
(plus upper bounds on x and y). In fact, these constraints are in Log_{x,y}. When widening is
applied, all but the final constraint (and the upper bounds) will be stable. Again, a widening
with thresholds approach might add an upper bound on x.</p>
      <p>The classic widening is typically applied after two or three iterations without widening. In
this example, there is an attraction in applying more iterations to get a better approximation.
This presents three problems to be addressed: i) the practical question of how many inequalities
to maintain, and how to relax the system to control the number of inequalities; ii) the practical
question of how to determine the number of iterations to apply before using widening; iii)
the theoretical question of how to recognise the bound being approached as the limit of the
sequence of inequalities, and add this without relying on up-front thresholds.</p>
      <p>4.2. Problem 2</p>
      <p>Scientific programming often uses trigonometric functions. Analysis of these can be problematic.
Consider a transfer function for a set of points S_i, parameterised by a constant k and an angle θ.</p>
      <p>S_{i+1} = {(k(x cos(θ) − y sin(θ)), k(x sin(θ) + y cos(θ))) | (x, y) ∈ S_i, θ ∈ [0, 2π), k ∈ Q} ⊔ S_i
This rotates the current iterate by θ, scaling by k, and takes the least upper bound with the
previous iterate. Notice that if k = 1 then the vertices of the described space are points on the
circumference of a circle; if k &gt; 1 then points diverge; if k &lt; 1 then points move closer to
the centre of the circle. Figure 3 illustrates some possible iterations with S_0 = {(4, 0)}, and
various choices for θ and k.</p>
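      <p>The three regimes for k can be checked on concrete vertices (a sketch of ours, not the paper’s machinery; function names are hypothetical):</p>
      <p>
```python
import math

# One application of Problem 2's map to a vertex: rotate by theta, scale by k.
def rotate(pt, theta, k):
    x, y = pt
    return (k * (x * math.cos(theta) - y * math.sin(theta)),
            k * (x * math.sin(theta) + y * math.cos(theta)))

# Case a): theta = pi/3, k = 1.  Starting from S_0 = {(4, 0)}, six steps
# return to the start, giving the six vertices of the hexagonal fixpoint.
p = (4.0, 0.0)
for _ in range(6):
    p = rotate(p, math.pi / 3, 1.0)
# p is back at (4, 0), up to rounding error

# k > 1: the radius is scaled up every step, so the iterates diverge.
r = (4.0, 0.0)
for _ in range(6):
    r = rotate(r, math.pi / 3, 1.1)
print(math.hypot(*r) > 4.0)  # True: the radius has grown to 4 * 1.1**6
```
</p>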
      <p>Case a) has θ = π/3 and k = 1. Here, a fixpoint describing the space is a hexagon which can
be described with TVPI inequalities; this would be found assuming that the analysis was run
for a sufficient number of iterations without widening. Notice that if the abstract domain used
were intervals or Octagons then each iterate would over-approximate the space, the approximations
would diverge, and when widening is applied to enforce termination the result would be the top
element, the whole plane.</p>
      <p>Case b) has θ = 3π/10 and k = 1. This is somewhat similar to case a), although the analysis needs
more iterations (and three times round the circle) to reach a fixpoint, which will be an icosagon.
The diagram illustrates the approximation after seven iterations, with the arrow pointing to the
point that will need to be incorporated when the next least upper bound is calculated. It isn’t
too hard to see how θ can be chosen so that the fixpoint is arbitrarily complicated, or indeed
non-existent. This might motivate some kind of relaxation; however, such a relaxation will
always lead to an over-approximation of the space including points outside the circle, and
the space diverges, as above. The desired fixpoint is the circle, but this can’t be represented
using polyhedral domains.</p>
      <p>As noted above, if k &gt; 1 then the approximation will always diverge, hence the only fixpoint
is the whole plane.</p>
      <p>Now suppose that k &lt; 1. There is a critical value for k where the fixpoint behaviour changes,
that is, when k = √2/2. As can be seen in Figure 3 d) this is the value at which the iterate of
a bounding box is contained within itself; this will be discussed further below. In Figure 3 c),
θ = π/4 and k = 0.8 &gt; √2/2. The iterations are approximated by a spiral, leading to the fixpoint
illustrated. Again, this can be modelled with TVPI constraints. Suppose instead that this was
approximated using Int{x,y}. S_1 would be over-approximated by the dashed box whose bottom
right corner is (4, 0). To calculate S_2 this space is rotated to give the dashed box tilted by
π/4 and the least upper bound is calculated, resulting in the containing box. S_2 contains points
outside of the circle and further iterations will diverge. Once widening is applied the result will
again be the plane, the top element of the lattice of Int{x,y}.</p>
      <p>Finally, consider the case illustrated in Figure 3 d). Here, θ = π/4 and k = √2/2. To illustrate
the point, a rather gross over-approximation of the space is given by the bounding box, which
translates into a subspace of itself at the next iteration, so widening need not be considered. As
previously, this space can be described precisely in TVPI{x,y}, whilst (as shown by the dotted
lines) Oct{x,y} can also approximate this space, reaching a fixpoint.</p>
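      <p>The claim behind the critical value can be checked on corners (our own sketch; by convexity, checking the four corners of the bounding box suffices):</p>
      <p>
```python
import math

# At k = sqrt(2)/2 and theta = pi/4 the bounding box [-4, 4] x [-4, 4] of
# the circle maps onto the inscribed diamond with vertices (+-4, 0), (0, +-4),
# which is contained in the box -- so the iterated box is stable.
k, theta = math.sqrt(2) / 2, math.pi / 4

def image(pt):
    x, y = pt
    return (k * (x * math.cos(theta) - y * math.sin(theta)),
            k * (x * math.sin(theta) + y * math.cos(theta)))

corners = [(4, 4), (4, -4), (-4, 4), (-4, -4)]
inside = all(abs(u) <= 4 + 1e-9 and abs(v) <= 4 + 1e-9
             for u, v in map(image, corners))
print(inside)  # True: the box is mapped into itself
```
</p>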
      <p>This discussion has illustrated some of the issues arising when computing fixpoints of iterates
built using trigonometric functions. It poses several questions: i) what is the correct patience
to show before performing widening? ii) when and how can polyhedral approximations be
relaxed to give improved performance and fixpoints? iii) what is the right domain to use when
analysing code using trigonometric (or, as in this case, matrix) functions? One response to the
last question might be that polyhedral approximations are the wrong choice and a domain
capturing circles and ellipses, such as that in [25], might be a better choice. In this case, an
additional question is how polyhedral abstract domains interact with non-polyhedral abstract
domains.</p>
      <p>4.3. Problem 3</p>
      <p>This section returns to the example from Section 2.4, which demonstrated problematic behaviour
when closure and widening interact. As noted there, the problem arises because widening is
essentially a semantic notion about how (for geometric systems) the space is allowed to expand,
but it is often implemented syntactically by observing the form of constraints. To put it
another way: is the lattice of polyhedra a lattice of convex spaces, or is it a lattice of sets of
inequalities? Closure is a syntactic notion and the redundant inequalities it introduces interfere
with the straightforward correspondence, which the classic approach to widening relies upon,
between the stability of inequalities and the way in which they are removed.</p>
      <p>A widening is given below that injects some semantics into the procedure, using entailment
checking to determine whether or not apparently stable inequalities should be retained.</p>
      <p>Suppose that P is a closed system of TVPI inequalities, and that Q is the set of inequalities
being merged at a widening point. As usual, widening takes place between P and P ⊔ Q.
The widening proposed performs the classic widening on these inputs, closes the system, then
uses entailment checking to remove inequalities whose apparent stability is an artefact of the
closed representation maintaining redundant inequalities.</p>
      <p>Firstly, compute the classic widening (which of course isn’t a widening in this context), giving
a set of constraints c_1, ..., c_n:</p>
      <p>complete(P▽(P ⊔ Q)) = {c_1, ..., c_k, c_{k+1}, ..., c_n}
Here, c_1, ..., c_k ∉ P (that is, they are introduced by closure) and c_{k+1}, ..., c_n ∈ P. Then the new
widening P_{i+1} = P▽′Q is given by:
P_{i+1} = {c_j | k + 1 ≤ j ≤ n, and either P ∖ {c_j} ̸|= c_j, or P ∖ {c_j} |= c_j and {c_{k+1}, ..., c_{j−1}, c_{j+1}, ..., c_n} |= c_j}
That is, stable inequalities c_j are dropped if P ∖ {c_j} |= c_j (they are redundant in P) and
{c_{k+1}, ..., c_{j−1}, c_{j+1}, ..., c_n} ̸|= c_j (they are non-redundant in complete(P▽(P ⊔ Q))).</p>
      <p>Lemma 1. Where P_{i+1} = P▽′Q for closed P, complete(P_{i+1}) = P_{i+1}.</p>
      <p>Proof. Suppose some ℓ ∈ complete(P_{i+1}), but ℓ ∉ P_{i+1}. Then ℓ can be derived from
inequalities in P_{i+1} using the result operation. Since P_{i+1} ⊆ P, ℓ can also be derived from
P, hence ℓ ∈ P_{i+1} by definition, a contradiction.</p>
      <p>Lemma 2. ▽′ is a widening.</p>
      <p>Proof. Since ▽′ only retains inequalities explicit in the previous iterations, and all iterates are
finite sets, stability is reached for a set of constraints or the empty set.</p>
      <p>Returning to the example from Section 2.4, P_0 is as before:
P_0 = complete(P_0) = {x − y ≤ 1, y − x ≤ 1, x − z ≤ 2, z − x ≤ 2, y − z ≤ 1, z − y ≤ 1}
Next, P_1 is again as before, with no widening applied:</p>
      <p>P_1 = {x − y ≤ 1, y − x ≤ 1, x − z ≤ 2, z − x ≤ 2, y − z ≤ 1, z − y ≤ 1}
Now compute the classic widening. Here, the constraints on x, y are unstable, but new constraints
on these variables are introduced in closure.
complete(P_1▽(P_1 ⊔ Q_1)) = {x − y ≤ 3, y − x ≤ 3, x − z ≤ 2, z − x ≤ 2, y − z ≤ 1, z − y ≤ 1}
Now compute P_2 and observe that</p>
      <p>P_2 = {y − z ≤ 1, z − y ≤ 1}
The inequalities on x, y are not stable, hence they are dropped. The inequalities on x, z are
stable, but they were redundant in P_1, and are not entailed by the other stable inequalities
in complete(P_1▽(P_1 ⊔ Q_1)), hence they are dropped along with the unstable inequalities. Also
observe that complete(P_2) = P_2, and that this is a fixpoint.</p>
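      <p>The proposed widening can be sketched on this example (our own illustration, with all names ours: difference constraints a − b ≤ c over x, y, z stand in for full TVPI, complete is Floyd–Warshall closure, and entailment is checked against the closure):</p>
      <p>
```python
from itertools import permutations

VARS = ("x", "y", "z")
INF = float("inf")

def complete(cons):
    """Close a system of difference constraints a - b <= c (Floyd-Warshall)."""
    c = dict(cons)
    for m in VARS:
        for a, b in permutations(VARS, 2):
            if a != m and b != m:
                via = c.get((a, m), INF) + c.get((m, b), INF)
                c[(a, b)] = min(c.get((a, b), INF), via)
    return {k: v for k, v in c.items() if v < INF}

def entails(cons, key, bound):
    """cons |= key <= bound?"""
    return complete(cons).get(key, INF) <= bound

def widen_prime(p, pq):
    """p: closed system at the widening point; pq: closure of p |_| q."""
    # Classic widening: keep the inequalities of p that pq does not weaken,
    classic = {k: v for k, v in p.items() if pq.get(k, INF) <= v}
    closed = complete(classic)  # ... then close the result.
    # Inequalities of p that reappear unchanged in the closed result.
    stable = {k: v for k, v in closed.items() if p.get(k) == v}
    keep = {}
    for k, v in stable.items():
        rest_p = {k2: v2 for k2, v2 in p.items() if k2 != k}
        others = {k2: v2 for k2, v2 in stable.items() if k2 != k}
        # Drop k when its stability is an artefact: redundant in p and not
        # entailed by the other stable inequalities.
        if not entails(rest_p, k, v) or entails(others, k, v):
            keep[k] = v
    return keep

p1 = {("x", "y"): 1, ("y", "x"): 1, ("x", "z"): 2,
      ("z", "x"): 2, ("y", "z"): 1, ("z", "y"): 1}
pq = complete({**p1, ("x", "y"): 3, ("y", "x"): 3})  # the x, y bounds grew
print(widen_prime(p1, pq))  # only the y, z inequalities survive
```
Running the sketch reproduces P_2 = {y − z ≤ 1, z − y ≤ 1}: the x, z inequalities are syntactically stable but are pruned as artefacts of closure.</p>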
      <p>Is this entirely satisfactory? Not necessarily: the extensive use of entailment checking might
make this widening particularly expensive to apply. However, it might be that a practical
implementation marks inequalities introduced by closure, which chimes with the observation
that maintaining the widening point as a non-closed system allows a fixpoint to be observed.</p>
      <p>The tension between the semantic notion of widening and the syntactic one might not be
problematic when the convex space has a unique representation in the chosen syntactic form.
However, at least for constraint-based systems, a potential problem exists even if a closed system
is not maintained: suppose that a point is to be represented as a polyhedron over multiple
variables; there are infinitely many choices of constraints with which to do this. Two successive
iterates need to choose the same representation for stability to be syntactically observed. It
might be hoped, indeed expected, that this will be the case, but this needs to be enforced.</p>
      <p>A final problem remains. The new widening results in a closed system, but part of the working
involves applying closure to the system resulting from the classic widening. This potentially
means calculating closure from scratch, which is an expensive operation; what refinements are
available to avoid the complete recalculation of closure?</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>This paper has looked at widening for polyhedral constraints, with a focus on weakly relational
domains and TVPI in particular. It has illustrated the importance of capturing relations between
variables whilst performing program analysis for verification. The key role of widening in this
analysis has been highlighted. The application of widening is not always straightforward and
some difficult cases highlighting this have been presented. The interaction between widening
and the closed forms maintained in weakly relational domains has been investigated and a new
widening has been introduced that gives a new way to sidestep the problem. The paper also
contains a lot of questions, some of which require theoretical development of widening, others
of which require empirical investigation of the performance of widening. Future work is to
address these questions more fully by carrying out experimental work with real-life programs
to illuminate where problematic structures occur and where the performance of widening needs
to be improved.</p>
      <p>Acknowledgements. The authors would like to thank Marius Zicius for discussions on two
variable per inequality constraints.</p>
      <p>[25] J. Feret, Static Analysis of Digital Filters, in: Proceedings of the 13th European Symposium
on Programming, volume 2986 of Lecture Notes in Computer Science, Springer, 2004, pp.
33–48.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>P.</given-names>
            <surname>Cousot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Cousot</surname>
          </string-name>
          ,
          <article-title>Abstract Interpretation: A Unified Lattice Model for Static Analysis of Programs by Construction or Approximation of Fixpoints</article-title>
          ,
          <source>in: Conference Record of the Fourth ACM Symposium on Principles of Programming Languages</source>
          , ACM Press,
          <year>1977</year>
          , pp.
          <fpage>238</fpage>
          -
          <lpage>252</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>P.</given-names>
            <surname>Cousot</surname>
          </string-name>
          , Principles of Abstract Interpretation, MIT Press,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>P.</given-names>
            <surname>Cousot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Halbwachs</surname>
          </string-name>
          ,
          <article-title>Automatic Discovery of Linear Restraints among Variables of a Program</article-title>
          ,
          <source>in: Principles of Programming Languages</source>
          , ACM Press,
          <year>1978</year>
          , pp.
          <fpage>84</fpage>
          -
          <lpage>97</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Simon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>King</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Howe</surname>
          </string-name>
          ,
          <article-title>Two Variables per Linear Inequality as an Abstract Domain</article-title>
          ,
          <source>in: Logic Based Program Development and Transformation</source>
          , volume
          <volume>2664</volume>
          of Lecture Notes in Computer Science, Springer,
          <year>2002</year>
          , pp.
          <fpage>71</fpage>
          -
          <lpage>89</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Miné</surname>
          </string-name>
          ,
          <source>The Octagon Abstract Domain, Higher-Order and Symbolic Computation</source>
          <volume>19</volume>
          (
          <year>2006</year>
          )
          <fpage>31</fpage>
          -
          <lpage>100</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Howe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>King</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Simon</surname>
          </string-name>
          ,
          <article-title>Incremental Closure for Systems of Two Variables Per Inequality</article-title>
          ,
          <source>Theoretical Computer Science</source>
          <volume>768</volume>
          (
          <year>2019</year>
          )
          <fpage>1</fpage>
          -
          <lpage>42</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Howe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>King</surname>
          </string-name>
          ,
          <article-title>Logahedra: a New Weakly Relational Domain</article-title>
          ,
          <source>in: International Symposium on Automated Technology for Verification and Analysis</source>
          , volume
          <volume>5799</volume>
          of Lecture Notes in Computer Science, Springer,
          <year>2009</year>
          , pp.
          <fpage>306</fpage>
          -
          <lpage>320</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>T.</given-names>
            <surname>Reinbacher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Függer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Brauer</surname>
          </string-name>
          ,
          <article-title>Runtime verification of embedded real-time systems</article-title>
          ,
          <source>Formal Methods in System Design</source>
          <volume>44</volume>
          (
          <year>2014</year>
          )
          <fpage>203</fpage>
          -
          <lpage>239</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>P.</given-names>
            <surname>Cousot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Cousot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Feret</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Miné</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Rival</surname>
          </string-name>
          ,
          <article-title>Why does ASTRÉE scale up?</article-title>
          ,
          <source>Formal Methods in System Design</source>
          <volume>35</volume>
          (
          <year>2009</year>
          )
          <fpage>229</fpage>
          -
          <lpage>264</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A.</given-names>
            <surname>Miné</surname>
          </string-name>
          ,
          <article-title>A New Numerical Abstract Domain Based on Difference-Bound Matrices</article-title>
          ,
          <source>in: Second Symposium on Programs as Data Objects</source>
          , volume
          <volume>2053</volume>
          of Lecture Notes in Computer Science
          , Springer,
          <year>2001</year>
          , pp.
          <fpage>155</fpage>
          -
          <lpage>172</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>G.</given-names>
            <surname>Gange</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Navas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Schachte</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Søndergaard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Stuckey</surname>
          </string-name>
          ,
          <article-title>A Fresh Look at Zones and Octagons</article-title>
          ,
          <source>ACM Transactions on Programming Languages and Systems</source>
          <volume>43</volume>
          (
          <year>2021</year>
          )
          <fpage>1</fpage>
          -
          <lpage>51</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>P.</given-names>
            <surname>Cousot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Cousot</surname>
          </string-name>
          ,
          <article-title>Comparing the Galois Connection and Widening/Narrowing Approaches to Abstract Interpretation</article-title>
          ,
          <source>in: Proceedings of the 4th International Symposium on Programming Language Implementation and Logic Programming</source>
          , volume
          <volume>631</volume>
          of Lecture Notes in Computer Science, Springer,
          <year>1992</year>
          , pp.
          <fpage>269</fpage>
          -
          <lpage>295</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>F.</given-names>
            <surname>Bourdoncle</surname>
          </string-name>
          ,
          <article-title>Efficient chaotic iteration strategies with widenings</article-title>
          ,
          <source>in: Proceedings of the International Conference on Formal Methods in Programming and Their Applications</source>
          , volume
          <volume>735</volume>
          of Lecture Notes in Computer Science, Springer,
          <year>1993</year>
          , pp.
          <fpage>128</fpage>
          -
          <lpage>141</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>R.</given-names>
            <surname>Bagnara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. M.</given-names>
            <surname>Hill</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Mazzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Zaffanella</surname>
          </string-name>
          ,
          <article-title>Precise widening operators for convex polyhedra</article-title>
          ,
          <source>Science of Computer Programming</source>
          <volume>58</volume>
          (
          <year>2005</year>
          )
          <fpage>28</fpage>
          -
          <lpage>56</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>B.</given-names>
            <surname>Blanchet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Cousot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Cousot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Feret</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Mauborgne</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Miné</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Monniaux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Rival</surname>
          </string-name>
          ,
          <article-title>Design and Implementation of a Special-Purpose Static Program Analyzer for Safety-Critical Real-Time Embedded Software</article-title>
          ,
          <source>in: The Essence of Computation: Complexity, Analysis, Transformation. Essays Dedicated to Neil D. Jones</source>
          , volume
          <volume>2566</volume>
          of Lecture Notes in Computer Science, Springer,
          <year>2002</year>
          , pp.
          <fpage>85</fpage>
          -
          <lpage>108</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>A.</given-names>
            <surname>Simon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>King</surname>
          </string-name>
          ,
          <article-title>Widening Polyhedra with Landmarks</article-title>
          ,
          <source>in: Proceedings of the 4th Asian Symposium on Programming Languages and Systems</source>
          , volume
          <volume>4279</volume>
          of Lecture Notes in Computer Science, Springer,
          <year>2006</year>
          , pp.
          <fpage>166</fpage>
          -
          <lpage>182</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>A.</given-names>
            <surname>Simon</surname>
          </string-name>
          ,
          <source>Value-Range Analysis of C Programs</source>
          , Springer,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>D.</given-names>
            <surname>Gopan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. W.</given-names>
            <surname>Reps</surname>
          </string-name>
          ,
          <article-title>Lookahead Widening</article-title>
          ,
          <source>in: Proceedings of the 18th International Conference on Computer Aided Verification</source>
          , volume
          <volume>4144</volume>
          of Lecture Notes in Computer Science, Springer,
          <year>2006</year>
          , pp.
          <fpage>452</fpage>
          -
          <lpage>466</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>A.</given-names>
            <surname>Simon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <article-title>Simple and precise widenings for polyhedra</article-title>
          ,
          <source>in: Proceedings of the 8th Asian Symposium on Programming Languages and Systems</source>
          , volume
          <volume>6461</volume>
          of Lecture Notes in Computer Science, Springer,
          <year>2010</year>
          , pp.
          <fpage>139</fpage>
          -
          <lpage>155</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>L.</given-names>
            <surname>Gonnord</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Halbwachs</surname>
          </string-name>
          ,
          <article-title>Combining Widening and Acceleration in Linear Relation Analysis</article-title>
          ,
          <source>in: Proceedings of the 13th International Static Analysis Symposium</source>
          , volume
          <volume>4134</volume>
          of Lecture Notes in Computer Science, Springer,
          <year>2006</year>
          , pp.
          <fpage>144</fpage>
          -
          <lpage>160</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>M.</given-names>
            <surname>Brain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Joshi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Kroening</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Schrammel</surname>
          </string-name>
          ,
          <article-title>Safety Verification and Refutation by k-Invariants and k-Induction</article-title>
          ,
          <source>in: Proceedings of the 22nd International Static Analysis Symposium</source>
          , volume
          <volume>9291</volume>
          of Lecture Notes in Computer Science, Springer,
          <year>2015</year>
          , pp.
          <fpage>145</fpage>
          -
          <lpage>161</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>G.</given-names>
            <surname>Gange</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Navas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Schachte</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Søndergaard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Stuckey</surname>
          </string-name>
          ,
          <article-title>Dissecting widening: Separating termination from information</article-title>
          ,
          <source>in: Proceedings of the 17th Asian Symposium on Programming Languages and Systems</source>
          , volume
          <volume>11893</volume>
          of Lecture Notes in Computer Science, Springer,
          <year>2019</year>
          , pp.
          <fpage>95</fpage>
          -
          <lpage>114</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>R.</given-names>
            <surname>Bagnara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. M.</given-names>
            <surname>Hill</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Mazzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Zaffanella</surname>
          </string-name>
          ,
          <article-title>Widening operators for weakly-relational numeric abstractions</article-title>
          ,
          <source>in: Proceedings of the 12th International Symposium on Static Analysis</source>
          , volume
          <volume>3672</volume>
          of Lecture Notes in Computer Science, Springer,
          <year>2005</year>
          , pp.
          <fpage>3</fpage>
          -
          <lpage>18</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>R.</given-names>
            <surname>Bagnara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. M.</given-names>
            <surname>Hill</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Zaffanella</surname>
          </string-name>
          ,
          <article-title>Weakly-relational Shapes for Numeric Abstractions: Improved Algorithms</article-title>
          and Proofs of Correctness,
          <source>Formal Methods in System Design</source>
          <volume>35</volume>
          (
          <year>2009</year>
          )
          <fpage>279</fpage>
          -
          <lpage>323</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>