       Adjudication of Coreference Annotations
via Finding Optimal Repairs of Equivalence Relations⋆

                                        Peter Schüller

                             Computer Engineering Department
                                 Faculty of Engineering
                               Marmara University, Turkey
                          peter.schuller@marmara.edu.tr



       Abstract. We describe encodings for merging multiple coreference annotations
       into a single annotation, subject to hard constraints (consistency) and optimiza-
       tion criteria (minimal divergence from annotators) using Answer Set Program-
       ming (ASP). This task requires guessing an equivalence relation with a large
       number of elements. We report on experiments with real-world instances based
       on the METU-Sabanci Turkish Treebank using the ASP tools Gringo, Clasp, and
       Wasp. We also describe a patch for Gringo which facilitates fine-grained instan-
       tiation analysis and use it to analyze experimental results.


Keywords: Coreference Resolution, Adjudication, Answer Set Programming


1   Introduction

Coreference Resolution [9, 14, 20, 21] is the task of finding phrases in a text that refer
to the same entity. Coreference is commonly annotated by marking subsequences of
tokens in the input text as mentions and putting sets of mentions into chains such that
all mentions in a chain refer to the same, clearly identifiable entity in the world.
     For example [22] in the text “John is a musician. He played a new song. A girl was
listening to the song. ‘It is my favorite,’ John said to her.” we can identify the following
mentions.
     [John] (i) is [a musician] (ii) . [He] (iii) played [a new song] (iv) .
     [A girl] (v) was listening to [the song] (vi) .
     “[It] (vii) is [ [my] (ix) favorite] (viii) ,” [John] (x) said to [her] (xi) .
Roman superscripts denote mention IDs. This text contains the following chains: {(i),
(iii), (ix), (x)} (John, He, my, John); {(iv), (vi), (vii)} (a new song, the song, It); and
{(v), (xi)} (A girl, her), where the numbers refer to mention IDs.1
     For supervised learning and verification of coreference resolution methods, anno-
tated corpora are an important resource. For creating such resources, we usually give
⋆
   This work has been supported by the Scientific and Technological Research Council of Turkey
   (TUBITAK), Grant 114E430.
 1
   Mention (viii) is in a predicative relationship with (vii). Per convention such mentions are
   not considered coreferent, hence (viii) is missing in the second chain.




  the same document to several annotators who produce mention and chain annotations
  independently. Afterwards we enter a process of manually adjudicating these—often
  conflicting—annotations to produce a single gold standard. Annotations in coreference
  resolution are global on the document level, i.e., it is not possible to decide the truth of
  the annotation of one token, mention, or chain, without considering other tokens, men-
  tions, and chains in the same document. Merging such annotations is a tedious task,
  and the final decision on the gold standard is made based on the adjudicators’ expert
  opinion; however, tool support for this task would be useful.
       We here present a method for automatic merging of coreference annotations based
  on combinatorial optimization, where the result combines the information provided by
  all annotators while performing minimal corrective modifications to ensure consistency
  of the output. Our method is based on ASP optimization and provides various parame-
  ters for defining which corrective modifications to prefer.
       We here focus on the experimental evaluation of two encodings for this task which
  use different underlying representations for the equivalence relation:
      – an encoding representing a transitive closure of mention-mention links, scoring
        repairs based on the closure, and deriving a mention-chain representation from the
        closure; and
      – another encoding which directly represents mention-chain links and scores repairs
        based on pairs of mention-chain links.
  We compare the Clasp [19] and Wasp [2] solver tools with branch-and-bound as well
  as with unsatisfiable-core optimization modes on both encodings and several objective
  function variations. Moreover, we describe a patch to the Gringo [18] instantiation tool
  that allows us to shed more light on the reasons for the performance of encoding variants
  by obtaining a detailed analysis of the instantiation of nonground rules.
      In the following we provide preliminaries on Coreference Resolution and Answer
  Set Programming in Section 2, describe our approach for automatic adjudication in
  Section 3, provide ASP encodings in Section 4, discuss experimental evaluation and
  instantiation analysis with Gringo in Section 5, and conclude with related and future
  work in Section 6.


  2     Preliminaries
  We next introduce coreference resolution informally and mathematically, and then give
  a brief introduction of Answer Set Programming.

  2.1    Coreference Resolution
  Coreference resolution is the task of finding phrases in a text that refer to the same
  entity [9, 14, 20, 21, 26, 33]. We call such phrases mentions, and we call a group of
  mentions that refers to one entity chains.
      In formal mathematical terms, we can describe mention detection and coreference
  resolution as follows. Given a document D which is a sequence of tokens w1 , . . . , wn ,
  mention detection is the task of finding a set M = {(f1 , t1 ), . . . , (fm , tm )} of mentions,



  i.e., pairs (fi , ti ) of indexes, 1 ≤ i ≤ m, into the sequence of tokens (1 ≤ fi ≤ ti ≤ n).
  Given a set M of mentions, coreference resolution is the task of partitioning M into
  k (disjoint) partitions (chains) P such that all sequences of tokens wfi , . . . , wti corre-
  sponding to mention (fi , ti ) in one chain refer to the same entity.

  Example 1 (Introduction example continued). We have 31 tokens (including punctu-
  ation). Some of the tokens are w1 = ‘John’, w2 = ‘is’, . . . , w30 = ‘her’, w31 = ‘.’, the
  set of mentions is M = {(1, 1), (3, 4), (6, 6), . . ., (30, 30)} and the correct coreference
  partitions are P = {{(1, 1), (6, 6), (23, 23), (27, 27)}, . . . , {(12, 13), (30, 30)}}, where
  (12, 13) represents ‘A girl’ and (30, 30) represents ‘her’.

      Given a set of mentions, there are exponentially many potential solutions to the
  coreference resolution problem, and finding a globally optimal solution is NP-hard ac-
  cording to most measures of coreference optimality [31]. In coreference annotation,
  humans create both mentions and chains, so the problem becomes more involved and
  automatic methods for creating consistent result annotations are helpful.


  2.2   Answer Set Programming

  Answer Set Programming (ASP) is a logic programming paradigm which is suitable for
  knowledge representation and finding solutions for computationally (NP-)hard prob-
  lems [5, 6, 17, 23]. An ASP program consists of rules of the form

                                        Head ← Body.

  where Head is either a first-order atom or a choice construction, and Body is a conjunc-
  tion of first-order atoms. Intuitively, a rule makes the head logically true if the body is
  satisfied. Rules without a body are facts (the head is always true) and rules without a head
  are constraints (if the body is satisfied, the candidate solution is discarded). Choices of
  the form L {A1 ; . . . ; An } U generate all solution candidates where between L and U
  atoms from the set {A1 , . . . , An } are true.
      In addition to combinatorial search, by using weak constraints of the form

                                 :~ Body.         [Cost@Tuple]

  we can perform combinatorial optimization: a weak constraint incurs cost Cost for
  each unique tuple Tuple such that Body is satisfied in the answer set with respect to
  that tuple.
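      As a small, self-contained illustration of this syntax (a toy program of our own, not taken
  from the encodings in this paper; the predicates item/1 and pick/1 are hypothetical):

      item(a). item(b). item(c).          % facts: heads that are always true
      1 { pick(X) : item(X) } 2.          % choice: select between 1 and 2 items
      :- pick(a), pick(b).                % constraint: discard candidates containing both a and b
      :~ item(X), not pick(X). [1@1, X]   % weak constraint: cost 1 per item that is not picked

  Here the optimal answer sets pick two items while avoiding the forbidden combination, at a
  total cost of 1.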
      For details of syntax and semantics we refer to the ASP-Core-2 standard [7]. Effi-
  cient solvers are available for computing answer sets; in this work we use the Gringo
  4.5.0 [18] grounder and experiment with the Clasp 3.1.2 [19] and Wasp 2.0 [2] solvers.


  3     Automatic Coreference Adjudication

  We first describe the problem more formally and give an example of inconsistent an-
  notations, then we describe how we approach the problem and its input and output in



  ASP. Finally we provide and describe two logic programs that use different internal
  representations.
      Mathematically, we can describe adjudication of multiple coreference resolution
  annotations as follows.
        Given a document D of tokens as described in Section 2.1, and u ≥ 2 partitions
        P1 , . . . , Pu of mentions (where each partition can be based on a different un-
        derlying set of mentions) we need to create a single set of mentions M and a
        single partition P which groups the set M into chains.
      Clearly, annotations might be contradictory and we need to ensure certain structural
  constraints in the solution. For example mention A cannot be coreferent with mention
  B if A includes B. Moreover, a solution might contain equivalences between mentions
  that are not present in any annotation. This is because chains are equivalence relations,
  and if we merge equivalence relations that are not subsets of each other, the new equiva-
  lence relation is the reflexive, symmetric, and transitive closure of the original relations.
  Example 2 (continued). Assume that in parallel to chain {(i), (iii), (ix), (x)} from the In-
  troduction, we obtain (from another annotator) a chain {(i), (ii), (x)}. If we merge these
  chains naively by taking the union of the sets, we obtain a single chain {(i), (ii), (iii), (ix), (x)},
  although no annotator indicated that (ii) and (iii) belong to the same chain.
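      To make this concrete, the following toy ASP sketch (our own illustration; the predicates
  eq/2 and eqc/2 and the mention constants m1, . . . , m10 are hypothetical) shows how the naive
  merge arises as the symmetric and transitive closure of the union of both chains:

      % links implied by chain {(i),(iii),(ix),(x)} of the first annotator
      eq(m1,m3). eq(m1,m9). eq(m1,m10).
      % links implied by chain {(i),(ii),(x)} of the second annotator
      eq(m1,m2). eq(m2,m10).
      eqc(X,Y) :- eq(X,Y).
      eqc(X,Y) :- eqc(Y,X).            % symmetric closure
      eqc(X,Z) :- eqc(X,Y), eqc(Y,Z).  % transitive closure

  The closure eqc then contains, e.g., eqc(m2,m3), i.e., a coreference link between (ii) and (iii)
  that no annotator stated.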
        Therefore we need to merge chains in a more intelligent way.

  3.1    Merge Representation and Merge Preferences
  We want to use as much information from annotations as possible, while keeping con-
  sistency of the solution. We achieve this by attaching a cost to ignored mentions, to
  ignored links between mentions, and to links that were not annotated but arise due to
  merged chains.
      Our conceptual model for merging annotations from multiple sources is as follows.
    – A subset of annotated chain-mention links is selected from all annotators.
    – From the resulting chains, each chain is represented as a set of links between all
      mentions it contains (i.e., as a symmetric, transitive relation).
    – A subset of these links is selected to build the combined result.
    – The transitive and reflexive closure of these selected links is created, which pro-
      vides an equivalence relation over the mentions of all annotators.
    – We use this equivalence relation as new set of chains.
    – We ensure the solution adheres to two constraints: no mention in a chain can be part of
      another mention in that chain, and each chain must contain more than one mention.
     We search for the choice of chain-mention and mention-mention links that opti-
  mizes an objective function. As objective functions we incur a cost of
    – costm (M ) for each ignored mention M in a chain;
    – costl (M , M 0 ) for each ignored link between mentions M and M 0 ; and
    – costa (M , M 0 ) for each link between mentions M and M 0 that is present in the
      result equivalence relation but not part of any annotation.
  In Section 4.3 we provide concrete preference configurations of these functions.



  3.2   Input and Output ASP Representation

  Given u partitions P1 , . . . , Pu where each chain C ∈ Pi contains a set of mentions of
  the form (from, to), we create facts of the form mention(A, M , Fr , To) where A represents
  the partition (i.e., which annotator created the annotation), M is a unique identifier for
  each mention, and (Fr , To) represents the mention. Moreover we create facts of the form
  cm(A, Chain, M ) for each mention with identifier M that is in a chain with identifier
  Chain according to annotator A.

  Example 3 (continued). Given the text from our running example, we can mark tokens
  (including punctuation) with integers as follows.
                John1 is2 a3 musician4 .5 He6 played7 a8 new9 song10 .11
      Consider that annotator a1 created chain {‘John’a , ‘a musician’b , ‘He’c } while an-
  notator a2 produced {‘musician’d , ‘He’e } where superscripts indicate mention IDs.
  These annotations would be represented by the following ASP facts.

                       mention(a1 , a, 1 , 1 ) ← . cm(a1 , c1 , a) ← .
                       mention(a1 , b, 3 , 4 ) ← . cm(a1 , c1 , b) ← .
                       mention(a1 , c, 6 , 6 ) ← . cm(a1 , c1 , c) ← .
                       mention(a2 , d , 4 , 4 ) ← . cm(a2 , c2 , d ) ← .
                       mention(a2 , e, 6 , 6 ) ← . cm(a2 , c2 , e) ← .

  where c1 and c2 represent the first and second chain, respectively.

      The output of our logic program will be a set of chains without annotator informa-
  tion. Therefore we want to obtain atoms of the form resultcm (Chain, mid (Fr , To))
  which indicate that in chain Chain there is a mention from token Fr to token To.

  Example 4 (continued). Assume we have merged the annotations of the previous ex-
  ample into two chains v = {‘John’, ‘a musician’} and w = {‘musician’, ‘He’}; this
  would be represented by the following atoms:

               resultcm (v, mid (1, 1))                resultcm (v, mid (3, 4))
               resultcm (w, mid (4, 4))               resultcm (w, mid (6, 6))

  Note that mentions frequently contain other mentions; however, two mentions are usually
  forbidden from being in the same chain if one contains the other.


  4     ASP Encodings

  We next provide two ASP encodings that realize the above merging strategy. The first
  encoding explicitly represents the transitive closure of mention-mention links, while
  the second encoding avoids such a representation by representing the resulting equiva-
  lence relation only as chain-mention links. We use functions cost X for configuring the
  preference relation; these are part of the encoding and resolved during instantiation.




    { usecm (A, Chain, M ) : cm(A, Chain, M ) } ← .                                              (1)
    :~ cm(A, C , M ), not usecm (A, C , M ).                    [costm (M )@1 , A, C , M , cm]   (2)
    link (A, M1 , M2 ) ← usecm (A, C , M1 ), usecm (A, C , M2 ), M1 < M2 .                       (3)
    { uselink (A, M1 , M2 ) } ← link (A, M1 , M2 ).                                              (4)
    :~ link (A, M1 , M2 ), not uselink (A, M1 , M2 ).       [costl (M1 , M2 )@1 , A, M1 , M2 , mm] (5)
    clink (X , Y ) ← X = mid (Fr1 , To1 ), Y = mid (Fr2 , To2 ), X < Y ,
           uselink (A, M1 , M2 ), mention(A, M1 , Fr1 , To1 ), mention(A, M2 , Fr2 , To2 ).      (6)
    clink (X , Y ) ← X = mid (Fr1 , To1 ), Y = mid (Fr2 , To2 ), X < Y ,
           uselink (A, M2 , M1 ), mention(A, M1 , Fr1 , To1 ), mention(A, M2 , Fr2 , To2 ).      (7)
    clink→ (X , Y ) ← clink (X , Y ).                                                            (8)
    clink→ (X , Y ) ← clink→ (Y , X ).                                                           (9)
    clink→ (X , Z ) ← clink→ (X , Y ), clink→ (Y , Z ).                                         (10)
    :~ clink→ (X , Y ), X < Y , not clink (X , Y ).            [costa (X , Y )@1 , X , Y , add] (11)
    notrepr (Y ) ← clink→ (X , Y ), X < Y .                                                     (12)
    resultcm (X , Y ) ← clink→ (X , Y ), not notrepr (X ).                                      (13)
    resultc (C ) ← resultcm (C , _).                                                            (14)
       ← resultc (C ), #count { Y : resultcm (C , Y ) } ≤ 1 .                                   (15)
       ← clink (mid (F1 , T1 ), mid (F2 , T2 )), F1 ≤ F2 , T2 ≤ T1 , (F1 , T1 ) ≠ (F2 , T2 ).   (16)


                                     Fig. 1. Encoding variant MM (full).



  4.1     Mention-Mention Encoding (MM)

  Figure 1 shows encoding MM. Rule (1) guesses which subset of mention-chain anno-
  tations from all annotators is used, while the weak constraint (2) incurs cost costm (M )
  for each mention-chain annotation that is not present in the solution. Each pair of dis-
  tinct mentions in the same chain induces a link between these mentions (3), rules (4)
  guess a subset of these links to be used in the result, and the weak constraint (5) in-
  curs a cost costl (M , M ′ ) for each link that is not present in the solution. Note that we
  canonicalize the symmetric link relation using the < operator on mention IDs.
       Rules (6) and (7) project away annotator information and use new constant terms
  of form mid (Fr , To) to identify links between mentions using just start and end tokens
  Fr and To, respectively.2 The relation clink (·, ·) is a full canonical representation of all
  mentions and mention-mention links in a candidate solution.
       Rules (8)–(10) define in clink → the symmetric, reflexive, and transitive closure of
  the clink relation. This closure represents an equivalence relation between mentions,
  i.e., the result chains, and is the distinguishing feature of the MM encoding.

   2
       Both rules are required because IDs of mentions could be sorted differently from their occur-
       rence in the text.





          ann(A) ← cm(A, _, _).                                                              (17)
          count_chain(A, N ) ← ann(A), N = #count { C : cm(A, C , _) }.                      (18)
          maxchain(N ∗ 3 /2 ) ← N = #max { C : count_chain(_, C ) }.                         (19)
          { resultc (X ) : X = 1 ..Max } ← maxchain(Max ).                                   (20)
          resultc (X − 1 ) ← resultc (X ), 1 < X .                                           (21)
          cmention(M ) ← clink (M , _).                                                      (22)
          cmention(M ) ← clink (_, M ).                                                      (23)
          1 { resultcm (C , M ) : resultc (C ) } 1 ← cmention(M ).                           (24)
          ← clink (X , Y ), resultcm (C , X ), not resultcm (C , Y ).                        (25)
          ← clink (X , Y ), not resultcm (C , X ), resultcm (C , Y ).                        (26)
          :~ resultcm (C , X ), resultcm (C , Y ), X < Y , not clink (X , Y ).
                                                             [costa (X , Y )@1 , X , Y , add]   (27)


            Fig. 2. ASP rules for encoding variation CM which replaces rules (8)–(11).


      Based on clink→ , weak constraint (11) incurs cost costa (M , M ′ ) for all pairs of
  mentions that are present in the result equivalence relation but not in any mention-
  mention link obtained from annotators (clink ).
      The equivalence relation clink→ is transformed back into a chain-mention represen-
  tation in (13) and (14) by using the lexicographically smallest mention as representative
  for each chain, where (12) identifies lexicographically non-smallest mentions.
      Finally we impose two structural constraints on the result: a chain has to contain
  more than one mention (15) and a mention that is within another mention cannot be in
  the same chain with that mention (16). Note that, thanks to the elaboration tolerance [25]
  of ASP, constraints (15) and (16) can be removed if the respective restriction should
  not be applied, or can be made more restrictive (e.g., mentions in a chain cannot be
  neighbors) independent from the rest of the encoding.
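      For instance, a hypothetical stricter variant in the spirit of (16) (our own sketch, not part of
  Figure 1) that additionally forbids two directly linked mentions from being adjacent in the token
  sequence could be added as follows, stated on the canonical clink relation:

      % sketch (assumption): no two directly linked mentions may be adjacent in the text
      :- clink(mid(F1,T1), mid(F2,T2)), F2 = T1 + 1.
      :- clink(mid(F1,T1), mid(F2,T2)), F1 = T2 + 1.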
      The encoding shown in Figure 1 admits as answer sets all possible consistent merges
  of the given annotations, including solutions where all annotations are ignored. Weak
  constraints prefer solutions where, according to the chosen cost functions, as many
  given annotations as possible are used and a single consistent solution is represented
  in the answer set. The balance between not ignoring annotations and not creating too
  many novel links between mentions can be adjusted using the cost functions.


  4.2   Chain-Mention Encoding (CM)

  The difference between MM and CM is the way we represent the equivalence relation
  and how we check the preference function, in particular links in the solution that are
  not present in annotations. For reasons of brevity, we provide this encoding as a patch
  to encoding MM: encoding CM is obtained by using rules (17)–(27) shown in Figure 2
  instead of rules (8)–(11) in encoding MM.



      In the CM encoding variant, (17) represents annotators, (18) finds the number of
  chains for each annotator, and (19) computes 1.5 times the largest number of chains
  annotated by any single annotator in the input. Encoding CM is based on the assumption
  that the preferred result will not contain more chains than this bound. This assumption
  is safe because differences in annotation granularity are consistent over the whole
  document: some annotators simply produce more chains throughout, so 1.5 times the
  largest number of chains of any single annotator is a sufficient upper bound (using the
  preference relations we study, this bound is never reached in practical solutions).
      Given the maximum number of resulting chains, (20) guesses which chains will ac-
  tually exist, and (21) ensures that if we use a certain chain ID, then we also use the
  ID below it (this eliminates symmetries by enforcing the use of consecutive chain IDs).
  Rules (22) and (23) collect all canonical mentions in cmention(·). For each of these
  mentions, (24) guesses in which chain that mention will be in the result. Constraints (25)
  and (26) enforce that the guessed solution in resultcm corresponds to those links of an-
  notators that have been selected to be part of the solution in clink .
      To incur a cost for transitive edges that are not present in annotations, we cannot use
  clink → as in encoding MM. Instead, in weak constraint (27) we again join the resultcm
  relation with itself on the chain ID.


  4.3   Weak and Strong Constraints for Preferences

  The encodings we have described so far impose a cost on chain/mention and men-
  tion/mention annotations that are not reflected in the solution, and a cost on men-
  tion/mention annotations in the solution that were not annotated by human annotators.
       If we replace the weak constraints by strict constraints (and remove the attached
  cost) then we obtain solutions corresponding to infinite cost of a constraint violation,
  i.e., the constraints must never be violated. Note that, if we replace all weak constraints
  by strict constraints, inconsistent annotations will not admit a solution. To guarantee
  existence of a solution, we must allow either to ignore chain/mention annotations, i.e.,
  costm (M ) must be finite, or to ignore mention/mention annotations, i.e., costl (M, M 0 )
  must be finite.
       Table 3 shows all objective function configurations we experiment with in this work.
  A dash ‘-’ indicates that the constraint was used as a strict constraint, an integer indi-
  cates a constant cost per ignored annotation, while L indicates a cost depending on the
  length |M | of mention M , computed as follows.

                                                          
     costm (M ) = 3 if |M | = 1;  2 if 2 ≤ |M | ≤ 4;  1 otherwise.
     costl (M , M ′ ) = 3 if |M | + |M ′ | = 2;  2 if 3 ≤ |M | + |M ′ | ≤ 8;  1 otherwise.
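      As an illustration only (our own sketch, not the paper's concrete implementation), such a
  length-dependent costm could be resolved during instantiation by deriving mention lengths from
  the mention/4 facts and letting a variant of weak constraint (2) read the cost from a helper
  predicate:

      % sketch (assumption): the length of a mention is To - Fr + 1 tokens
      mlen(M, To - Fr + 1) :- mention(_, M, Fr, To).
      costm(M, 3) :- mlen(M, 1).
      costm(M, 2) :- mlen(M, L), L >= 2, L <= 4.
      costm(M, 1) :- mlen(M, L), L >= 5.
      % variant of weak constraint (2) that uses the derived cost
      :~ cm(A, C, M), not usecm(A, C, M), costm(M, Cost). [Cost@1, A, C, M, cm]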

      Some rationale for the choice of objectives in Table 3 is as follows. Annotators
  make more mistakes with longer mentions, and for this reason it can be useful to assign
  a higher cost to ignoring mentions that contain fewer tokens. Therefore cost functions



           Encoding/Approximation    OOM    OOT    SAT    OPT      T      M   Tgrd
                                       #      #      #      #    sec     MB    sec
           CM/25                       0    182    592    234    239    498    4.0
           CM/∞                       96    245    562    105    250   1372   10.1
           MM/25                     181    150    324    353    175   1329   10.8
           MM/∞                      181    162    354    311    186   1339   10.9

                   Table 1. Variation of encoding variant and approximation.



  can be constant or depend on the length of the involved mentions. We could ensure tran-
  sitivity by enforcing that we never merge chains that would require additional mention-
  mention links; however, this would cause more mentions or their links to be ignored, and
  we would like to believe that human annotators create annotations for a reason. There-
  fore we put a cost on additional links that is lower than the cost of ignoring a given
  annotation (in some cases this cost is even zero). Moreover, ignoring a chain-mention
  link implicitly removes at least one mention-mention link, and in large chains it removes
  many mention-mention links. Therefore we put a higher cost on ignoring chain-mention
  links than on ignoring mention-mention links. A more comprehensive analysis of the
  implications of the cost functions in Table 3 for the practical usefulness of solutions is a
  topic of future work. In this work our purpose is to provide a configurable framework
  and study the influence of the cost function on the feasibility of computation.


  5   Experimental Evaluation

  To evaluate our approach we used 21 documents from the METU-Sabanci Turkish Tree-
  bank [29] where documents have between 494 and 2600 mentions (1199 on average)
  and between 247 and 1300 chains (600 on average) with each chain containing between
  19 and 53 mentions (34 on average). Each document was annotated by between 6 and
  9 annotators (8 on average).
      Experiments were performed on a computer with 48 GB RAM and two Intel E5-
  2630 CPUs (total 16 cores) using Debian 8 and at most 8 concurrently running jobs
  (each job using a single core, as we did not use solvers in multithreaded mode). We lim-
  ited memory usage to 5 GB and time usage to 300 sec. For instantiation we used Gringo
  4.5.0 [18], for solving Clasp 3.1.2 [19] and Wasp 2.0 [2]. We use Clasp with parame-
  ters --opt-strategy=bb and --opt-strategy=usc, and Wasp with parameter
  --weakconstraints-algorithm=basic as well as without arguments (which corre-
  sponds to unsat-core based optimization).
      As our instances are very hard under the abovementioned memory and time con-
  straints, we additionally experiment with an approximation of the preference relation:
  we only use a subset of the weak constraints; concretely, we omit weak constraints (for ig-
  noring or adding mention-mention links) whenever the pair of mentions has a distance
  of more than 25 tokens in the document.
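      The following is our own sketch (not the implementation used in the experiments) of how
  such a filter could be expressed, using the internal name clink_tr for clink→ as in Table 4, a
  hypothetical predicate near/2, and a constant cost of 1 for simplicity:

      % sketch (assumption): mention pairs whose start tokens are at most 25 tokens apart
      near(mid(F1,T1), mid(F2,T2)) :- mention(_,_,F1,T1), mention(_,_,F2,T2), |F1 - F2| <= 25.
      % variant of weak constraint (11), charged only for near pairs; (5) is guarded analogously
      :~ clink_tr(X,Y), X < Y, not clink(X,Y), near(X,Y). [1@1, X, Y, add]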
      In our experimental results, the columns OOM, OOT, SAT, and OPT show the number
  of runs that exceeded 5 GB, exceeded 300 sec, found at least one satisfiable solution,



         Encoding/Solver   OOM   OOT   SAT   OPT     T      M   Tgrd    Chc   Conf    Lem
                             #     #     #     #   sec     MB    sec      #      #      #
         CM/Clasp/B          0     0   241    11   291    402    4.1     8M     1M     1M
         CM/Wasp/B           0     0   247     5   297   1039    4.0     1M    41M   297K
         CM/Clasp/U          0   120     0   132   156    248    4.0     1M   492K   492K
         CM/Wasp/U           0    62   104    86   213    305    4.0     1M     2M    31K
         MM/Clasp/B         42     0   199    11   269   1158   10.8     3M     1M     1M
         MM/Wasp/B          49    83   112     8   251   1641   10.9   514K    11M   295K
         MM/Clasp/U         42    27     0   183    79   1055   10.8   316K    10K    10K
         MM/Wasp/U          48    40    13   151   102   1462   10.7   114K   478K     2K

      Table 2. Variation of computation method and encodings with 25-token approximation.




  and found the first optimal solution and proved its optimality, respectively. Moreover
  columns T , M , and Tgrd give the average total time, memory, and instantiation time of
  all runs. In our experiments, all OOM and OOT conditions were reached during solving,
  never during grounding.
       For comparing the feasibility of computing solutions with MM and CM encodings
  and with or without approximation, Table 1 depicts accumulated results over all 21
  documents, 4 solver variations, and 12 objective functions (i.e., each table row summa-
  rizes 1008 runs). We can observe that encoding MM exhausts memory much more than
  encoding CM, and that we can only fit into 5 GB by using CM and the 25-token approx-
  imation. Although CM fits into less memory, it requires more time on average and we
  find more optimal solutions with encoding MM, both with and without the 25-token ap-
  proximation. Instantiation times are similar for all configurations, except for the CM/25
  combination which requires half the time of the other configurations for reasons we try
  to explain in Section 5.1.
       Table 2 compares computational methods for both encodings with 25-token ap-
  proximation across all 21 instances and 12 objective functions. We see that for getting
  some answer set (SAT or OPT), bound-based optimization and CM encoding are better
  than configurations with unsat-core optimization or encoding MM, i.e., CM/·/B pro-
  vides (suboptimal) solutions for all instances. However, the highest number of optimal
  solutions is obtained with encoding MM and unsat-core optimization: here Clasp per-
  forms better (183 optimal solutions) than Wasp (151) in this setting. The columns
  Chc, Conf , and Lem, show the number of choices, conflicts, and learned clauses, re-
  spectively (in runs that did not exhaust memory). Clasp did not find any satisfiable
  solutions in unsat-core (U) mode (either the solutions were optimal or the timeout was
  reached), while Wasp found many solutions before optimality. On the other hand, in
  branch-and-bound mode (B) Clasp finds more optimal and more satisfiable solutions
  than Wasp. The memory usage of Wasp is significantly higher than the one of Clasp
  in mode B, while for CM/·/U this difference is smaller (248 MB vs 305 MB).
       In summary, for branch-and-bound optimization Clasp seems more suitable, while
  for unsat-core optimization, Wasp seems more suitable.
       As only the CM encoding with 25-token approximation fits into memory for all
  configurations, and because only branch-and-bound optimization finds solutions in all



                 Objective                  Clasp/B                      Wasp/B
        costm   costl   costa     SAT   OPT     T      M       SAT   OPT     T      M
                                    #     #   sec     MB         #     #   sec     MB
          -       1       -        21     0   300    560        21     0   300    804
          -       1       0        19     2   274    443        20     1   287   1132
          1       -       -        21     0   300    547        21     0   300   1981
          1       -       0        19     2   274    161        20     1   292   1686
          2       -       1        20     1   295    190        21     0   300   1383
          2       1       -        21     0   300    567        21     0   300    706
          2       1       0        19     2   281    520        20     1   295    727
          3       2       1        21     0   300    366        21     0   300    888
          6       3       1        20     1   294    423        21     0   300    857
          -       L       1        20     1   287    450        20     1   288    806
          L       -       1        20     1   288    194        20     1   291    770
         2L       L       1        20     1   292    402        21     0   300    721

  Table 3. Variation of objective with encoding CM, branch-and-bound, 25-token approximation.



  cases, we compared several possible objective functions using the CM/25 configuration.
  Table 3 shows the result, both with Clasp and Wasp in branch-and-bound optimiza-
  tion mode. We see that there are no big differences in feasibility between the various
  objective functions: only a few instances are solved to optimality, and therefore average
  time requirements are very close to the timeout of 300 seconds. However, we can observe
  an interesting memory pattern: memory usage is high for Wasp in exactly those cases
  where it is low for Clasp, namely for objective functions that do not permit ignoring
  mention-mention links, use a fixed cost of 2 for ignoring chain-mention annotations, and
  use either cost 1 or no cost for additional mention-mention links. Moreover, memory
  requirements are more stable for Clasp, while they vary more and are higher in general
  for Wasp.

  5.1    Instantiation Analysis with Gringo
  To gain additional insights about the instantiated program, we have modified Gringo
  4.5.0 to count the instantiations of each nonground rule and to extend the output of gringo
  --verbose with a table that shows for each nonground rule the number of times it was
  instantiated with 0 body conditions (i.e., as facts), with 1 and 2 body conditions (i.e.,
  as rules that can be propagated particularly efficiently), and as rules with 3 or more
  body conditions. Our publicly available patch3 shows these counts for the intermediate
  representation of Gringo.
       Table 4 shows those parts of the instantiation analysis table where rules have been
  instantiated more than 10k times for an instance of average difficulty (1298 mentions,
  649 chains, max 35 mentions per chain).
       In the MM encoding, the main instantiation effort (725k rules) originates in the tran-
  sitive closure rules (10). On the other hand, in the CM encoding the main instantiation
   3
       https://github.com/peschue/clingo/tree/grounder-stats




      Conditions in body              Nonground rule
       0     1      2     3+          MM encoding
       0   12K      0      0          clink_tr(X,Y) :- clink_tr(Y,X).
       0     0   725K      0          clink_tr(X,Z) :- clink_tr(Y,Z), clink_tr(X,Y).
       0   453    12K      0          result_cm(X,Y) :- clink_tr(X,Y), not not_repr(X).
       0   12K      0      0          result_c(C) :- result_cm(C,_).
       0     1      2     3+          CM encoding
       0     0      0   112K          :- result_cm(C,X), clink(X,Y), not result_cm(C,Y).
       0     0      0   112K          :- result_cm(C,Y), clink(X,Y), not result_cm(C,X).
       0     0  5212K   112K          :~ result_cm(C,Y), X