               Shift-of-Perspective Identification within Legal Cases
Gathika Ratnayaka (gathika.14@cse.mrt.ac.lk), Thejan Rupasinghe (thejanrupasinghe.14@cse.mrt.ac.lk), Nisansa de Silva (nisansaDdS@cse.mrt.ac.lk), Viraj Salaka Gamage (viraj.14@cse.mrt.ac.lk), Menuka Warushavithana (menuka.14@cse.mrt.ac.lk), Amal Shehan Perera (shehan@cse.mrt.ac.lk)
Department of Computer Science & Engineering, University of Moratuwa, Moratuwa, Sri Lanka
ABSTRACT

Arguments, counter-arguments, facts, and evidence obtained via documents related to previous court cases are essential to legal professionals. Therefore, the process of automatic information extraction from documents containing legal opinions related to court cases can be considered to be of significant importance. This study is focused on the identification of sentences in legal opinion texts which convey different perspectives on a certain topic or entity. We combined several approaches based on semantic analysis, open information extraction, and sentiment analysis to achieve our objective. Then, our methodology was evaluated with the help of human judges. The outcomes of the evaluation demonstrate that our system is successful in detecting situations where two sentences deliver different opinions on the same topic or entity. The proposed methodology can be used to facilitate other information extraction tasks related to the legal domain. One such task is the automated detection of counter-arguments for a given argument. Another is the identification of opponent parties in a court case.

KEYWORDS

semantic analysis, sentiment analysis, natural language processing, information extraction, law

In: Proceedings of the Third Workshop on Automated Semantic Analysis of Information in Legal Text (ASAIL 2019), June 21, 2019, Montreal, QC, Canada. © 2019 Copyright held by the owner/author(s). Copying permitted for private and academic purposes. Published at http://ceur-ws.org

1 INTRODUCTION

Documents describing legal opinions related to previous court cases carry significant importance in the legal literature. The information presented in these legal opinion texts is used in different capacities, such as evidence, arguments, and facts, by legal officials in the process of constructing new legal cases [29]. Therefore, information extraction from legal opinion texts can be considered an area of significant importance within the topic of automatic information extraction in the legal domain. In order to perform systematic information extraction from a legal opinion text, a system should be able to interpret the meaning of a given text. In the process of interpreting the meaning of a text, understanding the context can be considered a major requirement, especially in the legal literature.

Identifying how textual units are related to each other within a machine-readable text is an important task when it comes to interpreting the context. Humans are good at comparing two textual units to determine the way in which those two units are connected. Granting this ability to computers is a major discussion topic in research related to the areas of Natural Language Processing and Artificial Intelligence. A sentence can be considered a textual unit of significant importance in a text. Therefore, analysis of the relationships between sentences can be useful to get a clear picture of the information flow within a text which is made up of a considerable number of sentences.

Similarly, identifying the types of relationships existing between sentences in legal opinion texts can be used to identify the information flow within a legal case. Within a document describing legal opinions related to a court case, different types of relationships between sentences can be observed, such as elaboration and contradiction. Pairs of sentences can be classified into two major groups based on whether the topics discussed by the two sentences in the pair are the same or not. In other words, the two sentences in a sentence pair may discuss the same topic or they may discuss completely different topics. Even if the two sentences are discussing the same topic, the opinions or views presented in the two sentences on the topic may be different. Consider the following sentence pair taken from Lee v. United States [2].

Example 1
• Sentence 1.1: Applying the two-part test for ineffective assistance claims from Strickland v. Washington, 466 U. S. 668, the Sixth Circuit concluded that, while the Government conceded that Lee's counsel had performed deficiently, Lee could not show that he was prejudiced by his attorney's erroneous advice.
• Sentence 1.2: Lee has demonstrated that he was prejudiced by his counsel's erroneous advice.

The above two sentences discuss whether a person named Lee was able to show that he was prejudiced by his attorney's advice or not. While the first sentence says that Lee could not show that he
was prejudiced by his attorney's advice, the second sentence contradicts the first sentence by saying that Lee has demonstrated that he was prejudiced by his counsel's erroneous advice. Thus, the two sentences provide different opinions on the same topic. Contradiction is not a necessary condition for classifying a pair of sentences as providing different opinions on the same topic. For example, consider Example 2, which consists of two adjacent sentences also taken from Lee v. United States [2].

Example 2
• Sentence 2.1: Although he has lived in this country for most of his life, Lee is not a United States citizen, and he feared that a criminal conviction might affect his status as a lawful permanent resident.
• Sentence 2.2: His attorney assured him there was nothing to worry about–the Government would not deport him if he pleaded guilty.

It can be seen that both sentences in this example discuss the same topic – the deportation of a person named Lee. Though the two sentences here do not provide contradictory information, they provide two different viewpoints regarding the same topic. It can be seen that the opinions of Lee and his attorney on the possibility of Lee being deported are different. Therefore, when discussing sentences with different opinions on the same topic, not only sentences providing contradictory information but also sentences providing multiple viewpoints on the same discussion topic should be considered. In each of the above two examples, Sentence 1 comes before Sentence 2. From this point onward, the first sentence in a sentence pair will be referred to as the Target Sentence and the second sentence as the Source Sentence.

An important observation which can be made from Example 2 is that identifying the shift in viewpoint on that particular occasion is not straightforward. This implicit nature makes the task of identifying sentences which provide different opinions on the same discussion topic even more challenging. At the same time, it can be considered a vital task due to its potential to enhance the capabilities of information extraction from legal text by facilitating automatic detection of counter-arguments, identification of the stance of a particular party in a court case, and discovery of multiple viewpoints to analyze or evaluate a particular legal situation.

Hence, the objective of this study is to identify sentences which have different perspectives on the same discussion topic in a given court case. For this study, legal opinion texts related to United States court cases were used. The next section provides details on previous work related to our study. Section 3 describes the methodology followed in this study, while the outcomes of the study are discussed in Section 4. Finally, we conclude our discussion in Section 5.

2 RELATED WORK

Computing applications for the legal domain which can be considered both efficient and effective are scarce due to the challenges in handling legal jargon [11, 12, 25]. The nature of legal documents, which employ a vocabulary of mixed origin ranging from Latin to English, has been put forward as a reason for the difficulty of building computing applications for the legal domain [29].

Regardless, there have been some recent attempts to circumvent these problems in the legal domain, including information organization [10–12], information extraction [28], and information retrieval [29]. Going forward, owing to the popularity of knowledge embedding in the literature, several studies have taken up the task of embedding legal jargon in vector spaces [19, 28]. Further, in the information extraction domain, the study by Gamage et al. [8] attempted to build a sentiment annotator for the legal domain, and the study by Ratnayaka et al. [24] attempted to identify relationships among sentences in legal opinion texts.

Discovering situations where two sentences provide different opinions on the same topic or entity is an important part of identifying relationships among sentences [24]. Contradiction is a sufficient but not a necessary condition in this regard. The study [17] focused on finding contradictions in text related to the real-world context. In an attempt to define contradiction, the same study [17] claims that "contradiction occurs when two sentences are extremely unlikely to be true simultaneously", and the study [20] also agrees with that definition. However, the study [17] also demonstrates that two sentences can be contradictory while being true simultaneously. These characteristics of contradiction make the process of detecting contradiction relationships more complex.

In order to be contradictory, two textual units can elaborate not only on the same event but also on the same entity. For example, if one sentence in a sentence pair says that a person is a United States citizen while the other sentence says that the very same person is not a United States citizen, it is obvious that the two sentences provide contradictory information. Here, the contradictory information concerns a person, which can be considered an entity. Therefore, it is more reasonable to consider that, in order to be contradictory, texts must elaborate on the same topic.

In order to detect contradiction, different features based on the properties of text have been considered in previous studies [17, 20]. Polarity features and numeric mismatches are commonly used features of this kind. The study [17] empirically shows that the precision of detecting contradiction falls when numeric mismatches are considered.

The structures of the texts also play a vital role when it comes to contradiction detection [17, 20]. Analysis of text structure is helpful in identifying the common entity or event on which the contradiction occurs. When the structure of a given sentence is considered, the subject-object relationship plays an important role [3, 18]. Analysis of typed dependency graphs [4] is another useful approach to understand the structure of a particular text and to obtain necessary information.

Polarities of the sentences in relation to their sentiments can also play a vital role in identifying sentence pairs which provide different opinions on the same topic. It can be observed that the seminal RNTN (Recursive Neural Tensor Network) model [27], which is trained on movie reviews, is used in many recent studies [16, 26] which perform sentiment analysis. The trained RNTN model [27] has a bias towards movie review text [8]. In order to overcome this problem, the study by Gamage et al. [8] proposed a methodology to develop a sentiment annotator for the
legal domain using transfer learning, which obtained a 6% increase in accuracy over the original model [27] within the legal domain.

The study by de Silva et al. [7] introduces a new algorithm to calculate the oppositeness of triples extracted from microRNA research paper abstracts using open information extraction. As the study proposes a mechanism to detect inconsistencies within paragraphs, we see it as one potential methodology which can be adapted to detect the Shift-in-View relationship between sentences. However, as the above-mentioned study [7] specifically focuses on discovering inconsistencies in the medical domain, the proposed methodology needs to be adapted to the legal domain in order to detect shifts in perspective in legal opinion texts. From this point onward in this paper, we will refer to the study [7] as the PubMed Study.

In the study [32], discourse relations between sentences have been used to generate clusters of similar sentences within texts. A Support Vector Machine model is used in this study [32] to determine the relationships existing between sentences. In the process of multi-class classification performed using the SVM model, the study [32] defined a class named Change of Topic which combines the Contradiction and Change of Perspective relations as defined in Cross-Document Structure Theory (CST) [23]. The study [32] obtained lower results for Change of Topic than for other relationship types, and it attributes the average results to a lack of significant features which could properly detect Contradiction and Change of Perspective. CST relations and data from the CST Bank have also been used to train an SVM model in another study [24] in order to predict relationships between sentences in the legal domain. Though that study improved the features in [32] and introduced new features which suit the legal domain, the results obtained in relation to the Contradiction and Change of Perspective relationships as defined in CST [23] are very low. One possible reason is that the CST Bank [22] data set is made up of sentences from newspaper articles, where the structural and linguistic features may differ from those in court case transcripts, especially when it comes to relationships such as Contradiction and Change of Perspective.

In identifying whether two sentences provide different perspectives or opinions regarding the same topic, it is important to identify whether the two sentences are discussing the same topic. The study [24] proposed a successful methodology to identify whether two given sentences are discussing the same topic or not. In the same study [24], the five relationships that can be observed between two sentences are defined as shown below. From this point onward, we refer to the system proposed in the study [24] as the Sentence Relationship Identifier (SRI).

• Elaboration - One sentence adds more details to the information provided in the preceding sentence, or one sentence develops further the topic discussed in the previous sentence.
• Redundancy - Two sentences provide the same information without any difference or additional information.
• Citation - A sentence provides references relevant to the details provided in the previous sentence.
• Shift-in-View - Two sentences provide conflicting information or different opinions on the same topic or entity.
• No Relation - No relationship can be observed between the two sentences. One sentence discusses a topic which is different from the topic discussed in the other sentence.

It can be seen that the relationship type Shift-in-View defined in the SRI study [24] aligns with the relationship type that is being discussed in this study. This can be further confirmed by looking at how CST [23] relationships are adopted in the study [24].

Table 1: Adopting CST Relationships [24]

Definition      CST Relationships
Elaboration     Paraphrase, Modality, Subsumption, Elaboration, Indirect Speech, Follow-up, Overlap, Fulfillment, Description, Historical Background, Reader Profile, Attribution
Redundancy      Identity
Citation        Citation
Shift-in-View   Change of Perspective, Contradiction
No Relation     -

As shown in Table 1, the Shift-in-View relationship includes both the Contradiction and Change of Perspective relationships as defined in CST [23]. The Elaboration, Redundancy, Shift-in-View, or Citation relationships defined in the study [24] suggest that a sentence pair is discussing the same topic, while No Relation suggests that the two sentences are discussing completely different topics. It has been stated that the SRI is able to detect situations where the discussion topic changes with considerable accuracy [24]. However, it is also stated that the proposed methodology is not able to detect situations where two sentences provide different opinions on the same topic. The results obtained in that study [24] are shown in Table 2.

Table 2: Confusion Matrix from the Sentence Relationship Identifier study [24] (rows: actual class, columns: predicted class)

Actual          Elaboration   No Relation   Citation   Shift-in-View     Σ
Elaboration        93.9%          6.1%         0.0%         0.0%         99
No Relation        11.9%         88.1%         0.0%         0.0%         42
Citation            0.0%          4.8%        95.2%         0.0%         21
Shift-in-View     100.0%          0.0%         0.0%         0.0%          3
Σ                   101            44           20            0         165

It is clear that the machine learning model inside the SRI is not able to detect the Shift-in-View relationship. However, Table 2 shows that sentence pairs having Shift-in-View relationships are detected as Elaboration. This can be considered a positive aspect, as Elaboration suggests that both sentences are elaborating on the same topic, which is a necessary condition when detecting sentences providing different perspectives on the same topic or entity, as described in other studies [17, 20] too.
Table 3: Results from the Sentence Relationship Identifier study considering Sentence Pairs where Both Judges Agree [24]

Discourse Class   Precision   Recall   F-Measure
Elaboration         0.921     0.939      0.930
No Relation         0.841     0.881      0.861
Citation            1.000     0.952      0.975
Shift-in-View         -         0          -

3 METHODOLOGY

3.1 Identifying Sentence Pairs where Both Sentences Discuss the Same Topic

Detecting sentence pairs which provide different opinions on the same topic first requires identifying whether the two sentences are discussing the same topic. Therefore, as the first step, we implemented the Sentence Relationship Identifier (SRI), as it is successful in identifying whether two sentences are discussing the same topic or not [24].

According to the study [24], the Elaboration, Redundancy, Citation, and Shift-in-View relationships occur when both sentences discuss the same topic. Shift-in-View occurs over Elaboration when the two sentences provide different opinions on that topic.

We only consider sentence pairs which are detected as having the Elaboration relationship type when identifying whether a Shift-in-View relationship is present. Though the Redundancy and Citation relationship types also suggest that two sentences are discussing the same topic, the sentence pairs detected with those relationship types are not considered. As the Redundancy relationship suggests that two sentences provide the same information, there is no possibility of having different perspectives. In the Citation relationship, one sentence provides evidence or references to confirm the details presented in the other sentence; thus, it is improbable that the two sentences provide different perspectives on the same topic.

However, if the machine learning model described in the study [24] detects a pair of sentences as having the Shift-in-View relationship, such a pair will be treated as a sentence pair which provides different opinions on the same topic. Confirming the observations of the study [24], the SRI did not identify any pair of sentences as having the Shift-in-View relationship.

3.2 Filtering Sentences using Transition Words and Phrases

There are Transition Words or Transition Phrases which suggest that the Source Sentence of a sentence pair is elaborating or building up on the Target Sentence. In the Source Sentence of Example 3 (which was taken from Lee v. United States [2]), the transition word "Accordingly" implies that the Source Sentence is being developed while agreeing with the Target Sentence.

Example 3
• Sentence 3.1: Lee's claim that he would not have accepted a plea had he known it would lead to deportation is backed by substantial and uncontroverted evidence.
• Sentence 3.2: Accordingly we conclude Lee has demonstrated a "reasonable probability that, but for [his] counsel's errors, he would not have pleaded guilty and would have insisted on going to trial"

Therefore, when such a Transition Word or Transition Phrase is present in the Source Sentence, the sentence pair is considered as having the Elaboration relationship. As a result, such sentence pairs are not processed further for detecting the Shift-in-View relationship type. We have implemented this mechanism as a way to increase the precision of the Shift-in-View detection approaches. Given below are some of the Transition Words and Transition Phrases we used.

Transition Words: thus, accordingly, therefore
Transition Phrases: as a result, in such cases, because of that, in conclusion, according to that
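A minimal sketch of this filter, assuming the abbreviated cue lists above (the full lists we maintain are longer) and simple prefix matching on the Source Sentence, is given below; the function name is ours, for illustration only.

TRANSITION_WORDS = {'thus', 'accordingly', 'therefore'}
TRANSITION_PHRASES = ['as a result', 'in such cases', 'because of that',
                      'in conclusion', 'according to that']

def is_elaborating(source_sentence):
    # True when the Source Sentence opens with a transition cue, in which
    # case the pair is kept as Elaboration and skipped by the
    # Shift-in-View detection approaches.
    text = source_sentence.lower().lstrip()
    words = text.split()
    if not words:
        return False
    return (words[0].strip(',') in TRANSITION_WORDS
            or any(text.startswith(phrase) for phrase in TRANSITION_PHRASES))

For Sentence 3.2 above, is_elaborating returns True because the sentence opens with "Accordingly".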
3.3 Use of Coreferencing

Prior to checking for linguistic features which imply that a sentence pair shows the Shift-in-View relationship, coreference resolution is performed on the sentence pair. For coreferencing, the Stanford CoreNLP CorefAnnotator ("coref") [5] was used. Coreference resolution provides a better picture when the same entities are mentioned in the two sentences using different names [24].

3.4 Analyzing Relationships between Verbs

The first linguistic approach to detect deviations in the opinions expressed in sentences regarding a particular topic is based on verb comparison. Under this approach, verbs are compared using the negation relationship and using adverbial modifiers.

In this approach, subject-object pairs in the Target Sentence are compared with those of the Source Sentence. If the subject or object in one sentence is present in the other sentence, the verbs in the sentences are considered. Here, we do not consider verbs which are lemmatized into "be" or "do", in order to focus only on effective verbs. The Stanford CoreNLP POS Tagger ("pos") [30] was used to identify verbs in sentences. After extracting the verbs in the two sentences, each verb in the Target Sentence is compared with each verb in the Source Sentence to detect verb pairs with similar meaning.

3.4.1 Determining Verbs which Convey Similar Meanings. In order to convey a similar meaning, it is not necessary that both verbs are the same. Also, when semantic similarity measures between two verbs are considered, it can be observed that there are verb pairs which have very similar meanings but different semantic similarity scores. For example, if the lemmatized forms of the verbs in Example 1 are considered, it can be observed that the verb show in the Target Sentence and the verb demonstrate in the Source Sentence have similar meanings. Confirming that observation further, a Wu-Palmer similarity score of 1.0 can be obtained for that verb pair. When the lemmatized forms of the verbs in the two sentences of Example 2 are considered, it can be observed that the word "fear" in the Target Sentence and "worry" in the Source Sentence are two verbs with similar meanings. However, the Wu-Palmer semantic similarity score between the verbs fear and worry is 0.889. Therefore, it is necessary to determine an acceptable threshold based on semantic similarity scores in order to identify verbs with similar meanings.

In order to determine this threshold, we first took 1000 verb pairs from legal opinion texts whose Wu-Palmer similarity scores are
greater than 0.75. As our objective is to identify pairs of verbs with similar meanings, a Wu-Palmer score of 0.75 was observed to be a reasonable lower bound as per the precision values. We annotated those 1000 pairs of verbs based on whether a given verb pair actually has two verbs with similar meanings or not. Then we gradually incremented the threshold from 0.75 to 0.95 and observed the precision and recall values, as shown in Table 4.

In addition to Wu-Palmer scores, we performed the same experiment on the verb pairs using all eight semantic similarity measures available in WordNet [21]. It was observed that Jiang-Conrath [13] and Lin [15] are the two measures which provide reasonable accuracy in addition to Wu-Palmer semantic similarity [31]. The results from these experiments are shown in Table 4 and in Fig. 1. It could be observed that Lin outperforms the other two measures when F-Measures are considered. It can be seen that 0.75 is the Lin score with the highest F-Measure, but this is due to a considerably high recall and an undesirably low precision. As our intention is to maintain a proper balance between precision and recall, a Lin score of 0.86, the score with the second-highest F-Measure, is selected as the threshold to detect verb pairs with similar meaning.

Table 4: Results Comparison for Different Wu-Palmer, Jiang-Conrath, and Lin Score Thresholds

        Wu-Palmer                        Jiang-Conrath                    Lin
Score   Precision   Recall   F-Measure   Precision   Recall   F-Measure   Precision   Recall   F-Measure
0.75     45.65%    100.00%    62.68%      70.78%     51.54%    59.64%      57.29%     72.37%    63.95%
0.80     51.39%     77.19%    61.70%      70.31%     49.34%    57.99%      60.39%     67.54%    63.77%
0.85     54.59%     69.08%    60.99%      71.02%     48.90%    57.92%      64.76%     62.06%    63.38%
0.86     59.34%     59.21%    59.28%      71.02%     48.90%    57.92%      67.15%     60.96%    63.91%
0.90     64.49%     49.78%    56.19%      71.25%     48.90%    58.00%      70.40%     53.73%    60.95%
0.95     72.69%     41.45%    52.79%      71.43%     48.25%    57.59%      72.60%     46.49%    56.68%

[Figure 1: Variation of F-Measures with regard to Different Similarity Measures]
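The following sketch illustrates the verb-pair check of this subsection using NLTK's WordNet interface; it is our own illustration rather than the system's code, and the choice of the Brown information-content corpus is an assumption, as the paper does not state which corpus was used for the Lin measure.

from nltk.corpus import wordnet as wn
from nltk.corpus import wordnet_ic

# Lin similarity requires an information-content corpus; Brown IC is assumed.
brown_ic = wordnet_ic.ic('ic-brown.dat')

LIN_THRESHOLD = 0.86  # threshold selected above

def max_lin_similarity(verb1, verb2):
    # Best Lin score over all verb-sense pairs of two lemmatized verbs.
    best = 0.0
    for s1 in wn.synsets(verb1, pos=wn.VERB):
        for s2 in wn.synsets(verb2, pos=wn.VERB):
            try:
                score = s1.lin_similarity(s2, brown_ic)
            except Exception:  # senses outside the IC corpus, etc.
                continue
            if score is not None and score > best:
                best = score
    return best

def similar_meaning(verb1, verb2):
    return max_lin_similarity(verb1, verb2) >= LIN_THRESHOLD

# Per Example 1, 'demonstrate' and 'show' should be detected as similar
# (the exact score may vary with the chosen IC corpus).
print(similar_meaning('demonstrate', 'show'))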
3.5 Detecting Shift-in-View Relationships by Comparing Properties Related to Identified Verbs

3.5.1 Negation on Verbs. Using the negation relationship is a popular approach when it comes to detecting inconsistencies and contradictions in text [7, 9, 17]. In this study, we checked for the negation relationship on the verbs in verb pairs identified using the method proposed in Section 3.4. If one verb is detected as negated while the other verb is not, the sentence pair is considered as having the Shift-in-View relationship. The Stanford CoreNLP dependency parser was used to detect negation by identifying occurrences of the "neg" tag, as described in the Stanford typed dependencies manual [6].
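A hedged sketch of this negation check is given below. It assumes a Stanford CoreNLP server running locally and a parser model that emits the "neg" label of the Stanford typed dependencies [6]; newer Universal Dependencies models mark negation differently (e.g., an advmod with a negation word), so the label may need adjusting per version. The helper name and server address are ours.

import json
import requests

CORENLP_URL = 'http://localhost:9000'  # assumed server location

def negated_verb_indices(sentence):
    # Return indices of tokens governed by a 'neg' dependency.
    props = {'annotators': 'tokenize,ssplit,pos,depparse',
             'outputFormat': 'json'}
    resp = requests.post(CORENLP_URL,
                         params={'properties': json.dumps(props)},
                         data=sentence.encode('utf-8'))
    negated = set()
    for sent in resp.json()['sentences']:
        for dep in sent['basicDependencies']:
            if dep['dep'] == 'neg':
                negated.add(dep['governor'])  # head word being negated
    return negated

A verb pair is then flagged when exactly one of the two matched verbs appears in the negated set of its sentence.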
3.5.2 Using Adverbial Modifiers to Detect Shifts-in-View. Another approach to detecting different viewpoints on the same subjects or entities can be formulated by considering adverbial modifiers. If the adverbial modifiers related to two verbs with similar meanings give opposite or contradictory meanings, the viewpoints on how that task was performed are different. Therefore, the adverbial modifiers related to the verbs in verb pairs identified using the methodology described in Section 3.4 were considered. We classified adverbial modifiers into the three main classes shown in Table 5. Within each class, there exists a positive subclass and a negative subclass; in Table 5, the positive subclass of each class is listed first, followed by the negative subclass. After defining the major classes into which adverbial modifiers can be classified, lists containing adverbs related to each class were created. Table 5 also contains examples of adverbs related to each type; it does not include all the adverbs we maintain in the lists.

Table 5: Adverbial Modifiers

Type Class   Type Name         Modifiers
Frequency    more frequent     always, often, regularly
             less frequent     accidentally, never, not, less, loosely, rarely, sometimes
Tone         amplifiers        so, well, really, literally, simply, for sure, completely, absolutely
             down toners       kind of, sort of, mildly, to some extent, almost, all but
Manner       positive manner   elegantly, beautifully, confidently
             negative manner   lazily, ugly, faint heartedly

If the adverbial modifiers connected to both verbs in a verb pair with similar meaning belong to the same adverbial modifier type but have opposite polarities (one positive and one negative), it can be identified that the two sentences provide different views in relation to the entities that are connected by those verbs.
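The comparison of adverbial modifiers can be sketched as follows; the word lists are abbreviated from Table 5 (the maintained lists are longer), multi-word cues would need phrase-level matching, and the class/polarity encoding is our own illustration.

ADVERB_CLASSES = {
    ('frequency', '+'): {'always', 'often', 'regularly'},
    ('frequency', '-'): {'accidentally', 'never', 'not', 'less',
                         'loosely', 'rarely', 'sometimes'},
    ('tone', '+'): {'so', 'well', 'really', 'literally', 'simply',
                    'completely', 'absolutely'},
    ('tone', '-'): {'mildly', 'almost'},
    ('manner', '+'): {'elegantly', 'beautifully', 'confidently'},
    ('manner', '-'): {'lazily', 'ugly'},
}

def classify(adverb):
    for (cls, polarity), words in ADVERB_CLASSES.items():
        if adverb.lower() in words:
            return cls, polarity
    return None

def modifiers_conflict(adverb1, adverb2):
    # True when the two modifiers share a class but have opposite
    # polarities, signalling a Shift-in-View between the verbs they modify.
    c1, c2 = classify(adverb1), classify(adverb2)
    return (c1 is not None and c2 is not None
            and c1[0] == c2[0] and c1[1] != c2[1])

print(modifiers_conflict('always', 'never'))  # True: same class, opposite polarity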
3.6 Discovering Inconsistencies between Triples

Following the methodology presented in the PubMed Study [7], a legal term dictionary was constructed to serve as a semantic lexicon for the system. 200+ legal opinion texts were used to extract words for the process. Then a word list consisting of 17,000+ unique words was developed by removing stop words. A TF-IDF-based method [14] is used to calculate a value for each term in the dictionary.

\[ TermValue = \frac{\sum_{i=1}^{casecount} \frac{f_{t,d}}{termcount}}{D.F} \quad (1) \]

The raw count (f_{t,d}) of each term is taken, considering each legal opinion text as a separate document. The term frequency value for a term is calculated by dividing the raw term count by the total number of terms in the case. The term frequency values across all cases are added together, and the result is divided by the document frequency (D.F) to calculate the value of the term in the dictionary. Then all the term values are normalized according to Equation 2.

\[ NormalizedTV = \frac{(TV - TV_{min}) \times (1 - TV_{min})}{TV_{max} - TV_{min}} + TV_{min} \quad (2) \]

Here TV_{min} and TV_{max} represent the minimum and maximum of the term values respectively. This normalized value serves as the semantic weight for the system.
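The two equations can be realized compactly as follows; this is our own illustration of Equations (1) and (2) rather than the authors' released code, treating each legal opinion text as one tokenized document.

from collections import Counter

def term_values(cases):
    # cases: list of token lists, one list per legal opinion text.
    tf_sum = Counter()    # numerator of Eq. (1): summed per-case term frequencies
    doc_freq = Counter()  # D.F: number of cases containing the term
    for tokens in cases:
        total = len(tokens)
        for term, count in Counter(tokens).items():
            tf_sum[term] += count / total
            doc_freq[term] += 1
    return {term: tf_sum[term] / doc_freq[term] for term in tf_sum}

def normalize(values):
    # Eq. (2): min-max scaling; assumes at least two distinct term values.
    tv_min, tv_max = min(values.values()), max(values.values())
    return {term: (tv - tv_min) * (1 - tv_min) / (tv_max - tv_min) + tv_min
            for term, tv in values.items()}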
First, coreference resolution is done on the sentence pairs using the Stanford CoreNLP CorefAnnotator [5], and the pairs with Transition Words and Phrases are filtered out. Then OLLIE [18], an open information extraction system, is used to extract triples, in (Subject; Relationship; Object) format, from the sentences. When comparing two sentences for the Shift-in-View relationship, only triple pairs with the same subject or object are considered, as the Shift-in-View relationship concerns different perspectives on the same topic or entity. The relationship strings of a triple pair, with stop words removed, are then compared with each other word by word. The comparison is performed in three ways.

(1) Words which are exactly the same
(2) Exactly the same words with one word negated with "not"
(3) Different words

In our study we consider the negation of words with similar meanings (Lin score above 0.86) instead of considering only words which are exactly the same. Then, an oppositeness value is obtained for each sentence pair by comparing the triples following the algorithmic approach proposed in the PubMed Study [7]. A threshold based on the oppositeness values was introduced empirically to select sentence pairs which have the Shift-in-View relationship.
3.7 Sentiment-based Approach

Though valuable information can be obtained by analyzing the sentiment of a sentence, the sentiment of a sentence alone hardly gives any details on the topics which are being discussed within the sentence or on the viewpoint from which the sentence describes the topic. It is known that the two sentences which are being compared to detect shifts in view discuss the same topic, as we consider only the sentence pairs with the "Elaboration" relationship. But when the sentences in legal opinion texts are considered, even if the sentiments of two sentences which elaborate on the same discussion topic are different, it cannot be concluded that the two sentences provide different opinions on the topic.

The reason is that the person entities which are described in a sentence and connected with the sentiment of the sentence have a significant impact on the topic which is being discussed. For example, consider two sentences which elaborate on the same discussion topic and have opposite sentiments. If the sentiment of the sentence with negative sentiment is connected with the proposition party while the sentiment of the sentence with positive sentiment is connected with the opposition party, it might be the case that both sentences convey opinions which are beneficial for the opposition party in relation to the topic which is being discussed.

The problem becomes even more complex when the sentence is made up of several sub-sentences, because each sub-sentence may have a "Subject" of its own. Therefore, when using the sentiment-based approach to detect the "Shift-in-View" relationship, we consider only the sentence pairs in which each sentence has only one explicit subject. If the subjects in both sentences of such a sentence pair are the same, it can be concluded that the two sentences are elaborating on the same topic in relation to the same subject. Then, it is checked whether the two sentences provide sentiments with opposite polarities. If one sentence provides negative sentiment and the other provides positive sentiment while discussing the same topic in relation to the same subject, the probability of the two sentences giving different perspectives on the same topic is very significant.

In this approach, sentences which are composed with subordinate clauses are first split using those clauses. When the sentence is split using a subordinating conjunction, the subordinate clause can be identified as another sentence entity. Throughout this section, we will refer to the subordinate clause as the inner sentence, and the main clause will be referred to as the outer sentence. After the sentence is annotated using the Stanford CoreNLP Constituency Parser [4], the splitting happens by identifying terms associated with the SBAR tag.

The proposed approach is based on analyzing the sentiment of this inner sentence to identify whether there is a Shift-in-View relation between a sentence pair. If we consider Example 4 (which was taken from Lee v. United States [2]), the phrases "Lee cannot convince the court that a decision to reject the plea bargain" and "he can establish prejudice under Hill" are the inner sentences. The outer sentences are "The Government argues" and "Lee, on the other hand, argues".

Example 4
• Sentence 4.1: The Government argues that Lee cannot "convince the court that a decision to reject the plea bargain.
• Sentence 4.2: Lee, on the other hand, argues that he can establish prejudice under Hill.

If we consider the sentence pair mentioned in Example 4, the subject of both inner sentences is Lee. The phrase "Lee cannot convince the court that a decision to reject the plea bargain" carries a negative sentiment, while the other inner sentence "Lee can establish prejudice
under Hill" denotes a positive sentiment. Both outer sentences have neutral sentiment. Therefore, it can be observed that there is a shift in view regarding the subject Lee.
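A hedged sketch of this inner-sentence check is given below. It assumes a bracketed constituency parse is already available (e.g., from the Stanford CoreNLP parser [4]) and that a sentiment(text) function stands in for the legal-domain annotator of [8]; both helpers, and the omission of the same-subject check for brevity, are our own simplifications.

from nltk.tree import Tree

def inner_sentences(parse_str):
    # Yield the token span under each SBAR node (the "inner sentences").
    tree = Tree.fromstring(parse_str)
    for subtree in tree.subtrees(lambda t: t.label() == 'SBAR'):
        yield ' '.join(subtree.leaves())

def sentiment_shift(parse1, parse2, sentiment):
    # True when the first inner sentences of the two parses carry opposite
    # sentiment polarities; `sentiment` maps text to 'positive',
    # 'negative', or 'neutral'.
    inner1 = next(inner_sentences(parse1), None)
    inner2 = next(inner_sentences(parse2), None)
    if inner1 is None or inner2 is None:
        return False
    return {sentiment(inner1), sentiment(inner2)} == {'positive', 'negative'}

For Example 4, the two inner sentences receive negative and positive sentiment respectively, so sentiment_shift would report a Shift-in-View.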
4 EXPERIMENTS AND RESULTS

As the first step, the three major approaches used to detect the Shift-in-View relationship type were evaluated. In order to perform this evaluation, 2150 sentence pairs from legal opinion texts related to criminal court cases were extracted from FindLaw [1]. Each of these sentence pairs contains two sentences which are consecutive within a legal opinion text document. Next, the extracted sentence pairs were input into the Sentence Relationship Identifier (SRI). The sentence pairs which were identified as having Elaboration by the SRI were processed further in order to detect whether a Shift-in-View relationship exists, using the three Shift-in-View detection approaches described in Section 3.

As the next step, the sentence pairs detected as having the Shift-in-View relationship under each approach were taken into consideration. The number of detected sentence pairs from each approach is shown in Table 6. Then, the precision of each approach was calculated. All 46 sentence pairs detected by the Verb Relationships approach were used when calculating the precision of that approach. 100 sentence pairs randomly selected from the detected 246 sentence pairs identified using the Sentiment Polarity approach were used to determine the precision of that approach. 95 sentence pairs were detected by the approach which uses inconsistencies between triples to determine Shift-in-View; all of those 95 sentence pairs were used to calculate the precision of that approach. The precision values obtained for each of these approaches are also shown in Table 6. When performing this evaluation, each sentence pair was first annotated by two human judges. If the two judges did not agree on a relationship type for a particular sentence pair, that sentence pair was annotated by an additional human judge. When the results were calculated, consideration was given only to the sentence pairs for which at least two human judges agreed on the same relationship type.

Table 6: Results Comparison of Approaches used to detect Shift-in-View

Approach                          No. of Sentence Pairs   Precision
Verb Relationships                         46               0.609
Sentiment Polarity                        230               0.382
Inconsistencies between Triples            95               0.273

Due to the scarcity of resources, it was not possible to annotate all 2150 sentence pairs based on the relationship type. As a result, calculating the recall of each approach was not possible. If Table 2, related to the SRI study [24], is considered, it can be observed that only 3 out of 165 sentence pairs were determined by the human judges as having the Shift-in-View relationship type. This suggests that the Shift-in-View relationship type does not occur frequently among sequential sentence pairs in a legal opinion text. Furthermore, Table 2 suggests that the SRI tends to misattribute sentence pairs having Shift-in-View as having the Elaboration relationship type. That means the SRI is successful in determining whether the two sentences in a sentence pair discuss the same topic or not. In such circumstances, it is important to be precise when determining that a sentence pair has the Shift-in-View relationship type. Considering these facts, we can conclude that it is important to prioritize the Shift-in-View detection approaches based on precision.

According to Table 6, the precision which could be obtained from analyzing relationships between verbs is around 0.6. As mentioned earlier, we selected the Lin semantic similarity score of 0.86 as the threshold to identify verbs with similar meaning after analyzing different semantic similarity measures. The precision of identifying similar verbs at the 0.86 Lin score is 0.67. Thus, there is potential to improve the precision of detecting Shift-in-View relationships using relationships between verbs by developing a semantic similarity measure which is more accurate in identifying verbs with similar meanings in the legal domain.

Using the sentiment-based model, the achieved precision is 0.38. There are a few possible reasons behind this observation. The study on the sentiment annotator model [8] used in this case states that the accuracy of the model is 76%. The study says that errors present in its parent model [27] can be propagated to the target model [8]. The paper on the source model [27], which is based on a recursive neural tensor network, shows that its accuracy is reduced to 0.5 when the n-gram length of a phrase increases (n>10). As most of the sentences in court case transcripts are reasonably lengthy, the proposed sentiment-based approach used for the identification of Shift-in-View is potentially affected by the above-mentioned error.

Only a precision of 0.27 could be observed for the approach which considers inconsistencies between triples as proposed in the PubMed Study. The following reasons may have contributed to the poor performance of that approach. Of the 2150 sentence pairs which were considered, oppositeness values were not calculated for 1570 pairs. A sentence pair containing at least one sentence from which triples could not be extracted by OLLIE [18] is a major reason for not having an oppositeness value. Even if triples are extracted from both sentences, if there is no match between either the subjects or the objects of the two sentences, an oppositeness value will not be calculated for the sentence pair.

The evaluation results demonstrate that analysis of relationships between verbs in two sentences is the only approach which performs the task of detecting Shift-in-View relationships with a precision of more than 0.5. Many studies confirm the difficulty of detecting contradiction and change-of-perspective relationships compared with other relationship types that can be observed between sentences [17, 24, 32]. The study [17] also notes the difficulty of generalizing contradiction detection approaches. Considering these facts, the results obtained via analyzing verb relationships can be considered satisfactory. Therefore, we combined only that approach with the Sentence Relationship Identifier (SRI) and evaluated the overall system made up by combining Shift-in-View detection with the SRI, as shown in Table 7.

The results shown in Table 7 were obtained using 200 annotated sentence pairs. Each of the considered sentence pairs was agreed by
The results shown in Table 7 were obtained using 200 annotated sentence pairs, each of which at least two human judges agreed to have the same relationship type. Furthermore, 21 randomly selected sentence pairs which at least two human judges agreed as having Shift-in-View are contained within the 200 sentence pairs used in this evaluation.

Table 7: Results Obtained from Sentence Pairs in which At least Two Judges Agree

Discourse Class    Precision    Recall    F-Measure
Elaboration        0.938        0.930     0.933
No Relation        0.843        0.895     0.868
Citation           1.000        0.971     0.985
Shift-in-View      0.688        0.423     0.524
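As a sanity check, the F-Measure column of Table 7 is the harmonic mean of the corresponding precision and recall; for the Shift-in-View row, 2 × 0.688 × 0.423 / (0.688 + 0.423) ≈ 0.524.

```python
def f_measure(precision, recall):
    # Harmonic mean of precision and recall (F1 score).
    return 2 * precision * recall / (precision + recall)

print(round(f_measure(0.688, 0.423), 3))  # 0.524, matching Table 7
```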
According to Table 7, a significant improvement can be observed, especially in relation to the Shift-in-View relationship type, when compared with the results of the study [24] as given in Table 3.
5    CONCLUSION
Developing a methodology to detect situations where multiple viewpoints are provided in regard to the same discussion topic within a legal opinion text is the major research contribution of this study. The study has introduced novel approaches to detect deviations between the opinions provided by two sentences regarding the same topic. At the same time, existing methodologies to detect contradiction and change of perspective have been evaluated within the study. Additionally, the way in which the outcomes of the study can be used to facilitate the process of identifying relationships between sentences in documents containing legal opinions on court cases has been empirically demonstrated. The evaluation of the performance of existing semantic similarity measures in relation to identifying verbs with similar meanings can be considered another key research contribution of the study.

The proposed approach can also be used to facilitate several other information extraction tasks related to the legal domain, such as identifying counter-arguments to a particular argument and determining the representatives of the proposition party and the opposition party in a court case.

The accuracy of the approaches proposed in this study can be further improved by developing semantic similarity measures and sentiment annotators which perform with improved accuracy in the legal domain. Coming up with such mechanisms can be considered the major future work.
REFERENCES
[1] [n. d.]. Caselaw: Cases and Codes - FindLaw Caselaw. https://caselaw.findlaw.com/. ([n. d.]). (Accessed on 05/20/2018).
[2] 1977. Lee v. United States. In US, Vol. 432. Supreme Court, 23.
[3] Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D Manning. 2015. Leveraging linguistic structure for open domain information extraction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Vol. 1. 344–354.
[4] Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). 740–750.
[5] Kevin Clark and Christopher D. Manning. 2015. Entity-Centric Coreference Resolution with Model Stacking. In Association for Computational Linguistics (ACL).
[6] Marie-Catherine De Marneffe and Christopher D Manning. 2008. Stanford typed dependencies manual. Technical Report. Stanford University.
[7] Nisansa de Silva, Dejing Dou, and Jingshan Huang. 2017. Discovering Inconsistencies in PubMed Abstracts Through Ontology-Based Information Extraction. In Proceedings of the 8th ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics. ACM, 362–371.
[8] Viraj Gamage, Menuka Warushavithana, Nisansa de Silva, Amal Shehan Perera, Gathika Ratnayaka, and Thejan Rupasinghe. 2018. Fast Approach to Build an Automatic Sentiment Annotator for Legal Domain using Transfer Learning. arXiv preprint arXiv:1810.01912 (2018).
[9] Jesse A Harris and Christopher Potts. 2009. Perspective-shifting with appositives and expressives. Linguistics and Philosophy 32, 6 (2009), 523–552.
[10] Vindula Jayawardana, Dimuthu Lakmal, Nisansa de Silva, Amal Shehan Perera, Keet Sugathadasa, and Buddhi Ayesha. 2017. Deriving a representative vector for ontology classes with instance word vector embeddings. In Innovative Computing Technology (INTECH), 2017 Seventh International Conference on. IEEE, 79–84.
[11] Vindula Jayawardana, Dimuthu Lakmal, Nisansa de Silva, Amal Shehan Perera, Keet Sugathadasa, Buddhi Ayesha, and Madhavi Perera. 2017. Semi-supervised instance population of an ontology using word vector embedding. In Advances in ICT for Emerging Regions (ICTer), 2017 Seventeenth International Conference on. IEEE, 1–7.
[12] Vindula Jayawardana, Dimuthu Lakmal, Nisansa de Silva, Amal Shehan Perera, Keet Sugathadasa, Buddhi Ayesha, and Madhavi Perera. 2017. Word Vector Embeddings and Domain Specific Semantic based Semi-Supervised Ontology Instance Population. International Journal on Advances in ICT for Emerging Regions 10, 1 (2017), 1.
[13] Jay J Jiang and David W Conrath. 1997. Semantic similarity based on corpus statistics and lexical taxonomy. arXiv preprint cmp-lg/9709008 (1997).
[14] Jure Leskovec, Anand Rajaraman, and Jeffrey David Ullman. 2014. Mining of Massive Datasets. Cambridge University Press.
[15] Dekang Lin et al. 1998. An information-theoretic definition of similarity. In ICML, Vol. 98. Citeseer, 296–304.
[16] Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations. 55–60.
[17] Marie-Catherine de Marneffe, Anna N Rafferty, and Christopher D Manning. 2008. Finding contradictions in text. Proceedings of ACL-08: HLT (2008), 1039–1047.
[18] Mausam, Michael Schmitz, Robert Bart, Stephen Soderland, and Oren Etzioni. 2012. Open Language Learning for Information Extraction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL).
[19] John J Nay. 2016. Gov2vec: Learning distributed representations of institutions and their legal text. arXiv preprint arXiv:1609.06616 (2016).
[20] Michael J Paul, ChengXiang Zhai, and Roxana Girju. 2010. Summarizing contrastive viewpoints in opinionated text. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 66–76.
[21] Ted Pedersen, Siddharth Patwardhan, and Jason Michelizzi. 2004. WordNet::Similarity: measuring the relatedness of concepts. In Demonstration Papers at HLT-NAACL 2004. Association for Computational Linguistics, 38–41.
[22] Dragomir Radev, Jahna Otterbacher, and Zhu Zhang. 2003. CSTBank: Cross-document Structure Theory Bank. http://tangra.si.umich.edu/clair/CSTBank. (2003).
[23] Dragomir R Radev. 2000. A common theory of information fusion from multiple text sources step one: cross-document structure. In Proceedings of the 1st SIGdial Workshop on Discourse and Dialogue - Volume 10. Association for Computational Linguistics, 74–83.
[24] Gathika Ratnayaka, Thejan Rupasinghe, Nisansa de Silva, Menuka Warushavithana, Viraj Gamage, and Amal Shehan Perera. 2018. Identifying Relationships Among Sentences in Court Case Transcripts Using Discourse Relations. In 2018 18th International Conference on Advances in ICT for Emerging Regions (ICTer). IEEE, 13–20.
[25] Erich Schweighofer and Werner Winiwarter. 1993. Legal expert system KONTERM - automatic representation of document structure and contents. In International Conference on Database and Expert Systems Applications. Springer, 486–497.
[26] Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In Advances in Neural Information Processing Systems. 926–934.
[27] Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. 1631–1642.
[28] Keet Sugathadasa, Buddhi Ayesha, Nisansa de Silva, Amal Shehan Perera, Vindula Jayawardana, Dimuthu Lakmal, and Madhavi Perera. 2017. Synergistic union of word2vec and lexicon for domain specific semantic similarity. In Industrial and Information Systems (ICIIS), 2017 IEEE International Conference on. IEEE, 1–6.
[29] Keet Sugathadasa, Buddhi Ayesha, Nisansa de Silva, Amal Shehan Perera, Vindula Jayawardana, Dimuthu Lakmal, and Madhavi Perera. 2018. Legal Document Retrieval using Document Vector Embeddings and Deep Learning. arXiv preprint arXiv:1805.10685 (2018).
[30] Kristina Toutanova, Dan Klein, Christopher D Manning, and Yoram Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1. Association for Computational Linguistics, 173–180.
[31] Zhibiao Wu and Martha Palmer. 1994. Verbs semantics and lexical selection. In Proceedings of the 32nd Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics, 133–138.
[32] Nik Adilah Hanin Zahri, Fumiyo Fukumoto, and Suguru Matsuyoshi. 2012. Exploiting Discourse Relations between Sentences for Text Clustering. In 24th International Conference on Computational Linguistics. 17.