<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Investigating the Similarity of Court Decisions</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Sarika Jain</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Deepak Jaglan</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kapil Gupta</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>National Institute of Technology Kurukshetra, Department of Computer Applications</institution>
          ,
          <addr-line>Kurukshetra</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
      </contrib-group>
      <fpage>316</fpage>
      <lpage>326</lpage>
      <abstract>
<p>The association between words, phrases, and documents is referred to as semantic similarity. Semantic similarity plays a significant role in internet search engines for content ranking, and it has wide applications in information retrieval, artificial intelligence, and other fields. This paper comprehensively reviews the general architecture, the categorization of approaches, and the techniques and metrics for determining semantic similarity between documents. We have conducted experiments with different statistical methods, viz., word vector-based techniques (TF-IDF, LDA, Word2Vec, Doc2Vec, GloVe, and fastText) and transformer-based techniques (Longformer-base, Sentence-BERT-large-nli, Sentence-BERT-large-nli-stsb, and Sentence-RoBERTa-large-nli-stsb), over Indian Supreme Court decisions and discussed the results. The Doc2Vec approach over the whole document is found to correlate the most with the expert judgment.</p>
      </abstract>
      <kwd-group>
<kwd>Semantic Similarity</kwd>
        <kwd>Legal Document</kwd>
        <kwd>Document Embedding</kwd>
        <kwd>Cosine Similarity</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
Semantic similarity can be described as the degree of relatedness between words, sentences, and
documents. As a quantitative measure of meaning, it has evolved into a core
technique that is now widely used in a variety of fields, including biological computing [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ],
information retrieval [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], artificial intelligence [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], geoinformation [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], and natural language
processing [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], as well as other intelligent knowledge-based systems [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. As a use case,
the identification of related literature assists legal professionals in obtaining relevant
precedents. Several authors have studied similarity analysis of legal judgements [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. We review the
relevant literature, focusing primarily on text-based methods and deep-learning approaches such as
transformer models.
      </p>
      <p>Our focus in this paper is an exhaustive review of semantic similarity approaches in the
context of legal case documents in particular. The approach is not restricted to legal case
documents; it can be applied in various other domains. Throughout
this article, however, we concentrate on the legal arena.</p>
      <p>
        The need for accurate and reliable legal information retrieval is the most
pressing challenge in today’s legal community. Because the Common Law System is one of the most
widely followed legal systems globally, the success or failure of a case is heavily influenced by
precedents. The deluge of information on the internet has made it difficult for legal
practitioners to manually discover significant prior cases that appropriately serve their
current case. As a result, a likely answer is found by comparing the similarity of different
case documents, which various authors have recently studied [
        <xref ref-type="bibr" rid="ref10 ref8 ref9">8, 9, 10</xref>
        ]. Statistical methods, also
known as text-based methods, utilize the textual content of legal documents. Early methods
included only primitive text-based similarity measures, such as TF-IDF-based approaches. In [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ],
the authors improved the text-based technique with similarity measures such as topic
modeling and with neural network models such as word embeddings and document embeddings.
They also showed that the word vector-based approaches perform better than the other
approaches.
      </p>
      <p>The present paper outlines a review of the different methods in the text-based
category, along with an approximate validation of the experimental results. More precisely, we discuss:
1. Comprehensive details of the different semantic similarity approaches, providing
insight into the generalized architecture of the various techniques used in semantic
similarity.
2. In the context of the legal domain, we confine ourselves to word vector-based and
transformer-based approaches and discuss the experimental results we obtained with each
method.</p>
      <p>The layout of this paper is as follows: In the next section, we present an analytical discussion
comprising the general architecture for semantic similarity along with the different semantic
similarity approaches in detail. Section 3 briefly discusses the similarity measures and the
evaluation measures required for a comparative study; in the same
section, we detail the experimental results obtained in the context of the legal domain, followed
by the underlying discussion. Section 4 deals with the conclusions.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Analytical Discussion</title>
      <p>The main content of the present paper is depicted in the following flowchart (Figure 1): first,
we discuss the various document representation methods and document pre-processing.
After that, we discuss the semantic similarity approaches, followed by the similarity measures
and evaluation measures. The meaning of these terms will become clear in their
respective discussions.</p>
      <p>General Architecture</p>
      <p>[Figure 1: General architecture. Unstructured Text → Corpus Selection [Whole Document] → Representative Text → Data Preprocessing [Remove Punctuation, Stopwords; Perform Stemming] → Cleaned Text → Apply Semantic Similarity Approaches [Word Vector-Based, Transformer-Based] → Calculate Similarity of Document Pairs [Cosine Similarity] → Similarity Score → Performance Evaluation Measures [Pearson Coefficient], compared with the Expert Similarity Score; the approach and similarity-computation steps are sensitive to semantic information.]</p>
      <p>We feed in the input as unstructured text, and from it we select the corpus to obtain the
representative text. We then preprocess the representative text by first
removing punctuation and stopwords and then stemming, which yields a clean text.
Next, we model the clean text to extract the features of the documents, i.e., the
embeddings. For that, we employ semantic similarity techniques, viz., word vector-based
techniques (TF-IDF, LDA, Word2Vec, Doc2Vec, GloVe, and fastText) and transformer-based
techniques (Longformer-base, Sentence-BERT-large-nli, Sentence-BERT-large-nli-stsb, and
Sentence-RoBERTa-large-nli-stsb). We calculate the cosine similarity between these feature
vectors to obtain the similarity scores. Both the data modeling and the similarity measures can
utilize semantic information. Further, since we also have similarity scores from the experts, we
evaluate the Pearson coefficient between the similarity scores given by the experts and the
ones obtained by us. Hence, we quantify how well our methods perform compared to the
similarity scores given by the experts.</p>
      <sec id="sec-2-1">
        <title>2.1. Document Representation</title>
        <p>There are various approaches to document representation, viz., whole document, summary, paragraph,
thematic, and the reason for citations (RFCs).</p>
        <p>In whole-document representation, the whole of the document is taken into consideration,
while in summary representation only the important content is considered, leaving out the redundant
part. In paragraph representation, a set of paragraphs is considered in such a way
that each paragraph of one document is compared to all the paragraphs of the other document
in the corpus. The RFC method is citation-based, and it works on a similar note to the
paragraph-based method. In thematic representation, the theme of the document is taken into
consideration. After selecting meaningful representations from the text of the documents, their
similarity is measured.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Data Pre-processing</title>
        <p>Data preprocessing is crucial in preparing the data since we deal with unstructured text.
It transforms the text into a more digestible form. We now outline the steps involved.
First, all letters are converted to lowercase. The text is then tokenized into words
based on whitespace. Except for terms containing a hyphen, dot, or comma,
all non-alphabetic tokens are filtered away. After that, standard English
stopwords are removed from the list of words. Finally, we perform word stemming using the
Porter Stemmer. In this way, we obtain a better representation of our text.</p>
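<p>The preprocessing steps above can be sketched in Python. This is a minimal, illustrative sketch: the tiny stopword list and the naive suffix-stripping stemmer are stand-ins (assumptions) for the full English stopword list and the Porter Stemmer named in the text.</p>

```python
import re

# Illustrative stand-ins: a real pipeline would use a full English stopword
# list and a proper Porter stemmer; these minimal versions are assumptions.
STOPWORDS = {"the", "a", "an", "of", "in", "is", "and", "to", "by"}

def naive_stem(word):
    """Very rough stand-in for Porter stemming: strip common suffixes."""
    for suffix in ("ing", "edly", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def preprocess(text):
    # 1. Convert all letters to lowercase.
    text = text.lower()
    # 2. Tokenize into words based on whitespace.
    tokens = text.split()
    # 3. Keep alphabetic tokens, plus terms containing a hyphen, dot, or comma.
    tokens = [t for t in tokens if t.isalpha() or re.search(r"[-.,]", t)]
    # 4. Remove stopwords.
    tokens = [t for t in tokens if t not in STOPWORDS]
    # 5. Stem the remaining words.
    return [naive_stem(t) for t in tokens]

print(preprocess("The Court is hearing the appeals filed in 1998"))
```

<p>On the sample sentence, the numeric token is filtered out, stopwords are dropped, and the remaining words are stemmed.</p>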
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Semantic Similarity Approaches and Measures</title>
        <p>The main principles behind the existing approaches that we reproduce in this study are
described in this section. As previously indicated, existing approaches utilize various similarity
measures that are divided into three broad categories: (i) statistical similarity, (ii) graph-based
similarity, and (iii) document clustering-based similarity. We will present a detailed overview of
each category classified above.</p>
        <p>[Figure: Taxonomy of semantic similarity approaches. Statistical Based: Word Vector Based; String Based (Character-Based and Term-Based similarity measures); Transformer Based; Hybrid. Document Clustering: Classification and Clustering Algorithms. Graph Based: Ontology Based (Single Ontology Based, Cross Ontology Based, Lexical Resource); Relational; Citation Based; Metric Based; Hybrid; Embedding ontologies.]</p>
        <sec id="sec-2-3-1">
          <title>2.3.1. Statistical Similarity</title>
          <p>The statistical-based similarity approach is built on collecting texts either in written or spoken
forms. There are various ways to compare statistical similarities between legal documents,
viz., word vector-based, string-based, transformer-based, and hybrid-based. We confine our
experiments to the word vector-based and transformer-based techniques in this paper.</p>
          <p>
            The meaning of the word vector-based method is clear from its name: it defines a
vector representation of the documents. The methods derived from word vectors are the
TF-IDF technique, LDA, Word2Vec, Doc2Vec, GloVe, and fastText. In the TF-IDF approach, a single vector representation of
a given document (e.g., a legal document) is created. The similarity score between vectors
is computed with the aid of cosine similarity (see, e.g., [
            <xref ref-type="bibr" rid="ref8">8</xref>
            ]).
In contrast, as depicted in [
            <xref ref-type="bibr" rid="ref10">10</xref>
            ], the LDA technique is a topic modeling algorithm that captures
the semantics of the documents appropriately. The neural network-based models Word2Vec
and Doc2Vec give a vector for each distinct word (see, e.g., [
            <xref ref-type="bibr" rid="ref11">11</xref>
            ]) and each
document (see, e.g., [
            <xref ref-type="bibr" rid="ref12">12</xref>
            ]), respectively. Similar to the Word2Vec method, dense vectors are
constructed in both the GloVe (see, e.g., [
            <xref ref-type="bibr" rid="ref13">13</xref>
            ]) and fastText methods (see, e.g., [
            <xref ref-type="bibr" rid="ref14">14</xref>
            ]).
          </p>
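<p>A minimal sketch of the TF-IDF-plus-cosine pipeline described above, in pure Python. The normalized term-frequency and logarithmic inverse-document-frequency weighting is one common variant (an assumption here), and the three-document toy corpus is purely illustrative.</p>

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build sparse TF-IDF vectors (term -> weight) for tokenized docs."""
    n = len(docs)
    # Document frequency: in how many documents does each term occur?
    df = Counter(term for doc in docs for term in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: (count / len(doc)) * math.log(n / df[t])
                        for t, count in tf.items()})
    return vectors

def cosine(u, v):
    """Cosine similarity of two sparse vectors stored as dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical, already-preprocessed toy corpus.
docs = [["writ", "petition", "dismissed"],
        ["writ", "petition", "allowed"],
        ["tax", "assessment", "appeal"]]
vecs = tfidf_vectors(docs)
```

<p>With this toy corpus, the first two documents share two terms and score above zero, while the first and third share nothing and score exactly zero.</p>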
          <p>String-based similarity includes character-based and term-based similarity measures. The
transformer-based similarity approach is built on language models that produce deep contextual
text representations by incorporating word position. The transformer techniques considered
are Longformer-base, Sentence-BERT-large-nli, Sentence-BERT-large-nli-stsb, and
Sentence-RoBERTa-large-nli-stsb.</p>
          <p>
            To address the constraints of the statistically-based similarity approaches listed
above, a hybrid model can be created by combining some or all of them in a way that retains
the essential strengths of each feasible combination of methods. For more details in
the context of the hybrid method, the reader is referred to [
            <xref ref-type="bibr" rid="ref15">15</xref>
            ].
          </p>
        </sec>
        <sec id="sec-2-3-2">
          <title>2.3.2. Document Clustering</title>
          <p>Clustering is an unsupervised learning problem in which the goal is to arrange a set of objects
so that objects in the same cluster are more similar (in meaning) to each other than
to objects in other clusters. Clustering is used in various disciplines, with intelligent
text clustering being one of the most common. Traditional text clustering algorithms grouped
documents based on keyword matching, which meant that the texts were grouped without any
descriptive concepts; as a result, non-similar texts were grouped together. The essential answer to this
challenge is to group documents based on semantic similarity, i.e., based on
meaning rather than keywords.</p>
          <p>
            One of the most well-known methods for producing a single grouping is k-means, wherein
the number of clusters, k, must be determined beforehand. Initially, k clusters are
specified, and after that, each document in the collection is reassigned based
on its resemblance to the k clusters. The k clusters are then updated, and
all documents in the set are reassigned. This procedure is repeated until the k
clusters remain unchanged. Alternatively, from [
            <xref ref-type="bibr" rid="ref16">16</xref>
            ], the bisecting k-means method is used to
cluster documents. Here, all items are initially considered part of a single cluster. A cluster is
split into two at each step, and this process continues until the desired number of clusters
is reached. The reader is referred to [
            <xref ref-type="bibr" rid="ref17 ref18 ref19">17, 18, 19</xref>
            ] for more details on clustering approaches.
          </p>
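<p>The bisecting k-means procedure described above can be sketched as follows. This is a simplified illustration: the plain 2-means splitter with fixed initial centers and the toy 2-D "document vectors" are assumptions, not the cited authors' implementation.</p>

```python
def two_means(points, iters=20):
    """Split a list of vectors into two clusters with plain 2-means."""
    # Deterministic (illustrative) initialization: first and last point.
    centers = [points[0], points[-1]]
    groups = [[], []]
    for _ in range(iters):
        groups = [[], []]
        for p in points:
            # Assign each point to its nearest center (squared distance).
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            groups[d.index(min(d))].append(p)
        # Recompute each center as the mean of its group.
        centers = [
            [sum(col) / len(g) for col in zip(*g)] if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return groups

def bisecting_kmeans(points, k):
    """Repeatedly split the largest cluster until k clusters remain."""
    clusters = [list(points)]
    while len(clusters) < k:
        largest = max(clusters, key=len)
        clusters.remove(largest)
        clusters.extend(g for g in two_means(largest) if g)
    return clusters

# Hypothetical 2-D document vectors: two tight groups plus an outlier.
docs = [[0.0, 0.1], [0.1, 0.0], [5.0, 5.1], [5.1, 5.0], [9.0, 0.0]]
clusters = bisecting_kmeans(docs, 3)
```

<p>On this toy data the procedure first separates the two dense regions and then splits off the outlier, ending with exactly the requested three clusters.</p>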
        </sec>
        <sec id="sec-2-3-3">
          <title>2.3.3. Graph Based Similarity</title>
          <p>The graph-based similarity approach is based on graphical methods. These methods are
further based on diferent techniques, ontology-based, relational-based, citation-based, and
hybrid-based. The prior-case citation network of the document is constructed to compute the
Precedent Citation Similarity. The vertices of the network are the case documents. A directed
edge exists between two vertices  and  if document  cites document  in its text. Consider an
example graph such that an edge exists from vertex A to E since A cites E. To build document
vectors, we investigate citation-based networks approaches in which documents are nodes and
edges correspond to citations.</p>
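<p>The citation network above can be sketched as an adjacency structure. The Jaccard overlap of out-citations used here is one simple, illustrative way to score precedent-citation similarity; the exact measure and the toy network (extending the A-cites-E example) are assumptions.</p>

```python
# Toy citation network: each case maps to the set of prior cases it cites.
# Case A cites E (as in the text); the remaining edges are hypothetical.
citations = {
    "A": {"E", "F", "G"},
    "B": {"E", "F", "H"},
    "C": {"I"},
    "E": set(), "F": set(), "G": set(), "H": set(), "I": set(),
}

def precedent_similarity(u, v):
    """Jaccard overlap of the precedents cited by the two cases."""
    cu, cv = citations[u], citations[v]
    union = cu | cv
    return len(cu & cv) / len(union) if union else 0.0

print(precedent_similarity("A", "B"))  # A and B share precedents E and F
```

<p>Cases citing many of the same precedents score high, while cases with disjoint citation sets score zero.</p>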
          <p>
            The relational approach emphasizes measuring the relation between two words, unlike
measuring the degree of similarity. Using a predetermined pattern of vector frequencies from
a vast corpus, this approach determines the link between word pairs. It enhances current
ontologies and is utilized in document semantic annotation. The reader is referred to [
            <xref ref-type="bibr" rid="ref20 ref21 ref22">20, 21, 22</xref>
            ]
for more details on these three approaches.
          </p>
          <p>
            The ontology-based approach is a graph-based semantic similarity approach, and it is
classified into three broad methods: single ontology-based, cross ontology-based, and lexical
resource. The path distance between concepts determines how similar the two concepts are.
is used to compute similarity, and the length of the path defines the degree of similarity. The depth
used to compute similarity, and the length of the path defines the degree of similarity. The depth
relative measure is similar to the shortest path approach, but it takes into account the depth of
the edges linking the two concepts in the ontology’s basic structure and determines the depth
between the root and the target concept. In the information-based approach, also known as the
corpus-based approach, the information previously contained in the ontologies or taxonomy
is supplemented with the knowledge given by the corpus. For comparing the concepts, the
hybrid and feature-based measures consider the knowledge derived from different sources and
features, respectively. We refer the reader to [
            <xref ref-type="bibr" rid="ref23">23</xref>
            ] for further details on the DeepWalk algorithm.
          </p>
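<p>The shortest-path similarity idea described above can be sketched over a toy is-a taxonomy. The concept hierarchy and the 1/(1 + path length) normalization are illustrative assumptions; real systems would use a full ontology such as WordNet.</p>

```python
from collections import deque

# Hypothetical is-a taxonomy (child -> parent) of legal concepts.
parent = {
    "writ_petition": "petition",
    "petition": "legal_document",
    "judgment": "legal_document",
    "legal_document": "document",
}

def adjacency():
    """Undirected adjacency over the taxonomy edges."""
    adj = {}
    for child, par in parent.items():
        adj.setdefault(child, set()).add(par)
        adj.setdefault(par, set()).add(child)
    return adj

def path_length(a, b):
    """Shortest path between two concepts via breadth-first search."""
    adj, seen, queue = adjacency(), {a}, deque([(a, 0)])
    while queue:
        node, d = queue.popleft()
        if node == b:
            return d
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None  # no path: the concepts are unrelated in this taxonomy

def path_similarity(a, b):
    """Shorter path => more similar (one common normalization)."""
    d = path_length(a, b)
    return 1.0 / (1 + d) if d is not None else 0.0
```

<p>Here "petition" and "judgment" are two steps apart through their common ancestor "legal_document", so their similarity is 1/3, while identical concepts score 1.</p>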
          <p>Previously mentioned semantic similarity measurements are intended for a single ontology.
With the expansion of online information sources, metrics are needed to calculate the similarity
between concepts belonging to different ontologies. The methods that quantify the comparison
of the terms from various ontologies are known as cross ontology measures.</p>
          <p>
            To compute the semantic similarity, one employs WordNet and Wikipedia as lexical
resources. The WordNet technique is based on Directed Acyclic Graph (DAG) theory. The
semantic distance and DAG information compute the semantic similarity between the words or
concepts. We refer the reader to [
            <xref ref-type="bibr" rid="ref24">24</xref>
            ] and [
            <xref ref-type="bibr" rid="ref25">25</xref>
            ] for further details on DAG.
          </p>
          <p>
            The hybrid methods can be a combination of statistical, ontology, and relational approaches.
We refer the reader to [
            <xref ref-type="bibr" rid="ref26">26</xref>
            ] for more details on such approaches.
          </p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Experimental Results and Discussion</title>
      <p>This section compares the similarity scores computed by our techniques to those assigned by domain experts to see if they are
consistent. We have taken a dataset of legal documents, viz., Indian Supreme Court case
decisions (gold standard pairs) (see 3.1), for legal document similarity.</p>
      <sec id="sec-3-1">
        <title>3.1. Dataset</title>
        <p>
          The dataset contains all Indian Supreme Court case decisions in text format spanning 67
years (from 1950 to 2016). Each text begins with an optional headnote (a summary of a
legal case that incorporates several legal concerns and specifies the written laws employed
throughout the litigation process) and continues with the case’s whole litigation procedure.
We crawled the texts from the Legal Information Institute of India’s (LIIofIndia) website
(http://www.liiofindia.org/in/cases/cen/INSC/), which maintains several legal databases.
A gold standard comprising legal expert judgments on how similar two documents are is
essential to compare and evaluate our methods. We analyzed 47 pairs of Indian Supreme Court
case documents as our gold standard, along the lines of [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] and [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ].
The expert annotations, ranging from 0 (lowest similarity) to 10 (highest similarity), were sought
for each of these pairs.
        </p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Evaluation Measure</title>
        <p>To assess our techniques, we calculate the similarity scores with each of them for each of
the 47 test pairings. Then, for each technique, we find the Pearson correlation coefficient
between the 47 scores obtained by the technique and those provided by the experts.</p>
        <sec id="sec-3-2-1">
          <title>3.2.1. Calculate Similarity between pairs</title>
          <p>Finding similarities among documents is vital from the perspective of Information Retrieval
and allied fields. The approaches create, for two documents, vector representations whose
dimensions are the terms in the documents, word embeddings, or semantic notions. As a
result, we obtain the vectors of the document pairs. Finally, we apply cosine similarity to find
the angle between the resultant vectors.</p>
          <p>Cosine Similarity: It is a similarity measure of two non-zero vectors of an inner
product space, defined as the cosine of the angle between them. Two vectors with the same
orientation have a cosine similarity of 1, and orthogonal vectors have a similarity of 0.
The cosine similarity cos(θ) of two vectors A and B is</p>
          <p>cos(θ) = (A · B) / (||A|| ||B||),</p>
          <p>where A · B represents the vector dot product and ||A|| and ||B|| the vector norms.</p>
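<p>The cosine formula translates directly into code; the two sample vectors are illustrative.</p>

```python
import math

def cosine_similarity(a, b):
    """cos(theta) = (a . b) / (||a|| ||b||) for two non-zero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # orthogonal -> 0.0
print(cosine_similarity([2.0, 2.0], [1.0, 1.0]))  # same orientation -> ~1.0
```

<p>As the formula promises, orthogonal vectors score 0 and vectors with the same orientation score 1 regardless of their magnitudes.</p>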
        </sec>
        <sec id="sec-3-2-2">
          <title>3.2.2. Performance</title>
          <p>The Pearson correlation coefficient is used to measure how well our approaches work
compared to the expert similarity scores. The correlation between the obtained scores and those
offered by legal experts is then calculated.</p>
          <p>Correlation coefficient (ρ): It is the ratio of the covariance of two variables to the product
of their standard deviations. Mathematically, let P and Q be two variables; then the
correlation coefficient ρ is defined as</p>
          <p>ρ = cov(P, Q) / (σ_P σ_Q),</p>
          <p>where cov(P, Q) represents the covariance between P and Q, and σ_P and σ_Q represent the
standard deviations of the variables P and Q. We have the inequality −1 ≤ ρ ≤ 1. The
value ρ = −1 signifies that the variables are anti-correlated, whereas ρ = 1 signifies that they
are highly (positively) correlated.</p>
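<p>The definition can be computed directly; the expert and method score lists below are hypothetical, not values from the paper's dataset.</p>

```python
import math

def pearson(p, q):
    """Pearson correlation: cov(P, Q) / (sigma_P * sigma_Q)."""
    n = len(p)
    mean_p, mean_q = sum(p) / n, sum(q) / n
    cov = sum((x - mean_p) * (y - mean_q) for x, y in zip(p, q)) / n
    sigma_p = math.sqrt(sum((x - mean_p) ** 2 for x in p) / n)
    sigma_q = math.sqrt(sum((y - mean_q) ** 2 for y in q) / n)
    return cov / (sigma_p * sigma_q)

expert = [2.0, 5.0, 7.0, 9.0]   # hypothetical expert scores (0-10 scale)
method = [0.1, 0.4, 0.5, 0.9]   # hypothetical cosine similarities
print(round(pearson(expert, method), 3))
```

<p>Perfectly linearly related lists yield ρ = 1, reversed rankings yield ρ = −1, and the hypothetical method above tracks the expert ranking closely, so its coefficient is near 1.</p>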
        </sec>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Results and Discussion</title>
        <p>Table (1) lists the similarity scores given (1) by the legal experts and (2) obtained from our
experiments using word vector-based and transformer-based techniques. To find the similarity
scores between the pairs, we used, in the word vector-based case, the TF-IDF, Doc2Vec,
GloVe, and fastText methods, while in the transformer-based case we employed Longformer-base,
Sentence-BERT-large-nli-stsb, and Sentence-RoBERTa-large-nli-stsb.</p>
        <p>In Table (2), for each category, viz., word vector-based and transformer-based, we compute
the Pearson correlation coefficient of each method with respect to the expert scores. The
highest correlation value obtained for each category is shown in italics, i.e., Doc2Vec and
Sentence-RoBERTa-large-nli-stsb.</p>
        <p>
          The methods for which the detailed pairwise similarity scores are computed in
Table (1) are shown in bold font in Table (2). When the expert
scores are low, the word vector-based technique is closer to the expert scores than
the transformer-based technique, whereas when the expert scores are high, the
transformer-based approach is closer to the expert scores than the word vector-based one. The
Pearson correlation coefficient of the transformer-based methods is lower than that of the word
vector-based methods. This trend can also be seen in [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ], where the authors find that the evaluation
parameters are lower for transformer-based methods as compared to word vector-based methods
in the context of US Supreme Court decisions. The higher the correlation value, the
better the corresponding method’s performance. Doc2Vec obtains the highest correlation with
the experts’ scores (computed as 0.685) among the word vector-based techniques, and
Sentence-RoBERTa-large-nli-stsb (computed as 0.401) among the transformer-based
techniques. Overall, Doc2Vec provides the highest correlation with the experts’ scores.
        </p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
      <p>This paper presents a comprehensive review of semantic similarity, i.e., the categorization,
techniques, and metrics for determining semantic similarity. We then discuss exclusively
the semantic similarity of legal court case documents, wherein we confine ourselves to
word vector-based and transformer-based techniques for the experiments. Finally,
we discuss the results obtained while computing semantic similarity among legal documents
with different techniques, viz., word vector-based techniques (TF-IDF, LDA, Word2Vec, Doc2Vec,
GloVe, and fastText) and transformer-based techniques (Longformer-base,
Sentence-BERT-large-nli, Sentence-BERT-large-nli-stsb, and Sentence-RoBERTa-large-nli-stsb). We observed that the
Doc2Vec similarity correlates the most with expert judgment across both families of techniques,
viz., word vector-based and transformer-based.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Acknowledgment</title>
      <p>This work is supported by the IHUB-ANUBHUTI-IIITD FOUNDATION set up under the
NMICPS scheme of the Department of Science and Technology, India.</p>
      <sec id="sec-5-1">
        <title>Word Vector Based</title>
        <p>Similarity scores of the 47 gold-standard document pairs under the word vector-based methods (columns: Doc2vec, GloVe, fastText):
0.160 0.864 0.347
0.146 0.838 0.386
0.084 0.838 0.179
0.271 0.910 0.521
0.238 0.895 0.527
0.051 0.904 0.304
0.263 0.899 0.572
0.353 0.903 0.688
0.322 0.935 0.635
0.193 0.885 0.447
0.358 0.898 0.712
0.459 0.960 0.796
0.160 0.864 0.347
0.238 0.842 0.502
0.561 0.959 0.723
0.178 0.957 0.583
0.492 0.951 0.648
0.527 0.960 0.809
0.581 0.957 0.685
0.500 0.963 0.788
0.351 0.935 0.551
0.266 0.954 0.703
0.393 0.931 0.566
0.297 0.893 0.602
0.536 0.947 0.754
0.356 0.960 0.681
0.492 0.954 0.846
0.393 0.926 0.586
0.439 0.932 0.724
0.372 0.909 0.551
0.529 0.964 0.755
0.540 0.931 0.672
0.234 0.903 0.560
0.836 0.989 0.931
0.431 0.947 0.745
0.177 0.883 0.357
0.482 0.942 0.817
0.539 0.944 0.727
0.648 0.973 0.933
0.537 0.943 0.858
0.695 0.974 0.922
0.619 0.972 0.941
0.838 0.990 0.949
0.584 0.914 0.687
0.540 0.945 0.725
0.725 0.980 0.952
0.750 0.978 0.884</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Gutierrez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. J.</given-names>
            <surname>Strachan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Dou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Blake</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Eilbeck</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. A.</given-names>
            <surname>Natale</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Lin</surname>
          </string-name>
          , et al.,
          <article-title>Omnisearch: a semantic search system based on the ontology for microrna target (omit) for microrna-target gene interaction data</article-title>
          ,
          <source>Journal of biomedical semantics 7</source>
          (
          <year>2016</year>
          )
          <fpage>1</fpage>
          -
          <lpage>17</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>H.-M.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. E.</given-names>
            <surname>Kenny</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. W.</given-names>
            <surname>Sternberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ashburner</surname>
          </string-name>
          ,
          <article-title>Textpresso: an ontology-based information retrieval and extraction system for biological literature</article-title>
          ,
          <source>PLoS biology 2</source>
          (
          <year>2004</year>
          )
          <fpage>e309</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>P. D.</given-names>
            <surname>Turney</surname>
          </string-name>
          ,
          <article-title>Measuring semantic similarity by latent relational analysis</article-title>
          ,
          <source>arXiv preprint cs/0508053</source>
          (
          <year>2005</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Schwering</surname>
          </string-name>
          ,
          <article-title>Approaches to semantic similarity measurement for geo-spatial data: a survey</article-title>
          ,
          <source>Transactions in GIS 12</source>
          (
          <year>2008</year>
          )
          <fpage>5</fpage>
          -
          <lpage>29</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>I.</given-names>
            <surname>Matveeva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Levow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Farahat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Royer</surname>
          </string-name>
          ,
          <article-title>Term representation with generalized latent semantic analysis</article-title>
          ,
          <source>Amsterdam Studies in the Theory and History of Linguistic Science Series 4</source>
          <volume>292</volume>
          (
          <year>2007</year>
          )
          <fpage>45</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Oussalah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mohamed</surname>
          </string-name>
          ,
          <article-title>Knowledge-based sentence semantic similarity: algebraical properties</article-title>
          ,
          <source>Progress in Artificial Intelligence</source>
          <volume>11</volume>
          (
          <year>2022</year>
          )
          <fpage>43</fpage>
          -
          <lpage>63</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. K.</given-names>
            <surname>Reddy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. B.</given-names>
            <surname>Reddy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <article-title>Similarity analysis of legal judgments</article-title>
          ,
          <source>in: Proceedings of the fourth annual ACM Bangalore conference</source>
          ,
          <year>2011</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>4</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. K.</given-names>
            <surname>Reddy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. B.</given-names>
            <surname>Reddy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Suri</surname>
          </string-name>
          ,
          <article-title>Finding similar legal judgements under common law system</article-title>
          ,
          <source>in: International Workshop on Databases in Networked Information Systems</source>
          , Springer,
          <year>2013</year>
          , pp.
          <fpage>103</fpage>
          -
          <lpage>116</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. K.</given-names>
            <surname>Reddy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. B.</given-names>
            <surname>Reddy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <article-title>Similarity analysis of legal judgments</article-title>
          ,
          <source>in: Proceedings of the fourth annual ACM Bangalore conference</source>
          ,
          <year>2011</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>4</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A.</given-names>
            <surname>Mandal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Chaki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Saha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Ghosh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Pal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ghosh</surname>
          </string-name>
          ,
          <article-title>Measuring similarity among legal court case documents</article-title>
          ,
          <source>in: Proceedings of the 10th annual ACM India compute conference</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>9</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>T.</given-names>
            <surname>Mikolov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Corrado</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Dean</surname>
          </string-name>
          ,
          <article-title>Efficient estimation of word representations in vector space</article-title>
          ,
          <source>arXiv preprint arXiv:1301.3781</source>
          (
          <year>2013</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Le</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Mikolov</surname>
          </string-name>
          ,
          <article-title>Distributed representations of sentences and documents</article-title>
          ,
          <source>in: International conference on machine learning, PMLR</source>
          ,
          <year>2014</year>
          , pp.
          <fpage>1188</fpage>
          -
          <lpage>1196</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>J.</given-names>
            <surname>Pennington</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Socher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. D.</given-names>
            <surname>Manning</surname>
          </string-name>
          ,
          <article-title>Glove: Global vectors for word representation</article-title>
          ,
          <source>in: Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)</source>
          ,
          <year>2014</year>
          , pp.
          <fpage>1532</fpage>
          -
          <lpage>1543</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>P.</given-names>
            <surname>Bojanowski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Grave</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Joulin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Mikolov</surname>
          </string-name>
          ,
          <article-title>Enriching word vectors with subword information</article-title>
          ,
          <source>Transactions of the Association for Computational Linguistics 5</source>
          (
          <year>2017</year>
          )
          <fpage>135</fpage>
          -
          <lpage>146</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>M.</given-names>
            <surname>Ostendorf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Ash</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Ruas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Gipp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Moreno-Schneider</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Rehm</surname>
          </string-name>
          ,
          <article-title>Evaluating document representations for content-based legal literature recommendations</article-title>
          ,
          <source>in: Proceedings of the Eighteenth International Conference on Artificial Intelligence and Law</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>109</fpage>
          -
          <lpage>118</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>L.</given-names>
            <surname>Sahni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sehgal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kochar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Ahmad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Ahmad</surname>
          </string-name>
          ,
          <article-title>A novel approach to find semantic similarity measure between words</article-title>
          ,
          <source>in: 2014 2nd International Symposium on Computational and Business Intelligence</source>
          , IEEE,
          <year>2014</year>
          , pp.
          <fpage>89</fpage>
          -
          <lpage>92</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>Y.-S.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-Y.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-J.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <article-title>A similarity measure for text classification and clustering</article-title>
          ,
          <source>IEEE transactions on knowledge and data engineering 26</source>
          (
          <year>2013</year>
          )
          <fpage>1575</fpage>
          -
          <lpage>1590</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>S.</given-names>
            <surname>Nourashrafeddin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Milios</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. V.</given-names>
            <surname>Arnold</surname>
          </string-name>
          ,
          <article-title>An ensemble approach for text document clustering using wikipedia concepts</article-title>
          ,
          <source>in: Proceedings of the 2014 ACM symposium on Document engineering</source>
          ,
          <year>2014</year>
          , pp.
          <fpage>107</fpage>
          -
          <lpage>116</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>M.</given-names>
            <surname>Steinbach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Karypis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <article-title>A comparison of document clustering techniques</article-title>
          (
          <year>2000</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>R.</given-names>
            <surname>Tous</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Delgado</surname>
          </string-name>
          ,
          <article-title>A vector space model for semantic similarity calculation and owl ontology alignment</article-title>
          ,
          <source>in: International Conference on Database and Expert Systems Applications</source>
          , Springer,
          <year>2006</year>
          , pp.
          <fpage>307</fpage>
          -
          <lpage>316</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>P. D.</given-names>
            <surname>Turney</surname>
          </string-name>
          ,
          <article-title>Measuring semantic similarity by latent relational analysis</article-title>
          ,
          <source>arXiv preprint cs/0508053</source>
          (
          <year>2005</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>E.</given-names>
            <surname>Giovannetti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Marchi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Montemagni</surname>
          </string-name>
          ,
          <article-title>Combining statistical techniques and lexicosyntactic patterns for semantic relations extraction from text</article-title>
          ,
          <source>in: SWAP</source>
          , Citeseer,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>B.</given-names>
            <surname>Perozzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Al-Rfou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Skiena</surname>
          </string-name>
          ,
          <article-title>Deepwalk: Online learning of social representations</article-title>
          ,
          <source>in: Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining</source>
          ,
          <year>2014</year>
          , pp.
          <fpage>701</fpage>
          -
          <lpage>710</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>P.</given-names>
            <surname>Qin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <article-title>A new measure of word semantic similarity based on wordnet hierarchy and dag theory</article-title>
          ,
          <source>in: 2009 International Conference on Web Information Systems and Mining</source>
          , IEEE,
          <year>2009</year>
          , pp.
          <fpage>181</fpage>
          -
          <lpage>185</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>M. S.</given-names>
            <surname>Han</surname>
          </string-name>
          ,
          <article-title>Semantic information retrieval based on wikipedia taxonomy</article-title>
          ,
          <source>International Journal of Computer Applications Technology and Research</source>
          <volume>2</volume>
          (
          <year>2013</year>
          )
          <fpage>77</fpage>
          -
          <lpage>80</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>G.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Buckley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. M.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <article-title>A wordnet-based semantic similarity measure enhanced by internet-based knowledge</article-title>
          ,
          <source>in: SEKE</source>
          ,
          <year>2011</year>
          , pp.
          <fpage>175</fpage>
          -
          <lpage>178</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>