<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Clustering Amendments with Semantic Embeddings</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Alessandro Sajeva</string-name>
          <email>alessandro.sajeva@uniroma3.it</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Stefano Iannucci</string-name>
          <email>stefano.iannucci@uniroma3.it</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Carlo Marchetti</string-name>
          <email>carlo.marchetti@senato.it</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Paolo Merialdo</string-name>
          <email>paolo.merialdo@uniroma3.it</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Riccardo Torlone</string-name>
          <email>riccardo.torlone@uniroma3.it</email>
        </contrib>
      </contrib-group>
      <fpage>3</fpage>
      <lpage>11</lpage>
      <abstract>
        <p>The Italian Senate faces the problem of clustering amendments to optimize the scheduling of parliamentary sessions. Currently, this task is carried out by Similis, an application that tackles this problem by using a traditional term-frequency technique, which leads to clustering based on wording rather than semantics. Recent advances in natural language processing have led Italian institutions to investigate the adoption of pre-trained language models (PTLMs) for text analysis. Along this line, in this paper, we propose CLAMSE, an alternative system to Similis that uses Sentence-BERT pre-trained models to generate embeddings and then groups similar amendments through hierarchical agglomerative clustering. Our preliminary evaluation shows that CLAMSE achieves comparable performance to Similis using embeddings generated by pre-trained models without fine-tuning, paving the way for applying a clustering method with advanced contextual understanding. This study contributes to enhancing the effectiveness of institutional decision-making processes through the adoption of PTLMs.</p>
      </abstract>
      <kwd-group>
        <kwd>language models</kwd>
        <kwd>embeddings</kwd>
        <kwd>clustering</kwd>
        <kwd>public affairs</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>Pre-Trained Language Models (PTLMs) are emerging as valuable allies in addressing various
problems across diverse domains and represent a great opportunity for enhancing parliamentary
eficiency and effectiveness. In this context, and within the framework of a collaboration between
Roma Tre University and the Italian Senate, an interest in systems based on PTLMs to support
parliamentary activities has taken shape. This paper investigates the problem of clustering
similar amendments.</p>
      <p>Amendments represent proposed changes to legislative texts and are a fundamental element
of the legislative process. They may vary widely in terms of wording but often share similar
intentions and objectives. Similar amendment proposals should be discussed simultaneously, if
possible. Therefore, clustering amendments according to their similarity is an essential activity
to facilitate the work of officials and effectively organize voting sessions, while ensuring the
coherence and completeness of legislative proposals. Indeed, amendments that differ by only
a few words are usually proposed in large numbers by parliamentary groups that want to
filibuster, with the aim of slowing down the legislative process. Combining debates on similar
amendment proposals therefore allows for the greatest possible efficiency in terms of time.</p>
      <p>In this paper, we aim to explore the potential of PTLMs in such a crucial context. The
Senate already has a tool to support this activity, called Similis, which adopts a traditional
term-frequency technique. Although Similis is effective in grouping short amendments sharing
many tokens, it loses effectiveness with longer amendments that adopt different lexicons yet
preserve the same semantics.</p>
      <p>
        To overcome this issue we have investigated an alternative solution that leverages a PTLM.
Our approach has been implemented in CLAMSE (Clustering Amendments with Semantic
Embeddings), an alternative system to Similis, which relies on Sentence-BERT [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] PTLMs for
converting amendments into embeddings, and then groups similar embeddings via a hierarchical
agglomerative clustering (HAC) technique.
      </p>
      <p>The preliminary experiments we conducted yielded promising results showing the
effectiveness of adopting solutions based on PTLMs in demanding processes of public administration,
with the advantage of simplifying the development process by eliminating the need for building
from-scratch implementations or task-specific models.</p>
      <p>The rest of the paper is organized as follows. In Section 2 we discuss related works and the
pre-existent approach to amendments clustering. In Section 3 we illustrate our PTLM solution
and in Section 4 we report on its evaluation. Finally, in Section 5 we draw some conclusions.</p>
    </sec>
    <sec id="sec-3">
      <title>2. Related work and earlier solution</title>
      <p>
        In recent years, there has been a growing emphasis on leveraging advanced text processing
techniques to enhance institutional work globally. In the context of a collaboration between
Roma Tre University and the Senate of the Italian Republic, and more generally in the application
of artificial intelligence systems to support legislative activities, several important studies have
been carried out. A machine learning system for the classification of documents has been
developed [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], and another important contribution concerned the implementation of a system
that exploits Sentence-BERT [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] for the alignment of stenographic reports with audio recordings
of parliamentary sessions [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], representing steps forward in the digital transformation of
legislative practices.
      </p>
      <p>
        A pivotal aspect of this evolution lies in the transition from traditional vectorization methods
towards more semantic-based approaches. In the domain of text clustering, Term
Frequency-Inverse Document Frequency (TF-IDF) stands out as one of the most commonly employed
methods for representing textual data. However, TF-IDF fails to incorporate the positional and
contextual aspects of words within sentences. A study conducted simulations to demonstrate
that BERT consistently outperforms the TF-IDF method [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Furthermore, recent studies
have employed Sentence-BERT as a vectorization technique before clustering analysis [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
Comparisons have been made between Sentence-BERT and traditional methods, and
Sentence-BERT emerged as particularly adept at understanding topics within clustering algorithms [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
Other research has also showcased the effectiveness of semantic embeddings derived
from foundation models in clustering tasks in the realm of dataset deduplication [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>This transition towards semantic representations marks a paradigm shift from shallow
representations that capture only syntactic form to deep contextual understanding, enabling
more sophisticated text comprehension.</p>
      <p>
        Similis. The IT department of the Italian Republic Senate has developed a solution for
clustering similar amendments, called Similis, in close collaboration with the Institute of Legal
Informatics and Judicial Systems (CNR-IGSG) [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. The similarity sought by Similis focuses
on the wording of sentences, thus favoring texts with high syntactic rather than semantic
coherence. The algorithm does not rely on a priori information about amendments, and groups
amendments by means of HAC with complete linkage [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. The content of each amendment is
represented by a vector built according to a bag-of-words model with term-frequency
(TF) with Euclidean normalization [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], after word stemming and stop-words removal.
Amendment similarity is measured using a traditional cosine similarity metric. The thresholds for the
cosine similarity and dendrogram cut-off have been identified empirically and are set to 0.8 and
0.2, respectively.
      </p>
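<p>To make this representation concrete, the following sketch builds L2-normalized term-frequency vectors and compares them with cosine similarity, in the spirit of the Similis model. It is illustrative only, not the actual Similis implementation: stemming and stop-word removal are omitted, and the sample amendment texts are invented.</p>

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Plain term-frequency vectors with Euclidean (L2) normalization:
# TfidfVectorizer with use_idf=False reduces to exactly this model.
vectorizer = TfidfVectorizer(use_idf=False, norm="l2")
X = vectorizer.fit_transform([
    "sostituire le parole con le seguenti",
    "sostituire le parole con le seguenti parole",
    "sopprimere il comma",
])

# With unit-norm vectors, cosine similarity is just the dot product.
sims = (X @ X.T).toarray()
print(np.round(sims, 2))
```

The two near-identical texts score well above the 0.8 similarity threshold, while the third, which shares no tokens with them, scores 0: wording, not semantics, drives the grouping.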
    </sec>
    <sec id="sec-4">
      <title>3. CLAMSE</title>
      <p>Our semantic embeddings-based solution consists of a pipeline, which involves three phases:
(i) preprocessing, (ii) encoding, and (iii) clustering.</p>
      <p>The system takes a set of amendments that refer to a single Senate act as input. The
preprocessing phase aims at cleaning the text of each amendment from special characters, thereby
increasing the quality of embeddings and, consequently, improving clustering performance.
The pre-processed text is then sent to the encoding block, which computes an embedding
for each amendment. Sentence-BERT models are implemented in a Python framework
known as SentenceTransformers, which provides several PTLMs, each with specific
characteristics. The embeddings produced by each model serve as input for the clustering
phase, which implements a traditional HAC. The final solution is determined by
selecting the best clustering among those generated by applying HAC to each corpus
of embeddings obtained in the encoding phase.</p>
      <p>In the following, we provide a more detailed description of the three phases.</p>
      <sec id="sec-4-1">
        <title>3.1. Preprocessing</title>
        <p>The input dataset, which is structured as a JSON Lines (JSONL) file, is transformed into the
standard JSON format for readability. The JSON file contains a record for each amendment with
its text and amendment number. Each record is also associated with a cluster
identifier, which represents the ground truth for the clustering assignment (in the original
JSON file, these elements correspond to the 'num_em', 'text_emend', and 'id_cluster' attributes).</p>
        <p>The text of the amendments includes numerous tags and annotations designed for display on
web browsers but lacking any semantic significance. Therefore, we remove them
to enhance clustering performance. In particular, we remove all occurrences of the HTML
entity for the non-breaking space ("&amp;nbsp;"), and we replace all occurrences of HTML tags with
whitespace characters.</p>
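<p>A minimal sketch of this cleanup step (the exact rules used by CLAMSE may differ; the helper name and sample text are illustrative):</p>

```python
import re

def clean_amendment(text: str) -> str:
    # Drop occurrences of the non-breaking-space entity,
    # then replace any HTML tag with a whitespace character.
    text = text.replace("&nbsp;", " ").replace("&nbsp", " ")
    text = re.sub(r"<[^>]+>", " ", text)
    # Collapse the resulting runs of whitespace.
    return re.sub(r"\s+", " ", text).strip()

print(clean_amendment("<p>Al comma&nbsp;1, <em>sostituire</em> le parole</p>"))
```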
      </sec>
      <sec id="sec-4-2">
        <title>3.2. Amendment encoding with Sentence-BERT</title>
        <p>The cleaned corpus of amendments is passed to the encoding module. Here, a range of
Sentence-BERT PTLMs is loaded, and each model generates an embedding for the text of each
amendment. Not all of the pre-trained models produce normalized embeddings. However, since normalizing
embeddings leads to improved performance, a normalization step is added by dividing each
component of the vector by its norm, ensuring that the norm of the resulting vector is equal
to 1 (L2 normalization). Table 1 reports the PTLMs that we have considered in our experimental
activities. Since the dataset used for the experiments contains amendments with an average of
137 tokens, models with a maximum sequence length of less than 512 tokens were excluded
from the study (for a complete list of the models, see https://www.sbert.net/docs/pretrained_models.html).</p>
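<p>The normalization step can be sketched as follows. NumPy stands in here for the model output; in practice the vectors would come from a SentenceTransformer model's encode method, which is omitted to keep the example self-contained:</p>

```python
import numpy as np

def l2_normalize(embeddings: np.ndarray) -> np.ndarray:
    # Divide each embedding by its Euclidean norm so that every
    # vector has unit length, regardless of what the model produced.
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    return embeddings / norms

# Toy stand-ins for model-generated embeddings.
vectors = np.array([[3.0, 4.0], [1.0, 0.0]])
unit = l2_normalize(vectors)
print(np.linalg.norm(unit, axis=1))  # every norm is now 1.0
```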
      </sec>
      <sec id="sec-4-3">
        <title>3.3. Clustering algorithm</title>
        <p>In the last phase, a clustering activity is carried out on each corpus of embeddings generated in
the previous phase. In particular, we adopt a HAC approach. Roughly speaking, a HAC algorithm
operates as follows. Initially, each sample in the input dataset is treated as an individual cluster.
Then, the algorithm iterates, merging in each step the most suitable pair of clusters until only
one cluster remains. The decision on which pair of clusters to merge is based on the cosine
similarity between the embeddings representing the amendments, with complete linkage, i.e.,
the similarity between clusters equals the similarity between the two most dissimilar samples,
one in each cluster.</p>
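<p>A compact sketch of this step with SciPy; the toy embeddings and the cut threshold are illustrative, not the values used in CLAMSE:</p>

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Four toy embeddings: two near-duplicates of each of two "intentions".
emb = np.array([[1.0, 0.0], [0.99, 0.14], [0.0, 1.0], [0.1, 0.995]])
emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)

# Complete-linkage HAC over cosine distance (1 - cosine similarity).
Z = linkage(pdist(emb, metric="cosine"), method="complete")

# Cutting the dendrogram at a distance threshold yields flat clusters.
labels = fcluster(Z, t=0.5, criterion="distance")
print(labels)  # each near-duplicate pair lands in its own cluster
```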
        <p>The cut-off threshold for the dendrogram generated by the HAC algorithm is set to a value
that maximizes the quality of the clusters. In CLAMSE, we use the silhouette score to measure
clustering quality: it measures how similar an element is to the members of its own cluster,
compared to the members of other clusters. It ranges from −1 to +1, where a high value
indicates cohesion between elements of the same cluster and separation between
elements of different clusters.</p>
        <p>Since we have a clustering solution for each PTLM used to generate the embeddings, we
choose the clustering with the highest silhouette score as the final solution.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>4. Experimental evaluation</title>
      <p>The evaluation of CLAMSE is carried out in terms of both its absolute performance and in
comparison to the pre-existing Similis solution. In particular, we first analyze the performance
of the different PTLMs to identify the best solution that CLAMSE could achieve. Then, we
compare CLAMSE to Similis. Finally, we present the results of an experiment that shows how
the preprocessing phase can influence the final clustering results.</p>
      <sec id="sec-5-1">
        <title>4.1. Ground truth</title>
        <p>We evaluated CLAMSE on a set of amendments, named “Act 1248”. It contains 1261 amendments
that have been clustered manually (each amendment, in the original JSONL file, is annotated
with a label representing the target cluster). Table 2 reports the main characteristics of this
dataset. Note that there is a large variation in both the size of the amendments in this
dataset (ranging from a few to thousands of tokens) and the number of elements in a
cluster (ranging from a singleton to tens of amendments). This makes the task of automatic
amendment clustering challenging.</p>
      </sec>
      <sec id="sec-5-2">
        <title>4.2. Evaluation metrics</title>
        <p>
          The performance of Similis is reported by means of the Adjusted Rand-Index (ARI) [
          <xref ref-type="bibr" rid="ref11 ref12 ref13">11, 12, 13</xref>
          ]
and the Adjusted Mutual Information (AMI) [
          <xref ref-type="bibr" rid="ref14 ref15">14, 15</xref>
          ], which are standard metrics to evaluate
the results of a clustering process.
        </p>
        <p>
          ARI assesses the agreement between the clusters produced by a clustering algorithm and
the ground truth, correcting for chance agreement. AMI is another measure used to evaluate
the similarity between two clusterings of data, but it is based on mutual information. Both
ARI and AMI are used to measure the similarity between the true labels in the ground truth
and the clustering labels produced by CLAMSE, taking into account the possible presence of
random agreements. Both metrics reach 1 when the true labels and the clustering labels are
identical, while a score close to 0 indicates agreement no better than chance (being chance-adjusted,
both scores can even be slightly negative). ARI is more suitable when the ground truth clustering has
large equal-sized clusters. AMI is preferable when the ground truth clustering is unbalanced and there exist small clusters [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ].
        </p>
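<p>Both metrics are available in scikit-learn and can be computed directly from the two label assignments (a toy example with invented labels):</p>

```python
from sklearn.metrics import adjusted_rand_score, adjusted_mutual_info_score

# Ground-truth cluster ids vs. labels produced by a clustering run.
truth = [0, 0, 1, 1, 2, 2]
pred = [1, 1, 0, 0, 2, 2]  # same partition, different label names

# Both scores are invariant to label renaming and corrected for chance.
ari = adjusted_rand_score(truth, pred)
ami = adjusted_mutual_info_score(truth, pred)
print(ari, ami)  # identical partitions score 1.0 on both
```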
        <p>As above mentioned, silhouette is another metric for evaluating the quality of clustering.
However, in our solution, it is used internally to the HAC algorithm to find the optimal number
of clusters.</p>
      </sec>
      <sec id="sec-5-3">
        <title>4.3. Comparison with Similis</title>
        <p>The performances of CLAMSE compared to Similis are reported in Table 3, which shows that
CLAMSE achieves better results both for ARI and AMI.</p>
        <p>Table 3 reports also the performance of CLAMSE without the preprocessing phase. It is
worth observing how cleaning text contributes to improved performance as the semantics of
the amendments can thus be better understood by the model.</p>
      </sec>
      <sec id="sec-5-4">
        <title>4.4. Robustness of CLAMSE</title>
        <p>As we discussed in Section 3.2, CLAMSE utilizes several PTLMs from the Sentence-BERT library,
known as SentenceTransformers, and chooses the solution that produces the best clustering
based on the silhouette score. In order to evaluate the robustness of the approach, for each
Sentence-BERT PTLM we have computed the best clustering that could be obtained from the
HAC in terms of ARI and AMI. Essentially, at every iteration of the HAC algorithm, we compute
ARI and AMI, which require the ground truth. The highest ARI and AMI scores represent the best
performance that can be achieved by the CLAMSE approach, which chooses the best clustering
of each PTLM based on the silhouette score.</p>
        <p>Table 4 shows, sorted by descending AMI, the best performance achievable by CLAMSE
with each of the models in Table 1. Two important observations emerge from the results of this
experiment. First, notice that in many cases the CLAMSE algorithm achieves the
best results it could obtain. This demonstrates the effectiveness of the silhouette score for
choosing the best clustering of each model. Second, while several
models outperform Similis, many others exhibit inferior performance. This
underscores the effectiveness of our approach in model selection.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>5. Conclusions</title>
      <p>We presented CLAMSE, a system that addresses the problem of amendments clustering.
CLAMSE applies a HAC algorithm to the embeddings of the amendments built by several
Sentence-BERT PTLMs. This is a preliminary study whose main objective was to explore and
evaluate the application of embeddings generated by PTLMs, in particular Sentence-BERT, in
comparison to the traditional approach based on TF implemented by the existing system Similis.
The preliminary results are encouraging and show that CLAMSE, without fine-tuning, achieved
competitive performance.</p>
      <p>While these results are promising, several limitations should be noted. First, due to the
constraints of using models with a maximum input length of 512 tokens, amendments longer
than this threshold are truncated, accounting for approximately 3.33% of the dataset. Second, the
encoding is based solely on the textual content of the amendments, ignoring the specific articles
of the law to which they refer. As a result, amendments with identical text but referencing
different articles may be grouped together in cases where they should not be, potentially
compromising the accuracy of the clustering results. In addition, the computational efficiency
of the method is a concern, as it requires encoding the entire corpus with multiple models and
running HAC each time to determine the optimal clustering result.</p>
      <p>This evaluation suggests a promising potential for the future development of CLAMSE.
In particular, the possibility of training specific models for amendment clustering could be
explored, taking advantage of the pre-trained models that already show good efficiency in
terms of semantic embedding. This approach may represent an interesting research direction
to further optimize the performance of the system and refine its adaptability to new datasets of
amendments to be clustered.</p>
      <p>
        It is worth observing that Similis can leverage Linkoln [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], a system for the automatic
extraction of legislative and jurisprudential references from texts in the Italian language. In
the future, we plan to explore enhancing CLAMSE with the text annotations
provided by Linkoln. There is pronounced enthusiasm for large language models (LLMs)
over traditional PTLMs. Numerous governments are actively experimenting with LLMs for
classification tasks and question-answering [
        <xref ref-type="bibr" rid="ref18 ref19">18, 19</xref>
        ]. Our future endeavors will focus on harnessing
the power of LLMs to enhance the performance of CLAMSE.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>N.</given-names>
            <surname>Reimers</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Gurevych</surname>
          </string-name>
          ,
          <article-title>Sentence-bert: Sentence embeddings using siamese bert-networks</article-title>
          ,
          <year>2019</year>
          . arXiv:1908.10084.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A. D.</given-names>
            <surname>Angelis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. D.</given-names>
            <surname>Cicco</surname>
          </string-name>
          , G. Lalle,
          <string-name>
            <given-names>C.</given-names>
            <surname>Marchetti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Merialdo</surname>
          </string-name>
          <article-title>, Multi-label classification of bills from the italian senate</article-title>
          , in: AIxPA@AI*IA,
          <year>2022</year>
          . URL: https://api.semanticscholar.org/CorpusID:254234138.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>D.</given-names>
            <surname>Bertillo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. D.</given-names>
            <surname>Donato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Marchetti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Merialdo</surname>
          </string-name>
          ,
          <article-title>Enhancing accessibility of parliamentary video streams: Ai-based automatic indexing using verbatim reports</article-title>
          ,
          <source>EasyChair Preprint no. 10892</source>
          ,
          <issue>EasyChair</issue>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Subakti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Murfi</surname>
          </string-name>
          ,
          <string-name>
            <surname>N. Hariadi,</surname>
          </string-name>
          <article-title>The performance of bert as data representation of text clustering</article-title>
          (<year>2021</year>). doi:10.21203/rs.3.rs-940164/v1.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M. F.</given-names>
            <surname>Hasani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Heryadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Arifin</surname>
          </string-name>
          , Lukas, W. Suparta,
          <article-title>Density based spatial clustering of applications with noise and sentence bert embedding for indonesian utterance clustering</article-title>
          , in: 2023 International Conference on Computer Science, Information Technology and Engineering (ICCoSITE),
          <year>2023</year>
          , pp.
          <fpage>386</fpage>
          -
          <lpage>391</lpage>
          . doi:10.1109/ICCoSITE57641.2023.10127683.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Susanto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Pradita</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Stryadhi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Setiawan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hasani</surname>
          </string-name>
          ,
          <article-title>Text vectorization techniques for trending topic clustering on twitter: A comparative evaluation of tf-idf, doc2vec, and sentence</article-title>
          -bert,
          <year>2023</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>7</lpage>
          . doi:10.1109/ICORIS60118.2023.10352228.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Abbas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Tirumala</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Simig</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ganguli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Morcos</surname>
          </string-name>
          ,
          <article-title>Semdedup: Data-eficient learning at web-scale through semantic deduplication</article-title>
          ,
          <year>2023</year>
          . arXiv:2303.09540.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>T.</given-names>
            <surname>Agnoloni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Marchetti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Battistoni</surname>
          </string-name>
          , G. Briotti,
          <article-title>Clustering similar amendments at the Italian senate</article-title>
          , in: D.
          <string-name>
            <surname>Fišer</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Eskevich</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Lenardič</surname>
          </string-name>
          , F. de Jong (Eds.),
          <source>Proceedings of the Workshop ParlaCLARIN III within the 13th Language Resources and Evaluation Conference</source>
          , European Language Resources Association, Marseille, France,
          <year>2022</year>
          , pp.
          <fpage>39</fpage>
          -
          <lpage>46</lpage>
          . URL: https://aclanthology.org/2022.parlaclarin-1.7.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>F.</given-names>
            <surname>Murtagh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Contreras</surname>
          </string-name>
          ,
          <article-title>Algorithms for hierarchical clustering: an overview</article-title>
          ,
          <source>Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery</source>
          <volume>2</volume>
          (
          <year>2012</year>
          )
          <fpage>86</fpage>
          -
          <lpage>97</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>H.</given-names>
            <surname>Schütze</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. D.</given-names>
            <surname>Manning</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Raghavan</surname>
          </string-name>
          , Introduction to information retrieval, volume
          <volume>39</volume>
          , Cambridge University Press Cambridge,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>L.</given-names>
            <surname>Hubert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Arabie</surname>
          </string-name>
          ,
          <article-title>Comparing partitions</article-title>
          ,
          <source>Journal of classification 2</source>
          (
          <year>1985</year>
          )
          <fpage>193</fpage>
          -
          <lpage>218</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>A. N.</given-names>
            <surname>Albatineh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Niewiadomska-Bugaj</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Mihalko</surname>
          </string-name>
          ,
          <article-title>On similarity indices and correction for chance agreement</article-title>
          ,
          <source>Journal of Classification 23</source>
          (
          <year>2006</year>
          )
          <fpage>301</fpage>
          -
          <lpage>313</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>W. M.</given-names>
            <surname>Rand</surname>
          </string-name>
          ,
          <article-title>Objective criteria for the evaluation of clustering methods</article-title>
          ,
          <source>Journal of the American Statistical Association 66</source>
          (
          <year>1971</year>
          )
          <fpage>846</fpage>
          -
          <lpage>850</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>N. X.</given-names>
            <surname>Vinh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Epps</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bailey</surname>
          </string-name>
          ,
          <article-title>Information theoretic measures for clusterings comparison: is a correction for chance necessary?</article-title>
          ,
          <source>in: Proceedings of the 26th annual international conference on machine learning</source>
          ,
          <year>2009</year>
          , pp.
          <fpage>1073</fpage>
          -
          <lpage>1080</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>S.</given-names>
            <surname>Romano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bailey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Nguyen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Verspoor</surname>
          </string-name>
          ,
          <article-title>Standardized mutual information for clustering comparisons: one step further in adjustment for chance</article-title>
          ,
          <source>in: International conference on machine learning, PMLR</source>
          ,
          <year>2014</year>
          , pp.
          <fpage>1143</fpage>
          -
          <lpage>1151</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>S.</given-names>
            <surname>Romano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. X.</given-names>
            <surname>Vinh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bailey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Verspoor</surname>
          </string-name>
          ,
          <article-title>Adjusting for chance clustering comparison measures</article-title>
          ,
          <source>Journal of Machine Learning Research</source>
          <volume>17</volume>
          (
          <year>2016</year>
          )
          <fpage>1</fpage>
          -
          <lpage>32</lpage>
          . URL: http://jmlr.org/papers/v17/15-627.html.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <article-title>Istituto di Informatica Giuridica e Sistemi Giudiziari (IGSG) del Consiglio Nazionale delle Ricerche (CNR), Linkoln</article-title>
          , https://linkoln.gitlab.io/,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>A.</given-names>
            <surname>Peña</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Morales</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Fierrez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Serna</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ortega-Garcia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Í.</given-names>
            <surname>Puente</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Córdova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Córdova</surname>
          </string-name>
          ,
          <article-title>Leveraging Large Language Models for Topic Classification in the Domain of Public Affairs</article-title>
          , Springer Nature Switzerland,
          <year>2023</year>
          , pp.
          <fpage>20</fpage>
          -
          <lpage>33</lpage>
          . URL: http://dx.doi.org/10.1007/978-3-031-41498-5_2. doi:10.1007/978-3-031-41498-5_2.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>S.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <article-title>Application of large language model in intelligent q&amp;a of digital government</article-title>
          ,
          <source>in: Proceedings of the 2023 2nd International Conference on Networks, Communications and Information Technology, CNCIT '23</source>
          ,
          Association for Computing Machinery, New York, NY, USA,
          <year>2023</year>
          , pp.
          <fpage>24</fpage>
          -
          <lpage>27</lpage>
          . URL: https://doi.org/10.1145/3605801.3605806. doi:10.1145/3605801.3605806.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>