<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Automatic Summarization of Legal Texts: Extractive Summarization using LLMs</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>David Preti</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Cristina Giannone</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrea Favalli</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Raniero Romagnoli</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Almawave S.P.A.</institution>
          ,
          <addr-line>via Casal Boccone 10, Roma, 00133</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this work, we describe the first results of experimentation with summarization systems based on large language models (LLMs) to produce extractive summaries of judgments (massime). We propose a novel approach for this task, exploiting the generative capabilities of LLMs while removing all possibilities of hallucination. Our study aims to assess the effectiveness and efficiency of generative models in summarizing the courts' decisions. Through a comprehensive analysis of several summarization system setups, we evaluate the quality of the summaries generated by each approach and their ability to capture the key legal principles and linguistic features in the courts' decisions.</p>
      </abstract>
      <kwd-group>
        <kwd>Legal Text</kwd>
        <kwd>Summarization</kwd>
        <kwd>LLM</kwd>
        <kwd>Generative AI</kwd>
        <kwd>Human in the Loop</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Artificial intelligence systems, now employed across a wide array of fields, can also serve as valuable aids for legal practitioners. Increasingly sophisticated tools enhance information search capabilities, automate the drafting or verification of legal documents, and facilitate technical evaluations, such as predictive justice. Utilizing such tools can yield significant benefits by enhancing the efficiency and quality of legal processes. In civil and common law systems, accessing legal judgments to retrieve legal decisions is essential for various legal tasks, including defending clients, constructing cases for prosecution, and issuing judicial decisions. In Italy, to ensure widespread information on the courts' decisions, a dedicated body, the Ufficio del Massimario, was established, whose purpose is to produce massime. In a concise yet detailed manner, these summaries (massime) encapsulate the legal principles articulated in judgments. Hence, legal professionals can consult these massime instead of delving into the entirety of legal decisions. The task of summarising legal texts and producing massime has been widely addressed in recent years [1], especially with the advent of Generative AI [2, 3]. Given the complexity of the task, the approach outlined in [1] handles the automatic production of a massima as an extractive summarization task. This involves extracting the most pertinent parts of the judgement to assist the drafting of the massima by the designated office, utilizing a human-in-the-loop approach as discussed in [4].</p>
      <p>The process of analyzing judgments and extracting relevant sentences can be significantly simplified through the use of pre-trained models [5, 6]. These models function as versatile universal sentence/text encoders, capable of addressing a range of downstream tasks, including summarization [7]. They consistently outperform other approaches, particularly after fine-tuning or domain-adaptation [8]. Despite the success of pre-trained transformers and LLMs in other summarization tasks [9], certain phenomena, such as hallucination in the generation of the text [10], mean that the task of producing massime is still challenging for current extractive and abstractive summarization systems. Additionally, legal texts are often extensive, further increasing the summarization task's complexity; identifying the portions of the text that contain the relevant information to be reported in the massime becomes challenging due to their length [11]. In this paper, we present an approach to producing an extractive summary by exploiting the ability of an LLM to generate abstractive summaries from a document. Our approach selects, from the abstract, the sentences that best match the sentences in the source document. This approach, described in Sec. 2, reduces the hallucination phenomena, achieving results in a zero-shot setting, described in Sec. 3, comparable with a model trained on a domain dataset.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Extractive Projection</title>
      <p>Generative models are known to hallucinate: facts not present in the original document can be generated in the output summary [12]. Several attempts have been made to tame such unwanted behaviour (for instance, see [13]), which may otherwise lead to serious problems in sensitive domains. Given its specific lexicon and the vast amount of fixed forms and judicial references, the legal domain is particularly delicate and unsuitable for a straightforward application of generative systems. To overcome this problem, we introduce what we refer to as extractive projection: a transformation mapping a generated text into sentences of the original document.</p>
      <p>Defining the source documents d ∈ D, the abstractive summary s ∈ S, with D and S respectively the space of documents and of abstractive summaries, and the summarization prompt p ∈ P, the generative summarization transformation Σ is defined as
Σ : D × P → S,
s = Σ(d | p) .   (1)</p>
      <p>We introduce the extractive summary s′ ∈ S′ ⊂ S and the extractive projection Γ
Γ : D̃ × S → S′,
s′ = Γ(d̃ ; s) ,   (2)
where d̃ ∈ D̃, and D̃ is the space of segmented documents (i.e., containing the same documents as D, but with each one split into a set of segments).</p>
      <p>The projection Γ used in this work is a slightly modified version of the algorithm proposed in [7] to pre-process the data. As a main difference from [7], we allow the algorithm to select up to all the segments present in the document, without any parameter fixed a priori. Moreover, while in [7] this greedy selection algorithm is used to obtain an oracle summary for each document, used as a reference to train the extractive model, here the algorithm is used to project the (abstractive) generated summary onto the segments of the original document. Note that this procedure removes, by construction, any possibility of hallucination, since the projection cuts off all possible novelties and generations.</p>
      <p>The greedy selection procedure employed is thus simply a combinatorial optimization algorithm based on coverage metrics. In this respect, we tested several metrics, ranging from the average of ROUGE-1 and ROUGE-2 [14], as originally proposed in [7], to different linear combinations of ROUGE-n and more sophisticated similarity metrics (e.g., BERTScore [15]).</p>
      <p>We observe that, with the exception of very rare cases where the generated summary is produced in a different language with respect to the original document, all the coverage metrics produce accurate results (see Tab. 1). In the multilingual setup, only a similarity metric based on multilingual embeddings, which is insensitive to language shifts, produces reasonable results, while ROUGE does not work correctly.</p>
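      <p>The extractive projection and its greedy coverage-based selection can be sketched as follows. This is an illustrative reimplementation, not the authors' code: a simple unigram F1 stands in for the ROUGE-based coverage metrics, and all names are ours.</p>

```python
# Illustrative sketch of the extractive projection: greedily select
# document segments that maximize coverage of the LLM-generated
# abstractive summary.  Unigram F1 stands in for the ROUGE-1/ROUGE-2
# average used in the paper; all names are ours, not the authors'.
from collections import Counter

def overlap_f1(candidate, reference):
    """Unigram F1 between two texts (a stand-in coverage metric)."""
    c, r = Counter(candidate.split()), Counter(reference.split())
    common = sum(min(c[w], r[w]) for w in c)
    if common == 0:
        return 0.0
    precision = common / sum(c.values())
    recall = common / sum(r.values())
    return 2 * precision * recall / (precision + recall)

def extractive_projection(summary, segments):
    """Greedily add segments while coverage of the generated summary
    keeps improving; any number of segments may be selected
    (no a-priori length parameter)."""
    selected, best, remaining = [], 0.0, list(segments)
    while remaining:
        scored = [(overlap_f1(" ".join(selected + [seg]), summary), seg)
                  for seg in remaining]
        score, seg = max(scored, key=lambda t: t[0])
        if score > best:  # keep the segment only if coverage improves
            best = score
            selected.append(seg)
            remaining.remove(seg)
        else:
            break
    return selected
```

      <p>Because the output is, by construction, a subset of the document's own segments, no generated novelty can survive the projection.</p>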
    </sec>
    <sec id="sec-3">
      <title>3. Results</title>
      <p>As discussed in Sec. 1, we trained and tested the extractive summarization systems introduced in [1] on a dataset composed of judgments and massime from different courts (the data are publicly available on the https://www.inps.it/it/it/inps-comunica/atti/sentenze.html website). Starting from a whole dataset of 1340 pairs of (judgement, massima), we randomly selected a subset of 199 of them as a validation set, 940 as a train set, and the remaining 201 as a test set. The latter has been further refined down to 61 "high quality" examples.</p>
      <p>The two prompts used in the experiments (reported in Tab. 3) are the following. The generic prompt: "Write a summary in Italian of 150 words of the following text delimited by triple backquote: ```content```". The task-tuned prompt, in Italian: "Scrivi una massima in Italiano di 150 parole della seguente porzione di testo delimitata dalle virgolette. La massima deve rispondere ai seguenti generali requisiti: a) fedeltà alla decisione; b) sintesi nell'enunciazione del principio; c) chiarezza e precisione del principio enunciato. La massima costituisce l'enucleazione del principio di diritto e non il riassunto della decisione e non può tradursi nella mera riproduzione di passaggi argomentativi della motivazione. ```content```" (in English: "Write a massima in Italian of 150 words of the following portion of text delimited by the quotes. The massima must meet the following general requirements: a) fidelity to the decision; b) concision in stating the principle; c) clarity and precision of the stated principle. The massima constitutes the enucleation of the principle of law and not a summary of the decision, and cannot amount to the mere reproduction of argumentative passages from the reasoning.").</p>
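      <p>Instantiating either prompt amounts to substituting the judgment text between the triple-backquote delimiters. A minimal sketch; the template constant and helper are illustrative, not from the paper:</p>

```python
# Building the generic summarization prompt used in the experiments.
# The judgment text is wrapped in a triple-backquote delimiter, as in
# the prompt of Tab. 3.  Names here are ours, not the authors'.

TICKS = chr(96) * 3  # the literal triple-backquote delimiter

PROMPT_GENERIC = (
    "Write a summary in Italian of 150 words of the following text "
    "delimited by triple backquote:\n{ticks}{content}{ticks}"
)

def build_prompt(judgment_text, template=PROMPT_GENERIC):
    """Instantiate the prompt template with the source document."""
    return template.format(ticks=TICKS, content=judgment_text)
```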
      <p>For such selection, we first used the greedy algorithm proposed in Sec. 2, based on the average of ROUGE-1 and ROUGE-2, and then selected only the data with that value larger than or equal to 0.6 (see Fig. 2). The scores for the extractive model Ext, compared with the Oracle and with those produced using generative models, are collected in Tab. 1. More specifically, we used two different prompts, p1 and p2 (for details see Tab. 3), to estimate the effect of a "generic" summarization prompt against a "task tuned" prompt specifically referring to the features of a massima [16]. As expected, we observe a small improvement in scores with all generative models when using p2 over p1. Moreover, we compare the scores of a straightforward abstractive summarization, Abs, with the setup proposed in this work, i.e., including the extractive projection, called Gen-Ext in Tab. 1. For all the evaluations, we used a generative model of the gpt-turbo [17] family. Interestingly, the scores obtained using zero-shot generative models (no fine-tuning or contextual examples involved), in both their variants, abstractive (Abs) and extractive (Gen-Ext), seem to perform reasonably well when compared to the Ext model. An example of the summaries produced in all the setups is displayed in Tab. 3.</p>
      <p>Figure 2: Fraction of test data as a function of the score (ROUGE-1 + ROUGE-2)/2 computed on the segments extracted by the oracle combinatorial algorithm.</p>
      <p>It is worth noting that the scores obtained in this work should be interpreted only as a reference. They are affected by large statistical fluctuations, which make a direct comparison among the scores very tricky. Moreover, coverage scores are known to have a limited correlation with the effective quality of the summary produced, which requires some human evaluation by domain experts.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusions</title>
      <p>In this work we discussed the first results of a novel approach that can be used to obtain "hallucination"-free results out of a generative model. We applied this procedure in a legal domain, where preserving factuality is mandatory. While we obtain only partial results, we find them reasonably promising, though requiring some further investigation. A comparison with different "open source" LLMs as generative models, an estimate of the effects of parameter scaling on performance, and a complete or partial (see for instance [18]) fine-tuning or domain-adaptation are left to future studies. In conclusion, notwithstanding the difficulty of the task, arising both from the inherent complexity of the structure of a massima, which cannot be treated as a simple summary, and from the difficult evaluation of the results, as well as from the fact that automatic "token"-coverage metrics require some evaluation by human domain experts, we believe that LLMs, if appropriately applied, can offer a valuable tool even in domains where factuality is paramount.</p>
      <p>The example texts from Tab. 3 follow: the target massima and the outputs Abs(p1), Abs(p2), Gen-Ext(p1), and Gen-Ext(p2).</p>
      <p>Target: "L'inserzione automatica di clausole, prevista dall'art. 1339 cod. civ., costituisce una restrizione significativa del diritto di libertà economica consacrato dall'art. 41 Cost. di cui è espressione l'autonomia privata, e deve quindi trovare il suo fondamento in una legge formale o in un altro atto avente valore di legge in senso sostanziale o da esso richiamato tramite rinvio integrativo; pertanto, con riferimento al contratto avente ad oggetto la prestazione di servizi di vigilanza da parte di un istituto privato, tale integrazione non può aver luogo in base al decreto con cui il prefetto approva la relativa tariffa ai sensi dell'art. 135 del r.d. n. 773 del 1931, trattandosi di un mero atto amministrativo espressivo di un sindacato di congruità, avente natura di merito, del prezzo di una prestazione contrattuale, né l'ingerenza con efficacia reale sulla tariffa proposta dall'Istituto e liberamente accettata dal committente può trovare fondamento nella generale potestà del prefetto di impartire prescrizioni nell'interesse pubblico, in sede di rilascio di autorizzazioni di polizia, ai sensi dell'art. 9 del r.d. n. 773 cit."</p>
      <p>Abs(p1): "L'Istituto cooperativo di V. ed I. C. s.r.l. ha citato in giudizio l'Istituto Nazionale Previdenza Sociale per ottenere il pagamento di una somma di denaro a titolo di maggior corrispettivo per il servizio di vigilanza. Dopo un lungo iter giudiziario, la Corte d'appello di Catanzaro ha respinto la domanda, sostenendo che il provvedimento prefettizio in materia di tariffe per la vigilanza non poteva derogare al contenuto contrattuale liberamente scelto dalle parti. Il fallimento dell'Istituto Cooperativo di V. e I. C. ha proposto ricorso per cassazione, ma la Corte ha respinto il ricorso, condannando al pagamento delle spese di giudizio. La Corte ha sostenuto che l'inserzione automatica di clausole ex art. 1339 c.c. riguardava solo i prezzi imposti con legge e che l'ingerenza con efficacia reale sulla tariffa proposta dall'istituto di vigilanza e liberamente accettata dal committente non poteva trovare fondamento nella generale potestà di impartire prescrizioni nell'interesse pubblico."</p>
      <p>Abs(p2): "L'autonomia privata non può essere derogata da atti amministrativi, ma solo da leggi formali. Il principio dell'autonomia privata è fondamentale e non può essere interpretato estensivamente. Le tariffe prefettizie non hanno efficacia normativa e non possono essere conferite da una presunta convergenza di vedute delle parti. Inoltre, il dovere di solidarietà sociale non può essere utilizzato come motivo di condanna al pagamento del maggior prezzo. La libertà economica può subire restrizioni solo in forza di una disposizione di legge. L'inserzione automatica di clausole ex art. 1339 c.c. riguarda solo i prezzi imposti con legge e non può essere derogata da atti amministrativi. La fedeltà alla decisione giuridica richiede il rispetto di tali principi."</p>
      <p>Gen-Ext(p1): "- con atto di citazione notificato il 5 luglio 1995 l'istituto cooperativo di v. ed i. c. s.r.l. conveniva dinanzi al tribunale di lamezia terme l'istituto nazionale previdenza sociale per ottenerne la condanna al pagamento della somma di l. 639.590.300 a titolo di maggior corrispettivo per il servizio di vigilanza espletato in favore dell'ente, presso la sede di lamezia, dovuto in forza dell'inserzione automatica della tariffa inderogabile emanata dal prefetto di catanzaro in materia di vigilanza, superiore al prezzo pattuito con il contratto stipulato il 25 agosto 1983. dopo l'espletamento di consulenza tecnica d'ufficio il tribunale di lamezia terme con sentenza 23-27 novembre 1999 condannava l'inps al pagamento della somma di l. 1.148.787.862, oltre le spese di giudizio. avverso la sentenza, non notificata, proponeva ricorso per cassazione il fallimento dell'istituto cooperativo di v. e i. 157 del relativo regolamento di esecuzione, nonché l'art. 1175 e 1375 cod. civ. e dell'art. diritto - con il primo motivo il ricorrente deduce la violazione degli artt. 9, 134 e 135 del citato testo unico delle leggi di pubblica sicurezza e dell'art. 1339 cod. civile. con il secondo motivo il ricorrente censura l'omessa motivazione nel discostarsi dalla concorde interpretazione delle parti. con l'ultimo motivo il fallimento deduce la violazione degli artt. 2 della costituzione. 1339, cod. civ."</p>
      <p>Gen-Ext(p2): "1339 cod. civ. l'inserzione automatica di clausole, prevista dall'art. 1339, cod. civ., costituisce una deroga incisiva al principio dell'autonomia privata e deve quindi trovare il suo fondamento in una legge formale - come testualmente previsto dalla norma - o in altro atto avente valore di legge in senso sostanziale o da esso richiamato tramite rinvio integrativo. il diritto di libertà economica consacrato dall'art. 41 cost., di cui è espressione l'autonomia negoziale delle parti nel modellare il contenuto di un contratto, può soffrire restrizioni solo in forza di una disposizione di legge, insuscettibile di interpretazioni estensive (ibidem, terzo comma). ne consegue la vigenza, in subiecta materia, di un principio di stretta interpretazione dell'art. civ.; vieppiù giustificato da esigenze di tutela della concorrenza e del mercato, che verrebbero lese da una pratica di prezzi amministrati. l'asserita convergenza di vedute sull'efficacia cogente delle tariffe prefettizie non può, neanche in astratto, valere a conferire loro l'efficacia normativa di cui sono intrinsecamente prive. l'invocazione di un inderogabile dovere di solidarietà sociale che avrebbe imposto la maggiorazione del prezzo non ha, infatti, alcuna attinenza con l'operatività dell'eterointegrazione ex art."</p>
    </sec>
    <sec id="sec-5">
      <title>References</title>
      <p>[1] F. Achena, D. Preti, D. Venditti, L. Ranaldi, C. Giannone, F. M. Zanzotto, A. Favalli, R. Romagnoli, Legal summarization: to each court its own model, in: F. Boschetti, G. E. Lebani, B. Magnini, N. Novielli (Eds.), Proceedings of the 9th Italian Conference on Computational Linguistics, Venice, Italy, November 30 - December 2, 2023, volume 3596 of CEUR Workshop Proceedings, CEUR-WS.org, 2023. URL: https://ceur-ws.org/Vol-3596/paper1.pdf.</p>
      <p>[2] T. Dal Pont, F. Galli, A. Loreggia, G. Pisano, R. Rovatti, G. Sartor, Legal Summarisation through LLMs: The PRODIGIT Project, arXiv e-prints (2023). doi:10.48550/arXiv.2308.04416. arXiv:2308.04416.</p>
      <p>[3] M. Cherubini, F. Romano, A. Bolioli, N. De Francesco, I. Benedetto, Summarization di testi giuridici: una sperimentazione con gpt-3, Rivista Italiana di Informatica e Diritto (2023). doi:10.32091/RIID0103.</p>
      <p>[4] F. M. Zanzotto, Viewpoint: Human-in-the-loop artificial intelligence, Journal of Artificial Intelligence Research 64 (2019) 243–252. URL: https://doi.org/10.1613/jair.1.11345. doi:10.1613/jair.1.11345.</p>
      <p>[5] M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, L. Zettlemoyer, Deep contextualized word representations, 2018. arXiv:1802.05365.</p>
      <p>[6] J. Devlin, M.-W. Chang, K. Lee, K. Toutanova, BERT: Pre-training of deep bidirectional transformers for language understanding, in: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Association for Computational Linguistics, Minneapolis, Minnesota, 2019, pp. 4171–4186. URL: https://aclanthology.org/N19-1423. doi:10.18653/v1/N19-1423.</p>
      <p>[7] Y. Liu, M. Lapata, Text summarization with pretrained encoders, 2019. URL: https://arxiv.org/abs/1908.08345. doi:10.48550/ARXIV.1908.08345.</p>
      <p>[8] X. Jin, D. Zhang, H. Zhu, W. Xiao, S.-W. Li, X. Wei, A. Arnold, X. Ren, Lifelong pretraining: Continually adapting language models to emerging corpora, in: Proceedings of BigScience Episode #5 – Workshop on Challenges &amp; Perspectives in Creating Large Language Models, Association for Computational Linguistics, virtual+Dublin, 2022, pp. 1–16. URL: https://aclanthology.org/2022.bigscience-1.1. doi:10.18653/v1/2022.bigscience-1.1.</p>
      <p>[9] T. Zhang, F. Ladhak, E. Durmus, P. Liang, K. McKeown, T. B. Hashimoto, Benchmarking Large Language Models for News Summarization, Transactions of the Association for Computational Linguistics 12 (2024) 39–57. URL: https://doi.org/10.1162/tacl_a_00632. doi:10.1162/tacl_a_00632.</p>
      <p>[10] D. de Vargas Feijó, V. P. Moreira, Improving abstractive summarization of legal rulings through textual entailment, Artif. Intell. Law 31 (2023) 91–113. URL: https://doi.org/10.1007/s10506-021-09305-4. doi:10.1007/s10506-021-09305-4.</p>
      <p>[11] E. Bauer, D. Stammbach, N. Gu, E. Ash, Legal extractive summarization of u.s. court opinions, 2023.</p>
      <p>[12] L. Huang, W. Yu, W. Ma, W. Zhong, Z. Feng, H. Wang, Q. Chen, W. Peng, X. Feng, B. Qin, T. Liu, A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions, 2023. arXiv:2311.05232.</p>
      <p>[13] Z. Ji, T. Yu, Y. Xu, N. Lee, E. Ishii, P. Fung, Towards mitigating hallucination in large language models via self-reflection, 2023. arXiv:2310.06271.</p>
      <p>[14] C.-Y. Lin, ROUGE: A package for automatic evaluation of summaries, in: Text Summarization Branches Out, Association for Computational Linguistics, Barcelona, Spain, 2004, pp. 74–81. URL: https://aclanthology.org/W04-1013.</p>
      <p>[15] T. Zhang, V. Kishore, F. Wu, K. Q. Weinberger, Y. Artzi, BERTScore: Evaluating text generation with BERT, CoRR abs/1904.09675 (2019). URL: http://arxiv.org/abs/1904.09675. arXiv:1904.09675.</p>
      <p>[16] C. di Cassazione, Sintesi criteri della massimazione civile e penale (2024). URL: https://www.cortedicassazione.it/resources/cms/documents/SINTESI_CRITERI_DELLA_MASSIMAZIONE_CIVILE_E_PENALE.pdf.</p>
      <p>[17] OpenAI, Gpt-3.5-turbo-1106 large language model (2023).</p>
      <p>[18] E. J. Hu, Y. Shen, P. Wallis, Z. Allen-Zhu, Y. Li, S. Wang, W. Chen, LoRA: Low-rank adaptation of large language models, CoRR abs/2106.09685 (2021). URL: https://arxiv.org/abs/2106.09685. arXiv:2106.09685.</p>
    </sec>
  </body>
  <back>
    <ref-list />
  </back>
</article>