<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>ITALIAN-LEGAL-BERT: A Pre-trained Transformer Language Model for Italian Law</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Daniele Licari</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Giovanni Comandè</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>EMbeDS, Sant'Anna School of Advanced Studies</institution>
          ,
          <addr-line>Pisa, 56127</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The state of the art in natural language processing is based on transformer models that are pre-trained on general knowledge and enable efficient transfer learning in a wide variety of downstream tasks, even with limited data sets. However, these models suffer a significant drop in performance when operating in specific, sectoral domains. This is problematic in the Italian legal context, as there are many discrepancies between the language found in generic open-source corpora (e.g., Wikipedia and news articles) and legal language, which can be cryptic and Latin-based and relies on domain-specific idiolectal formulas. In this paper, we introduce the ITALIAN-LEGAL-BERT model, obtained by additional pre-training of the Italian BERT model on Italian civil law corpora. It achieves better results than the 'general-purpose' Italian BERT in different domain-specific tasks.</p>
      </abstract>
      <kwd-group>
        <kwd>Legal artificial intelligence</kwd>
        <kwd>Pre-trained language model</kwd>
        <kwd>Italian Legal BERT</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        In many domains, specialized models perform better than models pre-trained on general
domains [
        <xref ref-type="bibr" rid="ref1 ref2 ref3 ref4">1, 2, 3, 4, 5</xref>
        ]. In general, the more semantically distant a domain-specific language
is from the common language, the greater the advantages of using specialized models,
especially in complex tasks.
      </p>
      <p>In the Italian legal context, the discrepancy between specialized and general language
is even more pronounced. Italian legal language has the unavoidable complexity of any
technical language, but it is made even more obscure by needless stylistic expedients that
often forcibly show continuity with the languages of the past (Latin or old Italian). Full
understanding of judicial texts is the exclusive prerogative of domain experts. The language contains
technicalities with specific and unambiguous meanings (“contumacia”, “anticresi”, “anatocismo”,
“sinallagma”). It also makes extensive use of terms in general use that are often employed with
their own specific meanings, if not entirely different from those in common use. For
example, “nullità”, “annullabilità”, “inefficacia”, and “inutilizzabilità”, which outside of legal language
are synonyms of annulment, denote entirely distinct and different concepts and situations.
Such locutions as “buon padre di famiglia” (good family man) and “possessore di buona fede”
(possessor in good faith) indicate different concepts from those of common use [6].</p>
      <p>Chalkidis et al. [7] developed the first transformer-based model for the English legal domain
(LEGAL-BERT), improving on the performance of the general-purpose model (BERT-BASE) in
several prediction tasks. The basic idea is that a model with legal domain knowledge can classify
legal documents better than a model with general knowledge.</p>
      <p>Taking inspiration from LEGAL-BERT, we report on the development of the
ITALIAN-LEGAL-BERT model, capable of understanding the semantic meaning of Italian legal texts, obtained by additional
pre-training of ITALIAN XXL BERT (available on the Hugging Face hub [8]) on Italian civil law
corpora.</p>
      <p>In this work, we make the following contributions:
1. We publicly release ITALIAN-LEGAL-BERT (at huggingface.co/dlicari/Italian-Legal-BERT) to
assist Italian legal NLP research. It is, to the best of our knowledge, the first pre-trained
language model further trained on a large corpus of Italian civil cases.
2. We demonstrate that ITALIAN-LEGAL-BERT outperforms its generalized equivalent
in terms of perplexity (PPL) and end results in downstream tasks such as sequence
classification, semantic similarity, and named entity recognition in the Italian legal
domain.
3. We also evaluated the model on anonymized datasets to explore whether it is biased
toward demographic information and personal data.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>The legal writing system differs greatly from generic texts, with many domain-specific
peculiarities. Several researchers have demonstrated that the use of domain-specific pre-trained models can
improve the performance of downstream tasks in the legal domain.</p>
      <p>Chalkidis et al. [7] proposed the LEGAL-BERT model, pre-trained from scratch on 11.5 GB of
legal texts, and a variant obtained by further pre-training BERT-Base on legal corpora. Their experiments
indicated more substantial improvements in the most challenging end-tasks (i.e., multi-label
classification in ECHR-CASES and contract header and lease details in CONTRACTS-NER), where
in-domain knowledge is more important. In addition, no significant differences in
performance were found between the two LEGAL-BERT variants.</p>
      <p>Similar evidence is reported by Zheng et al. [9]. They also trained LEGAL-BERT models,
both with additional pre-training from BERT-Base and with pre-training from scratch, using
a 37 GB legal text collection. They compared their LEGAL-BERT and BERT-Base models on
different downstream NLP tasks of varying difficulty and domain specificity, and they suggest
using domain-specific pre-trained models for highly difficult legal tasks. Their LEGAL-BERT models performed better
than BERT-Base in complex downstream tasks such as identifying whether contract terms are
potentially unfair [10]. In contrast, additional domain pre-training adds little value over BERT
in simpler tasks.</p>
      <p>The recent works of Zhang et al. [11, 12] on legal argument mining confirm this trend.
Domain-specific BERT variants have achieved strong performance in many tasks, and no significant
differences were found between the two different methods of domain adaptation.</p>
      <sec id="sec-2-1">
        <p>Success in this area encouraged researchers to create pre-trained language models on legal
corpora in different languages [13]. Masala et al. [14] released the jurBERT model,
pre-trained on a large Romanian legal corpus, which outperformed several strong baselines for legal
judgment prediction. In the same year, Douka et al. [15] created a language model adapted
to French legal text, demonstrating that it works better in the French legal domain
than its generalized equivalents. In China, researchers [16] have improved many predictive
tasks on long Chinese legal documents through a language model pre-trained on millions of
documents published by the Chinese government.</p>
        <p>
          In Italy, Tagarelli and Simeri [
          <xref ref-type="bibr" rid="ref5">17</xref>
          ] proposed the LamBERTa models for retrieving law
articles, developing a BERT model further pre-trained on the Italian civil code (ICC, a few megabytes
of data). Their model outperformed the "predecessors" of BERT among text classification models
(BiLSTM, TextCNN, TextRCNN, Seq2Seq, Transformer) on prediction tasks over ICC articles.
Unfortunately, they did not provide a direct comparison with the Italian BERT model on which
the domain adaptation was performed. Therefore, it was not possible to evaluate the advantages
of domain fitting of the BERT model over the equivalent generalized model in the reported
downstream tasks.
        </p>
        <p>The work cited above differs from ours in terms of the reference corpus, problems addressed,
and analysis of results. First, our model was trained on a large collection of decrees, ordinances,
and judgments of Italian courts. These may include, in addition to the cited laws of the civil code,
the judge's reasons, facts, decisions, proposals of the parties, medico-legal information,
legal rules, verified evidence, witnesses, etc. Second, this variety of information and the size
of the training dataset allowed us to create a language model that better represents the Italian
legal context by capturing the complex semantic interactions between facts, reasons, and
laws. Therefore, our model can be applied to more complex general tasks, such as identifying
rhetorical roles, retrieving similar cases, argument mining, legal reading
comprehension, and legal question answering. Third, our analysis focused on directly comparing
the generalized Italian BERT model and the model adapted to the legal domain,
ITALIAN-LEGAL-BERT, to assess the improvements achieved in several downstream tasks. Finally, our model was
shared on the Hugging Face platform to maximize usability and make a concrete contribution to
the growth of NLP applications in the Italian legal context.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Italian Legal BERT</title>
      <p>
        Background. BERT (Bidirectional Encoder Representations from Transformers [
        <xref ref-type="bibr" rid="ref6">18</xref>
        ]) is a
contextual word embedding model based on the transformer architecture [
        <xref ref-type="bibr" rid="ref7">19</xref>
        ] that creates a
context-sensitive embedding for each word in a given sentence, which is then used for
downstream tasks. BERT can be embedded in a downstream task and developed as a
task-specific integrated architecture.
      </p>
      <p>Italian BERT. The Italian XXL BERT model (cased, 12-layer, 768-hidden, 12-heads, 110M
parameters) has the Bidirectional Encoder Representations from Transformers architecture and
was trained on a large Italian corpus (81 GB) derived from the Italian Wikipedia, various texts
from the OPUS corpora collection (opus.nlpl.eu), and data from the Italian part of the OSCAR
corpus (oscar-corpus.com). It is available on the Hugging Face model hub [8] and was trained
by the MDZ Digital Library team at the Bavarian State Library.</p>
      <sec id="sec-3-1">
        <p>Training procedure. We initialized ITALIAN-LEGAL-BERT with ITALIAN XXL BERT
and pre-trained it for 4 additional epochs on 3.7 GB of text from the National Jurisprudential
Archive using the Hugging Face PyTorch-Transformers library [8]. We used the BERT architecture
with a language modeling head on top, the AdamW optimizer, an initial learning rate of 5e-5 (with
linear learning rate decay, ending at 2.525e-9), sequence length 512, batch size 10 (imposed
by GPU capacity), and 8.4 million training steps, on one V100 16 GB GPU. More details on the
hyperparameters we considered for each training phase can be found in the appendix.</p>
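        <p>As a sanity check, the reported step count follows directly from the corpus and batch size: one optimizer step per batch of 10 sentences over 21,004,500 sentences for 4 epochs. A minimal sketch of the arithmetic:

```python
import math

# Figures as reported above (corpus size, batch size, epochs).
num_sentences = 21_004_500
batch_size = 10
epochs = 4

# One optimizer step per batch: steps = ceil(sentences / batch) * epochs.
steps_per_epoch = math.ceil(num_sentences / batch_size)
total_steps = steps_per_epoch * epochs
print(total_steps)  # 8401800, i.e., the ~8.4 million training steps reported
```
</p>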
        <p>
          Training Dataset. National Jurisprudential Archive (Archivio Giurisprudenziale Nazionale,
pst.giustizia.it) is a public repository containing millions of legal documents (decrees, orders,
and civil judgments) from Italian courts and courts of appeal. We downloaded about 235,000
documents as PDF files. The documents were converted to plain text using the Tika framework
[
          <xref ref-type="bibr" rid="ref8">20</xref>
          ].
        </p>
        <p>Preprocessing Dataset. We preprocessed the case law corpus with some cleaning functions.
We compacted whitespace and newlines using a regular expression. The sentence segmentation
process was customized by adding new tokenization rules to the spaCy model for the Italian
language; the added exceptions concern abbreviations and acronyms used in Italian legal texts
(the full list is available at https://huggingface.co/dlicari/Italian-Legal-BERT/blob/main/abbreviazioni.csv).
Segmented sentences were cleaned up by removing all special characters through an additional
regular-expression rule. The final corpus contains 21,004,500 sentences and 498,002,402 words (3.7 GB).
The final model input was created by applying the Italian BERT tokenizer to the corpus sentences,
truncating them to the maximum length (512 tokens).</p>
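        <p>The cleaning steps above can be sketched as two regular-expression passes; the exact patterns below are illustrative assumptions, not the authors' code:

```python
import re

def compact_whitespace(text):
    # Collapse runs of spaces, tabs, and newlines into single spaces.
    return re.sub(r"\s+", " ", text).strip()

def remove_special_chars(sentence):
    # Keep letters (including accented), digits, and basic punctuation;
    # the character class is an assumed stand-in for the paper's rule.
    return re.sub(r"[^0-9A-Za-zÀ-ÿ.,;:()'\" -]", "", sentence)

raw = "Il   Tribunale,\n\n visti gli artt.\t737 c.p.c. *** "
print(remove_special_chars(compact_whitespace(raw)).strip())
# Il Tribunale, visti gli artt. 737 c.p.c.
```
</p>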
        <p>Evaluation Dataset. We downloaded an additional 20,000 civil cases from the National
Jurisprudential Archive and applied the same preprocessing procedure as for the training set,
obtaining a corpus of 566,000 sentences and 17,936,466 words. To evaluate
performance in the criminal context, we also downloaded 21,000 criminal cases from
italgiureweb (italgiure.giustizia.it), a corpus containing 702,677 sentences and 20,164,194 words.
Finally, we applied random masking (15% of tokens) to the sentences in both datasets.</p>
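        <p>The random masking can be sketched as follows (a simplified, token-level illustration; BERT actually masks at the subword level):

```python
import random

def mask_tokens(tokens, ratio=0.15, seed=0):
    # Replace a random 15% of the tokens with BERT's [MASK] token.
    rng = random.Random(seed)
    n_mask = max(1, round(len(tokens) * ratio))
    positions = set(rng.sample(range(len(tokens)), n_mask))
    return ["[MASK]" if i in positions else t for i, t in enumerate(tokens)]

sentence = "il giudice dichiara la nullità del contratto".split()
print(mask_tokens(sentence))
```
</p>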
        <p>MLM Evaluation. Perplexity (PPL) is one of the most common metrics for evaluating
language models. It is the exponential of the cross-entropy loss; a lower perplexity indicates a
better model. The perplexity for the MLM objective is computed by making predictions for the
masked tokens (which represent 15% of the total here) while having access to the rest of the
tokens.</p>
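        <p>The relationship between loss and perplexity can be sketched in a few lines (the loss values are illustrative, not the models' actual losses):

```python
import math

def perplexity(mean_cross_entropy):
    # PPL is the exponential of the mean cross-entropy over the masked tokens.
    return math.exp(mean_cross_entropy)

# Illustrative: a masked-LM loss dropping from 2.0 to 1.8 nats lowers
# perplexity from about 7.39 to about 6.05, an ~18% reduction comparable
# in size to the civil-case improvement reported in Table 1.
print(round(perplexity(2.0), 2), round(perplexity(1.8), 2))
```
</p>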
        <sec id="sec-3-1-1">
          <title>Table 2: Fill-mask examples</title>
          <p>[Table 2, garbled in extraction, lists Italian legal sentences in which one word is masked (shown struck through), together with each model's top-5 predictions and their probabilities. The example sentences include "Il padre può vedere il figlio a weekend alternati" (en: The father can see his son on alternate weekends), "viene stabilita una collocazione paritetica dei figli" (en: an equal placement of the children is established), "assegno di mantenimento comprensivo di spese straordinarie" (en: maintenance allowance including extraordinary expenses), "viene stabilito il mantenimento diretto" (en: direct maintenance is established), "cambiamento di sesso senza operazione chirurgica" (en: sex change without surgery), "Il ricorrente ha chiesto revocarsi l'obbligo di pagamento" (en: The plaintiff requested that the payment obligation be revoked), and "Non avendo la Corte di merito valutato la prova" (en: The Court of merit not having assessed the evidence).]</p>
        </sec>
        <sec id="sec-3-1-3">
          <p>The results in Table 1 show that ITALIAN-LEGAL-BERT reduced perplexity by 18.2% on
civil cases and by 15.4% on criminal cases with respect to Italian XXL BERT. The lower perplexity
scores on criminal cases could indicate a greater use of commonly used notions than in civil
cases.</p>
          <p>Fill Mask. A further qualitative investigation was conducted by asking judges for some
domain sentences and inferring a masked word contained in each sentence. We
used the mask-filling pipeline of the Hugging Face Transformers library to return the top 5
suggestions for the masked word. Table 2 reports the results for the Italian BERT and
ITALIAN-LEGAL-BERT models; the struck-through words have been masked to be predicted by the
models.</p>
          <p>This analysis helps us better study the implicit knowledge that the ITALIAN-LEGAL-BERT
model accumulated during pre-training. As can be seen in Table 2, the correct word always
appears in the top three suggestions in the inference made with ITALIAN-LEGAL-BERT, indicating
that our model captures the specific context better than the general model.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Downstream evaluation task</title>
      <p>
        The Italian BERT and ITALIAN-LEGAL-BERT models were evaluated and compared on three
domain-specific downstream tasks. In the first task, we trained the models with an additional
sequence tagging layer on top using spaCy [
        <xref ref-type="bibr" rid="ref9">21</xref>
        ] to recognize the names/roles of actors involved in
the trial. For the second task, we trained the models with a sequence classification head (a linear
layer on top of the pooled output) for the classification of sentence type. In the last downstream
task, we tested the models on textual semantic similarity using sentence embeddings (mean
pooling on the last layer of the models) and cosine similarity.
4.1. Named Entity Recognition
We trained and evaluated the ITALIAN-LEGAL-BERT and Italian BERT models on a Named
Entity Recognition (NER) task to identify named entities by the type of person found in the text
of judgments. We defined 7 entity types, as shown in Table 3.
      </p>
      <p>
        Dataset. We selected 118 judgments from the civil law database of the Court of Genoa, with
which we have a scientific collaboration agreement. Given the significant experience of our
research group on these issues, we selected all the personal injury judgments (59)
contained in the database and an equal number of family judgments, chosen stratified by
text length. Next, we converted the PDF files to plain text using Tika [
        <xref ref-type="bibr" rid="ref8">20</xref>
        ], applied some
text cleaning functions (removal of multiple blank lines and extra spaces), and converted the
texts to an annotatable data structure (JSONL format) to import them into the Doccano annotation
tool [
        <xref ref-type="bibr" rid="ref10">22</xref>
        ]. We set up and used the Doccano tool for quick and easy manual annotation of texts
with the 7 predefined entities. The experts found and annotated 6,355 entities; Table 3 shows
the distribution of entities in the dataset. Finally, the dataset was split 80% for model training
(with 10% of the training set held out for validation) and 20% for model evaluation, in a stratified
fashion to preserve the distribution of entities in the two subsets.
      </p>
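      <p>The stratified 80/20 split can be sketched as follows (a simplified illustration; the function and variable names are ours, not the authors'):

```python
import random
from collections import defaultdict

def stratified_split(examples, labels, test_ratio=0.2, seed=42):
    # Group by label, then split each group 80/20 so that both subsets
    # preserve the overall label distribution.
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for example, label in zip(examples, labels):
        by_label[label].append(example)
    train, test = [], []
    for label, items in by_label.items():
        rng.shuffle(items)
        n_test = round(len(items) * test_ratio)
        test.extend(items[:n_test])
        train.extend(items[n_test:])
    return train, test
```
</p>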
      <p>
        Model architecture. We created our NER models using spaCy's v3.2 Named Entity
Recognition system [
        <xref ref-type="bibr" rid="ref9">21</xref>
        ]. The model architecture consists of a two-tier pipeline: a contextual
embedding layer and a transition-based chunking model [
        <xref ref-type="bibr" rid="ref11">23</xref>
        ]. The first uses pre-trained
language models to encode tokens into continuous vectors based on their context. The
second predicts text structure by mapping it onto a set of state transitions: it uses the output
(contextual word embeddings) of the previous step to incrementally construct states from
the input sequence and assign them entity labels using a multilayer neural network. We
trained and compared two spaCy-based entity recognition pipelines, using Italian BERT and
ITALIAN-LEGAL-BERT as the contextual embedding layer.
      </p>
      <p>Training procedure. We trained two named entity recognition pipelines, Italian BERT
+ spaCy's NER and ITALIAN-LEGAL-BERT + spaCy's NER, using the AdamW optimizer, an initial
learning rate of 5e-5 (with linear decay), a maximum of 20,000 steps, 250 warm-up steps,
early stopping with patience on the validation F1 score, and batch size 128 (see Table 10 in the
Appendix for more details).</p>
      <p>Evaluation. We compared the two NER pipelines using the exact match criterion against
gold-standard entities (both entity boundary and type must be correct) on the test set. Precision,
recall, and F-score are used to evaluate and compare performance. The results in Table 4
show that the NER pipeline with the ITALIAN-LEGAL-BERT contextual encoder outperforms the one
with Italian BERT in recognizing most entities.
4.2. Sentence Classification</p>
      <p>Unlike in the English legal context, there are no public datasets on which to test models on
downstream NLP tasks in the Italian legal context. Therefore, we created a new benchmark dataset for
sentence classification tasks. A common civil judgment has 5 basic parts:
1. INTRODUCTION: an indication of the judge who pronounced it and of the
parties and their lawyers;
2. CONCLUSION OF THE PARTIES: the conclusions of the prosecutor (if any) and those of
the parties;
3. DEVELOPMENT OF THE TRIAL: a summary of the appealed judgment and the reasons for
appeal;
4. REASON: the concise statement of the factual and legal reasons for the decision (the
statement of reasons);
5. CONCLUSION: the decisional content of the judgment.</p>
      <p>We want to evaluate the ITALIAN-LEGAL-BERT model on a sentence classification task
by trying to predict the section a sentence belongs to. Although this downstream task was created as a
benchmark, it could have practical utility, because Italian judgments do not follow a precise
standard: sections are often merged or identified by a variety of headers, making it
difficult to apply rules based on regular expressions.</p>
      <p>Benchmark Dataset. We randomly selected 6,190 sentences from documents with 5 sections
(identified using regular expressions) from the Italian Civil Law DB (pst.giustizia.it), stratified on
section length (Table 5).</p>
      <p>Finally, the dataset was split 80% for model training and 20% for model evaluation, in a
stratified fashion on the section name to preserve the distribution of sentences across both
subsets. The training set was further divided, using 10% of it for validation.</p>
      <p>Training procedure. We trained the Italian BERT and ITALIAN-LEGAL-BERT models with a
sequence classification head on top (a linear layer on top of the pooled output) using the same
hyperparameter configuration for both (Table 11 in the Appendix). The final models were
trained to the best epoch, i.e., the one with the highest validation MCC (Matthews correlation coefficient)
score in the range of 1 to 7 epochs (epoch 5 was best for Italian BERT and epoch 3 for
ITALIAN-LEGAL-BERT).</p>
      <p>Evaluation. We compared the results on the test set of the two models, Italian BERT and
ITALIAN-LEGAL-BERT, trained with the same configuration (Table 11). The models' performance
was evaluated with macro F1 and MCC scores on the test set. The results in Table 6 show
that the model pre-trained on the Italian legal domain (0.89 F1, 0.83 MCC) outperforms the
"general-purpose" model (0.869 F1, 0.806 MCC) in this sentence classification task.</p>
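      <p>MCC can be computed for the multiclass case directly from the label counts (the generalized R_K formula); a minimal sketch:

```python
import math
from collections import Counter

def mcc(y_true, y_pred):
    # Multiclass Matthews correlation coefficient (the R_K statistic).
    s = len(y_true)                                  # total samples
    c = sum(t == p for t, p in zip(y_true, y_pred))  # correct predictions
    t_k = Counter(y_true)                            # true count per class
    p_k = Counter(y_pred)                            # predicted count per class
    cov_tp = c * s - sum(t_k[k] * p_k[k] for k in set(t_k) | set(p_k))
    cov_tt = s * s - sum(v * v for v in t_k.values())
    cov_pp = s * s - sum(v * v for v in p_k.values())
    if cov_tt == 0 or cov_pp == 0:
        return 0.0
    return cov_tp / math.sqrt(cov_tt * cov_pp)

print(mcc(["a", "a", "b", "c"], ["a", "a", "b", "c"]))  # 1.0 for a perfect model
```
</p>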
      <sec id="sec-4-2">
        <p>
          Model Bias. Similar to Chalkidis et al. [
          <xref ref-type="bibr" rid="ref12">24</xref>
          ], we investigated how sensitive our model is to
personal data. The main information may concern "parties", "witnesses", "important companies",
"identifiers", "dates", or "places". The purpose is to understand whether the model overfits on
these data and makes decisions based on demographic and personal information: e.g., 'Mario
Rossi' is a judge, therefore this is a 'Decision' sentence; or 'Daniele Licari' is a defendant, therefore
this is a 'Conclusion of the parties' sentence. The following experiments focused on the sensitivity
of our models to such information by training and evaluating the models on anonymized
versions of the dataset.
        </p>
        <p>
          To recognize the entities to be anonymized, we used the model from previous work ([
          <xref ref-type="bibr" rid="ref13">25</xref>
          ]), based
on pre-trained Transformer embeddings and the transition-based chunking model of spaCy. It
found 6,393 entities to be anonymized in the dataset (6,190 sentences). We applied two different
anonymization strategies: OMISSIS and TAGGING. The OMISSIS strategy replaces Named
Entities with a fixed value (e.g., "Daniele lives in Milan" -&gt; "OMISSIS lives in OMISSIS"). The
TAGGING strategy replaces Named Entities with the entity name (e.g., "Daniele lives in Milan"
-&gt; "PERSON lives in LOCATION").
        </p>
        <p>The two versions of the anonymized dataset were used to train the two sentence classification
models, with Italian BERT and ITALIAN-LEGAL-BERT, using the same configuration and
training procedure performed on the raw data. Table 7 shows the comparison of results on the
classification models trained on raw and anonymized datasets.</p>
        <p>The results of the models on the anonymized dataset and the original dataset are very similar,
which might indicate that personal data are not relevant for section prediction.
4.3. Semantic Similarity
We tested the ability of the models on the task of determining whether two pieces of text are
similar in terms of meaning. The strong assumption is that two contiguous sentences within a
specific section are semantically related and refer to the same context, whereas two sentences
taken randomly from two different documents and different sections can refer to different
contexts.</p>
        <p>Dataset. We built the dataset by taking, from a subset of 1,000 judgments from the Italian
Civil Law DB, pairs of contiguous portions of text (of 5 sentences each) in the "CONCLUSION
OF THE PARTIES" and "DEVELOPMENT OF THE TRIAL" sections, and text pairs from two different
documents and sections. We labeled as 'similar' the contiguous pairs from the same document
and as 'unsimilar' the pairs from different documents. The final dataset contains 2,000 text pairs
(1,000 labeled 'similar' and 1,000 'unsimilar'). The choice of taking similar
sentences from these two sections was made on the basis that the "CONCLUSION OF THE
PARTIES" and "DEVELOPMENT OF THE TRIAL" sections are more descriptive and more
self-contained than sections such as 'REASON' or 'CONCLUSION', which contain
many references to the previous sections.</p>
        <p>Similarity Procedure. The semantic similarity between the text pairs in the dataset was
evaluated using both the Italian BERT and ITALIAN-LEGAL-BERT models to obtain the context
vectors of the two texts to be compared (using mean pooling on the last layer) and, then,
the cosine similarity between the pair of vectors, cos(u, v) = (u · v) / (||u|| ||v||), as the similarity score.
For each model, a similarity threshold was established to identify similar and non-similar texts.
Figure 1 shows the distribution of similarity scores over the groups of 'similar' and 'unsimilar'
pairs of sentences, calculated using the Italian BERT and ITALIAN-LEGAL-BERT models as
contextual sentence encoders.</p>
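        <p>The procedure above, mean pooling followed by cosine similarity, can be sketched in plain Python; the token vectors here are toy numbers, not actual BERT outputs:

```python
import math

def mean_pool(token_vectors):
    # Average the last-layer token vectors into a single sentence vector.
    n, dim = len(token_vectors), len(token_vectors[0])
    return [sum(vec[i] for vec in token_vectors) / n for i in range(dim)]

def cosine_similarity(u, v):
    # cos(u, v) = (u . v) / (||u|| ||v||)
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Two toy "sentences" of 3-dimensional token embeddings.
sent1 = mean_pool([[1.0, 0.0, 1.0], [0.0, 2.0, 1.0]])  # mean is [0.5, 1.0, 1.0]
sent2 = mean_pool([[1.0, 2.0, 2.0]])                   # mean is [1.0, 2.0, 2.0]
print(round(cosine_similarity(sent1, sent2), 3))       # 1.0: parallel vectors
```
</p>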
        <p>Optimized threshold. A similarity threshold is a numerical value applied to the
similarity scores to identify the two classes ('similar' and 'unsimilar'). Different thresholds
produce different results in terms of precision, recall, and F1-score when compared with the
annotated dataset. A threshold that is too low classifies all sentence pairs as 'similar'; conversely,
a value that is too high classifies all pairs as 'unsimilar'. The choice
of a correct similarity threshold depends on the data under consideration and the specific vector
space of a model. Therefore, we optimized its value independently for both models by selecting
the value that maximizes the F1-score on the dataset. The values tested are in the range of
0 to 1 with step 0.001. The experiments suggest 0.897 as the best threshold for Italian BERT and
0.981 for ITALIAN-LEGAL-BERT (the red dashed lines in Figure 1).</p>
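        <p>The threshold sweep can be sketched as follows (toy scores and labels; True marks a 'similar' pair):

```python
def best_threshold(scores, labels, step=0.001):
    # Sweep thresholds from 0 to 1 and keep the one maximizing F1
    # for the 'similar' class (score at or above threshold = 'similar').
    best_t, best_f1 = 0.0, 0.0
    for i in range(int(round(1.0 / step)) + 1):
        t = i * step
        tp = sum(s >= t and l for s, l in zip(scores, labels))
        fp = sum(s >= t and not l for s, l in zip(scores, labels))
        fn = sum(t > s and l for s, l in zip(scores, labels))
        if tp == 0:
            continue
        precision, recall = tp / (tp + fp), tp / (tp + fn)
        f1 = 2 * precision * recall / (precision + recall)
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1

# Perfectly separable toy scores reach F1 = 1.0 at some threshold.
print(best_threshold([0.95, 0.90, 0.20, 0.10], [True, True, False, False]))
```
</p>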
        <p>Evaluation. The performance of text similarity classifications with optimized threshold was
evaluated with precision, recall, and F1 score based on true labels. The experimental results,
reported in Table 8, show that the ITALIAN-LEGAL-BERT model outperformed the Italian
BERT model in this downstream semantic similarity task.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Limitations</title>
      <p>The main limitations come from the limited computational resources with which our models
were trained. We are aware that a larger batch size, extended parameters optimization, and a
larger data set could lead to better results.</p>
      <p>Another limitation concerns the use of a single data source. Unlike for the English language,
it is not easy to find large legal corpora on which to train domain-specific models. Although
the dataset contains decrees, orders, and judgments from all Italian courts, we did not consider
criminal law in our training. However, we evaluated the perplexity of mask filling on
more than 20,000 criminal cases, obtaining results similar to those in the civil context. This suggests
that the model might work well in the criminal context as well, but further investigation on
downstream legal tasks is needed. In addition, although the model was evaluated on a dataset
different from the pre-training data, the civil evaluation dataset could still contain some documents
written by the same judges, which could affect the measured gain of ITALIAN-LEGAL-BERT. We
think this effect is small, since the gain on the criminal case evaluation dataset (written by different judges) is
still significant compared to the generic Italian BERT model.</p>
      <p>Moreover, the type of downstream task could be a limiting factor in model performance.
ITALIAN-LEGAL-BERT is designed to improve current performance in complex Italian legal
tasks, where domain knowledge is very important. As suggested by experiments on the English
LEGAL-BERT [9], using the model in simple downstream tasks may not lead to improvements
over a model trained on general knowledge, and may even worsen performance.</p>
      <p>Finally, a common limitation of all deep learning systems is that they are not easily
interpreted and retain the biases of the data on which they were trained. In particular, biases in the
data can lead a model to generate stereotypical or biased content. We explored whether the models are
biased toward demographic and personal information via data anonymization, but the analysis
depends on the specific downstream task and deserves further investigation.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion and Future Direction</title>
      <p>
        In this article, we introduced ITALIAN-LEGAL-BERT, which aims to improve the outcomes of
downstream NLP tasks in the Italian legal domain and to contribute to the advancement of legal
NLP research, computational law, and legal technology applications. It is a pre-trained language
representation for Italian law based on ITALIAN XXL BERT, with additional pre-training on
235,000 civil cases (domain-adaptive pre-training). We compared the ITALIAN-LEGAL-BERT
and Italian BERT models on the downstream tasks of identifying named entities by person type,
semantic similarity, and classifying rhetorical sentences by section class. We demonstrated that
ITALIAN-LEGAL-BERT can improve the performance of the 'general-purpose' model on
downstream tasks in the Italian legal domain. In the future, we plan to exploit ITALIAN-LEGAL-BERT's
potential and test it on more complex tasks, such as rhetorical role identification (e.g., evidence, legal rule,
reasoning, decision) [
        <xref ref-type="bibr" rid="ref14">26</xref>
        ], similar case retrieval, legal reading comprehension, and legal question
answering. In addition, we are working to test it in combination with other deep learning
architectures (LSTM, CNN) to achieve better results. Finally, we intend to release new versions
of the ITALIAN-LEGAL-BERT pre-trained from scratch on the large Italian legal corpora.
Harms (WOAH 2021), Association for Computational Linguistics, Online, 2021, pp. 17–25.
      </p>
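      <p>As an illustration of the semantic-similarity comparison above, the following minimal sketch mean-pools token embeddings into sentence vectors and compares them by cosine similarity. The pooling and similarity choices are common practice, assumed here rather than taken from our setup, and the toy arrays stand in for the BERT encoder's last hidden state.</p>

```python
import numpy as np

def mean_pool(token_embeddings, attention_mask):
    """Average token vectors into one sentence vector, ignoring padding."""
    mask = attention_mask[:, None].astype(float)  # (seq_len, 1)
    return (token_embeddings * mask).sum(axis=0) / mask.sum()

def cosine_similarity(a, b):
    """Cosine similarity between two 1-D vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy (seq_len, hidden) arrays standing in for encoder last hidden states.
rng = np.random.default_rng(0)
emb_a = rng.normal(size=(6, 8))
emb_b = emb_a + rng.normal(scale=0.1, size=(6, 8))  # near-paraphrase
mask = np.array([1, 1, 1, 1, 1, 0])  # last position is padding

sim = cosine_similarity(mean_pool(emb_a, mask), mean_pool(emb_b, mask))
```

Near-paraphrase sentence pairs should score close to 1, unrelated pairs close to 0; this is the comparison underlying the semantic-similarity evaluation.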
      <p>[5] M. Polignano, P. Basile, M. Degemmis, G. Semeraro, V. Basile, AlBERTo: Italian BERT Language
Understanding Model for NLP Challenging Tasks Based on Tweets, in: CLiC-it, 2019.
[6] M. Rosati, Forte e chiaro: Il linguaggio del giudice, IL LINGUAGGIO DEL PROCESSO (2016)
115–119. URL: https://www.uniba.it/ricerca/dipartimenti/sistemi-giuridici-ed-economici/
edizioni-digitali/i-quaderni/Quaderni62017Triggiani.pdf.
[7] I. Chalkidis, M. Fergadiotis, P. Malakasiotis, N. Aletras, I. Androutsopoulos, LEGAL-BERT:
The Muppets straight out of Law School, in: Findings of the Association for Computational
Linguistics: EMNLP 2020, Association for Computational Linguistics, Online, 2020, pp.
2898–2904. URL: https://aclanthology.org/2020.findings-emnlp.261. doi:10.18653/v1/2020.findings-emnlp.261.
[8] T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf,
M. Funtowicz, J. Davison, S. Shleifer, P. von Platen, C. Ma, Y. Jernite, J. Plu, C. Xu, T. Le Scao,
S. Gugger, M. Drame, Q. Lhoest, A. Rush, Transformers: State-of-the-Art Natural
Language Processing, in: Proceedings of the 2020 Conference on Empirical Methods in
Natural Language Processing: System Demonstrations, Association for Computational
Linguistics, Online, 2020, pp. 38–45. URL: https://aclanthology.org/2020.emnlp-demos.6.
doi:10.18653/v1/2020.emnlp-demos.6.
[9] L. Zheng, N. Guha, B. R. Anderson, P. Henderson, D. E. Ho, When Does Pretraining
Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset, 2021. URL:
http://arxiv.org/abs/2104.08671, arXiv:2104.08671 [cs].
[10] M. Lippi, P. Pałka, G. Contissa, F. Lagioia, H.-W. Micklitz, G. Sartor, P. Torroni,
CLAUDETTE: an automated detector of potentially unfair clauses in online terms of
service, Artificial Intelligence and Law 27 (2019) 117–139. URL: https://doi.org/10.1007/
s10506-019-09243-2. doi:10.1007/s10506-019-09243-2.
[11] G. Zhang, D. Lillis, P. Nulty, Can Domain Pre-training Help Interdisciplinary Researchers
from Data Annotation Poverty? A Case Study of Legal Argument Mining with BERT-based
Transformers (????) 10.
[12] G. Zhang, P. Nulty, D. Lillis, Enhancing Legal Argument Mining with Domain Pre-training
and Neural Networks, Journal of Data Mining &amp; Digital Humanities NLP4DH (2022) 9147.</p>
      <p>URL: https://jdmdh.episciences.org/9147. doi:10.46298/jdmdh.9147.
[13] J. Cui, X. Shen, F. Nie, Z. Wang, J. Wang, Y. Chen, A Survey on Legal Judgment Prediction:
Datasets, Metrics, Models and Challenges, 2022. URL: http://arxiv.org/abs/2204.04859.
doi:10.48550/arXiv.2204.04859, arXiv:2204.04859 [cs].
[14] M. Masala, R. Iacob, A. S. Uban, M.-A. Cidotã, H. Velicu, T. Rebedea, M. Popescu, jurBERT:
A Romanian BERT Model for Legal Judgement Prediction, NLLP (2021). doi:10.18653/
v1/2021.nllp-1.8.
[15] S. Douka, H. Abdine, M. Vazirgiannis, R. E. Hamdani, D. R. Amariles, JuriBERT: A Masked-Language
Model Adaptation for French Legal Text, NLLP (2021). doi:10.18653/v1/2021.nllp-1.9.
[16] C. Xiao, X. Hu, Z. Liu, C. Tu, M. Sun, Lawformer: A pre-trained language model for
Chinese legal long documents, AI Open 2 (2021) 79–84. URL: https://www.sciencedirect.
com/science/article/pii/S2666651021000176. doi:10.1016/j.aiopen.2021.06.003.</p>
    </sec>
    <sec id="sec-7">
      <title>A. Settings and Hyperparameters</title>
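      <p>The fine-tuning setup for the downstream experiments can be summarized as a small configuration sketch. The key names below are illustrative assumptions; the values (BertForSequenceClassification with BertTokenizer, AdamW, learning rates of 2e-05 and 5e-05, batch size 8, 1 to 7 epochs, selection on the best MCC score) follow the settings reported for our runs, while the roles of the remaining reported values (0.06, 0.2, 0.1, plausibly warmup and dropout ratios) are not mapped here.</p>

```python
# Illustrative fine-tuning configuration; key names are assumptions,
# values reflect the settings reported for the downstream experiments.
FINETUNE_CONFIG = {
    "model_class": "BertForSequenceClassification",
    "tokenizer_class": "BertTokenizer",
    "optimizer": "AdamW",
    "learning_rates": [2e-05, 5e-05],  # grid-searched
    "batch_size": 8,
    "epoch_range": (1, 7),
    "selection_metric": "MCC",  # keep the checkpoint with the best MCC score
}
```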
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Yoon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. H.</given-names>
            <surname>So</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kang</surname>
          </string-name>
          ,
          <article-title>BioBERT: a pre-trained biomedical language representation model for biomedical text mining</article-title>
          ,
          <source>Bioinformatics</source>
          (
          <year>2019</year>
          )
          <article-title>btz682</article-title>
          . URL: http://arxiv.org/abs/1901.08746. doi:10.1093/bioinformatics/btz682, arXiv:1901.08746 [cs].
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>E.</given-names>
            <surname>Alsentzer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Murphy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Boag</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.-H.</given-names>
            <surname>Weng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Jindi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Naumann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>McDermott</surname>
          </string-name>
          ,
          <article-title>Publicly Available Clinical BERT Embeddings</article-title>
          ,
          <source>in: Proceedings of the 2nd Clinical Natural Language Processing Workshop</source>
          , Association for Computational Linguistics, Minneapolis, Minnesota, USA,
          <year>2019</year>
          , pp.
          <fpage>72</fpage>
          -
          <lpage>78</lpage>
          . URL: https://aclanthology.org/W19-1909. doi:10.18653/v1/W19-1909.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>I.</given-names>
            <surname>Beltagy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Cohan</surname>
          </string-name>
          ,
          <article-title>SciBERT: A Pretrained Language Model for Scientific Text</article-title>
          ,
          <source>in: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP)</source>
          ,
          Association for Computational Linguistics
          , Hong Kong, China,
          <year>2019</year>
          , pp.
          <fpage>3615</fpage>
          -
          <lpage>3620</lpage>
          . URL: https://aclanthology.org/D19-1371. doi:10.18653/v1/D19-1371.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>T.</given-names>
            <surname>Caselli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Basile</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Mitrović</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Granitzer</surname>
          </string-name>
          ,
          <article-title>HateBERT: Retraining BERT for abusive language detection in English</article-title>
          ,
          <source>in: Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021)</source>
          , Association for Computational Linguistics, Online, 2021, pp. 17–25. URL: https://aclanthology.org/2021.woah-1.3. doi:10.18653/v1/2021.woah-1.3.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>A.</given-names>
            <surname>Tagarelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Simeri</surname>
          </string-name>
          ,
          <article-title>Unsupervised law article mining based on deep pre-trained language representation models with application to the italian civil code</article-title>
          ,
          <source>Artificial Intelligence and Law</source>
          <volume>30</volume>
          (
          <year>2022</year>
          )
          <fpage>417</fpage>
          -
          <lpage>473</lpage>
          . URL: https://doi.org/10.1007/s10506-021-09301-8. doi:10.1007/s10506-021-09301-8.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>J.</given-names>
            <surname>Devlin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.-W.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Toutanova</surname>
          </string-name>
          , BERT:
          <article-title>Pre-training of Deep Bidirectional Transformers for Language Understanding</article-title>
          ,
          <year>2019</year>
          . URL: http://arxiv.org/abs/1810.04805. doi:10.48550/arXiv.1810.04805, arXiv:1810.04805 [cs].
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>A.</given-names>
            <surname>Vaswani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Shazeer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Parmar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Uszkoreit</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Jones</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. N.</given-names>
            <surname>Gomez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Kaiser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Polosukhin</surname>
          </string-name>
          ,
          <article-title>Attention is all you need</article-title>
          ,
          <source>CoRR abs/1706.03762</source>
          (
          <year>2017</year>
          ). URL: http://arxiv.org/abs/1706.03762. arXiv:1706.03762.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>C. A.</given-names>
            <surname>Mattmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. L.</given-names>
            <surname>Zitting</surname>
          </string-name>
          , Tika in action, Manning Publications, Shelter Island, NY,
          <year>2012</year>
          . OCLC: ocn731912756.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>M.</given-names>
            <surname>Honnibal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Montani</surname>
          </string-name>
          , spaCy 2:
          <article-title>Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing</article-title>
          ,
          <year>2017</year>
          . To appear.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>H.</given-names>
            <surname>Nakayama</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Kubo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kamura</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Taniguchi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Liang</surname>
          </string-name>
          , doccano: Text annotation tool for human,
          <year>2018</year>
          . URL: https://github.com/doccano/doccano, software available from https://github.com/doccano/doccano.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>G.</given-names>
            <surname>Lample</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ballesteros</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Subramanian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kawakami</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Dyer</surname>
          </string-name>
          ,
          <article-title>Neural architectures for named entity recognition</article-title>
          ,
          <source>CoRR abs/1603.01360</source>
          (
          <year>2016</year>
          ). URL: http://arxiv.org/abs/1603.01360. arXiv:1603.01360.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>I.</given-names>
            <surname>Chalkidis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Androutsopoulos</surname>
          </string-name>
          ,
          <article-title>Neural Legal Judgment Prediction in English</article-title>
          ,
          <source>in: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics</source>
          , Association for Computational Linguistics
          , Florence, Italy,
          <year>2019</year>
          , pp.
          <fpage>4317</fpage>
          -
          <lpage>4323</lpage>
          . URL: https://aclanthology.org/P19-1424. doi:10.18653/v1/P19-1424.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>D.</given-names>
            <surname>Licari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. F.</given-names>
            <surname>Romano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Comandé</surname>
          </string-name>
          ,
          <article-title>Anonymization of Italian legal textual documents using deep learning</article-title>
          , volume
          <volume>2</volume>
          <source>of Proceedings of the 16th International Conference on Statistical Analysis of Textual Data (JADT22)</source>
          , VADISTAT Press / Edizioni Erranti, Naples,
          <year>2022</year>
          , pp.
          <fpage>552</fpage>
          -
          <lpage>559</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>V. R.</given-names>
            <surname>Walker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Pillaipakkamnatt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Davidson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Linares</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. J.</given-names>
            <surname>Pesce</surname>
          </string-name>
          ,
          <article-title>Automatic Classification of Rhetorical Roles for Sentences: Comparing Rule-Based Scripts with Machine Learning</article-title>
          ,
          <source>in: ASAIL@ICAIL</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>