<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>LLMs to Detect Cyber Child Abuse in Textual Conversations</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Maria Teresa Baldassarre</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vita Santa Barletta</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vito Bavaro</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Danilo Caivano</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alberto Pio De Matteis</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrea Lippolis</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Antonio Piccinno</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science, University of Bari “Aldo Moro”</institution>
          ,
          <addr-line>Via Orabona 4, 70121 Bari</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In contemporary online interactions, identifying inappropriate language and safeguarding minors from harmful communication is a critical challenge. This study explores the use of Large Language Models (LLMs) to analyze text, detecting patterns indicative of age-specific language and the presence of sexual or pornographic references. A fine-tuning of the LLaMAntino model was performed, using a dataset of synthetically generated sentences designed to replicate real-world scenarios. The fine-tuned model demonstrated enhanced performance compared to its baseline (LLaMAntino 3 ANITA 8B), providing detailed and context-sensitive explanations for its classifications. The results highlight the potential of LLMs in addressing sensitive linguistic phenomena with precision, offering a foundation for detecting indirect combinations of sexual references in conversations involving minors. Future work can focus on incorporating real conversational data and involving subject-matter experts to refine the model's interpretability and reliability. Additionally, the exploration of advanced architectures and fine-tuning techniques will be considered to further balance model complexity and processing efficiency.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>In the modern digital age, the prevalence of harmful communication in online interactions has become an increasingly pressing issue, particularly when it comes to protecting minors. Inappropriate language, which may include age-improper expressions or sexually explicit content, poses significant risks to the emotional and psychological well-being of young individuals [1]. If left unaddressed, these dangerous interactions can contribute to unsafe digital environments, foster exploitative behaviours, and erode trust in online platforms. Identifying and mitigating such communication is crucial to ensuring a safer and more supportive space for minors in the digital landscape.</p>
      <p>Modern Large Language Models (LLMs) offer a promising solution to this challenge due to their ability to analyze and interpret complex textual patterns and contexts. These advanced models, when fine-tuned with targeted datasets, can effectively identify sensitive linguistic phenomena
and provide detailed, context-aware explanations. By leveraging the power of LLMs,
researchers and developers can create tools to detect harmful communication patterns,
safeguard vulnerable populations, and promote healthier digital interactions.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Recent research on large language models (LLMs) in the field of harmful language detection
and safeguarding minors has seen notable advancements. Studies have explored techniques for
identifying harmful communication patterns, including predatory behavior, sexually explicit
content, and age-inappropriate language. This involves fine-tuning models with
domain-specific datasets and leveraging transfer learning to improve their ability to detect subtle
linguistic cues indicative of harmful interactions, as demonstrated in [
        <xref ref-type="bibr" rid="ref2 ref3">2,3</xref>
        ].
      </p>
      <p>
        Recent work has also focused on improving LLMs' capabilities to process multimodal inputs
and apply advanced reasoning techniques for better contextual understanding. For instance,
studies such as [
        <xref ref-type="bibr" rid="ref4 ref5">4,5</xref>
        ] have explored the integration of visual and textual data to enhance the
detection of harmful content, including age-inappropriate material, in digital platforms. By
combining text and image analysis, these models can identify potential risks more accurately,
such as detecting sexual or harmful content across both written and visual formats. This
multimodal approach emphasizes the importance of using different data types to broaden the
scope of harmful content detection, offering a more comprehensive and effective safeguard for
minors in online spaces.
      </p>
      <p>
        In general, these studies collectively underscore the growing potential of LLMs to address the
challenges of harmful communication in digital spaces, particularly in contexts involving
minors. By fine-tuning models on specialized datasets, incorporating multimodal reasoning, and
addressing ethical concerns, researchers are laying a foundation for safer and more responsible
AI applications in safeguarding vulnerable populations [
        <xref ref-type="bibr" rid="ref6 ref7">6,7</xref>
        ].
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Proposed Approach</title>
      <p>Current LLMs face challenges in accurately determining and explaining whether a text contains
age-specific language and/or sexual (abuse) references. To address this, we focused on
fine-tuning a single model to specialize in these tasks more effectively. Using LLaMAntino as our
baseline, we examined its ability to provide detailed explanations for requests like: "Analyse
the following text and indicate whether the language used belongs to a minor or an adult,
explaining:" and "Analyse the following text and indicate whether the language used contains
sexual references or porn, explaining:".</p>
      <p>The initial responses from the model were not sufficiently detailed, leading us to perform
fine-tuning using a carefully constructed dataset. We annotated 500 synthetic samples to train and
test the model, ensuring the dataset covered a variety of cases involving minor/adult language
distinctions and adult sexual references. These synthetic samples were generated using
ChatGPT-4o, providing the model with a more comprehensive understanding of the nuances
involved in identifying and explaining these specific linguistic patterns. The fine-tuned model
aimed to enhance its accuracy and ability to provide clearer, context-sensitive explanations for
such complex tasks.</p>
      <sec id="sec-3-1">
        <title>3.1. Description of the solution and dataset</title>
        <p>The initial phase of our work focused on generating synthetic phrases for two distinct tasks:
identifying age-specific language and detecting sexual content in text.</p>
        <p>For the age detection task, we defined the following parameters to categorize the content:
• Age group: This category helps to classify text based on the perceived age of the speaker.
o 13 years or under
o 14-17 years
o 18 years or over
• Sentence complexity: This parameter captures the complexity of the language used, which can vary across age groups.
o very simple
o simple
o intermediate
• Use of slang: Identifying whether slang is present helps in determining the likelihood of the text belonging to a younger or older audience.
o Yes
o No</p>
        <p>For the sexual content detection task, the following parameters were established:
• Sexual allusion: This identifies whether the text contains indirect or direct references to sexual matters.
o Yes
o No
• Presence of emojis: Emojis can often indicate a more casual or suggestive tone, influencing the interpretation of the text.
o Yes
o No
• Reference to digital content: This category captures whether the text mentions digital content, such as images or videos, which may relate to explicit or suggestive material.
o None
o Image
o Video
• Harmful behavior: This parameter focuses on detecting negative attitudes such as harassment, manipulation, coercion, or objectification within the text.
o Harassment
o Manipulation
o Coercion
o Objectification
o None
• Explicit language: This measures the level of explicit language used in the text, from “none” to “high”.
o None
o Low
o Medium
o High</p>
        <p>The next step involved merging two datasets, one for the age detection task and one for the sexual
content detection task, into a unified collection of synthetic phrases. By generating a total
of 500 samples across both tasks, we ensured a diverse representation of content. These samples
were carefully annotated based on the parameters outlined above, which enabled us to
fine-tune the model to better identify and explain age-specific language and sexual references in
text. Subsequently, an accurate explanation for each phrase in the dataset was generated (using
ChatGPT-4o).</p>
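The annotation schema above can be summarized as a mapping from each parameter to its allowed values. The sketch below is only illustrative: the parameter names and values follow the paper, while the dictionary layout and the `validate` helper are assumptions of ours.

```python
# Annotation schema for the two tasks; values follow the paper's lists.
AGE_SCHEMA = {
    "age_group": ["13 years or under", "14-17 years", "18 years or over"],
    "sentence_complexity": ["very simple", "simple", "intermediate"],
    "use_of_slang": ["Yes", "No"],
}
SEXUAL_SCHEMA = {
    "sexual_allusion": ["Yes", "No"],
    "presence_of_emojis": ["Yes", "No"],
    "reference_to_digital_content": ["None", "Image", "Video"],
    "harmful_behavior": ["Harassment", "Manipulation", "Coercion",
                         "Objectification", "None"],
    "explicit_language": ["None", "Low", "Medium", "High"],
}

def validate(annotation, schema):
    """Return True when every parameter in the schema has an allowed value."""
    return all(annotation.get(key) in allowed for key, allowed in schema.items())
```

A check like this makes the synthetic annotations machine-verifiable before they are turned into training examples.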
        <p>Moreover, the explanation of each phrase, along with the previously identified categories, was
grouped into a single paragraph. The final dataset, created in JSON format, was divided into a
training set (450 examples) and a test set (50 examples) containing:
• Instruction: LLM commands
o "Analyse the following text and indicate whether the language used belongs to a minor or an adult, explaining:"
o "Analyse the following text and indicate whether the language used contains sexual references or porn, explaining:"
• Input: dataset phrases
• Output: explanation for each phrase</p>
        <p>An example for each prompt is reported below:</p>
        <p>Age task
o "Instruction": "Analyse the following text and indicate whether the language used belongs to a minor or an adult, explaining:"
o "Input": "My dad took away my PlayStation because I got punished."
o "Output": "This sentence reflects the language of children aged 13 years or younger, as the parental reaction and focus on the PlayStation is typical of this age group. In particular, it is a sentence characterised by an intermediate complexity that does not make use of slang."</p>
        <p>Sexual task
o "Instruction": "Analyse the following text and indicate whether the language used contains sexual references or porn, explaining:"
o "Input": "I imagine you in your room right now, make a video for me."
o "Output": "The text presents references to digital content, in particular videos. The imagining of the person in their room, accompanied by the request for a video, evokes a private and sensual image, typical of an erotic context. It is possible to define the attitude in question as harassment. In addition, one can assess the language used as highly suggestive of sexual themes."</p>
        <p>Finally, fine-tuning was performed on LLaMAntino (version 3-ANITA-8B-Inst-DPO-ITA) using
the previously created dataset.</p>
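A minimal Python sketch of the resulting record format and the 450/50 split follows. The field names mirror the instruction/input/output structure described above; the split helper, the fixed seed, and the repeated placeholder record are illustrative assumptions, not the authors' actual tooling.

```python
import json
import random

def split_dataset(records, test_size=50, seed=42):
    """Shuffle the records and hold out `test_size` examples for testing."""
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    return shuffled[test_size:], shuffled[:test_size]

# One record in the instruction/input/output format of the final dataset.
record = {
    "instruction": ("Analyse the following text and indicate whether the "
                    "language used belongs to a minor or an adult, explaining:"),
    "input": "My dad took away my PlayStation because I got punished.",
    "output": ("This sentence reflects the language of children aged 13 years "
               "or younger, as the parental reaction and focus on the "
               "PlayStation is typical of this age group."),
}

dataset = [dict(record) for _ in range(500)]  # stand-in for the 500 samples
train_set, test_set = split_dataset(dataset)
serialized = json.dumps(train_set)            # the dataset is stored as JSON
```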
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Main technical details</title>
        <p>
          For the training, we chose LLaMAntino as the pre-trained model. LLaMAntino [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] was selected
because it is a model designed for the Italian language and is recognized as one of the best LLMs
available for this language. Since our dataset is entirely in Italian, it was the most natural choice
for this experiment.
        </p>
        <p>Fine-tuning of the model was performed on Kaggle using the following parameters:
• num train epochs: 10
• learning rate: 2e-4
• logging steps: 50
• optimizer: adamw 8bit
• weight decay: 0.01</p>
        <p>We obtained the following results (1 epoch corresponds to 56 steps):</p>
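As a sanity check on the reported schedule, one epoch of 56 steps over 450 training examples implies an effective batch size of 8; that batch size is an inference of ours, not a value stated in the paper.

```python
# Reported values: 450 training examples, 10 epochs, 56 steps per epoch.
train_examples = 450
num_train_epochs = 10
effective_batch_size = 8  # assumption inferred from 450 / 56 ≈ 8

# Integer division drops the final partial batch, matching the 56 steps reported.
steps_per_epoch = train_examples // effective_batch_size
total_steps = steps_per_epoch * num_train_epochs
```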
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Evaluation</title>
      <p>Model evaluation was performed simultaneously for both the age and the sexual task. The
accuracy metrics were assessed using a test set containing 50 sentences: we assigned 1 point
when the category type predicted by the model matched our ground truth and 0 points when it
did not, then aggregated all data and measured partial and total accuracy.</p>
      <sec id="sec-4-1">
        <title>4.1. Test results for age task</title>
        <p>Evaluation of the age task consisted of verifying how accurately the model recognized the
types “Age Group”, “Sentence complexity” and “Use of Slang” by comparing them with our ground
truth.</p>
        <table-wrap>
          <table>
            <thead>
              <tr><th>Category</th><th>Ground truth</th><th>Output</th><th>Accuracy</th></tr>
            </thead>
            <tbody>
              <tr><td>Age Group</td><td>25</td><td>24</td><td>96.00%</td></tr>
              <tr><td>Sentence complexity</td><td>25</td><td>18</td><td>72.00%</td></tr>
              <tr><td>Use of Slang</td><td>25</td><td>22</td><td>88.00%</td></tr>
              <tr><td>Total</td><td>75</td><td>64</td><td>85.33%</td></tr>
            </tbody>
          </table>
        </table-wrap>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Test results for sexual task</title>
        <p>Evaluation of the sexual task consisted of verifying how accurately the model recognized the
types “Sexual allusion”, “Presence of emojis”, “Reference to digital content”, “Harmful behavior”
and “Explicit language” by comparing them with our ground truth.</p>
        <table-wrap>
          <table>
            <thead>
              <tr><th>Category</th><th>Ground truth</th><th>Output</th><th>Accuracy</th></tr>
            </thead>
            <tbody>
              <tr><td>Sexual allusion</td><td>25</td><td>25</td><td>100.00%</td></tr>
              <tr><td>Presence of emojis</td><td>25</td><td>25</td><td>100.00%</td></tr>
              <tr><td>Reference to digital content</td><td>25</td><td>24</td><td>96.00%</td></tr>
              <tr><td>Harmful behavior</td><td>25</td><td>17</td><td>68.00%</td></tr>
              <tr><td>Explicit language</td><td>25</td><td>21</td><td>84.00%</td></tr>
              <tr><td>Total</td><td>125</td><td>112</td><td>89.60%</td></tr>
            </tbody>
          </table>
        </table-wrap>
      </sec>
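The 1-point/0-point scoring scheme can be sketched as follows, using the per-category match counts reported for the sexual task; the dictionary layout is illustrative, while the counts come from the results above.

```python
# Correct predictions out of 25 test sentences per category (sexual task).
matches = {
    "Sexual allusion": 25,
    "Presence of emojis": 25,
    "Reference to digital content": 24,
    "Harmful behavior": 17,
    "Explicit language": 21,
}
# Partial accuracy per category, and total accuracy over all 125 judgements.
per_category = {cat: n / 25 for cat, n in matches.items()}
total_accuracy = sum(matches.values()) / (25 * len(matches))  # 112 / 125
```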
      <sec id="sec-4-3">
        <title>4.3. Comparison among baseline and fine-tuned model</title>
        <p>To show the differences between the fine-tuned model and the baseline (LLaMAntino version
3-ANITA-8B-Inst-DPO-ITA), we executed the two following prompts (respectively for the age and
the sexual task), using two sentences from the test set as an example:</p>
        <p>Analyse the following text and indicate whether the language used belongs to a minor
or an adult, explaining:
“When will I be able to go out alone without them annoying me?”</p>
        <table-wrap>
          <table>
            <thead>
              <tr><th>Model</th><th>Output</th></tr>
            </thead>
            <tbody>
              <tr><td>Baseline</td><td>I believe that the language used in this text belongs to a minor (probably a child around 4-6 years of age) in the care of a parent or an adult, as the text expresses a concern related to personal protection, typical of an age when dependence on parents is still great and awareness of one's own safety is in formation.</td></tr>
              <tr><td>LLaMAntino fine-tuned</td><td>The sentence belongs to 14-17 year olds, who seek autonomy and want to show that they are capable without help by displaying a more autonomous tone. In particular, it is a sentence characterised by a simple complexity that does not make use of slang.</td></tr>
            </tbody>
          </table>
        </table-wrap>
        <p>Analyse the following text and indicate whether the language used contains sexual
references or porn, explaining:
“I'd like to see you without those clothes on... [emoji]”</p>
        <table-wrap>
          <table>
            <thead>
              <tr><th>Model</th><th>Output</th></tr>
            </thead>
            <tbody>
              <tr><td>Baseline</td><td>I do not consider that the language used in the text contains sexual or pornographic references. The text expresses a friendly or romantic expression, with a light and non-offensive tone, as the sentence is addressed to someone known and not to an anonymous subject. The presence of an emoticon such as [emoji] suggests a playful and non-sexually explicit tone. In general, the text is more akin to a polite expression and not erotic or offensive content.</td></tr>
              <tr><td>LLaMAntino fine-tuned</td><td>The phrase emphasises the desire to see the person without clothes, implying an explicit sexual intention. The tone is made more provocative by the emoji [emoji]. It is possible to define the attitude in question as objectification. In addition, one can assess the language used as highly racy on issues of a sexual nature.</td></tr>
            </tbody>
          </table>
        </table-wrap>
      </sec>
      <sec id="sec-4-4">
        <title>4.4. Test the fine-tuned model on a simulated chat</title>
        <p>To test our model in a typical use case, we simulated a chat between two people, one adult
and the other a minor, and applied the prompts for age recognition and sexual language
detection in sequence.</p>
        <p>Analyse the following text and indicate whether the language used belongs to a minor
or an adult, explaining:
“User 1: You look too sexy in that photo, but I want to see you without anything. User 2: I
don't know... mum says you don't do these things”</p>
        <table-wrap>
          <table>
            <thead>
              <tr><th>Model</th><th>Output</th></tr>
            </thead>
            <tbody>
              <tr><td>LLaMAntino fine-tuned</td><td>The User 1 sentence reflects sexually explicit language typical of an adult context, while the User 2 reflection introduces an ethical concern characteristic of 14-17 year old adolescents. In particular, it is a sentence characterised by an intermediate complexity that does not make use of slang.</td></tr>
            </tbody>
          </table>
        </table-wrap>
        <p>Analyse the following text and indicate whether the language used contains sexual
references or porn, explaining:
“User 1: You look too sexy in that photo, but I want to see you without anything. User 2: I
don't know... mum says you don't do these things”</p>
        <table-wrap>
          <table>
            <thead>
              <tr><th>Model</th><th>Output</th></tr>
            </thead>
            <tbody>
              <tr><td>LLaMAntino fine-tuned</td><td>The text contains references to digital content, in particular photos. The request for nudity emphasises a specific interest in explicitly sexual content. The language used is directly related to a context of an erotic nature. It is possible to define the attitude in question as coercion. In addition, one can assess the language used as highly racy on topics of a sexual nature.</td></tr>
            </tbody>
          </table>
        </table-wrap>
      </sec>
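The sequential application of the two prompts to a chat transcript can be sketched as below. The `build_prompts` helper is a hypothetical illustration of ours; the actual call to the fine-tuned LLaMAntino model is omitted.

```python
AGE_PROMPT = ("Analyse the following text and indicate whether the language "
              "used belongs to a minor or an adult, explaining:")
SEX_PROMPT = ("Analyse the following text and indicate whether the language "
              "used contains sexual references or porn, explaining:")

def build_prompts(chat_turns):
    """Flatten the chat into one text and pair it with both instructions."""
    text = " ".join(f"User {i + 1}: {msg}" for i, msg in enumerate(chat_turns))
    return f'{AGE_PROMPT}\n"{text}"', f'{SEX_PROMPT}\n"{text}"'

chat = ["You look too sexy in that photo, but I want to see you without anything.",
        "I don't know... mum says you don't do these things"]
age_prompt, sex_prompt = build_prompts(chat)
# Each prompt would then be sent to the fine-tuned model in sequence.
```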
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion and limitations</title>
      <p>As the previously provided examples show, the baseline model generates responses that are
more generic, imprecise, and less aligned with our specific goals. In contrast, the fine-tuned
model produces more technical and detailed responses, indicating that the fine-tuning process
has enhanced its ability to perform the proposed task effectively. Looking ahead, there is still
room for improvement. The absence of real-world data has limited the model's overall accuracy,
as it struggles to capture the nuances of diverse age groups and variations in sexual language.
Collaboration with field experts, such as psychologists and sociologists, could help refine the
selection of dataset categories and provide more precise advice and explanations tailored to
the task.</p>
      <p>Additionally, experimenting with more advanced models, such as LLaMAntino with 13B
parameters, could further enhance performance. However, it is important to consider the
trade-off between effectiveness and efficiency, as overly complex models may improve accuracy at
the expense of practical usability.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>This work was partially supported by the following projects: SERICS - “Security and Rights In
the CyberSpace - SERICS” (PE00000014) under the MUR National Recovery and Resilience Plan
funded by the European Union – NextGenerationEU.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Livingstone</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Smith</surname>
            ,
            <given-names>P. K.</given-names>
          </string-name>
          (
          <year>2014</year>
          ).
          <article-title>Annual research review: Harms experienced by child users of online and mobile technologies: the nature, prevalence and management of sexual and aggressive risks in the digital age</article-title>
          .
          <source>Journal of child psychology and psychiatry, and allied disciplines</source>
          ,
          <volume>55</volume>
          (
          <issue>6</issue>
          ),
          <fpage>635</fpage>
          -
          <lpage>654</lpage>
          . https://doi.org/10.1111/jcpp.12197
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Nguyen</surname>
            ,
            <given-names>T. T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wilson</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Dalins</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2023</year>
          ).
          <article-title>Fine-tuning llama 2 large language models for detecting online sexual predatory chats and abusive texts</article-title>
          .
          <source>arXiv preprint arXiv:2308.14683</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Cook</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zilka</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>DeSandre</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Giles</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Weller</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Maskell</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          (
          <year>2022</year>
          ).
          <article-title>Can We Automate the Analysis of Online Child Sexual Exploitation Discourse?</article-title>
          .
          <source>arXiv preprint arXiv:2209.12320</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Lin</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Luo</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ma</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          (
          <year>2023</year>
          ).
          <article-title>Beneath the surface: Unveiling harmful memes with multimodal reasoning distilled from large language models</article-title>
          .
          <source>arXiv preprint arXiv:2312.05434</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Hu</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yin</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          (
          <year>2024</year>
          ).
          <article-title>Multimodal chain-of-thought reasoning via chatgpt to protect children from age-inappropriate apps</article-title>
          .
          <source>arXiv preprint arXiv:2407.06309</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Barletta</surname>
            ,
            <given-names>V. S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Calvano</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Sciacovelli</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2024</year>
          ).
          <article-title>Cyber social security in multi-domain operations</article-title>
          .
          <source>In Proceedings of the 2024 IEEE International Workshop on Technologies for Defense and Security (TechDefense)</source>
          (pp.
          <fpage>41</fpage>
          -
          <lpage>46</lpage>
          ). IEEE. https://doi.org/10.1109/TechDefense63521.2024.10863352
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Barletta</surname>
            ,
            <given-names>V. S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Caivano</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Catalano</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>de Gemmis</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Impedovo</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          (
          <year>2024</year>
          ).
          <article-title>Cyber social security education</article-title>
          .
          <source>In Extended Reality: International Conference, XR Salento 2024, Lecce, Italy, September 4-7, 2024, Proceedings, Part IV</source>
          (pp.
          <fpage>240</fpage>
          -
          <lpage>248</lpage>
          ). Springer. https://doi.org/10.1007/978-3-031-71713-0_16
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8] “LLaMAntino on HuggingFace”. Link: https://huggingface.co/swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>