<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Ital-IA</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Hype: Toward a Concrete Adoption of the Fair and Responsible Use of AI</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Lelio Campanile</string-name>
          <email>lelio.campanile@unicampania.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Roberta De Fazio</string-name>
          <email>roberta.defazio@unicampania.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Michele Di Giovanni</string-name>
          <email>michele.digiovanni@unicampania.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Fiammetta Marulli</string-name>
          <email>fiammetta.marulli@unicampania.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Mathematics and Physics, Università degli Studi della Campania “L. Vanvitelli”</institution>
          ,
          <addr-line>viale Lincoln 5, Caserta, 81100</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>4</volume>
      <abstract>
        <p>Artificial Intelligence (AI) is a fast-changing technology that is having a profound impact on our society, from education to industry. Its applications cover a wide range of areas, such as medicine, the military, engineering, and research. The emergence of AI and Generative AI has significant potential to transform society, but it also raises concerns about transparency, privacy, ownership, fair use, reliability, and ethics. Generative AI adds complexity to the existing problems of AI because of its ability to create machine-generated data that is barely distinguishable from human-generated data, bringing to the forefront the issue of responsible and fair use of AI. The security, safety, and privacy implications are enormous, and the risks associated with inappropriate use of these technologies are real. Although some governments, such as the European Union and the United States, have begun to address the problem with recommendations and proposed regulations, this is probably not enough. Regulatory compliance should be seen as the starting point of a continuous process of improving the ethical procedures and privacy risk assessment of AI systems. The need for a baseline to manage the process of creating an AI system from an ethics and privacy perspective is becoming progressively more important. In this study, we discuss the ethical implications of these advances and propose a conceptual framework for the responsible, fair, and safe use of AI.</p>
      </abstract>
      <kwd-group>
        <kwd>Artificial Intelligence</kwd>
        <kwd>Generative AI</kwd>
        <kwd>Ethical AI</kwd>
        <kwd>Large Language Models</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Artificial Intelligence (AI) is a rapidly advancing field of science and technology that has the potential to revolutionize various sectors of industry and society. With its ability to process vast amounts of data, generate insights, and support decision-making, AI has emerged as an important part of many organizations’ processes. However, concerns about the impact of AI on society, particularly from an ethical perspective, have increased as its use has grown. From self-driving cars to virtual assistants, the applications of AI are endless as the quality and performance of AI techniques and methods continue to improve.</p>
      <p>Generative AI is a subset of AI that uses Machine Learning (ML) algorithms to generate new content based on existing data. It makes it possible to create content that appears new and original but is the result of generating statistics based on training data sets. It broadens the applications of AI and increases the dangers it poses.</p>
      <p>Generative AI raises new ethical challenges and a whole new set of emerging issues because of the difficulty in separating human-generated content from machine-generated content.</p>
      <p>A fair use of AI becomes crucial in any field of application, first and foremost in sensitive fields such as medicine, the military, and engineering, where the human decision-making component is of primary importance, but also in research and education, where fair use of AI is critical to the informed growth of students with critical thinking and to quality research. With the rapid developments in machine learning and generative AI models, newly born and more powerful Large Language Model (LLM) based tools such as ChatGPT, Claude, Mistral, and others continue to receive attention, also from an ethical point of view.</p>
      <p>There are both exciting opportunities and significant ethical challenges associated with the use of generative AI. The technology has the potential to revolutionize various sectors of society. However, it also raises concerns about job displacement, transparency, privacy, ownership, inequality, and reliability. To ensure that the benefits of generative AI are maximized while its risks are minimized, the development of responsible and ethical frameworks for its use will be critical.</p>
      <p>In this paper, we explore the key ethical issues, promises, and perils of AI use, and propose a conceptual framework that could contribute to the responsible, reliable, fair, and safe use of AI.</p>
      <p>The rest of this paper is structured as follows: Section 2 gives a brief overview of AI and generative AI, Section 3 focuses on the ethical implications and issues of AI, Section 4 presents the conceptual framework, and Section 5 presents the conclusion and future research directions.</p>
      <p>Organized by CINI, May 29-30, 2024, Naples, Italy. ∗Corresponding author. †These authors contributed equally. (R. D. Fazio) © 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License. CEUR Workshop Proceedings (ceur-ws.org).</p>
    </sec>
    <sec id="sec-1-2">
      <title>2. AI and Generative AI background</title>
      <p>In the last few years, Artificial Intelligence Generated Content (AIGC)[1] has gained outstanding popularity, not only in the computer science research community but mainly in terms of interest in the various content generation products built by large tech companies. AIGC typically refers to contents that can be automatically generated by adopting advanced Generative AI (GAI) techniques, as opposed to being created by human authors.</p>
      <p>GAI-based systems can automate the creation of a large quantity of content in a very short time. The most representative exemplars are provided by the OpenAI tools, namely ChatGPT[2] and DALL-E[3]: these tools can generate, respectively, but not limited to, textual documents and pictures, exploiting large knowledge bases lying under the interaction systems, typically provided as conversational agents. The extraordinary popularity of these tools can reasonably be found exactly in their being friendly and ready-to-use tools for non-expert people: by adopting a very familiar interface, provided in the shape of an instant messaging system, properly called conversational agents or, shortly, chatbots, common users are enabled to test and effectively exploit the potential of the generative technologies. ChatGPT is a Large Language Model (LLM)[4] based tool, developed by OpenAI for building conversational AI systems, which can efficiently understand and respond to human language inputs in a meaningful way[5]. In addition, DALL-E is another state-of-the-art GAI model, also developed by OpenAI, which is capable of creating unique and high-quality images from textual descriptions in a few minutes, such as “a pink rabbit going to Mars boarding its flying basket”, in a photorealistic style. Anyway, GAI is not free from research challenges, concerning, for example, the appropriate set of commonly used evaluation metrics for assessing fidelity, faithfulness, and quality of artificially generated data, as discussed in [6]. A further analysis concerning GAI methodologies and research aspects, along with a comprehensive classification of input and output formats used in GAI systems, is provided in [7].</p>
      <p>While GAI represents a significantly challenging issue for researchers involved in understanding and improving the representation of the knowledge that lies behind it, GAI-based systems are also carriers of non-trivial implications, such as the ones represented by social impact and related to ethical and legal aspects. Observing the phenomenon of the outstanding popularity of these kinds of systems and tools among common users brings back to mind the effects of Web 2.0, with the introduction of User Generated Contents (UGC)[8], where people were enabled to write everything almost everywhere. A deleterious phenomenon deriving from the exceeding democracy of the web is still represented by the unconditioned spreading of fake news[9], as discussed in the studies proposed in [<xref ref-type="bibr" rid="ref11 ref25 ref9">10</xref>], [11], [12]. Fake news could be automatically generated by GAI systems, with features that make them challenging to distinguish from real news when automatic classification systems are employed. With the very recent advances of GAI, generating fake content is within everyone’s reach. Finally, novel cyber-security issues are also introduced by the malicious exploitation of generative AI[13]. Foremost among them are the adversarial attacks, performed mostly by re-shaping and re-arranging well-known malicious behaviours and activities under a novel, unknown guise, to cheat defence and intrusion detection systems[14]. Zero-day attacks, along with data and model poisoning attacks, are very frequently supported by GAI-based systems[15]. In [<xref ref-type="bibr" rid="ref11 ref25 ref9">10</xref>] and [16], poisoning attacks targeting machine learning models, performed by the exploitation of adversarial and GAI techniques, are discussed. In the work of [17], a case study on frauds against energy distribution and dispatching systems is discussed, highlighting the potential drawbacks and threats deriving from a malicious-aim-driven exploitation of Generative Adversarial Networks (GANs)[18], several years before the current explosion of popularity of GAI systems.</p>
    </sec>
    <sec id="sec-1-3">
      <title>3. Ethics aspects: promises and perils</title>
      <p>The emerging new possibilities associated with AI and GAI raise various ethical challenges that should be addressed in a comprehensive manner. Researchers, physicists, and engineers should not remain at the legal minimum compliance in terms of facing ethical issues in the field of AI; they should study, understand, and act in the best possible way to mitigate or eliminate those issues.</p>
      <p>A significant concern in AI is bias. From data collection to model training, bias is a potential risk at different stages of the AI process. The potential risk is to perpetuate the existing bias in the training data in the AI model. This risk becomes very high in GAI model training [19].</p>
      <p>In GAI, the amount of data used to train the model is enormous. Often this data has been collected from the Internet using different and heterogeneous data sources. Unlike a traditional AI model, which is used for prediction or classification purposes, the generative model is used to create new content. Bias in training data raises, in this case, a specific ethical issue, perpetuating a potential social bias in the newly generated content.</p>
      <p>In addition, researchers working in this area face another ethical dilemma: if the data contain biases that reflect society, is it correct to work to mitigate these biases? If so, how?</p>
      <p>Certainly, it must be done with the utmost care, because biased AI systems could potentially exacerbate existing societal inequalities. They could perpetuate prejudice or reinforce stereotypes. They could also produce disparate outcomes for groups based on factors like race, gender, or socioeconomic status, leading to further inequality and social unrest. There is a real risk of perpetuating harmful stereotypes and possibly even distorting beliefs [20].</p>
      <p>Strictly related to the bias issue, especially with generative content creation, there is the problem of misleading information and fake news generation.</p>
      <p>The ability of LLMs to create information that is not present in their original data, known as hallucination of LLMs or, more technically, as an emerging feature [21], introduces the problem of creating misleading text, which could easily become fake news.</p>
      <p>Moreover, recent developments in GAI allow not only text, but also images (figure 1), video, and audio to be created, enabling non-technical people to effectively use these techniques through simple applications.</p>
      <p>It is clear that illegal use of these technologies has led to attempts at fraud and extortion, and can also lead to major legal and social problems. One example is the illegal use of these techniques to create images and videos that substitute the face or other physical characteristics of one person for those of another, because of their potential to create believable and deceptive content that can be used to spread misinformation or damage the reputation of individuals. The most relevant privacy issues include:</p>
      <p>• Privacy Violation: fake content can be used to manipulate existing videos or images without the consent of the people involved, possibly violating their privacy.</p>
      <p>• Identity Theft: by spreading false information, misleading content, or malicious messages, fake content can be used to impersonate individuals and cause significant damage to their reputations and privacy, as well as to organize financial fraud.</p>
      <p>• Revenge Porn: fake content can be used to create unreal videos or images that show people in compromising situations, damaging their privacy and reputation. In the most serious cases, money is solicited for extortion purposes.</p>
      <p>• Misinformation or Disinformation: fake content can have a significant impact on public opinion, trust, and decision-making by spreading false information or propaganda. This misinformation can also have a serious impact on society: it can lead to social unrest, political instability, and other negative consequences.</p>
      <p>It is important to emphasize that privacy issues arise first in AI processes. In fact, there are significant privacy issues at the data collection stage, because this stage is where sensitive information is collected and stored, making it vulnerable to potential security breaches and unauthorized access [22], [23].</p>
      <p>Finally, it is also interesting for this discussion to mention the problem of copyright over the content on which AI systems, especially GAI systems, are trained. Often the source of this data is not really known. GAI systems can use, process, and generate content without explicit consent, potentially violating the privacy of individuals and organizations.</p>
      <p>The considerations presented here on the ethical risks associated with AI, and the perils that arise from them, depend in great part on unaccountable or unfair use of AI, both by the creators of AI systems and by the end users of such technologies.</p>
      <p>In the field of text generation, there are many use cases where LLMs can help and improve the regular activities of students and researchers, if they are used in a fair way.</p>
      <p>GAI systems, such as ChatGPT, could be leveraged by students to get ideas or insights on specific topics. If you know well the idea that you want to write, then the GAI could help you to write it without grammar mistakes, especially if you are writing in a non-native language. This could greatly benefit non-native speakers, even in academia, in a sort of democratization of the dissemination of scientific thought, without having to resort to expensive language revision services.</p>
      <p>On the other hand, an unfair and unethical use of this technology by students and researchers raises a very important ethical and legal problem related to authorship. The need to know whether a piece of content is human-generated or machine-generated is becoming more relevant and critical.</p>
    </sec>
    <sec id="sec-2">
      <title>4. A conceptual framework</title>
      <sec id="sec-2-1">
        <p>Faced with these ethical and practical problems, the governments of various countries around the world have not stood still.</p>
        <p>The United States has responded with the AI Bill of
Right [24], which is not a regulation but only a white
paper of recommendation from the White House Ofice
of Science and Technology Policy. It outlines the main
principles to be followed to pursue ethical issues in AI.
It is a guideline for designing and deploying AI systems
that respect human rights, enhance fairness, and protect
personal privacy.</p>
        <p>The European Commission has gone further with the
EU AI Act, a “Proposal for a Regulation of the
European Parliament and of the Council laying down
harmonized rules on artificial intelligence and amending certain
Union legislative acts” [25]. It is a fully-fledged
legislative proposal that aims to address the risks associated
with artificial intelligence systems.</p>
        <p>The AI Act intends to ensure that AI systems are
trustworthy, reliable, and beneficial to individuals and society.</p>
        <p>Furthermore, once again the European Commission
enacted the General Data Protection Regulation (GDPR)
[26], which although not closely related to artificial
intelligence, protects the privacy rights of European citizens,
with particular emphasis on the automatic gathering and
processing of personal data.</p>
        <p>These documents provide a solid foundation for the
development of a conceptual framework to assist
researchers and companies in developing and deploying AI
systems that are not only compliant but also address and
attempt to solve AI ethical issues.</p>
        <p>The proposed framework is built on four pillars of practices for AI systems. These practices should be organized with clear guidelines to avoid any misunderstanding or abuse of AI techniques.</p>
        <p>Finally, a regular and cyclical risk assessment process
specific to AI systems is required to promptly identify,
evaluate, and prioritize potential risks associated with
the development of AI systems.</p>
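        <p>The identify, evaluate, and prioritize cycle described above can be sketched as a minimal risk register. The following Python sketch is illustrative only: the risk names, the 1-5 likelihood and impact scales, and the multiplicative scoring scheme are assumptions for the example, not part of the proposed framework.</p>
        <preformat>
```python
# Minimal sketch of one iteration of a cyclical AI risk assessment:
# identify risks, evaluate them (assumed scheme: likelihood x impact),
# and prioritize them for treatment. All entries are hypothetical.
from dataclasses import dataclass


@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain), assumed scale
    impact: int      # 1 (negligible) .. 5 (severe), assumed scale

    @property
    def score(self) -> int:
        # Evaluate: simple likelihood-times-impact scoring.
        return self.likelihood * self.impact


def prioritize(risks):
    """Prioritize: order risks from highest to lowest score."""
    return sorted(risks, key=lambda r: r.score, reverse=True)


# Identify: a hypothetical register for a GAI system.
register = [
    Risk("training-data bias", likelihood=4, impact=4),
    Risk("personal data leakage", likelihood=3, impact=5),
    Risk("generated misinformation", likelihood=4, impact=3),
]

for risk in prioritize(register):
    print(f"{risk.name}: {risk.score}")
```
        </preformat>
        <p>In a real deployment this evaluation would be repeated at regular intervals, as the text prescribes, so that newly identified risks re-enter the prioritized list.</p>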
      </sec>
    </sec>
    <sec id="sec-3">
      <title>5. Conclusion and Future Works</title>
      <p>Generative AI is all about creating artificial data that looks like the real thing. This super-realistic data can be a game-changer in many fields, from video games to medicine and finance, and even the arts. The output of GAI is sometimes referred to as “fake data”, to emphasize that the contents were generated by an automatic process performed by a machine and not by a human being. GAI makes it possible to generate fake but realistic images, to write new text, to compose music, and even to build chatbots that feel like chatting with real people. Besides the research efforts to improve the quality of AI production, several ethical, legal, and security issues need to be addressed.</p>
      <p>It is apparent that these issues need to be addressed systematically and beyond mere regulatory compliance. The development of a conceptual framework to address them is a good starting point.</p>
      <p>Future work will include improving the framework
and exploring ways to make it more practical, including
measures of the performance of ethical and responsible
use of AI and GAI.</p>
      <p>Moreover, we plan an in-depth look at the topic of distinguishing human-generated texts from texts generated by a GAI, exploring existing techniques and working toward new ones. Finally, we will continue research in the area of XAI (whose exploration began in [27] and [28]), which also extends to GAI, in order to improve the transparency of AI systems.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[15] … 477-511.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[16] L. Verde, F. Marulli, S. Marrone, Exploring the im… model reliability, Procedia Computer Science 192 (2021) 2624-2632.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[17] F. Marulli, C. A. Visaggio, Adversarial deep learning …, Proceedings of the 2019 Summer Simulation Conference, 2019, pp. 1-11.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[18] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, …, Communications of the ACM 63 (2020) 139-144.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[19] K. Wach, C. D. Duong, J. Ejdys, R. Ka…, …gpt (2023). URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85183620669&amp;doi=10.15678%2fEBER.2023.110201&amp;partnerID=40&amp;md5=deab98413c32b948ba57308e7e53fa6a. doi:10.15678/EBER.2023.110201.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[20] M. Zhou, V. Abhishek, T. Derdenger, J. Kim, K. Srini…, arXiv:2403.02726 (2024).</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[21] L. Huang, W. Yu, W. Ma, W. Zhong, Z. Feng, …tions (2023). arXiv:2311.05232.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[22] Y. Zhang, M. Wu, G. Y. Tian, G. Zhang, J. Lu, Ethics …, … 222 (2021) 106994.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[23] B. Liu, M. Ding, S. Shaham, W. Rahayu, F. Farokhi, …, … (CSUR) 54 (2021) 1-36.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[24] White House Office of Science and Technology Policy, AI Bill of Rights, 2022. URL: https://www.…, accessed on 01 April, 2024.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[25] Council of European Union, EU AI Act, 2024. URL: …ST-5662-2024-INIT/en/pdf, accessed on 01 April, 2024.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[26] Council of European Union, Regulation (EU) 2016/679 of the European Parliament and of the Council - General Data Protection Regulation, 2016. URL: …TXT/?uri=CELEX%3A02016R0679-20160504, accessed on 01 April, 2024.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[27] L. P. Di Bonito, L. Campanile, E. Napolitano, M. Ia…, …ing Research and Design (2023). doi:https://doi.org/10.1016/j.cherd.2023.06.006.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[28] L. Campanile, L. Di Bonito, M. Iacono, F. Di Na…, …ELLING AND SIMULATION 2023 (2023) 575-581.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>