<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
<article-title>Responsible and Reliable AI: Activities of the CINI-AIIS Lab at University of Naples Federico II</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Flora Amato</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Giovanni Maria De Filippis</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Antonio Galli</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Michela Gravina</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lidia Marassi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Stefano Marrone</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Elio Masciari</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vincenzo Moscato</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Antonio M. Rinaldi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Cristiano Russo</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Carlo Sansone</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Cristian Tommasino</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Electrical Engineering and Information Technology (DIETI), University of Naples Federico II</institution>
          ,
          <addr-line>Via Claudio 21, 80125, Naples</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Over the course of the last decade, AI researchers have made groundbreaking progress in hard and longstanding problems related to machine learning, computer vision, speech recognition, and autonomous systems. Despite the success of AI, its adoption so far is mostly in low-risk applications, while its uptake in medium/high-risk applications, which might have a deeper transformative impact on our society, such as healthcare, public administration, safety-critical industries, etc., is still low compared to expectations. The reasons for this lag are profound and range from technological limitations to difficulties associated with conformity assessment against policies and standards. This paper introduces and discusses the perspectives and initiatives undertaken in this regard by the CINI AI-IS (the Italian National Consortium for Informatics, Artificial Intelligence and Intelligent Systems) Lab at the University of Naples Federico II.</p>
      </abstract>
      <kwd-group>
<kwd>Artificial Intelligence</kwd>
        <kwd>Ethics</kwd>
        <kwd>Human-Centred</kwd>
        <kwd>Trustworthy</kwd>
        <kwd>Deep Learning</kwd>
        <kwd>Machine Learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
<p>As artificial intelligence (AI) becomes increasingly integrated into critical sectors such as healthcare, finance, and transportation, the need for a more reliable and responsible deployment of AI technologies is becoming central. This widespread application underscores the necessity for regulations and certifications to manage the profound impact that AI systems are expected to have, and are already having, on society and individual lives, and to define the operational and developmental framework for these technologies. The current landscape of regulations governing AI is characterized by a diverse and evolving framework that varies significantly across different regions. In the European Union, the AI Act is a pioneering legislative effort that aims to set a comprehensive regulatory framework for AI, focusing on risk assessment and mitigation. It classifies AI systems according to their risk levels and imposes stricter requirements on high-risk applications, particularly in critical areas such as biometric identification and healthcare. The United States, while lacking a unified federal framework, sees regulatory initiatives that are more sector-specific and decentralized, as suggested by the AI Bill of Rights1. Agencies like the Federal Trade Commission (FTC) and the Food and Drug Administration (FDA) have issued guidelines that address AI’s use in consumer protection and medical devices, respectively [1]. In Asia, countries like China and Singapore have also made significant strides in establishing AI guidelines, with the former working on a series of ethics guidelines and governance principles [2], focusing on controlling AI’s social impacts and promoting shared norms. Singapore has been a front-runner with its Model AI Governance Framework, which provides detailed and actionable guidance to private-sector companies on responsible AI deployment [3].</p>
      <p>Alongside governmental regulations, industry standards play a crucial role in shaping the AI regulatory landscape. Organizations such as the Institute of Electrical and Electronics Engineers (IEEE), the International Organization for Standardization (ISO) and the European Committee for Standardization/European Committee for Electrotechnical Standardization (CEN/CENELEC) have developed standards that provide frameworks for AI ethics, performance, and safety. Moreover, certifications are emerging as important tools for ensuring compliance with ethical standards and regulatory requirements, mainly with the aim of reassuring consumers, partners, and regulators of an AI system’s adherence to accepted norms and practices. These processes are supported by governments across Europe, with different initiatives that are actively leveraging AI to foster innovation and address societal challenges, implementing a variety of policies and funding mechanisms to support AI research, development, and integration into key sectors. Italy, in particular, is advancing its AI initiatives through the National Recovery and Resilience Plan (PNRR). This strategic plan focuses on enhancing Italy’s digital infrastructure and capabilities in AI, aiming to improve public sector efficiency and drive economic growth. Investments are directed towards integrating AI in public administration, healthcare, and environmental sustainability, showcasing a robust commitment to digital transformation in line with EU priorities.</p>
      <p>Ital-IA 2024: 4th National Conference on Artificial Intelligence, organized by CINI, May 29–30, 2024, Naples, Italy. * Corresponding author: stefano.marrone@unina.it (S. Marrone). † These authors contributed equally. ORCID: 0000-0001-7003-4781 (F. Amato); 0009-0002-8395-0724 (G. M. De Filippis); 0000-0001-9911-1517 (A. Galli); 0000-0002-8732-1733 (C. Russo); 0000-0002-8176-6950 (C. Sansone); 0000-0001-9763-8745 (C. Tommasino). © 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
1https://www.whitehouse.gov/ostp/ai-bill-of-rights/</p>
      <p>In this paper, we will thus introduce and discuss the
perspectives and initiatives undertaken on responsible
and reliable AI by the CINI AI-IS (the Italian National
Consortium for Informatics, Artificial Intelligence and
Intelligent Systems) Lab at the University of Naples
Federico II, specifically focusing on the activities involving
the members of the PICUS Lab2 as part of the AI-IS Node.</p>
<p>To this aim, Section 2 will describe the lab’s activities concerning AI certification and regulation, from both a technical and an ethical perspective, while Section 3 will introduce the FAIR project, an initiative aiming to guide frontier research on advanced AI methodologies and techniques.</p>
    </sec>
    <sec id="sec-2">
      <title>2. The role of certification and regulations in AI</title>
<p>As highlighted in Section 1, the role of industrial standards, as well as of independent certification procedures, is pivotal in shaping the landscape of a resilient and reliable AI deployment. These frameworks not only ensure that AI systems operate within ethical and technical guidelines but also enhance trust and reliability in AI applications across various sectors.</p>
      <sec id="sec-2-1">
        <title>2.1. The current AI standardization landscape</title>
        <p>The current landscape of AI standardization is a dynamic and complex field characterized by efforts from various international bodies to develop and refine standards that address the rapid advancements in AI technology [4, 5]. Key organizations like IEEE, ISO, and CEN/CENELEC are at the forefront, each contributing to a global framework that aims to ensure AI systems are developed and deployed ethically and safely:</p>
        <p>• IEEE: The Institute of Electrical and Electronics Engineers is a prominent entity known for setting industry standards in various technology fields. IEEE has also been working on initiatives around ethical considerations and safety in AI technologies. Among all, the IEEE P7000 series [6] stands out in this regard, featuring standards such as P7001 (which enhances transparency in autonomous systems), P7003 (which addresses concerns related to algorithmic bias) and P7006 (which focuses on the management of personal data by AI agents);</p>
        <p>• ISO: The International Organization for Standardization, in partnership with the International Electrotechnical Commission (IEC), actively develops standards that address a wide range of issues concerning AI, such as terminology, data quality, lifecycle processes, robustness, and bias. These efforts aim to ensure the safety, reliability, and interoperability of AI systems. Notable standards include ISO/IEC 23053:2022 (which focuses on frameworks for machine learning systems), ISO/IEC TR 24027:2021 (which focuses on bias in AI systems and AI-aided decision-making) and ISO/IEC TR 24028:2020 (which details the trustworthiness of AI, covering aspects such as robustness, resilience, accuracy, and reproducibility);</p>
        <p>• CEN/CENELEC: the European Committee for Standardization and the European Committee for Electrotechnical Standardization harmonize standards across EU member states, enhancing AI technology compliance with EU norms like the AI Act. Currently, CEN/CENELEC has not published specific standards that are solely dedicated to AI. Instead, their work often integrates AI considerations into broader technological and industrial standards. They work closely with international organizations like ISO to ensure that European standards align with global efforts, particularly in areas such as data quality, security, and ethical use of technology.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Certifying AI-based systems</title>
        <p>Under the AI Act proposed by the European Union, national AI authorities will have significant responsibilities. Their role will include monitoring and ensuring compliance with the Act’s regulations within their jurisdictions. These authorities will assess AI systems for adherence to stipulated standards, particularly for high-risk applications, ensuring that these systems do not compromise safety or public interests. Additionally, they will provide guidance to organizations on implementing AI technologies in line with the AI Act’s requirements, enhancing the overall governance of AI across the EU. To support this, in the European Union several key certification authorities are responsible for ensuring compliance of industry applications with AI standards. Notably, the European Commission itself plays a pivotal role by setting regulatory frameworks such as the AI Act. National bodies like Germany’s TÜV and France’s AFNOR also contribute significantly. In Italy, ACCREDIA is the central body that certifies AI according to national and EU standards, ensuring that AI systems are safe, reliable, and adhere to the required ethical guidelines. These authorities collectively uphold the integrity and trustworthiness of AI applications across Europe.</p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. The Naples node’s activities</title>
        <p>Over the years, the PICUS Lab2 has been active in the field of responsible and reliable AI, with applications to different domains including cybersecurity [7, 8], generative and foundation models [9, 10], law and compliance [11, 12], education [13, 14] and society [15, 16]. The lab has also been active in promoting trustworthy and human-centred AI: it is worth mentioning the chairing of workshop series co-located with important international conferences (e.g., HCAI-EP3, HCAI4U4) and the founding and implementation of a Human-Centered AI Master’s program (HCAIM5), co-financed by the Connecting Europe Facility of the European Union and developed by a consortium consisting of four European universities, three Centres of Excellence (CoE), and three SMEs, offering an integrated ethical, technical, and practical curriculum for understanding the construction of AI models, their realization at an industrial scale, and the evaluation of their long-term impact on society.</p>
        <p>Beyond scholarly contributions, the lab is directly involved in a variety of actions that underscore its commitment to this vital area. These initiatives include collaborative participation with regulatory boards as well as certification agencies that aim to support reliable and responsible AI.</p>
        <p>The lab is actively engaged with the activities of CEN/CENELEC. Specifically, one of the lab members (L.M.) has been appointed as one of the CINI AI-IS national experts for Uninfo6, the national standardization body for Information Technologies and their applications in Italy, representing and promoting the national strategy in international standardization bodies such as CEN and ISO, as well as UNINFO in the European Telecommunications Standards Institute (ETSI). The activities are part of the working group JTC21, which focuses on the standardization of the ethical and social implications of AI. Three standards are currently under development in this group: the AI Trustworthiness Framework, which will be used for third-party conformity assessments for the AI Act; Standards on Ethics, defining processes and competencies for AI ethicists, being advanced towards certification in both France and Italy; and the fundamental rights impact assessment.</p>
        <p>Concerning the activities on AI certification procedures, the lab is part of a project involving Accredia and the CINI AI-IS on the study and definition of procedures for the conformity adherence of AI-based systems to international AI technical standards. This is crucial for interoperability, safety, and ethical alignment across different industries and applications, providing a common language and expectations for developers, users, and regulators. Given the lab’s expertise, the activities are focused on adherence to the standard ISO/IEC TR 24027:2021, focusing on bias in AI systems and AI-aided decision-making, considering, as a case study, the healthcare domain. To this aim, we first conducted a thorough analysis of the standard, to better frame the concept of bias, identifying its sources and potential mitigation actions from a technical perspective according to the standard itself. Subsequently, we examined the classical software lifecycle of an AI system in the medical domain to determine the optimal insertion points for compliance checks. Lastly, we proposed a procedure to check adherence to these standards, designed to assist developers in making their products compliant, while also enabling the certification body to quantitatively verify adherence to the standards.</p>
        <p>2https://picuslab.dieti.unina.it/</p>
      </sec>
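<p>As a purely illustrative sketch of what such a quantitative adherence check could look like (the metric, the 0.1 tolerance, and all function names below are our own assumptions, not part of ISO/IEC TR 24027:2021 or of the procedure under definition), consider flagging a model whose true-positive-rate gap across demographic groups exceeds a chosen tolerance:</p>

```python
# Hypothetical sketch of a quantitative bias check: compare true-positive
# rates across demographic groups and fail when the gap is too large.

def true_positive_rate(y_true, y_pred, group, g):
    """TPR restricted to the positive samples of demographic group g."""
    pairs = [(t, p) for t, p, gr in zip(y_true, y_pred, group) if gr == g and t == 1]
    if not pairs:
        return 0.0
    return sum(1 for _, p in pairs if p == 1) / len(pairs)

def equal_opportunity_gap(y_true, y_pred, group):
    """Largest TPR difference between any two groups."""
    rates = [true_positive_rate(y_true, y_pred, group, g) for g in set(group)]
    return max(rates) - min(rates)

def check_bias_compliance(y_true, y_pred, group, tolerance=0.1):
    """Return (gap, passed); passed is False when the gap exceeds tolerance."""
    gap = equal_opportunity_gap(y_true, y_pred, group)
    return gap, gap <= tolerance
```

<p>A real conformity procedure would combine several such metrics with the documentation and lifecycle checks discussed above; this fragment only illustrates the quantitative flavour of the verification step.</p>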
    </sec>
    <sec id="sec-3">
      <title>3. Resilient AI</title>
      <p>The University of Naples Federico II (UNINA) leads Spoke 3, named Resilient AI, of the Future Artificial Intelligence Research (FAIR) project, funded by the Italian PNRR. Resilient AI is encompassed within the broader frameworks of responsible and reliable AI. In the context of responsible AI, which involves considering the ethical implications of AI technologies, resilient AI plays a crucial role in addressing technical risks and vulnerabilities that may compromise ethical considerations. By building AI systems that can withstand challenges and adapt to changing conditions, developers enhance the overall reliability and trustworthiness of AI technologies within an ethical framework. Similarly, within the scope of reliable AI, which focuses on building systems that consistently produce accurate, trustworthy results, resilient AI complements this objective by addressing technical challenges that may impact system reliability. These challenges include adversarial attacks, data perturbations, or system failures. By incorporating resilience into AI design, developers can enhance the robustness and dependability of AI technologies, thereby improving their overall reliability.</p>
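<p>A minimal, hypothetical illustration of the resilience perspective described above (a toy one-dimensional classifier, not a FAIR deliverable; all names and values are illustrative) is measuring how accuracy degrades when the inputs are perturbed:</p>

```python
import random

# Toy robustness probe: compare a classifier's accuracy on clean inputs
# with its accuracy when each input is randomly perturbed.

def classify(x):
    # Trivial 1-D threshold classifier standing in for a trained model.
    return 1 if x >= 0.5 else 0

def accuracy(xs, ys, perturb=0.0, seed=0):
    """Accuracy after adding uniform noise in [-perturb, perturb] to inputs."""
    rng = random.Random(seed)
    correct = 0
    for x, y in zip(xs, ys):
        x_noisy = x + rng.uniform(-perturb, perturb)
        correct += classify(x_noisy) == y
    return correct / len(xs)

xs = [0.10, 0.20, 0.45, 0.55, 0.80, 0.90]
ys = [0, 0, 0, 1, 1, 1]
clean_acc = accuracy(xs, ys)               # no perturbation
noisy_acc = accuracy(xs, ys, perturb=0.30)  # perturbed inputs
```

<p>The gap between the clean and perturbed accuracies is one simple proxy for the robustness that the Spoke’s methodologies aim to guarantee by design rather than measure after the fact.</p>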
      <p>Spoke 3 addresses the study of AI foundational methodologies that are aimed at processing data in-the-wild, making the performance of AI resilient and robust in challenging contexts. We study how learning algorithms can cope with the problem of training with real-world data, and we devise novel theories, methods, and automated instruments to address the current limitations of AI-intensive software system development, while also paying attention to the ethical and legal issues that involve AI applications in-the-wild. The research activities to be carried out include: i) the definition of appropriate data augmentation techniques, when data are incomplete or not adequately representative, while analyzing, monitoring, and improving the fairness of the machine learning algorithms; ii) the definition of algorithms that are both resilient and robust with respect to possible external attacks (also deriving from training with "malicious" data); iii) the investigation of the implications related to the design, validation &amp; verification, evolution and operation of the software that implements machine or deep learning algorithms, when they have to work in-the-wild; iv) the ethical and legal issues connected with the use of real-world data.</p>
      <p>3https://hcai-ep.sigcseire.acm.org/2024/ 4https://sites.google.com/view/hcai4u2023 5https://humancentered-ai.eu/ 6https://www.uninfo.it/</p>
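<p>Activity i) can be illustrated with a deliberately simple sketch: naive random oversampling to rebalance a dataset whose classes are not adequately representative. This is a hypothetical toy example under our own naming, not one of the Spoke 3 techniques, which target far richer augmentation strategies:</p>

```python
import random

# Naive random oversampling: duplicate minority-class samples until every
# class has as many samples as the largest one.

def oversample(samples, labels, seed=0):
    rng = random.Random(seed)
    by_class = {}
    for s, l in zip(samples, labels):
        by_class.setdefault(l, []).append(s)
    target = max(len(v) for v in by_class.values())
    out_samples, out_labels = [], []
    for label, group in by_class.items():
        # Draw extra samples (with replacement) to reach the target size.
        extra = [rng.choice(group) for _ in range(target - len(group))]
        for s in group + extra:
            out_samples.append(s)
            out_labels.append(label)
    return out_samples, out_labels
```

<p>Duplicating samples is only the crudest baseline; it changes class frequencies without adding information, which is precisely why the fairness of the resulting models still has to be analyzed and monitored, as the activity description stresses.</p>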
      <p>Responsible AI endeavors to ensure that AI systems operate ethically, fairly, and transparently, with due consideration given to their societal impact [17]. In the pursuit of Responsible AI, one crucial aspect lies in the meticulous curation and semantic enrichment of datasets, a process integral to the activity of dataset recognition and semantification. This activity delves into the creation and annotation of extensive datasets. The task is propelled by a multifaceted approach, beginning with a meticulous literature review aimed at identifying pertinent datasets across diverse domains. The distinction between general-purpose and domain-specific datasets lays the groundwork, with renowned repositories such as WordNet [18] and ImageNet [19] serving as pivotal reference points. Leveraging these gold-standard datasets not only facilitates knowledge representation but also augments the semantic integration and labelling processes in several domains, such as biology, autonomous driving, and speech processing. The guiding principles underlying this endeavor are encapsulated within the FAIR project. Drawing inspiration from these principles, the adoption of semantic artifacts and semantics-based techniques emerges as a cornerstone strategy [20, 21]. Semantification, the subsequent phase, is a pivotal process aimed at imbuing data with contextual meaning through the incorporation of semantic artifacts such as ontologies and knowledge graphs. The pursuit of semantification yields manifold benefits, fostering a standardized framework for data representation, facilitating the harmonization of heterogeneous datasets, and furnishing a flexible structure for entity linkage across disparate datasets. Notably, initiatives such as the development of ImageNet++ underscore the commitment to enriching existing repositories, thereby fortifying the foundation for subsequent AI endeavors. For specific domains, the semantic integration of standard datasets, such as cBioPortal [22], UniProt [23], and GenBank [24] in biology, could potentially allow the discovery of novel insights and make implicit knowledge explicit. Through strategic alignments with domain ontologies and meticulous mapping endeavors, the semantic labeling of datasets is poised to usher in a new era of data-centric AI. Looking ahead, the trajectory of dataset recognition and semantification converges with the emergent paradigm of Responsible AI, wherein data assumes a pivotal role. By prioritizing data-centric methodologies, characterized by outlier detection, error correction, and consensus establishment, this endeavor aims to foster AI systems that are not only technically robust but also socially responsible and ethically sound.</p>
      <p>The research activities of Spoke 3 also aim at addressing AI resiliency in adversarial scenarios from different points of view, towards the design of approaches and methodologies intended to i) detect and recover from attacks, ii) increase the robustness of federated learning, iii) enforce privacy, and iv) enforce fairness. Moreover, in the knowledge representation area, we will develop inference-proof countermeasures against attacks on knowledge confidentiality, based on various kinds of background knowledge and meta-knowledge.</p>
      <p>Scenarios involving multi-task learning with missing and/or noisy labels are also included, with the aim of defining effective learning procedures. In particular, in the case of missing labels, the research activity will concern joint training techniques exploiting the concept of label masking or similar approaches, while in the case of noisy labels, the goal is the design of novel learning procedures optimized for soft labels, in order to take into account the uncertainty of the noisy annotations.</p>
      <p>The need to handle missing or noisy data is also present in multimodal scenarios, where multiple data modalities should be merged to obtain a complete understanding of the phenomenon to be analyzed. Indeed, in several domains, such as healthcare, it is not easy to obtain a well-annotated dataset with paired acquisitions, consisting of samples that include all the modalities. As a consequence, strategies to deal with incomplete data should be introduced, making the model robust against noisy or missing modalities. To this aim, in our research activities, we will focus on multi-input multi-output neural networks, able to adapt to the heterogeneous characteristics of the input. Moreover, in the context of multimodal learning, we will also evaluate different fusion strategies aiming to improve the integration of multiple sources.</p>
      <p>Dealing with Resilient AI, Spoke 3 will foster a transformation in various aspects of our society by enabling systems and technologies to adapt, recover from, and face different challenges. Indeed, Resilient AI has the potential to drive innovation, improve resilience, and enhance societal well-being across various domains.</p>
    </sec>
    <sec id="sec-ack">
      <title>Acknowledgments</title>
      <p>This work was partially supported by PNRR MUR Project PE0000013-FAIR.</p>
    </sec>
    <sec id="sec-ref">
      <title>References</title>
      <p>[1] A. Giovannini, A. S. Pasha, Artificial intelligence: A legal landscape, Laws of Medicine: Core Legal Aspects for the Healthcare Professional (2022) 387–404.</p>
      <p>[2] W. Wu, T. Huang, K. Gong, Ethical principles and governance technology development of ai in china, Engineering 6 (2020) 302–309.</p>
      <p>[3] A. A. Guenduez, T. Mettler, Strategically constructed narratives on artificial intelligence: What stories are told in governmental artificial intelligence policies?, Government Information Quarterly 40 (2023) 101719.</p>
      <p>[4] P. Cihon, M. J. Kleinaltenkamp, J. Schuett, S. D. Baum, Ai certification: Advancing ethical practice by reducing information asymmetries, IEEE Transactions on Technology and Society 2 (2021) 200–209.</p>
      <p>[5] M. Blösser, A. Weihrauch, A consumer perspective of ai certification–the current certification landscape, consumer approval and directions for future research, European Journal of Marketing 58 (2024) 441–470.</p>
      <p>[6] S. Spiekermann, Ieee p7000—the first global standard process for addressing ethical concerns in system design, in: Proceedings, volume 1, MDPI, 2017, p. 159.</p>
      <p>[7] M. Gravina, A. Galli, G. De Micco, S. Marrone, G. Fiameni, C. Sansone, Fead-d: Facial expression analysis in deepfake detection, in: International Conference on Image Analysis and Processing, Springer, 2023, pp. 283–294.</p>
      <p>[8] L. Marassi, S. Marrone, What would happen if hackers attacked the railways? consideration of the need for ethical codes in the railway transport systems, in: Applications of Artificial Intelligence and Neural Systems to Data Science, Springer, 2023, pp. 289–296.</p>
      <p>[9] N. Patwardhan, S. Shetye, L. Marassi, M. Zuccarini, T. Maiti, T. Singh, Designing human-centric foundation models, reconstruction 9 (2023) 10.</p>
      <p>[10] L. Marassi, Assessing user perceptions of bias in generative ai models: Promoting social awareness for trustworthy ai, in: Proceedings of the 2023 Conference on Human Centered Artificial Intelligence: Education and Practice, 2023, pp. 46–46.</p>
      <p>[11] C. Todorova, G. Sharkov, H. Aldewereld, S. Leijnen, A. Dehghani, S. Marrone, C. Sansone, M. Lynch, J. Pugh, T. Singh, et al., The european ai tango: Balancing regulation innovation and competitiveness, in: Proceedings of the 2023 Conference on Human Centered Artificial Intelligence: Education and Practice, 2023, pp. 2–8.</p>
      <p>[12] L. Marassi, N. Patwardhan, F. Gargiulo, Can justice be a measurable value for ai? proposed evaluation of the relationship between nlp models and principles of justice (2023).</p>
      <p>[13] F. Flammini, S. Marrone, Distance education boosting interdisciplinarity and internationalization: an experience report from “ethics, law and privacy in data and analytics” at supsi, in: Proceedings of the 2023 Conference on Human Centered Artificial Intelligence: Education and Practice, 2023, pp. 54–54.</p>
      <p>[14] B. Feeney, M. Zuccarini, T. Singh, H. Aldewereld, S. Marrone, K. Quille, Developing a human centred ai masters: The good, the bad and the ugly, in: Proceedings of the 27th ACM Conference on Innovation and Technology in Computer Science Education Vol. 2, 2022, pp. 660–661.</p>
      <p>[15] L. Marassi, A. E. Pascarella, G. Giacco, M. Zuccarini, S. Marrone, C. Sansone, D. Amitrano, M. Rigiroli, Artificial intelligence and voluntary carbon marketplaces: An analysis of the ethical and legal aspects, in: Proceedings of the 2023 Conference on Human Centered Artificial Intelligence: Education and Practice, 2023, pp. 53–53.</p>
      <p>[16] G. Orrù, A. Galli, V. Gattulli, M. Gravina, M. Micheletto, S. Marrone, W. Nocerino, A. Procaccino, G. Terrone, D. Curtotti, et al., Development of technologies for the detection of (cyber) bullying actions: The bullybuster project, Information 14 (2023) 430.</p>
      <p>[17] V. Dignum, Responsible artificial intelligence: how to develop and use AI in a responsible way, volume 1, Springer, 2019.</p>
      <p>[18] G. A. Miller, Wordnet: a lexical database for english, Communications of the ACM 38 (1995) 39–41.</p>
      <p>[19] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, L. Fei-Fei, Imagenet: A large-scale hierarchical image database, in: 2009 IEEE conference on computer vision and pattern recognition, Ieee, 2009, pp. 248–255.</p>
      <p>[20] A. M. Rinaldi, C. Russo, et al., A novel framework to represent documents using a semantically-grounded graph model, in: KDIR, 2018, pp. 201–209.</p>
      <p>[21] K. Madani, C. Russo, A. M. Rinaldi, Merging large ontologies using bigdata graphdb, in: 2019 IEEE International Conference on Big Data (Big Data), IEEE, 2019, pp. 2383–2392.</p>
      <p>[22] J. Gao, B. A. Aksoy, U. Dogrusoz, G. Dresdner, B. Gross, S. O. Sumer, Y. Sun, A. Jacobsen, R. Sinha, E. Larsson, et al., Integrative analysis of complex cancer genomics and clinical profiles using the cbioportal, Science signaling 6 (2013) pl1–pl1.</p>
      <p>[23] U. Consortium, Uniprot: a hub for protein information, Nucleic acids research 43 (2015) D204–D212.</p>
      <p>[24] D. A. Benson, M. Cavanaugh, K. Clark, I. Karsch-Mizrachi, D. J. Lipman, J. Ostell, E. W. Sayers, Genbank, Nucleic acids research 41 (2012) D36–D42.</p>
    </sec>
  </body>
  <back>
    <ref-list />
  </back>
</article>