<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Beyond the Hype: Toward a Concrete Adoption of the Fair and Responsible Use of AI</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Lelio</forename><surname>Campanile</surname></persName>
							<email>lelio.campanile@unicampania.it</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Mathematics and Physics</orgName>
								<orgName type="institution">Università degli Studi della Campania &quot;L. Vanvitelli&quot;</orgName>
								<address>
									<addrLine>viale Lincoln 5</addrLine>
									<postCode>81100</postCode>
									<settlement>Caserta</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Roberta</forename><surname>De Fazio</surname></persName>
							<email>roberta.defazio@unicampania.it</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Mathematics and Physics</orgName>
								<orgName type="institution">Università degli Studi della Campania &quot;L. Vanvitelli&quot;</orgName>
								<address>
									<addrLine>viale Lincoln 5</addrLine>
									<postCode>81100</postCode>
									<settlement>Caserta</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Michele</forename><forename type="middle">Di</forename><surname>Giovanni</surname></persName>
							<email>michele.digiovanni@unicampania.it</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Mathematics and Physics</orgName>
								<orgName type="institution">Università degli Studi della Campania &quot;L. Vanvitelli&quot;</orgName>
								<address>
									<addrLine>viale Lincoln 5</addrLine>
									<postCode>81100</postCode>
									<settlement>Caserta</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Fiammetta</forename><surname>Marulli</surname></persName>
							<email>fiammetta.marulli@unicampania.it</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Mathematics and Physics</orgName>
								<orgName type="institution">Università degli Studi della Campania &quot;L. Vanvitelli&quot;</orgName>
								<address>
									<addrLine>viale Lincoln 5</addrLine>
									<postCode>81100</postCode>
									<settlement>Caserta</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Beyond the Hype: Toward a Concrete Adoption of the Fair and Responsible Use of AI</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">CE343E2D373AE55A4F51AA1DB41FDFD4</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T16:56+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Artificial Intelligence</term>
					<term>Generative AI</term>
					<term>Ethical AI</term>
					<term>Large Language Models</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Artificial Intelligence (AI) is a fast-changing technology that is having a profound impact on our society, from education to industry. Its applications cover a wide range of areas, such as medicine, the military, engineering and research. The emergence of AI and Generative AI has significant potential to transform society, but it also raises concerns about transparency, privacy, ownership, fair use, reliability, and ethical considerations. Generative AI adds complexity to the existing problems of AI because of its ability to create machine-generated data that is barely distinguishable from human-generated data, bringing to the forefront the issue of responsible and fair use of AI. The security, safety and privacy implications are enormous, and the risks associated with inappropriate use of these technologies are real. Although some governments, such as the European Union and the United States, have begun to address the problem with recommendations and proposed regulations, this is probably not enough. Regulatory compliance should be seen as a starting point in a continuous process of improving the ethical procedures and privacy risk assessment of AI systems. The need for a baseline to manage the process of creating an AI system, also from an ethics and privacy perspective, becomes progressively more important. In this study, we discuss the ethical implications of these advances and propose a conceptual framework for the responsible, fair, and safe use of AI.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Artificial Intelligence (AI) is a rapidly advancing field of science and technology that has the potential to revolutionize various sectors of industry and society. With its ability to process vast amounts of data, generate insights, and support decision-making, AI has become an important part of many organizations' processes. However, concerns about the impact of AI on society, particularly from an ethical perspective, have increased as its use has grown. From self-driving cars to virtual assistants, the applications of AI are endless, as the quality and performance of AI techniques and methods continue to improve.</p><p>The advent of generative AI expands the potential applications of AI and increases the dangers it poses. Generative AI is a subset of AI that uses Machine Learning (ML) algorithms to generate new content based on existing data. It makes it possible to create content that appears new and original, but is in fact the result of statistical generation based on training data sets.</p><p>Generative AI raises new ethical challenges and a whole new set of emerging issues because of the difficulty of separating human-generated content from machine-generated content.</p><p>A fair use of AI thus becomes crucial in any field of application: first and foremost in sensitive fields such as medicine, the military, and engineering, where the human decision-making component is of primary importance, but also in research and education, where fair use of AI is critical to the informed growth of students with critical thinking and to quality research. 
With the rapid developments in machine learning and generative AI models, newly released and more powerful Large Language Models (LLMs) such as ChatGPT, Claude, Mistral and others continue to receive attention focused on the associated risks, particularly from legal and ethical points of view.</p><p>There are both exciting opportunities and significant ethical challenges associated with the use of generative AI. The technology has the potential to revolutionize various sectors of society. However, it also raises concerns about job displacement, transparency, privacy, ownership, inequality, and reliability. To ensure that the benefits of generative AI are maximized while its risks are minimized, the development of responsible and ethical frameworks for its use will be critical.</p><p>In this paper, we explore the key ethical issues, promises, and perils of AI use, and propose a conceptual framework that could contribute to the responsible, reliable, fair, and safe use of AI.</p><p>The rest of this paper is structured as follows: Section 2 gives a brief overview of AI and generative AI, Section 3 focuses on the ethical implications and issues of AI, Section 4 presents the conceptual framework. Finally, Section 5 presents the conclusion and future research directions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">AI and Generative AI background</head><p>In the last few years, Artificial Intelligence Generated Content (AIGC) <ref type="bibr" target="#b0">[1]</ref> has gained enormous popularity through tools such as ChatGPT <ref type="bibr" target="#b1">[2]</ref> and DALL-E <ref type="bibr" target="#b2">[3]</ref>: these tools can generate, respectively but not exclusively, textual documents and pictures, exploiting the large knowledge bases lying beneath the interaction systems, typically provided as conversational agents. The extraordinary popularity of these tools can reasonably be traced to a key aspect: they are friendly, ready-to-use tools for non-expert people. By adopting a very familiar interface, in the shape of an instant messaging system (properly called a conversational agent, or chatbot for short), common users are enabled to test and effectively exploit the potential of generative technologies. ChatGPT is a Large Language Model (LLM) <ref type="bibr" target="#b3">[4]</ref>-based tool, developed by OpenAI for building conversational AI systems, which can efficiently understand and respond to human language inputs in a meaningful way <ref type="bibr" target="#b4">[5]</ref>. DALL-E is another state-of-the-art GAI model, also developed by OpenAI, which is capable of creating unique, high-quality images from textual descriptions in a few minutes, such as "a pink rabbit going to Mars boarding its flying basket" in a photorealistic style. Nevertheless, GAI is not free from research challenges, concerning, for example, the appropriate set of commonly used evaluation metrics for assessing the fidelity, faithfulness and quality of artificially generated data, as discussed in <ref type="bibr" target="#b5">[6]</ref>. 
A further analysis of GAI methodologies and research aspects, along with a comprehensive classification of the input and output formats used in GAI systems, is provided in <ref type="bibr" target="#b6">[7]</ref>.</p><p>While GAI represents a significantly challenging issue for researchers involved in understanding and improving the representation of the knowledge behind it, GAI-based systems also carry non-trivial implications for society, ethics, and the law. The outstanding popularity of these kinds of systems and tools among common users recalls the effects of Web 2.0 and the introduction of User Generated Content (UGC) <ref type="bibr" target="#b7">[8]</ref>, when people became able to write almost anything almost anywhere. A deleterious phenomenon deriving from this excess of web democracy remains the unconditioned spreading of fake news <ref type="bibr" target="#b8">[9]</ref>, as discussed in the studies proposed in <ref type="bibr" target="#b9">[10]</ref>, <ref type="bibr" target="#b10">[11]</ref>, <ref type="bibr" target="#b11">[12]</ref>. Fake news can be automatically generated by GAI systems, with features that make it hard to distinguish from real news even when automatic classification systems are employed. With the very recent advances in GAI, generating fake content is within everyone's reach. Finally, novel cyber-security issues are also introduced by the malicious exploitation of generative AI <ref type="bibr" target="#b12">[13]</ref>. Foremost among them are adversarial attacks, performed mostly by re-shaping and re-arranging well-known malicious behaviours and activities under a novel, unknown guise to cheat defence and intrusion detection systems <ref type="bibr" target="#b13">[14]</ref>. 
Zero-day attacks, along with data and model poisoning attacks, are very frequently supported by GAI-based systems <ref type="bibr" target="#b14">[15]</ref>. In <ref type="bibr" target="#b9">[10]</ref> and <ref type="bibr" target="#b15">[16]</ref>, poisoning attacks targeting machine learning models, performed by exploiting adversarial and generative AI, are discussed. In <ref type="bibr" target="#b16">[17]</ref>, a case study of fraud in energy distribution and dispatching systems is discussed, highlighting the potential drawbacks and threats deriving from a maliciously driven exploitation of Generative Adversarial Networks (GANs) <ref type="bibr" target="#b17">[18]</ref>, several years before the current explosion in popularity of GAI systems.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Ethics aspects: promises and perils</head><p>The new possibilities emerging from AI and GAI raise various ethical challenges that should be addressed in a comprehensive manner. Researchers, physicists, and engineers should not stop at minimum legal compliance when facing ethical issues in the field of AI; they should study, understand, and act in the best possible way to mitigate or eliminate those issues. A significant concern in AI is bias. From data collection to model training, bias is a potential risk at different stages of the AI process: biases present in the training data may be perpetuated in the AI model. This risk becomes very high in GAI model training <ref type="bibr" target="#b18">[19]</ref>. In GAI, the amount of data used to train the model is enormous, and these data have often been collected from the Internet using different, heterogeneous sources. In addition, researchers working in this area face another ethical dilemma: if the data contain biases that reflect society, is it correct to work to mitigate these biases? If so, how?</p><p>Certainly, it must be done with the utmost care, because biased AI systems could potentially exacerbate existing societal inequalities. They could perpetuate prejudice or reinforce stereotypes. They could also produce disparate outcomes for groups based on factors like race, gender, or socioeconomic status, leading to further inequality and social unrest. 
There is a real risk of perpetuating harmful stereotypes and possibly even distorting beliefs <ref type="bibr" target="#b19">[20]</ref>.</p><p>Strictly related to the bias issue, especially in generative content creation, is the problem of misleading information and fake news generation.</p><p>The ability of LLMs to create information that is not present in their original data, known as LLM hallucination or, more technically, as an emergent feature <ref type="bibr" target="#b20">[21]</ref>, introduces the problem of creating misleading text, which could easily become fake news.</p><p>Moreover, recent developments in GAI allow not only text but also images (figure <ref type="figure" target="#fig_0">1</ref>), video, and audio to be created, enabling non-technical people to use these techniques effectively through simple applications.</p><p>Illegal use of these technologies has already led to attempts at fraud and extortion, and can also lead to major legal and social problems: these techniques can be used to create images and videos that substitute the face or other physical characteristics of one person for those of another, producing believable and deceptive content that can spread misinformation or damage the reputation of individuals. The most relevant privacy issues include:</p><p>• Privacy Violation: fake content can be used to manipulate existing videos or images without the consent of the people involved, possibly violating their privacy. • Identity Theft: by spreading false information, misleading content, or malicious messages, fake content can be used to impersonate individuals and cause significant damage to their reputations and privacy, as well as to organize financial fraud. • Revenge Porn: fake content can be used to create fabricated videos or images that show people in compromising situations, damaging their privacy and reputation. 
In the most serious cases, money is solicited for extortion purposes. • Misinformation or Disinformation: fake content can have a significant impact on public opinion, trust, and decision-making by spreading false information or propaganda. This misinformation can also have a serious impact on society: it can lead to social unrest, political instability, and other negative consequences.</p><p>It is important to emphasize that privacy issues arise early in AI processes. In fact, there are significant privacy issues already at the data collection stage, because this is where sensitive information is collected and stored, making it vulnerable to potential security breaches and unauthorized access <ref type="bibr" target="#b21">[22]</ref>, <ref type="bibr" target="#b22">[23]</ref>. Finally, it is also worth mentioning the problem of copyright over the content on which AI systems, especially GAI systems, are trained. Often the source of these data is not really known.</p><p>GAI systems can use, process, and generate content without explicit consent, potentially violating the privacy of individuals and organizations.</p><p>The ethical risks associated with AI discussed here, and the perils that arise from them, depend in large part on unaccountable or unfair use of AI, both by the creators of AI systems and by the end users of such technologies.</p><p>In the field of text generation, there are many use cases where LLMs can help and improve the regular activities of students and researchers, provided they are used fairly.</p><p>GAI systems such as ChatGPT could be leveraged by students to get ideas or insights on specific topics. 
If the idea to be expressed is already clear, GAI can help write it without grammatical mistakes, especially for those writing in a non-native language.</p><p>This could greatly benefit non-native speakers, even in academia, in a sort of democratization of the dissemination of scientific thought, without the need to resort to expensive language revision services.</p><p>On the other hand, unfair and unethical use of this technology by students and researchers raises a very important ethical and legal problem related to authorship.</p><p>The need to know whether a piece of content is human-generated or machine-generated is becoming more relevant and critical.</p></div>
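The distinction between human-generated and machine-generated text discussed above can be made concrete with a toy example. The sketch below uses a naive lexical-diversity heuristic; this is purely illustrative and is not a real detector (production approaches rely on model perplexity or trained classifiers), and the function names and threshold are arbitrary assumptions for this example.

```python
# Toy illustration only: a naive lexical-diversity heuristic for flagging
# possibly machine-generated text. NOT a real detector; real systems use
# model perplexity or trained classifiers. The 0.5 threshold is an
# arbitrary assumption for this sketch.

def type_token_ratio(text: str) -> float:
    """Fraction of distinct words in the text (lexical diversity)."""
    words = text.lower().split()
    if not words:
        return 0.0
    return len(set(words)) / len(words)

def looks_generated(text: str, threshold: float = 0.5) -> bool:
    # Heuristic assumption: highly repetitive text is flagged as suspicious.
    return type_token_ratio(text) < threshold

repetitive = "the model said the model said the model said the answer"
varied = "each word in this short sentence is entirely distinct"
print(looks_generated(repetitive), looks_generated(varied))  # True False
```

Even this toy shows why the problem is hard: fluent machine-generated text is not repetitive at all, so surface statistics alone are insufficient, motivating the research directions the paper mentions.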
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">A conceptual framework</head><p>Faced with these ethical and practical problems, the governments of various countries around the world have not stood still.</p><p>The United States has responded with the AI Bill of Rights <ref type="bibr" target="#b23">[24]</ref>, which is not a regulation but a white paper of recommendations from the White House Office of Science and Technology Policy. It outlines the main principles to be followed to address ethical issues in AI, and serves as a guideline for designing and deploying AI systems that respect human rights, enhance fairness, and protect personal privacy.</p><p>The European Commission has gone further with the EU AI Act, a "Proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence and amending certain Union legislative acts" <ref type="bibr" target="#b24">[25]</ref>. It is a fully-fledged legislative proposal that aims to address the risks associated with artificial intelligence systems.</p><p>The AI Act intends to ensure that AI systems are trustworthy, reliable, and beneficial to individuals and society.</p><p>Furthermore, the European Union has also enacted the General Data Protection Regulation (GDPR) <ref type="bibr" target="#b25">[26]</ref>, which, although not closely related to artificial intelligence, protects the privacy rights of European citizens, with particular emphasis on the automatic gathering and processing of personal data.</p><p>These documents provide a solid foundation for the development of a conceptual framework to assist researchers and companies in developing and deploying AI systems that are not only compliant but also address and attempt to solve AI ethical issues.</p><p>The four pillars on which the proposed framework is built are:</p><p>• Explainable Artificial Intelligence (XAI) • Use of tools when possible • Audit and organization for ethical compliance • 
Continuous risk assessment</p><p>Figure <ref type="figure" target="#fig_1">2</ref> depicts a possible workflow for the application of this framework.</p><p>The trustworthiness and transparency of an AI system are important characteristics for the responsible use of AI, because they increase the sense of security in using the system and confidence in it. XAI is a cornerstone for achieving these aims: the user should be able to understand why the AI system arrives at the results it does and why certain actions are taken. XAI supports transparency, which in turn increases user confidence.</p><p>To be effective, the application of the framework should be continuous in time and cyclical. The adoption, where possible, of various tools and techniques to review the fair use of AI systems will be essential. These should include automatic systems to check the origin of training data, or tools that can help assess whether a text is human-generated or machine-generated. Tools that protect against adversarial attacks and data poisoning attacks are also needed to keep the system fair, ethical, and secure.</p><p>In this field, the importance of research is paramount: even though some steps have been taken in the right direction, new developments are moving fast, and tools and techniques must constantly improve.</p><p>The next phase involves regular audits and organizational practices to encourage the ethical and responsible use and development of AI systems.</p><p>This could include internal reviews of development processes, ongoing training for operators, and regular audits to assess the ethical implications of AI systems. 
These practices should be organized with clear guidelines to avoid any misunderstanding or abuse of AI techniques.</p><p>Finally, a regular and cyclical risk assessment process specific to AI systems is required to promptly identify, evaluate, and prioritize potential risks associated with the development of AI systems.</p></div>
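The cyclical application of the four pillars described above can be sketched in code. The following is a minimal illustrative sketch only: every class, method, and finding below is hypothetical, invented for this example, and does not come from any real library or from the paper itself.

```python
# Illustrative sketch only: the four-pillar framework expressed as one pass
# of a cyclical assessment loop. All names and example findings below are
# hypothetical assumptions, not a real implementation.
from dataclasses import dataclass, field

@dataclass
class Finding:
    pillar: str
    description: str
    severity: int  # 1 (low) .. 5 (high)

@dataclass
class AssessmentCycle:
    findings: list = field(default_factory=list)

    def check_explainability(self) -> None:
        # Pillar 1 (XAI): can the system justify its outputs to the user?
        self.findings.append(Finding("XAI", "no explanation interface", 3))

    def run_tools(self) -> None:
        # Pillar 2: automated checks, e.g. training-data provenance and
        # generated-text detection.
        self.findings.append(Finding("tools", "data provenance unverified", 4))

    def audit(self) -> None:
        # Pillar 3: organizational audit and operator training
        # (no findings in this toy run).
        pass

    def assess_risk(self) -> list:
        # Pillar 4: continuous risk assessment; re-prioritize open findings
        # so the next cycle tackles the highest severity first.
        self.findings.sort(key=lambda f: f.severity, reverse=True)
        return self.findings

cycle = AssessmentCycle()
for step in (cycle.check_explainability, cycle.run_tools, cycle.audit):
    step()
ranked = cycle.assess_risk()
print(ranked[0].pillar)  # highest-severity finding comes first
```

The point of the sketch is the loop structure itself: each pillar contributes findings, and the risk-assessment step feeds the prioritized list back into the next cycle, matching the continuous, cyclical application the framework requires.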
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion and Future Works</head><p>Generative AI is all about creating artificial data that looks like the real thing. This super-realistic data can be a game-changer in many fields, from video games to medicine, finance, and the arts. The output of GAI is sometimes referred to as "fake data", to emphasize that the contents were generated by an automatic process performed by a machine and not by a human being. GAI makes it possible to generate fake but realistic images, write new text, compose music, and even build chatbots that feel like chatting with real people. Besides the research efforts to improve the quality of AI production, several ethical, legal and security issues need to be addressed.</p><p>It is apparent that these issues need to be addressed systematically and beyond mere regulatory compliance. The development of a conceptual framework to address them should be a good starting point.</p><p>Future work will include improving the framework and exploring ways to make it more practical, including measures of the performance of ethical and responsible use of AI and GAI.</p><p>Moreover, we plan an in-depth look at the topic of distinguishing human-generated texts from texts generated by a GAI, exploring existing techniques and developing new ones. Finally, we will continue research in the area of XAI (whose exploration began in <ref type="bibr" target="#b26">[27]</ref> and <ref type="bibr" target="#b27">[28]</ref>), which also extends to GAI, in order to improve the transparency of AI systems.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: OpenAI Dall-E-2 photorealistic image</figDesc><graphic coords="3,109.63,84.19,162.68,108.45" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Workflow for the application of the conceptual framework for responsible and ethical use and development of AI Systems</figDesc><graphic coords="4,322.96,84.19,162.69,294.78" type="bitmap" /></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">A comprehensive survey of ai-generated content (aigc): A history of generative ai from gan to chatgpt</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Cao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Yan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Dai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">S</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Sun</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2303.04226</idno>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<author>
			<persName><surname>Openai</surname></persName>
		</author>
		<ptr target="https://chat.openai.com" />
		<title level="m">Conversation with chatgpt</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<author>
			<persName><surname>Openai</surname></persName>
		</author>
		<ptr target="https://openai.com/dall-e-2" />
		<title level="m">Generative AI model for image creation</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note>Dall-e 2</note>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">A survey on evaluation of large language models</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Yi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Transactions on Intelligent Systems and Technology</title>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Chatgpt: Fundamentals, applications and social impacts</title>
		<author>
			<persName><forename type="first">M</forename><surname>Abdullah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Madain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Jararweh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2022 Ninth International Conference on Social Networks Analysis, Management and Security (SNAMS), Ieee</title>
				<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="1" to="8" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Exploring the faithfulness of synthetic data by generative models</title>
		<author>
			<persName><forename type="first">F</forename><surname>Marulli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Paganini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Lancellotti</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2023 International Conference on Machine Learning and Applications (ICMLA)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="2214" to="2221" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">The power of generative ai: A review of requirements, models, input-output formats, evaluation metrics, and challenges</title>
		<author>
			<persName><forename type="first">A</forename><surname>Bandi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">V S R</forename><surname>Adapa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">E V P K</forename><surname>Kuchi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Future Internet</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="page">260</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<author>
			<persName><forename type="first">B</forename><surname>Omar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Dequan</surname></persName>
		</author>
		<title level="m">Watch, share or create: The influence of personality traits and user motivation on tiktok mobile video usage</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Sensitivity of machine learning approaches to fake and untrusted data in healthcare domain</title>
		<author>
			<persName><forename type="first">F</forename><surname>Marulli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Marrone</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Verde</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Sensor and Actuator Networks</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page">21</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Exploring data and model poisoning attacks to deep learning-based nlp systems</title>
		<author>
			<persName><forename type="first">F</forename><surname>Marulli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Verde</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Campanile</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Procedia Computer Science</title>
		<imprint>
			<biblScope unit="volume">192</biblScope>
			<biblScope unit="page" from="3570" to="3579" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">A federated consensus-based model for enhancing fake news and misleading information debunking</title>
		<author>
			<persName><forename type="first">F</forename><surname>Marulli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Verde</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Marrore</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Campanile</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Intelligent Decision Technologies: Proceedings of the 14th KES-IDT 2022 Conference</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="587" to="596" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Vulnerabilities assessment of deep learning-based fake news checker under poisoning attacks</title>
		<author>
			<persName><forename type="first">L</forename><surname>Campanile</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Cantiello</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Iacono</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Marulli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Mastroianni</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computational Data and Social Networks</title>
		<imprint>
			<biblScope unit="page">385</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Towards the use of generative adversarial neural networks to attack online resources</title>
		<author>
			<persName><forename type="first">L</forename><surname>Campanile</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Iacono</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Martinelli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Marulli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Mastroianni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Mercaldo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Santone</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Web, Artificial Intelligence and Network Applications: Proceedings of the Workshops of the 34th International Conference on Advanced Information Networking and Applications (WAINA-2020)</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="890" to="901" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Towards resilient artificial intelligence: Survey and research issues</title>
		<author>
			<persName><forename type="first">O</forename><surname>Eigner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Eresheim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Kieseberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">D</forename><surname>Klausner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pirker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Priebe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Tjoa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Marulli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Mercaldo</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2021 IEEE International Conference on Cyber Security and Resilience (CSR)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="536" to="542" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">A comparative study of adversarial attacks to malware detectors based on deep learning</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">A</forename><surname>Visaggio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Marulli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Laudanna</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>La Zazzera</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Pirozzi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Malware Analysis Using Artificial Intelligence and Deep Learning</title>
				<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="477" to="511" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Exploring the impact of data poisoning attacks on machine learning model reliability</title>
		<author>
			<persName><forename type="first">L</forename><surname>Verde</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Marulli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Marrone</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Procedia Computer Science</title>
		<imprint>
			<biblScope unit="volume">192</biblScope>
			<biblScope unit="page" from="2624" to="2632" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Adversarial deep learning for energy management in buildings</title>
		<author>
			<persName><forename type="first">F</forename><surname>Marulli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">A</forename><surname>Visaggio</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2019 Summer Simulation Conference</title>
				<meeting>the 2019 Summer Simulation Conference</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="1" to="11" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Generative adversarial networks</title>
		<author>
			<persName><forename type="first">I</forename><surname>Goodfellow</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Pouget-Abadie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Mirza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Warde-Farley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ozair</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Courville</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Bengio</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Communications of the ACM</title>
		<imprint>
			<biblScope unit="volume">63</biblScope>
			<biblScope unit="page" from="139" to="144" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">of controversies and risks of chatgpt</title>
		<author>
			<persName><forename type="first">K</forename><surname>Wach</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">D</forename><surname>Duong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Ejdys</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Kazlauskaitė</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Korzynski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Mazurek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Paliszkiewicz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Ziemba</surname></persName>
		</author>
		<idno type="DOI">10.15678/EBER.2023.110201</idno>
		<idno>doi:10.15678</idno>
		<ptr target="/EBER.2023.110201" />
	</analytic>
	<monogr>
		<title level="m">The dark side of generative artificial intelligence: A critical analysis</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Abhishek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Derdenger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Srinivasan</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2403.02726</idno>
		<title level="m">Bias in generative ai</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<author>
			<persName><forename type="first">L</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Zhong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Feng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Peng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Feng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Qin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Liu</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2311.05232</idno>
		<title level="m">A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Ethics and privacy of artificial intelligence: Understandings from bibliometrics</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">Y</forename><surname>Tian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Lu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Knowledge-Based Systems</title>
		<imprint>
			<biblScope unit="volume">222</biblScope>
			<biblScope unit="page">106994</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">When machine learning meets privacy: A survey and outlook</title>
		<author>
			<persName><forename type="first">B</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ding</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Shaham</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Rahayu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Farokhi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Lin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Computing Surveys (CSUR)</title>
		<imprint>
			<biblScope unit="volume">54</biblScope>
			<biblScope unit="page" from="1" to="36" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<monogr>
		<ptr target="https://www.whitehouse.gov/ostp/ai-bill-of-rights" />
		<title level="m">White House Office of Science and Technology Policy, Ai bill of right</title>
				<imprint>
			<date type="published" when="2022-04-01">2022. 01 April, 2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<monogr>
		<ptr target="https://data.consilium.europa.eu/doc/document/ST-5662-2024-INIT/en/pdf" />
		<title level="m">Eu ai act</title>
				<imprint>
			<date type="published" when="2024-04-01">2024. 01 April, 2024</date>
		</imprint>
		<respStmt>
			<orgName>Council of European Union</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<monogr>
		<ptr target="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A02016R0679-20160504,ac-cessedon" />
		<title level="m">Council of European Union, Regulation (eu) 2016/679 of the european parliament and of the council -general data protection regulation</title>
				<imprint>
			<date type="published" when="2016-04-01">2016. 01 April, 2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<monogr>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">P</forename><surname>Di Bonito</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Campanile</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Napolitano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Iacono</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Portolano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">Di</forename><surname>Natale</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.cherd.2023.06.006</idno>
		<ptr target="https://doi.org/10.1016/j.cherd.2023.06.006" />
		<title level="m">Analysis of a marine scrubber operation with a combined analytical/ai-based method</title>
				<imprint>
			<publisher>Chemical Engineering Research and Design</publisher>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Prediction of chemical plants operating performances: a machine learning approach</title>
		<author>
			<persName><forename type="first">L</forename><surname>Campanile</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Di Bonito</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Iacono</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">Di</forename><surname>Natale</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">PROCEEDINGS EUROPEAN COUNCIL FOR MOD-ELLING AND SIMULATION</title>
		<imprint>
			<biblScope unit="volume">2023</biblScope>
			<biblScope unit="page" from="575" to="581" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
