Beyond the Hype: Toward a Concrete Adoption of the Fair and Responsible Use of AI

Lelio Campanile1,∗,†, Roberta De Fazio1,†, Michele Di Giovanni1,† and Fiammetta Marulli1,†

1 Department of Mathematics and Physics, Università degli Studi della Campania "L. Vanvitelli", viale Lincoln 5, Caserta, 81100, Italy

Abstract
Artificial Intelligence (AI) is a fast-changing technology that is having a profound impact on our society, from education to industry. Its applications cover a wide range of areas, such as medicine, the military, engineering and research. The emergence of AI and Generative AI has significant potential to transform society, but it also raises concerns about transparency, privacy, ownership, fair use, reliability, and ethics. Generative AI adds complexity to the existing problems of AI because of its ability to create machine-generated data that is barely distinguishable from human-generated data, bringing the issue of responsible and fair use of AI to the forefront. The security, safety and privacy implications are enormous, and the risks associated with inappropriate use of these technologies are real. Although some governments, such as the European Union and the United States, have begun to address the problem with recommendations and proposed regulations, this is probably not enough. Regulatory compliance should be seen as the starting point of a continuous process of improving the ethical procedures and privacy risk assessment of AI systems. The need for a baseline to manage the process of creating an AI system from an ethics and privacy perspective is becoming progressively more important. In this study, we discuss the ethical implications of these advances and propose a conceptual framework for the responsible, fair, and safe use of AI.

Keywords
Artificial Intelligence, Generative AI, Ethical AI, Large Language Models
Ital-IA 2024: 4th National Conference on Artificial Intelligence, organized by CINI, May 29-30, 2024, Naples, Italy
∗ Corresponding author.
† These authors contributed equally.
lelio.campanile@unicampania.it (L. Campanile); roberta.defazio@unicampania.it (R. De Fazio); michele.digiovanni@unicampania.it (M. Di Giovanni); fiammetta.marulli@unicampania.it (F. Marulli)
ORCID: 0000-0003-4021-4137 (L. Campanile); 0000-0002-0271-132X (R. De Fazio)
© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (ceur-ws.org), ISSN 1613-0073.

1. Introduction

Artificial Intelligence (AI) is a rapidly advancing field of science and technology that has the potential to revolutionize various sectors of industry and society. With its ability to process vast amounts of data, generate insights, and support decision-making, AI has emerged as an important part of many organizations' processes. However, concerns about the impact of AI on society, particularly from an ethical perspective, have increased as its use has grown. From self-driving cars to virtual assistants, the applications of AI are endless, as the quality and performance of AI techniques and methods continue to improve.

The advent of generative AI expands the potential applications of AI and increases the dangers it poses. Generative AI is a subset of AI that uses Machine Learning (ML) algorithms to generate new content based on existing data. It makes it possible to create content that appears new and original but is in fact the product of statistics derived from the training data sets.

Generative AI raises new ethical challenges and a whole new set of emerging issues because of the difficulty of separating human-generated content from machine-generated content. Fair use of AI becomes crucial in every field of application: first and foremost in sensitive fields such as medicine, the military, and engineering, where the human decision-making component is of primary importance, but also in research and education, where fair use of AI is critical to the informed growth of students with critical thinking and to quality research. With the rapid developments in machine learning and generative AI models, newly released and more powerful Large Language Models (LLMs) such as ChatGPT, Claude, Mistral and others continue to receive attention focused on the associated risks, particularly from legal and ethical points of view.

There are both exciting opportunities and significant ethical challenges associated with the use of generative AI. The technology has the potential to revolutionize various sectors of society. However, it also raises concerns about job displacement, transparency, privacy, ownership, inequality, and reliability. To ensure that the benefits of generative AI are maximized while its risks are minimized, the development of responsible and ethical frameworks for its use will be critical.

In this paper, we explore the key ethical issues, promises, and perils of AI use, and propose a conceptual framework that could contribute to the responsible, reliable, fair, and safe use of AI.

The rest of this paper is structured as follows: Section 2 gives a brief overview of AI and generative AI, Section 3 focuses on the ethical implications and issues of AI, Section 4 presents the conceptual framework, and Section 5 presents the conclusion and future research directions.

2. AI and Generative AI background

In the last few years, Artificial Intelligence Generated Content (AIGC) [1] has gained outstanding popularity, not only in the computer science research community but mainly in terms of interest in the various content generation products built by large tech companies. AIGC typically refers to content that is automatically generated by adopting advanced Generative AI (GAI) techniques, as opposed to being created by human authors.

GAI-based systems can automate the creation of large quantities of content in a very short time. The most representative exemplars are provided by the OpenAI tools, namely ChatGPT [2] and DALL-E [3]: these tools can generate, respectively but not exclusively, textual documents and pictures, exploiting the large knowledge bases lying beneath the interaction systems, which are typically provided as conversational agents. The extraordinary popularity of these tools can reasonably be traced to one key aspect: they are friendly, ready-to-use tools for non-expert people. By adopting a very familiar interface, in the shape of an instant messaging system, properly called a conversational agent or, in short, a chatbot, common users are enabled to test and effectively exploit the potential of generative technologies. ChatGPT is a Large Language Model (LLM) [4] based tool developed by OpenAI for building conversational AI systems, which can efficiently understand and respond to human language inputs in a meaningful way [5]. DALL-E is another state-of-the-art GAI model, also developed by OpenAI, which is capable of creating unique and high-quality images from textual descriptions in a few minutes, such as "a pink rabbit going to Mars boarding its flying basket" in a photorealistic style.

However, GAI is not free from research challenges, concerning, for example, the appropriate set of evaluation metrics for assessing the fidelity, faithfulness and quality of artificially generated data, as discussed in [6]. A further analysis of GAI methodologies and research aspects, along with a comprehensive classification of the input and output formats used in GAI systems, is provided in [7].

While GAI represents a significantly challenging issue for researchers involved in understanding and improving the representation of the knowledge behind it, GAI-based systems also carry non-trivial implications, such as those related to social impact and to ethical and legal aspects. The outstanding popularity of these kinds of systems and tools among common users brings back to mind the effects of Web 2.0 and the introduction of User Generated Content (UGC) [8], when people were enabled to write anything almost anywhere. A deleterious phenomenon deriving from this excess of web democracy is still the unconditioned spreading of fake news [9], as discussed in the studies proposed in [10], [11], [12]. Fake news can be automatically generated by GAI systems, with features that make it challenging to distinguish from real news when automatic classification systems are employed. With the very recent advances of GAI, generating fake content is within everyone's reach. Finally, novel cyber-security issues are also introduced by the malicious exploitation of generative AI [13]. Foremost among them are adversarial attacks, performed mostly by re-shaping and re-arranging well-known malicious behaviours and activities under a novel, unknown guise in order to cheat defence and intrusion detection systems [14]. Zero-day attacks, along with data and model poisoning attacks, are very frequently supported by GAI-based systems [15]. Poisoning attacks targeting machine learning models, performed through the exploitation of adversarial and generative AI, are discussed in [10] and [16]. The work of [17] discusses a case study of frauds against energy distribution and dispatching systems, highlighting the potential drawbacks and threats deriving from a maliciously driven exploitation of Generative Adversarial Networks (GANs) [18], several years before the current explosion of popularity of GAI systems.

3. Ethics aspects: promises and perils

The new possibilities associated with AI and GAI raise various ethical challenges that should be addressed in a comprehensive manner. Researchers, physicists, and engineers should not stop at minimum legal compliance when facing ethical issues in the field of AI; they should study, understand, and act in the best possible way to mitigate or eliminate those issues.

A significant concern in AI is bias. From data collection to model training, bias is a potential risk at different stages of the AI process, the risk being that existing bias in the training data is perpetuated in the AI model. This risk becomes very high in GAI model training [19].

In GAI, the amount of data used to train the model is enormous. Often this data has been collected from the Internet using different and heterogeneous data sources. Unlike a traditional AI model, which is used for prediction or classification purposes, a generative model is used to create new content. Bias in the training data therefore raises a specific ethical issue, as it perpetuates a potential social bias in the newly generated content.

In addition, researchers working in this area face another ethical dilemma: if the data contain biases that reflect society, is it correct to work to mitigate these biases? If so, how? Certainly, it must be done with the utmost care, because biased AI systems could potentially exacerbate existing societal inequalities. They could perpetuate prejudice or reinforce stereotypes. They could also produce disparate outcomes for groups based on factors like race, gender, or socioeconomic status, leading to further inequality and social unrest. There is a real risk of perpetuating harmful stereotypes and possibly even distorting beliefs [20].
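The disparate-outcome concern above can be made measurable before a system is deployed. As a purely illustrative sketch (the function name, decisions, and group labels below are hypothetical examples, not taken from this paper), the demographic parity gap compares the rate of favourable model decisions across groups:

```python
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Difference between the highest and lowest positive-decision
    rates observed across groups (0.0 means perfectly equal rates)."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model decisions (1 = favourable outcome) per applicant group.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(decisions, groups)
print(rates)  # {'A': 0.75, 'B': 0.25}
print(gap)    # 0.5
```

A single number is of course no substitute for the careful, context-aware analysis argued for above; it merely gives auditors a concrete signal to track over time.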
Strictly related to the bias issue, especially in generative content creation, is the problem of misleading information and fake news generation. The ability of LLMs to produce information that is not present in their original data, known as hallucination or, more technically, as an emergent feature of LLMs [21], introduces the problem of generating misleading text, which could easily become fake news.

Moreover, recent developments in GAI allow not only text but also images (Figure 1), video, and audio to be created, enabling non-technical people to use these techniques effectively through simple applications.

[Figure 1: OpenAI DALL-E 2 photorealistic image]

It is clear that illegal use of these technologies has already led to attempts at fraud and extortion, and can also lead to major legal and social problems. These techniques can be used illegally to create images and videos that substitute the face or other physical characteristics of one person for those of another, with the potential to produce believable and deceptive content that spreads misinformation or damages the reputation of individuals. The most relevant privacy issues include:

• Privacy Violation: fake content can be used to manipulate existing videos or images without the consent of the people involved, possibly violating their privacy.
• Identity Theft: by spreading false information, misleading content, or malicious messages, fake content can be used to impersonate individuals and cause significant damage to their reputation and privacy, as well as to organize financial fraud.
• Revenge Porn: fake content can be used to create fabricated videos or images that show people in compromising situations, damaging their privacy and reputation. In the most serious cases, money is solicited for extortion purposes.
• Misinformation or Disinformation: fake content can have a significant impact on public opinion, trust, and decision-making by spreading false information or propaganda. This misinformation can also have a serious impact on society: it can lead to social unrest, political instability, and other negative consequences.

It is important to emphasize that privacy issues arise early in AI processes. There are significant privacy issues at the data collection stage, because this is where sensitive information is collected and stored, making it vulnerable to potential security breaches and unauthorized access [22], [23].

Finally, it is also interesting for this discussion to mention the problem of the copyright of the content on which AI systems, especially GAI systems, are trained. Often the source of this data is not really known. GAI systems can use, process, and generate content without explicit consent, potentially violating the privacy of individuals and organizations.

The considerations presented here on the ethical risks associated with AI, and the perils that arise from them, depend in great part on unaccountable or unfair use of AI, both by the creators of AI systems and by the end users of such technologies.

In the field of text generation, there are many use cases where LLMs can help and improve the regular activities of students and researchers, provided they are used in a fair way. GAI systems such as ChatGPT could be leveraged by students to get ideas or insights on specific topics. If you know well the idea you want to express, GAI can help you write it without grammar mistakes, especially if you are writing in a non-native language. This could greatly benefit non-native speakers, even in academia, in a sort of democratization of the dissemination of scientific thought, without having to resort to expensive language revision services.

On the other hand, unfair and unethical use of this technology by students and researchers raises a very important ethical and legal problem related to authorship. The need to know whether a piece of content is human-generated or machine-generated is becoming ever more relevant and critical.

4. A conceptual framework

Faced with these ethical and practical problems, the governments of various countries around the world have not stood still. The United States has responded with the AI Bill of Rights [24], which is not a regulation but a white paper of recommendations from the White House Office of Science and Technology Policy. It outlines the main principles to be followed to address ethical issues in AI, and serves as a guideline for designing and deploying AI systems that respect human rights, enhance fairness, and protect personal privacy.

The European Commission has gone further with the EU AI Act, a "Proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence and amending certain Union legislative acts" [25]. It is a fully-fledged legislative proposal that aims to address the risks associated with artificial intelligence systems; the AI Act intends to ensure that AI systems are trustworthy, reliable, and beneficial to individuals and society. Earlier, the European Union had enacted the General Data Protection Regulation (GDPR) [26], which, although not closely related to artificial intelligence, protects the privacy rights of European citizens, with particular emphasis on the automatic gathering and processing of personal data.
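Given the GDPR's emphasis on the automatic processing of personal data, a privacy-aware AI pipeline can screen candidate training records for obvious personal identifiers before use. The sketch below is a deliberately minimal illustration: the patterns, sample text, and function are hypothetical, far too coarse for real compliance work, which also demands broader coverage (names, addresses, national IDs) and human review.

```python
import re

# Deliberately simple patterns: real GDPR-oriented screening would need
# far broader coverage and human oversight; these are illustrative only.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def flag_pii(record: str) -> dict:
    """Return the PII categories (and matches) found in one training record."""
    return {kind: pat.findall(record)
            for kind, pat in PII_PATTERNS.items() if pat.search(record)}

# Hypothetical training-data record with invented contact details.
sample = "Contact Mario Rossi at mario.rossi@example.com or +39 0823 274 000."
print(flag_pii(sample))
# {'email': ['mario.rossi@example.com'], 'phone': ['+39 0823 274 000']}
```

Flagged records could then be dropped, redacted, or routed to a reviewer, which is one concrete way to turn the regulatory principle into a step of the training pipeline.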
These documents provide a solid foundation for the development of a conceptual framework to assist researchers and companies in developing and deploying AI systems that are not only compliant but also address, and attempt to solve, AI ethical issues. The four pillars on which the proposed framework is built are:

• Explainable Artificial Intelligence (XAI)
• Use of tools when possible
• Audit and organization for ethical compliance
• Continuous risk assessment

A possible workflow for the application of this framework is depicted in Figure 2.

[Figure 2: Workflow for the application of the conceptual framework for responsible and ethical use and development of AI systems]

The trustworthiness and transparency of an AI system are important characteristics for the responsible use of AI, because they increase the sense of security in using the system and the confidence placed in it. XAI is a cornerstone for achieving these aims. The user should be able to understand why the AI system arrives at the results it does and why certain actions are taken. XAI supports transparency, which not only increases user confidence but also helps organizations analyse potential biases and errors in their models.

To be effective, the application of the framework should be continuous and cyclical. The adoption, where possible, of various tools and techniques to review the fair use of AI systems will be essential. These should include automatic systems that check the origin of training data, and tools that can help assess whether a text is human-generated or machine-generated. Tools that protect against adversarial attacks and data poisoning attacks are also needed to keep the system fair, ethical, and secure. In this field, the importance of research is paramount, because even though some steps have been taken in the right direction, new developments are moving fast, and it is necessary to constantly improve tools and techniques.

The next phase involves regular audits and organizational practices that encourage the ethical and responsible use and development of AI systems. This could include internal reviews of development processes, ongoing training for operators, and regular audits to assess the ethical implications of AI systems. These practices should be organized with clear guidelines to avoid any misunderstanding or abuse of AI techniques.

Finally, a regular and cyclical risk assessment process specific to AI systems is required to promptly identify, evaluate, and prioritize potential risks associated with the development of AI systems.

5. Conclusion and Future Works

Generative AI is all about creating artificial data that looks like the real thing. This super-realistic data can be a game-changer in many fields, from video games to medicine and finance, and even the arts. The output of GAI is sometimes referred to as "fake data", to make evident that the content was generated by an automatic process performed by a machine and not by a human being. GAI makes it possible to generate fake but realistic images, write new text, compose music, and even build chatbots that feel like chatting with real people. Besides the research efforts to improve the quality of GAI production, several ethical, legal and security issues need to be addressed.

It is apparent that these issues need to be addressed systematically, beyond mere regulatory compliance. The development of a conceptual framework to address them is a good starting point. Future work will include improving the framework and exploring ways to make it more practical, including measures of the performance of ethical and responsible use of AI and GAI. Moreover, we plan an in-depth look at the topic of distinguishing human-generated texts from texts generated by a GAI, exploring existing techniques and working towards new ones. Finally, we will continue our research in the area of XAI (whose exploration began in [27] and [28]), extending it to GAI, in order to improve the transparency of AI systems.
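As a toy illustration of the detection problem named in the future work above, one family of shallow signals compares the uniformity of sentence lengths, since machine-generated prose is often reported to be more uniform ("less bursty") than human prose. Real detectors rely on stronger, model-based signals such as perplexity; the function name and sample texts below are hypothetical.

```python
import re
from statistics import pvariance

def burstiness(text: str) -> float:
    """Population variance of sentence lengths (in words): a shallow,
    purely illustrative signal; low variance means very uniform prose."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return pvariance(lengths) if len(lengths) > 1 else 0.0

uniform = "The cat sat here. The dog ran off. The sun was hot."
varied = "Stop. The old lighthouse keeper climbed slowly, counting every step. Why?"
print(burstiness(uniform) < burstiness(varied))  # True
```

No such heuristic is reliable on its own; it is shown only to make concrete the kind of measurable signal that detection research starts from.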
References

[1] Y. Cao, S. Li, Y. Liu, Z. Yan, Y. Dai, P. S. Yu, L. Sun, A comprehensive survey of AI-generated content (AIGC): A history of generative AI from GAN to ChatGPT, arXiv preprint arXiv:2303.04226 (2023).
[2] OpenAI, Conversation with ChatGPT, 2023. URL: https://chat.openai.com.
[3] OpenAI, DALL-E 2, 2023. Generative AI model for image creation. URL: https://openai.com/dall-e-2.
[4] Y. Chang, X. Wang, J. Wang, Y. Wu, L. Yang, K. Zhu, H. Chen, X. Yi, C. Wang, Y. Wang, et al., A survey on evaluation of large language models, ACM Transactions on Intelligent Systems and Technology (2023).
[5] M. Abdullah, A. Madain, Y. Jararweh, ChatGPT: Fundamentals, applications and social impacts, in: 2022 Ninth International Conference on Social Networks Analysis, Management and Security (SNAMS), IEEE, 2022, pp. 1–8.
[6] F. Marulli, P. Paganini, F. Lancellotti, Exploring the faithfulness of synthetic data by generative models, in: 2023 International Conference on Machine Learning and Applications (ICMLA), IEEE, 2023, pp. 2214–2221.
[7] A. Bandi, P. V. S. R. Adapa, Y. E. V. P. K. Kuchi, The power of generative AI: A review of requirements, models, input–output formats, evaluation metrics, and challenges, Future Internet 15 (2023) 260.
[8] B. Omar, W. Dequan, Watch, share or create: The influence of personality traits and user motivation on TikTok mobile video usage (2020).
[9] F. Marulli, S. Marrone, L. Verde, Sensitivity of machine learning approaches to fake and untrusted data in healthcare domain, Journal of Sensor and Actuator Networks 11 (2022) 21.
[10] F. Marulli, L. Verde, L. Campanile, Exploring data and model poisoning attacks to deep learning-based NLP systems, Procedia Computer Science 192 (2021) 3570–3579.
[11] F. Marulli, L. Verde, S. Marrone, L. Campanile, A federated consensus-based model for enhancing fake news and misleading information debunking, in: Intelligent Decision Technologies: Proceedings of the 14th KES-IDT 2022 Conference, Springer, 2022, pp. 587–596.
[12] L. Campanile, P. Cantiello, M. Iacono, F. Marulli, M. Mastroianni, Vulnerabilities assessment of deep learning-based fake news checker under poisoning attacks, Computational Data and Social Networks (2021) 385.
[13] L. Campanile, M. Iacono, F. Martinelli, F. Marulli, M. Mastroianni, F. Mercaldo, A. Santone, Towards the use of generative adversarial neural networks to attack online resources, in: Web, Artificial Intelligence and Network Applications: Proceedings of the Workshops of the 34th International Conference on Advanced Information Networking and Applications (WAINA-2020), Springer, 2020, pp. 890–901.
[14] O. Eigner, S. Eresheim, P. Kieseberg, L. D. Klausner, M. Pirker, T. Priebe, S. Tjoa, F. Marulli, F. Mercaldo, Towards resilient artificial intelligence: Survey and research issues, in: 2021 IEEE International Conference on Cyber Security and Resilience (CSR), IEEE, 2021, pp. 536–542.
[15] C. A. Visaggio, F. Marulli, S. Laudanna, B. La Zazzera, A. Pirozzi, A comparative study of adversarial attacks to malware detectors based on deep learning, Malware Analysis Using Artificial Intelligence and Deep Learning (2021) 477–511.
[16] L. Verde, F. Marulli, S. Marrone, Exploring the impact of data poisoning attacks on machine learning model reliability, Procedia Computer Science 192 (2021) 2624–2632.
[17] F. Marulli, C. A. Visaggio, Adversarial deep learning for energy management in buildings, in: Proceedings of the 2019 Summer Simulation Conference, 2019, pp. 1–11.
[18] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio, Generative adversarial networks, Communications of the ACM 63 (2020) 139–144.
[19] K. Wach, C. D. Duong, J. Ejdys, R. Kazlauskaitė, P. Korzynski, G. Mazurek, J. Paliszkiewicz, E. Ziemba, The dark side of generative artificial intelligence: A critical analysis of controversies and risks of ChatGPT (2023). URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85183620669&doi=10.15678%2fEBER.2023.110201&partnerID=40&md5=deab98413c32b948ba57308e7e53fa6a. doi:10.15678/EBER.2023.110201.
[20] M. Zhou, V. Abhishek, T. Derdenger, J. Kim, K. Srinivasan, Bias in generative AI, arXiv preprint arXiv:2403.02726 (2024).
[21] L. Huang, W. Yu, W. Ma, W. Zhong, Z. Feng, H. Wang, Q. Chen, W. Peng, X. Feng, B. Qin, T. Liu, A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions (2023). arXiv:2311.05232.
[22] Y. Zhang, M. Wu, G. Y. Tian, G. Zhang, J. Lu, Ethics and privacy of artificial intelligence: Understandings from bibliometrics, Knowledge-Based Systems 222 (2021) 106994.
[23] B. Liu, M. Ding, S. Shaham, W. Rahayu, F. Farokhi, Z. Lin, When machine learning meets privacy: A survey and outlook, ACM Computing Surveys (CSUR) 54 (2021) 1–36.
[24] White House Office of Science and Technology Policy, AI Bill of Rights, 2022. URL: https://www.whitehouse.gov/ostp/ai-bill-of-rights, accessed on 01 April, 2024.
[25] Council of European Union, EU AI Act, 2024. URL: https://data.consilium.europa.eu/doc/document/ST-5662-2024-INIT/en/pdf, accessed on 01 April, 2024.
[26] Council of European Union, Regulation (EU) 2016/679 of the European Parliament and of the Council - General Data Protection Regulation, 2016. URL: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A02016R0679-20160504, accessed on 01 April, 2024.
[27] L. P. Di Bonito, L. Campanile, E. Napolitano, M. Iacono, A. Portolano, F. Di Natale, Analysis of a marine scrubber operation with a combined analytical/AI-based method, Chemical Engineering Research and Design (2023). doi:10.1016/j.cherd.2023.06.006.
[28] L. Campanile, L. Di Bonito, M. Iacono, F. Di Natale, et al., Prediction of chemical plants operating performances: a machine learning approach, Proceedings European Council for Modelling and Simulation 2023 (2023) 575–581.