Designing a Hybrid Educational Framework for AI Ethics in Healthcare: Leveraging LLMs and E-Learning Platforms to Empower Medical Students

Giacomo Balduzzi1,∗,†, Teresa Balduzzi2,∗,† and Manuel Striani3,∗,†

1 Dipartimento di Giurisprudenza e Scienze Politiche, Economiche e Sociali - DIGSPES, Università del Piemonte Orientale, Alessandria, Italy
2 Dipartimento di Giurisprudenza, Università degli Studi Roma Tre, Roma, Italy
3 Dipartimento di Scienze ed Innovazione Tecnologica - DiSIT, Università del Piemonte Orientale, Alessandria, Italy

Abstract
As Artificial Intelligence becomes increasingly integrated into healthcare, it is essential for medical professionals to understand the ethical implications of these technologies. This paper proposes the creation of a specialized AI ethics course tailored specifically for healthcare professionals. The course utilizes an innovative framework that combines the interactive capabilities of OpenAI’s ChatGPT-4o with the comprehensive learning management features of the Moodle e-learning platform to deeply engage with complex ethical dilemmas in AI. This hybrid approach aims to ensure that healthcare students and professionals not only gain theoretical knowledge but also develop practical skills in ethical decision-making, empowering them to navigate AI-related challenges in healthcare responsibly and effectively, while maintaining the highest standards of patient care and ethical practice.

Keywords
Education, AI ethics in healthcare, Personalized learning

1. Introduction to AI Ethics Education in Healthcare

The rapid advancement of Artificial Intelligence (AI) in healthcare has revolutionized the way medical services are delivered, offering unprecedented opportunities for improving patient outcomes, enhancing diagnostic accuracy, and streamlining clinical workflows. Moreover, AI-driven tools, from predictive analytics to personalized treatment recommendations, are becoming integral components of modern healthcare systems [1].
However, these technological innovations raise significant ethical challenges that must be carefully navigated by healthcare professionals. Despite the critical importance of ethical AI in healthcare, there is a gap in formal education on this topic for medical professionals. Many healthcare providers and researchers may lack the necessary training to fully understand the ethical dimensions of AI, leading to challenges in its responsible implementation. To address this gap, there is a growing need for comprehensive education programs that integrate ethics into AI healthcare training, ensuring that medical personnel are prepared to make informed, ethical decisions in their practice. In addition, digital and AI technologies are becoming increasingly important within educational processes and techniques [2, 3, 4]. Therefore, in addition to “education for AI”, which is the focus of this workshop and paper—specifically addressing its implications in healthcare—there is also the challenge of the growing role of digital technologies and AI within educational processes (“AI in education”).

1st Workshop on Education for Artificial Intelligence (edu4AI 2024, https://edu4ai.di.unito.it/), co-located with the 23rd International Conference of the Italian Association for Artificial Intelligence (AIxIA 2024), 26-28 November 2024, Bolzano, Italy.
∗ Corresponding author.
† These authors contributed equally.
giacomo.balduzzi@uniupo.it (G. Balduzzi); teresa.balduzzi@uniroma3.it (T. Balduzzi); manuel.striani@uniupo.it (M. Striani)
ORCID: 0000-0003-4448-1515 (G. Balduzzi); 0009-0008-1434-912X (T. Balduzzi); 0000-0002-7600-576X (M. Striani)
© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (ceur-ws.org), ISSN 1613-0073.
Although we see a necessity to distinguish these two fields analytically, they are integrated in practice, as they often involve similar and interconnected ethical risks and opportunities. This paper proposes the development of a course on ethics in AI healthcare education, specifically tailored for medical personnel and students. The course is designed to be delivered through an innovative framework that integrates the interactive capabilities of Large Language Models (LLMs [5]; specifically, OpenAI’s ChatGPT-4o [6, 7]) with the versatile e-learning platform Moodle.

2. The Importance of AI Ethics Education in Healthcare

The state of the art in ethics education for AI in healthcare emphasizes the need for comprehensive frameworks that integrate ethical principles into the rapidly evolving field of AI-driven healthcare solutions. Recent literature highlights the importance of combining ethics with practical applications to address the unique challenges posed by AI in healthcare settings. The World Health Organization (WHO) has called for cautious and ethical implementation of AI technologies, particularly in ensuring that these tools respect patient autonomy, privacy, and fairness while enhancing diagnostic and treatment processes. The WHO emphasizes that the integration of AI in healthcare must be guided by ethical principles such as transparency, inclusion, and the public interest, and by rigorous evaluation, to prevent unintended harm and ensure equitable access to the benefits of AI [8, 9]. Additionally, the American Medical Association (AMA) has developed frameworks that promote trustworthy AI in healthcare, focusing on the intersection of ethics, evidence, and equity. These frameworks advocate for the responsible use of AI in medical education and practice, ensuring that healthcare professionals are well-equipped to navigate the ethical dilemmas associated with AI.
The AMA’s approach also includes addressing biases in AI algorithms and ensuring that AI technologies are used to support, rather than replace, human decision-making in clinical settings [10]. Overall, the current state of the art underscores the need for healthcare professionals to be educated not only on the technical aspects of AI but also on the ethical considerations that must accompany its deployment. This dual focus on ethics and practical application is essential for developing AI systems that are both innovative and aligned with the core values of healthcare [11, 12]. Incorporating AI ethics into medical training is essential for preparing future healthcare professionals to navigate the ethical challenges presented by emerging technologies. While medical ethics education cannot cover every aspect of AI ethics, it must equip students with the skills to reason ethically about AI’s role in healthcare. Key topics for AI ethics education include informed consent, bias, safety, transparency, patient privacy, and trust. For example, informed consent in AI-driven healthcare is particularly complex, requiring careful consideration of how much patients should be informed about AI’s role in their care. Similarly, biases within AI systems can lead to unequal healthcare outcomes, making it crucial for future doctors to recognize and mitigate these biases. Additionally, AI’s impact on clinical skills and decision-making, such as the risk of over-reliance on AI (automation bias) and the potential erosion of traditional clinical skills, must be addressed to maintain high standards of patient care. Furthermore, ethical considerations such as accountability, legal regulation, and the environmental sustainability of AI technologies should be integral to this education. In summary, while AI holds transformative potential for healthcare, its integration requires careful ethical consideration.
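The bias concern above can be made concrete with a small numerical check. The sketch below is a hypothetical toy example (all data invented for teaching purposes, not drawn from any real clinical system): it compares the false-negative rates of a diagnostic classifier across two patient groups, the kind of group-level audit students could perform to recognize unequal outcomes.

```python
# Toy illustration: auditing a diagnostic model for group-level bias.
# The patient data below are invented purely for teaching purposes.

def false_negative_rate(y_true, y_pred):
    """Fraction of actually-positive cases the model missed."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return 0.0
    missed = sum(1 for t, p in positives if p == 0)
    return missed / len(positives)

# Hypothetical predictions for two patient groups (1 = disease present).
group_a_true = [1, 1, 1, 1, 0, 0, 0, 0]
group_a_pred = [1, 1, 1, 0, 0, 0, 0, 0]   # misses 1 of 4 diseased patients
group_b_true = [1, 1, 1, 1, 0, 0, 0, 0]
group_b_pred = [1, 0, 0, 0, 0, 0, 0, 0]   # misses 3 of 4 diseased patients

fnr_a = false_negative_rate(group_a_true, group_a_pred)  # 0.25
fnr_b = false_negative_rate(group_b_true, group_b_pred)  # 0.75

# An equalized-odds-style audit flags the disparity between groups.
disparity = abs(fnr_a - fnr_b)
print(f"FNR group A: {fnr_a:.2f}, group B: {fnr_b:.2f}, gap: {disparity:.2f}")
```

A gap of this size would warrant investigating the training data and, as discussed above, deciding whether and how the bias can be mitigated before clinical deployment.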
Providing medical professionals with comprehensive AI ethics education ensures that these technologies are used responsibly, improving patient outcomes while upholding the core values of medical practice. The course, delivered through the Moodle e-learning platform and enhanced with an LLM, OpenAI’s ChatGPT-4o, aims to create a stimulating, inclusive and interactive learning environment [13], enabling medical residents to rigorously test their knowledge and be well-prepared for the ethical complexities of AI in healthcare.

3. Course Structure and Objectives

This course is designed to provide medical students with a deep and critical understanding of the ethical aspects of AI by utilizing innovative digital tools like OpenAI’s ChatGPT-4o to make learning more interactive and engaging. The course objectives are organized into six main areas:

1. Informed Consent and patient autonomy: There is ongoing debate about whether medical AI expands physicians’ legal obligations regarding informed consent. However, it is clear that a new approach is needed, one that carefully identifies the specific aspects of medical AI that must be communicated to patients in order to ensure a comprehensive and ethically sound consent process [14, 15, 16, 17, 18]. As AI becomes increasingly prevalent in high-stakes medical contexts, safeguarding patients’ autonomy could require ensuring their right to specifically refuse AI-based interventions.

2. Bias: There is growing recognition of both overt and subtle biases within AI systems. The challenge lies in determining whether these biases can be effectively mitigated, and if so, how this should be achieved.

3. Safety: While AI offers innovative solutions, it also introduces potential risks and errors, both anticipated and unforeseen, in healthcare delivery [19].
For instance, medical AI devices are typically designed to minimize mathematical “loss functions” (such as average prediction or classification errors), but this does not always translate to minimizing harm to patients.

4. Transparency: The opacity of some medical AI systems can lead to situations where physicians and patients must rely on predictions that cannot be easily explained or justified, raising concerns about which AI models should be deployed in clinical settings. Sometimes, the “black box” effect [20] may arise from intentional corporate or institutional self-protection. In other cases, the lack of transparency stems from technical illiteracy: the design of algorithms is a specialized skill that remains inaccessible to most of the population. Finally, a third form of opacity arises from the mismatch between the mathematical procedures of machine learning algorithms and human styles of semantic interpretation. In short, “the workings of machine learning algorithms can escape full understanding and interpretation by humans, even for those with specialized training, even for computer scientists” [21].

5. Equity: As AI becomes increasingly integral to medicine, ensuring fair access to its benefits will become a critical issue [22]. There is a growing concern that disadvantaged groups—such as those with lower incomes, less education, or limited access to healthcare—may face significant barriers to benefiting from AI advancements. This could exacerbate existing inequalities and create new forms of injustice and inequity within the healthcare system.

6. Legal Regulation: EU regulations on medical devices (MDR and IVDR) and the EU’s GDPR [23] already form a complex regulatory framework governing medical AI.
To this intricate landscape—alongside a new strategy on data governance, which includes the Data Governance Act (DGA), the Data Act, and the proposed European Health Data Space (EHDS) regulation—the AI Act has been added [24], regulating AI across various sectors, including healthcare [25, 26]. The interplay between this regulatory framework and ethical considerations is crucial, emphasizing their mutual influence on shaping both legal and ethical standards in the field.

4. The Need for a Cross-cutting Approach

The main areas discussed above highlight distinct yet interconnected issues. As noted by the United Nations and the WHO, the digitization of healthcare and the rapid expansion of AI systems are fundamentally transforming how services are delivered [27, 28, 29]. Alongside their opportunities, such technologies create challenges for healthcare systems, including the risk that they may compound and exacerbate human biases [30]. Moreover, these developments have ethical implications that cut across the areas mentioned above. The course design should identify several of these cross-cutting themes and address them, highlighting their connections and implications in relation to the various areas. One example is the impact of digitization and AI on the centrality of the patient in healthcare. Policies and institutions at various levels, such as the European level, advocate, at least in principle, for a healthcare system that becomes increasingly patient-centred [31]. In this perspective, patients should become active subjects rather than mere objects of healthcare, which includes participation in and influence on decision-making, as well as the competences needed for wellbeing. In reality, we observe that digitization and the use of AI lead to ambivalent outcomes for patients and healthcare professionals, involving both subjectification and objectification.
On the one hand, the work in [32] argued that a mismatch exists between the digital turn and the promotion of patient centeredness in the design and delivery of health services. Such technologies might increasingly exclude patients from shared decision-making, leaving them unable to exercise agency or autonomy in decisions about their health. Digitized healthcare systems increasingly objectify patients as data sources. Not only do patients often have limited knowledge about how and why AI technologies make certain decisions, but they must also confront the different “forms of opacity” presented by these machines: in a word, their lack of transparency (see Section 3). On the other hand, the “digitally engaged patient” [33] approach suggests that digital technologies, as well as computational techniques and AI, are suitable for increasing the participation of patients and their active involvement in self-care. For example, by employing wireless mobile digital devices and wearable, implanted or inserted biosensors, lay people can monitor their health, well-being and physical function and engage in self-care of illness, chronic medical conditions, or disability remotely. By using digitalized information systems, patients can conduct medical consultations via digital media. Additionally, they can seek out information about health, illness, and medical treatments and therapies, and share their experiences and health-related data with others, facilitating the process of acquiring their informed consent. Digital technologies also allow the collection and transmission of patient-reported outcomes (PROs), that is, “any report of the status of a patient’s health condition, health behaviour, or experience with health care that comes directly from the patient or in some cases from a caregiver or surrogate responder, without interpretation by a practitioner or anyone else” [34].
AI applications for health are no longer acquired and used exclusively within healthcare systems or home care. For example, non-health system entities such as education systems, workplaces, social media and even financial agencies often provide AI applications for mental health. Telemedicine is driving a larger shift from hospital-based to home-based care, facilitated by the use of AI. These applications include remote monitoring systems, such as video-observed therapy for tuberculosis and virtual assistants to support patient care. Since 2020, the use of telemedicine has grown exponentially in the wake of the COVID-19 pandemic in many countries, as demonstrated by the striking example of China [35]. Telemedicine and AI constitute highly distinct technological innovations, with applications in various fields of healthcare. However, they share some common elements. First, these technologies focus on digitizing patients’ bodies and behaviours to generate and use the data they produce [33]. Digital medicine overcomes the divide between disease and illness, since both can provide data that are useful for improving health system knowledge and performance. The work in [36] recalls the case of “Google Flu Trends”, launched by Google in 2009. With its stream of millions of hourly search queries, Google discovered it was able to report flu epidemics by having access to the world’s ‘health’ data «without truly knowing it». By exclusively recognizing the body as a carrier of disease, modern medicine tends to remove the embodied illness experience [37]. Digital medicine reduces the body itself to a digital archive. In this context, the “digitally-engaged patient” actively cooperates in the production of individualized, detailed data. Second, the emphasis on becoming “engaged” and “taking control of their own health” reflects a fundamentally individualistic approach to patient involvement in the healthcare system.
As highlighted in its constitution and reiterated in a more recent report [9], the WHO emphasizes the need to focus not only on reducing disease but also on tackling its root causes. This involves systematically addressing the social, environmental, and economic determinants of health. Although loneliness and social isolation are serious public health risks [9], they are still largely neglected by medicine and healthcare. The view of patients as individuals, abstracted from their social ties, is an obstacle to promoting a relational approach to health, which gives value, meaning, and importance to the relational goods [38] generated by patients and healthcare professionals in their daily interactions. Third, digital health technologies, unlike modern conventional medicine, acknowledge the importance of patients’ experiences, embodied in their perceptions and behaviours. Thus, the collaboration of patients and caregivers is necessary to monitor, detect, measure, compute, and configure such expertise, in other words, to transform the patients themselves into extractable and editable data. The growing role of AI in healthcare seems to foreshadow the expansion and establishment of a healthcare model based on the extraction of data from individuals. This is demonstrated by the decision of major players in surveillance capitalism [39] to significantly increase investments in the health field [40]. The consideration of the embodied knowledge of lay people is conditional on its being effectively disembodied, rendering it quantifiable material manageable by computers. In this way, the digital turn requires a reconfiguration of relationships among different actors, types of knowledge and experiences, instruments, techniques, structures, and spaces in the health field. Moreover, a new form of reductionism is arising: from the patient as a body with disease to the patient as a source of data. In this sense, a reinforced objectification of the patient is observable.
This poses problems for the subjectivity not only of patients, but also of physicians. The implications of these processes for informed consent, transparency, legal regulation, and the other areas mentioned in the previous section are varied and significant. Nonetheless, a well-structured course design is essential for developing an appropriate awareness aimed at exploring and understanding practical situations in a multidimensional way, interpreting them critically from various perspectives. To this end, the course will need to engage various fields of knowledge, adopting an interdisciplinary approach and featuring co-teaching that includes both researchers and professionals from diverse backgrounds: informatics, medical-scientific, socio-political, and legal. Furthermore, we believe it would be beneficial to design a course that integrates artificial intelligence with widely used e-learning tools, as the next section explains in more detail.

5. Integration of OpenAI’s ChatGPT-4o in Moodle

Moodle [41] is a widely used open-source e-learning platform that offers configurable features for creating student assessments—such as quizzes, online tests, and surveys—and for managing tasks and schedules [42, 43, 44]. Additionally, it provides a variety of tools to support the teaching and learning process. Figure 1 illustrates a possible integration of ChatGPT-4o into the Moodle e-learning platform using a cloud-based system that we suggest experimenting with: it may be costly, but it is effective for academic institutions. This system enables the creation of personalized learning paths tailored to each student’s progress and performance. For example, if a student has difficulty understanding the topics on AI ethics in healthcare, ChatGPT-4o, embedded in the Moodle e-learning platform, can suggest additional resources or offer simplified explanations to clarify the concept.
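One possible shape for this adaptive logic is sketched below. This is a minimal illustration under stated assumptions, not Moodle's actual plugin API: the function names, the mastery threshold, and the resource list are hypothetical, and the call to OpenAI's API is stubbed out as a comment so the sketch stays self-contained. The idea is simply that a quiz score below a configurable threshold triggers remediation, including a prompt that a real deployment could forward to ChatGPT-4o.

```python
# Minimal sketch of the adaptive-path logic described above.
# The hook names, threshold, and resources are hypothetical examples;
# a real deployment would call the OpenAI API where indicated.

PASS_THRESHOLD = 0.7  # assumed mastery cut-off, configurable per course

REMEDIAL_RESOURCES = {
    "informed_consent": ["Case study: consent for AI-assisted diagnosis",
                         "Reading: patient autonomy and medical AI"],
    "bias": ["Interactive module: auditing model outputs across groups"],
}

def build_tutor_prompt(topic: str, score: float) -> str:
    """Prompt that would be sent to the LLM tutor for a struggling student."""
    return (f"A medical student scored {score:.0%} on the '{topic}' quiz. "
            f"Explain the key ideas again in simpler terms, with one "
            f"healthcare example, and ask one follow-up question.")

def next_step(topic: str, score: float) -> dict:
    """Decide what the e-learning platform should show next for this topic."""
    if score >= PASS_THRESHOLD:
        return {"action": "advance", "topic": topic}
    return {
        "action": "remediate",
        "topic": topic,
        "resources": REMEDIAL_RESOURCES.get(topic, []),
        "tutor_prompt": build_tutor_prompt(topic, score),
        # In production, the prompt would be sent to the LLM here, e.g.
        # via OpenAI's chat completions API, and the reply shown in Moodle.
    }

step = next_step("informed_consent", 0.55)
print(step["action"], "->", step["resources"][0])
```

The design choice worth noting is that the decision logic lives outside the LLM: the platform decides *when* remediation is needed, while the model is only asked to generate the simplified explanation, keeping the learning path auditable by instructors.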
Below, we present an example of an interactive learning module on AI ethics in healthcare, designed specifically for medical students. This module, powered by our architecture, combines Moodle’s robust course management capabilities with the adaptive learning and conversational strengths of ChatGPT-4o, resulting in an engaging, tailored, and comprehensive educational experience. We outline here the four key steps that define our educational framework.

Figure 1: A dynamic learning architecture built around the Moodle e-learning platform, designed to enhance the educational experience of medical students focusing on AI ethics in healthcare. The figure depicts the four steps: (1) Teaching, (2) Learning, (3) Testing, and (4) Evaluation.

Step 1: Teaching: Medical students engage with the Moodle e-learning platform, where foundational materials on AI ethics are presented. These include case studies, theoretical modules, and practical examples relevant to healthcare. ChatGPT-4o offers explanations, answers questions, and provides deeper insights into complex topics. The AI tutor personalizes learning by adapting content delivery to the student’s pace and understanding, ensuring that each student grasps the ethical considerations of AI in healthcare. This first step includes a structured curriculum that covers:
• Introduction to AI: Basics of AI, its applications in healthcare, and the potential ethical issues that may arise.
• Ethical Principles: Core ethical principles relevant to AI in healthcare, such as patient autonomy, confidentiality, beneficence, and equity.
• Case Studies: Real-world examples where AI has been implemented in healthcare, emphasizing both successes and failures from an ethical perspective.

Step 2: Learning: Students actively participate in discussions, simulations, and interactive scenarios. The AI’s conversational capabilities encourage critical thinking and allow students to explore the nuances of AI ethics in a healthcare context.
For instance, students might engage in simulated ethical dilemmas where they must apply their knowledge to make decisions, with ChatGPT-4o guiding and challenging their reasoning. In this phase, students engage with interactive content designed to deepen their understanding of the ethical implications of AI in healthcare. It includes:
• Interactive Modules: These could involve scenarios where students must apply ethical principles to AI-based healthcare decisions.
• Simulations: Virtual environments where students can witness AI in action within a healthcare setting, understanding the impact of their decisions.
• Discussion Forums: Collaborative spaces where students can discuss ethical dilemmas with peers and instructors, fostering critical thinking and ethical reasoning.

Step 3: Testing: Following the learning phase, students take quizzes and assessments on the Moodle e-learning platform to test their understanding. These quizzes are dynamically generated, with ChatGPT-4o providing instant feedback and additional explanations for incorrect answers. This continuous feedback loop helps students identify areas for improvement and reinforces their learning. This phase assesses students’ understanding and application of AI ethics, and includes:
• Quizzes and Exams: Regular assessments to evaluate students’ grasp of ethical concepts and their ability to apply these in AI contexts.
• Practical Assessments: Realistic scenarios where students must navigate ethical challenges presented by AI in healthcare, ensuring they can apply what they’ve learned in practice.
• Feedback Mechanisms: Automated or instructor-provided feedback to help students understand their mistakes and improve their ethical decision-making skills.

Step 4: Evaluation: The final step involves a more formal evaluation, where students’ knowledge and ethical decision-making skills are assessed through comprehensive exams or projects.
ChatGPT-4o plays a role in offering review sessions, clarifying doubts, and even simulating real-world scenarios where students must apply ethical principles. The results from these evaluations are used to measure the effectiveness of the learning process and provide further guidance. This step includes:
• Comprehensive Evaluations: End-of-course assessments that test the students’ ability to synthesize and apply ethical principles in AI across multiple healthcare scenarios.
• Reflective Assessments: Students are encouraged to reflect on their learning journey, how their understanding of AI ethics has evolved, and how they might apply these lessons in their future careers.
• Continuous Improvement: The framework itself is evaluated for effectiveness, with feedback loops that inform continuous updates and improvements to the teaching materials and methods.

6. Case-Study: Scenarios and Learning Objectives

As part of the “AI Ethics in Healthcare” course, medical students are presented with a case study that challenges them to critically assess various ethical issues surrounding the use of AI in medical practice. The case study is divided into several key topics mentioned in Section 3. Table 1 presents real-world scenarios, starting from the following background, with the aim of highlighting the complex ethical dilemmas AI introduces into healthcare settings.

Background

Dr. Emily, a cardiologist at a prestigious hospital, is utilizing a newly implemented AI-powered diagnostic tool designed to assist in detecting heart disease. This particular AI decision support system analyzes patient data, including genetic information, lifestyle habits and medical history, to recommend personalized treatment plans.
While the technology has shown promise in enhancing diagnostic accuracy, it also raises several ethical concerns. The learning objectives are to:
(i) understand the unique ethical challenges AI introduces to informed consent in medical practice;
(ii) analyze the potential biases in AI systems and discuss methods for mitigating these biases;
(iii) evaluate the safety risks associated with AI in healthcare and the importance of transparency in AI decision-making;
(iv) discuss the equity implications of AI in healthcare, particularly regarding equitable access to AI technologies;
(v) reflect on patient autonomy and the right to refuse AI-driven interventions; and
(vi) explore the role of legal regulations in governing the use of AI in healthcare.

Interactive Testing
After completing the case study, students will use the ChatGPT tool integrated into the Moodle platform to answer a series of questions related to the case. ChatGPT will provide immediate feedback on their responses, prompting them to consider alternative perspectives and deepen their understanding of the ethical issues involved.

Evaluation
Students’ performance will be assessed based on their ability to articulate the ethical challenges presented by AI in healthcare, their critical thinking skills, and their ability to propose solutions that balance technological innovation with ethical responsibility.

Informed Consent and patient autonomy
Scenario: Dr. Emily is preparing to use the AI tool to recommend a treatment plan for a new patient. The patient expresses concern about how the AI works and whether it might make decisions without their full understanding.
Questions: How should Dr. Emily approach the informed consent process in this situation? To what extent should Dr. Emily respect the patient’s choice to refuse AI-based intervention?

Bias
Scenario: The AI tool has been found to recommend different treatment plans based on a patient’s ethnic background. Dr. Emily notices this pattern and wonders if the AI is biased.
Questions: What steps should Dr. Emily take to assess and address potential biases in the AI tool? How can the hospital ensure that the AI provides fair and unbiased recommendations for all patients?

Safety
Scenario: The AI system occasionally suggests treatment plans that deviate from traditional medical protocols. Some colleagues have reported that these recommendations, while innovative, may introduce unforeseen risks.
Questions: What are the ethical implications of using an AI tool that might increase the risk of harm to patients? How should Dr. Emily balance the potential benefits of AI-driven recommendations with the need to ensure patient safety?

Transparency
Scenario: The AI tool’s decision-making process is not fully transparent, and Dr. Emily cannot always explain why the AI recommends certain treatments. This lack of transparency makes it difficult to justify the AI’s decisions to patients.
Questions: How important is transparency in the use of AI in healthcare? Should Dr. Emily rely on AI recommendations if she cannot fully explain them to her patients?

Equity
Scenario: The hospital’s AI tool is expensive and not available in all healthcare facilities. Dr. Emily is concerned that only patients who can afford treatment at her hospital will benefit from this advanced technology.
Questions: What are the ethical concerns related to justice and equity in the distribution of AI technologies in healthcare? How can healthcare providers ensure that the benefits of AI are accessible to all patients, regardless of socioeconomic status?

Legal Regulation
Scenario: New laws such as the EU’s GDPR have been introduced to regulate AI in healthcare, but it is unclear how these laws apply to Dr. Emily’s practice.
Questions: What are the legal considerations Dr. Emily needs to be aware of when using AI in her practice? How can ethical principles influence the development of legal regulations for medical AI?

Table 1: Ethical Issues, Scenarios and Questions Related to AI in Healthcare

7.
Discussion and Conclusion

The integration of AI into healthcare offers tremendous opportunities but also presents significant ethical challenges. The course proposal presented in this paper, on AI ethics for medical students, provides a comprehensive framework that merges theoretical knowledge with practical applications. Throughout the course, students will deal with key ethical concerns such as informed consent, algorithmic bias, patient privacy, and the impact of AI on patient autonomy. By engaging with real-world case studies and utilizing tools like OpenAI’s ChatGPT and the Moodle e-learning platform, students will develop the critical thinking skills needed to assess AI-driven interventions in healthcare. Additionally, the course will address the broader societal implications of AI in healthcare, including its potential to exacerbate existing disparities and the ethical challenges it poses on a global scale. Students will also be introduced to the legal and regulatory frameworks that govern AI, preparing them to contribute to the responsible development and deployment of these technologies. As AI continues to reshape the healthcare landscape, these students will be prepared to play a crucial role in shaping the ethical standards of AI in medicine, ensuring that these powerful tools are used responsibly and for the benefit of all patients.

References

[1] S. Montani, M. Striani, Artificial intelligence in clinical decision support: a focused literature survey, Yearbook of Medical Informatics 28 (2019) 120–127. doi:10.1055/s-0039-1677911.
[2] O. Zawacki-Richter, V. I. Marín, M. Bond, F. Gouverneur, Systematic review of research on artificial intelligence applications in higher education–where are the educators?, International Journal of Educational Technology in Higher Education 16 (2019) 1–27.
[3] S.
Grassini, Shaping the future of education: Exploring the potential and consequences of AI and ChatGPT in educational settings, Education Sciences 13 (2023). URL: https://www.mdpi.com/2227-7102/13/7/692. doi:10.3390/educsci13070692.
[4] S. Tirocchi, Digital education, DigitCult - Scientific Journal on Digital Cultures 8 (2024) 75–89. URL: https://digitcult.lim.di.unimi.it/index.php/dc/article/view/254. doi:10.36158/97888929589205.
[5] B. Li, V. L. Lowell, C. Wang, X. Li, A systematic review of the first year of publications on ChatGPT and language education: Examining research on ChatGPT’s use in language learning and teaching, Computers and Education: Artificial Intelligence 7 (2024) 100266. URL: https://www.sciencedirect.com/science/article/pii/S2666920X24000699. doi:10.1016/j.caeai.2024.100266.
[6] OpenAI, GPT-4 technical report, CoRR abs/2303.08774 (2024). URL: https://arxiv.org/abs/2303.08774. doi:10.48550/arXiv.2303.08774. arXiv:2303.08774.
[7] Large language models in medicine: The potentials and pitfalls, Annals of Internal Medicine 177 (2024) 210–220. doi:10.7326/M23-2772.
[8] World Health Organization, Ethics and governance of artificial intelligence for health: WHO guidance, World Health Organization (2021).
[9] Executive Board, 148. Social determinants of health: report by the Director-General, World Health Organization, 2021. URL: https://iris.who.int/handle/10665/359797.
[10] American Medical Association, Principles for augmented intelligence development, deployment, and use, 2023. Accessed at www.ama-assn.org/system/files/ama-ai-principles.pdf.
[11] K. Murphy, E. Di Ruggiero, R. Upshur, D. J. Willison, N. Malhotra, J. C. Cai, N. Malhotra, V. Lui, J. Gibson, Artificial intelligence for good health: a scoping review of the ethics literature, BMC Medical Ethics 22 (2021) 1–17.
[12] M. P. Cary Jr., J. C. D. Gagne, E. D. Kauschinger, B. M.
Carter, Advancing health equity through artificial intelligence: An educational framework for preparing nurses in clinical practice and research, Creative Nursing 30 (2024) 154–164. doi:10.1177/10784535241249193.
[13] Q. Cai, Y. Lin, Z. Yu, Factors influencing learner attitudes towards ChatGPT-assisted language learning in higher education, International Journal of Human–Computer Interaction (2023) 1–15. doi:10.1080/10447318.2023.2261725.
[14] G. Katznelson, S. Gerke, The need for health AI ethics in medical school education, Advances in Health Sciences Education 26 (2021) 1447–1458. doi:10.1007/s10459-021-10040-3.
[15] D. Morana, T. Balduzzi, F. Morganti, et al., La salute “intelligente”: ehealth, consenso informato e principio di non-discriminazione, Federalismi.it 2022 (2022) 127–151.
[16] M. Granillo, La sostenibilità giuridica dell’utilizzo degli algoritmi nei processi decisionali in ambito sanitario: il bilanciamento fra i benefici offerti dall’utilizzo delle nuove tecnologie e la regolamentazione in materia di trattamento dei dati personali, IUS et SALUS (27 August 2021).
[17] C. De Menech, Intelligenza artificiale e autodeterminazione in materia sanitaria, BioLaw Journal - Rivista di BioDiritto (2022) 181–203.
[18] L. Scaffardi, La medicina alla prova dell’intelligenza artificiale, DPCE Online 51 (2022).
[19] M. O. Kim, E. Coiera, F. Magrabi, Problems with health information technology and their effects on care delivery and patient outcomes: a systematic review, Journal of the American Medical Informatics Association 24 (2017) 246–250. doi:10.1093/jamia/ocw154.
[20] F. Pasquale, The black box society: The secret algorithms that control money and information, Harvard University Press, 2015.
[21] J. Burrell, How the machine ‘thinks’: Understanding opacity in machine learning algorithms, Big Data & Society (2016). doi:10.1177/2053951715622512.
[22] Y. Della Croce, O.
Nicole-Berva, Duties of healthcare institutions and climate justice, Journal of Medical Ethics (2024). doi:10.1136/jme-2024-109879.
[23] H. Li, L. Yu, W. He, The impact of GDPR on global technology development, Journal of Global Information Technology Management 22 (2019) 1–6. doi:10.1080/1097198X.2019.1569186.
[24] Data Governance Act (DGA): Regulation (EU) 2022/868; Data Act: Regulation (EU) 2023/2854, 2023.
[25] L. Floridi, Establishing the rules for building trustworthy AI, Ethics, Governance, and Policies in Artificial Intelligence (2021) 41–45. doi:10.1007/978-3-030-81907-1_4.
[26] D. Horgan, M. Hajduch, M. Vrana, J. Soderberg, N. Hughes, M. I. Omar, J. A. Lal, M. Kozaric, F. Cascini, V. Thaler, et al., European health data space—an opportunity now to grasp the future of data-driven healthcare, in: Healthcare, volume 10, MDPI, 2022, p. 1629.
[27] UN Secretary-General, Question of the realization of economic, social and cultural rights in all countries: the role of new technologies for the realization of economic, social and cultural rights: report of the Secretary-General (2020). URL: https://digitallibrary.un.org/record/3870748?ln=en.
[28] A. A. Reis, R. Malpani, E. Vayena, P. Majumder, S. Swaminathan, S. Pujari, J. Reeder, B. Mariano, N. Al Shorbachi, A. Ema, et al., Ethics and governance of artificial intelligence for health: WHO guidance (2021). URL: https://www.who.int/publications/i/item/9789240029200.
[29] C. Cath, Governing artificial intelligence: ethical, legal and technical opportunities and challenges, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 376 (2018) 20180080.
[30] J. Nabi, How bioethics can shape artificial intelligence and machine learning, Hastings Center Report 48 (2018) 10–13. doi:10.1002/hast.895.
[31] European Commission, Together for Health: A Strategic Approach for the EU 2008-2013: White Paper, OOPEC, 2007.
[32] M. Cavallone, R.
Palumbo, Debunking the myth of industry 4.0 in health care: insights from a systematic literature review, The TQM Journal 32 (2020) 849–868. doi:10.1108/TQM-10-2019-0245.
[33] D. Lupton, The digitally engaged patient: Self-monitoring and self-care in the digital health era, Social Theory & Health 11 (2013) 256–270. doi:10.1057/sth.2013.10.
[34] D. T. Eton, T. J. Beebe, P. T. Hagen, M. Y. Halyard, V. M. Montori, J. M. Naessens, J. A. Sloan, C. A. Thompson, D. L. Wood, Harmonizing and consolidating the measurement of patient-reported information at health care institutions: a position statement of the Mayo Clinic, Patient Related Outcome Measures (2014) 7–15. doi:10.2147/PROM.S55069.
[35] J. Gao, C. Fan, B. Chen, Z. Fan, L. Li, L. Wang, Q. Ma, X. He, Y. Zhai, J. Zhao, Telemedicine is becoming an increasingly popular way to resolve the unequal distribution of healthcare resources: evidence from China, Frontiers in Public Health 10 (2022) 916303. doi:10.3389/fpubh.2022.916303.
[36] J. Cheney-Lippold, We are data: Algorithms and the making of our digital selves, New York University Press, 2017. doi:10.18574/nyu/9781479888702.001.0001.
[37] A. W. Frank, The wounded storyteller: Body, illness & ethics, University of Chicago Press, 2013.
[38] P. Donati, Discovering the relational goods: their nature, genesis and effects, International Review of Sociology 29 (2019) 238–259. doi:10.1080/03906701.2019.1619952.
[39] S. Zuboff, The age of surveillance capitalism, in: Social Theory Re-Wired, Routledge, 2023, pp. 203–213.
[40] B. Voigt, Docteur Amazon, Bulletin des médecins suisses 103 (2022) 12–15. doi:10.4414/bms.2022.21261.
[41] T. Krahn, R. Kuo, M. Chang, Personalized study guide: A Moodle plug-in generating personal learning path for students, in: C. Frasson, P. Mylonas, C. Troussas (Eds.), Augmented Intelligence and Intelligent Tutoring Systems, Springer Nature Switzerland, Cham, 2023, pp. 333–341. doi:10.1007/978-3-031-32883-1_30.
[42] J. A. Itmazi, M. G. Megías, P. Paderewski, F. L. G. Vela, A comparison and evaluation of open source learning management systems, in: IADIS AC, 2005. URL: https://api.semanticscholar.org/CorpusID:7722240.
[43] C. Costa, H. Alvelos, L. Teixeira, The use of Moodle e-learning platform: A study in a Portuguese university, Procedia Technology 5 (2012) 334–343. URL: https://www.sciencedirect.com/science/article/pii/S2212017312004689. doi:10.1016/j.protcy.2012.09.037. 4th Conference of ENTERprise Information Systems – aligning technology, organizations and people (CENTERIS 2012).
[44] H. T. S. Alrikabi, N. A. Jasim, B. H. Majeed, A. Z. Abass, I. R. N. ALRubee, Smart learning based on Moodle e-learning platform and digital skills for university students, Int. J. Recent Contributions Eng. Sci. IT 10 (2022). URL: https://online-journals.org/index.php/i-jes/article/view/28995.