Ontology-Based Requirements Engineering: The Case of Ethicality Requirements

Renata Guizzardi and Giancarlo Guizzardi

University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands

Abstract
In this paper, we summarize the content of our keynote speech at iStar’24, in which we discussed an ontology-based requirements engineering method to elicit and analyze ethicality requirements for the development of trustworthy AI systems.

Keywords
Ethical AI, Trustworthy AI, Requirements Engineering Method


Concerned by the growing impact of information systems on people’s lives, and especially motivated by recent AI developments, ethicists and AI researchers have been studying the interplay of ethics and AI systems [1, 2]. Moreover, governments and private organizations have been engaged in producing regulations and guidelines for the development of trustworthy systems [3, 4]. Although we agree that the theoretical debate, along with regulations and guidelines, is important, we believe it is essential to embed ethics into systems engineering practices. Because it is concerned with stakeholders’ needs and wants, Requirements Engineering has a fundamental role to play in the development of ethical systems. If we provide the means for requirements analysts to capture and analyze ethicality requirements, we contribute to making ethics a core concern from the start of the system development lifecycle. Moreover, ethicality requirements may then be monitored and assessed not only while the system is under development, but also after it is deployed.
This extended abstract summarizes the content of our keynote speech at iStar 2024, where we presented an ontology-based requirements engineering method [5, 6]. The proposed method, known as Ontology-Based Requirements Engineering (OBRE), started with an ontological analysis of ethicality requirements as non-functional requirements. As a result, we created an ethicality requirements ontology. We then instantiated this ontology, identifying guidelines for the elicitation of ethicality requirements. With the help of these guidelines, the requirements analyst may use an existing Requirements Engineering approach of their choice (e.g., requirements tables, i*, user stories) to specify and analyze ethicality requirements.
The definition of ethicality requirements is based on the ontological analysis of four principles conceived as part of an ethical framework to guide the development and adoption of AI systems [7]: Beneficence, Nonmaleficence, Autonomy and Explicability. As a result of our ontological analysis, these principles have been unpacked into more concrete concepts that are easier to grasp, thus supporting requirements elicitation and analysis. To make our analysis clear, we describe below how we define each principle, using a driverless car example to illustrate the resulting types of requirements.
Beneficence and Nonmaleficence are analyzed together. Beneficence is roughly understood as ‘do good’, while Nonmaleficence means ‘do no harm’ [7]. With the help of the Common Ontology of Value and Risk [8], we used the concepts of “value” and “risk” to analyze these
respective principles. Beneficence requirements are those that allow the system to create gain events, i.e., events that positively impact the stakeholder’s intentions. Conversely, nonmaleficence requirements are those that lead the system to prevent loss events, i.e., events that negatively impact the stakeholder’s intentions. For instance, for a driverless car, “the car shall choose the quickest route to the destination” and “the car shall stop before a crosswalk whenever a pedestrian is waiting to cross” are examples of beneficence requirements, while “the car shall keep a sufficient distance while overtaking another car” and “the car shall adopt a defensive driving behavior” are examples of nonmaleficence requirements.
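To make this distinction concrete, the following minimal sketch (in Python) records requirements together with the kind of event they target and derives the corresponding principle. All class and field names are our own illustrative assumptions, not terms of the OBRE ontology or of [8].

    from dataclasses import dataclass
    from enum import Enum

    class ImpactKind(Enum):
        GAIN = "creates a gain event"   # positively impacts stakeholder intentions
        LOSS = "prevents a loss event"  # blocks events with negative impact

    @dataclass
    class EthicalityRequirement:
        text: str
        impact: ImpactKind

        @property
        def principle(self) -> str:
            # Beneficence requirements create gain events;
            # nonmaleficence requirements prevent loss events.
            return "Beneficence" if self.impact is ImpactKind.GAIN else "Nonmaleficence"

    requirements = [
        EthicalityRequirement(
            "The car shall choose the quickest route to the destination.",
            ImpactKind.GAIN),
        EthicalityRequirement(
            "The car shall keep a sufficient distance while overtaking another car.",
            ImpactKind.LOSS),
    ]
    for r in requirements:
        print(f"{r.principle}: {r.text}")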
Autonomy means striking a balance between the decision-making power retained by the stakeholder and that which is delegated to the system [7]. To understand this kind of requirement, we need to focus on the concept of delegation. The stakeholder delegates decisions to the system and, as part of this delegation, social positions are created to regulate the content of this relationship [9]. For example, autonomy requirements may define duties, permissions and powers of the system towards the stakeholders. For a driverless car, “the car has the duty to follow traffic laws” and “the car does not have permission to change destination without the passenger’s explicit request” are examples of autonomy requirements.
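One possible way to record autonomy requirements is as delegations that create social positions (duties, permissions, powers) of the system towards the stakeholder, as in the sketch below. The data structures are illustrative assumptions and do not reproduce the ontology of [9].

    from dataclasses import dataclass
    from enum import Enum

    class SocialPosition(Enum):
        DUTY = "duty"
        PERMISSION = "permission"
        POWER = "power"

    @dataclass
    class AutonomyRequirement:
        position: SocialPosition
        granted: bool      # False models a withheld permission (a prohibition)
        holder: str        # the party holding the position (here, the system)
        counterparty: str  # the party towards whom it is held (the stakeholder)
        content: str

    reqs = [
        AutonomyRequirement(SocialPosition.DUTY, True, "car", "passenger",
                            "follow traffic laws"),
        AutonomyRequirement(SocialPosition.PERMISSION, False, "car", "passenger",
                            "change destination without the passenger's explicit request"),
    ]
    for r in reqs:
        verb = "has" if r.granted else "does not have"
        print(f"The {r.holder} {verb} the {r.position.value}, towards the "
              f"{r.counterparty}, to {r.content}.")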
Explicability is understood as making the decision-making process transparent, intelligible and accountable [7]. Explicability requirements aim to keep track of the system’s decision-making process. According to the Decision-Making Ontology [10], for each decision, the system conducts valuations of different options, and such valuations are based on different criteria. An explicability requirement then makes explicit which options were available, which option was chosen, and which criteria were applied in this choice. Requirements such as “the car shall explain why it decides (not) to overtake other vehicles” and “the car shall explain the reasons why a particular route is chosen” are examples of explicability requirements.
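The sketch below illustrates one possible shape for such an explicability record: a decision lists its options, the criteria used to valuate them, and the chosen option. The scoring scheme and all names are assumptions for illustration only, not the Decision-Making Ontology of [10].

    from dataclasses import dataclass

    @dataclass
    class DecisionRecord:
        question: str
        criteria: list[str]
        valuations: dict[str, list[float]]  # option -> one score per criterion

        def explain(self) -> str:
            # Aggregate the valuations (here, a plain sum) and report the
            # options, criteria and chosen option, as an explicability
            # requirement demands.
            totals = {opt: sum(scores) for opt, scores in self.valuations.items()}
            chosen = max(totals, key=totals.get)
            return (f"Decision: {self.question}\n"
                    f"Options considered: {', '.join(self.valuations)}\n"
                    f"Criteria applied: {', '.join(self.criteria)}\n"
                    f"Chosen option: {chosen} (highest overall valuation)")

    record = DecisionRecord(
        question="Which route to the destination?",
        criteria=["travel time", "safety"],
        valuations={"route A": [0.9, 0.6], "route B": [0.5, 0.8]},
    )
    print(record.explain())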
Focusing on ethics from the Requirements Engineering activity onwards is paramount to guarantee the development of trustworthy systems. Our work is a first attempt in this direction. We hope to evaluate it in the future by applying it to real cases, and to improve it based on this practical experience. We also intend to complete the ontological analysis of the ethical dimensions proposed in [7] by tackling the notion of Justice.

References
[1] L. Floridi. The Ethics of Artificial Intelligence: Principles, challenges, and opportunities.
     Oxford University Press (2023).
[2] Handbook of Research on Technoethics. IGI Global (2009).
[3] EU Artificial Intelligence Act, accessed 27-01-2025 at https://artificialintelligenceact.eu/
[4] 7000-2021 - IEEE Standard Model Process for Addressing Ethical Concerns during System
     Design, accessed 27-01-2025 at https://ieeexplore.ieee.org/document/9536679
[5] R. Guizzardi, G. Amaral, G. Guizzardi and J. Mylopoulos. An ontology-based approach to
     engineer ethicality requirements. Softw. Syst. Model. 22 (2023): 1897-1923.
[6] R. Guizzardi, G. Amaral, G. Guizzardi and J. Mylopoulos. Using i* to Analyze Ethicality
     Requirements. In X. Franch, J.C. Leite, G. Mussbacher, J. Mylopoulos and A. Perini (Eds.) Social
     Modeling Using the i* Framework: Essays in Honour of Eric Yu. Springer, (2024): 183-204.
[7] L. Floridi, et al. AI4People - An ethical framework for a good AI society: Opportunities, risks,
     principles and recommendations. Minds & Machines 28 (2018): 689-707.
[8] T. P. Sales et al. The Common Ontology of Value and Risk. In: Proc. of the 37th International
     Conference on Conceptual Modeling (ER). Springer, LNCS v. 11157 (2018): 121-135.
[9] C. Griffo, J.P.A. Almeida, G. Guizzardi and J.C. Nardi. Service contract modeling in Enterprise
     Architecture: An ontology-based approach. Information Systems 101 (2021).
[10] R. Guizzardi, B. Carneiro, D. Porello and G. Guizzardi. A core ontology on decision making.
     In: Proc. of the 13th Seminar on Ontology Research in Brazil, CEUR v. 2728 (2020): 9-21.