<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Managing Trade-offs in the Nested Iterative Cycles of Responsible AI</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Rohith</forename><surname>Sothilingam</surname></persName>
							<email>rsothilingam@mail.utoronto.ca</email>
							<affiliation key="aff0">
								<orgName type="department">Faculty of Information</orgName>
								<orgName type="institution">University of Toronto</orgName>
								<address>
									<addrLine>140 St George St</addrLine>
									<postCode>M5S 3G6</postCode>
									<settlement>Toronto</settlement>
									<region>ON</region>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Vik</forename><surname>Pant</surname></persName>
							<email>vik.pant@mail.utoronto.ca</email>
							<affiliation key="aff0">
								<orgName type="department">Faculty of Information</orgName>
								<orgName type="institution">University of Toronto</orgName>
								<address>
									<addrLine>140 St George St</addrLine>
									<postCode>M5S 3G6</postCode>
									<settlement>Toronto</settlement>
									<region>ON</region>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Eric</forename><surname>Yu</surname></persName>
							<email>eric.yu@utoronto.ca</email>
							<affiliation key="aff0">
								<orgName type="department">Faculty of Information</orgName>
								<orgName type="institution">University of Toronto</orgName>
								<address>
									<addrLine>140 St George St</addrLine>
									<postCode>M5S 3G6</postCode>
									<settlement>Toronto</settlement>
									<region>ON</region>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Managing Trade-offs in the Nested Iterative Cycles of Responsible AI</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">143D67F0F0051D4A25917E9666B2C4F4</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:56+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Goal-Oriented Modeling</term>
					<term>Machine Learning</term>
					<term>Responsible AI</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This paper addresses the challenge of managing decisions in machine learning (ML) development, where choices in one iterative cycle affect subsequent cycles, each with varying evaluation results. The research objective is to evaluate how well our proposed modeling constructs-Sensors, Actuators, and Iterative Loops-enhance existing goal-oriented conceptual modeling to better analyze decisions in Responsible AI, particularly within nested iterative cycles. We evaluate the efficacy of our proposed goal modeling constructs in analyzing trade-offs among business, technical, and Responsible AI goals using these constructs. Our findings suggest that these constructs improve upon current goal modeling methods, offering more effective decision-making support for Responsible AI outcomes.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Bias and other social responsibility challenges in AI arise from both the underlying Machine Learning (ML) model and the context in which it is used. AI systems, due to inherent model biases, can propagate these issues at scale, affecting numerous user applications <ref type="bibr" target="#b6">[6]</ref> <ref type="bibr" target="#b15">[15]</ref> [23] <ref type="bibr" target="#b24">[24]</ref>. As AI systems are increasingly deployed for critical tasks, concerns about safety and security also escalate.</p><p>ML system design involves multiple stages and iterative cycles, each with multiple decision points, including data gathering, feature engineering, ML model training, deployment, and user output. Responsible AI is an approach to ML-based system development that integrates fairness, transparency, and ethical considerations at each stage, ensuring that decisions are evaluated not only for technical effectiveness but also for their societal and ethical impact.</p><p>Supporting decision-making in iterative ML and Responsible AI cycles is crucial for refining models and improving accuracy by quickly identifying and addressing issues like overfitting or data drift. It ensures that each iteration adds value, ultimately leading to more reliable and robust outcomes. Goal-oriented conceptual modeling is a well-established technique to support systematic decision-making processes <ref type="bibr" target="#b1">[2]</ref> <ref type="bibr">[7] [8]</ref>. This approach argues that the rationale for system development lies outside the system itself, in the enterprise context. It enables modelers to evaluate goal satisfaction, compare design alternatives, inform requirements, validate design reasoning, and facilitate communication. Through goal refinement, business and Responsible AI goals are broken down into sub-goals and alternative tasks that can achieve those goals. Quality objectives are treated as softgoals.</p><p>As goals are operationalized in terms of tasks, current goal modeling approaches do not consider how each task alternative may contribute differently to various softgoals, across iterative cycles. To deal with this problem, this paper proposes three new goal modeling constructs and examines how they aid in analyzing tradeoffs and conflicts in Responsible AI that are distributed throughout the ML lifecycle. Design decisions at each stage interact and contribute to goals at different stages, with issues like concept drift causing ML processes to evolve over time. Recognizing where tradeoffs occur is crucial, as focusing solely on technical decision points can lead to oversimplified solutions that do not meet other objectives. We explore how tradeoffs at the operationalization level can eventually impact those at the business level, emphasizing the importance of modeling processes and decisions. To illustrate relevant aspects of Responsible AI and their tradeoffs, we draw upon the literature <ref type="bibr">[4] [22]</ref>. We use these sources to demonstrate the challenges of dealing with iterative cycles in ML model development and how our proposed Goal Modeling approach can help. We focus on Explainability, Fairness, Privacy, and Accuracy to analyze and demonstrate conflicts at different ML process stages. Our main contribution is enhancing current goal modeling approaches to improve decision-making in Responsible AI design by addressing conflicts between goals in iterative ML cycles.</p><p>In Section 2, we first consider tradeoffs between business goals and technical ML goals. Then in Section 3, we introduce the proposed goal modeling notation. In Section 4, we apply the modeling notation to deal with tradeoffs between Responsible AI and technical ML goals in various stages of ML development. We discuss related work in Section 5 and conclude with future work in Section 6.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Analyzing Tradeoffs in ML Development</head><p>In the design of ML-based systems, technical ML objectives can often conflict with business objectives due to competing priorities, leading to tradeoffs <ref type="bibr" target="#b27">[27]</ref>. For instance, in the development of a customer feedback system, a technical ML objective might be to prioritize the accuracy of sentiment analysis, which could be achieved by using a Support Vector Machine (SVM) algorithm that maximizes the margin between positive and negative feedback <ref type="bibr" target="#b28">[28]</ref>. However, this could conflict with the business objective of improving the ease for customers to provide feedback, as the SVM algorithm might require a large amount of labeled training data to achieve high accuracy, which could be time-consuming and costly to obtain. Additionally, the SVM algorithm might be sensitive to noise and outliers in the data, which could lead to a poorer user experience if customers are required to provide precise and detailed feedback to be understood. In this case, the team may need to make tradeoffs, such as using a simpler ML algorithm that balances accuracy with ease of use, or implementing additional features that help customers provide more effective feedback, such as natural language processing or sentiment analysis tools. This tradeoff would allow the team to meet the business objective of improving the customer experience while still achieving a reasonable level of performance in the ML model. While one can describe tensions and conflicts between various aspects of ML using narrative text, goal modeling supports decision-making, helping to solve problems systematically through incremental steps. For instance, there are known conflicts between explainability and accuracy <ref type="bibr" target="#b22">[22]</ref>. Specific techniques, such as ad-hoc methods for explainability, can impede accuracy. But why? 
By utilizing goal modeling, we can see that specific techniques contribute to one or more softgoals, elucidating why the conflict occurs and at which point in the ML process.</p><p>The Goal model in Figure <ref type="figure" target="#fig_0">1</ref> demonstrates how we analyze the above example of conflicting priorities between technical ML and business objectives. The Goal "Customers are satisfied" is refined into two goals: "Ongoing customer satisfaction" (a business goal) and "Customer sentiment be predicted" (an ML goal).</p><p>Thus far, these goals have been business goals. To achieve the latter goal, ML goals are now required: "ML model be deployed" and "ML model be trained to predict customer sentiment". We can see how these goals are refined in Figure <ref type="figure" target="#fig_0">1</ref>. As we conduct goal and task refinement on the ML goal, we analyze options for ML model techniques (SVM, Naive Bayes, and Decision Trees).</p><p>As we attach tasks to achieve the refined goals in Figure <ref type="figure" target="#fig_0">1</ref>, the modeling techniques of SVM, Naive Bayes, and Decision Trees are represented as alternative Tasks. The tradeoff explained above is illustrated in this goal model using the positive and negative softgoal contributions among these Tasks. By conducting goal refinement, then identifying and analyzing conflicting contributions of task alternatives to softgoals, we can identify tradeoffs that arise in design decisions when choosing techniques and prioritizing between specific softgoals (e.g. interpretability of the model and improving the ease for customers to provide feedback).</p><p>Though the example of goal modeling presented in this section is useful and allows us to identify simple tradeoffs, it does not allow us to identify tradeoffs that occur at different stages of the ML lifecycle. Specifically, the tradeoffs and decisions in this example occur at the same stage of model training. 
In Figure <ref type="figure" target="#fig_0">1</ref>, this can be seen as the tradeoff occurs at the same task refinement level, as we refine the goal of "ML model be trained to predict customer sentiment". This can lead to wrong or poor decisions because, through the goal modeling, we cannot trace the positive and negative effects that the softgoal contributions have on other stages in the ML lifecycle, such as data preparation, feature engineering, or business decisions such as cost effectiveness.</p><p>Decisions are made at different iterative cycles in the ML lifecycle. For example, in Figure <ref type="figure" target="#fig_0">1</ref>, the primary goal of "ML model be trained to predict customer sentiment" would involve iterations where model training is continuously done until the stopping criterion is met. To achieve the success of "ongoing customer satisfaction", this goal will involve iterations of continuous monitoring. When we drill down and expand into these goals, further tradeoffs appear with respect to how computational, business, and Responsible AI goals must be simultaneously achieved. Traditional goal modeling does not allow us to analyze the distribution of goals and tradeoffs at different iterative cycles, as well as how they interact with each other. In the remainder of this paper, we take the goal modeling a step further by analyzing conflicts between technical ML and Responsible AI goals at decision points distributed through various stages of the ML lifecycle.</p></div>
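The qualitative tradeoff analysis above can be made concrete with a small computational sketch. The following Python fragment is our illustrative assumption rather than tooling from the paper: it scores the alternative Tasks (SVM, Naive Bayes, Decision Trees) against softgoals using help/hurt contributions and stakeholder priorities, mirroring how a modeler might compare alternatives in Figure 1. The specific contribution values and priority weights are hypothetical.

```python
# Hypothetical sketch: scoring alternative ML-technique Tasks against
# softgoals via qualitative help/hurt contributions (values assumed).
CONTRIB = {"help": 1, "hurt": -1}

# Softgoal contributions of each alternative Task, as in Figure 1.
alternatives = {
    "SVM":            {"accuracy": "help", "ease_of_feedback": "hurt"},
    "Naive Bayes":    {"accuracy": "hurt", "ease_of_feedback": "help"},
    "Decision Trees": {"accuracy": "help", "interpretability": "help"},
}

def score(task_contribs, priorities):
    """Weighted sum of qualitative contributions under given priorities."""
    return sum(priorities.get(sg, 1) * CONTRIB[c]
               for sg, c in task_contribs.items())

# Prioritizing ease of feedback (a business softgoal) over raw accuracy.
priorities = {"ease_of_feedback": 2, "accuracy": 1}
best = max(alternatives, key=lambda t: score(alternatives[t], priorities))
```

Under these assumed priorities the SVM alternative is penalized for hurting the business softgoal, which is exactly the prioritization decision the goal model makes visible.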
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Introducing the Proposed Goal Modeling Approach for Responsible AI</head><p>We propose three modeling constructs to help us improve our design of ML processes concerning appropriate decisions at each iterative cycle: Sensors, Actuators, and Iterative Loops (Figure <ref type="figure" target="#fig_1">2</ref>). A metamodel is shown in Figure <ref type="figure" target="#fig_2">3</ref>. In each iteration, based on the most recent information, decisions are made and actions taken, to incrementally get closer to meeting the objectives. Since there are multiple decisions aiming to meet multiple interacting objectives, it is important to position the decision points and their associated information collection and actions appropriately in the nested iterative structures. Together, the three modeling constructs aim to facilitate systematic reasoning that ensures that decisions at each iteration are purposeful and aligned with the overarching goals of ML model development.</p><p>The concepts of goals and tasks are drawn from i* [8] <ref type="bibr" target="#b33">[33]</ref>. Sensors and Actuators are used in an abstract conceptual sense and do not refer to physical devices. Sensors collect information from the environment. Information can be collected through tasks, or when pursuing goals. Sensor variables are used as input for decisions. An Indicator <ref type="bibr" target="#b12">[12]</ref> is one type of Sensor. Indicators are associated with goals so as to indicate how well the goals are achieved.</p><p>Actuators are used to manipulate the environment through Tasks. Actuator variables are settings for parameters in tasks. They are outputs of decisions, and can be thought of as levers or knobs for adjusting values. A Task may manipulate multiple Actuators.</p><p>Iterative Loops are associated with goals. They repeat until a condition, the stopping criterion, is reached. 
When a goal that has an Iterative Loop is refined into a subgoal that also has an iterative loop, the latter loop is said to be nested inside the former loop. The latter loop is the inner loop and the former the outer loop. The inner loop is iterated multiple times for each iteration of the outer loop. Consider the following example. In the initial goal model below (Figure <ref type="figure" target="#fig_3">4</ref>), we provide a detailed illustration of the various conflicts that can arise between different aspects of Responsible AI, specifically focusing on privacy and fairness at a high level. Below, we break down the initial set of useful features in this goal model.</p><p>In Figure <ref type="figure" target="#fig_3">4</ref>, Fairness and Privacy are conveyed as separate goals, each with its own set of alternative tasks that can achieve the respective goal. For each of the goals of Fairness and Privacy, there is an Indicator which is used as a gauge to determine the success of the goal. To achieve the "Privacy Methods be Set" goal, the Indicator "Composite Privacy Score" is calculated to meet its desired threshold. To gauge whether this Indicator can be met, each of these alternative techniques has a Sensor which senses a specific value to determine if the Indicator threshold has been met. For example, if the task T-closeness is chosen, the Sensor of "T-closeness" collects the T-closeness data value, which is used to gauge the success of the Indicator "Composite Privacy Score". Task refinement allows for representing conditional softgoal contributions to Responsible AI goals based on the choice of alternative techniques. For instance, K-value (a technique to achieve privacy) can negatively impact the softgoal "Balanced Accuracy."</p></div>
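To make the interplay of Sensors, Indicators, and Iterative Loops concrete, the following Python sketch iterates the goal "Privacy Methods be Set": each iteration reads a sensed t-closeness value, computes the "Composite Privacy Score" Indicator, and stops once the Indicator meets its threshold. The scoring formula and the 0.8 threshold are our hypothetical assumptions; the paper does not prescribe how the composite score is computed.

```python
# Illustrative sketch of the Sensor / Indicator / Iterative Loop constructs
# (the score formula and threshold are assumptions, not from the paper).

def composite_privacy_score(t_closeness_value):
    """Indicator: map the sensed t-closeness distance to a 0..1 score.
    A lower t-closeness distance means better privacy, so invert and clamp."""
    return max(0.0, 1.0 - t_closeness_value)

def iterate_privacy_goal(sensed_values, threshold=0.8):
    """Iterative Loop for the goal 'Privacy Methods be Set': repeat until
    the Indicator reaches its stopping criterion (the threshold)."""
    history = []
    for t in sensed_values:        # one Sensor reading per iteration
        score = composite_privacy_score(t)
        history.append(score)
        if score >= threshold:     # stopping criterion reached
            return score, history
        # An Actuator would tighten the privacy parameter here; in this
        # sketch the next sensed value simply reflects that adjustment.
    return history[-1], history
```

Each pass through the loop corresponds to one iteration of the inner loop for this goal; the final Indicator value is what the outer loop would consume.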
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Responsible AI Decisions and Tradeoffs along different Iterative</head><p>Cycles in the ML Process</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">General Goal Model of the ML Lifecycle</head><p>Let us consider the following challenge of Principal Component Analysis (PCA), a technique option that can help with feature dimensionality reduction but affects fairness. Traditional PCA is not designed with fairness in mind and may perpetuate biases, leading to unequal reconstruction errors across different demographic groups, resulting in potentially harmful and unfair outcomes <ref type="bibr" target="#b20">[20]</ref>.</p><p>Let us consider the goals involved. In Figure <ref type="figure" target="#fig_4">5</ref>, we present the following parent goals: "Model algorithms be set" and "Features be transformed". Together, these goals would eventually support producing the prediction. Upon identifying the parent goals, we refine them into further subgoals until we can identify alternative techniques (tasks) for accomplishing those goals. When refining the goals, we ensure that the topic is consistent. We refine the goal "Features be transformed" into the following sub-goals: "Features be normalized", "Features be encoded", and "Feature Dimensionality Reduction". We do not yet refine the goal "Model algorithm(s) be set" because we can identify alternative techniques for this goal directly.</p><p>Next, we identify the alternative tasks that can accomplish each of the sub-goals. Upon identifying these tasks, we attach softgoal contributions (help and hurt) to each softgoal. By identifying the softgoal contributions, we can visualize tradeoffs that arise as a result of choosing one alternative technique over another.</p><p>Finally, we attach the Actuators and Sensors to each Goal where they apply in Figure <ref type="figure" target="#fig_4">5</ref> to identify specifically where the tradeoff occurs and why. 
Toward the right of the model, we can see that the tradeoff between PCA and LDA can negatively affect the success of Balanced Accuracy, which in turn eventually affects Fairness, through softgoal contributions. This is helpful because it gives us a visual breakdown of why choosing PCA can eventually hurt fairness while providing technical benefits in feature generalization and noise reduction. However, how does this conflict then affect the larger ML development process? At what point in the iterative loops involved does this conflict occur, and how does it affect other stages?</p><p>The conflict occurs at the "feature transformation" stage, where the PCA technique can be chosen as a dimensionality reduction technique for achieving feature transformation. This technique then negatively affects the group fairness softgoal "mortality prediction output be fair across groups". The Sensors "Explained Variance Ratio" and "Discriminant Power" are used as inputs to determine when Dimensionality Reduction has successfully converged (the Iterative Loop stopping criterion for this goal). These Sensors serve as inputs for consideration to adjust the "number of components" Actuator for each of the PCA and LDA techniques.</p></div>
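The Sensor-to-Actuator feedback described above can be sketched as follows. This Python fragment is a hypothetical illustration (the per-component variance ratios and the 0.9 target are assumed values, not from the case study): the "Explained Variance Ratio" Sensor accumulates per-component ratios, and the "number of components" Actuator is raised on each inner-loop iteration until the convergence criterion is met. In practice the ratios would come from a fitted PCA, e.g. scikit-learn's explained_variance_ratio_ attribute.

```python
# Sketch of the inner-loop reasoning for the Dimensionality Reduction goal
# (ratios and target are illustrative assumptions).

def choose_n_components(explained_variance_ratios, target=0.95):
    """Actuator adjustment: raise the 'number of components' until the
    cumulative 'Explained Variance Ratio' Sensor meets the stopping
    criterion for the Dimensionality Reduction Iterative Loop."""
    cumulative, n = 0.0, 0
    for ratio in explained_variance_ratios:  # one inner-loop iteration each
        cumulative += ratio
        n += 1
        if cumulative >= target:             # loop has converged
            break
    return n, cumulative

# Assumed per-component variance ratios from a fitted PCA.
n, cum = choose_n_components([0.6, 0.25, 0.08, 0.04, 0.03], target=0.9)
```

A fairness-aware modeler would then check the chosen `n` against the group-fairness softgoal before committing to PCA over LDA, which is precisely where the tradeoff in Figure 5 surfaces.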
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Conflicting Responsible AI Goals in a Case Study</head><p>In this section, we introduce the context of a recent case study <ref type="bibr" target="#b3">[4]</ref> that we will draw upon to build on the previous model, toward a comprehensive goal model that captures conflicts between interpretability, explainability, accuracy, and fairness. In this case study, the authors conducted empirical research on conflicts arising between healthcare stakeholders due to ethical concerns with ML applications in healthcare. The authors map the relationships between stakeholders and potential "values-collisions," identifying several themes of conflict; for our purposes, we focus on a subset of these themes. The ML model evaluated in the case study aims to identify which individuals might benefit from advance care planning by addressing a proxy problem: predicting the probability of a given patient passing away within the next 12 months, to aid in palliative care consults. Based on the outcome of the mortality prediction, patients will have the option of being notified if they ought to be considered for advance care planning.</p><p>In the following subsections, for the purpose of our goal modeling in this paper, we will focus on the prediction of mortality rates, and build upon the initial goal model (Figure <ref type="figure" target="#fig_3">4</ref>). This model will use the case study <ref type="bibr" target="#b3">[4]</ref> to illustrate specific aspects of fairness, accuracy, privacy, and explainability appearing in various goals. Each of these Responsible AI goals has been further refined, and conflicts are observed at different goal refinement points, representing various stages of the broader ML process. 
This approach enables us to illustrate how different Responsible AI challenges are distributed and interact throughout the ML lifecycle, using an empirical case study.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.">Conflicts within Responsible AI: Interpretability, Explainability, and Fairness</head><p>Conflicts often arise not only between technical ML priorities and Responsible AI objectives but also among different aspects of Responsible AI itself <ref type="bibr" target="#b22">[22]</ref>. For instance, a model prioritizing fairness may compromise transparency, as fairness metrics might involve complex calculations that are challenging to interpret. Regarding explainability and interpretability, a model emphasizing explainability might sacrifice interpretability, as explanations could necessitate simplifications or approximations that obscure the original model's nuances.</p><p>In the preceding sections, we explored (1) conflicts between Responsible AI and technical ML objectives (e.g., feature generalization vs. fairness) and (2) conflicts between distinct Responsible AI goals (e.g., explainability vs. fairness). In Figure <ref type="figure" target="#fig_6">6</ref>, we illustrate the broader ML lifecycle as it relates to the case study. In this goal model, the nested iterative stages are represented in Loop 1.1 and Loop 1.2 nested within Outer Loop 1. This initial goal model provides us with a breakdown of where each functional goal and set of tasks exists with respect to the nested iterative loops they are a part of.</p><p>Building on this model, Figure <ref type="figure" target="#fig_6">6</ref> below introduces softgoal contributions and illustrates a goal model encompassing the broader context of technical ML, business, and Responsible AI goals throughout the ML lifecycle. This figure provides a comprehensive view, analyzing the interactions among three critical aspects of Responsible AI: interpretability, explainability, and fairness.</p><p>In this example, we identify two primary conflicts: (1) between fairness and interpretability and (2) between fairness and explainability. 
Regarding the first conflict, depicted on the left side of the goal model, fairness and interpretability are at odds because interpretability can enhance "Tolerance to outliers," which adversely affects the goal of "mortality prediction output being fair across groups," thereby undermining group fairness. Through softgoal refinement, it becomes evident that interpretability indirectly impacts fairness by enhancing tolerance to outliers. This figure allows us to visualize conflicts that can occur across different stages in the ML development process.</p><p>The second conflict, between fairness and explainability, arises because the Fairness softgoal of "mortality prediction output being fair across groups" may compromise decision trustworthiness and, consequently, the softgoal of "explainable prediction result." When considering these tradeoffs among different Responsible AI aspects represented as softgoals, the model designer must evaluate and prioritize these objectives accordingly. </p></div>
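As a minimal illustration of how the group-fairness softgoal above could be sensed, the following Python sketch computes a demographic-parity gap over per-group positive prediction rates for the mortality-prediction output. This metric choice and the sample data are our assumptions; the case study does not prescribe a specific fairness metric. A modeler could treat the resulting gap as a Sensor value feeding the fairness Indicator.

```python
# Hypothetical Sensor for the softgoal "mortality prediction output be
# fair across groups": demographic-parity gap between group positive rates.

def positive_rate(predictions):
    """Fraction of positive (1) predictions in a group."""
    return sum(predictions) / len(predictions)

def demographic_parity_gap(preds_by_group):
    """Largest difference in positive prediction rate between any two
    demographic groups; 0 means the output is perfectly balanced."""
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Assumed binary mortality predictions for two demographic groups.
groups = {
    "group_a": [1, 0, 1, 1, 0],   # 0.6 positive rate
    "group_b": [1, 0, 0, 0, 0],   # 0.2 positive rate
}
gap = demographic_parity_gap(groups)   # about 0.4
```

A gap this large would signal that the fairness Indicator is unmet, prompting the iterative loop to revisit upstream choices such as the dimensionality reduction technique.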
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Related Work</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1.">Checklists, Guidelines, and Principles</head><p>Principle-based approaches are utilized within specific guidelines and ethical frameworks for Responsible AI. These approaches are often prescriptive to specific contexts and issues, rather than being universally applicable to the broader spectrum of Responsible AI. Current methodologies are constrained to addressing a finite set of ethical concerns, such as explainability, fairness, privacy, and accountability. They lack inclusivity regarding the various sub-concepts, perspectives, and interpretations of Responsible AI. Translating a list of ethical objectives into actionable steps poses significant challenges, including determining the most appropriate metric or technique for each use case.</p><p>Principle-based approaches, standards, and guidelines (e.g., <ref type="bibr" target="#b13">[13]</ref>) are designed to be universal, aiming to apply to all projects. However, requirements are inherently project-specific. Often, principles may conflict with one another, and some may not be relevant or meaningful within the project's specific context. Through Goal Modeling, principles (represented as softgoals) can be refined according to specific Responsible AI contexts, rather than adhering to a finite set of principles applied uniformly across all contexts.</p><p>Goal modeling facilitates the clarification and operationalization of vague or ambiguous requirements through goal refinement. Our approach extends the advantages of goal modeling by offering a reasoned and systematic methodology for making design decisions at various stages of the ML lifecycle.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2.">Computational Techniques for Responsible AI</head><p>Initial research on fairness predominantly concentrated on formulating quantitative definitions of fairness (see, e.g., <ref type="bibr" target="#b9">[9]</ref>, <ref type="bibr" target="#b11">[11]</ref>, <ref type="bibr" target="#b31">[31]</ref>) and developing technical methods for 'debiasing' AI models in accordance with these mathematical formalizations (see, e.g., <ref type="bibr" target="#b0">[1]</ref>, <ref type="bibr" target="#b2">[3]</ref>, <ref type="bibr" target="#b34">[34]</ref>).</p><p>As the application of computational techniques proves valuable in addressing challenges within this domain, the notion of Responsible AI is increasingly recognized as contextual. This necessitates greater attention to the varying definitions and needs of Responsible AI, alongside the specific practices and requirements of practitioners. The inherent complexities and contextual nuances of fairness make it impractical to fully de-bias an AI system or guarantee its fairness <ref type="bibr" target="#b14">[14]</ref>, <ref type="bibr" target="#b21">[21]</ref>. The primary objective, therefore, is to mitigate fairness-related harms and other adverse outcomes to the greatest extent possible ( <ref type="bibr" target="#b16">[16]</ref>, <ref type="bibr" target="#b19">[19]</ref>).</p><p>It is crucial to approach ML as a holistic process, actively considering the diverse social perspectives, stakeholders, and interactions involved. For example, Srivastava et al. <ref type="bibr" target="#b30">[30]</ref> discovered that competing definitions of fairness often do not align with established mathematical definitions. Current computational techniques (e.g. <ref type="bibr" target="#b10">[10]</ref>) and tools (e.g. <ref type="bibr" target="#b9">[9]</ref>) provide conceptual frameworks that facilitate decision-support for data-driven applications. 
However, these tools lack critical reasoning capabilities, such as tradeoff mechanisms, goal refinement processes, and the operationalization of those goals.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.3.">Inadequacies of Current GORE Approaches</head><p>Kuwajima and Ishikawa <ref type="bibr" target="#b13">[13]</ref> proposed a goal-oriented conceptual modeling approach that adheres to the Ethics guidelines for trustworthy AI set forth by the European Commission. While this approach is methodical, it is constrained by its narrow focus on a singular dimension of Responsible AI. It does not encompass the diverse interpretations of Responsible AI, such as fairness, explainability, security, and privacy. Consequently, it is ill-equipped to address the conflicting goals and priorities that arise from these varied interpretations. In contrast, our proposed approach is designed to be versatile and adaptable, accommodating multiple lenses and perspectives to suit any specific context within Responsible AI cases.</p><p>GR4ML is another related framework that employs a goal-oriented approach to link analytics and business goals <ref type="bibr" target="#b18">[18]</ref>. However, GR4ML falls short in addressing the interrelationships and trade-offs between these goals, particularly within the scope of Responsible AI. To our knowledge, our approach represents the first goal-oriented conceptual modeling framework specifically tailored for Responsible AI.</p><p>Existing goal-oriented modeling languages exhibit limited capabilities in integrating Sensors, Actuators, and nested Iterative Loops. Although awareness requirements and adaptive systems in goal modeling address some aspects of sensing, they remain inadequate. For instance, <ref type="bibr" target="#b17">Morandini et al. (2008)</ref> present a goal-oriented approach for designing self-adaptive systems, emphasizing the engineering of self-adaptive software.</p><p>Awareness Requirements <ref type="bibr" target="#b25">[25]</ref> are defined as requirements that reference other requirements or domain assumptions, monitoring their success or failure at run-time. 
This type of reasoning facilitates adaptability by supporting the monitoring, diagnosis, planning, and execution of requirements. Our proposed Sensor modeling construct extends this concept by enabling inputs from the causal world to inform decisions based on sensed variable values, thereby facilitating interaction with the non-intentional world.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusions and Future Research</head><p>The design of Responsible AI solutions necessitates a systematic approach to accommodate the dynamically evolving decision points inherent in ML processes. This involves the alignment of both business and Responsible AI objectives, alongside the meticulous analysis of conflicts and trade-offs that emerge throughout the nested stages of the ML lifecycle. While contemporary goal modeling approaches offer potential value for designing Responsible AI solutions, they fall short in effectively supporting the analysis of nested iterative cycles in ML development specific to Responsible AI. This paper introduces three novel modeling constructs as part of a goal modeling methodology aimed at systematically designing Responsible AI solutions. Alongside the benefits of the proposed approach, we acknowledge its added complexity: modelers would have to weigh whether the problem context warrants the added expressiveness and analytical capabilities of the proposed constructs.</p><p>In future work, we will augment our proposed goal modeling framework by integrating Agent-Oriented (AO) modeling. Specifically, we will explore how conflicting stakeholder goals might impact the modeling process or the resulting AI solutions. The various stages of the ML life-cycle often involve distinct individuals, where conflicts arising at each stage can be more localized than the current goal models suggest, requiring acknowledgment of the interests and cultural contexts of these individuals. Understanding how humans and AI co-evolve as a hybrid learning system within organizations is a critical area of exploration. Academic discourse has advocated for viewing human-AI systems as collaborative and co-creating rather than merely co-existing systems <ref type="bibr" target="#b32">[32]</ref>. 
In this context, we propose the application of Agent-Oriented conceptual modeling to dissect and analyze conflicts and trade-offs among stakeholders during Responsible AI projects, thereby guiding the design of Responsible AI solutions in a manner that systematically balances diverse values, goals, and interests.</p><p>To demonstrate the utility of our modeling approach, we will focus on enhancing the initial analysis and results of the study by identifying the following:</p><p>• Strategic interests (i.e., values) of actors involved and the conflicts arising (1) between the interests of each actor and (2) among the subsequent goals in which they are involved.</p><p>• Specific points in the ML process where these actors are engaged.</p><p>• Extension of the goal modeling to examine how conflicts within nested iterative cycles in the ML lifecycle interact with the interests and priorities of actors. Subsequent development of our conceptual modeling framework will involve its application and validation through an empirical case study to assess its practical relevance in real-world settings. The framework will incorporate knowledge catalogs to aid in the design of Responsible AI solutions, and this research will identify the necessary catalogs. A comprehensive methodology and detailed guidelines will be formulated for the use of the new framework, encompassing phases such as Modeling, Evaluation, Exploration, and Implementation. We will also explore options for tool development to support our proposed approach.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Goal Model conveying an example of tradeoffs that can occur between Business and Machine Learning Goals, to achieve customer satisfaction.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Proposed Modeling Notation conveying Sensors, Actuators, and Iterative Loops.</figDesc><graphic coords="4,90.04,111.64,407.04,319.92" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Metamodel for proposed Goal Modeling Framework, conveying Sensors, Actuators, and Iterative Loops.</figDesc><graphic coords="5,90.88,90.04,407.16,221.64" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Goal Model illustrating tradeoffs between aspects of Responsible AI: Fairness, Privacy, Explainability, and Accuracy.</figDesc><graphic coords="6,83.08,80.08,432.96,154.56" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Goal Model illustrating a tradeoff between Fairness and Feature Generalization</figDesc><graphic coords="7,90.88,159.04,419.52,286.44" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>•</head><label></label><figDesc>Bias and perpetuation of bias (Bias) • Conflicting values and perspectives on death and end-of-life care (Fairness) • Transparency and evaluation of efficacy (Transparency) • Determining the recipients of ML output (Explainability) • Patient consent and involvement (Privacy)</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: Goal Model conveying tradeoffs in case study between various aspects of Responsible AI: Interpretability, Explainability, and Fairness.</figDesc><graphic coords="9,90.88,161.44,424.44,99.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="2,70.60,530.68,446.52,227.52" type="bitmap" /></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">A reductions approach to fair classification</title>
		<author>
			<persName><forename type="first">A</forename><surname>Agarwal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Beygelzimer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Dudík</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Langford</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Wallach</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International conference on machine learning</title>
				<imprint>
			<publisher>PMLR</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="60" to="69" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Introduction to the user requirements notation: learning by example</title>
		<author>
			<persName><forename type="first">D</forename><surname>Amyot</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computer Networks</title>
		<imprint>
			<biblScope unit="volume">42</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="285" to="301" />
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Man is to computer programmer as woman is to homemaker? Debiasing word embeddings</title>
		<author>
			<persName><forename type="first">T</forename><surname>Bolukbasi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">W</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">Y</forename><surname>Zou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Saligrama</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">T</forename><surname>Kalai</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Advances in neural information processing systems</title>
		<imprint>
			<biblScope unit="page">29</biblScope>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">A framework to identify ethical concerns with ML-guided care workflows: a case study of mortality prediction to guide advance care planning</title>
		<author>
			<persName><forename type="first">D</forename><surname>Cagliero</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Deuitch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Shah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Feudtner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Char</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of the American Medical Informatics Association</title>
		<imprint>
			<biblScope unit="volume">30</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="819" to="827" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">A requirements-driven development methodology</title>
		<author>
			<persName><forename type="first">J</forename><surname>Castro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kolp</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Mylopoulos</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advanced Information Systems Engineering: 13th International Conference</title>
				<meeting><address><addrLine>CAiSE</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2001">2001. 2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title/>
	</analytic>
	<monogr>
		<title level="j">Proceedings</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="page" from="108" to="123" />
			<date type="published" when="2001">June 4-8, 2001</date>
			<publisher>Springer</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Governing artificial intelligence: ethical, legal and technical opportunities and challenges</title>
		<author>
			<persName><forename type="first">C</forename><surname>Cath</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences</title>
		<imprint>
			<biblScope unit="volume">376</biblScope>
			<biblScope unit="page">20180080</biblScope>
			<date type="published" when="2018">2018. 2133</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<author>
			<persName><forename type="first">L</forename><surname>Chung</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">A</forename><surname>Nixon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Mylopoulos</surname></persName>
		</author>
		<title level="m">Non-functional requirements in software engineering</title>
				<imprint>
			<publisher>Springer Science Business Media</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="volume">5</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m" type="main">iStar 2.0 language guide</title>
		<author>
			<persName><forename type="first">F</forename><surname>Dalpiaz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Franch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Horkoff</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1605.07767</idno>
		<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Fairness through awareness</title>
		<author>
			<persName><forename type="first">C</forename><surname>Dwork</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hardt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Pitassi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Reingold</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Zemel</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 3rd innovations in theoretical computer science conference</title>
				<meeting>the 3rd innovations in theoretical computer science conference</meeting>
		<imprint>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="214" to="226" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">A methodology for direct and indirect discrimination prevention in data mining</title>
		<author>
			<persName><forename type="first">S</forename><surname>Hajian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Domingo-Ferrer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE transactions on knowledge and data engineering</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="issue">7</biblScope>
			<biblScope unit="page" from="1445" to="1459" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Equality of opportunity in supervised learning</title>
		<author>
			<persName><forename type="first">M</forename><surname>Hardt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Price</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Srebro</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in neural information processing systems</title>
				<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page">29</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Strategic business modeling: representation and reasoning</title>
		<author>
			<persName><forename type="first">J</forename><surname>Horkoff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Barone</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Jiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Amyot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Borgida</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Mylopoulos</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Software &amp; Systems Modeling</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="page" from="1015" to="1041" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Adapting SQuaRE for quality assessment of artificial intelligence systems</title>
		<author>
			<persName><forename type="first">H</forename><surname>Kuwajima</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Ishikawa</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2019">2019. 2019</date>
			<biblScope unit="page" from="13" to="18" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<title level="m" type="main">Inherent trade-offs in the fair determination of risk scores</title>
		<author>
			<persName><forename type="first">J</forename><surname>Kleinberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Mullainathan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Raghavan</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1609.05807</idno>
		<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Racial disparities in automated speech recognition</title>
		<author>
			<persName><forename type="first">A</forename><surname>Koenecke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Nam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Lake</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Nudell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Quartey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Mengesha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Goel</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Proceedings of the National Academy of Sciences</title>
		<imprint>
			<biblScope unit="volume">117</biblScope>
			<biblScope unit="issue">14</biblScope>
			<biblScope unit="page" from="7684" to="7689" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">A survey on bias and fairness in machine learning</title>
		<author>
			<persName><forename type="first">N</forename><surname>Mehrabi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Morstatter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Saxena</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Lerman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Galstyan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM computing surveys (CSUR)</title>
		<imprint>
			<biblScope unit="volume">54</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="1" to="35" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Towards goal-oriented development of selfadaptive systems</title>
		<author>
			<persName><forename type="first">M</forename><surname>Morandini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Penserini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Perini</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2008 international workshop on Software engineering for adaptive and self-managing systems</title>
				<meeting>the 2008 international workshop on Software engineering for adaptive and self-managing systems</meeting>
		<imprint>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page" from="9" to="16" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Designing business analytics solutions: a model-driven approach</title>
		<author>
			<persName><forename type="first">S</forename><surname>Nalchigar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Yu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Business &amp; Information Systems Engineering</title>
		<imprint>
			<biblScope unit="volume">62</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="61" to="75" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Disclosure by Design: Designing information disclosures to support meaningful transparency and accountability</title>
		<author>
			<persName><forename type="first">C</forename><surname>Norval</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Cornelius</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Cobbe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Singh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency</title>
				<meeting>the 2022 ACM Conference on Fairness, Accountability, and Transparency</meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="679" to="690" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">A novel approach for Fair Principal Component Analysis based on eigendecomposition</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">D</forename><surname>Pelegrina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">T</forename><surname>Duarte</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Artificial Intelligence</title>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">On fairness and calibration</title>
		<author>
			<persName><forename type="first">G</forename><surname>Pleiss</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Raghavan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kleinberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">Q</forename><surname>Weinberger</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in neural information processing systems</title>
				<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page">30</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Implementing responsible AI: Tensions and trade-offs between ethics aspects</title>
		<author>
			<persName><forename type="first">C</forename><surname>Sanderson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Douglas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Lu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Joint Conference on Neural Networks (IJCNN)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2023">2023. 2023</date>
			<biblScope unit="page" from="1" to="7" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">How computers see gender: An evaluation of gender classification in commercial facial analysis services</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">K</forename><surname>Scheuerman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Paul</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">R</forename><surname>Brubaker</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the ACM on Human-Computer Interaction</title>
				<meeting>the ACM on Human-Computer Interaction</meeting>
		<imprint>
			<publisher>CSCW</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="1" to="33" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Decision provenance: Harnessing data flow for accountable systems</title>
		<author>
			<persName><forename type="first">J</forename><surname>Singh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Cobbe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Norval</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="page" from="6562" to="6574" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Awareness requirements for adaptive systems</title>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">E</forename><surname>Silva Souza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Lapouchnian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">N</forename><surname>Robinson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Mylopoulos</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 6th international symposium on Software engineering for adaptive and self-managing systems</title>
				<meeting>the 6th international symposium on Software engineering for adaptive and self-managing systems</meeting>
		<imprint>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="60" to="69" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">A Goal-Oriented Approach for Modeling Decisions in ML Processes</title>
		<author>
			<persName><forename type="first">R</forename><surname>Sothilingam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Yu</surname></persName>
		</author>
		<idno type="DOI">10.1109/REW61692.2024.00048</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE 32 nd International Requirements Engineering Conference Workshops (REW)</title>
				<meeting><address><addrLine>Reykjavik, Iceland</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2024">2024. 2024</date>
			<biblScope unit="page" from="321" to="325" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">A Requirements-Driven Conceptual Modeling Framework for Responsible AI</title>
		<author>
			<persName><forename type="first">R</forename><surname>Sothilingam</surname></persName>
		</author>
		<idno type="DOI">10.1109/RE57278.2023.00061</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE 31st International Requirements Engineering Conference (RE)</title>
				<meeting><address><addrLine>Hannover, Germany</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">2023. 2023</date>
			<biblScope unit="page" from="391" to="395" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Using i* to Analyze Collaboration Challenges in MLOps Project Teams</title>
		<author>
			<persName><forename type="first">R</forename><surname>Sothilingam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Pant</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Yu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 15th International i* Workshop 2022</title>
				<meeting>the 15th International i* Workshop 2022</meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Modeling Agents Roles and Positions in Machine Learning Project Organizations</title>
		<author>
			<persName><forename type="first">R</forename><surname>Sothilingam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Yu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 13th International i* Workshop 2020</title>
				<meeting>the 13th International i* Workshop 2020</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="volume">2641</biblScope>
			<biblScope unit="page" from="61" to="66" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">Mathematical notions vs. human perception of fairness: A descriptive approach to fairness for machine learning</title>
		<author>
			<persName><forename type="first">M</forename><surname>Srivastava</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Heidari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Krause</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery &amp; data mining</title>
				<meeting>the 25th ACM SIGKDD international conference on knowledge discovery data mining</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="2459" to="2468" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">Fairness definitions explained</title>
		<author>
			<persName><forename type="first">S</forename><surname>Verma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Rubin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the international workshop on software fairness</title>
				<meeting>the international workshop on software fairness</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="1" to="7" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">From coexistence to co-creation: Blurring boundaries in the age of AI</title>
		<author>
			<persName><forename type="first">L</forename><surname>Waardenburg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Huysman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Information and Organization</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="issue">4</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<monogr>
		<title level="m" type="main">Social Modeling for Requirements Engineering (Cooperative Information Systems Series)</title>
		<author>
			<persName><forename type="first">E</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Giorgini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Maiden</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Mylopoulos</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2011">2011</date>
			<publisher>MIT Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<analytic>
		<title level="a" type="main">Fairness beyond disparate treatment &amp; disparate impact: Learning classification without disparate mistreatment</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">B</forename><surname>Zafar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Valera</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gomez Rodriguez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">P</forename><surname>Gummadi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 26th international conference on world wide web</title>
				<meeting>the 26th international conference on world wide web</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="1171" to="1180" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
