<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">XAI for Group-AI Interaction: Towards Collaborative and Inclusive Explanations</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Mohammad</forename><surname>Naiseh</surname></persName>
							<email>mnaiseh1@bournemouth.ac.uk</email>
							<affiliation key="aff0">
								<orgName type="institution">Bournemouth University</orgName>
								<address>
									<settlement>Poole</settlement>
									<country key="GB">United Kingdom</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Catherine</forename><surname>Webb</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">University of Southampton</orgName>
								<address>
									<settlement>Southampton</settlement>
									<country key="GB">United Kingdom</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Tim</forename><surname>Underwood</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">University of Southampton</orgName>
								<address>
									<settlement>Southampton</settlement>
									<country key="GB">United Kingdom</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Gopal</forename><surname>Ramchurn</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">University of Southampton</orgName>
								<address>
									<settlement>Southampton</settlement>
									<country key="GB">United Kingdom</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Zoe</forename><surname>Walters</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">University of Southampton</orgName>
								<address>
									<settlement>Southampton</settlement>
									<country key="GB">United Kingdom</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Navamayooran</forename><surname>Thavanesan</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">University of Southampton</orgName>
								<address>
									<settlement>Southampton</settlement>
									<country key="GB">United Kingdom</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Ganesh</forename><surname>Vigneswaran</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">University of Southampton</orgName>
								<address>
									<settlement>Southampton</settlement>
									<country key="GB">United Kingdom</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">XAI for Group-AI Interaction: Towards Collaborative and Inclusive Explanations</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">0494A3691B9273C62C1ED4E393207C0B</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T19:36+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Explainable AI</term>
					<term>Group-AI Interaction</term>
					<term>Interaction Design</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The increasing integration of Machine Learning (ML) into decision-making across various sectors has raised concerns about ethics, legality, explainability, and safety, highlighting the necessity of human oversight. In response, eXplainable AI (XAI) has emerged as a means to enhance transparency by providing insights into ML model decisions and offering humans an understanding of the underlying logic. Despite its potential, existing XAI models often lack practical usability and fail to improve human-AI performance, as they may introduce issues such as overreliance. This underscores the need for further research in Human-Centered XAI to improve the usability of current XAI methods. Notably, much of the current research focuses on one-to-one interactions between the XAI and individual decision-makers, overlooking the dynamics of many-to-one relationships in real-world scenarios where groups of humans collaborate using XAI in collective decision-making. In this late-breaking work, we draw upon current work in Human-Centered XAI research and discuss how XAI design could be transitioned to group-AI interaction. We discuss four potential challenges in the transition of XAI from human-AI interaction to group-AI interaction. This paper contributes to advancing the field of Human-Centered XAI and facilitates the discussion on group-XAI interaction, calling for further research in this area.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>eXplainable AI (XAI) has emerged as a research direction in response to the lack of explainability and interpretability of AI models <ref type="bibr" target="#b5">[6]</ref>. XAI aims to enhance the transparency of AI models, ML in particular, by providing human decision-makers with insights into the inner workings of ML models <ref type="bibr" target="#b20">[21]</ref>. XAI makes ML outputs more interpretable and comprehensible by demystifying the complex processes within ML models. Approaches such as feature importance, example-based, and counterfactual explanations have been developed for this purpose <ref type="bibr" target="#b0">[1]</ref>. XAI seeks to bridge the gap between the technical complexity of these models and the need for human-understandable outputs by unravelling the intricacies of ML model decisions.</p><p>Despite the potential benefits of XAI, many existing XAI models lack practical usability and fail to improve human-AI performance <ref type="bibr" target="#b0">[1]</ref>. Decision-makers often perceive explanations as tools designed for data scientists and ML engineers <ref type="bibr" target="#b6">[7]</ref>, leading to disinterest and reluctance to engage with XAI interfaces <ref type="bibr" target="#b7">[8]</ref>. Decision-makers also report that explanations do not motivate them to learn or solve problems; they show little interest or curiosity unless the explanations align with their initial expectations <ref type="bibr" target="#b0">[1]</ref>. This is consistent with findings from cognitive psychology showing that users tend to focus on features that have apparent value for their decision-making process <ref type="bibr" target="#b7">[8]</ref>.
Research in Human-Computer Interaction (HCI) has identified several user issues in human-XAI interaction, such as misinterpreting explanations, highlighting the need for improved design solutions <ref type="bibr" target="#b8">[9]</ref>. Explanations might also be ignored if they are overly abstract, as people tend to prioritise concrete information instead <ref type="bibr" target="#b7">[8]</ref>.</p><p>Researchers have suggested various approaches to operationalise XAI in human-AI settings. For instance, contextualising XAI design by incorporating domain-related information and empirical knowledge has shown promising results in enhancing user satisfaction and understanding <ref type="bibr" target="#b9">[10]</ref>. Additionally, employing contrastive explanations and juxtaposing features can help develop an expert ability to notice salient features and anomalous events <ref type="bibr" target="#b10">[11]</ref>. To enhance critical engagement with explanations and mitigate over-reliance on AI recommendations, incorporating cognitive forcing <ref type="bibr" target="#b11">[12]</ref> and nudging <ref type="bibr" target="#b7">[8]</ref> design elements into the XAI interface has been empirically tested. These elements have been shown to discourage decision-makers from blindly accepting AI suggestions and instead prompt them to evaluate recommendations thoughtfully. Furthermore, methodological approaches have been suggested to help UX designers operationalise XAI methods on the XAI user interface level <ref type="bibr" target="#b11">[12]</ref>. These approaches have been shown to improve human-XAI interaction and increase participant engagement with AI explanations.</p><p>Interestingly, much of the existing research in Human-Centered eXplainable AI focuses on one-to-one interactions between the XAI and human decision-makers. In other words, it focuses on how individual decision-makers interact with AI explanations and use them to make decisions in human-AI settings.
However, many real-world scenarios involve many-to-one relationships in a group-AI interaction, where a group of individuals collaborate to make collective decisions using the XAI interface. Figure <ref type="figure">1</ref> illustrates this relationship. Group-AI interaction refers to the collaboration and interaction between groups of human decision-makers and AI systems in decision-making processes <ref type="bibr" target="#b16">[17]</ref>. In this context, groups may include executive committees, boards, teams of professionals, or any collective of individuals tasked with making decisions within an organisation or context <ref type="bibr" target="#b14">[15]</ref>. Group-AI interaction has the potential to leverage the capabilities of AI technologies alongside collective human expertise to enhance decision-making outcomes. It has also been shown to exceed the accuracy of human-AI collaboration by bringing together the different expertise and perspectives of the humans involved in the decision-making process <ref type="bibr" target="#b6">[7]</ref>. Individuals in the group can be assigned specialised roles and responsibilities related to interacting with AI systems, interpreting AI-generated insights, and integrating them into decision-making processes <ref type="bibr" target="#b0">[1]</ref>. This division of responsibilities ensures that each member contributes their expertise effectively, mitigating the risks associated with human-AI collaboration <ref type="bibr" target="#b0">[1]</ref>.</p><p>In this paper, we argue that designing XAI for group-AI interaction requires distinct approaches and careful considerations compared to the ones used in human-AI interaction. This late-breaking contribution synthesises insights from collective decision-making and Human-Centered XAI literature to discuss challenges inherent in transitioning XAI from human-AI to group-AI interaction.
These encompass the complexities of group dynamics, the potential amplification of cognitive biases, issues surrounding trust, as well as the critical facets of XAI evaluation in the context of group-AI. While acknowledging the possibility of additional challenges, our discussion provides an initial framework for contrasting the nuanced design requirements of XAI in facilitating AI-assisted decision-making within group settings.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 1:</head><p>Comparison of One-to-One and Many-to-One Interactions with an XAI Interface. In the One-to-One interaction mode, a single user interacts directly with the XAI interface, receiving explanations tailored to their specific queries or actions. In contrast, the Many-to-One interaction mode involves multiple users interacting with the XAI interface simultaneously, with explanations generated to accommodate diverse user inputs and preferences.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">The Complexity of Group Diversity</head><p>In group decision-making, individuals often wield varying degrees of influence and expertise, regardless of the context, be it a professional team, a board of directors, or a community organisation <ref type="bibr" target="#b18">[19]</ref>. This diversity encompasses a nuanced interplay of individual personalities, social structures, power dynamics, and communication patterns within the group <ref type="bibr" target="#b20">[21]</ref>. Such complexity in group diversity further impacts the interaction between groups and AI, particularly in XAI contexts. This diversity underscores the necessity for XAI systems to accommodate inclusive explanations tailored to the diverse needs and backgrounds of group members <ref type="bibr" target="#b19">[20]</ref>. For example, individuals with varying levels of familiarity with AI and machine learning concepts may necessitate explanations that are lucid, accessible, and devoid of technical jargon. Moreover, the diversity within groups may prompt XAI developers to accommodate diverse learning styles and linguistic preferences, thereby enhancing the accessibility of XAI explanations <ref type="bibr" target="#b4">[5]</ref>. This entails providing explanations in multiple formats or languages and integrating interactive features to facilitate engagement and comprehension among all group members <ref type="bibr" target="#b10">[11]</ref>. Additionally, the array of group interactions may be further complicated by individual attitudes and perceptions toward AI technology <ref type="bibr" target="#b21">[22]</ref>. Cultural values and norms, for instance, have been demonstrated to influence attitudes toward AI <ref type="bibr" target="#b0">[1]</ref>.
While some individuals may embrace XAI as a valuable tool for enhancing decision-making capabilities, others may harbour scepticism or resistance due to concerns about job displacement, loss of autonomy, or ethical implications. Consequently, XAI for group-AI interaction must address these diverse perspectives and cultivate a culture of trust, transparency, and open communication to mitigate resistance to XAI adoption and foster constructive collaboration within the group.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Bias Amplification in Group-XAI</head><p>Biases in group-AI interaction can be more pronounced than in human-AI interaction, presenting significant challenges for the design and development of XAI systems <ref type="bibr" target="#b3">[4]</ref>  <ref type="bibr" target="#b16">[17]</ref>. One prevalent bias is groupthink, where group members prioritise consensus and overlook dissenting viewpoints to maintain harmony <ref type="bibr" target="#b22">[23]</ref>. In this context, if explanations do not encourage critical thinking and challenge groupthink, individuals within the group may unquestioningly accept AI-generated insights without thorough examination <ref type="bibr" target="#b12">[13]</ref>. XAI systems should not only explain recommendations but also encourage scrutiny and diverse viewpoints <ref type="bibr" target="#b25">[26]</ref>. This could involve presenting multiple explanations, highlighting uncertainties, and actively soliciting feedback from users with varying perspectives. XAI systems may also implement mechanisms for independent review and validation. For instance, introducing a Devil's Advocacy role within the team can challenge group consensus and encourage critical evaluation of XAI explanations <ref type="bibr" target="#b23">[24]</ref>. This individual identifies potential flaws or biases, fostering a more balanced consideration of decision options.</p><p>Another bias that could impact XAI design in group-AI interaction scenarios is the equality bias. It refers to the tendency for individuals to downplay their expertise or to weigh everyone's opinion equally, regardless of competence or expertise <ref type="bibr" target="#b26">[27]</ref>. This bias can have detrimental effects on decision-making processes, particularly when there is a genuine disparity in knowledge or experience within the group.
In the context of XAI, the equality bias could be amplified when group members defer too readily to the AI recommendations, regardless of their domain expertise or experience in the subject matter <ref type="bibr" target="#b27">[28]</ref>. For example, if a group of healthcare professionals uses an XAI system to diagnose patients, individuals with specialised medical knowledge may inadvertently downplay their expertise and defer to the AI's recommendations and explanations, even when they have valid insights or concerns that should be taken into account. XAI design should account for this bias by designing explanations that encourage individuals to recognise and value their expertise and insights, as well as those of others within the group. Additionally, fostering a culture of collaboration and open communication within the group can help ensure that diverse perspectives and expertise are taken into account when making decisions with the assistance of AI systems. This may involve providing XAI explanations that highlight the knowledge and contributions of individual group members, as well as mechanisms for facilitating constructive dialogue and debate within the group.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Trust within the Group</head><p>Trust has been a crucial element in human-AI interaction, influencing the dynamics and effectiveness of human-AI teams <ref type="bibr" target="#b0">[1]</ref>. When considering group-AI interaction, trust dynamics become more complex, involving not only trust between group members and AI but also among group members themselves. It has been discussed that incorporating XAI into group decision-making processes can impact trust dynamics among group members <ref type="bibr" target="#b24">[25]</ref>. The integration of XAI into group decision-making processes introduces new dimensions to these dynamics, with potential implications for team cohesion and effectiveness. If an explanation contradicts the opinions or recommendations of certain group members, it could create tensions or conflicts within the group, undermining trust and cohesion. In addition, some scenarios could involve individuals within the group perceiving XAI explanations as more reliable or objective than human judgments, which may lead to a shift in trust dynamics within the group <ref type="bibr" target="#b8">[9]</ref>. To navigate these complexities and foster trust among group members, XAI development needs to consider the social dynamics of group interaction. Open communication about XAI's role in decision-making and clear explanations of its outputs are crucial for trust calibration. Additionally, establishing protocols for interpreting XAI explanations in context, along with mechanisms for addressing conflicts arising from AI-influenced decisions, can safeguard against trust erosion. This ensures that XAI's benefits are harnessed without jeopardising ethical principles or human values.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Evaluating XAI for Group Interaction</head><p>Evaluating XAI for group interaction presents distinct challenges compared to traditional XAI evaluations focused on individual users. Traditionally, XAI evaluation examines how individuals interact with AI systems, understand explanations, and make decisions based on them <ref type="bibr">[1] [6]</ref>. Here, evaluation metrics assess explanation clarity, relevance, and user satisfaction <ref type="bibr" target="#b8">[9]</ref>  <ref type="bibr" target="#b9">[10]</ref>. Trust and interpersonal dynamics are also crucial factors, with conflicts arising from discrepancies between user expectations and AI behaviour, requiring strategies for resolution <ref type="bibr">[1] [8]</ref>. However, in group-AI interaction, evaluation extends beyond individual users. We need to consider the entire group ecosystem. This includes how AI-generated explanations are communicated within the group, how they impact group cohesion and communication patterns, and how conflicts are resolved among members, considering factors like power dynamics and individual expertise <ref type="bibr" target="#b17">[18]</ref>. Additionally, group-XAI evaluation involves understanding the social influence of explanations on the group's decision-making dynamics. This encompasses considerations of scalability (how well does XAI adapt to groups of varying sizes?) and consensus-building (how can XAI support groups in achieving agreement despite diverse perspectives?) <ref type="bibr" target="#b17">[18]</ref>. Therefore, evaluating XAI for group interaction demands methodologies that account for the complexities of social interactions and group dynamics. This might involve incorporating social network analysis to understand how information flows within the group and identify potential bottlenecks or communication silos.
Longitudinal studies could be conducted to assess the impact of XAI on group performance and decision-making quality over time. Ultimately, understanding these differences is crucial for designing effective XAI systems that empower both individual users and collaborative decision-making processes, while mitigating potential pitfalls and fostering a healthy group environment.</p></div>
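The social network analysis suggested above can be illustrated with a toy sketch. Everything here is an illustrative assumption rather than a method or data from this paper: the group members, the "who relays XAI explanations to whom" edges, and the crude betweenness-style measure of how much information must pass through each member.

```python
from collections import deque
from itertools import permutations

# Hypothetical communication graph for a decision-making group: a directed
# edge (u, v) means u relays XAI explanations or insights to v.
edges = [
    ("xai_interface", "analyst"), ("xai_interface", "clinician"),
    ("analyst", "chair"), ("clinician", "chair"),
    ("chair", "board_member_1"), ("chair", "board_member_2"),
]
adj = {}
for u, v in edges:
    adj.setdefault(u, []).append(v)
    adj.setdefault(v, [])

def shortest_path(src, dst):
    """BFS shortest path from src to dst; returns a node list or None."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in adj[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# For every ordered pair of nodes, count the intermediate members the
# information passes through -- a crude betweenness measure that flags
# potential communication bottlenecks or silos.
through = {n: 0 for n in adj}
for s, t in permutations(adj, 2):
    path = shortest_path(s, t)
    if path:
        for mid in path[1:-1]:
            through[mid] += 1

bottleneck = max(through, key=through.get)
```

In this toy graph the chair mediates all flow from the XAI-facing members to the board, so the measure flags the chair as the bottleneck; in a real evaluation the graph would be built from observed group communication, and a library such as networkx would supply proper centrality measures.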
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusion and Future Directions</head><p>In conclusion, the integration of Machine Learning (ML) into decision-making processes across various sectors has prompted the development of XAI to address concerns regarding ethics, legality, explainability, and safety. In this paper, we showed that much of the existing research focuses on one-to-one interactions between XAI interfaces and individual decision-makers. However, many real-world scenarios require the interaction between a group of humans and the XAI interface. Building on the current research on Human-Centered XAI, this paper has discussed four key considerations when transitioning from human-AI to group-AI interaction in the context of XAI. These challenges include complexities in group dynamics, cognitive bias amplification, trust issues within the group, and group-centric evaluation. By drawing upon current work in Human-Centered XAI research, we contribute to advancing the field and facilitate discussions on group-XAI interaction. This paper calls for further research in this area to enhance the effectiveness and usability of XAI in collaborative decision-making settings, ultimately leading to more informed and successful outcomes in various domains.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="3,139.20,141.57,326.35,205.94" type="bitmap" /></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">How the different explanation classes impact trust calibration: The case of clinical decision support systems</title>
		<author>
			<persName><forename type="first">M</forename><surname>Naiseh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Al-Thani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Jiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Ali</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Human-Computer Studies</title>
		<imprint>
			<biblScope unit="volume">169</biblScope>
			<biblScope unit="page">102941</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Toward algorithmic accountability in public services: A qualitative study of affected community perspectives on algorithmic decisionmaking in child welfare services</title>
		<author>
			<persName><forename type="first">Anna</forename><surname>Brown</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alexandra</forename><surname>Chouldechova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Emily</forename><surname>Putnam-Hornstein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andrew</forename><surname>Tobin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Rhema</forename><surname>Vaithianathan</surname></persName>
		</author>
		<idno type="DOI">10.1145/3290605.3300271</idno>
		<ptr target="https://doi.org/10.1145/3290605.3300271" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems</title>
				<meeting>the 2019 CHI Conference on Human Factors in Computing Systems</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="1" to="12" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Artificial intelligence and machine learning in finance: Identifying foundations, themes, and research clusters from bibliometric analysis</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">W</forename><surname>Goodell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kumar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">M</forename><surname>Lim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Pattnaik</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Behavioral and Experimental Finance</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="page">100577</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Group dynamics training and improved decision-making</title>
		<author>
			<persName><forename type="first">J</forename><surname>Hall</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">S</forename><surname>Williams</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The Journal of Applied Behavioral Science</title>
		<imprint>
			<biblScope unit="page" from="39" to="68" />
			<date type="published" when="1970">1970</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Personalising explainable recommendations: literature and conceptualisation</title>
		<author>
			<persName><forename type="first">M</forename><surname>Naiseh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Jiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Ali</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Trends and Innovations in Information Systems and Technologies</title>
				<imprint>
			<publisher>Springer International Publishing</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="518" to="533" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Explainable AI (XAI): A systematic meta-survey of current challenges and future opportunities</title>
		<author>
			<persName><forename type="first">W</forename><surname>Saeed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Omlin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Knowledge-Based Systems</title>
		<imprint>
			<biblScope unit="volume">263</biblScope>
			<biblScope unit="page">110273</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Explainability design patterns in clinical decision support systems</title>
		<author>
			<persName><forename type="first">M</forename><surname>Naiseh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Research Challenges in Information Science: 14th International Conference, RCIS 2020</title>
		<title level="s">Proceedings</title>
		<meeting><address><addrLine>Limassol, Cyprus</addrLine></address></meeting>
		<imprint>
			<publisher>Springer International Publishing</publisher>
			<date type="published" when="2020-09-23">2020. September 23-25, 2020</date>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="page" from="613" to="620" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Nudging through Friction: An approach for Calibrating Trust in Explainable AI</title>
		<author>
			<persName><forename type="first">Mohammad</forename><surname>Naiseh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Reem</forename><surname>Al-Mansoori</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dena</forename><surname>Al-Thani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Nan</forename><surname>Jiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Raian</forename><surname>Ali</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">8th International Conference on Behavioral and Social Computing (BESC)</title>
				<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="1" to="5" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Explainable recommendations and calibrated trust: two systematic user errors</title>
		<author>
			<persName><forename type="first">M</forename><surname>Naiseh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Cemiloglu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Al Thani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Jiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Ali</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computer</title>
		<imprint>
			<biblScope unit="volume">54</biblScope>
			<biblScope unit="issue">10</biblScope>
			<biblScope unit="page" from="28" to="37" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Contextualization and Exploration of Local Feature Importance Explanations to Improve Understanding and Satisfaction of Non-Expert Users</title>
		<author>
			<persName><forename type="first">Clara</forename><surname>Bove</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jonathan</forename><surname>Aigrain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Marie-Jeanne</forename><surname>Lesot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Charles</forename><surname>Tijus</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Marcin</forename><surname>Detyniecki</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">27th International Conference on Intelligent User Interfaces</title>
				<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="807" to="819" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Human-XAI interaction: a review and design principles for explanation user interfaces</title>
		<author>
			<persName><forename type="first">M</forename><surname>Chromik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Butz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Human-Computer Interaction-INTERACT 2021: 18th IFIP TC 13 International Conference</title>
		<title level="s">Proceedings, Part II</title>
		<meeting><address><addrLine>Bari, Italy</addrLine></address></meeting>
		<imprint>
			<publisher>Springer International Publishing</publisher>
			<date type="published" when="2021-08-30">August 30 - September 3, 2021</date>
			<biblScope unit="volume">18</biblScope>
			<biblScope unit="page" from="619" to="640" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">C-XAI: A Conceptual Framework for Designing XAI tools that Support Trust Calibration</title>
		<author>
			<persName><forename type="first">M</forename><surname>Naiseh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Simkute</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Zieni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Jiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Ali</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Responsible Technology</title>
		<imprint>
			<biblScope unit="page">100076</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Does the whole exceed its parts? The effect of AI explanations on complementary team performance</title>
		<author>
			<persName><forename type="first">G</forename><surname>Bansal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Fok</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Nushi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Kamar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">T</forename><surname>Ribeiro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Weld</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems</title>
				<meeting>the 2021 CHI Conference on Human Factors in Computing Systems</meeting>
		<imprint>
			<date type="published" when="2021-05">2021. May</date>
			<biblScope unit="page" from="1" to="16" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Explaining models: an empirical study of how explanations impact fairness judgment</title>
		<author>
			<persName><forename type="first">J</forename><surname>Dodge</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><forename type="middle">V</forename><surname>Liao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">K</forename><surname>Bellamy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Dugan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 24th international conference on intelligent user interfaces</title>
				<meeting>the 24th international conference on intelligent user interfaces</meeting>
		<imprint>
			<date type="published" when="2019-03">2019. March</date>
			<biblScope unit="page" from="275" to="285" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">The knowing organization: How organizations use information to construct meaning, create knowledge and make decisions</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">W</forename><surname>Choo</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International journal of information management</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="329" to="340" />
			<date type="published" when="1996">1996</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">The confidence heuristic: A game-theoretic analysis</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">P</forename><surname>Thomas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">G</forename><surname>Mcfadyen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Economic Psychology</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="97" to="113" />
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">The dynamics of within-group and between-group interaction</title>
		<author>
			<persName><forename type="first">K</forename><surname>Hausken</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Mathematical Economics</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="issue">7</biblScope>
			<biblScope unit="page" from="655" to="687" />
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Predictive models for human-AI nexus in group decision making</title>
		<author>
			<persName><forename type="first">O</forename><surname>Askarisichani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Bullo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">E</forename><surname>Friedkin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">K</forename><surname>Singh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Annals of the New York Academy of Sciences</title>
		<imprint>
			<biblScope unit="volume">1514</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="70" to="81" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">On the rationale of group decision-making</title>
		<author>
			<persName><forename type="first">D</forename><surname>Black</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of political economy</title>
		<imprint>
			<biblScope unit="volume">56</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="23" to="34" />
			<date type="published" when="1948">1948</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Explanation in artificial intelligence: Insights from the social sciences</title>
		<author>
			<persName><forename type="first">T</forename><surname>Miller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial intelligence</title>
		<imprint>
			<biblScope unit="volume">267</biblScope>
			<biblScope unit="page" from="1" to="38" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Questioning the AI: informing design practices for explainable AI user experiences</title>
		<author>
			<persName><forename type="first">Q</forename><forename type="middle">V</forename><surname>Liao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Gruen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Miller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2020 CHI conference on human factors in computing systems</title>
				<meeting>the 2020 CHI conference on human factors in computing systems</meeting>
		<imprint>
			<date type="published" when="2020-04">2020. April</date>
			<biblScope unit="page" from="1" to="15" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Individual differences in explanation strategies for image classification and implications for explainable AI</title>
		<author>
			<persName><forename type="first">R</forename><surname>Qi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Hsiao</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Annual Meeting of the Cognitive Science Society</title>
				<meeting>the Annual Meeting of the Cognitive Science Society</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="volume">45</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<monogr>
		<title level="m" type="main">Victims of groupthink: A psychological study of foreign-policy decisions and fiascoes</title>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">L</forename><surname>Janis</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1972">1972</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Guidelines for human-AI interaction</title>
		<author>
			<persName><forename type="first">S</forename><surname>Amershi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Weld</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Vorvoreanu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Fourney</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Nushi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Collisson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Suh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Iqbal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">N</forename><surname>Bennett</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Inkpen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Teevan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2019 chi conference on human factors in computing systems</title>
				<meeting>the 2019 chi conference on human factors in computing systems</meeting>
		<imprint>
			<date type="published" when="2019-05">2019. May</date>
			<biblScope unit="page" from="1" to="13" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">What is missing in xai so far? an interdisciplinary perspective</title>
		<author>
			<persName><forename type="first">U</forename><surname>Schmid</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Wrede</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">KI-Künstliche Intelligenz</title>
		<imprint>
			<biblScope unit="volume">36</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="303" to="315" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">XAI for learning: Narrowing down the digital divide between &quot;new&quot; and &quot;old&quot; experts</title>
		<author>
			<persName><forename type="first">A</forename><surname>Simkute</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Surana</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Luger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Evans</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Jones</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Adjunct Proceedings of the 2022 Nordic Human-Computer Interaction Conference</title>
				<imprint>
			<date type="published" when="2022-10">2022. October</date>
			<biblScope unit="page" from="1" to="6" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Equality bias impairs collective decision-making across cultures</title>
		<author>
			<persName><forename type="first">A</forename><surname>Mahmoodi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Bang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Olsen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">A</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Shi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Broberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Safavi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Han</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Nili Ahmadabadi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">D</forename><surname>Frith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Roepstorff</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Proceedings of the National Academy of Sciences</title>
		<imprint>
			<biblScope unit="volume">112</biblScope>
			<biblScope unit="issue">12</biblScope>
			<biblScope unit="page" from="3835" to="3840" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Which biases and reasoning pitfalls do explanations trigger? Decomposing communication processes in human-AI interaction</title>
		<author>
			<persName><forename type="first">M</forename><surname>El-Assady</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Moruzzi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Computer Graphics and Applications</title>
		<imprint>
			<biblScope unit="volume">42</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="11" to="23" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
