<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Explainability and the intention to use AI-based conversational agents. An empirical investigation for the case of recruiting</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Jürgen</forename><surname>Fleiß</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">University of Graz</orgName>
								<address>
									<addrLine>Attemsgasse 11</addrLine>
									<postCode>8010</postCode>
									<settlement>Graz</settlement>
									<country key="AT">Austria</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Elisabeth</forename><surname>Bäck</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">University of Graz</orgName>
								<address>
									<addrLine>Attemsgasse 11</addrLine>
									<postCode>8010</postCode>
									<settlement>Graz</settlement>
									<country key="AT">Austria</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="institution">Silicon Austria Labs</orgName>
								<address>
									<addrLine>Inffeldgasse 25F</addrLine>
									<postCode>8010</postCode>
									<settlement>Graz</settlement>
									<country key="AT">Austria</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Explainability and the intention to use AI-based conversational agents. An empirical investigation for the case of recruiting</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">90473A6917E4C1ED93666108B1292E25</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T05:36+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Conversational Agent</term>
					<term>Explainable AI</term>
					<term>User Study</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The use of conversational agents (CAs) based on artificial intelligence (AI) is increasing in the field of recruiting. Recruiting is considered a particularly sensitive domain, especially if CAs also make (pre)selection decisions. The black box character of AI decisions may hinder the acceptance and use of CAs, as they are not considered to be fair, accountable and transparent (FAT). Explainable AI (XAI) aims to make AI decisions more transparent and thus to increase their FAT. However, little is known about how potential job candidates perceive XAI and how it affects their intention to use CAs. To address this research gap, we conducted a vignette-style questionnaire survey completed by 490 persons from a quota-representative population sample for Germany and Austria. Scenarios are varied by (a) the type of XAI approach and (b) whether the explanations refer to measurable qualifications or soft skills. The results indicate that XAI increases the intention to use CAs in recruiting, compared to CAs relying on black box AI.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Motivation and Background</head><p>Conversational agents (CAs) and artificial intelligence (AI) fundamentally change the way information systems (IS) interact with humans. AI enables interactions between IS and humans that resemble the way humans interact with each other <ref type="bibr" target="#b10">[11]</ref>. However, AI is usually based on black box models, and the behavior of conversational agents is thus opaque <ref type="bibr" target="#b8">[9]</ref>. This is problematic for users, who may perceive the CAs as unfair, non-transparent or less trustworthy, which in turn reduces the acceptance of the IS, especially in high-stakes situations <ref type="bibr" target="#b6">[7]</ref>.</p><p>One recent example of such a sensitive application of CAs is the field of recruiting: CAs now conduct job interviews online and even preselect candidates based on their resumes and responses <ref type="bibr" target="#b5">[6]</ref>. This application is considered especially sensitive due to the black box character of AI: the stakes for applicants are high, so it is reasonable to assume that applicants will expect explanations <ref type="bibr" target="#b3">[4]</ref>. Furthermore, such explanations are also seen as required by the European General Data Protection Regulation <ref type="bibr" target="#b9">[10]</ref>.</p><p>Research on AI has recently proposed approaches to make AI explainable and, through those explanations, more transparent <ref type="bibr" target="#b0">[1]</ref>. First results indicate that XAI can reduce negative perceptions of AI in general <ref type="bibr" target="#b7">[8]</ref>. However, there is little research on its influence in critical decision situations, and in particular on the influence of specific explainability features on the acceptance of CAs and the intention of (potential) job applicants to use them. To address this research gap, we conducted a vignette-style questionnaire survey with a total of 490 persons from a quota-representative population sample for Germany and Austria. In the next section, we develop scenarios that vary explainability and the type of skills the explanations refer to, and investigate their effect on the willingness of potential applicants to use such CAs.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Research Model Development</head><p>We investigate the use and overall acceptance of CA (pre)selection decisions using a vignette-style method, which is well suited to being combined with an experimental design in surveys <ref type="bibr" target="#b1">[2]</ref>. In this approach, we present subjects with scenarios that are varied by (a) the type of XAI approach and (b) whether the explanations refer to measurable qualifications or soft skills. For each scenario, subjects evaluate their intention to use such a CA and their overall acceptance of the CA decision.</p><p>The survey starts with a general introduction to the scenarios: subjects imagine that they apply for a job and that a chatbot appears on the company website, informing them that it will preselect candidates instead of a human recruiter. This CA communicates by chat and asks all the questions necessary to assess their fit for the open position. After this introduction, seven scenarios referring to the outcome of this preselection process are presented to subjects in random order.<ref type="foot" target="#foot_0">3</ref> In all scenarios, subjects are informed that the CA rejected them in the preselection process. In the baseline scenario, BASE, subjects are simply informed that the CA decided to reject their application. This mimics the result-focused decision of a typical CA based on a black box AI. We vary BASE with regard to two factors derived from the literature: explainability (EXPLAIN) and the type of skill used in the explanation (SKILLTYPE).</p><p>For the three EXPLAIN variations, we distinguish between explanations of black box models and interpretable models <ref type="bibr" target="#b2">[3]</ref>. Two of the variations of the EXPLAIN factor offer explanations of black box model decisions. In EXPLAIN LIST, subjects are provided with a list of three criteria on which the rejection is based. In EXPLAIN COMPARE, subjects see a visualization of the score that the conversational agent assigned to them alongside the average score of other applicants. In the third variation, EXPLAIN INTERPRET, participants are shown a simple decision tree using the same criteria as in the first two variations. The path to the decision "reject" is highlighted in the decision tree, and the paths to the decision "accept" remain visible. Such a simple decision tree is a typical example of a rule-based model that humans can interpret intuitively <ref type="bibr" target="#b8">[9]</ref>; see the sketch at the end of this section.</p><p>We further cross all three EXPLAIN variations with two variations of the factor SKILLTYPE, resulting in a three-by-two design. The factor SKILLTYPE is a natural consequence of explaining a hiring decision, as such decisions must be based on the match between the skills of the candidate and the position to be filled. The two variations of the factor SKILLTYPE capture the distinction between "emotional" and "cognitive" judgements, also used in a previous scenario study on human perceptions of AI decisions. This study distinguishes between "mechanical" and "human" skills, the latter of which are meant to capture emotional capabilities or subjective judgements <ref type="bibr" target="#b4">[5]</ref>. Mechanical skills refer to objective measures. For the recruiting application, we operationalize human skills as soft skills in SKILLTYPE SOFT and mechanical skills as more objectively verifiable qualifications in SKILLTYPE VERIFY. 
For soft skills, we use the ability to work in teams, communication skills, and diligence; for verifiable qualifications, we use work experience, command of English, and computer knowledge.</p><p>Combining each of the EXPLAIN variations with each of the SKILLTYPE variations results in six scenarios in addition to BASE. These six scenarios and the key elements of the explanations as shown to subjects are displayed in Figure <ref type="figure" target="#fig_0">1</ref>. The full questionnaire is available upon request.</p></div>
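<div xmlns="http://www.tei-c.org/ns/1.0"><p>To make the EXPLAIN INTERPRET stimulus concrete, the following is a minimal Python sketch of such an interpretable, rule-based decision. The criteria mirror the SKILLTYPE VERIFY qualifications named above, but the thresholds, the 1-to-5 rating scale, and the function name are illustrative assumptions, not the authors' actual stimulus.</p><p><code xml:space="preserve">
# Minimal sketch of an interpretable rule-based (pre)selection decision,
# as in EXPLAIN INTERPRET. Thresholds and the rating scale are assumptions;
# only the three criteria come from the scenario description.
def preselect(experience_years, english_level, computer_knowledge):
    """Return ('accept' or 'reject', path of rules that fired)."""
    path = []
    if experience_years >= 2:
        path.append("work experience: sufficient")
        if english_level >= 3:  # assumed self-rating on a 1-5 scale
            path.append("command of English: sufficient")
            if computer_knowledge >= 3:
                path.append("computer knowledge: sufficient")
                return "accept", path
            path.append("computer knowledge: insufficient")
        else:
            path.append("command of English: insufficient")
    else:
        path.append("work experience: insufficient")
    return "reject", path

decision, path = preselect(experience_years=1, english_level=4, computer_knowledge=4)
print(decision)           # 'reject'
print(" -> ".join(path))  # the highlighted path shown to the subject
</code></p></div>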
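<div xmlns="http://www.tei-c.org/ns/1.0"><p>The resulting experimental design can also be summarized programmatically. The following Python sketch enumerates the seven scenarios (BASE plus the three-by-two factorial combinations) and shuffles them, mirroring the random presentation order described above; the variable names are our own shorthand, not labels from the questionnaire.</p><p><code xml:space="preserve">
# Sketch of the scenario set: BASE plus 3 EXPLAIN x 2 SKILLTYPE variations.
from itertools import product
from random import shuffle

EXPLAIN = ["LIST", "COMPARE", "INTERPRET"]  # three explanation styles
SKILLTYPE = {
    "SOFT":   ["teamwork", "communication skills", "diligence"],
    "VERIFY": ["work experience", "command of English", "computer knowledge"],
}

scenarios = ["BASE"] + [
    f"EXPLAIN {e} / SKILLTYPE {s}" for e, s in product(EXPLAIN, SKILLTYPE)
]
assert len(scenarios) == 7  # six factorial cells plus the baseline

shuffle(scenarios)  # each subject sees the scenarios in random order
for scenario in scenarios:
    print(scenario)
</code></p></div>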
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Outlook</head><p>We conducted the survey described above with 490 persons from a quota-representative population sample for Germany and Austria. A preliminary analysis of the results indicates that XAI increases the intention to use CAs in recruiting, compared to CAs relying on black box AI. The next step is to rigorously analyze the collected data. We believe that the developed scenarios capture important aspects of CAs in the field of recruiting, but also of AI in general. XAI, by overcoming the black box nature of many algorithms, is seen as an important step toward creating fair, accountable and transparent (FAT) AI solutions. This in turn should also increase the trust of those affected by the decisions.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1.</head><label>1</label><figDesc>Fig. 1. Scenario Overview including Stimuli</figDesc><graphic coords="3,134.77,116.83,345.83,303.68" type="bitmap" /></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_0">Responses to later scenarios can be affected by previously presented scenarios. This can be tested by comparing the results for each scenario when it is presented first with the results across all presentation positions.</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">B</forename><surname>Arrieta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Díaz-Rodríguez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Del Ser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bennetot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Tabik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Barbado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>García</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gil-López</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Molina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Benjamins</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Information Fusion</title>
		<imprint>
			<biblScope unit="volume">58</biblScope>
			<biblScope unit="page" from="82" to="115" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">What would you do? Conducting web-based factorial vignette surveys</title>
		<author>
			<persName><forename type="first">H</forename><surname>Aviram</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Handbook of survey methodology for the social sciences</title>
				<editor>
			<persName><forename type="first">L</forename><surname>Gideon</surname></persName>
		</editor>
		<meeting><address><addrLine>New York, NY</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="463" to="473" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Explanation and justification in machine learning: A survey</title>
		<author>
			<persName><forename type="first">O</forename><surname>Biran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Cotton</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IJCAI-17 workshop on explainable AI (XAI)</title>
				<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="8" to="13" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Transparency and accountability in AI decision support: Explaining and visualizing convolutional neural networks for text information</title>
		<author>
			<persName><forename type="first">B</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Park</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Suh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Decision Support Systems</title>
		<imprint>
			<biblScope unit="volume">134</biblScope>
			<biblScope unit="page" from="1" to="11" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">K</forename><surname>Lee</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Big Data &amp; Society</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page" from="1" to="16" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Technology &amp; recruiting 101: How it works and where it&apos;s going</title>
		<author>
			<persName><forename type="first">C</forename><surname>Leong</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Strategic HR Review</title>
		<imprint>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="page" from="50" to="52" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Explainable AI: From black box to glass box</title>
		<author>
			<persName><forename type="first">A</forename><surname>Rai</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of the Academy of Marketing Science</title>
		<imprint>
			<biblScope unit="volume">48</biblScope>
			<biblScope unit="page" from="137" to="141" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">&quot;Why should I trust you?&quot; Explaining the predictions of any classifier</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">T</forename><surname>Ribeiro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Singh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Guestrin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of NAACL-HLT 2016 (Demonstrations)</title>
				<meeting>NAACL-HLT 2016 (Demonstrations)</meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="97" to="101" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead</title>
		<author>
			<persName><forename type="first">C</forename><surname>Rudin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Nature Machine Intelligence</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="206" to="215" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Meaningful information and the right to explanation</title>
		<author>
			<persName><forename type="first">A</forename><surname>Selbst</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Powles</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Data Privacy Law</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="page" from="233" to="242" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Recommendation agents for electronic commerce: Effects of explanation facilities on trusting beliefs</title>
		<author>
			<persName><forename type="first">W</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Benbasat</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Management Information Systems</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page" from="217" to="246" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
