<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Explainability and the intention to use AI-based conversational agents</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jürgen</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name><given-names>Elisabeth</given-names> <surname>Back</surname></string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name><given-names>Stefan</given-names> <surname>Thalmann</surname></string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Silicon Austria Labs</institution>
          ,
          <addr-line>Inffeldgasse 25F, 8010 Graz</addr-line>
          ,
          <country country="AT">Austria</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Graz</institution>
          ,
          <addr-line>Attemsgasse 11, 8010 Graz</addr-line>
          ,
          <country country="AT">Austria</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The use of conversational agents (CAs) based on artificial intelligence (AI) is increasing in the field of recruiting. Recruiting is considered a particularly sensitive domain, especially if CAs also make (pre)selection decisions. The black box character of AI decisions may hinder the acceptance and use of CAs, as they are not considered to be fair, accountable, and transparent (FAT). Explainable AI (XAI) aims to make AI decisions more transparent and thus to increase their FAT. But little is known about the perception of XAI by potential job candidates and its effect on their intention to use CAs. To investigate this research gap, we conducted a vignette-style questionnaire survey filled out by 490 persons from a quota-representative population sample for Germany and Austria. Scenarios are varied by (a) the type of XAI approach and (b) whether the explanations refer to measurable qualifications or soft skills. The results indicate that XAI increases the intention to use CAs in recruiting, compared to CAs relying on black box AI.</p>
      </abstract>
      <kwd-group>
        <kwd>Conversational Agent</kwd>
        <kwd>Explainable AI</kwd>
        <kwd>User Study</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Conversational agents (CA) and Artificial Intelligence (AI) fundamentally change
the way information systems (IS) interact with humans. AI enables interactions
between IS and humans that are similar to the way humans interact with
each other [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. However, AI is usually based on black box models and the
behavior of conversational agents is thus opaque [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. This is an unpleasant situation for users, as they might perceive the CAs as
unfair, non-transparent, or less trustworthy, which in turn influences the
acceptance of the IS, especially in high-stakes situations [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>
        One recent example of such a sensitive application of CAs is in the field of
recruiting: CAs now conduct job interviews online and even preselect candidates
based on their resumes and responses [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. This application is considered especially
sensitive due to the black box character of AI, as the stakes for applicants are
high, and thus it is reasonable to assume that applicants will expect explanations
[
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Furthermore, such explanations are also seen as required by the European
General Data Protection Regulation [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>
        Research on AI has recently proposed approaches to make AI explainable
and, through those explanations, more transparent [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. First results indicate
that XAI can reduce the negative perceptions towards AI in general [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
However, there is little research on this influence in critical decision situations,
and in particular on the influence of specific explainability features on the
acceptance of and intention to use CAs by (potential) job applicants. To tackle this
research gap, we conducted a vignette-style questionnaire survey with a total of 490
persons from a quota-representative population sample for Germany and Austria.
In the next section, we develop scenarios to study the effect of explainability
and of the type of skills that the explanations refer to on the willingness of
potential applicants to use such CAs.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Research Model Development</title>
      <p>
        We investigate the use and overall acceptance of CA (pre)selection decisions
using a vignette-style method, which is well suited to being combined with an
experimental design in surveys [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. In this approach, we present subjects with scenarios that
are varied by (a) the type of XAI approach and (b) whether the explanations
refer to measurable qualifications or soft skills. For each scenario, subjects
evaluate their intention to use such a CA and their overall acceptance of the CA decision.
      </p>
      <p>The survey starts with a general introduction to the scenarios: subjects
imagine that they apply for a job and that a chatbot appears on the company
website, informing them that it will preselect candidates instead of a human
recruiter. This CA communicates by chat and asks all the questions necessary
to assess their fit for the open position. After this introduction, seven
scenarios are presented to subjects in random order, each referring to the
outcome of this preselection process. (Decisions in later scenarios can be
affected by previous scenarios; this can be tested by comparing results for
each scenario when presented first with the results across all presentations.)
In all scenarios, subjects are informed that they were rejected by the CA in
the preselection process. In a baseline scenario, BASE, subjects are simply
informed that the CA decided to reject their application. This mimics the
result-focused decision of a typical CA based on a black box AI. We vary BASE
with regard to two factors derived from the literature: explainability (EXPLAIN)
and the type of skill used in the explanation (SKILLTYPE).</p>
      <p>
        In the three EXPLAIN variations, we distinguish between explanations of
black box models and interpretable models [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Two of the variations
of the EXPLAIN factor offer explanations of black box model decisions. In
EXPLAIN LIST, subjects are provided with a list of the three criteria that the
rejection is based on. In EXPLAIN COMPARE, subjects see a visualization
of the score that the conversational agent assigned to them and the average
score of other applicants. In the third variation of the factor EXPLAIN,
EXPLAIN INTERPRET, participants are shown a simple decision tree using the
same criteria as in the first two variations of EXPLAIN. The path to the decision
"reject" is highlighted in the decision tree, and paths to the decision "accept"
remain visible. Such a decision tree is a typical example of a simple rule-based
model, which can be intuitively interpreted by humans [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
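      <p>To make the EXPLAIN INTERPRET variation more concrete, the following minimal Python sketch encodes the kind of simple rule-based decision tree described above. The criteria mirror the verifiable qualifications used in our scenarios, but the thresholds and the function name are illustrative assumptions, not the instrument shown to subjects.</p>
      <preformat>
# Minimal sketch of an interpretable, rule-based model of the kind used in
# EXPLAIN INTERPRET. Thresholds are illustrative assumptions; the function
# returns the decision together with the rule path that would be highlighted.
def preselect(experience_years, english_level, computer_skills):
    path = []
    if experience_years >= 2:
        path.append("work experience: at least 2 years")
        if english_level >= 3:
            path.append("command of English: at least level 3")
            return "accept", path
        path.append("command of English: below level 3")
        if computer_skills >= 4:
            path.append("computer knowledge: at least level 4")
            return "accept", path
        path.append("computer knowledge: below level 4")
        return "reject", path
    path.append("work experience: below 2 years")
    return "reject", path

decision, path = preselect(experience_years=3, english_level=2, computer_skills=2)
print(decision)           # reject
print(" -> ".join(path))  # the highlighted path to the decision "reject"
      </preformat>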
      <p>
        We again vary all three EXPLAIN variations with the two variations of the
factor SKILLTYPE, resulting in a three-by-two design. The factor SKILLTYPE
is a natural consequence of explaining a hiring decision, as such decisions must
be based on the match between the skills of the candidate and the position
to be filled. The two variations of the factor SKILLTYPE capture the
distinction between "emotional" and "cognitive" judgements, which was also used in
a previous scenario study on human perceptions of AI decisions. This study
distinguishes between "mechanical" and "human" skills, the latter of which are
meant to capture emotional capabilities or subjective judgements [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Mechanical
skills refer to objective measures. For the recruiting application, we operationalize
human skills as soft skills in SKILLTYPE SOFT and mechanical skills as more
objectively verifiable qualifications in SKILLTYPE VERIFY. For soft skills, we
use the ability to work in teams, communication skills, and diligence; for verifiable
qualifications, work experience, command of English, and computer knowledge.
      </p>
      <p>Combining each of the EXPLAIN variations with each of the SKILLTYPE
variations results in six scenarios in addition to BASE. These six scenarios,
together with the key elements of the explanations as shown to subjects, are
displayed in Figure 1. The full questionnaire is available upon request.</p>
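      <p>For illustration, the resulting scenario set can be enumerated as the Cartesian product of the two factors plus the baseline. This is a minimal sketch under our labeling assumptions; the scenario labels are ours and do not appear in the questionnaire.</p>
      <preformat>
from itertools import product
import random

# The 3x2 factorial design: crossing each EXPLAIN variation with each
# SKILLTYPE variation yields six explanation scenarios in addition to BASE.
EXPLAIN = ["LIST", "COMPARE", "INTERPRET"]
SKILLTYPE = ["SOFT", "VERIFY"]

scenarios = ["BASE"] + [f"EXPLAIN_{e} + SKILLTYPE_{s}"
                        for e, s in product(EXPLAIN, SKILLTYPE)]
assert len(scenarios) == 7

random.shuffle(scenarios)  # each subject sees the scenarios in random order
print(scenarios)
      </preformat>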
    </sec>
    <sec id="sec-3">
      <title>Outlook</title>
      <p>We conducted the survey described above with 490 persons from a
quota-representative population sample for Germany and Austria. A preliminary
analysis of the results indicates that XAI increases the intention to use CAs
in recruiting, compared to CAs relying on black box AI. The next step is to
rigorously analyze the collected data. We believe that the developed scenarios
capture important aspects of CAs in the field of recruiting, but also of AI in
general. XAI, by overcoming the black box nature of many algorithms, is seen as an
important step toward creating fair, accountable, and transparent (FAT) AI solutions.
This in turn should also increase the trust of those affected by the decisions.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>1. <string-name><surname>Arrieta</surname>, <given-names>A.B.</given-names></string-name>, <string-name><surname>Díaz-Rodríguez</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Del Ser</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Bennetot</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Tabik</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Barbado</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>García</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Gil-Lopez</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Molina</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Benjamins</surname>, <given-names>R.</given-names></string-name>, et al.: <article-title>Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI</article-title>. <source>Information Fusion</source> <volume>58</volume>, <fpage>82</fpage>–<lpage>115</lpage> (<year>2020</year>)</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>2. <string-name><surname>Aviram</surname>, <given-names>H.</given-names></string-name>: <article-title>What would you do? Conducting web-based factorial vignette surveys</article-title>. In: <string-name><surname>Gideon</surname>, <given-names>L.</given-names></string-name> (ed.) <source>Handbook of survey methodology for the social sciences</source>, pp. <fpage>463</fpage>–<lpage>473</lpage>. Springer, New York, NY (<year>2012</year>)</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>3. <string-name><surname>Biran</surname>, <given-names>O.</given-names></string-name>, <string-name><surname>Cotton</surname>, <given-names>C.</given-names></string-name>: <article-title>Explanation and justification in machine learning: A survey</article-title>. In: <source>IJCAI-17 Workshop on Explainable AI (XAI)</source>, vol. 8, pp. <fpage>8</fpage>–<lpage>13</lpage> (<year>2017</year>)</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>4. <string-name><surname>Kim</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Park</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Suh</surname>, <given-names>J.</given-names></string-name>: <article-title>Transparency and accountability in AI decision support: Explaining and visualizing convolutional neural networks for text information</article-title>. <source>Decision Support Systems</source> <volume>134</volume>, <fpage>1</fpage>–<lpage>11</lpage> (<year>2020</year>)</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>5. <string-name><surname>Lee</surname>, <given-names>M.K.</given-names></string-name>: <article-title>Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management</article-title>. <source>Big Data &amp; Society</source> <volume>5</volume>, <fpage>1</fpage>–<lpage>16</lpage> (<year>2018</year>)</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>6. <string-name><surname>Leong</surname>, <given-names>C.</given-names></string-name>: <article-title>Technology &amp; recruiting 101: How it works and where it's going</article-title>. <source>Strategic HR Review</source> <volume>17</volume>, <fpage>50</fpage>–<lpage>52</lpage> (<year>2018</year>)</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>7. <string-name><surname>Rai</surname>, <given-names>A.</given-names></string-name>: <article-title>Explainable AI: From black box to glass box</article-title>. <source>Journal of the Academy of Marketing Science</source> <volume>48</volume>, <fpage>137</fpage>–<lpage>141</lpage> (<year>2020</year>)</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>8. <string-name><surname>Ribeiro</surname>, <given-names>M.T.</given-names></string-name>, <string-name><surname>Singh</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Guestrin</surname>, <given-names>C.</given-names></string-name>: <article-title>"Why should I trust you?" Explaining the predictions of any classifier</article-title>. In: <source>Proceedings of NAACL-HLT 2016 (Demonstrations)</source>, pp. <fpage>97</fpage>–<lpage>101</lpage> (<year>2016</year>)</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>9. <string-name><surname>Rudin</surname>, <given-names>C.</given-names></string-name>: <article-title>Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead</article-title>. <source>Nature Machine Intelligence</source> <volume>1</volume>, <fpage>206</fpage>–<lpage>215</lpage> (<year>2019</year>)</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>10. <string-name><surname>Selbst</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Powles</surname>, <given-names>J.</given-names></string-name>: <article-title>Meaningful information and the right to explanation</article-title>. <source>International Data Privacy Law</source> <volume>7</volume>, <fpage>233</fpage>–<lpage>242</lpage> (<year>2017</year>)</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>11. <string-name><surname>Wang</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Benbasat</surname>, <given-names>I.</given-names></string-name>: <article-title>Recommendation agents for electronic commerce: Effects of explanation facilities on trusting beliefs</article-title>. <source>Journal of Management Information Systems</source> <volume>23</volume>, <fpage>217</fpage>–<lpage>246</lpage> (<year>2007</year>)</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>