<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Hermann Kaindl</string-name>
          <email>hermann.kaindl@tuwien.ac.at</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Davor Svetinovic</string-name>
          <email>davor.svetinovic@ku.ac.ae</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Center on Cyber-Physical Systems, Department of Computer Science, Khalifa University of Science and Technology</institution>
          ,
          <addr-line>Abu Dhabi, UAE</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Institute of Computer Technology, TU Wien</institution>
          ,
          <addr-line>Vienna</addr-line>
          ,
          <country country="AT">Austria</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <abstract>
<p>An important question with regard to new systems is whether they can be trusted by human users, especially when they are safety-critical. This problem should already be addressed in the course of requirements engineering for such systems. However, trust is actually a complex psychological and sociological concept. Thus, it cannot simply be taken into account as a desired or needed property of a system. We propose to learn from the Human Factors discipline on trust in technical systems. In particular, we argue that both undertrust and overtrust need to be avoided. The challenge is to determine system properties and activities in the course of requirements engineering for achieving that. We conjecture that both actual properties like safety and subjective assessments like perceived safety will be important, and that they will have to be balanced to avoid undertrust and overtrust.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
<p>The ERTRAC roadmap [ES17] covers higher levels of automated driving functions. Regarding user and societal acceptance, one of the challenges it mentions is trust. There are no hints, however, on how to address it.</p>
<p>At the recent RE'18 conference, Cysneiros et al. [CRdPL18] suggested that self-driving cars that demonstrate transparency in their operations will increase consumer trust, and they investigated transparency as a non-functional requirement. However, they did not take into account the work on trust in automation by the Human Factors community, which applies psychological and physiological principles to the (engineering and) design of products, processes, and systems. The goal of Human Factors is to reduce human error, increase productivity, and enhance safety and comfort, with a specific focus on the interaction between the human and the thing of interest; see https://en.wikipedia.org/wiki/Human_factors_and_ergonomics.</p>
<p>For a better understanding of the issues involved in requirements engineering regarding trust, we propose to learn from the field of Human Factors.</p>
    </sec>
    <sec id="sec-2">
      <title>Previous Work on Human Factors</title>
<p>Hence, let us sketch some previous work on Human Factors that may inform future work on requirements engineering. The significance of trust is not limited to the interpersonal domain; trust can also define the way people interact with technology. Thus, the concept of trust in automation has been the focus of substantial research over the past several decades.</p>
<p>In their review of related work after 2002, Hoff and Bashir [HB15] surveyed many studies in which the notion of trust was transferred from trust relationships between humans to trust in automation: the truster is a human, while the trustee is an automated system.</p>
<p>Earlier work on similarities and differences between human–human and human–automation trust had already been reviewed in [MW07b]. In a nutshell, a major insight appears to be that anthropomorphizing, i.e., ascribing human features to an automated system, reduces the differences. This is confirmed by a more recent study with potential (end-)users of autonomously driving cars, which assumed that trust is directly linked to perceived safety [HvBPB17]. Using a car simulator without any aspect of actual safety involved, this human factors study showed that trust in a self-driving car was directly linked to perceived safety, which was highest in the case of anthropomorphic visualization through a chauffeur avatar. The avatar reacts to the same events as a world-in-miniature visualization that presents the car's perception of its surroundings, its interpretation, and its actions, but it is more human-like and potentially associated with more feelings.</p>
<p>Wang et al. [WJH09] studied the appropriateness of reliance depending on “reliability disclosure,” which refers to whether participants were explicitly informed of the reliability of the feedback (as estimated by the system itself). More precisely, they provided meta-information on the reliability of the feedback together with the feedback itself. Overall, the informed group coped with the changing reliability of the feedback better than the uninformed group. Instead of informing participants verbally, Neyedli et al. [NHJ11] studied displaying this meta-information in real time in different ways, namely pie displays and mesh displays. They showed that this can change reliance on the automation.</p>
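<p>The idea of reliability disclosure can be sketched as a small data structure that pairs the automation's feedback with its self-estimated reliability. This is an illustrative sketch only; the type and function names are our assumptions, not code or terminology from the cited studies.</p>
<preformat>
```python
# Hypothetical sketch of "reliability disclosure": the automated aid returns
# its feedback together with meta-information about that feedback's estimated
# reliability, so the operator can calibrate reliance on the aid.
from dataclasses import dataclass


@dataclass
class DisclosedFeedback:
    label: str          # the aid's judgment, e.g. "friend" or "foe"
    reliability: float  # self-estimated probability that the label is correct


def present(feedback: DisclosedFeedback) -> str:
    """Render the feedback with its reliability disclosed to the operator."""
    percent = round(feedback.reliability * 100)
    return f"{feedback.label} (aid reliability: {percent}%)"
```
</preformat>
<p>A display component could then render, e.g., present(DisclosedFeedback("friend", 0.8)) for the operator instead of the bare label, whether verbally or graphically as in the pie and mesh displays mentioned above.</p>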
    </sec>
    <sec id="sec-3">
      <title>The Challenge for Requirements Engineering</title>
<p>We conjecture that both actual properties like safety and subjective assessments like perceived safety will be important, and that they will have to be balanced to avoid undertrust and overtrust. The challenge for requirements engineering is to determine the system properties, and the activities in the course of requirements engineering, needed for achieving a good balance between actual and perceived safety.</p>
<p>As illustrated in Figure 1, this means avoiding both the region of undertrust and the region of overtrust. One open question is what can be done to avoid undertrust, so that, e.g., a safe system (with a very low probability of causing injuries or even fatalities) is not rejected by people simply because they do not trust it. Conversely, another open question is what can be done to avoid overtrust, so that, e.g., drivers using current car autopilots do not simply hand over control but still monitor the traffic situation and remain ready to take over.</p>
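<p>The balance between actual and perceived safety can be made concrete with a toy classifier for the two regions. This is only an illustrative sketch under the assumptions of normalized safety scores and a chosen tolerance band; it is not a construct from the paper or from Figure 1.</p>
<preformat>
```python
# Illustrative sketch of trust calibration: compare perceived safety against
# actual safety and flag the undertrust and overtrust regions. Both scores
# are assumed to be normalized to [0, 1]; the tolerance is an assumption.
def trust_region(actual_safety: float, perceived_safety: float,
                 tolerance: float = 0.1) -> str:
    """Classify trust as calibrated, undertrust, or overtrust.

    Trust is taken as calibrated when perception stays within the
    tolerance band around the actual level of safety.
    """
    gap = perceived_safety - actual_safety
    if gap > tolerance:
        return "overtrust"    # the system is trusted more than warranted
    if -gap > tolerance:
        return "undertrust"   # a safe system is not trusted enough
    return "calibrated"
```
</preformat>
<p>Undertrust then corresponds to perceived safety falling well below actual safety, and overtrust to the converse; the open questions above concern how system properties and requirements engineering activities can push a system into the calibrated band.</p>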
<p>We propose to study these questions by considering objectively determined properties like safety and by determining adequate anthropomorphism for disclosing their level. For evaluations, studies like those established in the field of Human Factors will most likely be necessary for assessing the balance.</p>
<p>However, based on the very recent observations in [FWCR19], it seems we cannot just rely upon the social sciences to take growing computer science research into account. It is critical for us to phrase the engineering questions in a way that they can be included in social science research, and to find a way of incorporating the results in the engineering solutions.</p>
<p>Besides the explicit relationship between safety and trust in safety-critical systems, the importance of trust is becoming more and more evident in privacy- and security-critical systems. For example, it has been observed that privacy violations have a significant impact on trust [Mar18], and the security of online systems is a necessary prerequisite for trust in such systems. This is especially evident in the new generation of blockchain-based systems [ARS18].</p>
<p>In order to achieve a proper understanding of how to handle undertrust and overtrust, we will have to incorporate studies with humans into requirements engineering, using a combination of standard social science research methods: case study, survey, observational, correlational, experimental, and cross-cultural methods. However, in order for these studies to be effective, it will be of critical importance that we learn how to pose the research questions in a way that the results of the social studies can be engineered into the technical systems to be trusted.</p>
      <p>Finally, let us propose a short list of research questions. We believe it will be of critical importance to handle
properly both methodological and technical questions. The most pressing methodological questions that we see
are:</p>
<list list-type="bullet">
        <list-item><p>How do we map research questions and methods from Requirements Engineering to Human Factors, and vice versa?</p></list-item>
        <list-item><p>How do we ensure that engineering questions are properly included in Human Factors research, and how can the produced results be effectively engineered into the systems to be trusted?</p></list-item>
        <list-item><p>How do we apply computational social science methods in both Requirements Engineering and Human Factors?</p></list-item>
      </list>
<p>The technical questions regarding trust, undertrust, and overtrust in Requirements Engineering will become even more pressing, and they will probably best be tackled in the context of the emerging application areas of Artificial Intelligence, autonomous adaptive systems, and blockchain:</p>
<list list-type="bullet">
        <list-item><p>How do we relate trust/undertrust/overtrust and non-functional requirements?</p></list-item>
        <list-item><p>How do we relate trust regarding safety/security/privacy?</p></list-item>
        <list-item><p>How do we develop evaluation techniques for trust regarding actual vs. perceived safety?</p></list-item>
        <list-item><p>How do we reduce trust manipulations in autonomous adaptive systems through techniques such as anthropomorphization?</p></list-item>
        <list-item><p>How do we define trust in fully open decentralized blockchain systems that store data and run code supplied by any anonymous entity?</p></list-item>
      </list>
<p>How do we ensure ongoing trust in evolving systems with emerging behavior and machine learning?</p>
<p>[ARS18] Israa Alqassem, Iyad Rahwan, and Davor Svetinovic. The anti-social system properties: Bitcoin network data analysis. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2018.</p>
<p>[ES17] EPoSS, ERTRAC, and ETIP SNET. ERTRAC automated driving roadmap. ERTRAC Working Group, 7, 2017.</p>
<p>[HB15] Kevin Anthony Hoff and Masooda Bashir. Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3):407–434, 2015.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
<mixed-citation>[CRdPL18] Luiz Marcio Cysneiros, Majid Raffi, and Julio Cesar Sampaio do Prado Leite. Software transparency as a key requirement for self-driving cars. In 2018 IEEE 26th International Requirements Engineering Conference (RE), pages 382–387. IEEE, 2018.</mixed-citation>
      </ref>
      <ref id="ref2">
<mixed-citation>[HvBPB17] Renate Hauslschmid, Max von Buelow, Bastian Pfleging, and Andreas Butz. Supporting trust in autonomous driving. In Proceedings of the 22nd International Conference on Intelligent User Interfaces, pages 319–329. ACM, 2017.</mixed-citation>
      </ref>
      <ref id="ref3">
<mixed-citation>[FWCR19] Morgan R. Frank, Dashun Wang, Manuel Cebrian, and Iyad Rahwan. The evolution of citation graphs in artificial intelligence research. Nature Machine Intelligence, 1(2):79–85, 2019.</mixed-citation>
      </ref>
      <ref id="ref4">
<mixed-citation>[Mar18] Journal of Business Research, 82:103–116, 2018.</mixed-citation>
      </ref>
      <ref id="ref5">
<mixed-citation>[MW07a] P. Madhavan and D. A. Wiegmann. Similarities and differences between human–human and human–automation trust: an integrative review. Theoretical Issues in Ergonomics Science, 8(4):277–301, 2007.</mixed-citation>
      </ref>
      <ref id="ref6">
<mixed-citation>[MW07b] Poornima Madhavan and Douglas A. Wiegmann. Similarities and differences between human–human and human–automation trust: an integrative review. Theoretical Issues in Ergonomics Science, 8(4):277–301, 2007.</mixed-citation>
      </ref>
      <ref id="ref7">
<mixed-citation>[NHJ11] Heather F. Neyedli, Justin G. Hollands, and Greg A. Jamieson. Beyond identity: Incorporating system reliability information into an automated combat identification system. Human Factors, 53(4):338–355, 2011.</mixed-citation>
      </ref>
      <ref id="ref8">
<mixed-citation>[WGM00] Christopher D. Wickens, Keith Gempler, and M. Ephimia Morphew. Workload and reliability of predictor displays in aircraft traffic avoidance. Transportation Human Factors, 2(2):99–126, 2000.</mixed-citation>
      </ref>
      <ref id="ref9">
<mixed-citation>[WJH09] Lu Wang, Greg A. Jamieson, and Justin G. Hollands. Trust and reliance on an automated combat identification system. Human Factors, 51(3):281–291, 2009.</mixed-citation>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>