                                 What Can Be Learnt from Engineering
                                 Safety Critical Partly-Autonomous
                                 Systems when Engineering
                                 Recommender Systems

Camille Fayollas
Célia Martinie
Philippe Palanque
Eric Barboni
ICS-IRIT, University of Toulouse,
118, route de Narbonne, 31042
Toulouse, France
{lastname}@irit.fr

Yannick Deleris
AIRBUS Operations,
316 Route de Bayonne, 31060
Toulouse, France
yannick.deleris@airbus.com

Copyright is held by the author/owner(s).
EICS'16, June 21-24, 2016, Bruxelles, Belgium.

Abstract
The main target of Human-Automation Design is to design systems in such a way that the couple formed by the system and the operator performs as efficiently as possible. Means for such designs include identifying functions (on the system side) and tasks (on the operator's side) and balancing the allocation of tasks and functions between the operators and the systems being operated. Allocating functions to the most suitable actor has been the early driver of function allocation [18]. The philosophy of recommender systems is that the system will provide a set of options for the users to select from. Such behavior can easily be connected to previous work on levels of automation as defined by Sheridan [34], and lessons can be drawn from putting these two views together. When these automations (including the one of recommender systems) are not adequately designed (or not correctly understood by the operator), they may result in so-called automation surprises [25, 32] that degrade, instead of enhance, the overall performance of the operations. This position paper identifies issues related to bringing recommender systems into the domain of safety critical interactive systems. While their advantages are clearly pointed out by their advocates, limitations are usually hidden or overlooked. We present this argumentation in the case of the ECAM (Electronic Centralised Aircraft Monitor), of which some behavior could be considered as similar to the one of a recommender system. We also highlight some engineering aspects of deploying recommender systems in the safety critical domain.

Author Keywords
Automation, recommender systems, transparent automation, operator tasks, performance

ACM Classification Keywords
D.2.2 [Design Tools and Techniques]: Computer-aided software engineering (CASE); H.5.3 [Group and Organization Interfaces]

Introduction
The main target of Human-Automation Design is to design systems in such a way that the couple formed by the system and the operator performs as efficiently as possible. Allocating functions to the most suitable actor has been the early driver of function allocation, as advocated by Fitts [18]. Such work is known as MABA-MABA (Men Are Better At - Machines Are Better At), where the underlying philosophy was that automating as many functions as possible was perceived as adequate (see for instance [11]). This technology-centered view has led to unsafe and unusable systems [32], and it became clear that the design of usable partly-autonomous systems is a difficult task.

Recommender systems belong to this trend of work on automation, even though that characteristic is not put forward, or is even ignored, by their designers and promoters. When the word automation is connected to recommender systems, it is usually to explain that the recommender system evolves autonomously, as in the "automated collaborative filtering" used for instance in GroupLens [21].

When automation is not adequately designed, or not correctly perceived and understood by the operator, it may result in so-called automation surprises [33] that degrade, instead of enhancing as expected, the overall performance of the operations, and might lead to incidents or even accidents [25]. The issue of the usability of recommender systems was also identified in the early days [35], even though the recommender was only perceived as an interactive application being used by users and not as an interaction taking place with a partly-autonomous system. Work in that domain focuses on the usefulness of recommender systems and on their usability or user experience [20].

This paper argues that having an automation-centered view on recommender systems helps to identify design issues related to their user interfaces and could inform design decisions and the evaluation of these systems. Such a perspective could also help understanding issues that have to be addressed prior to the deployment of such systems in the context of safety critical interactive systems.

The next section proposes a short overview of the main concepts related to automation and focuses on the human aspects of automation. Then the paper positions recommender systems within that context and highlights similarities with other systems as well as their specificities. The paper then presents the case study of the ECAM (Electronic Centralised Aircraft Monitor) and how this system relates to recommender systems. The last section lists design and engineering issues related to the deployment of recommender systems in the area of safety critical interactive systems.

Even though the levels of automation presented in Figure 1 can support the understanding of automation, they cannot be used as a means for assessing the automation of a system, which has to be done at a much finer grain, i.e., "function" by "function". However, if a detailed description of the "functions" is provided, they make it possible to support both the decision and the design process of migrating a function from the operator's activity to the system or vice versa.

HIGH  10. The computer decides everything, acts autonomously, ignoring the human
      9. Informs the human only if it, the computer, decides to
      8. Informs the human only if asked, or
      7. Executes automatically, then necessarily informs the human, and
      6. Allows the human a restricted time to veto before automatic execution, or
      5. Executes that suggestion if the human approves, or
      4. Suggests one alternative
      3. Narrows the selection down to a few, or
      2. The computer offers a complete set of decision/action alternatives, or
LOW   1. The computer offers no assistance: human must take all decisions and actions

Figure 1: Levels of automation of decision and action selection (from [34] and [27])

As stated in [27], automated systems can operate at specific levels within this continuum, and automation can be applied not only to the output functions but also to the input functions. Figure 2 presents the four-stage model of human information processing as introduced in [27].

Figure 2: Simple four-stage model of human information processing.

The first stage refers to the acquisition and recording of multiple forms of information. The second one involves conscious perception, and the manipulation of processed and retrieved information in working memory. The third stage is where decisions are accomplished by cognitive processes. The last one contains the implementation of a response or action consistent with the decision made in the previous stage. The first three stages in that model represent how the operator processes the information that is rendered by the interactive system. The last stage identifies the response from the user, which may correspond to providing input to the controlled system by means of the interactive system (flow of events from the user towards the controlled system).

Figure 3: Four classes of system functions (that can be automated)

The model in Figure 2 (about human information processing) has a similar counterpart in the system's functions, as shown in Figure 3. Each of these functions can be automated to different degrees. For instance, the sensory processing stage (in Figure 2) could be migrated to the information acquisition stage (in Figure 3) by developing hardware sensors. The second stage in the human model could be automated by developing inferential algorithms (as for instance in recommender systems). The third stage involves selection from several alternatives, which can easily be implemented with algorithms. The final stage, called action implementation, refers to the execution of the choice. Automation of this stage may involve different levels of machine execution and could even replace the physical effectors (e.g., hand or voice) of the operator [28]. The stages of human information processing as well as their corresponding classes of system functions are used to analyze and design which tasks are performed by the human operator and which functions are performed by the system (also called function allocation, as defined in [15]).
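
To make this function-by-function assessment concrete, the following minimal Python sketch (ours, not taken from [27] or [34]; all identifiers are illustrative assumptions) encodes the levels of Figure 1 and the four classes of system functions of Figure 3, so that a level of automation can be recorded for each function of a system:

    # Illustrative encoding (our assumption, not the cited works' code) of
    # the Sheridan scale (Figure 1) and the four classes of system
    # functions (Figure 3), supporting function-by-function assessment.
    from dataclasses import dataclass
    from enum import Enum, IntEnum

    class LevelOfAutomation(IntEnum):
        NO_ASSISTANCE = 1         # human must take all decisions and actions
        COMPLETE_SET = 2          # offers the complete set of alternatives
        NARROWED_SET = 3          # narrows the selection down to a few
        ONE_SUGGESTION = 4        # suggests one alternative
        EXECUTE_IF_APPROVED = 5   # executes the suggestion if the human approves
        VETO_WINDOW = 6           # restricted time to veto before execution
        EXECUTE_THEN_INFORM = 7   # executes, then necessarily informs the human
        INFORM_IF_ASKED = 8       # informs the human only if asked
        INFORM_IF_IT_DECIDES = 9  # informs only if it, the computer, decides to
        FULL_AUTONOMY = 10        # decides everything, ignoring the human

    class SystemFunction(Enum):
        INFORMATION_ACQUISITION = "information acquisition"
        INFORMATION_ANALYSIS = "information analysis"
        DECISION_SELECTION = "decision and action selection"
        ACTION_IMPLEMENTATION = "action implementation"

    @dataclass
    class FunctionAssessment:
        function: SystemFunction
        level: LevelOfAutomation

    # e.g., a system whose analysis stage is fully automated but whose
    # decision stage only narrows the selection down to a few (level 3):
    assessment = [
        FunctionAssessment(SystemFunction.INFORMATION_ANALYSIS,
                           LevelOfAutomation.FULL_AUTONOMY),
        FunctionAssessment(SystemFunction.DECISION_SELECTION,
                           LevelOfAutomation.NARROWED_SET),
    ]
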
Based on this theoretical framework, several techniques and methods have been proposed to analyze, design and evaluate human automation interaction. Proud et al. proposed the LOA (Level Of Autonomy) Assessment Tool [30] (based on a LOA Assessment Scale), which produces analytical summaries of the appropriate LOA for particular functions and has been applied to an Autonomous Flight Management System. Cummings et al. [13] identified a refinement mechanism for the decision making step, to help in deciding which of the human or the system should perform a given decision task. More generally, techniques based on cognitive task analysis, such as the one proposed in [3], help in understanding precisely the different tasks that are actually performed by the human operator. Model-based approaches take advantage of task analysis and propose to systematically ensure consistency and coherence between task models and the system behavioral description [22]. Johansson et al. [19] developed a simulation tool to analyze the effect of the level of automation and emphasize the importance of a simulation framework for getting feedback on design choices before deploying the system. Finally, several techniques have been coined to provide support for the formal verification of human automation interaction [9], which aim at providing tools for checking conformance between what the system has to perform and what the user is responsible for. For each of these techniques and methods, human automation interaction is dealt with as a whole, thus focusing on goal-related tasks.

Recommender systems as partly-autonomous systems
Recommender systems may be based on different approaches. They may implement content-based filtering, knowledge-based filtering, collaborative filtering or hybrid filtering [8]. We do not describe here the various types of recommender systems in detail and encourage the interested reader to see [8] for a very detailed and pedagogic survey. We here focus on content-based approaches for recommender systems, as it is a relevant filtering approach for interactive cockpits. Figure 4 presents a typical architecture of a content-based recommender system. The system stores a set of items and each item is described according to a set of attributes. In such systems the user is described according to these attributes too, this description being named the user profile. According to the user profile and the attributes of the set of items, the system proposes a set of recommendations to the user.

Figure 4: Content-based recommendation principle (from [8])
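
To illustrate this principle, the sketch below shows one possible minimal implementation (our assumption: attributes are weighted keywords shared by the items and the profile, and cosine similarity is used for matching; Figure 4 does not prescribe a particular similarity measure):

    # Minimal content-based filtering sketch (illustrative, hypothetical data).
    from math import sqrt

    def cosine(u: dict, v: dict) -> float:
        """Cosine similarity between two sparse attribute vectors."""
        dot = sum(u[a] * v[a] for a in u.keys() & v.keys())
        norm = (sqrt(sum(x * x for x in u.values()))
                * sqrt(sum(x * x for x in v.values())))
        return dot / norm if norm else 0.0

    def recommend(profile: dict, items: dict, k: int = 3) -> list:
        """Rank items by similarity between their attributes and the profile."""
        scored = ((cosine(profile, attrs), name) for name, attrs in items.items())
        return [name for score, name in sorted(scored, reverse=True)[:k] if score > 0]

    # Toy usage: the user profile and the items share one attribute vocabulary.
    items = {"item-A": {"weather": 1.0, "route": 0.5},
             "item-B": {"fuel": 1.0},
             "item-C": {"route": 1.0, "fuel": 0.2}}
    print(recommend({"route": 1.0, "weather": 0.3}, items, k=2))
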
Recommender Systems and Levels of Automation: According to the levels of automation presented in Figure 1, recommender systems typically fall within levels 2 to 4, depending on the number of alternatives presented to the user. Rules for designing autonomous systems would thus apply to recommender systems to avoid known issues such as automation surprises [32, 25]. The issue of the transparency of automation has also been identified for recommender systems [14], while a process and a notation to systematically engineer transparency for partly-autonomous systems were proposed in [7].

Recommender Systems and system functions: The recommender system behavior in Figure 4 and the classes of system functions in Figure 3 can be related as follows:

    • Information acquisition: this stage corresponds to the process in the recommender system of browsing the internal source of items being candidates for recommendation. Information can be entirely stored at initialization or gathered during execution.

    • Information analysis: this stage corresponds to the recommender system process of correlating the user profile with the items' descriptions.

    • Decision and action selection: this stage corresponds to the filtering process of the recommender system, selecting amongst the list of candidate items and ranking them before presentation to the operator.

    • Action implementation: this stage corresponds to the presentation on the user interface of the selected items to the operator. That presentation of information can sometimes be enriched with argumentation about the rationale for selecting the items [12]. It is also important to note that, following that presentation of information, user interaction is usually available, allowing users to browse the list of recommended items, to access more information about them and to select the desired one. Such operator behavior is taken explicitly into account by the operator behavior model of Figure 2 and detailed below.

Recommender Systems and operators' behavior: As far as user activity is concerned, the operator behavior described in Figure 2 can be refined for describing interaction with a recommender system. According to the four stages of human information processing proposed in Figure 2, it is easy to relate them to the recommender system behavior:

    • Sensory processing: while interacting with the recommender system, this activity would consist in all the operator's information sensing, both from the recommender and from the system under use. The localization of the information from the recommender system might deeply impact that sensing.

    • Perception/working memory: at this stage, information from the recommender system will be integrated with the information presented by the system. It is important to note that human errors such as interference or overshooting a stop rule [31] may occur and thus should be avoided (and, when avoidance is not possible, detected, recovered or mitigated).

    • Decision making: this corresponds to the decision of the operator when selecting one of the recommendations presented by the recommender system (if more than one is offered).

    • Response selection: this is the actual selection of one of the recommender system's recommendations. As stated above, that stage might involve additional cycles within this four-stage model (while operators interact with the recommender system, e.g., browsing the recommendations or accessing more information about a given recommendation).

The mapping between recommender system processes and Parasuraman's system functions, as well as the mapping between activities done with recommender systems and Parasuraman's stages of human information processing, provides support to analyze the impact of the level of automation of the recommender system functions on the operators' task.
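
This double mapping can be summarized compactly. The sketch below (ours; the one-to-one pairing of each system function with an operator stage is a simplification of the two lists above) restates it as data:

    # Illustrative summary of the two mappings discussed in this section.
    MAPPING = {
        "information acquisition":
            ("browse the internal source of candidate items",
             "sensory processing"),
        "information analysis":
            ("correlate the user profile with item descriptions",
             "perception/working memory"),
        "decision and action selection":
            ("filter and rank the candidate items",
             "decision making"),
        "action implementation":
            ("present the selected items (and rationale) on the UI",
             "response selection"),
    }

    for function, (recommender_process, operator_stage) in MAPPING.items():
        print(f"{function}: {recommender_process} | operator: {operator_stage}")
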

Illustrative example
To exemplify the concepts presented above in the context of a safety critical system, we present a case study from the avionics domain: the ECAM (Electronic Centralized Aircraft Monitor) in the Airbus family. The ECAM is a system that monitors aircraft systems (e.g., the engines) and relays to the pilots data about their state (e.g., if their use is limited due to a failure) and the procedures that have to be carried out by the pilots to recover from the failure.

The ECAM is composed of several systems. More particularly, the Flight Warning System (FWS) is in charge of processing the data from the monitoring of the aircraft systems. This processing enables: i) the display of information about the status of the aircraft systems' parameters (using the System Display (SD)); ii) the display of warnings about system failures and of the procedures that have to be completed by the pilot to process the detected warning (using the Warning Display (WD)); and iii) the production of aural and visual alerts (using several lights and loudspeakers in the cockpit).

The SD and WD are displayed, in the cockpit of the A380, on two separate Display Units (DU). These two DUs are highlighted in Figure 5 and are part of the eight DUs composing the Control and Display System (CDS). The CDS is the interactive system in aircraft cockpits (flight decks) that offers various operational services of major importance for the flight crew. It displays aircraft parameters and enables the flying crew to graphically interact with these parameters, using a keyboard and a mouse (KCCU for Keyboard and Cursor Control Unit), to control aircraft systems.

Figure 5: WD and SD in the cockpit of the A380

As presented in Figure 6, if the ECAM has created several warning messages simultaneously, it sorts them, in order to obtain a display order, according to three inhibition mechanisms:

    • Their priority: a priority is associated to each warning message;

    • FWS internal behavior: some warning messages may be inhibited in the presence of other warning messages (for instance, the "APU fault" warning message is not displayed if the "APU fire" warning is already detected);

    • The current flight phase: some warning messages are only displayed when the aircraft is in a given flight phase (for instance, flight management system failures are not displayed after landing).

Figure 6: Principle for the display of the warnings

Therefore, the warning messages are displayed (in the processed display order) to the pilots, within the WD, with three different colors representing their priority level:

    • Red warnings that require immediate actions from the pilots (e.g., the loss of an engine);

    • Amber warnings that require non-immediate actions from the pilots (e.g., a fault within the APU);

    • Green warnings that only require monitoring from the pilots but do not present any danger.
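
The following Python sketch gives a simplified picture of this display-order computation (hypothetical names and rules; the actual FWS logic is considerably richer): inhibition by already-detected warnings, filtering by flight phase, sorting by priority, and a color derived from the priority level:

    # Simplified sketch of FWS-like warning sorting (illustrative only).
    from dataclasses import dataclass, field

    @dataclass
    class WarningMsg:
        name: str
        priority: int                                      # 1 = most urgent
        inhibited_by: set = field(default_factory=set)     # FWS internal behavior
        visible_phases: set = field(default_factory=set)   # empty = all phases

    COLOR = {1: "red", 2: "amber", 3: "green"}             # priority levels

    def display_order(warnings, flight_phase):
        active = {w.name for w in warnings}
        shown = [w for w in warnings
                 if not (w.inhibited_by & active)          # inhibited by another warning
                 and (not w.visible_phases or flight_phase in w.visible_phases)]
        return sorted(shown, key=lambda w: w.priority)     # priority order

    # e.g., "APU fault" is inhibited when "APU fire" is already detected, and
    # a flight-management warning is not displayed after landing:
    ws = [WarningMsg("APU fire", 1),
          WarningMsg("APU fault", 2, inhibited_by={"APU fire"}),
          WarningMsg("FM failure", 2, visible_phases={"cruise", "approach"})]
    for w in display_order(ws, "landing"):
        print(COLOR[w.priority], w.name)                   # -> red APU fire
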
Figure 7 presents an example of the display of warning messages (one red warning and three amber warnings) and their associated recovery procedures on the ECAM. In this example, the red warning (L1 in Figure 7) informs the pilot that the autopilot is no longer working. Therefore, the first amber warning (L2 in Figure 7) informs the pilot that the auto-thrust is no longer working. The corresponding recovery procedure (L3 in Figure 7) indicates to the pilot that s/he has to take responsibility for the thrust by moving the thrust levers. The second amber warning (L4 and L5 in Figure 7) informs the pilot that the flight control laws are no longer working. The corresponding recovery procedure (L6 in Figure 7) indicates to the pilot that s/he has to take responsibility for the aircraft speed, which must remain under MACH 0.82.

Figure 7: Example of the display of warning messages on the ECAM (from [10])

These warning message notifications are similar to recommendations in recommender systems (see, for instance, the one presented in Figure 4) in the sense that the system sorts the warning messages and their associated recovery procedures and proposes to the pilots an order for their treatment. In this example, the system indicates to the pilot that the auto-thrust management function is off and that s/he shall manually move the thrust levers (lines 2 and 3 in Figure 7). Using Parasuraman's models, we can analyze that the system falls within level 4 of automation (Figure 1). In other cases, the system may propose a list of prioritized alarms and recovery procedures, which makes the system fall within level 3 of automation (Figure 1). As inferred from Parasuraman's levels of automation, the more alternatives the system proposes, the less automated it is.
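
This observation can be restated as a small illustrative helper (our sketch, following the level definitions of Figure 1):

    # Illustrative: derive the Sheridan level of a recommender from the
    # number of alternatives it presents (Figure 1, levels 2 to 4).
    def recommender_level(n_presented: int, n_candidates: int) -> int:
        if n_presented == 1:
            return 4                  # suggests one alternative
        if n_presented < n_candidates:
            return 3                  # narrows the selection down to a few
        return 2                      # offers the complete set of alternatives

    assert recommender_level(1, 10) == 4
    assert recommender_level(3, 10) == 3
    assert recommender_level(10, 10) == 2
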
                                                                      techniques enables to develop usable and reliable interac-
Design and engineering issues of recommender systems in the safety critical domain
This section tries to identify the potential of recommender systems as well as the design issues related to their engineering.

The classification framework of recommender systems proposed in [29] identifies multiple application domains where recommender systems have been deployed (an excerpt is presented in Figure 8).

Figure 8: Classification framework of recommender systems (from [29])

It is interesting to note that none of them targets the critical or safety critical domain. [14] presents the evaluation of a recommender system for single pilot operations, but no information is given about the design and development of the underlying system, nor about its user interface and interaction techniques.

We have considered several engineering approaches to examine these issues. First, the ICO user interface description technique enables developing usable and reliable interaction techniques [24]. Complemented with task modelling (for describing operators' goals and the tasks to be performed to reach these goals), it can be used to analyze the user's tasks with respect to the system's behavior [22]. Lastly, task models can also be used to assess whether the user or the system should handle a particular task in a particular context [22]. All of these techniques aim at finding the optimal collaboration between the user and the system, but they were not applied to a recommender system, not even to the AMAN (Arrival Manager) advisory tool for air traffic control, which could also be considered as a simple recommender system [23].

However, these approaches do not deal with the possible dynamic change of behavior of the system, especially if it has machine learning capabilities (reinjecting operators' selections into the items' information). Additionally, considering that safety-critical user interfaces require additional design and development paths, we identified the following set of issues that must be considered if the system is (partly) autonomous:

    • What is the usability of a recommender system in a critical context and how to evaluate it (as operators follow extensive training and have deep knowledge of the behavior of the supervised systems),

    • How to guarantee the safety and dependability of the possible interactions when browsing recommended items,

    • How to guarantee the safety and dependability of the underlying recommender system behavior,

    • How to analyze and prevent operators' errors,

    • How to assess and design responsibility, authority and liability between the recommender system and the operators (for instance, in an aircraft the entire authority belongs to the captain and not to the first officer),

    • How to design and specify interaction techniques where autonomous behavior from the system interferes with operator input (including the question of how to model that formally [7]),

    • How to design interaction so that the operators can foresee the system's future steps and states and the impact of selecting one recommendation instead of another one,

    • How to design interactions when the automation (recommendation) can fail and how to notify the operators about a degradation of the recommender system (for each of the stages in Figure 3),

    • How to enhance and evaluate aspects of user experience, while fulfilling the constraints of a safety-critical system, which has to be secure, safe, reliable and usable.

Summary and Conclusion
This position paper proposed to consider recommender systems as partly-autonomous systems. We have demonstrated that their behavior is similar to the one of autonomous systems and that existing classifications in that domain are applicable to recommender systems.

We have shown on a simple example from the aviation domain that current systems exhibit some of the characteristics of recommender systems. We have also highlighted design and development issues that currently prevent recommenders from being deployed in the context of safety critical systems, as well as some of the problems to be addressed.

Future work deals with the definition of engineering approaches for building reliable and fault-tolerant recommender systems, following what has been done in the past for interactive cockpit applications as presented in [16] and [36]. It is important to note that trade-offs between properties (such as usability and dependability, as presented in [17]) will also be present in the case of recommender systems in safety critical applications.

References
[1] Johnny Accot, Stéphane Chatty, Sébastien Maury, and Philippe Palanque. 1997. Formal transducers: models of devices and building bricks for the design of highly interactive systems. In Design, Specification and Verification of Interactive Systems '97. Springer, 143-159.
[2] Johnny Accot, Stéphane Chatty, and Philippe Palanque. 1996. A formal description of low level interaction and its application to multimodal interactive systems. In Design, Specification and Verification of Interactive Systems '96. Springer, 92-104.
[3] Julie A Adams, Curtis M Humphrey, Michael A Goodrich, Joseph L Cooper, Bryan S Morse, Cameron Engh, and Nathan Rasmussen. 2009. Cognitive task analysis for developing unmanned aerial vehicle wilderness search support. Journal of Cognitive Engineering and Decision Making 3, 1 (2009), 1-26.
[4] ARINC. 2002. ARINC 661 Cockpit Display System Interfaces to User Systems. ARINC Specification 661. (2002).
[5] Rémi Bastide, Philippe Palanque, Duc-Hoa Le, Jaime Muñoz, and others. 1998. Integrating rendering specifications into a formalism for the design of interactive systems. In Design, Specification and Verification of Interactive Systems '98. Springer, 171-190.
[6] Olivier Bau and Wendy E Mackay. 2008. OctoPocus: a dynamic guide for learning gesture-based command sets. In Proceedings of the 21st Annual ACM Symposium on User Interface Software and Technology. ACM, 37-46.
[7] R Bernhaupt, M Cronel, F Manciet, C Martinie, and P Palanque. 2015. Transparent Automation for Assessing and Designing better Interactions between Operators and Partly-Autonomous Interactive Systems. In Proceedings of the 5th International Conference on Application and Theory of Automation in Command and Control Systems. ACM, 129-139.
[8] Jesús Bobadilla, Fernando Ortega, Antonio Hernando, and Abraham Gutiérrez. 2013. Recommender systems survey. Knowledge-Based Systems 46 (2013), 109-132.
[9] Matthew L Bolton, Ellen J Bass, and Radu I Siminiceanu. 2013. Using formal verification to evaluate human-automation interaction: A review. IEEE Transactions on Systems, Man, and Cybernetics: Systems 43, 3 (2013), 488-503.
[10] Bureau d'Enquêtes et d'Analyses. 2012. Rapport final - Accident survenu le 1er juin 2009 à l'Airbus A330-203 immatriculé F-GZCP exploité par Air France, vol AF 447 Rio de Janeiro-Paris. Technical Report. République Française, Ministère de l'Ecologie, du Développement durable et de l'Énergie.
[11] Alphonse Chapanis. 1996. Human Factors in Systems Engineering. John Wiley & Sons, Inc.
[12] Carlos Iván Chesnevar and Ana G Maguitman. 2004. ArgueNet: An argument-based recommender system for solving web search queries. In Intelligent Systems, 2004. Proceedings of the 2004 2nd International IEEE Conference, Vol. 1. IEEE, 282-287.
[13] Mary L Cummings and Sylvain Bruni. 2009. Collaborative Human-Automation Decision Making. In Springer Handbook of Automation. Springer, 437-447.
[14] Arik-Quang V Dao, Kolina Koltai, Samantha D Cals, Summer L Brandt, Joel Lachter, Michael Matessa, David E Smith, Vernol Battiste, and Walter W Johnson. 2015. Evaluation of a recommender system for single pilot operations. Procedia Manufacturing 3 (2015), 3070-3077.
[15] Andy Dearden, Michael Harrison, and Peter Wright. 2000. Allocation of function: scenarios, context and the economics of effort. International Journal of Human-Computer Studies 52, 2 (2000), 289-318.
[16] Camille Fayollas, Jean-Charles Fabre, Philippe Palanque, Eric Barboni, David Navarre, and Yannick Deleris. 2013. Interactive cockpits as critical applications: a model-based and a fault-tolerant approach. International Journal of Critical Computer-Based Systems 4, 3 (2013), 202-226.
[17] Camille Fayollas, Célia Martinie, P Palanque, Yannick Deleris, J-C Fabre, and David Navarre. 2014. An Approach for Assessing the Impact of Dependability on Usability: Application to Interactive Cockpits. In Dependable Computing Conference (EDCC), 2014 Tenth European. IEEE, 198-209.
[18] Paul M Fitts. 1951. Human engineering for an effective air-navigation and traffic-control system. (1951).
[19] Björn Johansson, Åsa Fasth, Johan Stahre, Juhani Heilala, Swee Leong, Y Tina Lee, and Frank Riddick. 2009. Enabling flexible manufacturing systems by using level of automation as design parameter. In Winter Simulation Conference. Winter Simulation Conference, 2176-2184.
[20] Bart P Knijnenburg, Martijn C Willemsen, Zeno Gantner, Hakan Soncu, and Chris Newell. 2012. Explaining the user experience of recommender systems. User Modeling and User-Adapted Interaction 22, 4-5 (2012), 441-504.
[21] Joseph A Konstan, Bradley N Miller, David Maltz, Jonathan L Herlocker, Lee R Gordon, and John Riedl. 1997. GroupLens: applying collaborative filtering to Usenet news. Commun. ACM 40, 3 (1997), 77-87.
[22] Célia Martinie, Philippe Palanque, Eric Barboni, Marco Winckler, Martina Ragosta, Alberto Pasquini, and Paola Lanzi. 2011. Formal tasks and systems models as a tool for specifying and assessing automation designs. In Proceedings of the 1st International Conference on Application and Theory of Automation in Command and Control Systems. IRIT Press, 50-59.
[23] Celia Martinie, Philippe Palanque, Alberto Pasquini, Martina Ragosta, Eric Rigaud, and Sara Silvagni. 2012. Using complementary models-based approaches for representing and analysing ATM systems' variability. In Proceedings of the 2nd International Conference on Application and Theory of Automation in Command and Control Systems. IRIT Press, 146-157.
[24] David Navarre, Philippe Palanque, Jean-Francois Ladry, and Eric Barboni. 2009. ICOs: A model-based user interface description technique dedicated to interactive systems addressing usability, reliability and scalability. ACM Transactions on Computer-Human Interaction (TOCHI) 16, 4 (2009), 18.
[25] Everett Palmer. 1995. "Oops, it didn't arm" - A case study of two automation surprises. In International Symposium on Aviation Psychology, 8th, Columbus, OH. 227-232.
[26] Raja Parasuraman and Victor Riley. 1997. Humans and automation: Use, misuse, disuse, abuse. Human Factors: The Journal of the Human Factors and Ergonomics Society 39, 2 (1997), 230-253.
[27] Raja Parasuraman, Thomas B Sheridan, and Christopher D Wickens. 2000. A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans 30, 3 (2000), 286-297.
[28] Raja Parasuraman and Christopher D Wickens. 2008. Humans: Still vital after all these years of automation. Human Factors: The Journal of the Human Factors and Ergonomics Society 50, 3 (2008), 511-520.
[29] Deuk Hee Park, Hyea Kyeong Kim, Il Young Choi, and Jae Kyeong Kim. 2012. A literature review and classification of recommender systems research. Expert Systems with Applications 39, 11 (2012), 10059-10072.
[30] Ryan W Proud, Jeremy J Hart, and Richard B Mrozinski. 2003. Methods for determining the level of autonomy to design into a human spaceflight vehicle: a function specific approach. Technical Report. DTIC Document.
[31] James Reason. 1990. Human Error. Cambridge University Press.
[32] Nadine B Sarter, David D Woods, and Charles E Billings. 1997. Automation surprises. Handbook of Human Factors and Ergonomics 2 (1997), 1926-1943.
[33] Thomas B Sheridan and Raja Parasuraman. 2005. Human-automation interaction. Reviews of Human Factors and Ergonomics 1, 1 (2005), 89-129.
[34] Thomas B Sheridan and William L Verplank. 1978. Human and computer control of undersea teleoperators. Technical Report. DTIC Document.
[35] Kirsten Swearingen and Rashmi Sinha. 2001. Beyond algorithms: An HCI perspective on recommender systems. In ACM SIGIR 2001 Workshop on Recommender Systems, Vol. 13. Citeseer, 1-11.
[36] A Tankeu-Choitat, David Navarre, P Palanque, Yannick Deleris, Jean-Charles Fabre, and Camille Fayollas. 2011. Self-checking components for dependable interactive cockpits using formal description techniques. In Dependable Computing (PRDC), 2011 IEEE 17th Pacific Rim International Symposium on. IEEE, 164-173.