    Using Human Error Abstraction Method for Detecting
     and Classifying Requirements Errors: A Live Study

Vaibhav Anu1, Gursimran Walia1, Gary Bradshaw2, Wenhua Hu3, Jeffrey C. Carver3

    North Dakota State University1, Mississippi State University2, University of Ala-
                                        bama3


1       Introduction

   Inspections, a proven quality improvement approach [3, 7], are a process in which a
team of skilled individuals reviews a software artifact (e.g., a requirements specification
document) to identify faults. Traditional fault-based software inspections (like Fault
Checklist (FC) inspection) focus inspectors’ attention on different types of faults (e.g.,
incorrect, incomplete, or ambiguous requirements) [7]. However, even a faithful application
of validated fault-based techniques does not help inspectors find all faults. As a result,
a large portion (40%-50%) of the development effort is spent fixing issues that should
have been fixed in an earlier phase [3]. Hence, there is a real need to improve early fault
detection and to help developers avoid unnecessary rework. We hypothesize that
inspections focused on identifying human errors (i.e., the underlying causes of faults)
are better at identifying requirements problems than inspections focused on faults
(i.e., the manifestations of human errors).
   Along those lines, our recent work [1, 5] uses a Cognitive Psychology perspective on
human errors to improve the practice of requirements inspections. Human errors are
understood as purely mental events: failings of human cognition in the process of
problem solving, planning, and acting. Errors, in turn, produce faults, which are the
physical manifestations of those errors. It is important to draw a clear distinction between
human errors (mental events) and program errors (coding or programmatic failures).
   To help inspectors identify human errors, the authors have, over the past two years,
developed a Human Error Taxonomy (HET) that classifies human errors that commonly
occur during requirements engineering [2]. We have also developed a human error
analysis framework, the Human Error Abstraction (HEA) method, that can guide inspectors
in analyzing and abstracting (i.e., extracting) human error information from requirements
faults, a process referred to as Error Abstraction (EA) by psychologists. Descriptions of
the HET and the HEA method appear in Section 2.
   We recently carried out a series of empirical studies at two different sites to validate
the effectiveness of human error based inspections supported by the HET against FC-based
inspections [1, 5]. While the results were promising, the subjects did not have a supporting
framework to assist them while abstracting errors from the faults. This paper therefore
discusses the design of the HEA method, which is analogous to human error investigation
frameworks in Psychology, and its evaluation during the live study.




Copyright © for this paper by its authors. Copying permitted for private and academic purposes.
2      Background

   In this section, we briefly describe human errors and the method for abstracting human
errors from requirement faults, which in turn can help find additional faults.
   (1) Human Error Based Requirements Inspections: Error-based inspection [6] works by
assisting inspectors in identifying and extracting human error information from faults
found during an FC inspection, and then using the abstracted human error information to
guide a re-inspection. Our prior studies [1, 5, 10] have shown that error-based inspections
are a significant improvement over fault-based inspections. However, an inspector’s ability
to find faults using the error information is highly dependent on their ability to correctly
identify human errors during the error abstraction (EA) process. Therefore, the goal of
this study is to evaluate the usability of the HEA method during the EA step so that
inspection effectiveness can be further improved.
   (2) Human Errors: To assist inspectors during the Error Abstraction (EA) step of human
error based inspections, we have developed a Human Error Taxonomy (HET) that classifies
the most commonly occurring requirements-phase human errors, built around Reason’s
psychological account of human error [9]. The complete development process of the HET
can be found in [2].




                     Fig. 1. Human Errors (Slips, Lapses and Mistakes)

   Reason’s well-respected human error classification system classifies human errors
into slips, lapses, and mistakes, as shown in Figure 1. According to Reason [9], when
faced with a situation that demands problem solving, human operators go through two
major information-processing stages: a planning stage and an execution stage. The human
error mechanisms associated with the execution (or action) stage are called slips and
lapses. The error mechanism associated with the planning stage is called a mistake. As
illustrated in Figure 1, assume that our goal is to drive to the store, which entails steps
such as starting the car, backing down the driveway to the street, navigating the route,
and parking in the store lot. If we put the wrong key in the ignition, we have committed
a failure of execution known as a slip. If we forget to put the transmission into reverse
before stepping on the gas, the omitted step is a lapse. Failing to take into account the
effect of a bridge closing and getting caught in traffic is a planning mistake.
   Slips are execution failures that are caused by inattention and occur when a planned
action is incorrectly executed. Lapses, which are also execution failures, occur when
an action is forgotten (omitted) while executing a planned task or when an individual
forgets their place in a planned task and ends up repeating an action. Mistakes are
planning failures and are generally a result of being in an unfamiliar situation.
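   For a compact summary of this distinction, the following minimal Python sketch (our illustration only, not part of the HET or HEAA materials) encodes the three classes and the driving example above as data:

from enum import Enum

class HumanErrorClass(Enum):
    """The three top-level human error classes from Reason's taxonomy [9]."""
    SLIP = "slip"        # execution failure caused by inattention
    LAPSE = "lapse"      # execution failure: a step is omitted or repeated
    MISTAKE = "mistake"  # planning failure, often in an unfamiliar situation

# The driving-to-the-store example mapped onto the three classes.
driving_examples = {
    "put the wrong key in the ignition": HumanErrorClass.SLIP,
    "forgot to shift into reverse before stepping on the gas": HumanErrorClass.LAPSE,
    "did not plan around the bridge closing": HumanErrorClass.MISTAKE,
}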
    (3) Human Error Abstraction Method (HEA): Although the HET provides a concrete list
of the most commonly occurring human errors, EA is still a subjective process that
different people might perform in different ways. In order to reduce the subjectivity
and complexity of EA, we developed the HEA method, which is described in [2].
   The HEA method was created after performing pilot empirical evaluations of human error
based inspections with different sets of subjects [1, 5]. After those studies, the subjects
provided feedback that EA could be improved by focusing the inspector’s attention on the
various RE activities (elicitation, analysis, specification, and management). Hence, the
HEA method was developed to guide the selection of the appropriate RE activity and the
situation in which the human error might have occurred. We created the HEA method
(Figure 2) to act as an intuitive framework that systematically guides inspectors during
EA. Inspectors answer a set of questions (decision points) to trace a fault to an
underlying human error.




                     Fig. 2. Human Error Abstraction (HEA) Method

   The HEA method, which asks inspectors a series of specific questions, was converted
(in consultation with the Cognitive Psychology expert, Dr. Bradshaw) into a decision-tree-style
framework that can better guide inspectors during error discovery. This decision tree (Fig. 2)
uses the skill-rule-knowledge framework developed by Rasmussen [8], wherein inspectors are
directed through decision points (based on cognitive failure patterns). The major decision
points are discussed below:
    (i) Decision point D1 guides inspectors to distinguish between an error scenario being a
planning scenario (i.e., Mistakes) and an execution scenario (i.e., Slips and Lapses).
    (ii) Decision point D2 helps inspectors distinguish between inattention failures
(i.e., Slips) and memory failures (i.e., Lapses).
    (iii) Decision points D3 and D4 help identify the type of Mistake (i.e., rule-based vs.
knowledge-based mistake). It is hoped that this type of EA framework can help inspectors
navigate to the correct human error classes; an illustrative sketch of the decision flow
follows.
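   As a rough illustration only (a sketch of the decision logic, not the HEAA tool itself), the Python fragment below routes a single fault to an error class; the boolean parameters are hypothetical placeholders for the inspector's answers at decision points D1-D4, whose actual wording appears in Fig. 2.

def abstract_error(planning_failure: bool,
                   memory_failure: bool = False,
                   misapplied_known_rule: bool = False) -> str:
    """Illustrative walk through the HEA decision points D1-D4."""
    if planning_failure:                  # D1: planning vs. execution scenario
        if misapplied_known_rule:         # D3/D4: which type of Mistake
            return "Mistake (rule-based)"
        return "Mistake (knowledge-based)"
    if memory_failure:                    # D2: inattention vs. memory failure
        return "Lapse"
    return "Slip"

# Example: a fault traced to a step the requirements author forgot to perform.
print(abstract_error(planning_failure=False, memory_failure=True))  # -> Lapse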


3      Study Design

   The main goal of the live study is to evaluate the use of the HEA method (supported by
the HEAA tool [2]) in helping inspectors correctly abstract and classify the underlying
human errors responsible for requirement faults.


3.1    Research Questions

  RQ 1: Are inspectors able to use the HEA method to accurately abstract and classify
human errors that occurred during the requirements development process?
  RQ 2: Are the human error classes (Slips, Lapses, and Mistakes) adequate and relevant
to the requirements development process?


3.2    Subjects and Artifacts
   (1) Subjects: The population of interest consists of subjects who are familiar with
requirements engineering activities and have industry experience. We want to evaluate the
usefulness of the HEA method with both practitioners and experts in academia.
   (2) Artifacts: An SRS document that specifies the requirements for a Parking Garage
Control System (PGCS) will be used during the live study. The PGCS SRS was developed by
researchers at the University of Maryland and was seeded with 35 realistic faults.


3.3    Study Procedure
   During the study, subjects will be trained on how to use the HEA method to abstract
errors from a small subset of PGCS faults (which will be provided to them), and will then
use the training to abstract errors from a larger subset of the remaining faults.
   Experimental steps are described as follows:
   Step 1 – Training on Error Abstraction (EA): During this 25-30 minute training session,
subjects will be trained on the human error classes of the HET and on how to use the HEAA
tool to abstract errors from supplied faults. Next, subjects will be asked to use the
training to trace errors from a subset of five PGCS faults, followed by a discussion of
their results.
   Step 2 – Error Abstraction (EA) on the Remaining PGCS Faults: During the remainder of
the live study, subjects will use the HEA decision tree template to abstract and classify
human errors (into Slips, Lapses, and Mistakes) from a second subset of 15 faults in the
PGCS SRS.
   Step 3 – Survey: After the study, we will collect feedback from the subjects on the
HEAA tool and EA using a survey that can either be handed out to the subjects or emailed
to them.
   The following documents will be provided during the study run –
  • PGCS SRS: Hard copy (i.e., printout) or a downloadable PDF file that will be
       made available on a local server.
  • HEA decision tree: Hard copy or a downloadable PDF file that will be made
       available on a local server. The HEA decision tree will provide a handy template
       for subjects to enter their error abstraction data.
  • Error Report Form: The error report form will contain 20 faults (randomly
       selected from the list of 35 seeded faults) in the PGCS SRS. Subjects will be
       asked to abstract errors from the first 5 faults during the training, followed by
       error abstraction for the remaining 15 faults after the training (a sketch of this
       selection follows this list). We can supply the error report form as a hard copy
       and also make a PDF copy available for download.
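   For concreteness, a minimal sketch of this selection is shown below, assuming the seeded faults are identified by numeric IDs 1-35; the identifiers and the random seed are illustrative choices, not the actual study materials.

import random

SEEDED_FAULT_IDS = list(range(1, 36))  # the 35 faults seeded into the PGCS SRS

def build_error_report_form(rng_seed: int = 0):
    """Select 20 faults and split them into training (5) and post-training (15) subsets."""
    rng = random.Random(rng_seed)
    selected = rng.sample(SEEDED_FAULT_IDS, 20)  # 20 randomly selected faults
    return selected[:5], selected[5:]            # Step 1 faults, Step 2 faults

training_faults, remaining_faults = build_error_report_form()
print(len(training_faults), len(remaining_faults))  # -> 5 15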


3.4    Data collection and Analysis
   Fig. 3 provides an example of the information that subjects will be asked to report
when abstracting a human error from a fault (one row for each fault-error mapping). To
enable an objective analysis of the error data, when assessing error abstraction accuracy
we will compare the reported human error classification against the expected human error
class. The expected human error class for each PGCS fault was agreed upon after discussion
among the authors and the Psychology expert, Dr. Bradshaw. This will provide insight into
whether subjects are able to use the HEA method to distinguish between the three error
mechanisms.




                            Fig. 3. Sample Error Report Form

   Additionally, at each step, we will ask subjects to report the effort spent (the amount
of time taken to complete the task). We will also analyze the written accounts of human
errors collected during the study to assess whether these accounts are consistent across
subjects.
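   To make the planned comparison concrete, the following sketch (our illustration; the subject IDs, fault IDs, and expected classes shown are hypothetical) computes per-subject abstraction accuracy against the expected error classes:

from collections import defaultdict

def abstraction_accuracy(reports, expected_classes):
    """Per-subject fraction of fault-error mappings whose reported class
    matches the expected class agreed upon by the authors."""
    correct, total = defaultdict(int), defaultdict(int)
    for subject, fault_id, reported_class in reports:
        total[subject] += 1
        correct[subject] += int(reported_class == expected_classes[fault_id])
    return {s: correct[s] / total[s] for s in total}

# Hypothetical report rows: (subject, fault id, reported error class)
reports = [("S1", 7, "Slip"), ("S1", 12, "Mistake"), ("S2", 7, "Lapse")]
expected = {7: "Slip", 12: "Mistake"}
print(abstraction_accuracy(reports, expected))  # -> {'S1': 1.0, 'S2': 0.0}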


4      Potential Validity Threats

     The live study faces the following validity threats:
    (1) For some participants, this study might be their first introduction to Cognitive
Psychology concepts (like slips, lapses, and mistakes), and it is possible that they may
not properly understand the concepts within the time allocated for the study run. We
intend to mitigate this threat by involving a Cognitive Psychologist (Dr. Bradshaw) when
training the participants on human errors and how to abstract them.
    (2) The heterogeneity of the participant population (a mix of researchers and
practitioners) is also a potential threat to the generalizability of the live study
results. The participants will come from different backgrounds, which may contribute to
variability in the measures. We will collect data regarding the affiliation of subjects
(industry or academia) in order to perform separate data analyses on the two expected
subgroups.


References
 1. Anu, V., Walia, G.S., Hu, W., Carver, J.C. and Bradshaw, G. 2016. Effectiveness of Human
    Error Taxonomy during Requirements Inspection: An Empirical Investigation. Software
    Engineering and Knowledge Engineering, SEKE 2016.
 2. Anu, V., Walia, G.S., Hu, W., Carver, J.C. and Bradshaw, G. 2016. The Human Error
    Abstraction Assist (HEAA) tool. http://vaibhavanu.com/NDSU-CS-TP-2016-001.html
 3. Boehm, B. and Basili, V.R. 2001. Software Defect Reduction Top 10. Computer. 34, 1
    (2001), 135–137.
 4. Hsieh, H.F. and Shannon, S.E. 2005. Three Approaches to Qualitative Content Analysis.
    Qualitative Health Research. 15, 9 (2005), 1277–1288.
 5. Hu, W., Carver, J.C., Anu, V., Walia, G.S. and Bradshaw, G. 2016. Detection of Requirement
    Errors and Faults via a Human Error Taxonomy: A Feasibility Study. 10th ACM/IEEE
    International Symposium on Empirical Software Engineering and Measurement, ESEM’16.
 6. Lanubile, F., Shull, F. and Basili, V.R. 1998. Experimenting with Error Abstraction in
    Requirements Documents. In Proc. of the 5th International Symposium on Software Metrics.
 7. Porter, A.A., Votta, L.G. and Basili, V.R. 1995. Comparing detection methods for software
    requirements inspections: a replicated experiment. IEEE Transactions on Software
    Engineering. 21, 6 (1995), 563–575.
 8. Rasmussen, J. and Vicente, K.J. 1989. Coping with human errors through system design:
    implications for ecological interface design. International Journal of Man-Machine
    Studies. 31, 5 (1989), 517–534.
 9. Reason, J. 1990. Human error. Cambridge University Press.
10. Walia, G.S. and Carver, J.C. 2010. Evaluating the use of requirement error abstraction and
    classification method for preventing errors during artifact creation: A feasibility study.
    In Proc. of the 24th International Symposium on Software Reliability Engineering, ISSRE’10.