=Paper= {{Paper |id=Vol-1636/paper-07 |storemode=property |title=Extracting Rules to Detect Cognitive Distractions through Driving Simulation |pdfUrl=https://ceur-ws.org/Vol-1636/paper-07.pdf |volume=Vol-1636 |authors=Fumio Mizoguchi,Hayato Ohwada,Hiroyuki Nishiyama,Akira Yoshizawa,Hirotoshi Iwasaki |dblpUrl=https://dblp.org/rec/conf/ilp/MizoguchiONYI15 }} ==Extracting Rules to Detect Cognitive Distractions through Driving Simulation== https://ceur-ws.org/Vol-1636/paper-07.pdf
Extracting rules to detect cognitive distractions
          through driving simulation

      Fumio Mizoguchi†‡, Hayato Ohwada†, Hiroyuki Nishiyama †, Akira
                    Yoshizawa*, and Hirotoshi Iwasaki*

            Faculty of Sci. and Tech. Tokyo University of Science†,
                           2641 Yamazaki, Noda-shi,
                            CHIBA, 278-8510, Japan
                             WisdomTex Co. Ltd.‡,
               1-17-3 Meguro-ku Meguro, Tokyo 153-0063, Japan
                             Denso IT Laboratory*
      mizo@wisdomtex.com, ohwada@rs.tus.ac.jp, hiroyuki@rs.noda.tus.ac.jp,
                ayoshizawa@d-itlab.co.jp, hiwasaki@d-itlab.co.jp



      Abstract. In our study, we generate rules to determine whether or not
      a driver is cognitively distracted, applying Inductive Logic Programming
      (ILP) to collected eye-movement and driving data.
      We assigned a mental arithmetic task to the research participants to
      cause cognitive distraction and then learned the rules of the cognitive
      distraction using the cognitively distracted state as positive examples by
      ILP. Using the generated rules, we hope to reduce car-driving risks by
      providing advice or urging caution using voice utterance when distracted
      driving is detected.


1   INTRODUCTION
We have conducted studies to build a mental model and detect a driver’s tension
in order to develop a safe information service for drivers [3, 6]. We generated
rules to detect a relaxed state while driving, which is considered the
appropriate state for providing the information service [3]. However, the
relaxed state can include cognitive distraction; we must therefore detect
cognitive distraction to develop a safe information service.
    The National Highway Traffic Safety Administration (NHTSA) has identified
three types of distracted driving, based on distraction factors [7]: (1) visual
distraction, (2) cognitive distraction, and (3) manual distraction. Visual distrac-
tion occurs while viewing an unrelated object (i.e., look-away driving). Viewing
and operating a smartphone, viewing the car’s TV, or operating and viewing the
car navigation system are visual distractions. Looking at objects outside the
car (beyond safety checks) while driving is also a visual distraction.
    Cognitive distraction involves the internal state of the driver who is thinking
about unrelated things while driving. Examples include driving while talking on
a cell phone and concentrating on one’s thoughts. Manual distraction involves
the driver taking a hand off the wheel to manipulate an object. To detect
visually distracted driving, we measure








Fig. 1. Simulator environment photographed using the camera of the eye-movement
measuring device EMR.




the driver’s eye movements while driving. However, since cognitive distraction
involves the driver’s internal state, it is difficult to detect cognitive distraction
using just eye-movement and driving data [6].

    In our study, we generate rules to determine whether or not a driver is
cognitively distracted, applying Inductive Logic Programming (ILP) to collected
eye-movement and driving data. To generate these rules, we
assigned a mental arithmetic task to the research participants to cause cognitive
distraction (e.g., Harbluk’s method) [1]. In addition, to ensure safety, we used a
simulation (Fig. 1). Using the simulator, we gathered two types of data: normal
driving and driving with a mental arithmetic task as a cognitive distraction. We
then learned the rules of the cognitive distraction using the cognitively distracted
state as a positive example by ILP. Using the generated rules, we expect to be
able to reduce car-driving risks by providing advice or urging caution using voice
utterance when distracted driving is detected.




2     EYE MOVEMENT AND DRIVING DATA

2.1   Raw Data

An EMR-8 system was used to collect data on a driver’s eye movement. This de-
vice measures horizontal and vertical viewing angles in degrees. For the purpose
of this study, we consider 60 data points per second.
    Using a car simulator system developed by Denso, Inc., we obtained such
information as the accelerator depression rate (0% to 100%), braking signal (0
or 1), steering signal (-1 to 1), speed, and GPS information (of the simulator)
(60 data points per second).
    We gathered eye-movement and driving data on 19 research participants
(9 women and 10 men) using the simulator. In our experiment, each driver
ran the same 15-minute course two times. The first drive was normal driving
(no-load driving), and the second drive was driving with a mental arithmetic
task (load driving). The mental arithmetic task involved a two-digit addition
problem presented through headphones. We asked the driver a question at
uniform intervals (110 questions per experiment).
    Eye-movement data, car-driving data, and the mental arithmetic task were
synchronized, and all data were used to produce background knowledge, as de-
scribed below.
    We defined saccade and fixation as follows [6].

Saccade is caused by a change in the road situation or the appearance of pedes-
   trians or cars. It is considered a perception factor in the driving model.
Fixation is a cognition factor in which the driver determines the next action
   by recognizing changes in the environment and objects. It is also related to
   the perception of temporal changes, such as signal changes and road signs.
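As an illustration, the saccade/fixation split can be derived from the 60 Hz gaze samples with a simple velocity threshold. This is a hedged sketch only: the threshold value and the velocity-based criterion are assumptions, since the paper does not specify how saccades and fixations were computed.

```python
import math

SACCADE_THRESHOLD_DEG_PER_S = 30.0  # hypothetical threshold, not from the paper
SAMPLE_RATE_HZ = 60                 # EMR-8 sampling rate (60 data points/second)

def classify_samples(gaze):
    """gaze: list of (x_deg, y_deg) viewing angles sampled at 60 Hz.
    Returns one label per inter-sample interval: 'saccade' or 'fixation'."""
    labels = []
    for (x0, y0), (x1, y1) in zip(gaze, gaze[1:]):
        dist = math.hypot(x1 - x0, y1 - y0)  # angular distance in degrees
        velocity = dist * SAMPLE_RATE_HZ     # degrees per second
        labels.append("saccade" if velocity > SACCADE_THRESHOLD_DEG_PER_S
                      else "fixation")
    return labels
```

Counting the "saccade" labels per interval gives the saccade count, and summing the durations of consecutive "fixation" labels gives the fixation time used in Section 2.2.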


2.2   Data Transformation

We transformed data at constant time intervals to generate qualitative data for
ILP learning. We set the intervals at 5 seconds, following the previous study
[3]. We generated qualitative data for saccade and fixation information based
on the count of the saccade and the length of fixation time during the interval
[6], and for other information based on the average over each interval, as in
the previous research [3]. Specifically, we transformed eye-movement and driving
data into qualitative data for use in ILP as follows.

Step 1. Collect a set of raw eye-movement data measured in the time interval,
   and measure the number of times that saccade and fixation were produced
   based on eye movement direction and distance. In addition, measure the
   total eye-movement distance.
Step 2. Collect a set of raw driving data measured in the time interval, and
   average each attribute value of driving data.
Step 3. Integrate the data of Step 2 with that of Step 1.




Step 4. Add new attributes indicating differences in the short time period (5
   seconds before) where each difference is represented by Δ-second.
Step 5. Translate the data obtained in Step 4 into corresponding qualitative
   data using the categories upLow, upMiddle, upHigh, downLow, downMiddle,
   and downHigh.
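The five steps above can be sketched for a single driving attribute (e.g., speed) as follows. This is a minimal sketch: the boundaries that separate the Low/Middle/High categories are assumed for illustration, as the paper does not list them.

```python
def average(values):
    return sum(values) / len(values)

def qualitative(delta):
    """Step 5: map a signed difference to the six categories of the paper."""
    sign = "up" if delta >= 0 else "down"
    mag = abs(delta)
    if mag < 1.0:       # assumed boundary
        level = "Low"
    elif mag < 5.0:     # assumed boundary
        level = "Middle"
    else:
        level = "High"
    return sign + level

def transform(intervals):
    """intervals: list of raw-sample lists, one per 5-second window (Steps 1-3).
    Returns (window average, delta category) per window (Steps 4-5)."""
    avgs = [average(w) for w in intervals]
    out = []
    for i, a in enumerate(avgs):
        prev = avgs[i - 1] if i > 0 else a  # first window has no predecessor
        out.append((a, qualitative(a - prev)))
    return out
```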


3     ILP LEARNING

3.1   Background Knowledge

Table 1 presents a set of predicate types and their mode declarations given to
the background knowledge. The first type corresponds to qualitative values for
each eye movement and driving data. This is described by the time ID and a
parameter value. The second type is a qualitative state difference in a short time
(5-second) period and is described as parameter_diff(ID, Val). The third one
(before_event) is used to obtain information about adjacent data.


Table 1. Predicates and their mode declarations in background knowledge. Mode +
indicates input variable, - output variable, and # constant.


      Types                               Predicates
qualitative value   accele(+ID, #Val), brake(+ID, #Val), velocity(+ID, #Val),
                    steering(+ID, #Val), frontCar(+ID, #Val), gazeX(+ID, #Val),
                    gazeY(+ID, #Val), sacCount(+ID, #Val), fixCount(+ID, #Val),
                    eyeMove(+ID, #Val)
qualitative state   accele_diff(+ID, #Val), brake_diff(+ID, #Val),
   difference       velocity_diff(+ID, #Val), steering_diff(+ID, #Val),
                    frontCar_diff(+ID, #Val), gazeX_diff(+ID, #Val),
                    gazeY_diff(+ID, #Val), sacCount_diff(+ID, #Val),
                    fixCount_diff(+ID, #Val), moveCount_diff(+ID, #Val)
adjacent saccades   before_event(+ID, -ID)
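For illustration, each transformed 5-second window can be emitted as ground facts in the format of Table 1. The helper below is hypothetical (not the authors' tooling), but the predicate names and the before_event linkage follow the table.

```python
def emit_facts(window_id, qualitative_values, prev_id=None):
    """window_id: time-interval identifier (the ID argument in Table 1).
    qualitative_values: dict mapping predicate name -> category symbol.
    prev_id: identifier of the adjacent preceding window, if any."""
    facts = [f"{pred}({window_id}, {val})."
             for pred, val in qualitative_values.items()]
    if prev_id is not None:
        # link adjacent windows for the non-determinate before_event predicate
        facts.append(f"before_event({window_id}, {prev_id}).")
    return facts
```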




3.2   Training Examples

We used two types of data: normal driving and driving with the cognitive
distraction of a mental arithmetic task. For the mental arithmetic task, we
presented one question every 8 seconds and measured the time required to
answer the questions. We defined cognitive distraction as the driving state dur-
ing the time it took to answer the mental arithmetic questions and defined this
state as positive examples.
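A minimal sketch of this labeling, assuming window boundaries and question-to-answer spans are given in seconds (the exact timestamp layout is an assumption):

```python
def label_windows(n_windows, window_len, answer_spans):
    """n_windows: number of 5-second windows; window_len: 5 (seconds).
    answer_spans: list of (start_s, end_s) intervals from question asked
    to answer given. A window overlapping any span is a positive example."""
    labels = []
    for i in range(n_windows):
        w_start, w_end = i * window_len, (i + 1) * window_len
        positive = any(w_start < e and s < w_end for s, e in answer_spans)
        labels.append(1 if positive else 0)
    return labels
```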




3.3   Learned Rules and Discussion
In the present study, we focused on ILP learning using background knowledge
and training examples. We generated rules for each individual driver. For exam-
ple, the rules for research participant F01 (female, 30 years old, driving experi-
ence more than 10 years, 5 hours per week or more driving) are as follows.


        Table 2. Data (research participant F01) used for the experiments


 Type        Measured time   Raw data   Transformed    Distracted      Normal
               (seconds)                    data      (positive ex.) (negative ex.)
 No Task          917          55020        183             0            183
 With Task        934          56220        186           119              0



    Table 2 presents the characteristics of the raw data, transformed data, distracted-
driving data (positive examples) and normal driving data (negative examples).
    We used our parallel ILP system [5] (based on GKS [2]) to learn rules. We
used 8 PCs (total of 36 CPUs). This system generated 22 rules. Learning time
was about 77 minutes.
    Typical rules are presented below. “{T,F}” denotes the number of positive
examples (T) and the number of negative examples (F) covered by the rule.
{25,5} class(A) :- front(A, notClear), steering(A, straight),
                before_event(A, B), front(B, notClear).
This rule covers the largest number of positive examples and detects the
cognitively distracted state using only driving data. It indicates that the
driver was following a forward vehicle while going straight.
{23,4} class(A) :- steering(A, straight), eyeMove(A, average),
                before_event(A, B), front(B, notClear).
{21,3} class(A) :- front(A, notClear), before_event(A, B),
                steering(B, straight), eyeMove(A, average).
These rules are similar to the immediately preceding rule but also take
eye-movement information into account.
{11,1} class(A) :- eyeX_diff(A, rightLow), eyeMove_diff(A, upMiddle),
                             before_event(A, B), eyeMove(B, average).
This rule considers only information on eye movement. Specific eye positions
and movements are associated with cognitive distraction. In addition, each rule
contains the non-determinate predicate before_event, indicating the advantage of an
ILP-based learner.
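Given the {T,F} counts and the totals in Table 2 (119 positive and 183 negative examples), each rule's precision and recall follow directly; the first rule, for instance, has precision 25/(25+5) ≈ 0.83. A small helper (hypothetical, not part of the authors' system) makes this explicit:

```python
def rule_stats(t, f, n_pos=119, n_neg=183):
    """t, f: positive/negative examples covered by a rule ({T,F} above).
    n_pos, n_neg: totals for participant F01 from Table 2."""
    precision = t / (t + f)   # fraction of covered examples that are positive
    recall = t / n_pos        # fraction of all positives the rule covers
    return precision, recall

p, r = rule_stats(25, 5)  # the first rule shown above
```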
    We implemented the generated rules in a GUI system and realized distraction
detection during driving in real time. Figure 2 shows the flow of detecting




               Fig. 2. GUI that enables distraction detection for driving data [4]



the cognitive distraction using the system: it retrieves the distraction rules
that explain the current data and displays the result of detecting cognitive
distraction.
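A possible sketch of this real-time matching, representing each rule as a list of conditions on the current window (A) and, via before_event, the previous window (B). The rule contents follow the examples above; the matching mechanism itself is an assumption, not the paper's implementation.

```python
def rule_fires(rule, current, previous):
    """rule: list of (scope, predicate, value), scope 'A' = current window,
    'B' = previous window. current/previous: dicts of predicate -> value."""
    for scope, pred, val in rule:
        window = current if scope == "A" else previous
        if window is None or window.get(pred) != val:
            return False
    return True

def detect(rules, stream):
    """stream: iterable of per-window qualitative dicts, in time order.
    Yields True whenever any distraction rule fires for a window."""
    previous = None
    for current in stream:
        yield any(rule_fires(r, current, previous) for r in rules)
        previous = current
```

For example, the rule `class(A) :- front(A, notClear), steering(A, straight), before_event(A, B), front(B, notClear).` becomes the condition list `[("A", "front", "notClear"), ("A", "steering", "straight"), ("B", "front", "notClear")]`.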



4      CONCLUSIONS
In the present study, we generated rules to determine whether or not a driver
is cognitively distracted, applying ILP to collected eye-movement and driving
data. We assigned a mental arithmetic task to the research
participants to cause cognitive distraction and then learned the rules of the
cognitive distraction using the cognitively distracted state as positive examples
by ILP. In addition, we implemented the generated rules using a GUI system and
realized distraction detection during driving in real time. Using the generated
rules and the system, we hope to reduce car-driving risks by providing advice or
urging caution using voice utterance when distracted driving is detected.


References
1. Joanne L. Harbluk, Y. Ian Noy, Patricia L. Trbovich and Moshe Eizenman: An
   on-road assessment of cognitive distraction: Impacts on drivers’ visual behavior and
   braking performance, Accident Analysis & Prevention, Vol. 39, No. 2, pp. 372-379,
   2007.




2. Fumio Mizoguchi and Hayato Ohwada: Constrained Relative Least General Gener-
   alization for Inducing Constraint Logic Programs, New Generation Computing 13,
   pp. 335-368, 1995.
3. Fumio Mizoguchi, Hayato Ohwada, Hiroyuki Nishiyama and Hirotoshi Iwasaki: Iden-
   tifying Driver’s Cognitive Load Using Inductive Logic Programming, Inductive Logic
   Programming, Lecture Notes in Artificial Intelligence (LNAI), LNAI 7842, pp. 166-
   177, 2013.
4. Hiroyuki Nishiyama, Akira Yoshizawa, Hirotoshi Iwasaki and Fumio Mizoguchi:Cog-
   Tracker: A New Tool for Detecting Distracted Car Driving Using Eye-Movement and
   Driving Data on a Tablet PC, CATA-2015, pp.125-130, 2015.
5. Hiroyuki Nishiyama and Hayato Ohwada: Yet Another Parallel Hypothesis Search
   for Inverse Entailment, ILP2015, 2015.
6. Shinichiro Sega, Hirotoshi Iwasaki, Hironori Hiraishi and Fumio Mizoguchi: Qualita-
   tive Reasoning Approach to a Driver’s Cognitive Mental Load, International Journal
   of Software Science and Computational Intelligence 3(4), pp. 18-32, 2011.
7. NHTSA: Distracted Driving Research Plan, April 2010, DOT-HS-811-299,
   http://www.nhtsa.gov/Research/Human+Factors/Distraction



