<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
<article-title>ASYDE: An Argumentation-based System for classifYing Driving bEhaviors</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Bettina Fazzinga</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sergio Flesca</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Filippo Furfaro</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Giuseppina Monterosso</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Irina Trubitsyna</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>DIMES, University of Calabria</institution>
          ,
          <addr-line>Via Bucci, Rende, 87036</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>DiCES, University of Calabria</institution>
          ,
          <addr-line>Via Bucci, Rende, 87036</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
<p>We introduce a framework for classifying the driving behaviors of motorists, where the readings collected by inertial sensors during a trip are first integrated with the available information on the characteristics of the road traveled by the vehicle, and then processed to obtain a driving-style certificate. The core of the approach is a reasoner based on the Abstract Argumentation Framework, a well-known paradigm for modeling disputes between agents. Specifically, the decision on which driving-style class best describes the behavior exhibited at each time point is modeled as the “outcome" of a dispute involving different agents, where each agent proposes a class that may be aligned or in conflict with the other agents' opinions and with what is suggested by the sensors' readings on the assessment of the driver's behavior. A prototype implementing the framework was developed, and its experimental validation on real-life data is presented.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Examining the influence of human behavior on road safety is a matter of considerable interest. As
studies in this field over the past decades [
        <xref ref-type="bibr" rid="ref1">1</xref>
] have demonstrated a correlation between drivers’
behavior and road safety, there is an increasing demand for solutions supporting the analysis of the
driving style of motorists. In this context, the objective of this research is the definition of a framework
for recognizing and classifying the behavior exhibited by a motorist during a trip on the basis of the
readings gathered by various devices monitoring the vehicle status, such as the readings of the inertial
sensors of the driver’s smartphone and those registered by the Electronic Control Unit (ECU) of the
vehicle, which periodically records the speed, the engine rpm (revolutions per minute), etc. In particular, the
proposed framework returns a driving certificate, where the kinematic characteristics of the vehicle
registered by the sensors during the trip and the available information on the road traveled by the
vehicle (such as the speed limits) are taken into account. The core of the approach is a reasoner based
on a well-known AI framework, namely Dung’s Abstract Argumentation Framework (AAF), which has
proved effective in supporting reasoning in several situations [
        <xref ref-type="bibr" rid="ref2 ref3 ref4">2, 3, 4</xref>
], especially when different
pieces of information provide only a partial assessment of the situation. Specifically, AAFs can effectively
handle the case where the pieces of information describing the characteristics of the examined situation
are conflicting: this is a characteristic affecting the addressed scenario, since the various sensors
monitoring the vehicle may provide different measures of the same physical quantity, due to noise or
failures.
      </p>
<p>In the proposed framework, called ASYDE (Argumentation-based System for classifYing Driving
bEhaviors), the reasoning over the driving style is modeled as a dispute (encoded as an AAF): the
participants in the dispute are the sensors, which claim arguments encoding the measures read at time
point t (such as “I read the speed value 60 Km/h at t, so the motorist held a very high speed on the urban
road at t"), and virtual agents, whose “assessment arguments" assign one of the available classes of
driving style (i.e. calm, normal, aggressive) to the motorist’s behavior at t (an example of assessment
argument is “Based on the sensors’ readings, it is fair to classify the motorist’s driving style at t as calm").
In the AAF built this way, the contradictions holding between the claims are also considered, and they
are represented in terms of attacks between arguments: for instance, attacks are considered between
assessment arguments suggesting different classifications at the same time point, or between sensors’
arguments claiming significantly different measures. Starting from this, a well-known reasoner is run
over the AAF, and, for each time point, we collect the assessment arguments that turn out to be accepted
(which, in argumentation terminology, means that they consist of claims that are resilient to the
attacks from other arguments, so they can be reasonably considered truthful). Then, the returned
driving-style certificate contains, for each driving-style class c ∈ {calm, normal, aggressive}, the
minimum and maximum percentage of time points (over the overall time points composing the time
interval of the trip) where an assessment argument voting for c is accepted. Herein, the minimum
percentage represents the percentage of time points where only the assessment argument voting
for class c is accepted (so the motorist’s behavior is classified in c with no uncertainty), whereas the
maximum percentage also counts the time points where assessment arguments voting for alternative
classes are accepted (meaning that the classification in c is uncertain, as it is only one of the possible
classes suggested by the sensor readings). This uncertainty is due to the fact that sensor readings can
be affected by errors, so it may happen that alternative classifications for the same time point are
accepted, depending on which sensors are considered trustworthy.</p>
<p>Many approaches in the literature have been developed for classifying driving behavior. However,
to the best of our knowledge, no studies have been conducted applying Argumentation Frameworks
to the task of classifying driving behavior. The power of Argumentation Frameworks lies in their
ability to analyze and evaluate conflicting pieces of information, allowing for more accurate and reliable
conclusions. In the addressed task, conflicting information arises when a certain sensor devoted to collecting
driving data provides erroneous measurements. In this case, if several devices are monitoring the
same quantity, we may be provided with different readings of the same physical measurement. Through
the employment of argumentation, we are able to solve these conflicts and correct measurement errors
by considering not only the readings at a specific time point but also the surrounding
events. In cases where conflict resolution is not feasible, our framework considers alternative possible
classifications.</p>
      <p>In what follows, we will introduce the ASYDE framework, and show some results coming from a
preliminary evaluation over real-life data.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Preliminaries</title>
      <p>
        An Abstract Argumentation Framework (AAF) [
        <xref ref-type="bibr" rid="ref5">5</xref>
] is a pair ⟨A, Σ⟩, where A is a finite set, whose elements
are called arguments, and Σ ⊆ A × A is a binary relation over A, whose elements are called attacks.
Given a set of arguments S and an argument a, we say that “a attacks S” if there is an argument b
in S such that a attacks b, and that “S attacks a” if there is an argument b ∈ S such that b attacks a.
Moreover, we say that a is acceptable w.r.t. S if every argument attacking a is attacked by S, and say
that S is conflict-free if there is no attack between its arguments.
      </p>
<p>A dispute between agents presenting different claims can be easily represented via an AAF. Each
claim is encoded as an argument, and any contradiction between two arguments a and b is encoded by
the attack (a, b) or (b, a) or both, depending on which direction of attack best fits. For instance, given
the three arguments:
a: “As it will rain all day long, it is a bad idea to organize a picnic this afternoon";
b: “As it is certain that today it will not rain, it is a good idea to organize a picnic this afternoon";
c: “As we are in the middle of the rainy season, it is highly probable that this afternoon it will rain";
a reasonable way to encode their relationships is via the attacks (a, b) and (b, a) (as the premises of the
two arguments a and b contradict each other) and (c, b) (as the conclusion of c contradicts the premise
of b).</p>
<p>The analysis of the dispute encoded via an AAF is typically done by locating its extensions and its
accepted arguments. An extension is a set of arguments E that collectively represent a strong point of
view, in the sense that E is coherent (i.e. its arguments do not contradict each other) and capable of
counterattacking the attacks from arguments outside E. These general properties give rise to several
semantics of the notion of extension, such as the admissible and the preferred: E is an admissible
extension if E is conflict-free and all its arguments are acceptable w.r.t. E; E is a preferred extension if E is a
maximal (w.r.t. ⊆) admissible set of arguments.</p>
<p>The notion of accepted argument is based on that of extension and encodes the robustness of a single
argument, which is assessed by verifying its membership in an extension. In particular, since multiple
extensions can exist, the credulous and the skeptical perspectives of acceptance can be adopted: under
a given semantics, a is credulously (resp., skeptically) accepted if it belongs to at least one (resp., every)
extension under that semantics.</p>
<p>In the example above regarding the reasonability of organizing a picnic, no set containing both a and
b, or both b and c, is an extension (under any semantics), since it would not be conflict-free. In this case,
there are 4 admissible extensions (i.e. ∅, {a}, {c}, {a, c}), while under the preferred semantics there
is only one extension, i.e. {a, c}. Correspondingly, a and c are credulously accepted under both
semantics, and are skeptically accepted under the preferred but not under the admissible.</p>
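<p>The picnic example can be checked mechanically. The following brute-force enumeration of admissible and preferred extensions is a sketch written directly from the definitions above (it is not part of the paper's tooling, and only suits tiny AAFs):</p>

```python
from itertools import combinations

# Picnic AAF from the example: attacks (a,b), (b,a), (c,b)
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "a"), ("c", "b")}

def conflict_free(s):
    # no attack between two members of s
    return all((x, y) not in attacks for x in s for y in s)

def acceptable(arg, s):
    # every attacker of arg is counterattacked by some member of s
    attackers = {x for (x, y) in attacks if y == arg}
    return all(any((z, x) in attacks for z in s) for x in attackers)

def admissible(s):
    return conflict_free(s) and all(acceptable(a, s) for a in s)

subsets = [frozenset(c) for r in range(len(args) + 1)
           for c in combinations(sorted(args), r)]
adm = [s for s in subsets if admissible(s)]
# preferred = maximal (w.r.t. set inclusion) admissible sets
pref = [s for s in adm
        if not any(s != t and s.issubset(t) for t in adm)]

print(sorted(sorted(s) for s in adm))   # [[], ['a'], ['a', 'c'], ['c']]
print(sorted(sorted(s) for s in pref))  # [['a', 'c']]
```

<p>The output reproduces the four admissible extensions and the single preferred extension {a, c} discussed above.</p>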
    </sec>
    <sec id="sec-3">
      <title>3. Our strategy</title>
      <sec id="sec-3-1">
<p>In this section, we formalize our framework and illustrate our strategy.</p>
<p>The aim of our framework is to provide users with a driving certificate that characterizes
the driver's behavior during the time interval at hand, on the basis of some predefined classes (also called
assessments) of behaviors and of the predefined physical measures of interest.</p>
        <p>We start by defining these concepts.</p>
<p>Let us denote as ℳ the set of the physical measures we take into account, as ℬ =
{b1, b2, ...} the set of assessments concerning the driving behaviors, and as [0..T] the
time interval under analysis. Moreover, we denote as D the set of devices at hand, and as D(m) the
set of devices measuring the physical measure m. Now, we define the concept of Event.
Definition 1 (Event). An event e is a tuple ⟨m, d, v, t⟩, where m ∈ ℳ, d is a device belonging to
D(m), v ∈ ℝ is the value collected by d, and t is a time point belonging to [0..T].</p>
<p>Each sensor reading produces an Event. Once the events have been collected on the basis of the
measurements, we build an argumentation framework consisting of (i) arguments derived from the
events, (ii) arguments derived from the assessment classes, and (iii) several other arguments built to enable
a reasoning mechanism whose final result is the Driving Certificate, which reports, for each
predefined class (also called assessment), the number of time points where the driving style of the
driver corresponds to it.</p>
<p>In brief, our strategy consists of the following steps:
1. Building the events by collecting measurements;
2. Building an argumentation framework incrementally starting from time point 0, that is,
augmenting it with more arguments and attacks for each time point, up to the last time point;
3. Reasoning over the argumentation framework to establish the driving behavior at each
time point, exploiting the arguments and attacks related to that time point and also the arguments
and attacks related to the previous and subsequent time points. This way, the evaluation of
each time point is done considering the time points in a non-independent way, so that possible
measurement errors due to malfunctioning of some of the devices can be fixed in the reasoning;
4. Computing, at the end, the Driving Certificate that characterizes the driver's behavior based on the
reasoning over the argumentation framework.</p>
        <p>In the following subsection, we define all the arguments and attacks that we use to build the
argumentation framework.</p>
        <sec id="sec-3-1-1">
          <title>3.1. The ASYDE framework</title>
<p>We now introduce our argumentation framework, named ASYDE. The crucial arguments in our
framework are the Low-level arguments and the Assessment arguments. We formally define both
of them as follows:
Definition 2 (Low-level argument). A low-level argument is a tuple ⟨m, d, v, t⟩, where m is a
physical measure belonging to ℳ, v is a value belonging to ℝ, t is a time point belonging to [0..T], and
d ∈ D(m) is a device/sensor that detects m’s value v at time point t.</p>
<p>Definition 3 (Assessment argument). Given a set of driving behavior assessments ℬ, an assessment
argument is a pair ⟨b, t⟩, where b belongs to ℬ, and t belongs to [0..T].</p>
          <p>Basically, the Low-level arguments are directly derived from the events, while the Assessment
arguments are directly derived from the pre-defined assessment classes. For each time point, we build
an assessment argument for each class and a low-level argument for each device and each measure.</p>
          <p>Our argumentation framework also includes the High-level arguments. These arguments represent
a projection of the values collected by the devices into ranges, typically High, Medium and Low. We
build an argument for each range, for each time point and for each measure.</p>
<p>Provided that (i) for each measure m, we denote as levels(m) the set of its levels, (ii) given a level
l ∈ levels(m), we denote as values(l, m) the set of numerical values associated with it, and (iii) we
denote as compatible(l, m) the set of levels l′ of levels(m) that are compatible with l, in the sense
that it is possible, for m, to change from l′ to l in one time point, the High-level arguments are
defined as follows:
Definition 4 (High-level argument). A high-level argument is a tuple ⟨m, l, t⟩, where m is a physical
measure belonging to ℳ, l is a level belonging to levels(m), and t belongs to [0..T].</p>
<p>We also have two types of service arguments, whose aim will become clearer in what follows. The first
one is used to link the low-level arguments to the high-level arguments of the current time point and to
link the arguments referring to the current time point with the arguments referring to the previous and
the subsequent time points.</p>
<p>Definition 5 (Dummy argument). A dummy argument is a tuple ⟨x, m, l, t⟩, where x belongs to
{past, present, future}, m is a physical measure belonging to ℳ, l is a level belonging to levels(m), and t
belongs to [0..T].</p>
          <p>The second one is used to link high-level arguments to assessment arguments.</p>
<p>Definition 6 (Dummy Assessment argument). A dummy assessment argument is a tuple ⟨b, m, t⟩,
where b is a driving behavior assessment belonging to ℬ, m belongs to ℳ, and t belongs to [0..T].</p>
<p>The framework is built incrementally, starting from the initial time point 0 and augmenting it by adding
new arguments at each time point, until the final time point T. For each time point t, we add edges
(attacks) between arguments referring to t, but also attacks that correlate t with both the preceding and the
subsequent time points.</p>
          <p>We finally define the ASYDE framework.</p>
<p>Definition 7 (ASYDE Framework). An ASYDE framework is a tuple A = ⟨Arg, Σ⟩, where Arg is the
set of arguments and Σ ⊆ Arg × Arg is the set of attacks. The set Arg of arguments is composed of
⟨LowArgs, HighArgs, DummyArgs, DummyAssArgs, AssArgs⟩, where LowArgs is a set of low-level
arguments, HighArgs is a set of high-level arguments, DummyArgs is a set of dummy arguments,
DummyAssArgs is a set of dummy assessment arguments, and AssArgs is a set of assessment arguments.</p>
<p>For a generic time point t, the ASYDE framework is composed of:
1. a low-level argument a = ⟨m, d, v, t⟩ from each event ⟨m, d, v, t⟩;
2. a high-level argument h = ⟨m, l, t⟩ for each measure and for each of its levels;
3. three dummy arguments ⟨present, m, l, t⟩, ⟨past, m, l, t⟩, and ⟨future, m, l, t⟩ for each measure
and for each of its levels;
4. two mutual attacks for every pair of high-level arguments h = ⟨m, l, t⟩ and h′ = ⟨m, l′, t⟩;
5. (i) a self-attack on every dummy argument, (ii) an attack from every dm = ⟨present, m, l, t⟩ to
every h = ⟨m, l, t⟩, (iii) an attack from every a = ⟨m, d, v, t⟩ to every dm = ⟨present, m, l, t⟩
such that value v belongs to level l for m, (iv) an attack from every dm = ⟨past, m, l, t⟩ to
every h = ⟨m, l, t⟩, (v) an attack from every h = ⟨m, l, t − 1⟩ to every dm = ⟨past, m, l′, t⟩
such that l′ ∈ compatible(l, m), (vi) an attack from every dm = ⟨future, m, l, t⟩ to every
h = ⟨m, l, t⟩, (vii) an attack from every h = ⟨m, l, t + 1⟩ to every dm = ⟨future, m, l′, t⟩
such that l′ ∈ compatible(l, m);
6. an assessment argument ⟨b, t⟩ for each class b in ℬ;
7. a dummy assessment argument ⟨b, m, t⟩ for each class b in ℬ and each measure m in ℳ;
8. two mutual attacks for every pair of assessment arguments ⟨b, t⟩ and ⟨b′, t⟩;
9. (i) a self-attack on every dummy assessment argument, (ii) an attack from every ⟨b, m, t⟩ to every
⟨b, t⟩, and (iii) an attack from every h = ⟨m, l, t⟩ to every ⟨b, m, t⟩ such that l for m is judged
compatible with the driving behavior b: for example, speed’s level L is compatible with the
CalmDriving behavior assessment.</p>
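<p>The per-time-point construction above can be sketched in code. The following is an illustrative sketch, not the paper's implementation: the level names, thresholds, class/level compatibility table, and helper names are assumptions, and the cross-time attacks of items 5.(v) and 5.(vii) are omitted for brevity:</p>

```python
# Sketch of the per-time-point construction of Definition 7.
LEVELS = {"S": ["L", "M", "H"]}                  # levels(S): Low, Medium, High
CLASSES = ["CD", "ND", "AD"]
CLASS_LEVELS = {"CD": {"L"}, "ND": {"M"}, "AD": {"H"}}  # assumed compatibility

def value_to_level(m, v):
    # hypothetical speed thresholds: Low up to 50, Medium up to 90, else High
    if v > 90:
        return "H"
    if v > 50:
        return "M"
    return "L"

def build_time_point(events, m, t):
    """events: iterable of (measure, device, value, time) tuples for time t."""
    args, att = set(), set()
    for lvl in LEVELS[m]:
        h = ("high", m, lvl, t)
        args.add(h)
        for kind in ("present", "past", "future"):
            dm = (kind, m, lvl, t)
            args.add(dm)
            att.add((dm, dm))        # (5.i) self-attack on every dummy
            att.add((dm, h))         # (5.ii)/(5.iv)/(5.vi) dummy attacks h
    for l1 in LEVELS[m]:             # (4) mutual attacks between high-level args
        for l2 in LEVELS[m]:
            if l1 != l2:
                att.add((("high", m, l1, t), ("high", m, l2, t)))
    for (mm, dev, v, tt) in events:  # (1) one low-level argument per event
        low = ("low", mm, dev, v, tt)
        args.add(low)
        lvl = value_to_level(mm, v)
        att.add((low, ("present", mm, lvl, tt)))   # (5.iii)
    for b in CLASSES:
        a, da = ("ass", b, t), ("dass", b, m, t)
        args.update({a, da})
        att.add((da, da))            # (9.i) self-attack
        att.add((da, a))             # (9.ii) dummy assessment attacks assessment
        for lvl in CLASS_LEVELS[b]:  # (9.iii) a compatible level defends b
            att.add((("high", m, lvl, t), da))
    for b1 in CLASSES:               # (8) mutual attacks between assessments
        for b2 in CLASSES:
            if b1 != b2:
                att.add((("ass", b1, t), ("ass", b2, t)))
    return args, att

args, att = build_time_point([("S", "OBDII", 60, 1), ("S", "GPS", 30, 1)], "S", 1)
```

<p>With the two low-level arguments of Example 1, this yields 20 arguments for time point 1; the OBDII reading of 60 defends the Medium high-level argument by attacking the corresponding present dummy.</p>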
<p>In a nutshell, our aim is to enable a mechanism that allows us to have at least one preferred
extension containing exactly one assessment argument per time point, or possibly multiple preferred
extensions such that each preferred extension contains at most one of the assessment arguments for
each time point. This way, we can compute, for each assessment class b, the percentage of time points
t where the assessment argument ⟨b, t⟩ is skeptically accepted (resp., credulously accepted) in a preferred
extension. Then, we compute the final result of our technique, that is, the Driving Certificate, as follows:
Definition 8 (Driving Certificate). A Driving Certificate is a set of pairs of the form
⟨b, [min..max]⟩, where b is a driving behavior class belonging to ℬ, and min (resp., max) is a percentage
representing the fraction of time points where an assessment argument of the form a = ⟨b, t⟩ has been skeptically
(resp., credulously) accepted over the time interval [0..T].</p>
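<p>Given the preferred extensions, Definition 8 reduces to counting skeptical and credulous acceptance per class. The following is a minimal sketch (the tuple shapes and class labels are illustrative conventions, not the paper's implementation):</p>

```python
def driving_certificate(preferred_exts, classes, num_time_points):
    """min% = share of time points where ("ass", b, t) is in EVERY preferred
    extension; max% = share where it is in AT LEAST ONE (Definition 8)."""
    cert = {}
    for b in classes:
        skeptical = credulous = 0
        for t in range(num_time_points):
            member = [("ass", b, t) in ext for ext in preferred_exts]
            if all(member):
                skeptical += 1
            if any(member):
                credulous += 1
        cert[b] = (round(100 * skeptical / num_time_points, 1),
                   round(100 * credulous / num_time_points, 1))
    return cert

# The two preferred extensions of Example 1, restricted to their
# assessment arguments:
ext1 = {("ass", "AD", 0), ("ass", "ND", 1), ("ass", "ND", 2)}
ext2 = {("ass", "AD", 0), ("ass", "ND", 1), ("ass", "AD", 2)}
print(driving_certificate([ext1, ext2], ["CD", "ND", "AD"], 3))
# {'CD': (0.0, 0.0), 'ND': (33.3, 66.7), 'AD': (33.3, 66.7)}
```

<p>The output reproduces the certificate reported at the end of Example 1.</p>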
<p>Example 1. Consider the case where we have a single physical measure, namely speed (denoted by S),
monitored by two devices, GPS and OBDII, and a time interval [0..2]. For each time point t ∈ [0..2], we
have two low-level arguments, corresponding to the two speed measurements. Figure 1 shows an excerpt of
the structure of our framework focused on time point 1. For the sake of space, we show all the arguments
and attacks built at t = 1, but only the arguments and attacks built at t = 0 and t = 2 that are linked to
arguments built at t = 1. The two low-level arguments for time point 1 are depicted in green with dashed
lines.</p>
<p>In this example we define levels(S) = {L, M, H}, where L stands for Low, and M and H for Medium
and High, respectively. Then, for each level and for each time point, we have a high-level argument,
depicted in yellow. Assuming that drivers’ behavior is classifiable into three categories (CalmDriving (CD),
NormalDriving (ND), and AggressiveDriving (AD)), we have the three assessment arguments depicted
in white.</p>
<p>The dummy arguments that attack the high-level arguments and are attacked by low-level arguments
are depicted as orange circles. The dummy arguments that link the current time point to the previous (resp.,
subsequent) one are depicted as blue squares (resp., pink stars). The dummy assessment arguments are depicted
as brown diamonds.</p>
<p>It is easy to see that each high-level argument must be defended, by at least one low-level argument,
against the dummy-argument attack in order to be included in a preferred extension. In this example, at time point 1, we
have one low-level argument defending the high-level argument ⟨S, M, 1⟩, representing the level Medium,
and the other one defending the high-level argument representing the level Low. The high-level argument
representing the level High is instead not defended and, therefore, cannot be accepted in any preferred
extension. In practice, this means that assuming that the driver was driving at a high speed is not
reasonable for time point 1.</p>
<p>By correlating the current time point with both the preceding and subsequent ones, we can eliminate certain
possibilities that we would not be able to rule out by considering only the current time point. Specifically, to
be accepted in a preferred extension, each high-level argument must be defended from both the attack of
the blue-square dummy argument and that of the pink-star dummy argument. This defense is provided by the
high-level arguments of the previous and subsequent time points, respectively, that represent a compatible
range. In other words, if, for instance, at time point 0 we accepted the high-level argument representing the
level High, at time point 1 the high-level arguments associated with levels High and Medium are defended,
while the one related to level Low is not. Consequently, ⟨S, L, 1⟩ cannot be accepted in any preferred
extension.</p>
<p>Assuming that, at time point 0 (resp., 2), there are the two arguments ⟨S, OBDII, 80, 0⟩,
⟨S, GPS, 85, 0⟩ (resp., ⟨S, OBDII, 65, 2⟩, ⟨S, GPS, 95, 2⟩), defending ⟨S, H, 0⟩ (resp., ⟨S, M, 2⟩ and
⟨S, H, 2⟩) from the attacks by the dummy arguments, we have that ⟨AD, 0⟩ (resp., ⟨ND, 2⟩ and ⟨AD, 2⟩)
is defended by them from the attacks by the dummy assessment arguments. Furthermore, we have that the
arguments ⟨S, H, 1⟩ and ⟨S, M, 1⟩ (resp., ⟨S, L, 1⟩, ⟨S, M, 1⟩ and ⟨S, H, 1⟩) are defended by ⟨S, H, 0⟩
(resp., ⟨S, M, 2⟩ and ⟨S, H, 2⟩) from the attacks by the blue-square (resp., pink-star) dummy arguments.
On the contrary, ⟨S, L, 1⟩ is not defended from the blue-square dummy argument attack and, therefore, is not
accepted in any preferred extension. In other words, we assume that the reading of the speed that supports
the level Low at time point 1 is due to a failure of the device that collected it. Therefore, we exclude the
possibility of classifying the driving style as Calm at time point 1.</p>
<p>Then, we have two preferred extensions:
• {⟨S, OBDII, 80, 0⟩, ⟨S, GPS, 85, 0⟩, ⟨S, H, 0⟩, ⟨AD, 0⟩,
⟨S, OBDII, 60, 1⟩, ⟨S, GPS, 30, 1⟩, ⟨S, M, 1⟩, ⟨ND, 1⟩,
⟨S, OBDII, 65, 2⟩, ⟨S, GPS, 95, 2⟩, ⟨S, M, 2⟩, ⟨ND, 2⟩}
• {⟨S, OBDII, 80, 0⟩, ⟨S, GPS, 85, 0⟩, ⟨S, H, 0⟩, ⟨AD, 0⟩,
⟨S, OBDII, 60, 1⟩, ⟨S, GPS, 30, 1⟩, ⟨S, M, 1⟩, ⟨ND, 1⟩,
⟨S, OBDII, 65, 2⟩, ⟨S, GPS, 95, 2⟩, ⟨S, H, 2⟩, ⟨AD, 2⟩}
The produced driving certificate is:</p>
<p>• {⟨CD, [0.0% − 0.0%]⟩, ⟨ND, [33.3% − 66.7%]⟩, ⟨AD, [33.3% − 66.7%]⟩}</p>
          <p>In conclusion, in this example, according to the certificate, the driving style can be classified as both
Normal and Aggressive over the entire time interval considered.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Experimental Evaluation</title>
      <p>
        We conducted a series of experiments to validate the capability of our prototype in distinguishing
various driving styles. As also done in [
        <xref ref-type="bibr" rid="ref6">6</xref>
], we assumed that drivers’ behavior is classifiable into the three
categories CalmDriving (CD), NormalDriving (ND), and AggressiveDriving (AD). The driving data
were collected during two different trips performed by different drivers. Both trips were taken
on a rural road, under favorable weather conditions and without traffic, ensuring that traffic did not
influence the test results.
      </p>
      <sec id="sec-4-1">
        <title>4.1. Physical Measures Set and Devices Set</title>
        <sec id="sec-4-1-1">
<p>The devices used to collect data were:</p>
<p>• OBDII device - It enables access to the vehicle’s internal information by connecting via Bluetooth to
the phone.</p>
<p>[Figure 1: Excerpt of the framework built around time point 1, showing the low-level arguments ⟨S, OBDII, 60, 1⟩ and ⟨S, GPS, 30, 1⟩ together with the high-level, dummy, dummy assessment, and assessment arguments, and a legend of the node shapes (low-level argument, high-level argument, assessment argument, dummy argument, dummy assessment argument).]</p>
<p>• Smartphone - Smartphones are equipped with 3-axial accelerometers and 3-axial gyroscopes.</p>
          <p>The smartphone must be positioned inside the car so that its y-axis is oriented toward the vehicle’s
front and its screen (z-axis) is oriented upward.</p>
          <p>• GPS device - It is used to collect vehicle speed data.</p>
          <p>
We used the following measures, which, according to the recent literature ([
            <xref ref-type="bibr" rid="ref10 ref11 ref12 ref13 ref14 ref15 ref7 ref8 ref9">7, 8, 9, 10, 11, 12, 13, 14, 15</xref>
            ]),
are deemed to be the most suitable set of physical measures for recognizing driving styles:
• Longitudinal acceleration - acceleration in the direction of the vehicle’s motion, detecting
acceleration and braking events (gathered by the y-axis of the smartphone’s accelerometer and by the OBDII
device).
• Lateral acceleration - acceleration in the transverse direction of the vehicle’s motion, detecting
turning events (gathered by the x-axis of the smartphone’s accelerometer).
• Angular velocity - rate of rotation of the vehicle, detecting turning events (gathered by the z-axis of the
smartphone’s gyroscope).
• Speed - identifying any exceeding of speed limits (gathered by the GPS and OBDII devices).
• Longitudinal jerk - variation in acceleration in the direction of the vehicle’s motion (calculated
as the derivative of the longitudinal acceleration).
• Lateral jerk - variation in acceleration in the transverse direction of the vehicle’s motion
(calculated as the derivative of the lateral acceleration).
          </p>
        </sec>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Measure Level set</title>
        <p>
Given a physical measure m, the values of m read by the sensors are mapped to specific levels
l ∈ levels(m). The categorization of levels and their respective threshold values for each physical
measure are determined based on the comfort level of passengers, which is correlated with ride quality.
Although the sense of comfort is subjective and varies among passengers, attempts have been made in
the literature to establish general measurement criteria. Various studies have identified threshold values for
discomfort concerning different physical factors such as acceleration, lateral acceleration, and jerk. In
our experiments, the threshold values for each physical measure were set based on [
          <xref ref-type="bibr" rid="ref16 ref17 ref18">16, 17, 18, 19</xref>
          ].
The only exception is the speed threshold values, which refer to road signs in order to identify any exceeding
of the limits. However, all the threshold values of the intervals are documented in a properties file,
allowing for easy adjustment based on the user’s specific requirements.
        </p>
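<p>The value-to-level mapping described above can be sketched as a small lookup over ordered thresholds. The measure names and numeric cut-offs below are illustrative placeholders, not the values used in our experiments:</p>

```python
# Hypothetical threshold table in the spirit of the properties file described
# above; the measure names and cut-offs are illustrative assumptions.
THRESHOLDS = {
    # measure: ordered (upper_bound, level) pairs; bound None catches the rest
    "longitudinal_acceleration": [(1.0, "L"), (2.5, "M"), (None, "H")],  # m/s^2
    "lateral_acceleration":      [(0.9, "L"), (2.0, "M"), (None, "H")],  # m/s^2
    "speed_over_limit":          [(0.0, "L"), (10.0, "M"), (None, "H")], # km/h
}

def to_level(measure, value):
    """Map a raw reading to a level l in levels(measure)."""
    for bound, level in THRESHOLDS[measure]:
        if bound is not None and value > bound:
            continue                  # reading exceeds this level's upper bound
        return level
```

<p>Keeping the table as plain data mirrors the properties-file design: adjusting a threshold only edits a configuration entry, not the reasoning code.</p>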
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Results</title>
<p>We implemented the prototype described in Sec. 3 using the µ-toksia argumentation reasoning system.
This solver supports several reasoning tasks over AAFs, such as the credulous and skeptical acceptance of
arguments [20].</p>
<p>The implemented prototype of the ASYDE framework processed the real-life data collected during the
trips, obtaining the driving certificates reported in Table 1. For each trip, we report the percentage of
time points where the behavior of the driver who took the trip falls into each of the predefined assessment
classes (i.e. CalmDriving (CD), NormalDriving (ND), and AggressiveDriving (AD)).</p>
<p>As shown in Table 1, the prototype appears to be capable of distinguishing various driving styles. In
fact, the class that represents the actual driving style with which the trip was conducted was recognized
by the framework for a rather significant percentage of time compared to the other classes.
The minimum (resp., maximum) percentage corresponds to the percentage of time
points where an assessment argument corresponding to the assessment class has
been skeptically (resp., credulously) accepted under the preferred semantics.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions</title>
<p>We have discussed the ASYDE framework, whose aim is to classify driver behaviors by resorting
to abstract argumentation. We have also shown some prominent results coming from a
preliminary experimental evaluation.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgements</title>
<p>We acknowledge partial financial support from the MUR project PRIN 2022 EPICA (CUP H53D23003660006),
funded by the European Union - Next Generation EU, and from the PNRR MUR project PE0000013-FAIR.</p>
<p>environments in sichuan, southwest china, Discrete Dynamics in Nature and Society 2015 (2015)
1–16. doi:10.1155/2015/494130.
[19] L. Svensson, J. Eriksson, Tuning for ride quality in autonomous vehicle: Application to linear
quadratic path planning algorithm, 2015.
[20] A. Niskanen, M. Järvisalo, µ-toksia: An efficient abstract argumentation reasoner, 2020, pp.
800–804. doi:10.24963/kr.2020/82.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>D.</given-names>
            <surname>French</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>West</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Elander</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wilding</surname>
          </string-name>
          ,
          <article-title>Decision-making style, driving style, and self-reported involvement in road traffic accidents</article-title>
          ,
          <source>Ergonomics</source>
          <volume>36</volume>
          (
          <year>1993</year>
          )
          <fpage>627</fpage>
          -
          <lpage>44</lpage>
          . doi:10.1080/00140139308967925.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>B.</given-names>
            <surname>Fazzinga</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Galassi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Torroni</surname>
          </string-name>
          ,
          <article-title>A privacy-preserving dialogue system based on argumentation</article-title>
          ,
          <source>Intell. Syst. Appl</source>
          .
          <volume>16</volume>
          (
          <year>2022</year>
          )
          200113
          . URL: https://doi.org/10.1016/j.iswa.2022.200113. doi:10.1016/J.ISWA.2022.200113.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>B.</given-names>
            <surname>Fazzinga</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Flesca</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Furfaro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Pontieri</surname>
          </string-name>
          ,
          <article-title>Process mining meets argumentation: Explainable interpretations of low-level event logs via abstract argumentation</article-title>
          ,
          <source>Inf. Syst</source>
          .
          <volume>107</volume>
          (
          <year>2022</year>
          )
          101987
          . URL: https://doi.org/10.1016/j.is.2022.101987. doi:10.1016/J.IS.2022.101987.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>B.</given-names>
            <surname>Fazzinga</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Flesca</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Furfaro</surname>
          </string-name>
          ,
          <article-title>Taking into account "who said what" in abstract argumentation: Complexity results</article-title>
          ,
          <source>Artif. Intell</source>
          .
          <volume>318</volume>
          (
          <year>2023</year>
          )
          103885
          . URL: https://doi.org/10.1016/j.artint.2023.103885. doi:10.1016/J.ARTINT.2023.103885.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>P. M.</given-names>
            <surname>Dung</surname>
          </string-name>
          ,
          <article-title>On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games</article-title>
          ,
          <source>Artificial Intelligence</source>
          <volume>77</volume>
          (
          <year>1995</year>
          )
          <fpage>321</fpage>
          -
          <lpage>357</lpage>
          . URL: https://www.sciencedirect.com/science/article/pii/000437029400041X. doi:https://doi.org/10.1016/0004-3702(94)00041-X.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>I.</given-names>
            <surname>Cojocaru</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Popescu</surname>
          </string-name>
          ,
          <article-title>Building a driving behaviour dataset</article-title>
          ,
          <year>2022</year>
          , pp.
          <fpage>101</fpage>
          -
          <lpage>107</lpage>
          . doi:10.37789/rochi.2022.1.1.17.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>G.</given-names>
            <surname>Castignani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Derrmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Frank</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Engel</surname>
          </string-name>
          ,
          <article-title>Driver behavior profiling using smartphones: A low-cost platform for driver monitoring</article-title>
          ,
          <source>IEEE Intelligent Transportation Systems Magazine</source>
          <volume>7</volume>
          (
          <year>2015</year>
          )
          <fpage>91</fpage>
          -
          <lpage>102</lpage>
          . doi:10.1109/MITS.2014.2328673.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>D. A.</given-names>
            <surname>Johnson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Trivedi</surname>
          </string-name>
          ,
          <article-title>Driving style recognition using a smartphone as a sensor platform</article-title>
          ,
          <source>in: 2011 14th International IEEE Conference on Intelligent Transportation Systems (ITSC)</source>
          ,
          <year>2011</year>
          , pp.
          <fpage>1609</fpage>
          -
          <lpage>1615</lpage>
          . doi:10.1109/ITSC.2011.6083078.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>L.</given-names>
            <surname>Eboli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Mazzulla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Pungillo</surname>
          </string-name>
          ,
          <article-title>Combining speed and acceleration to define car users' safe or unsafe driving behaviour</article-title>
          ,
          <source>Transportation Research Part C: Emerging Technologies</source>
          <volume>68</volume>
          (
          <year>2016</year>
          )
          <fpage>113</fpage>
          -
          <lpage>125</lpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S0968090X16300067. doi:https://doi.org/10.1016/j.trc.2016.04.002.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>J.</given-names>
            <surname>Ferreira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Carvalho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Ferreira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Souza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Suhara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Pentland</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Pessin</surname>
          </string-name>
          ,
          <article-title>Driver behavior profiling: An investigation with different smartphone sensors and machine learning</article-title>
          ,
          <source>PLOS ONE</source>
          <volume>12</volume>
          (
          <year>2017</year>
          )
          <fpage>1</fpage>
          -
          <lpage>16</lpage>
          . doi:10.1371/journal.pone.0174959.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Fazeen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Gozick</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Dantu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bhukhiya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. C.</given-names>
            <surname>González</surname>
          </string-name>
          ,
          <article-title>Safe driving using mobile phones</article-title>
          ,
          <source>IEEE Transactions on Intelligent Transportation Systems</source>
          <volume>13</volume>
          (
          <year>2012</year>
          )
          <fpage>1462</fpage>
          -
          <lpage>1468</lpage>
          . doi:10.1109/TITS.2012.2187640.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M. R.</given-names>
            <surname>Carlos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. C.</given-names>
            <surname>González</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wahlström</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Ramírez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Martínez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Runger</surname>
          </string-name>
          ,
          <article-title>How smartphone accelerometers reveal aggressive driving behavior? The key is the representation</article-title>
          ,
          <source>IEEE Transactions on Intelligent Transportation Systems</source>
          <volume>21</volume>
          (
          <year>2020</year>
          )
          <fpage>3377</fpage>
          -
          <lpage>3387</lpage>
          . doi:10.1109/TITS.2019.2926639.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>R.</given-names>
            <surname>Stoichkov</surname>
          </string-name>
          ,
          <article-title>Android smartphone application for driving style recognition</article-title>
          , Department of Electrical Engineering and Information Technology, Institute for Media Technology (
          <year>2013</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>V.</given-names>
            <surname>Manzoni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Corti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>De Luca</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Savaresi</surname>
          </string-name>
          ,
          <article-title>Driving style estimation via inertial measurements</article-title>
          ,
          <source>in: 13th International IEEE Conference on Intelligent Transportation Systems</source>
          ,
          <year>2010</year>
          , pp.
          <fpage>777</fpage>
          -
          <lpage>782</lpage>
          . doi:10.1109/ITSC.2010.5625113.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>P.</given-names>
            <surname>Brombacher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Masino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Frey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Gauterin</surname>
          </string-name>
          ,
          <article-title>Driving event detection and driving style classification using artificial neural networks</article-title>
          ,
          <source>in: 2017 IEEE International Conference on Industrial Technology (ICIT)</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>997</fpage>
          -
          <lpage>1002</lpage>
          . doi:10.1109/ICIT.2017.7915497.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>I.</given-names>
            <surname>Bae</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Moon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Seo</surname>
          </string-name>
          ,
          <article-title>Toward a comfortable driving experience for a self-driving shuttle bus</article-title>
          ,
          <source>Electronics</source>
          <volume>8</volume>
          (
          <year>2019</year>
          )
          943
          . doi:10.3390/electronics8090943.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>X.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Mei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Kuang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <article-title>A vehicle steering recognition system based on low-cost smartphone sensors</article-title>
          ,
          <source>Sensors</source>
          <volume>17</volume>
          (
          <year>2017</year>
          )
          633
          . doi:10.3390/s17030633.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>J.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Shao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <article-title>An experimental study on lateral acceleration of cars in different environments in Sichuan, southwest China</article-title>
          ,
          <source>Discrete Dynamics in Nature and Society</source>
          <volume>2015</volume>
          (
          <year>2015</year>
          )
          <fpage>1</fpage>
          -
          <lpage>16</lpage>
          . doi:10.1155/2015/494130.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>L.</given-names>
            <surname>Svensson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Eriksson</surname>
          </string-name>
          ,
          <article-title>Tuning for ride quality in autonomous vehicle: Application to linear quadratic path planning algorithm</article-title>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>A.</given-names>
            <surname>Niskanen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Järvisalo</surname>
          </string-name>
          ,
          <article-title>µ-toksia: An efficient abstract argumentation reasoner</article-title>
          ,
          <year>2020</year>
          , pp.
          <fpage>800</fpage>
          -
          <lpage>804</lpage>
          . doi:10.24963/kr.2020/82.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>