Consistency checking of attention aware systems

    Yensen Limón1 , Everardo Bárcenas2 , Edgard Benı́tez-Guerrero1 , and Javier
                                      Gomez2
           1
               Facultad de Estadı́stica e Informática, Universidad Veracruzana
                    zs16017367@estudiantes.uv.mx,edbenitez@uv.mx
                            2
                               Facultad de Ingenierı́a, UNAM
                       ebarcenas@unam.mx,javierg@fi-b.unam.mx



        Abstract. Attention aware systems keep track of users' attention in
        order to react and provide a better interaction experience. In the edu-
        cational setting, recent research results have shown that attention aware
        systems can be useful in many smart learning scenarios, such as student
        performance evaluation and the identification and motivation of inatten-
        tive students. However, ensuring a consistent behavior in these systems
        has been found to be a challenging task, due to the rich and heterogeneous
        contexts that must be modeled. Formal verification methods for these
        systems are thus a demanding, incipient research perspective. In the cur-
        rent work, we study a formal notion of consistency in attention aware
        systems for the educational setting. An expressive modal logic is pro-
        posed as specification language, and a consistency checking algorithm is
        provided. The algorithm is defined in terms of the satisfiability problem
        of the logic.


Keywords: Formal Verification, Attention Aware Systems, Modal Logics


1     Introduction

Consider the following scenario in a smart learning environment, say a class-
room. One or several instructors are helping a number of students to carry
out a learning activity (which can be divided into sub-activities). For this
purpose, the instructor(s) and students have access to a number of smart devices:
interactive board, computers, phones, etc. At some point during the learning
activity, students are supposed to solve some mathematical problems. One of these
students seems frustrated when trying to solve these problems. Frustration may be
detected by facial expression or gaze patterns [12]. Once this inattentive state
(frustration) is detected by the smart classroom system, a tutorial video on how
to solve the mathematical problems pops up on the student's screen. A student
with a different learning style may obtain a text lesson on related problems in-
stead of the tutorial video. If, after some time, the student still seems frustrated,
one of the instructors, the mathematician, receives a corresponding alert on their
mobile phone, so as not to disturb the other students. This is a traditional smart
learning environment scenario focused on high engagement, a key feature defining
smart learning according to [5]. Systems focused on this particular feature
are often called attention aware systems [11]: systems capable of realizing
human perceptual and cognitive abilities, and of reacting in order to provide a
better interaction experience.


Copyright © 2019 for this paper by its authors. Use permitted under Creative Commons License
Attribution 4.0 International (CC BY 4.0)
    Due to the rich and heterogeneous contexts involved in attention aware systems,
the design process has become a very challenging task: ensuring a consistent
system behavior in such a vast context scales far beyond the developer's scope.
In this paper, we propose a formal notion of consistency in attention aware
systems. This notion is defined in terms of the satisfiability of an expressive
modal logic. This result implies a consistency checking algorithm.


1.1   Related Work

Due to the relatively recent introduction of studies on attention aware sys-
tems [11], research on formal verification methods for this type of system is
basically incipient. Attention aware systems can also be seen as a sort of context-
aware system [3], where the context is focused on human attention aspects. For-
mal verification methods for context-aware systems have been studied in many
settings [10], including the educational one [1].
    In the setting of formal verification of context-aware systems, the studies
closest to our proposal are reported in [8,9]. In [8], formal verification of
context-aware systems is defined in terms of the satisfiability of µ-calculus
formulas. The µ-calculus is an expressive modal logic with fixed-point operators.
In our proposal, we also verify system consistency in terms of the satisfiability
of µ-calculus formulas. In contrast with the context-aware systems verified in [8],
verification of attention aware systems requires the construction of deeper models.
In [9], a modeling and verification framework for context-aware systems is proposed
in terms of the consistency of knowledge bases of an expressive description logic.


1.2   Outline

In Section 2, we introduce the formal notions required to model and verify at-
tention aware systems. In particular, since the notion of consistency is defined
in terms of tree-shaped models, we first define tree-shaped Kripke structures.
We also define in this section µ-calculus formulas, which are interpreted over
such Kripke structures. In Section 3, we define attention aware systems; in
particular, inspired by [6], we define an attention aware smart learning en-
vironment. A formal notion of consistency of the environment is also introduced
in this section. Section 4 is devoted to the description of a consistency checking
algorithm for the attention aware smart learning environment. Consistency is
described in terms of a µ-calculus formula. Additionally, we also show how to
express temporal constraints on consistency models, such as restrictions on con-
secutive system interventions. Finally, in Section 5, we give a brief summary of
our proposal. Further research perspectives are also discussed in this last section.




2   Preliminaries

We introduce in this section the specification language: the alternation-free
propositional modal µ-calculus with converse. This is an expressive modal logic
with least and greatest fixed points and converse modalities [4]. Since the
notion of consistency of context-aware systems is defined in terms of tree-shaped
models [8], we also interpret the specification language over tree structures.
Before giving a precise notion of trees, we first assume an alphabet composed
of a countable set of propositions P, a set of modalities M = {1, 2, 3, 4}, and a
countable set of variables X.
   A tree structure K is a tuple (N, R, L), where:

 – the set of nodes N is defined as a complete, prefix-closed, non-empty finite
   set of words over the natural numbers N, that is, N is a finite set of words
   N ⊆ N*, such that if n · i ∈ N, where n ∈ N* and i ∈ N, then also n ∈ N;
 – R ⊆ N × M × N is a transition relation, written n′ ∈ R(n, m), such that for
   all (n · i), (n · i + 1) ∈ N, where i ∈ N: (n · i) ∈ R(n, 1) and n ∈ R(n · i, 3),
   that is, n is the parent of n · i and n · i is a child of n; and
   (n · i + 1) ∈ R(n · i, 2) and (n · i) ∈ R(n · i + 1, 4), that is, n · i + 1 is a
   following (right) sibling of n · i and n · i is a previous (left) sibling of
   n · i + 1; and
 – L : N → P is a total labeling function.
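
To make this concrete, the following is a minimal sketch (in Python, with
illustrative names not taken from the paper) of such a tree structure: nodes are
encoded as tuples of naturals, and the transition relation covers the four
modalities used below (1 for children, 2 for right siblings, 3 for parents, 4 for
left siblings).

    # A minimal sketch of a tree structure K = (N, R, L); names are illustrative.
    # Nodes are words over the naturals, encoded as tuples: () is the root,
    # (0,) its first child, (0, 1) the second child of (0,), and so on.

    def transitions(nodes):
        """Build R as a dict mapping (node, modality) to the set of reachable nodes.
        Modalities: 1 = child, 2 = right sibling, 3 = parent, 4 = left sibling."""
        R = {}
        for n in nodes:
            R[(n, 1)] = {m for m in nodes if len(m) == len(n) + 1 and m[:-1] == n}
            R[(n, 3)] = {n[:-1]} if n else set()
            right = n[:-1] + (n[-1] + 1,) if n else None
            left = n[:-1] + (n[-1] - 1,) if n and n[-1] > 0 else None
            R[(n, 2)] = {right} if right in nodes else set()
            R[(n, 4)] = {left} if left in nodes else set()
        return R

    # Example: a root labeled r with two children labeled p and q.
    N = {(), (0,), (1,)}
    L = {(): "r", (0,): "p", (1,): "q"}
    R = transitions(N)
    assert R[((0,), 3)] == {()} and R[((0,), 2)] == {(1,)}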

    The specification language is now introduced. The set of µ-calculus formulas
is defined by the following grammar:

                       φ ::= p | X | ¬φ | φ ∨ φ | ⟨m⟩φ | µX.φ

where p is a proposition, m a modality, and X a variable.
   Formulas are interpreted with respect to a tree structure K = (N, R, L) and
a valuation function V : X → 2^N, as follows:

    [[p]]^K_V = {n | L(n) = p}                    [[X]]^K_V = V(X)
    [[¬φ]]^K_V = N \ [[φ]]^K_V                    [[φ ∨ ψ]]^K_V = [[φ]]^K_V ∪ [[ψ]]^K_V
    [[⟨m⟩φ]]^K_V = {n | R(n, m) ∩ [[φ]]^K_V ≠ ∅}
    [[µX.φ]]^K_V = ⋂ {N′ ⊆ N | [[φ]]^K_{V[N′/X]} ⊆ N′}

If there is a tree K such that the interpretation of a formula φ is not empty with
respect to some valuation, then we say φ is satisfiable and K is a model of φ.
    Intuitively, formulas are interpreted as follows. Propositions are node labels.
Negations and disjunctions are set complements and unions, respectively. A modal
formula ⟨m⟩φ holds at nodes that can access, through m, a node in the interpretation
of φ; that is, φ nodes are children, right siblings, parents, and left siblings of
⟨1⟩φ, ⟨2⟩φ, ⟨3⟩φ, and ⟨4⟩φ nodes, respectively. Fixed points can be seen as recursion
operators; for instance, ⟨1⟩µX.(p ∨ ⟨1⟩X) holds at nodes with at least one
descendant named p.
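
The semantics above translates directly into a naive evaluator over a finite tree.
The sketch below (a hypothetical encoding: formulas as nested tuples) computes
interpretations by structural recursion and obtains the least fixed point by
iteration from the empty set; on the small tree of the previous sketch,
eval_formula(descendant_p, N, R, L, {}) returns the set of nodes having a
descendant labeled p.

    # A naive evaluator for the semantics above; encoding and names are illustrative.
    # Formulas are nested tuples: ("p", name), ("var", X), ("not", f), ("or", f, g),
    # ("dia", m, f) for <m>f, and ("mu", X, f) for the least fixed point.

    def eval_formula(phi, N, R, L, V):
        kind = phi[0]
        if kind == "p":
            return {n for n in N if L.get(n) == phi[1]}
        if kind == "var":
            return V.get(phi[1], set())
        if kind == "not":
            return N - eval_formula(phi[1], N, R, L, V)
        if kind == "or":
            return eval_formula(phi[1], N, R, L, V) | eval_formula(phi[2], N, R, L, V)
        if kind == "dia":  # <m>f: nodes with some m-successor satisfying f
            sat = eval_formula(phi[2], N, R, L, V)
            return {n for n in N if R.get((n, phi[1]), set()) & sat}
        if kind == "mu":  # least fixed point, computed by iteration from the empty set
            X, body, current = phi[1], phi[2], set()
            while True:
                nxt = eval_formula(body, N, R, L, {**V, X: current})
                if nxt == current:
                    return current
                current = nxt
        raise ValueError(f"unknown formula: {phi}")

    # <1> mu X.(p \/ <1>X): nodes with at least one descendant labeled p.
    descendant_p = ("dia", 1, ("mu", "X", ("or", ("p", "p"), ("dia", 1, ("var", "X")))))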




3   Attention aware systems


We now introduce the notion of consistency in attention aware systems. For
this, we first define an attention aware system in the educational setting,
inspired by [6]. The system is intended to engage inattentive students
participating in a learning activity. Learning activities are carried out in a
smart classroom equipped with sensors able to detect when students are not
engaged in the activities. This disengagement embraces several scenarios, for
instance when a student is inattentive or may be experiencing frustration because
of the activity. According to the scenario, the system is supposed to react
accordingly. For example, if the student is inattentive, watching his mobile
phone, the system may send a visual or sound alert directly to the student's
phone. In case the student is frustrated trying to solve some mathematical
problems, the system may show on the student's screen a recall of the information
associated with the problem the student is trying to solve. For this re-engagement
process, the system must consider a large number of context constraints, such as
the student's learning style, the type of learning activity, the available devices,
and so on. Intuitively, the notion of consistency for these systems aims to
guarantee, at system design time, that context constraints are satisfied: a
consistent system must be able to intervene in any possible disengagement
scenario. In addition to the attention aware system proposed in [6], we also
consider temporal and location constraints. These constraint types concern the
consecutive repetition of system interventions on the same student in the same
activity, since repeated system interventions may not impact the engagement
process as desired.
   We then formally define an attention aware system by its input. The system
input is defined as a tuple (Sch, IL, IA, ID, SD, LD), where:


 – the schedule Sch is a set of tuples (t, l, la, s) associating times t, locations l,
   learning activities la, and students s;
 – IL is composed of pairs (i, la) of interventions i and learning activities la;
 – IA is a set of pairs (i, a) of interventions i and attention states a;
 – (i, d) ∈ ID relates interventions i with devices d;
 – (s, d) ∈ SD denotes which devices d are accessible to each student s; and
 – (l, d) ∈ LD indicates which devices d are available at each location l.


Example 1. Consider for instance the following scenario: at 2 locations, 4 stu-
dents (2 at each location) are participating in 2 activities over 2 time lapses.
There are 2 types of attention states, say being inattentive and being frustrated.
There are also only 2 types of devices: mobile phone and computer. The system
can react in two different ways in order to attract the student's attention: by a
sound alert and by recalling a learned lesson.




[Figure 1 shows a tree rooted at r whose successive levels correspond to times
(t1, ..., tk1), locations (l1, ..., lk2), students (s1, ..., sk3), attention states
(a1, ..., ak4), devices (dj), and interventions (ij).]

                   Fig. 1. Graphical representation of a consistency model.


All these variables are related in several ways. More precisely:

   ID = {(i1, d1), (i2, d2), (i3, d1), (i3, d2), (i4, d1), (i4, d2), (i5, d1), (i5, d2),
         (i6, d1), (i6, d2)}
   IA = {(i1, a1), (i2, a2), (i3, a1), (i3, a2), (i4, a1), (i4, a2), (i5, a1), (i5, a2),
         (i6, a1), (i6, a2)}
   IL = {(i1, la1), (i2, la1), (i3, la2), (i4, la2), (i5, la3), (i6, la3), (i6, la4)}
   LD = {(l1, d1), (l2, d2)}
   SD = {(s1, d2), (s2, d2), (s3, d2), (s4, d2)}
  Sch = {(t1, l1, la1, s1), (t1, l1, la1, s2), (t1, l2, la2, s3), (t1, l2, la2, s4),
         (t2, l1, la3, s1), (t2, l1, la3, s2), (t2, l2, la4, s3), (t2, l2, la4, s4)}
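
For later reference, this input can be written down directly as finite relations;
the following sketch simply transcribes the sets above as Python sets of tuples
(an illustrative encoding, not part of the formal development).

    # The system input of Example 1 encoded as finite relations (illustrative).
    ID = {("i1", "d1"), ("i2", "d2"), ("i3", "d1"), ("i3", "d2"), ("i4", "d1"),
          ("i4", "d2"), ("i5", "d1"), ("i5", "d2"), ("i6", "d1"), ("i6", "d2")}
    IA = {("i1", "a1"), ("i2", "a2"), ("i3", "a1"), ("i3", "a2"), ("i4", "a1"),
          ("i4", "a2"), ("i5", "a1"), ("i5", "a2"), ("i6", "a1"), ("i6", "a2")}
    IL = {("i1", "la1"), ("i2", "la1"), ("i3", "la2"), ("i4", "la2"),
          ("i5", "la3"), ("i6", "la3"), ("i6", "la4")}
    LD = {("l1", "d1"), ("l2", "d2")}
    SD = {("s1", "d2"), ("s2", "d2"), ("s3", "d2"), ("s4", "d2")}
    Sch = {("t1", "l1", "la1", "s1"), ("t1", "l1", "la1", "s2"),
           ("t1", "l2", "la2", "s3"), ("t1", "l2", "la2", "s4"),
           ("t2", "l1", "la3", "s1"), ("t2", "l1", "la3", "s2"),
           ("t2", "l2", "la4", "s3"), ("t2", "l2", "la4", "s4")}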

    It is not hard to see that the system defined in Example 1 is consistent, since
it is able to react in any possible scenario. Before giving a precise definition
of system consistency, we define a semantic notion of consistency in terms of a
tree-shaped structure.

Definition 1 (Consistency model). Given a system input (Sch, IL, IA, ID,
SD, LD), we define a tree structure (N, R, L), called the consistency model CM,
as follows:

 – there is a root node labeled by r;
 – for each time t occurring in Sch, the root has a child node labeled by the time
   proposition t;
 – for each tuple (t, l, la, s) ∈ Sch, the node labeled by t has a child labeled by l;
 – for each tuple (t, l, la, s) ∈ Sch, the node labeled by l has a child labeled by s;
 – each student node has as many children as there are attention states occurring
   in IA;
 – each attention state node has at least one child node, labeled by a device d,
   such that (s, d) ∈ SD, (l, d) ∈ LD, and s and l label some ancestor nodes;
   and
 – each device node d has at least one child node, labeled by an intervention i,
   such that (i, d) ∈ ID.

A canonical graphical representation of a consistency model is depicted in Figure 1.
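
As an illustration of Definition 1, the following sketch (reusing the relations
of the previous sketch; names are illustrative) builds one candidate consistency
model as nested dictionaries, attaching to each attention state the devices
available to the student or at the location, following the disjunctive reading
used in the formula N5 below, together with their supported interventions.

    # A sketch of building one candidate consistency model (Definition 1) as
    # nested dicts; illustrative only.

    def consistency_model(Sch, IA, ID, SD, LD):
        attention_states = sorted({a for (_, a) in IA})
        model = {}
        for (t, l, la, s) in Sch:
            branch = model.setdefault(t, {}).setdefault((l, la), {}).setdefault(s, {})
            for a in attention_states:
                # devices accessible to the student or available at the location
                devices = ({d for (s2, d) in SD if s2 == s}
                           | {d for (l2, d) in LD if l2 == l})
                # each chosen device carries the interventions it supports (ID)
                branch[a] = {d: sorted(i for (i, d2) in ID if d2 == d)
                             for d in sorted(devices)}
        return model

    model = consistency_model(Sch, IA, ID, SD, LD)
    # For Example 1: model["t1"][("l1", "la1")]["s1"]["a1"] maps d1 (from LD) and
    # d2 (from SD) to their supported interventions.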
    Time constraints may help to increase the intervention effect: repeating an
intervention at two consecutive times may not have the desired impact. In this
work, we additionally consider time constraints under which interventions cannot
be repeated at consecutive times. We now give a precise notion of a consistent
attention aware system.

Definition 2 (System consistency). Given a system input, we say the system
is consistent if and only if there is a consistency model satisfying the time
constraints.


4     Consistency checking

In this section, we describe a consistency checking algorithm in terms of the
satisfiability of a µ-calculus formula. We first define a formula corresponding
to a consistency model.
    Given a system input (Sch, IL, IA, ID, SD, LD), we define the consistency
formula as follows:

    N1 := ⟨1⟩(t1 ∧ N2(t1) ∧ ⋀_{j=2}^{k1} ⟨2⟩^{j−1}(tj ∧ N2(tj)) ∧ ⟨2⟩^{k1−1}¬⟨2⟩⊤)

    N2(t) := ⟨1⟩(l1 ∧ la1 ∧ N3(t, l1, la1) ∧ ⋀_{j=2}^{k2(t)} ⟨2⟩^{j−1}(lj ∧ laj ∧ N3(t, lj, laj)) ∧
             ⟨2⟩^{k2(t)−1}¬⟨2⟩⊤)

    N3(t, l, la) := ⟨1⟩(s1 ∧ N4(t, l, la, s1) ∧ ⋀_{j=2}^{k3(t,l,la)} ⟨2⟩^{j−1}(sj ∧ N4(t, l, la, sj)) ∧
             ⟨2⟩^{k3(t,l,la)−1}¬⟨2⟩⊤)

    N4(t, l, la, s) := ⟨1⟩(a1 ∧ N5(t, l, la, s, a1) ∧ ⋀_{j=2}^{k4} ⟨2⟩^{j−1}(aj ∧ N5(t, l, la, s, aj)) ∧
             ⟨2⟩^{k4−1}¬⟨2⟩⊤)

    N5(t, l, la, s, a) := (FSD(t, l, la, s, a) ∨ FLD(t, l, la, s, a)) ∧ ¬⟨2⟩⊤




where

    FSD(t, l, la, s, a) := ⋁_{j=1}^{kSD(s)} ⟨1⟩(dj ∧ N6(t, l, la, s, a, dj))

    FLD(t, l, la, s, a) := ⋁_{j=1}^{kLD(l)} ⟨1⟩(dj ∧ N6(t, l, la, s, a, dj))

    N6(t, l, la, s, a, d) := ⋁_{j=1}^{k6(la,a,d)} ⟨1⟩ij ∧ ¬⟨2⟩⊤
                                              j=1




    provided that k1 is the number of time propositions in Sch, k2(t) is the num-
ber of locations at time t, k3(t, l, la) is the number of students in the learning
activity la, at location l, at time t, k4 is the number of attention states, kSD(s)
is the number of devices available for student s, kLD(l) is the number of devices
available at location l, and k6(la, a, d) is the number of interventions available
for device d, learning activity la, and attention state a.
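
The consistency formula can be generated mechanically from the input. The
following sketch (illustrative names; formulas rendered as plain strings with
<m> for modalities, ~ for negation, and T for the truth constant) builds the
top-level fragment N1; the remaining levels follow the same sibling-chain pattern.

    # A sketch generating the top-level fragment N1 of the consistency formula
    # as a string; illustrative only.

    def sibling_chain(labels, leaf):
        # l1 /\ leaf(l1) /\ <2>(l2 /\ leaf(l2)) /\ ... /\ <2>^(k-1) "no right sibling"
        parts = [f"({labels[0]} /\\ {leaf(labels[0])})"]
        for j, lab in enumerate(labels[1:], start=1):
            parts.append("<2>" * j + f"({lab} /\\ {leaf(lab)})")
        parts.append("<2>" * (len(labels) - 1) + "~<2>T")
        return "<1>(" + " /\\ ".join(parts) + ")"

    def N1(Sch):
        times = sorted({t for (t, _, _, _) in Sch})
        return sibling_chain(times, lambda t: f"N2({t})")

    # On the input of Example 1, N1(Sch) yields
    # "<1>((t1 /\ N2(t1)) /\ <2>(t2 /\ N2(t2)) /\ <2>~<2>T)".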
    As an example, consider the consistency model corresponding to the input
system described in Example 1. The first level of the model is the one for time
lapses; N1 is then defined as follows.

                N1 := ⟨1⟩(t1 ∧ N2(t1) ∧ ⟨2⟩(t2 ∧ N2(t2)) ∧ ⟨2⟩¬⟨2⟩⊤)

   The second model level distinguishes the learning activities; there are two
per time lapse.

    N2(t1) := ⟨1⟩((la1 ∧ l1) ∧ N3(t1, la1, l1) ∧ ⟨2⟩((la2 ∧ l2) ∧ N3(t1, la2, l2)) ∧
              ⟨2⟩¬⟨2⟩⊤)
    N2(t2) := ⟨1⟩((la3 ∧ l1) ∧ N3(t2, la3, l1) ∧ ⟨2⟩((la4 ∧ l2) ∧ N3(t2, la4, l2)) ∧
              ⟨2⟩¬⟨2⟩⊤)

 The following level corresponds to the students. There are four students dis-
tributed in the two locations.

    N3(t1, la1, l1) := ⟨1⟩(s1 ∧ N4(t1, la1, l1, s1) ∧ ⟨2⟩(s2 ∧ N4(t1, la1, l1, s2)) ∧
                       ⟨2⟩¬⟨2⟩⊤)
    N3(t1, la2, l2) := ⟨1⟩(s3 ∧ N4(t1, la2, l2, s3) ∧ ⟨2⟩(s4 ∧ N4(t1, la2, l2, s4)) ∧
                       ⟨2⟩¬⟨2⟩⊤)
    N3(t2, la3, l1) := ⟨1⟩(s1 ∧ N4(t2, la3, l1, s1) ∧ ⟨2⟩(s2 ∧ N4(t2, la3, l1, s2)) ∧
                       ⟨2⟩¬⟨2⟩⊤)
    N3(t2, la4, l2) := ⟨1⟩(s3 ∧ N4(t2, la4, l2, s3) ∧ ⟨2⟩(s4 ∧ N4(t2, la4, l2, s4)) ∧
                       ⟨2⟩¬⟨2⟩⊤)




    In the fourth level, we must consider all possible scenarios of the attention
states of each student at each location and time lapse.
   N4(t1, la1, l1, s1) := ⟨1⟩(a1 ∧ N5(t1, la1, l1, s1, a1) ∧ ⟨2⟩(a2 ∧ N5(t1, la1, l1, s1, a2))
                          ∧ ⟨2⟩¬⟨2⟩⊤)
   N4(t1, la1, l1, s2) := ⟨1⟩(a1 ∧ N5(t1, la1, l1, s2, a1) ∧ ⟨2⟩(a2 ∧ N5(t1, la1, l1, s2, a2))
                          ∧ ⟨2⟩¬⟨2⟩⊤)
   N4(t1, la2, l2, s3) := ⟨1⟩(a1 ∧ N5(t1, la2, l2, s3, a1) ∧ ⟨2⟩(a2 ∧ N5(t1, la2, l2, s3, a2))
                          ∧ ⟨2⟩¬⟨2⟩⊤)
   N4(t1, la2, l2, s4) := ⟨1⟩(a1 ∧ N5(t1, la2, l2, s4, a1) ∧ ⟨2⟩(a2 ∧ N5(t1, la2, l2, s4, a2))
                          ∧ ⟨2⟩¬⟨2⟩⊤)
   N4(t2, la3, l1, s1) := ⟨1⟩(a1 ∧ N5(t2, la3, l1, s1, a1) ∧ ⟨2⟩(a2 ∧ N5(t2, la3, l1, s1, a2))
                          ∧ ⟨2⟩¬⟨2⟩⊤)
   N4(t2, la3, l1, s2) := ⟨1⟩(a1 ∧ N5(t2, la3, l1, s2, a1) ∧ ⟨2⟩(a2 ∧ N5(t2, la3, l1, s2, a2))
                          ∧ ⟨2⟩¬⟨2⟩⊤)
   N4(t2, la4, l2, s3) := ⟨1⟩(a1 ∧ N5(t2, la4, l2, s3, a1) ∧ ⟨2⟩(a2 ∧ N5(t2, la4, l2, s3, a2))
                          ∧ ⟨2⟩¬⟨2⟩⊤)
   N4(t2, la4, l2, s4) := ⟨1⟩(a1 ∧ N5(t2, la4, l2, s4, a1) ∧ ⟨2⟩(a2 ∧ N5(t2, la4, l2, s4, a2))
                          ∧ ⟨2⟩¬⟨2⟩⊤)


    Level 5 constrains the availability of the devices on which the system
engagement intervention is carried out. Location, student, and attention state
constraints must be considered.
             N5(t1, la1, l1, s1, a1) := ⟨1⟩(d1 ∧ N6(t1, la1, l1, s1, a1, d1) ∧ ¬⟨2⟩⊤)
             N5(t1, la1, l1, s1, a2) := ⟨1⟩(d2 ∧ N6(t1, la1, l1, s1, a2, d2) ∧ ¬⟨2⟩⊤)
             N5(t1, la1, l1, s2, a1) := ⟨1⟩(d1 ∧ N6(t1, la1, l1, s2, a1, d1) ∧ ¬⟨2⟩⊤)
             N5(t1, la1, l1, s2, a2) := ⟨1⟩(d2 ∧ N6(t1, la1, l1, s2, a2, d2) ∧ ¬⟨2⟩⊤)
             N5(t1, la2, l2, s3, a1) := ⟨1⟩(d1 ∧ N6(t1, la2, l2, s3, a1, d1) ∧ ¬⟨2⟩⊤)
             N5(t1, la2, l2, s3, a2) := ⟨1⟩(d2 ∧ N6(t1, la2, l2, s3, a2, d2) ∧ ¬⟨2⟩⊤)
             N5(t1, la2, l2, s4, a1) := ⟨1⟩(d1 ∧ N6(t1, la2, l2, s4, a1, d1) ∧ ¬⟨2⟩⊤)
             N5(t1, la2, l2, s4, a2) := ⟨1⟩(d2 ∧ N6(t1, la2, l2, s4, a2, d2) ∧ ¬⟨2⟩⊤)
             N5(t2, la3, l1, s1, a1) := ⟨1⟩(d1 ∧ N6(t2, la3, l1, s1, a1, d1) ∧ ¬⟨2⟩⊤)
             N5(t2, la3, l1, s1, a2) := ⟨1⟩(d2 ∧ N6(t2, la3, l1, s1, a2, d2) ∧ ¬⟨2⟩⊤)
             N5(t2, la3, l1, s2, a1) := ⟨1⟩(d1 ∧ N6(t2, la3, l1, s2, a1, d1) ∧ ¬⟨2⟩⊤)
             N5(t2, la3, l1, s2, a2) := ⟨1⟩(d2 ∧ N6(t2, la3, l1, s2, a2, d2) ∧ ¬⟨2⟩⊤)
             N5(t2, la4, l2, s3, a1) := ⟨1⟩(d1 ∧ N6(t2, la4, l2, s3, a1, d1) ∧ ¬⟨2⟩⊤)
             N5(t2, la4, l2, s3, a2) := ⟨1⟩(d2 ∧ N6(t2, la4, l2, s3, a2, d2) ∧ ¬⟨2⟩⊤)
             N5(t2, la4, l2, s4, a1) := ⟨1⟩(d1 ∧ N6(t2, la4, l2, s4, a1, d1) ∧ ¬⟨2⟩⊤)
             N5(t2, la4, l2, s4, a2) := ⟨1⟩(d2 ∧ N6(t2, la4, l2, s4, a2, d2) ∧ ¬⟨2⟩⊤)




    The last level of the model corresponds to the system interventions available
in each student disengagement scenario.


                     N6(t1, la1, l1, s1, a1, d1) := ⟨1⟩(i1 ∧ ¬⟨2⟩⊤)
                     N6(t1, la1, l1, s1, a2, d2) := ⟨1⟩(i2 ∧ ¬⟨2⟩⊤)
                     N6(t1, la1, l1, s2, a1, d1) := ⟨1⟩(i1 ∧ ¬⟨2⟩⊤)
                     N6(t1, la1, l1, s2, a2, d2) := ⟨1⟩(i2 ∧ ¬⟨2⟩⊤)
                     N6(t1, la2, l2, s3, a1, d1) := ⟨1⟩(i3 ∧ ¬⟨2⟩⊤)
                     N6(t1, la2, l2, s3, a2, d2) := ⟨1⟩(i3 ∧ ¬⟨2⟩⊤)
                     N6(t1, la2, l2, s4, a1, d1) := ⟨1⟩(i4 ∧ ¬⟨2⟩⊤)
                     N6(t1, la2, l2, s4, a2, d2) := ⟨1⟩(i4 ∧ ¬⟨2⟩⊤)
                     N6(t2, la3, l1, s1, a1, d1) := ⟨1⟩(i5 ∧ ¬⟨2⟩⊤)
                     N6(t2, la3, l1, s1, a2, d2) := ⟨1⟩(i5 ∧ ¬⟨2⟩⊤)
                     N6(t2, la3, l1, s2, a1, d1) := ⟨1⟩(i5 ∧ ¬⟨2⟩⊤)
                     N6(t2, la3, l1, s2, a2, d2) := ⟨1⟩(i5 ∧ ¬⟨2⟩⊤)
                     N6(t2, la4, l2, s3, a1, d1) := ⟨1⟩(i6 ∧ ¬⟨2⟩⊤)
                     N6(t2, la4, l2, s3, a2, d2) := ⟨1⟩(i6 ∧ ¬⟨2⟩⊤)
                     N6(t2, la4, l2, s4, a1, d1) := ⟨1⟩(i6 ∧ ¬⟨2⟩⊤)
                     N6(t2, la4, l2, s4, a2, d2) := ⟨1⟩(i6 ∧ ¬⟨2⟩⊤)

    Time constraints consist in forbidding the occurrence of a system engagement
intervention at consecutive time lapses. This can be modeled with fixed point
operators. More precisely, given a system input (Sch, IL, IA, ID, SD, LD), we
define the time constraints formula as follows:

      NRT := ⟨1⟩µT.(((⋀_{h=1}^{k3} ⋁_{j=1}^{k6} ¬FL(ij, sh) ∨ ⟨2⟩¬FL(ij, sh)) ∧ ⟨2⟩T) ∨ ¬⟨2⟩⊤)

   FL(i, s) := ⟨1⟩µL.(FS(i, s) ∨ ⟨2⟩L)
   FS(i, s) := ⟨1⟩µS.((s ∧ FA(i)) ∨ ⟨2⟩S)
     FA(i) := ⟨1⟩µA.((⟨1⟩FD(i)) ∨ ⟨2⟩A)
     FD(i) := ⟨1⟩i

where k3 is the number of students and k6 is the number of interventions.
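
In the same string-based style, the time constraints formula can also be
assembled from the students and interventions of the input. The sketch below
follows the instantiated form used in the example that follows (one conjunct per
student and intervention); names and encoding are illustrative.

    # A sketch assembling the time-constraint formula NRT as a string, following
    # the instantiated form used in the example below; illustrative only.

    def FL(i, s):
        # FL(i, s) = <1> mu L.(FS(i, s) \/ <2>L), with FS, FA, FD unfolded inline
        FD = f"<1>{i}"
        FA = f"<1>muA.((<1>{FD}) \\/ <2>A)"
        FS = f"<1>muS.(({s} /\\ {FA}) \\/ <2>S)"
        return f"<1>muL.({FS} \\/ <2>L)"

    def NRT(Sch, ID):
        students = sorted({s for (_, _, _, s) in Sch})
        interventions = sorted({i for (i, _) in ID})
        fst = " /\\ ".join(f"(~{FL(i, s)} \\/ <2>~{FL(i, s)})"
                           for s in students for i in interventions)
        return f"<1>muT.((({fst}) \\/ <2>T) \\/ ~<2>T)"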
   Considering again the consistency model corresponding to the system defined
in Example 1, we define the time constraints formula as follows:

          NRT := ⟨1⟩µT.((FST1 ∧ FST2 ∧ FST3 ∧ FST4 ∨ ⟨2⟩T) ∨ ¬⟨2⟩⊤)




where

    FST1 := (¬FL(i1, s1) ∨ ⟨2⟩¬FL(i1, s1)) ∧ (¬FL(i2, s1) ∨ ⟨2⟩¬FL(i2, s1)) ∧
             (¬FL(i3, s1) ∨ ⟨2⟩¬FL(i3, s1)) ∧ (¬FL(i4, s1) ∨ ⟨2⟩¬FL(i4, s1)) ∧
             (¬FL(i5, s1) ∨ ⟨2⟩¬FL(i5, s1)) ∧ (¬FL(i6, s1) ∨ ⟨2⟩¬FL(i6, s1))
    FST2 := (¬FL(i1, s2) ∨ ⟨2⟩¬FL(i1, s2)) ∧ (¬FL(i2, s2) ∨ ⟨2⟩¬FL(i2, s2)) ∧
             (¬FL(i3, s2) ∨ ⟨2⟩¬FL(i3, s2)) ∧ (¬FL(i4, s2) ∨ ⟨2⟩¬FL(i4, s2)) ∧
             (¬FL(i5, s2) ∨ ⟨2⟩¬FL(i5, s2)) ∧ (¬FL(i6, s2) ∨ ⟨2⟩¬FL(i6, s2))
    FST3 := (¬FL(i1, s3) ∨ ⟨2⟩¬FL(i1, s3)) ∧ (¬FL(i2, s3) ∨ ⟨2⟩¬FL(i2, s3)) ∧
             (¬FL(i3, s3) ∨ ⟨2⟩¬FL(i3, s3)) ∧ (¬FL(i4, s3) ∨ ⟨2⟩¬FL(i4, s3)) ∧
             (¬FL(i5, s3) ∨ ⟨2⟩¬FL(i5, s3)) ∧ (¬FL(i6, s3) ∨ ⟨2⟩¬FL(i6, s3))
    FST4 := (¬FL(i1, s4) ∨ ⟨2⟩¬FL(i1, s4)) ∧ (¬FL(i2, s4) ∨ ⟨2⟩¬FL(i2, s4)) ∧
             (¬FL(i3, s4) ∨ ⟨2⟩¬FL(i3, s4)) ∧ (¬FL(i4, s4) ∨ ⟨2⟩¬FL(i4, s4)) ∧
             (¬FL(i5, s4) ∨ ⟨2⟩¬FL(i5, s4)) ∧ (¬FL(i6, s4) ∨ ⟨2⟩¬FL(i6, s4))

    The main result of this work is the correspondence between the existence
of a tree-shaped model for the consistency and time constraints µ-calculus
formulas and the consistency of attention aware systems.

Theorem 1. Given a system input (Sch, IL, IA, ID, SD, LD), the system is
consistent if and only if [[N1 ∧ NRT]]^CM_V ≠ ∅, for any valuation V. Furthermore,
deciding system consistency is in EXPTIME.

The proof goes by induction on the size of the system. From earlier complexity
results [4,8], we can in addition characterize the complexity of this consistency
problem.
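
Operationally, the resulting consistency checking algorithm is a thin wrapper:
generate N1 and NRT from the input and test the satisfiability of their
conjunction with a µ-calculus reasoner, as in [7,8]. A minimal sketch, reusing
the formula generators sketched above and assuming a hypothetical satisfiability
procedure mu_sat:

    # A sketch of the consistency checking procedure of Theorem 1; mu_sat is a
    # hypothetical satisfiability procedure for the mu-calculus on trees.

    def is_consistent(system_input, mu_sat):
        # Build the consistency formula and the time constraints formula, then
        # test the satisfiability of their conjunction.
        Sch, IL, IA, ID, SD, LD = system_input
        phi = f"({N1(Sch)}) /\\ ({NRT(Sch, ID)})"
        return mu_sat(phi)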


5   Conclusions

In this paper, we described a consistency checking algorithm for attention aware
systems. These systems are required to realize human perceptual and cognitive
abilities, and to react in order to provide a better interaction experience. A
consistent behavior of these systems implies the system is capable of reacting as
it is supposed to in any possible context scenario. We formalized this notion of
consistency in terms of the satisfiability of a µ-calculus formula: there is a
model satisfying the formula if and only if the attention aware system is
consistent. According to the known complexity of µ-calculus satisfiability and
the succinct encoding of system consistency provided in this paper, we also
conclude that the consistency problem of attention aware systems is in EXPTIME.
Currently, we are developing µ-calculus satisfiability algorithms as in [7], in
order to provide practical consistency checking algorithms for attention aware
systems. We are also interested in extensions of the µ-calculus, such as with
arithmetic operators [2], in order to model more complex constraints in attention
aware systems.




References
 1. Aguilar, J., Jerez, M., Rodrı́guez, T.: Cameonto: Context awareness meta ontology
    modeling. Applied Computing and Informatics 14(2), 202 – 213 (2018)
 2. Bárcenas, E., Benı́tez-Guerrero, E., Lavalle, J.: On the model checking of the
    graded µ-calculus on trees. In: Sidorov, G., Galicia-Haro, S.N. (eds.) Advances in
    Artificial Intelligence and Soft Computing - 14th Mexican International Conference
    on Artificial Intelligence, MICAI. Lecture Notes in Computer Science, vol. 9413,
    pp. 178–189. Springer (2015)
 3. Bettini, C., Brdiczka, O., Henricksen, K., Indulska, J., Nicklas, D., Ranganathan,
    A., Riboni, D.: A survey of context modelling and reasoning techniques. Pervasive
    and Mobile Computing 6(2), 161–180 (2010)
 4. Bonatti, P.A., Lutz, C., Murano, A., Vardi, M.Y.: The complexity of enriched
    mu-calculi. Logical Methods in Computer Science 4(3) (2008)
 5. Gros, B.: The design of smart educational environments. Smart Learn. Environ.
    3(1), 15 (2016)
 6. Korozi, M., Leonidis, A., Antona, M., Stephanidis, C.: LECTOR: towards reen-
    gaging students in the educational process inside smart classrooms. In: Horain,
    P., Achard, C., Mallem, M. (eds.) Intelligent Human Computer Interaction - 9th
    International Conference, IHCI, Proceedings. Lecture Notes in Computer Science,
    vol. 10688, pp. 137–149. Springer (2017)
 7. Limón, Y., Bárcenas, E., Benı́tez-Guerrero, E., Medina, M.A.: Depth-first reason-
    ing on trees. Computación y Sistemas 22(1) (2018)
 8. Limón, Y., Bárcenas, E., Benı́tez-Guerrero, E., Molero, G.: On the consistency of
    context-aware systems. Journal of Intelligent and Fuzzy Systems 34(5), 3373–3383
    (2018)
 9. Ramı́rez-Rueda, R., Bárcenas, E., Mezura-Godoy, C., Molero-Castillo, G.: Expres-
    sive context modeling with description logics. In: Villazón-Terrazas, B., Hidalgo-
    Delgado, Y. (eds.) Knowledge Graphs and Semantic Web - First Iberoamerican
    Conference, KGSWC. Communications in Computer and Information Science,
    vol. 1029, pp. 174–185. Springer (2019)
10. Riboni, D., Bettini, C.: OWL 2 modeling and reasoning with complex human
    activities. Pervasive and Mobile Computing 7(3), 379–395 (2011)
11. Roda, C., Thomas, J.: Attention aware systems: Theories, applications, and re-
    search agenda. Computers in Human Behavior 22(4), 557–587 (2006)
12. Rodriguez, A., Bárcenas, E., Molero-Castillo, G.: Model checking for gaze pat-
    tern recognition. In: International Conference on Electronics, Communications and
    Computers, CONIELECOMP. pp. 170–175. IEEE (2019)



