=Paper= {{Paper |id=Vol-2995/paper3 |storemode=property |title=Streaming and Learning the Personal Context |pdfUrl=https://ceur-ws.org/Vol-2995/paper3.pdf |volume=Vol-2995 |authors=Fausto Giunchiglia,Marcelo Rodas Britez,Andrea Bontempelli,Xiaoyue Li |dblpUrl=https://dblp.org/rec/conf/ijcai/GiunchigliaBBL21 }} ==Streaming and Learning the Personal Context== https://ceur-ws.org/Vol-2995/paper3.pdf
Twelfth International Workshop Modelling and Reasoning in Context (MRC) @IJCAI 2021                                                       19




                                 Streaming and Learning the Personal Context

           Fausto Giunchiglia1∗ , Marcelo Rodas Britez1 , Andrea Bontempelli1 and Xiaoyue Li1
                                      1
                                        University of Trento, Trento, Italy
               {fausto.giunchiglia, marcelo.rodasbritez, andrea.bontempelli, xiaoyue.li}@unitn.it



                              Abstract

        The representation of the personal context is complex and essential to
        improve the help machines can give to humans in making sense of the
        world, and the help humans can give to machines to improve their
        efficiency. We aim to design a novel representation of the personal
        context, together with a learning process that integrates it with
        machine learning, and to implement these elements in a modern system
        architecture focused on real-life environments. We also show how our
        proposal improves on closely related work. Finally, we outline the way
        forward: a better personal context representation with an improved
        model, the implementation of the learning process, and the
        architectural design of these components.

1   Introduction

Every person makes sense of their personal context differently because of
their different sets of personal characteristics (intelligence) and behaviour
(life choices). However, the machine's understanding of the personal context
is radically different from the user's understanding. This limitation is due
to the limited definition of the personal context and to the lack of tools for
making sense of it. For instance, while the person you are with now can be
linked to a name, for people that person carries more meaning than just a
name, e.g., friend or colleague. Additionally, these meanings are not fixed:
they may change at any time, and every person can assign additional meaning
using different criteria. Thus, effective context recognition requires a
complex and dynamic representation of the personal context and the
collaboration of people to fill the cognitive gap of machines.

   The addition of human collaboration to the context recognition learning of
machines is an important part of supervised machine learning [Vapnik, 2013].
These interactions bring new challenges to the implementation of machine
learning algorithms. For instance, humans can act as the experts of the
supervised algorithms, interacting in an offline fashion by annotating sensor
data [Webb, 2003], or the interaction can happen directly online, as in active
learning [Settles, 2009; Hoque and Stankovic, 2012; Hossain et al., 2017].
Human collaboration becomes all the more important as we move into real-life
scenarios [Kwapisz et al., 2011].

   Other challenges of this collaborative approach are the possibility of
overwhelming the humans and the possible differences in how different people
assign meaning [Chang et al., 2017], which make annotation a personal
activity. Moreover, having humans as the source of the information opens the
possibility of human error in the collaboration process [Tourangeau et al.,
2000]. This issue is well known in the social sciences and in psychology
because of response biases in answering self-reports [West and Sinibaldi,
2013]; more importantly, these biases are not well understood [Freedman et
al., 2013].

   We propose a novel context model based on the work of [Giunchiglia et al.,
2018]. That work focused on ensuring the reliability of annotations, whereas
our focus is on improving the personal context representation to get closer to
real-life scenarios. We therefore propose a more precise representation of the
personal context that can also work with machine learning algorithms. We
formalize an ontology-based context model and use it with the streaming data
to obtain a knowledge representation of the context data. This formalization
allows moving towards a generic definition of context that, through a
conversion algorithm, can work with existing multi-label machine learning
approaches. The last piece of the puzzle will be the design and development of
the Streaming System that technically manages the dynamic context data; it
will be organized into a system architecture with modular components for
independent development and easy deployment in current cloud environments.

   Some examples of the improvements our model brings can be seen by
comparison with our main related work [Giunchiglia et al., 2018; Bontempelli
et al., 2020; Zeni et al., 2019]. All of these works can benefit from our
novel personal context representation and can use our conversion algorithms to
explicitly implement the transformations needed by the machine learning
algorithms.

   The paper is structured as follows. Section 2 introduces context modeling.
Section 3 illustrates our representation of the personal context, and
Section 4 provides its formal representation. We then show, in Section 5, the
learning process that transitions from our formal representation to a machine
learning representation.

    ∗
        Contact Author



Copyright c 2021 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).


Section 5 also shows how our formal representation can be converted to a
Directed Acyclic Graph (DAG). Finally, Section 6 describes works related to
ours, and Section 7 concludes the paper.

2     The context in time

When we talk about the context, we concentrate on the context of a person,
called the observer. The observer's context is the representation of a partial
view of the world. We describe this context along three main dimensions: the
viewpoint, the part-whole relation, and the endurant-perdurant distinction.

   Firstly, we have the viewpoint dimension, divided into the outside
viewpoint and the observer viewpoint. The outside viewpoint is the view of an
ideal observer who can describe everything from a certain point of view.
Within this view, we distinguish the world's static and dynamic properties.
The static properties are whatever does not change in time, e.g., mountains,
buildings, streets. The dynamic properties cover not only moving people, but
also moving animals and facilities in their manifestations, like trains. The
observer viewpoint, in turn, describes how the observer perceives what is
around her or him. In this view, we also have the property of being static or
dynamic, but it is relative to the movements of the observer.

   Secondly, context is a part-whole relation. In our everyday life, when we
do things, we are always embedded in the world. From an ontological view, we
are part of the whole world. Thus, we call the reference context the element
of the outside context with a volume and extension large enough to contain all
our movements and changes. For instance, the reference context is the city of
Trento when the user walks around, or the user's home when they are at home.
In turn, our body as a whole has parts (e.g., arms, mind, legs) that are with
us all the time, and they define the internal context of the user. The
internal context identifies the elements of the user's body at different
levels of abstraction. We usually distinguish between physical parts, such as
arms, body, fingers, and mental parts, such as mind, memory, emotions. The
context as a part-whole relation is thus divided into the reference context
and the internal context, and both contexts, with their different dimensions,
play a role in our life.

   Thirdly, context as endurant and perdurant refers to how elements relate to
changes in time or space. Events and actions are perdurants, while elements,
like me, are endurants.

2.1    The spatio-temporal context

The context as viewpoints defines the reference point from which we construct
the context, and the context as part-whole defines which parts we should
consider. Next, we need to define how we keep track of the context from a
quantitative point of view, with a set of quantitative and qualitative
measures. Based on these measures, we introduce the spatio-temporal context.

   The spatio-temporal context consists of the temporal and the spatial
reference context. The former includes dates, times and all the additional
notions like weekdays and seasons. The latter contains the world coordinate
system. There are various reasons why context should be represented as a
spatio-temporal context. First, this is a common representation when we think
of the world. Second, any device today can easily retrieve the time and time
zone, as well as the space coordinates (e.g., via GPS). Third, time can be
used to measure the changes in all the elements of the world, all evolving at
different speeds; the temporal context thus allows us to relate them on a
common time base. Finally, a lot of data about the spatial reference context
and its sub-contexts are available from external sources (e.g., Google Maps,
OpenStreetMap).

   The spatio-temporal context, also called the objective context, at time t
is defined as:

    o_t = (D_t, T_t : L_t, me, coord_t(me),
           P_t^1 : coord_t(P^1), ..., P_t^k : coord_t(P^k),
           O_t^1 : coord_t(O^1), ..., O_t^m : coord_t(O^m))

where D_t and T_t stand for the date and time, respectively, and L_t is the
location, namely the smallest spatial reference context that we can compute.
Here me is the observer, the P^i are persons and the O^m are objects. The
function coord(...) computes the spatial coordinates of me, of the objects,
and of the persons.

   The number and type of persons and objects change over time. Hence, we have
a sequence of time-tagged states, namely O = {o_1, ..., o_n}. We call the
sequence O the streaming context. In the streaming context, within the given
reference location, it is easy to compute the spatial relations of the
different elements among themselves (e.g., near, right, left, in front, far,
relative to the location). For instance, the system can compute that the
smartphone is in the home building and that the smartphone is near the
computer.

2.2    The objective and subjective context

The spatio-temporal context is also called the objective context, since all
the relations are computed with respect to what is objectively measured, in
terms of spatial relations. However, notice that different observers will have
different views of the world. For instance, the school building has the
function of study place from the point of view of a student, while it is the
teacher's workplace. The word function is used here with the precise meaning
defined in [Giunchiglia and Fumagalli, 2017; Giunchiglia et al., 2018]: "the
function of an object formalizes the behavior that an object is expected to
have" [Giunchiglia and Fumagalli, 2017]. For instance, objects are trains and
buildings. The expected behavior may be the purpose of the object (e.g.,
fridge) or the role of a person (e.g., friend).

   The subjective context includes both the objective context elements and the
functions of persons and objects as seen by the user. Thus, the subjective
context at time t is defined as:

    s_t = (D_t, T_t : L_t, me, coord_t(me), F_t(P^1), ..., F_t(P^k),
           F_t(O^1), ..., F_t(O^m)),

where F_t(P^k) and F_t(O^m) are the functions of a person P^k and of an object
O^m, respectively. The number and type of persons and objects, and their
functions, change over time. The sequence of subjective contexts over time is
defined as the subjective streaming context S = {s_1, ..., s_n}.
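As a concrete illustration, the objective and subjective context tuples
defined above can be sketched as simple Python records. This is a minimal
sketch under our own naming assumptions (ObjectiveContext, SubjectiveContext,
and all field names are ours, not part of the formal model):

```python
from dataclasses import dataclass, field
from datetime import date, time

Coord = tuple[float, float, float]  # (x, y, z) spatial coordinates

@dataclass
class ObjectiveContext:
    """o_t = (D_t, T_t : L_t, me, coord_t(me), P_t^i : coord_t(P^i), O_t^m : coord_t(O^m))."""
    d: date                    # D_t
    t: time                    # T_t
    location: str              # L_t, the smallest computable spatial reference context
    me: Coord                  # coord_t(me)
    persons: dict[str, Coord]  # P^i -> coord_t(P^i)
    objects: dict[str, Coord]  # O^m -> coord_t(O^m)

@dataclass
class SubjectiveContext(ObjectiveContext):
    """s_t additionally carries the functions F_t as seen by the observer."""
    functions: dict[str, str] = field(default_factory=dict)  # e.g. "Haonan" -> "friend"

# The streaming context O = {o_1, ..., o_n} is a time-ordered sequence of states:
o1 = ObjectiveContext(date(2021, 6, 2), time(12, 15), "Train 1",
                      me=(45.89, 11.04, 0.0), persons={},
                      objects={"Seat 1": (45.89, 11.04, 0.0)})
streaming_context = [o1]
```

The subjective streaming context S is obtained in the same way, as a
time-ordered list of SubjectiveContext states that add the observer-dependent
functions.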





2.3    The endurant and perdurant context

In the endurant context [Giunchiglia et al., 2017], the parts are endurants,
essentially objects whose actions have a spatial extension contained by the
space defined by the spatial (reference) context. An action "represents how
objects change in time" [Giunchiglia and Fumagalli, 2017]; for instance,
running in a park, performed by a runner. We also need to represent actions,
in particular the actions executed by the endurant me and by any other element
of the outside dynamic context. Actions can be seen in two ways: (i) actions
modeled as processes, namely as sequences of single micro-steps, each of
length close to zero; (ii) actions modeled as events, often also called
perdurants, namely as complete movements which last for a certain duration.
Actions as events have key properties similar to those of endurants. An event
and an action can be associated with a set of component sub-events and
sub-actions.

   Considering the concepts above, the fundamentally different roles of space
and time should become clear. Whereas the parts of the space context are only
used to delimit the space where things happen, the parts of time mainly serve
to detail how actions get executed. Things get more complicated when adding
objects, people, and functions, as shown in the data representation of
Table 1.

   It is worth noting that each function of a person is associated with a
limited set of actions, so the type of action that a person can perform can be
considerably narrowed down by knowing his or her function.

   Actions apply to me and to persons, whereas functions apply to me, to other
persons and to objects. Notice also that the stored location L is limited to
the most specific location that we can compute. This is because the bigger
locations are assumed to be static and stored in the system. For events,
instead, we store the smallest possible most general event, as well as the
component actions performed during a certain period. Thus, for instance, the
action/event meeting can have sub-actions such as talking, walking, listening,
typing. Table 2 reports the streaming context matrix of Example 1 and shows
how the context changes over time.

3    The current context

The streaming context describes the contexts of an observer over time. To
represent each context occurrence, we define a set of notions to build a
figure of the current context. We mainly divide the current context into four
cases, according to how things compose in space and time. From top to bottom,
we have the following cases:

    • 1L1E (One Location, One Event), such as a lecture held in a classroom;

    • 1LME (One Location, Multiple Events), such as a sequence of meetings
      held in an office, or eating breakfast, lunch and dinner at home;

    • 1EMC (One Event, Multiple Locations), such as a travel that goes through
      many different places;

    • MEMC (Multiple Events, Multiple Locations), the most complex case, which
      mixes the former three cases.

A 1EMC example is shown in Figure 1, which describes the following travel
scenario around the observer me.

Example 1 In the travel scenario, me is Xiaoyue. She makes a travel named
Travel 1 in the Trentino region of Italy, from 12:00 to 13:00 on June 2nd,
2021. From 12:00 to 12:30 on this day, she takes Train 1 from Rovereto to
Trento, sitting on Seat 1 by herself. From 12:30 to 12:55 on the same day, she
walks on Roads 2 from the Trento train station to Xiaoyue's Home, together
with her friend Haonan. In addition, Xiaoyue talks to Haonan, and Haonan
listens to Xiaoyue while they are walking. This scenario involves one event,
the travel, and multiple locations.

   In general, in Figure 1, all elements are divided into perdurants and
endurants. The perdurants are Event and Action, and the endurants are Person,
Object, and Location. An Event happens in a Location, a Person and an Object
appear in an Event, and an Action is in a Person. Most of the elements'
inclusion relationships (IN) can be represented by the positions of those
elements' internal boxes. For the top-level Location and Event, we can add an
extra attribute box, respectively InLocation(Location) and InEvent(Event), to
represent what they belong to.

   In the rest of this section, we list the attributes of Location, Event,
Object, Person and Action; each kind of attribute is represented as a box in
the figure. The Location has the following attributes:

    • Spatial properties: Coordinates (x_i, y_i, z_i), Volume
      (Δx_i, Δy_i, Δz_i), and InLocation(L_i), which shows the super
      Location for the top-level Location;

    • Visual properties, namely some properties of the Location that can be
      observed visually;

    • The Location's functions: FunctionOf(U) with U ∈
      {P^1, ..., P^k, O^1, ..., O^m}, which shows the Location's functions for
      persons and objects;

    • An extra box, including the rest of the context that happens in the
      Location.

The Event is ordered by time and is represented by a box with round corners.
Events include actual events and virtual events. An Event can have Sub-Events;
the Event and its Sub-Events have the following attributes:

    • The super event for the top-level event: InEvent(E_i), which shows the
      super event E_i of the current Event;

    • Temporal properties: Begin Time - End Time (Date_i, Time_i - Date_j,
      Time_j), which shows the date and time of the start and end of the
      Event;

    • An extra box, including the rest of the context that happens in the
      Event.

The Object appears in an Event and has the following attributes:

    • Spatial properties: Coordinates (x_i, y_i, z_i),
      In/Far/...(P^i/O^m/...);

    • Visual properties, namely some properties of the Object that can be
      observed visually;

    • The Object's functions: FunctionOf(U) with U ∈
      {P^1, ..., P^k, O^1, ..., O^m}, which shows the Object's functions for
      persons and other objects.

The Person appears in an Event and has the following attributes:






{
  (D_1, T_1 : super(L_1), super(E_1), L_1, E_1, me, coord_1(me), A_1^me,
   F_1(P^1) : A_1^{P^1}, ..., F_1(P^k) : A_1^{P^k}, F_1(O^1), ..., F_1(O^m)),
  (D_2, T_2 : super(L_2), super(E_2), L_2, E_2, me, coord_2(me), A_2^me,
   F_2(P^1) : A_2^{P^1}, ..., F_2(P^k) : A_2^{P^k}, F_2(O^1), ..., F_2(O^m)),
  ...,
  (D_n, T_n : super(L_n), super(E_n), L_n, E_n, me, coord_n(me), A_n^me,
   F_n(P^1) : A_n^{P^1}, ..., F_n(P^k) : A_n^{P^k}, F_n(O^1), ..., F_n(O^m))
}

Table 1: The personal streaming context, where E_n is an event and super(L_n)
and super(E_n) are the super-classes of L_n and E_n, respectively. The set of
actions performed by me or by the persons, based on their functions, is
denoted by A_n^k = {a_1, ..., a_i}, with k ∈ {me, P^1, ..., P^k}.

 D_n        | T_n   | super(L_n) | super(E_n) | L_n     | E_n        | coord_n(me)   | A_n^me           | F_n(P^1) : A_n^{P^1}                          | F_n(O^1)
 02/06/2021 | 12:15 | Trentino   | Travel 1   | Train 1 | Take Train | x41, y41, z41 | Sitting          | NaN                                           | RestToolOf(Xiaoyue, Seat 1)
 02/06/2021 | 12:30 | Trentino   | Travel 1   | Roads 2 | Walk       | x43, y43, z43 | Walking, Talking | FriendOf(Xiaoyue, Haonan): Walking, Listening | NaN

Table 2: A streaming context matrix representing the travel scenario of
Example 1 from the point of view of Xiaoyue, i.e., the observer me. P^1 is
Haonan and O^1 is the object "Seat 1". Each column is a property, and each row
stands for the current context at a specific timestamp.
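To make the matrix concrete, the two rows of Table 2 can be sketched as plain
Python records. This encoding is our own illustration, not the authors'
implementation; NaN cells of the matrix become None:

```python
# Each record is the current context at one timestamp of the stream.
rows = [
    {"D": "02/06/2021", "T": "12:15", "super(L)": "Trentino",
     "super(E)": "Travel 1", "L": "Train 1", "E": "Take Train",
     "coord(me)": ("x41", "y41", "z41"), "A_me": ["Sitting"],
     "F(P1):A_P1": None, "F(O1)": "RestToolOf(Xiaoyue, Seat 1)"},
    {"D": "02/06/2021", "T": "12:30", "super(L)": "Trentino",
     "super(E)": "Travel 1", "L": "Roads 2", "E": "Walk",
     "coord(me)": ("x43", "y43", "z43"), "A_me": ["Walking", "Talking"],
     "F(P1):A_P1": "FriendOf(Xiaoyue, Haonan): Walking, Listening",
     "F(O1)": None},
]

def changed_properties(prev, cur):
    """Properties whose value changed between two consecutive states."""
    return [k for k in cur if cur[k] != prev[k]]
```

Comparing consecutive rows shows how the context changes over time: between
12:15 and 12:30 the location, event, actions and functions change, while the
reference location (Trentino) and the top-level event (Travel 1) stay fixed.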


    • Spatial properties: Coordinates (x_i, y_i, z_i),
      In/Far/...(P^i/O^m/...);

    • Visual properties, namely some properties of the Person that can be
      observed visually;

    • The Person's functions: FunctionOf(U) with U ∈
      {P^1, ..., P^k, O^1, ..., O^m}, which shows the Person's functions for
      other persons and objects;

    • Internal states: physical states (InPain()) and mental states
      (InMood(), InStress());

    • An extra box, including the Actions of the Person.

The Action is similar to the Event: it is ordered by time and is represented
by a box with round corners. An Action can have many Sub-Actions; the Action
and its Sub-Actions have the following attributes, each represented by a box
of its own:

    • Temporal properties: Begin Time - End Time (Date_i, Time_i - Date_j,
      Time_j), which shows the date and time of the start and end of the
      Action;

    • Visual properties, namely some properties of the Action that can be
      observed visually;

    • The means of the Action: Means(O^m/...);

    • Sub-actions: Action_i;

    • The Action's functions: FunctionOf(U) with U ∈
      {P^1, ..., P^k, O^1, ..., O^m}, which shows the Action's functions for
      persons and objects.

4    The current context as a Knowledge Graph

We use the Entity Type Graph (ETG) and the Entity Graph (EG) of an ontology to
represent the context. The ETG is a knowledge graph whose nodes are entity
types, which are further decorated with data properties. The object properties
are presented in the graph as the relations among the entity types.

   In Figure 2, the white box nodes are entity types that include data
properties with their data types, while the green box nodes enumerate the
values of a data property. The object properties connect the entity types and
are represented by diamond nodes with arrows. The inheritance relation in the
ETG is represented by an arrow from the super-class to the sub-class; the
sub-class inherits all the data properties and object properties of its
super-class. The EG populates the entity types and properties defined in the
ETG with specific values. It is a data graph whose nodes are entities,
connected by object property values representing the relations. Each entity
further includes its data property values. The streaming context can then be
viewed as a stream of EGs, in which each EG describes the context at a
different time.

   We design an EG example, shown in Figure 3, according to the scenario of
Example 1. The graph represents the context around "Me" and contains entities
shown as nodes, e.g., "Smartphone", "Talking", "Walk", together with the
object property values.

5    Learning Context

AI applications like smart personal assistants provide a service to the users
based on their context. The context information is usually not available to
the machine, which hence has to infer the location or the activity of the user
from sensor data (e.g., GPS, accelerometer, nearby Bluetooth devices). In our
scenario, Xiaoyue carries a smartphone that generates a stream of sensor
readings, and she annotates the data by answering questions about her context,
e.g., "Where are you?" and "What are you doing?".







                        Figure 1: One Event, Multiple Locations: a travel around me. Representation of Example 1.
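The nesting shown in Figure 1 can be sketched as follows. This is a
hypothetical encoding of Example 1 (field names are ours): the top-level event
contains sub-events, each with its own location, time interval, persons,
objects and actions.

```python
# One Event, Multiple Locations: "Travel 1" spans Train 1 and Roads 2.
travel = {
    "event": "Travel 1",
    "location": "Trentino",
    "interval": ("12:00", "13:00"),
    "sub_events": [
        {"event": "Take Train", "location": "Train 1",
         "interval": ("12:00", "12:30"),
         "persons": [], "objects": ["Seat 1"], "actions_me": ["Sitting"]},
        {"event": "Walk", "location": "Roads 2",
         "interval": ("12:30", "12:55"),
         "persons": ["Haonan"], "objects": [],
         "actions_me": ["Walking", "Talking"]},
    ],
}

def locations(ev):
    """Collect the locations an event spans, recursively over sub-events."""
    locs = {ev["location"]}
    for sub in ev.get("sub_events", []):
        locs |= locations(sub)
    return locs
```

Walking this structure recovers the 1EMC pattern: one event, several
locations, with inclusion (IN) expressed by the nesting itself.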


and “What are you doing?”. The sensor data are aggregated                     problem in which an instance x is associated to multiple con-
in time windows generating a stream of instances (e.g., the                   cepts y (aka classes in machine learning). The concepts are
average number of nearby Bluetooth devices in the last 30                     organized in a ground-truth hierarchy H = (C, I), which is
minutes). On each incoming instance, the machine decides                      a direct acyclic graph (DAG) where nodes C = {1, . . . , c}
whether to query the user to acquire the labels. The machine                  are the concepts and edges I ⊂ C × C are is-a relations,
learning technique defines the query strategy, and in the sim-                i.e., I = {(ci , cj ) | ci , cj ∈ C and ci is a child of cj } [Silla
plest case, the labels are acquired on every instance.                        and Freitas, 2011]. The labels of the instances are indica-
   The user’s context recognition is a supervised learning                    tor vectors y ∈ {0, 1}c , where the i-th elements is 1 if x



Copyright c 2021 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
Twelfth International Workshop Modelling and Reasoning in Context (MRC) @IJCAI 2021                                                       24




                              Figure 2: An ETG representing partially the personal context in our travel example.


belong to i-th concept in H and 0 otherwise. The machine is                      The properties of the ETG are grouped in properties that
trained on a stream of examples zt = (xt , yt ) drawn from a                  are context depends and properties that are static. The value
ground-truth distribution P (X, Y) that is always consistent                  of the former changes every time the users change their con-
with a ground-truth hierarchy H, i.e., if there is an edge from               text and, in Figure 2 are Q = {near, use, interact, in, do,
class ci to class cj , then y i = 1 implies y j = 1 and con-                  happenIn, during, participate}. For instance, if Xiaoyue
versely y j = 0 implies y i = 0. The goal of this hierarchical                travel from the city of Trento to Rovereto, the in property
classification tasks is to learn a classifier that recognize well             will change accordingly. In contrast, the fact that Trento is
the context on future sensor readings.                                        partOf Italy can be assumed to remain valid even if user’s
                                                                              context is changed. This distinction is necessary since the
   The ETG and EG introduced in Section 4 can be used as                      value of context-dependent properties are derived from the
prior knowledge about the structure of the hierarchy. They en-                output of the context recognition task (e.g., the machine rec-
code the available information about the user and the world.                  ognizes that the user is in the city of Trento and thus updates
Algorithm 1 shows the conversion from ETG and EG to a                         the in property accordingly). The object properties that are
DAG H. The first step is to convert each entity type (etype)                  not contextual are converted as follow. Given a object prop-
in the ETG as a node in H (lines 3 - 5). Second, each entity in               erty p ∈ {isA, partOf, has} that links the etype A to B, then
EG also becomes a node that is added as a child of the node                   the node referring to etype A becomes a child of the node of
referring to the etype of the entity (lines 6 - 10). The hier-                etype B. For the other object properties, a new node referring
archy encodes the information about the current user, so the                  to the property is added as child of the codomain etype node
Me etype and the corresponding entity (e.g., Xiaoyue entity                   (lines 13 - 19). For every object property value i of the prop-
in Figure 3) are not considered.



Copyright c 2021 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
Twelfth International Workshop Modelling and Reasoning in Context (MRC) @IJCAI 2021                                                        25




                               Figure 3: An EG representing partially the scenario about travelling of Example 1.


erty p linking the entities a and b, a new node ci is added as
child of cp (i.e., the node referring to the property P ), and as
parent of cb (i.e., the node pointing to the entity b) (lines 20 -
25). Finally, all nodes that does not have a parent are connect
to the root node and the transitive reduction is applied (lines
27 - 30). Figure 4 shows an extract of the DAG resulting from
applying Algorithm 1 on EG and ETG presented in Section 4.
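The conversion just described can be sketched in plain Python. This is a minimal illustration under our own assumptions, not the authors' implementation: the string node names, the `etype_of` mapping, the property triples, nesting the EG loop under the non-hierarchical branch (where the property node cp exists), and the naive quadratic `transitive_reduction` helper are all choices of this sketch.

```python
def etg_eg_to_dag(etypes, entities, etype_of, props_etg, props_eg, Q):
    """Sketch of Algorithm 1: build the hierarchy H = (C, I).

    etypes:    etype names, with Me already excluded by the caller
    entities:  entity names, with the Me entity already excluded
    etype_of:  dict mapping each entity to its etype
    props_etg: (p, A, B) object-property triples over etypes
    props_eg:  (p, a, b) object-property triples over entities
    Q:         set of context-dependent properties to skip
    """
    C, I = set(), set()                      # nodes and is-a edges as (child, parent)
    for A in etypes:                         # lines 3-5: one node per etype
        C.add(A)
    for a in entities:                       # lines 6-10: entities under their etype
        C.add(a)
        I.add((a, etype_of[a]))
    for (p, A, B) in props_etg:              # lines 11-19: static object properties
        if p in Q:
            continue
        if p in {"isA", "partOf", "has"}:
            I.add((A, B))                    # domain etype becomes child of codomain etype
        else:
            C.add(p)                         # property node under the codomain etype
            I.add((p, B))
            for (q, a, b) in props_eg:       # lines 20-25: property values in the EG
                if q != p:
                    continue
                ci = f"{p}({a},{b})"         # node encoding the property value
                C.add(ci)
                I.add((ci, p))
                I.add((b, ci))
    root = "root"                            # lines 26-29: attach parentless nodes to the root
    C.add(root)
    with_parent = {c for (c, _) in I}
    for c in list(C):
        if c != root and c not in with_parent:
            I.add((c, root))
    return C, transitive_reduction(C, I)     # line 30

def transitive_reduction(C, I):
    """Drop edge (u, w) when w is reachable from u through another parent (DAG only)."""
    parents = {}
    for (u, w) in I:
        parents.setdefault(u, set()).add(w)
    def reachable(u, w, skip):
        stack = [p for p in parents.get(u, ()) if (u, p) != skip]
        seen = set()
        while stack:
            n = stack.pop()
            if n == w:
                return True
            if n in seen:
                continue
            seen.add(n)
            stack.extend(parents.get(n, ()))
        return False
    return {(u, w) for (u, w) in I if not reachable(u, w, skip=(u, w))}
```

On a toy ETG/EG with City isA Location and the entity Trento of etype City, the sketch yields the chain of edges Trento → City → Location → root.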
   Every node in H has a unique identifier that is used to reference back to the ETG and EG. The node name can be translated into human-readable text that is used to interact with the user. This aspect is left as future work. The concept hierarchy available at the beginning can evolve over time and has to be continually updated. This aspect has been defined as knowledge drift and is addressed in [Bontempelli et al., 2021].
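The consistency constraint of Section 5 (a positive label on a child concept forces a positive label on its parent along every is-a edge) can be checked and repaired with a short sketch. The (child, parent) edge encoding and the concept names below are illustrative assumptions of this sketch, not part of the formalization.

```python
def is_consistent(y, edges):
    """True iff y respects the hierarchy: y[child] = 1 implies y[parent] = 1."""
    return all(not y[child] or y[parent] for (child, parent) in edges)

def propagate_up(y, edges):
    """Repair a label vector by propagating positive labels to all ancestors.

    Iterates the edge list to a fixed point, so it works for any DAG
    regardless of the order in which the edges are listed.
    """
    y = dict(y)                      # leave the caller's vector untouched
    changed = True
    while changed:
        changed = False
        for (child, parent) in edges:
            if y[child] and not y[parent]:
                y[parent] = True
                changed = True
    return y
```

For example, labelling only the leaf "walking" in a chain walking → moving → activity is inconsistent, and `propagate_up` turns on the two ancestor labels.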

6    Related Work
Considering our novel modelling of the personal context, the context learning, and the architecture presented in this paper, we can point, as use cases, to the works that individually exemplify these improvements. Current works on context recognition focus on learning the relationship between the input data (sensor readings) and the target concepts (context). The structure of the context is implicitly learned by the machine learning algorithm during the training phase. The new parts of our work are described in Section 3; compared with related work, we can outline the following: 1) the connection between the learned context model and the knowledge graph representation; 2) the extension of new dimensions

Figure 4: A partial representation of the DAG obtained from the ETG and EG examples in Section 4. Orange nodes are derived from the EG, blue nodes from the ETG. Red nodes are actions and green nodes are the functions.





and classification of context, i.e., Internal State, Functions, and Actions. Algorithm 1 bridges the gap between the two modellings, allowing context recognition to leverage both machine learning solutions and the knowledge graph representation (ETG and EG). For instance, the ETG and EG can generate the questions needed by the machine to interact with the users. The personal context recognition model shown in [Giunchiglia et al., 2018] proved to be a good approach to increase the accuracy of context recognition algorithms. Our work described in Section 3 enhances the representation of the personal context with the aim of performing better in real-life scenarios. In practice, existing approaches for context recognition using batch or streaming sensor data [Vaizman et al., 2018; Bontempelli et al., 2020; Zeni et al., 2019] do not leverage an explicit context modelling. The context representation is implicit in the labels used to train the machine learning model. The context formalization introduced in this work can be used to structure the output of these machine learning models according to our representation. Moreover, it can help machine learning approaches to interact with users. For instance, the machine can ask if Haonan is a friend of Xiaoyue since they are walking together. Approaches that use active learning strategies (e.g., [Settles, 2009; Hoque and Stankovic, 2012; Hossain et al., 2017]) can benefit from our representation.
   Also, existing frameworks for creating context-aware mobile applications, such as AWARE [Ferreira et al., 2015], do not consider the modelling of the context.

Algorithm 1 Convert the ETG and EG into a DAG.
      Inputs: ETG, EG, and the set of context-dependent object properties Q
      Outputs: H = (C, I)
 1: C ← ∅
 2: I ← ∅
 3: for every etype A ≠ Me in ETG do
 4:     let cA be the node of etype A
 5:     C = C ∪ {cA}
 6: for every entity a such that ¬Me(a) in EG do
 7:     let ca be the node of the entity a
 8:     let cA be the node of the etype of a
 9:     C = C ∪ {ca}
10:     I = I ∪ {(ca, cA)}
11: for every object property p in ETG such that p ∉ Q do
12:     let cp be the node of p
13:     for every p(A, B) in ETG do
14:         let cA and cB be the nodes of etype A and B respectively
15:         if p ∈ {isA, partOf, has} then
16:             I = I ∪ {(cA, cB)}
17:         else
18:             C = C ∪ {cp}
19:             I = I ∪ {(cp, cB)}
20:     for every p(a, b) in EG do
21:         let ci be the node encoding p(a, b)
22:         let cb be the node of entity b
23:         C = C ∪ {ci}
24:         I = I ∪ {(ci, cp)}
25:         I = I ∪ {(cb, ci)}
26: let c0 be the root node
27: C = C ∪ {c0}
28: for every node c ∈ C such that parent(c) = ∅ do
29:     I = I ∪ {(c, c0)}        ▷ add an edge from c to c0
30: apply transitive reduction on H = (C, I)

7    Conclusion and Future Work
In this paper, we moved forward with a better representation of the personal context in real-life environments. We proposed an improved representation of the personal context, adding the internal state, functions, and actions. On the learning side, we formally defined an algorithm to transform the streaming input data for ML algorithms. We will put all these components together in the system architecture.
   In comparison with the work on personal context recognition for human-machine collaboration [Giunchiglia et al., 2018], we have shown an enhancement of the model representation of the personal context.
   Additionally, we have shown how our novel personal context representation can be leveraged by machine learning algorithms to include prior knowledge about the structure of their output, and how it can be used to drive the interaction with the user. Future work will focus on evaluating the impact of our formalization on an existing approach for fixing mislabeled data when learning the users' contexts [Zeni et al., 2019].
   The next step is to propose and implement a modern design of the services related to iLog [Zeni et al., 2014] through a centralised streaming system, linking the personal context data collections with other distributed machine learning services. This implementation will allow us to test and evaluate our novel context model in near real-life scenarios.

Acknowledgements
The research conducted by Fausto and Xiaoyue has received funding from the European Union's Horizon 2020 FET Proactive project "WeNet – The Internet of us", grant agreement No 823783.
   The research conducted by Marcelo and Andrea has received funding from the "DELPhi - DiscovEring Life Patterns" project funded by the MIUR Progetti di Ricerca di Rilevante Interesse Nazionale (PRIN) 2017 – DD n. 1062 del 31.05.2019.

References
[Bontempelli et al., 2020] Andrea Bontempelli, Stefano Teso, Fausto Giunchiglia, and Andrea Passerini. Learning in the wild with incremental skeptical gaussian processes. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20, pages 2886–2892, 7 2020.





[Bontempelli et al., 2021] Andrea Bontempelli, Fausto Giunchiglia, Andrea Passerini, and Stefano Teso. Human-in-the-loop handling of knowledge drift. arXiv preprint arXiv:2103.14874, 2021.
[Chang et al., 2017] Yung-Ju Chang, Gaurav Paruthi, Hsin-Ying Wu, Hsin-Yu Lin, and Mark W. Newman. An investigation of using mobile and situated crowdsourcing to collect annotated travel activity data in real-word settings. International Journal of Human-Computer Studies, 102:81–102, 2017.
[Ferreira et al., 2015] Denzil Ferreira, Vassilis Kostakos, and Anind K. Dey. AWARE: mobile context instrumentation framework. Frontiers in ICT, 2:6, 2015.
[Freedman et al., 2013] Vicki A. Freedman, Jessica Broome, Frederick Conrad, and Jennifer C. Cornman. Interviewer and respondent interactions and quality assessments in a time diary study. Electronic International Journal of Time Use Research, 10(1):55, 2013.
[Giunchiglia and Fumagalli, 2017] Fausto Giunchiglia and Mattia Fumagalli. Teleologies: Objects, actions and functions. In International Conference on Conceptual Modeling, pages 520–534. Springer, 2017.
[Giunchiglia et al., 2017] Fausto Giunchiglia, Enrico Bignotti, and Mattia Zeni. Personal context modelling and annotation. In 2017 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), pages 117–122. IEEE, 2017.
[Giunchiglia et al., 2018] Fausto Giunchiglia, Mattia Zeni, and Enrico Bignotti. Personal context recognition via reliable human-machine collaboration. In 2018 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), pages 379–384. IEEE, 2018.
[Hoque and Stankovic, 2012] Enamul Hoque and John Stankovic. Aalo: Activity recognition in smart homes using active learning in the presence of overlapped activities. In 2012 6th International Conference on Pervasive Computing Technologies for Healthcare (PervasiveHealth) and Workshops, pages 139–146. IEEE, 2012.
[Hossain et al., 2017] HM Sajjad Hossain, Md Abdullah Al Hafiz Khan, and Nirmalya Roy. Active learning enabled activity recognition. Pervasive and Mobile Computing, 38:312–330, 2017.
[Kwapisz et al., 2011] Jennifer R. Kwapisz, Gary M. Weiss, and Samuel A. Moore. Activity recognition using cell phone accelerometers. ACM SIGKDD Explorations Newsletter, 12(2):74–82, 2011.
[Settles, 2009] Burr Settles. Active learning literature survey. Technical report, University of Wisconsin–Madison, 2009.
[Silla and Freitas, 2011] Carlos N. Silla and Alex A. Freitas. A survey of hierarchical classification across different application domains. Data Mining and Knowledge Discovery, 22(1-2):31–72, 2011.
[Tourangeau et al., 2000] Roger Tourangeau, Lance J. Rips, and Kenneth Rasinski. The Psychology of Survey Response. Cambridge University Press, 2000.
[Vaizman et al., 2018] Yonatan Vaizman, Nadir Weibel, and Gert Lanckriet. Context recognition in-the-wild: Unified model for multi-modal sensors and multi-label classification. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 1(4), January 2018.
[Vapnik, 2013] Vladimir Vapnik. The Nature of Statistical Learning Theory. Springer Science & Business Media, 2013.
[Webb, 2003] Andrew R. Webb. Statistical Pattern Recognition. John Wiley & Sons, 2003.
[West and Sinibaldi, 2013] Brady T. West and Jennifer Sinibaldi. The quality of paradata: A literature review. Improving Surveys with Paradata, pages 339–359, 2013.
[Zeni et al., 2014] Mattia Zeni, Ilya Zaihrayeu, and Fausto Giunchiglia. Multi-device activity logging. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct Publication, pages 299–302, 2014.
[Zeni et al., 2019] Mattia Zeni, Wanyi Zhang, Enrico Bignotti, Andrea Passerini, and Fausto Giunchiglia. Fixing mislabeling by human annotators leveraging conflict resolution and prior knowledge. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 3(1):1–23, 2019.


