<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Streaming and Learning the Personal Context</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Fausto Giunchiglia</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marcelo Rodas Britez</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrea Bontempelli</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Xiaoyue Li</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Trento</institution>
          ,
          <addr-line>Trento</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2021</year>
      </pub-date>
      <fpage>19</fpage>
      <lpage>27</lpage>
      <abstract>
        <p>Representing the personal context is complex and essential for improving the help machines can give humans in making sense of the world, and the help humans can give machines to improve their efficiency. We aim to design a novel model of the personal context and a learning process that integrates better with machine learning, and to implement these elements in a modern system architecture focused on real-life environments. We also show how our proposal improves on closely related work. Finally, we are moving towards a better personal context representation with an improved model, the implementation of the learning process, and the architectural design of these components.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Every person makes sense of their personal context
differently because of their different sets of personal
characteristics (intelligence) and behaviour (life choices). However, the
machine’s understanding of the personal context is radically
different from the user’s understanding. This limitation is due
to the limited definition of the personal context, and the lack
of tools to make sense of the personal context. For instance,
while the person you are with now can be linked to a name,
that person carries more meaning than just a name, e.g., friend
or colleague. Moreover, these meanings are not fixed:
they may change at any time, and every person can assign
additional meaning using different criteria. Thus, effective
context recognition requires a complex and dynamic
representation of the personal context and the collaboration of the
people to fill the cognitive gap of machines.</p>
      <p>The addition of human collaboration to the context
recognition learning of machines is an important part of
supervised machine learning [Vapnik, 2013]. These interactions
bring new challenges to the implementation of machine
learning algorithms. For instance, humans can be defined as the
experts of the supervised algorithms, interacting in an offline
fashion by annotating sensor data [Webb, 2003], or the
interaction can be directly online, as in active learning [Settles, 2009;
Hoque and Stankovic, 2012; Hossain et al., 2017]. Human
collaboration is important when we are moving into
real-life scenarios [Kwapisz et al., 2011].</p>
      <p>Other challenges of this collaborative approach are the
possibility of overwhelming the humans and the possible
differences in the assignment of meaning between people
[Chang et al., 2017], which make annotation a personal
activity. Moreover, having humans as the source of the information
introduces the possibility of human error in the collaboration
process [Tourangeau et al., 2000]. This issue is well known in
the social sciences and psychology because of response biases in
answering self-reports [West and Sinibaldi, 2013]; more
importantly, these biases are not well understood [Freedman
et al., 2013].</p>
      <p>We propose a novel context model based on the work from
[Giunchiglia et al., 2018]. That work focused on ensuring the
reliability of annotations, whereas our focus is on improving
personal context representations to get closer to working in
real-life scenarios. So, we propose a more precise
representation of the personal context that can also work with machine
learning algorithms. We formalize a context model based on
ontology and use it with the streaming data to have a
knowledge representation of context data. This formalization
allows moving towards a generic definition of context that can
work with existing multi-label machine learning approaches,
using a conversion algorithm. Eventually, the last piece of the
puzzle will be the design and development of the Streaming
System to manage technically the dynamic context data, and
it will be organized in the system architecture with modular
components for independent development and easy
deployment in current cloud environments.</p>
      <p>Some examples of our model's improvements can be seen
by comparison with our main related work [Giunchiglia et al.,
2018; Bontempelli et al., 2020; Zeni et al., 2019]. All of
these works can benefit from our novel personal context
representation and can use our conversion algorithms to explicitly
implement the transformations needed by the machine
learning algorithms.</p>
      <p>The paper is structured as follows. Section 2 introduces
context modeling. Section 3 illustrates our representation of
personal context, while we provide the formal representation
in Section 4. Then, in Section 5, we show the learning process to
transition from our formal representation to a machine learning
representation, and how our formal representation
can be converted to a Directed Acyclic Graph (DAG). Finally,
Section 6 describes works related to ours, and Section 7
concludes our paper.</p>
    </sec>
    <sec id="sec-2">
      <title>The context in time</title>
      <p>When we talk about the context, we concentrate on the
context of a person called observer. The observer’s context is
the representation of a partial view of the world. We describe
this context along three main dimensions: the viewpoint, the
part-whole relation, and the endurant-perdurant distinction.</p>
      <p>Firstly, we have the viewpoint dimension, divided into the
outside viewpoint and the observer viewpoint. The outside
viewpoint is the view of an ideal observer who can describe
everything from a certain point of view. Within this view we
distinguish the world's static and dynamic properties. The static
part contains whatever does not change in time, e.g.,
mountains, buildings, streets. The dynamic part contains
not only moving people, but also moving animals
and facilities in their manifestations, like trains. Then, the
observer viewpoint describes how the observer perceives what
is around her or him. In this view, we also have the property
of being static or dynamic, but it is relative to the movements
of the observer.</p>
      <p>Secondly, context is a part-whole relation. In our
everyday life, when we do things, we are always embedded in the
world. From an ontological view, we are part of the whole
world. Thus, we define the reference context as the element of
the outside context with a volume and extension large
enough to contain all our movements and changes. For
instance, the reference context is the city of Trento when the
user walks around, or the user's home when they are at home.
In turn, our body as a whole has parts (e.g., arms, mind, legs)
that are with us all the time, and they define the internal
context of the user. The internal context identifies the elements
of the user’s body at different levels of abstraction. We
usually distinguish between physical parts, such as arms, body,
fingers, and mental parts, such as mind, memory, emotions.
The context as a part-whole relation is divided into reference
context and internal context, and both contexts with different
dimensions play a role in our life.</p>
      <p>Thirdly, context as endurant and perdurant refers to the
relation to changes in time or space. Events and actions
are perdurants, while elements, like me, are endurants.</p>
      <sec id="sec-2-1">
        <title>The spatio-temporal context</title>
        <p>The context, as viewpoints, defines the reference point from
which we construct the context, and the context, as
part-whole, defines which parts we should consider. So, next, we
need to define how we keep track of the context from a
quantitative point of view, with a set of quantitative and qualitative
measures. Based on these measures, we introduce
the spatio-temporal context.</p>
        <p>The spatio-temporal context consists of the temporal and
spatial reference context. The former includes dates, times
and all the additional notations like weekdays and seasons.
The latter contains the world coordinate system. There
are various reasons why context should be represented as a
spatio-temporal context. First, this is a common
representation when we think of the world. Second, any device today
can easily retrieve the time and time zones, and the space
coordinates (e.g., via GPS). Third, time can be used to measure
the changes in all the elements of the world, all evolving at
different speeds; thus the temporal context allows us to use them
together based on time. Finally, a lot of data about the spatial
reference context and its sub-contexts are available from
external sources (e.g., Google Maps, OpenStreetMaps).</p>
        <p>The spatio-temporal context, also called objective context,
at time t is defined as:
ot = (Dt, Tt : Lt, me, coordt(me), Pt1 : coordt(P1), . . . , Ptk : coordt(Pk), Ot1 : coordt(O1), . . . , Otm : coordt(Om))
where Dt and Tt stand for the date and time respectively, and Lt is
the location, namely the smallest possible spatial reference
context that we can compute. Here me is the observer, the Pi
are persons and the Oj are objects. The function coordt(. . .)
computes the spatial coordinates of me, persons, and objects.</p>
        <p>The number and type of persons and objects change over
time. Hence, we will have a sequence of time-tagged states,
namely O = {o1, . . . , on}. We call the sequence O the
streaming context. In the streaming context, within the given
reference location, it is easy to compute the spatial relations (e.g.,
near, right, left, in front, far relative to the location) of the
different elements among themselves. For instance, the system
can compute that the smartphone is in the home building and
that the smartphone is near the computer.</p>
      </sec>
      <sec id="sec-2-2">
        <title>The objective and subjective context</title>
        <p>The spatio-temporal context is also called objective context,
since all the relations are computed concerning what is
objectively measured, in terms of spatial relations. However, notice
that different observers will have different views of the world.
For instance, the school building has the function of
study place from the point of view of a student and of
workplace from the point of view of a teacher. The word function here is used with the
precise meaning defined in [Giunchiglia and Fumagalli, 2017;
Giunchiglia et al., 2018]. Hence, “ the function of an object
formalizes the behavior that an object is expected to have”
[Giunchiglia and Fumagalli, 2017]. For instance, objects are
trains and buildings. The expected behavior may be the
purpose of the object (e.g., fridge) or the role of a person (e.g.,
friend).</p>
        <p>The subjective context includes both the objective context
elements and the functions of persons and objects as seen by
the user. Thus, the subjective context at time t is defined as:
st = (Dt, Tt : Lt, me, coordt(me), Ft(P1), . . . , Ft(Pk), Ft(O1), . . . , Ft(Om))
where Ft(Pk) and Ft(Om) are the functions with respect to
a person Pk and an object Om, respectively. The number and
type of persons and objects, and their functions, change over
time. The sequence of subjective contexts over time is defined
as the subjective streaming context S = {s1, . . . , sn}.</p>
      </sec>
      <sec id="sec-2-3">
        <title>The endurant and perdurant context</title>
        <p>In the endurant context [Giunchiglia et al., 2017], the parts
are endurants, essentially objects whose actions have a spatial extension
contained in the space defined by the
spatial (reference) context. Actions “represent how objects
change in time” [Giunchiglia and Fumagalli, 2017], for
instance, running in a park performed by a runner. We also need
to represent actions, in particular the actions that are executed
by the endurant me and by any other element of the
outside dynamic context. Actions can be seen in two ways: (i)
actions modeled as processes, namely as sequences of single
micro-steps, each of length close to zero; (ii) actions modeled
as events, which are often also called perdurants, namely as
complete movements which last for a certain duration.
Actions as events have key properties, similar to endurants. An
event and an action can be associated with a set of component
sub-events and sub-actions.</p>
        <p>Considering the mentioned concepts, the fundamentally
different role of space and time should become clear.
Whereas the parts of the space context are only used to limit
the space where things happen, the parts of time have the
main goal to detail how actions get executed. Things get
complicated by adding objects, people, and functions as shown in
the data representations shown in Table 1.</p>
        <p>It is worth noting that each function of a person is
associated with a limited set of actions, and the type of action that a
person can perform can be considerably limited by knowing
his or her function.</p>
        <p>The actions apply to me and persons, whereas functions
apply to me, other objects and persons. Notice also, that the
stored location L is limited by the most specific location that
we can compute. This is because the bigger locations are
assumed to be static and stored in the system. For events,
instead, we store the smallest possible most general event as
well as those component actions which are done during a
certain period. Thus, for instance, the action/event meeting can
have sub-actions such as talking, walking, listening, typing.
Table 2 reports the streaming context matrix of Example 1
and shows how the context changes over time.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>The current context</title>
      <p>The streaming context describes the contexts of an observer
over time. To represent each context occurrence, we
define a set of notions that build a picture of the current
context. We divide the current context into four types
according to how things compose in space and time.</p>
      <p>From top to bottom we have the following cases:
• 1L1E (One Location One Event), such as a lecture held
in a classroom;
• 1LME (One Location Multiple Events), such as a
sequence of meetings held in an office, or eating breakfast,
lunch and dinner at home;
• 1EMC (One Event Multiple Locations), such as a travel
that passes through many different places;
• MEMC (Multiple Events Multiple Locations), the
most complex case, which mixes the former three
cases.</p>
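      <p>Assuming the numbers of distinct locations and events in the current context are known, the four cases can be told apart mechanically; a minimal illustrative sketch:
```python
def context_type(n_locations, n_events):
    # Classify the current context by how events and locations compose,
    # using the four cases defined above.
    if n_locations == 1 and n_events == 1:
        return "1L1E"
    if n_locations == 1:
        return "1LME"
    if n_events == 1:
        return "1EMC"
    return "MEMC"

assert context_type(1, 1) == "1L1E"   # a lecture in a classroom
assert context_type(1, 3) == "1LME"   # breakfast, lunch and dinner at home
assert context_type(4, 1) == "1EMC"   # one travel through many places
assert context_type(3, 2) == "MEMC"
```
</p>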
      <p>An 1EMC example is shown in Figure 1, which describes the
following travel scenario around observer me.</p>
      <p>Example 1 In the travel scenario, me is Xiaoyue; she has
a travel named Travel 1 in Trentino, Italy, from 12:00 to
13:00 on 2nd June 2021. From 12:00 to 12:30 on this day,
she takes train 1 from Rovereto to Trento, where she sits on seat 1
by herself. From 12:30 to 12:55 on the same day, she walks
on Roads 2 from Trento Train Station to Xiaoyue's Home,
together with her friend named Haonan. In addition, Xiaoyue
talks to Haonan, and Haonan listens to Xiaoyue while they
are walking. This scenario involves one event, travel, and
multiple locations.</p>
      <p>In general, in Figure 1, all elements are divided into
Perdurants and Endurants. Perdurants are the Event and the
Action, and Endurants include Person, Object, and Location. An
Event happens in a Location, a Person and an Object appear in
an Event, and an Action is in a Person. Most elements'
inclusion relationships IN can be represented by the positions of
those elements' internal boxes. For the top-level Location
and Event, we can add an extra attribute box,
respectively InLocation(Location) and InEvent(Event), to
represent what they belong to.</p>
      <p>In the rest of this section, we list the attributes of Location,
Event, Object, Person and Action; each kind of attribute is
represented as a box in the figure. The Location has the
following attributes:
• Spatial properties: Coordinates (xi, yi, zi), Volume
(Δxi, Δyi, Δzi), and InLocation(Li), which shows the super
Location of the top-level Location;
• Visual properties, namely some properties of the Location
that can be observed visually;
• The Location's functions: FunctionOf(U) with U ∈
{P1, . . . , Pk, O1, . . . , Om}, which shows the location's
functions for persons and objects;
• An extra box, including the rest of the context that
happens in the Location.</p>
      <p>The Event is ordered by time and is represented by a box with
round corners. Events include actual events and virtual
events. An Event can have Sub-Events; the Event and
Sub-Event have the following attributes:
• Super Event of the top-level event: InEvent(Ei),
which shows the super event Ei of the current Event;
• Temporal properties: Begin Time - End Time (Datei,
Timei - Datej, Timej), which shows the date and time
of the start and end of the event;
• An extra box, including the rest of the context that
happens in the Event.</p>
      <p>The Object appears in an Event and has the following attributes:
• Spatial properties: Coordinates (xi, yi, zi), In/Far/. . .
(Pi/Om/. . .);
• Visual properties, namely some properties of the Object that
can be observed visually;
• The Object's functions: FunctionOf(U) with U ∈
{P1, . . . , Pk, O1, . . . , Om}, which shows the Object's
functions for persons and other objects.</p>
      <p>The Person appears in Event and has the following attributes:</p>
      <p>D1, T1 : super(L1), super(E1), L1, E1, me, coord1(me), A1me, F1(P1) : A1P1, . . . , F1(Pk) : A1Pk, F1(O1), . . . , F1(Om) (rows of the streaming context matrix in Table 2).</p>
      <p>The entity types of the ETG are
decorated with data properties. The object properties are
presented in the graph representing the relations among the entity
types.</p>
      <p>In Figure 2, white box nodes are entity types that include
data properties with data types, while the green box nodes
enumerate the values of data properties. The object properties are
connecting all entity types, represented by diamond nodes with
arrows. The inheritance relation in the ETG is represented by
an arrow from the super-class to the sub-class, and the
subclass inherits all the data properties and object properties of
its super-class. The EG populates the entity types and
properties defined in the ETG with specific values. It is a data graph
where nodes are entities that are connected by object property
values representing the relations. Each entity further includes
data property values. The streaming context can be viewed as
a stream of EGs, in which each EG describes the context at a
different time.</p>
      <p>We design an EG example in Figure 3 according to the
scenario in Example 1. The graph represents the context around
"Me"; it contains entities shown by nodes, e.g.,
"Smartphone", "Talking", "Walk". We can also see the object
property values.</p>
    </sec>
    <sec id="sec-4">
      <title>Learning Context</title>
      <p>AI applications like smart personal assistants provide a
service to the users based on their context. The context
information is usually not available to the machine, and hence it
has to infer the location or the activity of the user from sensor
data (e.g., GPS, accelerometer, nearby Bluetooth devices). In
our scenario, Xiaoyue is carrying a smartphone that generates
a stream of sensor readings, and she annotates the data by
answering questions about her context, e.g., “Where are you?”
and “What are you doing?”. The sensor data are aggregated
in time windows generating a stream of instances (e.g., the
average number of nearby Bluetooth devices in the last 30
minutes). On each incoming instance, the machine decides
whether to query the user to acquire the labels. The machine
learning technique defines the query strategy, and in the
simplest case, the labels are acquired on every instance.</p>
      <p>The user's context recognition is a supervised learning
problem in which an instance x is associated with multiple
concepts y (aka classes in machine learning). The concepts are
organized in a ground-truth hierarchy H = (C, I), which is
a directed acyclic graph (DAG) where the nodes C = {1, . . . , c}
are the concepts and the edges I ⊂ C × C are is-a relations,
i.e., I = {(ci, cj) | ci, cj ∈ C and ci is a child of cj} [Silla
and Freitas, 2011]. The labels of the instances are
indicator vectors y ∈ {0, 1}c, where the i-th element is 1 if x
belongs to the i-th concept in H and 0 otherwise. The machine is
trained on a stream of examples zt = (xt, yt) drawn from a
ground-truth distribution P (X, Y) that is always consistent
with the ground-truth hierarchy H, i.e., if there is an edge from
class ci to class cj, then yi = 1 implies yj = 1 and,
conversely, yj = 0 implies yi = 0. The goal of this hierarchical
classification task is to learn a classifier that recognizes
the context well on future sensor readings.</p>
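      <p>The consistency constraint can be checked directly on an indicator labelling; a small sketch with a toy hierarchy (the concept names are illustrative):
```python
def consistent(y, edges):
    # H-consistency: for every is-a edge (child, parent),
    # y[child] = 1 implies y[parent] = 1 (equivalently,
    # y[parent] = 0 implies y[child] = 0).
    return all(y[parent] == 1 for child, parent in edges if y[child] == 1)

# Toy hierarchy: 0 = Location, 1 = City (is-a Location), 2 = Trento (is-a City)
edges = [(1, 0), (2, 1)]

assert consistent({0: 1, 1: 1, 2: 1}, edges)
assert not consistent({0: 0, 1: 1, 2: 0}, edges)  # City labelled without Location
```
</p>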
      <p>The ETG and EG introduced in Section 4 can be used as
prior knowledge about the structure of the hierarchy. They
encode the available information about the user and the world.
Algorithm 1 shows the conversion from ETG and EG to a
DAG H. The first step is to convert each entity type (etype)
in the ETG into a node in H (lines 3 - 5). Second, each entity in
EG also becomes a node that is added as a child of the node
referring to the etype of the entity (lines 6 - 10). The
hierarchy encodes the information about the current user, so the
Me etype and the corresponding entity (e.g., Xiaoyue entity
in Figure 3) are not considered.</p>
      <p>The properties of the ETG are grouped into properties that
are context-dependent and properties that are static. The values
of the former change every time the users change their
context; in Figure 2 they are Q = {near, use, interact, in, do,
happenIn, during, participate}. For instance, if Xiaoyue
travels from the city of Trento to Rovereto, the in property
will change accordingly. In contrast, the fact that Trento is
partOf Italy can be assumed to remain valid even if the user's
context changes. This distinction is necessary since the
values of context-dependent properties are derived from the
output of the context recognition task (e.g., the machine
recognizes that the user is in the city of Trento and thus updates
the in property accordingly). The object properties that are
not contextual are converted as follows. Given an object
property p ∈ {isA, partOf, has} that links the etype A to B,
the node referring to etype A becomes a child of the node of
etype B. For the other object properties, a new node referring
to the property is added as a child of the codomain etype node
(lines 13 - 19). For every object property value i of the
property p linking the entities a and b, a new node ci is added as a
child of cp (i.e., the node referring to the property p), and as a
parent of cb (i.e., the node pointing to the entity b) (lines 20 -
25). Finally, all nodes that do not have a parent are connected
to the root node and the transitive reduction is applied (lines
27 - 30). Figure 4 shows an extract of the DAG resulting from
applying Algorithm 1 on the EG and ETG presented in Section 4.</p>
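      <p>A condensed sketch of this conversion (omitting the transitive reduction and using simplified graph encodings; the names and data structures are illustrative assumptions, not the paper's implementation):
```python
def to_dag(etypes, entities, prop_links, value_links, context_props):
    # etypes: entity-type names (Me excluded); entities: entity mapped to its etype;
    # prop_links: (p, A, B) object properties between etypes in the ETG;
    # value_links: (p, a, b) object-property values between entities in the EG;
    # context_props: the context-dependent properties Q, which are skipped.
    # Edges are (child, parent) pairs of the hierarchy H = (C, I).
    C, I = set(etypes), set()
    for a, A in entities.items():          # entities become children of their etype
        C.add(a)
        I.add((a, A))
    for p, A, B in prop_links:
        if p in context_props:
            continue
        if p in {"isA", "partOf", "has"}:  # structural properties collapse to an edge
            I.add((A, B))
        else:                              # other properties become a node under B
            C.add(p)
            I.add((p, B))
            for q, a, b in value_links:    # each value p(a, b) becomes a node ci
                if q == p:
                    ci = (p, a, b)
                    C.add(ci)
                    I.update({(ci, p), (b, ci)})
    C.add("root")                          # attach parentless nodes to the root
    parented = {child for child, _ in I}
    I.update((c, "root") for c in C - parented if c != "root")
    return C, I

etypes = {"Person", "Location"}
entities = {"Xiaoyue": "Person", "Trento": "Location"}
C, I = to_dag(etypes, entities, [], [], set())
assert ("Xiaoyue", "Person") in I and ("Person", "root") in I
```
</p>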
      <p>Every node in H has a unique identifier that is used to
reference back to the ETG and EG. The node name can be
translated into a human-readable text that is used to interact with
the user. This aspect is left as future work. The concept
hierarchy available at the beginning can evolve over time and
has to be continually updated. This aspect has been defined
as knowledge drift and is addressed in [Bontempelli et al.,
2021].</p>
    </sec>
    <sec id="sec-5">
      <title>Related Work</title>
      <p>
        Considering our novel modelling of the personal
context, the context learning, and the architecture presented in
this paper, we can take as use cases the papers that
individually provide examples for these improvements.
Current works on context recognition focus on
learning the relationship between the input data (sensor readings)
and the target concepts (context); the structure of the
context is implicitly learned by the machine
learning algorithm during the training phase. Compared with this
related work, the new parts described in Section 3 can be
outlined as follows: 1) the connection with the
learned context model; 2) the extension of new dimensions
and classification of context, i.e., Internal State, Functions,
Actions. Algorithm 1 bridges the gap between the two
modellings, allowing context recognition to leverage both
machine learning solutions and knowledge graph representations
(ETG and EG). For instance, the ETG and EG can
generate the questions needed by the machine to interact with the
users. The personal context recognition model
shown in [Giunchiglia et al., 2018] proved to be a good
approach to increase the accuracy of context recognition
algorithms; our work described in Section 3 enhances the
representation of the personal context with the aim of
performing better in real-life scenarios. In practice, existing
approaches for context recognition using batch or streaming
sensor data [Vaizman et al., 2018; Bontempelli et al., 2020;
Zeni et al., 2019] do not leverage an explicit context
model: the context representation is implicit in the
labels used to train the machine learning model. The context
formalization introduced in this work can be used to
structure the output of these machine learning models according
to our representation. Moreover, it can help machine
learning approaches interact with users. For instance, the
machine can ask whether Haonan is a friend of Xiaoyue, since they
are walking together. Approaches that use active learning
strategies
        <xref ref-type="bibr" rid="ref10 ref12 ref3 ref7 ref9">(e.g., [Settles, 2009; Hoque and Stankovic, 2012;
Hossain et al., 2017])</xref>
        can benefit from our representation.
      </p>
      <p>Also, existing frameworks for creating context-aware
mobile applications, such as AWARE [Ferreira et al., 2015],
do not consider the modelling of the context.</p>
    </sec>
    <sec id="sec-6">
      <title>Conclusion and Future Work</title>
      <p>In this paper, we moved forward with a better representation
of the personal context in real-life environments. We proposed an
improved representation of the personal context, adding the
internal state, functions, and actions. The learning aspect of
our work is the formal definition of an algorithm to transform
the streaming input data into the representation used by ML
algorithms. We will put all these components together in the
system architecture.</p>
      <p>In comparison with the work on personal context
recognition for human-machine collaboration [Giunchiglia et al.,
2018], we have shown an enhancement related to the model
representation of the personal context.</p>
      <p>Additionally, we have shown how our novel personal
context representation can also be leveraged by machine learning
algorithms to include prior knowledge about the structure of
their output and can be used to drive the interaction with the
user. Future work will focus on evaluating the impact of our
formalization on an existing approach for fixing mislabeled
data when learning the users’ contexts [Zeni et al., 2019].</p>
      <p>The next step is to propose and implement a modern
design of the services related to iLog [Zeni et al., 2014] by a
centralised streaming system and linking the personal
context data collections with other distributed services of
machine learning. This implementation will allow us to test and
evaluate our novel context model in near real-life scenarios.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgements</title>
      <p>The research conducted by Fausto and Xiaoyue has
received funding from the European Union’s Horizon 2020
FET Proactive project “WeNet – The Internet of us”, grant
agreement No 823783.</p>
      <p>The research conducted by Marcelo and Andrea has
received funding from the “DELPhi - DiscovEring Life
Patterns” project funded by the MIUR Progetti di Ricerca di
Rilevante Interesse Nazionale (PRIN) 2017 – DD n. 1062
del 31.05.2019.</p>
      <p>Algorithm 1 Convert ETG and EG in a DAG.
Inputs: ETG, EG, and the set of context-dependent object properties Q
Outputs: H = (C, I)
1: C ← ∅
2: I ← ∅
3: for every etype A ≠ Me in ETG do
4:   let cA be the node of etype A
5:   C = C ∪ {cA}
6: for every entity a such that ¬Me(a) in EG do
7:   let ca be the node of the entity a
8:   let cA be the node of the etype of a
9:   C = C ∪ {ca}
10:  I = I ∪ {(ca, cA)}
11: for every object property p in ETG such that p ∉ Q do
12:   let cp be the node of p
13:   for every p(A, B) in ETG do
14:     let cA and cB be the nodes of etypes A and B respectively
15:     if p ∈ {isA, partOf, has} then
16:       I = I ∪ {(cA, cB)}
17:     else
18:       C = C ∪ {cp}
19:       I = I ∪ {(cp, cB)}
20:       for every p(a, b) in EG do
21:         let ci be the node encoding p(a, b)
22:         let cb be the node of entity b
23:         C = C ∪ {ci}
24:         I = I ∪ {(ci, cp)}
25:         I = I ∪ {(cb, ci)}
26: let c0 be the root node
27: C = C ∪ {c0}
28: for every node c ∈ C such that parent(c) = ∅ do
29:   I = I ∪ {(c, c0)} (add an edge from c to c0)
30: apply transitive reduction on H = (C, I)</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [Bontempelli et al.,
          <year>2020</year>
          ]
          <string-name>
            <given-names>Andrea</given-names>
            <surname>Bontempelli</surname>
          </string-name>
          , Stefano Teso, Fausto Giunchiglia, and
          <string-name>
            <given-names>Andrea</given-names>
            <surname>Passerini</surname>
          </string-name>
          .
          <article-title>Learning in the wild with incremental skeptical gaussian processes</article-title>
          .
          <source>In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20</source>
          , pages
          <fpage>2886</fpage>
          -
          <lpage>2892</lpage>
          , 7
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [Bontempelli et al.,
          <year>2021</year>
          ]
          <string-name>
            <given-names>Andrea</given-names>
            <surname>Bontempelli</surname>
          </string-name>
          , Fausto Giunchiglia, Andrea Passerini, and
          <string-name>
            <given-names>Stefano</given-names>
            <surname>Teso</surname>
          </string-name>
          .
          <article-title>Human-in-the-loop handling of knowledge drift</article-title>
          .
          <source>arXiv preprint arXiv:2103.14874</source>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [Chang et al.,
          <year>2017</year>
          ]
          <string-name>
            <given-names>Yung-Ju</given-names>
            <surname>Chang</surname>
          </string-name>
          , Gaurav Paruthi, Hsin-Ying Wu,
          <string-name>
            <given-names>Hsin-Yu</given-names>
            <surname>Lin</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Mark W.</given-names>
            <surname>Newman</surname>
          </string-name>
          .
          <article-title>An investigation of using mobile and situated crowdsourcing to collect annotated travel activity data in real-world settings</article-title>
          .
          <source>International Journal of Human-Computer Studies</source>
          ,
          <volume>102</volume>
          :
          <fpage>81</fpage>
          -
          <lpage>102</lpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [Ferreira et al.,
          <year>2015</year>
          ]
          <string-name>
            <given-names>Denzil</given-names>
            <surname>Ferreira</surname>
          </string-name>
          , Vassilis Kostakos, and
          <string-name>
            <given-names>Anind K.</given-names>
            <surname>Dey</surname>
          </string-name>
          .
          <article-title>Aware: mobile context instrumentation framework</article-title>
          .
          <source>Frontiers in ICT, 2:6</source>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [Freedman et al.,
          <year>2013</year>
          ]
          <string-name>
            <given-names>Vicki A.</given-names>
            <surname>Freedman</surname>
          </string-name>
          , Jessica Broome, Frederick Conrad, and
          <string-name>
            <surname>Jennifer</surname>
            <given-names>C.</given-names>
          </string-name>
          <string-name>
            <surname>Cornman</surname>
          </string-name>
          .
          <article-title>Interviewer and respondent interactions and quality assessments in a time diary study</article-title>
          .
          <source>Electronic international journal of time use research</source>
          ,
          <volume>10</volume>
          (
          <issue>1</issue>
          ):
          <fpage>55</fpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [Giunchiglia and Fumagalli,
          <year>2017</year>
          ]
          <string-name>
            <given-names>Fausto</given-names>
            <surname>Giunchiglia</surname>
          </string-name>
          and
          <string-name>
            <given-names>Mattia</given-names>
            <surname>Fumagalli</surname>
          </string-name>
          .
          <article-title>Teleologies: Objects, actions and functions</article-title>
          .
          <source>In International conference on conceptual modeling</source>
          , pages
          <fpage>520</fpage>
          -
          <lpage>534</lpage>
          . Springer,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [Giunchiglia et al.,
          <year>2017</year>
          ]
          <string-name>
            <given-names>Fausto</given-names>
            <surname>Giunchiglia</surname>
          </string-name>
          , Enrico Bignotti, and
          <string-name>
            <given-names>Mattia</given-names>
            <surname>Zeni</surname>
          </string-name>
          .
          <article-title>Personal context modelling and annotation</article-title>
          .
          <source>In 2017 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)</source>
          , pages
          <fpage>117</fpage>
          -
          <lpage>122</lpage>
          . IEEE,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [Giunchiglia et al.,
          <year>2018</year>
          ]
          <string-name>
            <given-names>Fausto</given-names>
            <surname>Giunchiglia</surname>
          </string-name>
          , Mattia Zeni, and
          <string-name>
            <given-names>Enrico</given-names>
            <surname>Bignotti</surname>
          </string-name>
          .
          <article-title>Personal context recognition via reliable human-machine collaboration</article-title>
          .
          <source>In 2018 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)</source>
          , pages
          <fpage>379</fpage>
          -
          <lpage>384</lpage>
          . IEEE,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [Hoque and Stankovic,
          <year>2012</year>
          ]
          <string-name>
            <given-names>Enamul</given-names>
            <surname>Hoque</surname>
          </string-name>
          and
          <string-name>
            <given-names>John</given-names>
            <surname>Stankovic</surname>
          </string-name>
          .
          <article-title>Aalo: Activity recognition in smart homes using active learning in the presence of overlapped activities</article-title>
          .
          <source>In 2012 6th International Conference on Pervasive Computing Technologies for Healthcare (PervasiveHealth) and Workshops</source>
          , pages
          <fpage>139</fpage>
          -
          <lpage>146</lpage>
          . IEEE,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [Hossain et al.,
          <year>2017</year>
          ]
          <string-name>
            <given-names>H. M. Sajjad</given-names>
            <surname>Hossain</surname>
          </string-name>
          , Md Abdullah Al Hafiz Khan, and
          <string-name>
            <given-names>Nirmalya</given-names>
            <surname>Roy</surname>
          </string-name>
          .
          <article-title>Active learning enabled activity recognition</article-title>
          .
          <source>Pervasive and Mobile Computing</source>
          ,
          <volume>38</volume>
          :
          <fpage>312</fpage>
          -
          <lpage>330</lpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [Kwapisz et al.,
          <year>2011</year>
          ]
          <string-name>
            <given-names>Jennifer R.</given-names>
            <surname>Kwapisz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Gary M.</given-names>
            <surname>Weiss</surname>
          </string-name>
          , and Samuel A. Moore.
          <article-title>Activity recognition using cell phone accelerometers</article-title>
          .
          <source>ACM SigKDD Explorations Newsletter</source>
          ,
          <volume>12</volume>
          (
          <issue>2</issue>
          ):
          <fpage>74</fpage>
          -
          <lpage>82</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [Settles,
          <year>2009</year>
          ]
          <string-name>
            <given-names>Burr</given-names>
            <surname>Settles</surname>
          </string-name>
          .
          <article-title>Active learning literature survey</article-title>
          .
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [Silla and Freitas,
          <year>2011</year>
          ]
          <string-name>
            <given-names>Carlos N.</given-names>
            <surname>Silla</surname>
          </string-name>
          and
          <string-name>
            <given-names>Alex A.</given-names>
            <surname>Freitas</surname>
          </string-name>
          .
          <article-title>A survey of hierarchical classification across different application domains</article-title>
          .
          <source>Data Mining and Knowledge Discovery</source>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [Tourangeau et al.,
          <year>2000</year>
          ]
          <string-name>
            <given-names>Roger</given-names>
            <surname>Tourangeau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Lance J.</given-names>
            <surname>Rips</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Kenneth</given-names>
            <surname>Rasinski</surname>
          </string-name>
          .
          <article-title>The psychology of survey response</article-title>
          .
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [Vaizman et al.,
          <year>2018</year>
          ]
          <string-name>
            <given-names>Yonatan</given-names>
            <surname>Vaizman</surname>
          </string-name>
          , Nadir Weibel, and
          <string-name>
            <given-names>Gert</given-names>
            <surname>Lanckriet</surname>
          </string-name>
          .
          <article-title>Context recognition in-the-wild: Unified model for multi-modal sensors and multi-label classification</article-title>
          .
          <volume>1</volume>
          (
          <issue>4</issue>
          ),
          <year>January 2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [Vapnik,
          <year>2013</year>
          ]
          <string-name>
            <given-names>Vladimir</given-names>
            <surname>Vapnik</surname>
          </string-name>
          .
          <article-title>The nature of statistical learning theory</article-title>
          . Springer science &amp; business media,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [Webb,
          <year>2003</year>
          ]
          <string-name>
            <given-names>Andrew R.</given-names>
            <surname>Webb</surname>
          </string-name>
          .
          <article-title>Statistical pattern recognition</article-title>
          . John Wiley &amp; Sons,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [West and Sinibaldi,
          <year>2013</year>
          ]
          <string-name>
            <given-names>Brady T.</given-names>
            <surname>West</surname>
          </string-name>
          and
          <string-name>
            <given-names>Jennifer</given-names>
            <surname>Sinibaldi</surname>
          </string-name>
          .
          <article-title>The quality of paradata: A literature review</article-title>
          .
          <source>Improving surveys with paradata</source>
          , pages
          <fpage>339</fpage>
          -
          <lpage>359</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [Zeni et al.,
          <year>2014</year>
          ]
          <string-name>
            <given-names>Mattia</given-names>
            <surname>Zeni</surname>
          </string-name>
          , Ilya Zaihrayeu, and
          <string-name>
            <given-names>Fausto</given-names>
            <surname>Giunchiglia</surname>
          </string-name>
          .
          <article-title>Multi-device activity logging</article-title>
          .
          <source>In Proceedings of the 2014 ACM international joint conference on pervasive and ubiquitous Computing: Adjunct publication</source>
          , pages
          <fpage>299</fpage>
          -
          <lpage>302</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [Zeni et al.,
          <year>2019</year>
          ]
          <string-name>
            <given-names>Mattia</given-names>
            <surname>Zeni</surname>
          </string-name>
          , Wanyi Zhang, Enrico Bignotti, Andrea Passerini, and
          <string-name>
            <given-names>Fausto</given-names>
            <surname>Giunchiglia</surname>
          </string-name>
          .
          <article-title>Fixing mislabeling by human annotators leveraging conflict resolution and prior knowledge</article-title>
          .
          <source>Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies</source>
          ,
          <volume>3</volume>
          (
          <issue>1</issue>
          ):
          <fpage>1</fpage>
          -
          <lpage>23</lpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>