



                 Modelling and Reasoning for Indirect Sensing over Discrete-time
                                  via Markov Logic Networks

                                              Athanasios Tsitsipas∗ , Lutz Schubert
                                                     Ulm University, Germany
                                                {firstname, surname}@uni-ulm.de


Abstract

With the ever-increasing availability of sensor devices, there is constant unseen monitoring of our environment. A physical activity has an impact on more sensor modalities than we might imagine. It is so vivid that distinctive patterns in the data look almost interpretable. Such knowledge, which is innate to humans, ought to be encoded and reasoned upon declaratively. We demonstrate the power of Markov Logic Networks for encoding uncertain knowledge to discover interesting situations from the observed evidence. We formally relate distinguishable patterns from the sensor data with knowledge about the environment and generate a rule basis for verifying and explaining occurred phenomena. We demonstrate an implementation on a real dataset and present our results.

1 Introduction

Physical environments are always changing, and uncertainty and incompleteness are innate in them. Context-aware pervasive systems have been the centre of research regarding approaches to modelling uncertain contextual information and reasoning upon it [Bettini et al., 2010]; moving from low-level contextual data (i.e., sensors) to higher-level contextual information, most commonly referred to as a "situation" [Dey, 2001; Gellersen et al., 2002]. Setting up systems to observe an environment includes deploying probes (e.g., sensors) tailored to specific situations. Today, such efforts fall under the terms "internet of things" and "smart homes". Many situations are worth identifying using sensors in a single room, ranging from "is someone present" to "water boiling". Considering an entire home, we may end up with hundreds of such situations. An office building could have thousands, requiring ever more dedicated sensors to cover all the above situations and driving higher economic and maintenance costs.

A compelling method in such deployments is indirect sensing, which is employed when the property of interest (e.g., a situation) is not attainable by direct sensing, either due to sensor malfunctions, connectivity issues or energy loss. In the literature, indirect sensing is interwoven with remote sensing or sensing from afar [Zhang et al., 2019]. In our study, we translate indirect sensing to a cooperative model of sensor fusion [Durrant-Whyte, 1990], where surrounding heterogeneous sensors capture different aspects of the same phenomenon (i.e., an activity1). An activity is often described by a specific temporal organisation of low-level sensor data, or as we call it, a "dimensional footprint" (DF). The low-level sensor data in a DF are the primary source of information used as evidence to understand and recognise the observed situation. Such techniques, following a bottom-up approach to recognising situations, are well-established in the area of context-aware pervasive computing [Schmidt, 2003]. Dealing with a concept such as the DF requires handling both uncertainty and the relational organisation. Existing approaches for an indirect sensing task typically fail to capture both aspects at the same time.

For the mechanics of an indirect sensing task, recent research targets data analysis techniques employing machine learning to train complex models labelling the property they want to infer from the data. For example, in [Laput et al., 2017] the authors train Support Vector Machine (SVM) models, in an automatic learning mode à la "programming by demonstration" [Dey et al., 2004; Hartmann et al., 2007], with raw sensor data while performing the activity of interest. The major limitation of such systems is that they use representations that are not relatable to humans. In addition, they do not support explicit encoding of knowledge about the environment. Background knowledge (e.g., contextual, domain or commonsense) may describe situations absent in training data or challenging to grasp and annotate. Furthermore, apart from the definition of knowledge, the occurred observables (i.e., events) in sensor data may be uncertain, as much as the manifestations of knowledge (i.e., rules) are in an analytical reasoning process.

We address these limitations by choosing a probabilistic logic-based approach, using an amalgam of Event Calculus (EC) [Kowalski and Sergot, 1989] and Markov Logic Networks (MLN) [Richardson and Domingos, 2006], to model uncertain knowledge about the relational manifestations of different and heterogeneous sensors and to reason for inferring interesting situations.

∗ Contact Author
1 A situation, in that case, is the state of activity.




Copyright © 2021 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).


EC drives the modelling task by a set of meta-rules that encode the interaction between the sensor events and their effects over discrete time. One of the exciting properties of EC is that a situation of interest persists over time unless it gets interrupted by the occurrence of other events. MLN, on the other hand, combines first-order logic and concepts from probability theory to tackle uncertainty; it has received considerable attention in recent years, with applications in video activity analysis [Cheng et al., 2014], maritime surveillance [Snidaro et al., 2015], music analysis [Papadopoulos and Tzanetakis, 2016] and others. Our goal is to design a reasoning mode for indirect sensing that handles uncertainty and uses interpretable representations from data. To this end, we make the following contributions: (1) we model existing sensor data into interpretable symbolic representations as elements in a narrative on a running scenario (cf. Section 2.2), (2) we design a knowledge base (KB) within MLN for supporting indirect sensing while emulating commonsense reasoning, (3) we evaluate the realisation of the approach using an open-source implementation of MLN, and (4) we demonstrate how the probability of an occurred situation changes over time while using different combinations of sensors.

Section 2 provides the terminology used in this document, including the running example and background information on Event Calculus and Markov Logic Networks. This leads to Section 3, where we introduce the concept of the DF and how to model it. In Section 4, we elaborate on the MLN definitions, while in Section 5, we present the results and experiments. Section 6 provides a brief related work around the topic of event modelling and recognition. In Section 7, we summarise the main contributions and discuss details, including future work.

2 Preliminaries

2.1 Terminology

The Oxford English Dictionary gives a general definition for an event as "a thing that happens or takes place, especially one of importance". In our context, a "thing" is represented by a (sensor) data pattern. "Importance" matches the (subjective) interest in finding an explanation for this pattern. Many researchers use the term event in their own way, depending on the context and the investigated environment, even though the definition of the word event remains the same.

We assume that an (interesting) event has occurred upon identifying a visible change in the sensor data. The identification involves a pre-processing step using some pattern extraction technique [Patel et al., 2002; Lin et al., 2003; Yeh et al., 2016]. Therefore, the timestamps of the respective pattern represent the event's temporality. This work distinguishes a single time point from a series of time points (exhibiting the concept of duration) bounded by a predefined window value. For example, an increase in the temperature readings is an interesting event and reflects the development of sensing data (temperature) over time. Therefore, a representation should semantically annotate an event's time point.

Interpreting symbols as representations of objects is a proxy to describe something instead of the actual thing. For example, if something is an ambient "high" temperature2, that temperature does not reside in our heads when we think of it. The "it" of the temperature is a representation of the actual natural environmental property. This representation of something is an entity that transmits to us the idea of the real something. Perhaps we think of our discomfort or imagine ourselves reacting to this phenomenon (e.g., sweating) to represent the high ambient temperature. Alternatively, we use the colour red accompanied by the temperature degree.

An event representation in our work is a lexical word embedded in a "sentence" among other additional contextual words, which we understand. Therefore, the development of sensing data over time (i.e., a time series) is wrapped in a word that best describes its nature (e.g., a data pattern). The event representation has two lexical parts: one part is the trend of the pattern, and the other is the type of the pattern. The trend of a pattern is represented by the words upward or downward. The patterns we may derive in the sensor readings resemble shapes, which we call shapeoids. For the sake of presentation, the lexical shapeoids are the following:

ANGLE A gradual, continuous line with an increasing (upward) or a decreasing (downward) trend in the sensor readings.

HOP A stage shift in the sensor readings, where the data have an apparent difference between two consecutive recognition time points (e.g., binary sensor values).

HORN A transient increase or decrease in the sensor readings curve.

FLAT A horizontal line in the data, with either unchanged values over the pattern duration or minimal changes.

We extract the shapeoids using the Symbolic Aggregate Approximation (SAX) technique. Many time series representation alternatives exist, but most of them result in a down-sampled real-valued representation. In contrast, SAX boils down to a symbolic discretised form of the time series, which is abstract enough to extract the shapeoids in general. The paper's focus is not to describe how to obtain the proposed patterns from the sensor data but to put forward a concept of using temporal organisations of such representations to reason in a robust and declarative way.

2.2 Running Example

In Figure 1, we illustrate the activity of opening and closing a window and its impact (i.e., its DF) on five surrounding sensor types that happen to be in the same room. Later in the paper (cf. Section 3.3), we showcase the extracted shapeoids from the raw data, which put forward a sufficient abstraction, serving as an input for a reasoning task.

The data are from a real-world public dataset [Birnbach et al., 2019], where the authors collected sensor data while performing different activities. The data timeline spans over two minutes, sufficient for demonstrating the essence of our approach.

2 We use a threshold-based term to describe the comfort level for a human to endure.
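To make the notion of a shapeoid more concrete, the following is a minimal Python sketch of how a window of readings could be mapped to a trend and one of the shapeoid labels of Section 2.1. It is our own simplification with arbitrary thresholds, not the SAX-based extraction actually used in this work; the name label_shapeoid and the heuristics are hypothetical.

```python
def label_shapeoid(window, flat_eps=0.05, horn_ratio=1.5):
    """Assign a (trend, shapeoid) label to one window of sensor readings.

    Purely illustrative heuristics:
      FLAT  - negligible variation across the window
      HORN  - a transient peak/dip with little net change
      ANGLE - a gradual drift upward or downward
    (HOP, used for binary sensors, is omitted here.)
    """
    total_change = window[-1] - window[0]
    spread = max(window) - min(window)
    trend = "upward" if total_change >= 0 else "downward"
    if spread < flat_eps * max(abs(v) for v in window):
        return trend, "FLAT"
    if abs(total_change) < spread / horn_ratio:
        return trend, "HORN"   # large excursion but little net change
    return trend, "ANGLE"      # the net drift dominates

# A temperature-like series that drops steadily -> ('downward', 'ANGLE')
print(label_shapeoid([34.1, 34.0, 33.8, 33.5, 33.1, 32.8, 32.5, 32.3, 32.1, 32.0]))
```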






[Figure 1 shows the raw traces of the five sensors (temperature in °C, air pressure, humidity in % RH, air quality levels in ppm, and sound in dB), with the times the window is opened and closed marked.]

Figure 1: An example of how the activity of opening/closing a window affects the listed surrounding sensors.

Predicate              Meaning
Happens(e, t)          Event e happens at time t
HoldsAt(f, t)          Fluent f holds at time t
InitiatedAt(f, t)      Fluent f is initiated at time t
TerminatedAt(f, t)     Fluent f is terminated at time t

Axioms
HoldsAt(f, t + 1) ⇐ InitiatedAt(f, t)
HoldsAt(f, t + 1) ⇐ HoldsAt(f, t) ∧ ¬TerminatedAt(f, t)
¬HoldsAt(f, t + 1) ⇐ TerminatedAt(f, t)
¬HoldsAt(f, t + 1) ⇐ ¬HoldsAt(f, t) ∧ ¬InitiatedAt(f, t)

Table 1: The core predicates and domain-independent axioms of the EC dialect, MLN-EC.

2.3 Event Calculus

Representing and reasoning about actions and temporally-scoped relations has been a critical research topic in the area of Knowledge Representation and Reasoning (KRR) since the 60s [Shoham and McDermott, 1988]. Since then, various approaches have been proposed to overcome the Frame Problem of classical Artificial Intelligence (AI) [McCarthy and Hayes, 1981; Shanahan, 2006], i.e., the challenge of representing the effects of actions. Among them, EC, initially proposed by Kowalski and Sergot in 1986 [Kowalski and Sergot, 1989], is a system for reasoning about events (or actions) and their effects in the scope of Logic Programming. It combines excellent expressiveness with intuitive and readable representations, making it feasible to extend reasoning. It is an adequate tool for capturing domain knowledge that represents how an entity progresses in time through events. It has found applications ranging from robotics [Russel et al., 2013] and game design [Nelson and Mateas, 2008] to commonsense reasoning [Shanahan, 2004; Mueller, 2014], to name a few.

From a technical point of view, the core ontology of EC involves events, fluents and time points. The continuum of time is linear, and integers or real numbers represent the time points. A fluent is anything whose value is subject to change over time; the occurrence of an event may change its value. This could be a quantity, such as "the temperature in the room", whose value varies in numbers, or a proposition, such as "the window is open", whose truth state changes from time to time. In EC, the core axioms are domain-independent and define whether a fluent holds or not at a particular time point. In addition, these axioms capture what is known as the commonsense law of inertia, a way of declaring in formal logic that an event is assumed not to change a given property of a fluent unless there is evidence to the contrary [Shanahan and others, 1997].

We use a simplified version of EC (named MLN-EC), based on a discrete-time reworking of EC [Mueller, 2008], which was proven to work in a probabilistic setting [Skarlatidis et al., 2015]. Other dialects may have additional restrictions (e.g., complex time quantification) that hinder the realisation of the approach. For more information, we point the reader to [Mueller, 2004]. The basic predicates and the domain-independent axioms are presented in Table 1. One can read the upper line of two axioms from left to right: (1) a fluent f holds at time t + 1 if it was initiated at the previous time point t, and (2) the fluent f continues to hold, provided it was not previously terminated. The domain-dependent predicates InitiatedAt/2 and TerminatedAt/2 are expressed in an application-specific manner, guiding the logic behind the occurrence of events and some contextual constraints. One example of a common rule for InitiatedAt/2 is:

InitiatedAt(f, t) ⇐ Happens(e, t) ∧ Constraints[t]    (1)

The above definition states that a fluent f is initiated at time t if an event e happens and some optional, domain-dependent constraints are met. EC supports default reasoning via circumscription, representing that the fluent continues to persist unless other events happen. Therefore, in our definition of the event narrative, we assume these are the only events that occurred.

2.4 Markov Logic Networks

A Markov Logic Network (MLN) is an amalgam of a Markov network (aka Markov random field) and a first-order logic KB [Richardson and Domingos, 2006]. Specifically, it softens the constraints posed by the formulas with weights that support (positive weights) or penalise (negative weights) the worlds in which they are satisfied. In classical logic, by contrast, all statements are hard constraints (i.e., truth-preserving).

The formulas, being first-order logic objects [Genesereth and Nilsson, 1987], are constructed using four kinds of symbols: constants, variables, functions and predicates. Predicates and constants start with an upper-case letter, whereas functions and variables start with a lower-case letter. The variables are quantifiable over a given domain (e.g., type = {Temperature, Humidity}). The constants are objects in the respective domain (e.g., sensor types: Temperature, Air Quality, Microphone etc.). Variables range over the objects of the domain. The functions (e.g., downwardAngleTemp) represent actual mappings from a single object to a value or another object. Finally, the predicate symbols represent relations among objects, associated with truth values (e.g., Happens(DownwardAngle_Temp, 4)).
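Before turning to how MLN softens such formulas, the following is a small, crisp (non-probabilistic) Python sketch of the discrete-time machinery of Section 2.3: a narrative of Happens facts, domain-dependent initiation/termination conditions of the form in (1), and the inertia axioms of Table 1 applied time point by time point. The event and fluent names are hypothetical, not those of our KB.

```python
# Crisp, non-probabilistic sketch of the MLN-EC inertia axioms (Table 1).
INITIATING = {"Window_Open"}       # hypothetical events that initiate the fluent
TERMINATING = {"Window_Close"}     # hypothetical events that terminate it

def holds_at(narrative, horizon):
    """Compute HoldsAt(f, t) for t = 0..horizon from a narrative of Happens facts.

    narrative: dict mapping a time point to the set of events that Happen there.
    """
    holds = {0: False}                            # the fluent does not hold initially
    for t in range(horizon):
        events = narrative.get(t, set())
        if events & INITIATING:                   # InitiatedAt(f, t)
            holds[t + 1] = True                   # HoldsAt(f, t+1) <= InitiatedAt(f, t)
        elif events & TERMINATING:                # TerminatedAt(f, t)
            holds[t + 1] = False                  # not HoldsAt(f, t+1) <= TerminatedAt(f, t)
        else:
            holds[t + 1] = holds[t]               # inertia: persist unless interrupted
    return holds

narrative = {4: {"Window_Open"}, 14: {"Window_Close"}}
# The fluent holds from time point 5 through 14 and is false elsewhere.
print(holds_at(narrative, 20))
```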





A KB in MLN consists of both hard- and soft-constrained formulas. Hard constraints (clauses with infinite weight) correspond to unequivocal knowledge; an acceptable world therefore fulfils all of the hard constraints. By contrast, the soft constraints relate to the imperfect knowledge of the domain, which can be falsified in some worlds of the domain of discourse. This means that when a world violates a soft formula, it is less probable but not impossible.

Formally, an MLN is a set of pairs (F_i, w_i), where F_i is a first-order logic formula and w_i is a real-valued weight. The KB L, with the weighted formulas, together with a finite set of constants C = {c_1, c_2, ..., c_|C|}, defines a ground Markov network M_{L,C} as follows [Richardson and Domingos, 2006]:

• M_{L,C} has one binary node for each possible grounding of each predicate in L. The value of the node is 1 if the ground atom is true and 0 otherwise.

• M_{L,C} contains one feature for each possible grounding of each formula F_i in L. The value of this feature is 1 if the formula is true and 0 otherwise. The weight of the feature is the w_i associated with F_i in L.

An MLN is a template for constructing Markov networks: it will produce different networks given different sets of constants. The grounding process is the replacement of variables with constants from their domain. The nodes of M_{L,C} correspond to all ground atoms that can be generated by grounding a formula F_i in L with constants of C. Thus, there is an edge between two nodes of M_{L,C} iff the corresponding ground predicates appear together in (and hence are dependent through) a grounding of a formula F_i in L. A possible world of the MLN must satisfy all of the hard-constrained formulas, and its probability is proportional to the exponential of the sum of the weights of the soft-constrained formulas satisfied in this world (cf. Equation 2). Hence, an MLN defines a log-linear probability distribution over Herbrand interpretations (i.e., possible worlds).

In the context of an indirect sensing task, we know a priori that we will have two kinds of predicates: the evidence variables X, containing the narrative of real-time input events, translated into the Happens predicates of EC, and the set of query HoldsAt predicates Y, as well as other groundings of "hidden" predicates (i.e., neither query nor evidence); in EC these are the InitiatedAt and TerminatedAt predicates. The conditional likelihood of Y given X is defined as follows [Singla and Domingos, 2005]:

P(y | x) = (1 / Z_x) exp( Σ_{i ∈ F_Y} w_i n_i(x, y) )    (2)

Here, x ∈ X and y ∈ Y represent the possible assignments of the evidence set X and the query set Y, respectively. F_Y is the set of all MLN clauses produced from the KB L and the finite set of constants C. The term n_i(x, y) is the number of true groundings of the i-th clause involving the query atoms y given the evidence atoms x. Finally, Z_x is a partition function that normalises over all the possible assignments of y.

Equation 2 gives the probability distribution of the set of query variables conditioned on the set of observations. By modelling the conditional probability directly, the model remains agnostic about potential dependencies between the variables in X, and any factors that depend only on X are eliminated. Instead, the model makes conditional independence assumptions among the Y and assumptions on its inherent structure with dependencies of Y on X. In such a way, the number of possible worlds is constrained [Singla and Domingos, 2005; Sutton and McCallum, 2006] and the inference is much more efficient. However, calculating the formula exactly might become intractable even for a small domain. Consequently, approximate inference methods are preferred.

Originally, the authors in [Richardson and Domingos, 2006] proposed to use Gibbs sampling to perform inference, but the sampling breaks down when the KB has deterministic dependencies3 [Poon and Domingos, 2006; Domingos and Lowd, 2009]. The authors therefore proposed another Markov chain Monte Carlo method called MC-SAT [Poon and Domingos, 2006], based on satisfiability with slice sampling. Another type of inference is Maximum A Posteriori (MAP) inference, which describes the problem of finding the most probable state of the world given some evidence; it reduces to finding the truth assignment that maximises the sum of the weights of the satisfied clauses (i.e., argmax_y p(y | x)). The problem is generally NP-hard, but both exact and approximate satisfiability solvers exist [Domingos and Lowd, 2009]. In our experiments, we run approximate inference using the MC-SAT algorithm.

3 They are formed from hard-constrained formulas in the KB.
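To make Equation 2 concrete, here is a brute-force Python sketch over a toy ground network with two query atoms: every assignment y is scored by exp(Σ w_i n_i(x, y)) and normalised by Z_x. The clauses and weights are invented for illustration (in the actual MLN-EC theory the inertia axioms are hard constraints); real engines replace this enumeration with MC-SAT or MAP solvers.

```python
import itertools
import math

# Two ground query atoms of a toy network; evidence x is folded into the clauses.
QUERY_ATOMS = ["HoldsAt_openedWindow_5", "HoldsAt_openedWindow_6"]

def ground_clauses(y):
    """Return (weight, satisfied) pairs for one assignment y of the query atoms."""
    h5, h6 = y["HoldsAt_openedWindow_5"], y["HoldsAt_openedWindow_6"]
    return [
        (2.1, h5),                # evidence at t = 4 supports initiation, so holding at 5
        (1.5, (not h5) or h6),    # soft stand-in for inertia: holding at 5 implies holding at 6
    ]

def conditional_distribution():
    """Brute-force P(y | x) over all 2^|Y| assignments, as in Equation 2."""
    scores = {}
    for values in itertools.product([False, True], repeat=len(QUERY_ATOMS)):
        y = dict(zip(QUERY_ATOMS, values))
        weight_sum = sum(w for w, sat in ground_clauses(y) if sat)  # sum_i w_i n_i(x, y)
        scores[values] = math.exp(weight_sum)
    z_x = sum(scores.values())                                      # partition function Z_x
    return {values: s / z_x for values, s in scores.items()}

for values, p in conditional_distribution().items():
    print(dict(zip(QUERY_ATOMS, values)), round(p, 3))
```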






3 Modelling a DF

An activity affects various fundamental environmental properties, such as speed, pressure, temperature, luminosity, etc. Surrounding sensors may capture the various changes (forming the activity's DF), and what they capture depends on different contextual information, such as their proximity to the occurred phenomenon and their type (cf. Section 3.1). In addition, a sensor may observe ambient values (e.g., temperature) or require manual intervention to observe a change (e.g., separating the two magnetic elements of a contact sensor) (cf. Section 3.2).

This "change" (i.e., the forming pattern) is the "interesting event" we want to focus on. The underlying activity mostly stays unobserved; thus, the emitted DF indicates its occurrence. In addition, its state is a continuous value in time, which is tracked under the definition of the "fluent". With no sensor modality to identify the occurrence of an activity directly, due to its unavailability at the given time, or simply because no direct one exists, we treat its DF as a space of equivalent options that "indirectly" account for the same activity.

Our work uses commonsense knowledge (CK) to characterise how an activity affects its environment. In the running example of Section 2.2, some distinct data patterns exist, almost recognisable to the human eye, against which one may exercise a hypothesis. We consider a data-processing step viable for extracting such patterns, but it is out of the scope of the current paper. The abstracted representations (cf. Section 2.1) from low-level sensor data reflect their organisation in shapes and trends (e.g., an increasing angle in the sensor data). Therefore, someone with a naive knowledge of physics can make hypotheses about the occurrence of an activity using the abstractions from sensor data as evidence (cf. Section 3.3).

3.1 Contextual Constraints

Sensors are interfaces that serve as occurrence indicators for various monitored situations. The number of sensors grows with the number of monitored situations, making instrumentation, deployment and maintenance cumbersome tasks. A sensor primarily measures an environmental change as accurately as possible, with accuracy varying between different manufacturers. Selecting a sensor to monitor a situation ought to obey some criteria, which formulate the sensing fidelity of its output. In this paper, we propose the following criteria:

Type Different vendors exist for various sensors. Nonetheless, the type of sensor is of key importance. There is no doubt that different manufacturers may offer a better sensor device, affecting accuracy. Semantically, however, it is the sensor type that determines whether the sensor participates in the verification process, not its model.

Location The location is another important aspect for determining the credibility of the sensor output. Either the physical location or the position of the sensor in the space should affect the decision of selecting any sensor of a given type in a location (e.g., a room).

As discussed later in the paper, the above criteria are minimal constraints for a sensor to participate in reasoning. However, the sensors also have a fundamental high-level classification, making the shapeoid extraction from their data clearer and more focused.

3.2 Sensor Classification

A sensor is an interface between the physical and the digital world. The raw sensor data rarely match human semantics, but the representations of patterns in them do. The sensor classification the paper foresees, based on the nature of the resulting sensor data, is as follows:

Binary sensors restrict their output to two possible values. Usually, the values resemble the category itself (i.e., being binary); thus, one and zero. Furthermore, depending on the context4, the result may take values from it. For example, the output of a physical switch is "on" or "off", the result of a motion sensor may be "present" or "not present", and so on. The suitable data patterns for binary sensors are the HOP and FLAT representations.

Numerical sensors are almost every sensor with an arithmetic output in the set of real numbers R. Some examples of such sensors are accelerometer, humidity, temperature and pressure sensors. Accordingly, the data patterns which we found in the raw sensor data are those of ANGLE, HORN and FLAT.

One could say that a binary sensor is a subset of numerical sensors. However, we make the distinction explicit, as the binary sensors are semantically a practical standalone class. In the running example, we use numerical sensors. The sensor data's available observations (i.e., shapeoids) are the simple events with their respective time point in the focused bounded time window. We represent them with the Happens predicate, where finally a collection of such predicates forms the so-called "narrative" in EC.

3.3 The Narrative of Events in EC

An event "just" happens, with an accompanying discrete time point to keep a reference in the timeline. Its chosen representation, according to the dialect of EC, is the predicate Happens(e, t). Time t is quantified over the integers, exhibiting coherence among the occurred events. The events in the sensing timeline are the formed shapeoids, and by using lexical words for the symbolic representation, the intuition behind them is human-readable (e.g., downward ANGLE). For example, in Figure 1, the two activities of opening and closing the window produce an impact on the five surrounding sensors. We observe that around the time of opening the window, distinct patterns are forming. Figure 2 contains, in separate graphs, a clearer view of the data in Figure 1, after performing a dimensionality reduction step (e.g., Piecewise Aggregate Approximation (PAA) [Ding et al., 2008]). The patterns were extracted empirically, resembling the proposed lexical shapeoids (cf. Section 2.1):

...
Happens(Flat_Mic,3)
Happens(Flat_Hum,3)
Happens(DownwardAngle_Temp,4)
Happens(DownwardAngle_Aq,4)
Happens(UpwardHorn_Mic,4)
...
Happens(UpwardHorn_Temp,11)
Happens(UpwardAngle_Hum,11)          (3)
Happens(Flat_Mic,11)
...
Happens(DownwardAngle_Temp,14)
Happens(DownwardAngle_Pres,14)
Happens(Flat_Hum,15)
Happens(UpwardHorn_Mic,15)
Happens(UpwardAngle_Temp,15)
...

4 Context is any information that one can use to characterise the situation of an entity. An entity is a person, place, or object considered relevant to the interaction between a user and an application, including the user and application themselves [Dey, 2000].
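For illustration, a few lines of Python suffice to serialise extracted shapeoid events into the ground Happens atoms of the narrative (3); the tuple layout and the helper name to_happens are ours, not part of the actual pipeline.

```python
# Shapeoid events as (pattern, sensor, time point) tuples, where the pattern
# string already combines the trend and the shapeoid type.
shapeoid_events = [
    ("Flat", "Mic", 3), ("Flat", "Hum", 3),
    ("DownwardAngle", "Temp", 4), ("DownwardAngle", "Aq", 4), ("UpwardHorn", "Mic", 4),
]

def to_happens(events):
    """Render shapeoid events as ground Happens/2 atoms, as in the narrative (3)."""
    return [f"Happens({pattern}_{sensor},{t})" for pattern, sensor, t in events]

print("\n".join(to_happens(shapeoid_events)))
# Happens(Flat_Mic,3)
# Happens(Flat_Hum,3)
# Happens(DownwardAngle_Temp,4)
# ...
```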






[Figure 2: five panels, (a) Temperature, (b) Air Pressure, (c) Humidity, (d) Air Quality, (e) Microphone, each showing a z-normalised series over 20 data points.]

Figure 2: The z-normalised sensor data (in 20 data points) from Figure 1, after a dimensionality reduction step.
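The pre-processing behind Figure 2 can be sketched as z-normalisation followed by PAA down to 20 points, following the standard definition [Ding et al., 2008]. The snippet below is an assumption-laden illustration (synthetic data, made-up sampling rate), not the exact pipeline used to produce the figure.

```python
import numpy as np

def znorm(series):
    """Z-normalise a raw sensor series."""
    series = np.asarray(series, dtype=float)
    return (series - series.mean()) / (series.std() + 1e-12)

def paa(series, n_points=20):
    """Piecewise Aggregate Approximation: the mean of each of n_points equal chunks."""
    return np.array([chunk.mean() for chunk in np.array_split(series, n_points)])

# e.g. reduce a synthetic two-minute, 1 Hz temperature trace (120 samples) to 20 points
temperature = 32.0 + np.cumsum(np.random.default_rng(0).normal(0, 0.05, 120))
reduced = paa(znorm(temperature), 20)
print(reduced.shape)   # (20,)
```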


4 Probabilistic Indirect Sensing via MLN Definitions

In the following, we elaborate on constructing the KB containing the representations of the sensor events, using contextual words in "sentences" that comply with the formalism of EC and are expressed in first-order logic.

4.1 Knowledge Base

For our purposes, the KB, or the so-called "theory", contains a few function definitions, predicate definitions, as well as the inertia-law axioms of EC5 (cf. Table 1). We consider the observed patterns as a continuous narrative of Happens predicates (cf. (3)). InitiatedAt and TerminatedAt determine under which factors a fluent is initiated or terminated at a given time point, using the form in (1). Finally, the query predicate HoldsAt incorporates a possible quantification over the verification of a monitored situation (i.e., a fluent).

Table 2 shows a fragment of the KB and the associated weights. The formulas are converted to a clausal form during the grounding phase, also known as conjunctive normal form (CNF), a conjunction of disjunctions of literals. The next step is the replacement of the variables with the constants, which formulates the ground predicates. As such, the construction of the Markov network consists of one binary node V for each possible grounding of each predicate. A world is an assignment of a truth value to each of these nodes.

The definition of the indirect sensing rules follows CK, represented as a theory in MLN, enacting it as part of commonsense reasoning (CR)6: the sort of reasoning people perform in daily life [Mueller, 2014], which is vague and uncertain. For example, Table 2 contains two separate rules, which reflect an atomic instruction of the DF, using a temperature sensor and a microphone. For our purposes, we consider that the events in the narrative are the only ones that occurred.

A rise in the temperature readings, or a sudden spike in the sound pressure levels, could be anything in an open world, including the opening/closing of a door in a room. However, with the help of context, we may exercise the hypothesis that a temperature sensor close to the window could indicate its state. The hypothesis is asked in the form of a query, representing the probability for the situation of an opened window to be true given the observations (i.e., the ground truth). For example, if we need to encode an "opened door", we may include the same rule with a lower weight encoding our confidence in the result. Then, using the background knowledge that the sensors are closer to the window, we encode this with a higher weight value in the opened-window rule. MLN has many learning algorithms [Richardson and Domingos, 2006] to determine the weight assignment; however, as we do not intend to compute the absolute probabilities of a specific occurred situation, we opt for the most likely situation given the evidence.

4.2 Evidence

The evidence contains ground predicates (facts) (e.g., the narrative of events in Section 3.3) and optionally ground function mappings. A function mapping is a process of mapping a function to a unique identifier. For example, the first formula in Table 2 contains the function downwardAngleTemp(r). During the grounding phase, constants from the domain of the variable r substitute it7. Thus, a function mapping could be the following: DownwardAngle_Temp_LocA = downwardAngleTemp(LocationA). All the events of the grounded Happens predicates in Section 3.3 follow the same procedure for their function mappings.

5 Experiments and Results

In this section, we evaluate our approach in the domain of smart homes. As presented in Section 2.2, we use a publicly available dataset. The data timeline spans over twelve consecutive full days. The dataset was in a zip format, which contains multiple comma-separated value (CSV) files with a total size of approximately 50 Gigabytes (GB)8. We selected one device close to the situation of interest (i.e., close to the window). We extracted the relevant data points using the five sensors capturing the DF of opening/closing the room's window. We do not process the raw data points; instead, we use the shapeoids from the data, whose extraction was possible via our tool Scotty9. The total number of shapeoid events is 4393, where the ground truth events from the window contact sensor are 87.

5 They should remain hard-constrained; otherwise, the recognition of the situation will converge to be uncertain up to the horizon of probability.
6 CR is implemented as a valid (or approximately valid) inference [Davis, 2017] in MLN as part of the EC law of inertia.
7 We assume a single room, and its context is not reflected in the naming scheme of the function.
8 The actual size of the raw data exceeds 250 GB.
9 This work is meant to be published in a forthcoming conference.






FOL formula                                                                                          Weight
InitiatedAt(openedWindow(r), t) ⇒ Happens(downwardAngleTemp(r), t) ∧ Happens(flatTemp(r), t − 1)     2.1
InitiatedAt(openedWindow(r), t) ⇒ Happens(upwardHornMic(r), t) ∧ Happens(flatMic(r), t − 1)          0.2

Table 2: An excerpt of the first-order KB and the corresponding weights in the MLN.
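For reference, the InitiatedAt and TerminatedAt events in such rules drive the fluent (here openedWindow) through the domain-independent inertia axioms of the discrete Event Calculus; we restate them in the paper's notation, following the MLN-EC formulation of [Skarlatidis et al., 2015]:

InitiatedAt(f, t) ⇒ HoldsAt(f, t + 1)
HoldsAt(f, t) ∧ ¬TerminatedAt(f, t) ⇒ HoldsAt(f, t + 1)
TerminatedAt(f, t) ⇒ ¬HoldsAt(f, t + 1)
¬HoldsAt(f, t) ∧ ¬InitiatedAt(f, t) ⇒ ¬HoldsAt(f, t + 1)

Kept as hard constraints (cf. footnote 5), these axioms make a fluent that is initiated at time point t hold from t + 1 onwards until it is terminated.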


Scenario   Description                                              Duration
S#1        Two sensors with weak and strong weights.                1 m 45 s
S#2        Three sensors with one weak and two strong weights.      1 m 9 s

Table 3: The described scenarios with their inference duration times.

Scenario   TP     TN     FP    FN     Precision   Recall   F1
S#1        288    2039   174   1892   0.6234      0.1321   0.2180
S#2        1016   1554   659   1164   0.6066      0.4661   0.5271

Table 4: Performance results using the marginal inference and a threshold of 0.6.
Figure 3 (plot of F1 Score vs. Threshold, one curve per scenario S#1 and S#2): F1 scores using various threshold values for the situation recognition of the opened window.

   We put forward two scenarios (cf. Table 3), which contain rules for declaring the alternatives in recognising the situation of an opened/closed window. The purpose of the scenarios is to run the computation against the existing narrative with the discovered events but using different sensor compositions. Each recognition rule also contains a weight value, which was empirically assigned, as we consider the weights confidence values of the rules.
   We implemented the KB and the narrative evidence file to demonstrate the approach's feasibility using an open-source implementation of Markov Logic Networks, named LoMRF [Skarlatidis and Michelioudakis, 2014]. Together with the domain-dependent rules for each scenario, the full KB and the evidence file are publicly available online10, enabling reproducible results. The KB, given the evidence, is transformed into a Markov Network of 26353 ground clauses and 13177 ground predicates. We run marginal inference on the developed MLN in vanilla runs without any interference from other processes. All the results are averaged over five runs with a corresponding standard deviation. The experiments are executed on a virtual machine (VM) in a self-hosted data centre at the University of Ulm, running on OpenStack under the series "Victoria". The VM runs with 8 cores (16 threads) and 16 GB of RAM.
   In the experimental analysis, we present the results for the marginal inference in terms of F1 score for a range of thresholds between 0.0 and 1.0. We consider the situation recognition task successful when its probability is above the specified threshold. In Table 4, we present a snapshot of the performance using the threshold value 0.6 in terms of True Positives (TP), True Negatives (TN), False Positives (FP), False Negatives (FN), Precision, Recall and F1 score.
   The scenarios have a certain flavour. The basic intuition of the experiments is to showcase that we may combine sensors that have an obscure interpretation (e.g., a spike in the microphone can be anything, even when placed next to the window) with sensors that act as a more direct verification step (e.g., air quality, temperature). We assume that the shapeoid events are the only ones that happen in the environment in focus. More alternative sentences may be encoded accordingly, using shapeoids of the humidity or the air pressure sensor. Based on the inertia laws of EC, the fluent starts to hold at time point t + 1, hence the assignment to the next time point from the pattern event used in the narrative (3). In Figure 3, the F1 score is higher for the marginal inference in S#2 due to the additional strong sensor. S#1, similar to S#2, contains a shapeoid in the microphone data (increasing horn) which matches both the fluent's initiation and termination rules. Hence, during the inference process, the probability always strives towards 0.5, which is regulated by another sensor in the rules (the air quality sensor) with a higher weight value.
   We note here that in a real setting, the verification of the situation (i.e., the fluent) depends on whether the required observation is made (e.g., the shapeoid event from the temperature sensor), which may be a delayed effect of the activity itself; in other words, it takes some time until the open window affects the temperature sufficiently. In the experimental analysis, we calculate the performance measures strictly based on the time range of an opened window. Therefore, the ground truth is the single point of reference for calculating the performance. The delay between the activity and its observable DF should be accounted for to obtain a more accurate timing prediction. We observe a considerable amount of FP, which indicates a plausible recognition of an opened window but with a certain delay. Thus, we consider the effective F1 scores in the scenarios to be slightly higher than reported.

   10 https://osf.io/n3ury/
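The thresholded evaluation described above amounts to a per-timepoint comparison between the marginal probability of the fluent and the contact-sensor ground truth. A minimal Python sketch of this computation follows; the data layout and numbers are toy assumptions, not the actual LoMRF output.

def evaluate(marginals, ground_truth, threshold=0.6):
    """marginals: {timepoint: P(HoldsAt(openedWindow(r), t))};
    ground_truth: set of timepoints at which the window was actually open."""
    tp = tn = fp = fn = 0
    for t, p in marginals.items():
        predicted, actual = p >= threshold, t in ground_truth
        if predicted and actual:
            tp += 1
        elif predicted:
            fp += 1
        elif actual:
            fn += 1
        else:
            tn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"TP": tp, "TN": tn, "FP": fp, "FN": fn,
            "Precision": precision, "Recall": recall, "F1": f1}

if __name__ == "__main__":
    # Toy marginals for five time points and a toy ground-truth interval {2, 3, 4}.
    print(evaluate({1: 0.10, 2: 0.70, 3: 0.80, 4: 0.40, 5: 0.65}, {2, 3, 4}))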





6    Related Work
There is a tremendous amount of research in context modelling, context reasoning, and their unified view via various middleware systems; for a recent survey, we point the reader to [Perera et al., 2013]. In this paper, we focus on a bottom-up approach to the recognition of occurred situations. We employ a probabilistic rule-based approach, using occurred sensor events as evidence for the reasoning task.
   In [Liu et al., 2017], the authors create a bottom-up hierarchical model using the raw sensor data as evidence while creating inference rules encoded in an MLN to recognise complex events. In order to create abstractions from the raw data, they use various thresholds per sensor type. In our approach, we use generic template abstractions that are based on the data shapes and trends. The core contribution of their paper is the dynamic assignment of weights learned from a training dataset; we do not assume that the user has a training dataset to learn the weights from, because we use the weights as confidence values for the inference rules. Finally, in our paper, we foresee scalability issues that may arise from the free variables in the MLN rules, which may drive the computation times to higher levels.
   Our choice of a rule-based reasoning technique has a broad spectrum of applications across many domains, making it a commonly used technique [Perera et al., 2013]. Another interesting technique, which is based on previously acquired knowledge, is case-based reasoning (CBR) [Aamodt and Plaza, 1994; Biswas et al., 2014]. It offers solving mechanisms by adapting solutions that have been suggested for similar issues in the past. The authors in [Kofod-Petersen and Aamodt, 2003] use CBR to understand an occurred situation based on available contextual information. A case-based solution is not favourable in our case because collecting and maintaining previous cases is a cumbersome task. Our work does not require any previously known input from sensor observations and domain-dependent knowledge during the rule specification.
   In this paper, we focus on finding alternatives for recognising a situation. Similarly, Loke [Loke, 2006] advocates that the situation in_meeting_now has different recognition ways based on contextual cues. The author follows an abductive treatment of the subject, as we also do. In the following years, the author developed a formalism to represent compositions of sensors that can act on an understanding of their situations [Loke, 2016].
   Finally, although sensing data contain implicit information, explicit domain knowledge is required for situation recognition. Many research works employ logic-based models for situation recognition in smart homes, such as the Event Calculus (EC) [Chen et al., 2008]. Other works have also employed EC in activity recognition from video streams [Artikis et al., 2014] and health monitoring [Falcionelli et al., 2019]. However, it is unclear how these systems move from the raw data to the tagged symbolic representations.

7    Conclusion & Discussion
In this paper, we employed Markov Logic Networks for modelling and reasoning over uncertain alternatives for the method of indirect sensing. We use the temporal formalism of EC as a "linchpin" for driving the reasoning about the sensing objects and creating observations for the occurrence of certain situations (e.g., "is the window open"). The concept of the DF allows using different sensor setups to monitor the same situation(s). In other words, the task is parallel to interpreting the given evidence (e.g., sensor data) to find the most likely explanation that created the DF. As such, we declare these logical "inference" sentences in a human-readable form of reasoning that incorporates commonsense logic.
   Due to the nature of environmental situations, the interpretation (i.e., evaluation) of such sentences depends on the full context. For example, the same sentence in Table 2 might not apply if the weather outside is warmer than the sensor's environment. In this case, the temperature may not decrease but stay the same or even increase. Therefore, the corresponding sentence will never evaluate to true. Instead, a fallback to another sensor is needed. Nevertheless, the approach defends the redundancy, or alternatives, in detecting the desired situation, considering that we usually use direct means for sensing (e.g., a contact sensor to detect if the door is open).
   The lack of sensors to capture the whole DF of an activity leads to an incomplete "view of the world". The question thereby is which physical effects are of specific relevance for interpreting an event and which can be omitted. These conditions may vary enormously between different events; e.g., a person speaking or the sun rising each have different effects on the environment and thus (to a degree) require different sensors for interpretation, but both could also be observed using additional information: sound, visual, temperature, time, etc.
   Concerning the employed method of MLNs, there is an issue with predicates that have free variables in the body of a rule; during the grounding phase, this creates a disjunction of the Cartesian grounded conjunctions of the formula, translating these variables into existentially quantified ones and leading to a possible combinatorial explosion. We consider that any additional constraint in a domain-dependent rule should contain as variables only the time t and the location r. A knowledge engineer should follow this guideline and remove any existentially quantified variables using the technique of skolemisation [Broeck et al., 2013], overcoming this limitation for the solution's scalability.
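As a schematic illustration of this blow-up (the rule below is illustrative and not part of the actual KB), consider a recognition rule whose consequent mentions a sensor variable s that occurs nowhere else, and which is therefore treated as existentially quantified during grounding:

InitiatedAt(openedWindow(r), t) ⇒ ∃s (Happens(spike(s, r), t) ∧ Happens(flat(s, r), t − 1))

Over a domain of n sensor constants s1, …, sn, the existential expands into a disjunction of n grounded conjunctions, and converting that disjunction to clausal form yields 2^n clauses for every grounding of r and t. Restricting rule variables to t and r, or eliminating the existential via skolemisation [Broeck et al., 2013], avoids this growth.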
   Finally, the observed data patterns may also result from multiple overlapping activities that are challenging to separate, such as speaking in traffic, leading to uncertainty about the interpretation. As future work, we want to overcome the limitations of MLNs concerning the free variables in the rules and concentrate on a dynamic ecosystem that realises the proposed work.

Acknowledgments
This work was partially funded by the Federal Ministry of Education and Research (BMBF) of Germany under Grant No. 01IS18072.





References
[Aamodt and Plaza, 1994] Agnar Aamodt and Enric Plaza. Case-based reasoning: Foundational issues, methodological variations, and system approaches. AI Communications, 7(1):39–59, 1994.
[Artikis et al., 2014] Alexander Artikis, Marek Sergot, and Georgios Paliouras. An event calculus for event recognition. IEEE Transactions on Knowledge and Data Engineering, 27(4):895–908, 2014.
[Bettini et al., 2010] Claudio Bettini, Oliver Brdiczka, Karen Henricksen, Jadwiga Indulska, Daniela Nicklas, Anand Ranganathan, and Daniele Riboni. A survey of context modelling and reasoning techniques. Pervasive and Mobile Computing, 6(2):161–180, 2010.
[Birnbach et al., 2019] Simon Birnbach, Simon Eberz, and Ivan Martinovic. Peeves: Physical event verification in smart homes. In Proceedings of the 2019 ACM Conference on Computer and Communications Security. ACM, 2019.
[Biswas et al., 2014] Saroj K Biswas, Nidul Sinha, and Biswajit Purkayastha. A review on fundamentals of case-based reasoning and its recent application in different domains. International Journal of Advanced Intelligence Paradigms, 6(3):235–254, 2014.
[Broeck et al., 2013] Guy Van den Broeck, Wannes Meert, and Adnan Darwiche. Skolemization for weighted first-order model counting. arXiv preprint arXiv:1312.5378, 2013.
[Chen et al., 2008] Liming Chen, Chris Nugent, Maurice Mulvenna, Dewar Finlay, Xin Hong, and Michael Poland. Using event calculus for behaviour reasoning and assistance in a smart home. In International Conference on Smart Homes and Health Telematics, pages 81–89. Springer, 2008.
[Cheng et al., 2014] Guangchun Cheng, Yiwen Wan, Bill P Buckles, and Yan Huang. An introduction to Markov logic networks and application in video activity analysis. In Fifth International Conference on Computing, Communications and Networking Technologies (ICCCNT), pages 1–7. IEEE, 2014.
[Davis, 2017] Ernest Davis. Logical formalizations of commonsense reasoning: a survey. Journal of Artificial Intelligence Research, 59:651–723, 2017.
[Dey et al., 2004] Anind K Dey, Raffay Hamid, Chris Beckmann, Ian Li, and Daniel Hsu. a cappella: programming by demonstration of context-aware applications. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 33–40, 2004.
[Dey, 2000] Anind Kumar Dey. Providing Architectural Support for Building Context-Aware Applications. PhD thesis, Georgia Institute of Technology, USA, 2000. AAI9994400.
[Dey, 2001] Anind K Dey. Understanding and using context. Personal and Ubiquitous Computing, 5(1):4–7, 2001.
[Ding et al., 2008] Hui Ding, Goce Trajcevski, Peter Scheuermann, Xiaoyue Wang, and Eamonn Keogh. Querying and mining of time series data: experimental comparison of representations and distance measures. Proceedings of the VLDB Endowment, 1(2):1542–1552, 2008.
[Domingos and Lowd, 2009] Pedro Domingos and Daniel Lowd. Markov logic: An interface layer for artificial intelligence. Synthesis Lectures on Artificial Intelligence and Machine Learning, 3(1):1–155, 2009.
[Durrant-Whyte, 1990] Hugh F Durrant-Whyte. Sensor models and multisensor integration. In Autonomous Robot Vehicles, pages 73–89. Springer, 1990.
[Falcionelli et al., 2019] Nicola Falcionelli, Paolo Sernani, Albert Brugués, Dagmawi Neway Mekuria, Davide Calvaresi, Michael Schumacher, Aldo Franco Dragoni, and Stefano Bromuri. Indexing the event calculus: towards practical human-readable personal health systems. Artificial Intelligence in Medicine, 96:154–166, 2019.
[Gellersen et al., 2002] Hans W Gellersen, Albrecht Schmidt, and Michael Beigl. Multi-sensor context-awareness in mobile devices and smart artifacts. Mobile Networks and Applications, 7(5):341–351, 2002.
[Genesereth and Nilsson, 1987] Michael R Genesereth and Nils J Nilsson. Logical Foundations of Artificial Intelligence. Morgan Kaufmann, 1987.
[Hartmann et al., 2007] Björn Hartmann, Leith Abdulla, Manas Mittal, and Scott R Klemmer. Authoring sensor-based interactions by demonstration with direct manipulation and pattern recognition. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 145–154, 2007.
[Kofod-Petersen and Aamodt, 2003] Anders Kofod-Petersen and Agnar Aamodt. Case-based situation assessment in a mobile context-aware system. In Artificial Intelligence in Mobile Systems, pages 41–49, 2003.
[Kowalski and Sergot, 1989] Robert Kowalski and Marek Sergot. A logic-based calculus of events. In Foundations of Knowledge Base Management, pages 23–55. Springer, 1989.
[Laput et al., 2017] Gierad Laput, Yang Zhang, and Chris Harrison. Synthetic sensors: Towards general-purpose sensing. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pages 3986–3999, 2017.
[Lin et al., 2003] Jessica Lin, Eamonn Keogh, Stefano Lonardi, and Bill Chiu. A symbolic representation of time series, with implications for streaming algorithms. In Proceedings of the 8th ACM SIGMOD Workshop on Research Issues in Data Mining and Knowledge Discovery, pages 2–11, 2003.
[Liu et al., 2017] Fagui Liu, Dacheng Deng, and Ping Li. Dynamic context-aware event recognition based on Markov logic networks. Sensors, 17(3):491, 2017.





[Loke, 2006] Seng Wai Loke. On representing situations for context-aware pervasive computing: six ways to tell if you are in a meeting. In Fourth Annual IEEE International Conference on Pervasive Computing and Communications Workshops (PERCOMW'06), pages 5–pp. IEEE, 2006.
[Loke, 2016] Seng W Loke. Representing and reasoning with the internet of things: a modular rule-based model for ensembles of context-aware smart things. EAI Endorsed Transactions on Context-aware Systems and Applications, 3(8), 2016.
[McCarthy and Hayes, 1981] John McCarthy and Patrick J Hayes. Some philosophical problems from the standpoint of artificial intelligence. In Readings in Artificial Intelligence, pages 431–450. Elsevier, 1981.
[Mueller, 2004] Erik T Mueller. Event calculus reasoning through satisfiability. Journal of Logic and Computation, 14(5):703–730, 2004.
[Mueller, 2008] Erik T Mueller. Event calculus. Foundations of Artificial Intelligence, 3:671–708, 2008.
[Mueller, 2014] Erik T Mueller. Commonsense Reasoning: An Event Calculus Based Approach. Morgan Kaufmann, 2014.
[Nelson and Mateas, 2008] Mark J Nelson and Michael Mateas. Recombinable game mechanics for automated design support. In AIIDE, 2008.
[Papadopoulos and Tzanetakis, 2016] Helene Papadopoulos and George Tzanetakis. Models for music analysis from a Markov logic networks perspective. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 25(1):19–34, 2016.
[Patel et al., 2002] Pranav Patel, Eamonn Keogh, Jessica Lin, and Stefano Lonardi. Mining motifs in massive time series databases. In 2002 IEEE International Conference on Data Mining, Proceedings, pages 370–377. IEEE, 2002.
[Perera et al., 2013] Charith Perera, Arkady Zaslavsky, Peter Christen, and Dimitrios Georgakopoulos. Context aware computing for the internet of things: A survey. IEEE Communications Surveys & Tutorials, 16(1):414–454, 2013.
[Poon and Domingos, 2006] Hoifung Poon and Pedro Domingos. Sound and efficient inference with probabilistic and deterministic dependencies. In AAAI, volume 6, pages 458–463, 2006.
[Richardson and Domingos, 2006] Matthew Richardson and Pedro Domingos. Markov logic networks. Machine Learning, 62(1-2):107–136, 2006.
[Russel et al., 2013] Stuart Russel, Peter Norvig, et al. Artificial Intelligence: A Modern Approach. Pearson Education Limited, 2013.
[Schmidt, 2003] Albrecht Schmidt. Ubiquitous computing – computing in context. Lancaster University (United Kingdom), 2003.
[Shanahan and others, 1997] Murray Shanahan et al. Solving the Frame Problem: A Mathematical Investigation of the Common Sense Law of Inertia. MIT Press, 1997.
[Shanahan, 2004] Murray Shanahan. An attempt to formalise a non-trivial benchmark problem in common sense reasoning. Artificial Intelligence, 153(1-2):141–165, 2004.
[Shanahan, 2006] Murray Shanahan. The frame problem. Encyclopedia of Cognitive Science, 2006.
[Shoham and McDermott, 1988] Yoav Shoham and Drew McDermott. Problems in formal temporal reasoning. Artificial Intelligence, 36(1):49–61, 1988.
[Singla and Domingos, 2005] Parag Singla and Pedro Domingos. Discriminative training of Markov logic networks. In AAAI, volume 5, pages 868–873, 2005.
[Skarlatidis and Michelioudakis, 2014] Anastasios Skarlatidis and Evangelos Michelioudakis. Logical Markov Random Fields (LoMRF): an open-source implementation of Markov Logic Networks, 2014.
[Skarlatidis et al., 2015] Anastasios Skarlatidis, Georgios Paliouras, Alexander Artikis, and George A Vouros. Probabilistic event calculus for event recognition. ACM Transactions on Computational Logic (TOCL), 16(2):1–37, 2015.
[Snidaro et al., 2015] Lauro Snidaro, Ingrid Visentini, and Karna Bryan. Fusing uncertain knowledge and evidence for maritime situational awareness via Markov logic networks. Information Fusion, 21:159–172, 2015.
[Sutton and McCallum, 2006] Charles Sutton and Andrew McCallum. An introduction to conditional random fields for relational learning. Introduction to Statistical Relational Learning, 2:93–128, 2006.
[Yeh et al., 2016] Chin-Chia Michael Yeh, Yan Zhu, Liudmila Ulanova, Nurjahan Begum, Yifei Ding, Hoang Anh Dau, Diego Furtado Silva, Abdullah Mueen, and Eamonn Keogh. Matrix Profile I: all pairs similarity joins for time series: a unifying view that includes motifs, discords and shapelets. In 2016 IEEE 16th International Conference on Data Mining (ICDM), pages 1317–1322. IEEE, 2016.
[Zhang et al., 2019] Pei Zhang, Shijia Pan, Mostafa Mirshekari, Jonathon Fagert, and Hae Young Noh. Structures as sensors: Indirect sensing for inferring users and environments. Computer, 52(10):84–88, 2019.


