<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
<journal-title>Deep learning for sensor-based activity recognition: A survey. CoRR abs/</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Measuring Functional Independence of an Aged Person with a Combination of Machine Learning and Logical Reasoning</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Nobuyuki Oishi</string-name>
          <email>n.oishi@uec.ac.jp</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Masayuki Numao</string-name>
          <email>numao@cs.uec.ac.jp</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Communication Engineering and Informatics The University of Electro-Communications</institution>
        </aff>
      </contrib-group>
      <pub-date>
        <year>1707</year>
      </pub-date>
      <volume>03502</volume>
      <abstract>
<p>Various approaches to human activity recognition have been proposed to achieve better management of human health and wellness. However, few approaches measure the levels of activity in an accountable way. In this paper, we propose a novel approach to measure the functional independence of an aged person with a combination of machine learning and ontology-based logical reasoning. To combine the two different approaches, we utilize semantic contexts as the interlayer and dummy contexts as a way of handling the difficulty of reasoning with incomplete data. The Functional Independence Measure (FIM) is used to build an ontology for evaluating an aged person's functional independence. Evaluation experiments using data collected in the authors' laboratory environment are conducted, and the results show the effectiveness of the proposed approach.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        As the world’s population ages rapidly, the importance of maintaining elderly people’s health and wellness is rising. Under these circumstances, Ambient Assisted Living (AAL) has been gathering a great deal of interest. AAL is a concept for supporting elderly people to live independently for as long as possible and improving their quality of life with ambient intelligence techniques, including AI and IoT. In the AAL community, human activity recognition is one of the topics attracting the most attention
        <xref ref-type="bibr" rid="ref6">(Monekosso, Florez-Revuelta, and Remagnino 2015)</xref>
        and is necessary for making intelligent systems proactive and adaptive to each user.
      </p>
      <p>
        Numerous previous studies have proposed various approaches to human activity recognition. However, insufficient research has been conducted on measuring activity levels in an accountable way, even though long-term observation of changes in Activities of Daily Living (ADL) is essential for helping elderly people stay active longer. Among activity recognition approaches, there are two mainstreams: data-driven approaches and knowledge-driven approaches
        <xref ref-type="bibr" rid="ref2">(Chen et al. 2012)</xref>
        .
      </p>
      <p>In this paper, we propose a novel approach to measuring the functional independence of an aged person with a combination of machine learning and ontology-based logical reasoning. The proposed approach possesses the explainability derived from ontology-based logical reasoning while keeping data-driven approaches’ flexibility and robustness to individual differences in activities. To combine the two different approaches, we use semantic contexts as the interlayer between them. The machine learning layer applies machine learning techniques to data collected from various sensors such as object sensors (RFID, contact sensors), wearable sensors (smartwatches, RFID-attached clothes), and environment sensors (temperature and light sensors). It extracts context information such as inhabitants’ actions, postures, relatively low-level activities, and spatial relations between humans and objects. The recognized context information is then organized as semantic contexts, from which higher-level contextual activities and their activity levels are derived by ontology-based logical reasoning. In addition, we propose utilizing dummy contexts as a way to handle the difficulty of reasoning with incomplete data; they let the system calculate a confidence value for each possible class even when the context information obtained at the machine learning layer is incomplete.</p>
      <p>We use the Functional Independence Measure (FIM) to design an ontology for scoring an aged person’s functional independence level. The ontology is written in OWL/RDF format following the W3C standards. The FIM is widely used in medical and nursing care fields, and the Japanese government now uses it to determine the amount of nursing care insurance; thus, an automatic FIM scoring method is highly desired. The FIM consists of 18 items: 13 motor items and 5 cognitive items. Each of the 18 items has a maximum score of 7, indicating a patient’s independence level, with 7 being the highest (independent) and 1 the lowest (most dependent). The score decreases as the level of assistance the patient requires increases. The list of the 18 items is shown in Table 1. A characteristic of the FIM is that the score should be based on what a patient actually does, not on what a patient should or might be able to do. Hence, the FIM matches sensor-based activity recognition well.</p>
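      <p>As a concrete illustration of the scoring structure described above, the following sketch (ours, not the authors' implementation) encodes the 18 FIM items from Table 1 as plain data and sums the motor, cognitive, and total scores:</p>

```python
# A minimal sketch of the FIM's 18-item structure (items as listed in Table 1).
# Each item is scored from 1 (total assistance) to 7 (complete independence).
FIM_ITEMS = {
    "motor": [
        "Eating", "Grooming", "Bathing", "Dressing Upper Body",
        "Dressing Lower Body", "Toileting",              # Self-Care
        "Bladder", "Bowel",                              # Sphincter Control
        "Bed/Chair/Wheelchair", "Toilet", "Tub/Shower",  # Transfers
        "Walk/Wheelchair", "Stairs",                     # Locomotion
    ],
    "cognitive": [
        "Comprehension", "Expression",                   # Communication
        "Social Interaction", "Problem Solving", "Memory",  # Social Cognition
    ],
}

def fim_total(scores: dict) -> dict:
    """Sum per-item scores (each 1-7) into motor, cognitive, and total FIM scores."""
    for item, s in scores.items():
        if not 1 <= s <= 7:
            raise ValueError(f"{item}: FIM scores range from 1 to 7")
    motor = sum(scores[i] for i in FIM_ITEMS["motor"])
    cognitive = sum(scores[i] for i in FIM_ITEMS["cognitive"])
    return {"motor": motor, "cognitive": cognitive, "total": motor + cognitive}

# A fully independent person scores 7 on all 18 items: 91 motor + 35 cognitive = 126.
perfect = {item: 7 for group in FIM_ITEMS.values() for item in group}
print(fim_total(perfect))  # {'motor': 91, 'cognitive': 35, 'total': 126}
```

      <p>The subtotal ceilings (91 motor, 35 cognitive, 126 total) match the FIM score sheet.</p>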
      <p>The rest of this paper is organized as follows. Section 2
introduces the related works. Section 3 describes the
proposed approach’s architecture, the semantic contexts, the
ADL/FIM ontology, and the usage of dummy properties in
detail. Section 4 presents results and analysis of the
experiments of this study. Section 5 concludes the paper with
future work.</p>
    </sec>
    <sec id="sec-2">
      <title>Related Works</title>
      <p>In this section, we review the pros and cons of both
machine learning-based activity recognition approaches and
ontology-based activity recognition approaches.</p>
      <sec id="sec-2-1">
        <title>Machine Learning-based ADL Recognition</title>
        <p>One of the most important advantages of machine learning-based approaches is that they can handle the noise, uncertainty, and incompleteness that sensor data possesses. Their major drawback is that a large amount of sensor data is required to create activity models, which results in a cold-start problem and limits model applicability and reusability. Despite that, in recent years the advancement of deep learning has made automatic, high-level feature extraction possible (Wang et al. 2017) and has achieved higher performance than conventional techniques. However, while deep learning’s remarkable performance is gathering great interest, its insufficient explainability is now considered an urgent issue <xref ref-type="bibr" rid="ref4">(Gilpin et al. 2018)</xref>. Also, many of the activities recognized using machine learning techniques are relatively low in context and have distinct, often periodic, motion patterns, such as standing, walking, brushing teeth, and ascending/descending stairs.</p>
        <table-wrap id="tbl1">
          <label>Table 1</label>
          <caption><p>The 18 items of the FIM (each item is scored out of 7).</p></caption>
          <table>
            <thead><tr><th>Category</th><th>Items</th><th>Subtotal</th></tr></thead>
            <tbody>
              <tr><td>Self-Care</td><td>Eating, Grooming, Bathing, Dressing Upper Body, Dressing Lower Body, Toileting</td><td>/42</td></tr>
              <tr><td>Sphincter Control</td><td>Bladder, Bowel</td><td>/14</td></tr>
              <tr><td>Transfers</td><td>Bed/Chair/Wheelchair, Toilet, Tub/Shower</td><td>/21</td></tr>
              <tr><td>Locomotion</td><td>Walk/Wheelchair, Stairs</td><td>/14</td></tr>
              <tr><td>Motor Subtotal Score</td><td/><td>/91</td></tr>
              <tr><td>Communication</td><td>Comprehension, Expression</td><td>/14</td></tr>
              <tr><td>Social Cognition</td><td>Social Interaction, Problem Solving, Memory</td><td>/21</td></tr>
              <tr><td>Cognitive Subtotal Score</td><td/><td>/35</td></tr>
              <tr><td>Total FIM Score</td><td/><td>/126</td></tr>
            </tbody>
          </table>
        </table-wrap>
      </sec>
      <sec id="sec-2-2">
        <title>Ontology-based ADL Recognition</title>
        <p>
          In real-world settings, intelligent systems are required to be able to recognize considerably complex activities such as eating, dressing, and social interactions. In such situations, ontology-based approaches have been gaining increasing interest, as they can recognize such complicated activities in an explainable way by using a comprehensive reasoning mechanism.
Ontologies have been actively used in object-based and
location-based activity recognition communities. In object-based activity recognition, activity models are constructed
using detected human-object interactions and/or objects in
the space detected in a combination with object sensors.
Yamada et al. proposed to detect semantics of location by
exploiting WordNet to handle unlearned things and their
multiple name representation (Yamada et al. 2007). There are
more features that ontology-based activity recognition
approaches provide: machine-processable rich domain
knowledge, multi-level reasoning, flexible and easily customizable
nature, and integration and interoperability between
contextual information and ADL recognition
          <xref ref-type="bibr" rid="ref1">(Chen and Khalil
2011)</xref>
          . However, ontology-based reasoning has difficulties
in handling uncertainty and incomplete data. There are
several approaches tackling the weaknesses of the
ontologybased reasoning with uncertainty. For instance, Noor et al.
integrated ontological reasoning based on Description Logic
with Dempster-Shafer theory to handle uncertainty derived
from imperfect observations
          <xref ref-type="bibr" rid="ref8">(Noor, Salcic, and Wang 2016)</xref>
          .
        </p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Combination of Machine Learning and Ontology-based Reasoning</title>
      <p>In this paper, we propose a novel approach to measure the functional independence of an aged person (the patient) with a combination of a machine learning-based approach and ontology-based logical reasoning. By adding an ontology-based reasoning function on top of machine learning-based techniques, an ADL recognition system becomes more explanatory and more context-aware while keeping data-driven approaches’ flexibility and robustness to noise and uncertainty in sensor data. Figure 1 shows the overview of the
proposed approach from sensor data collection and
semantic context extraction to ADL recognition and FIM scoring.
Collected sensor data is to be processed in the data-driven
phase to extract semantic contexts, then possible activity
classes and FIM scores are inferred by ontology-based
logical reasoning using the obtained semantic contexts.</p>
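      <p>The two-phase flow just described can be sketched as follows; all names, mappings, and toy activity definitions here are illustrative stand-ins for the actual system, not the authors' API:</p>

```python
# Illustrative sketch of the pipeline: a data-driven phase turns raw sensor
# readings into semantic contexts, and a knowledge-driven phase matches them
# against activity definitions to infer (activity class, FIM score) pairs.
ACTIVITY_DEFS = {  # toy stand-ins for ADL/FIM ontology class definitions
    ("holdsSpoon", "isSittingOn:Chair", "atLocation:DiningRoom"): ("Eating-7", 7),
    ("holdsSpoon", "helperTouching"): ("Eating-2", 2),
}

def extract_semantic_contexts(readings):
    """Data-driven phase (stubbed): map raw sensor readings to context labels."""
    mapping = {"rfid:spoon": "holdsSpoon", "pressure:chair": "isSittingOn:Chair",
               "rfid:dining": "atLocation:DiningRoom"}
    return {mapping[r] for r in readings if r in mapping}

def infer_activity(contexts):
    """Knowledge-driven phase (stubbed): return the definitions whose required
    contexts are all present, as (activity class, FIM score) pairs."""
    return [v for req, v in ACTIVITY_DEFS.items() if set(req) <= contexts]

ctx = extract_semantic_contexts(["rfid:spoon", "pressure:chair", "rfid:dining"])
print(infer_activity(ctx))  # [('Eating-7', 7)]
```

      <p>In the real system the matching is done by an OWL DL reasoner over the ADL/FIM ontology rather than by set inclusion over hand-written tuples.</p>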
      <sec id="sec-4-1">
        <title>Semantic Context Extraction</title>
        <p>
          The lower part of the middle section of Figure 1 shows the
process of semantic context extraction from low-level sensor
data using data-driven approaches. In this study, semantic
contexts include action, posture, activity, interacting object,
surrounding object, location, and time. In order to extract the
semantic contexts, various types of sensors can be utilized.
The left section of Figure 1 shows the list of sensors which
can be used. Wearable sensors are often used to detect a
person’s actions, postures, and simple activities
          <xref ref-type="bibr" rid="ref5 ref7">(Morales and
Roggen 2016; Jin et al. 2018)</xref>
          . Object sensors such as
RFIDtag and contact sensor can capture where the objects are, the
objects’ status of usage, and spatial relation between the
objects
          <xref ref-type="bibr" rid="ref1">(Bouchard, Bouchard, and Bouzouane 2011)</xref>
          . Wearable
sensors can also be considered as one kind of object sensors
which are attached to humans. Information such as what objects are in the target place, what the interacting objects are, and whether a staff member or a helper is near the patient can be obtained by using object sensors. Environment sensors such as temperature and humidity sensors are used to monitor the conditions of the room.
        </p>
        <p>[Figure 1: overview of the proposed approach. Left: multimodal sensors (wearable: smartphone/smartwatch, RFID-attached cloth; object: RFID, contact sensor, mattress sensor; environment: temperature, humidity, atmospheric pressure, light, accelerometer, noise, PIR sensors). Middle: extracted semantic contexts (action, posture, etc.). Top: knowledge-driven processing with the ADL/FIM ontology, a DL reasoner, and confidence value calculation.]</p>
        <p>
          Interacting-object contexts play an especially important role in the proposed approach. We take advantage of this semantic information to express the level of assistance that a helper provides for a patient. Egenhofer’s topological spatial framework
          <xref ref-type="bibr" rid="ref3">(Egenhofer and Franzosa 1991)</xref>
          is referenced to express spatial relations between objects. Figure 2 shows the visual representation of the spatial relations between a patient and a helper and their rough correspondence with different assistance levels. It can be assumed that the closer the helper is to the patient, the higher the level of assistance.
        </p>
        <p>Note that applying machine learning is not always necessary when extracting semantic contexts. For example, context information about the objects in a room can be extracted by applying a simple filter program to the raw RFID tag readings. It is also worth noting that data formatting is a necessary step before applying machine learning techniques, since the large amount of data collected from various kinds of sensors arrives at different times and in different formats. Noise and variations in sensor data resulting from individual differences are handled in this phase. The recognized semantic contexts are then sent to the logical reasoning phase to be processed.</p>
      </sec>
      <sec id="sec-4-2">
        <title>ADL Recognition and FIM Calculation</title>
        <p>The top part of the middle section of Figure 1 shows the process from ontology-based logical reasoning and confidence value calculation to ADL recognition and FIM scoring. An OWL DL reasoner such as Pellet (Sirin et al. 2007) infers pairs of a possible activity class and a FIM score based on the pre-defined ADL/FIM ontology and the semantic contexts obtained from the previous phase. The most likely pair of activity class and score is then chosen after the confidence value calculation. This process is required because dummy contexts are utilized to make the reasoner output possible classes even if the previously obtained semantic contexts are incomplete. In this phase, the semantic contexts coming from the data-driven layer may need to be modified if the type or the semantic level of an obtained context is not the required one. When the type is not the required one, it is converted whenever possible. For example, when a posture context is needed but only the action context isSittingOn(Context, Chair) arrives from the data-driven layer, it is converted to hasPosture(Context, Sitting) using SWRL or an external program through the OWL API. Similarly, when the semantic level of a context differs from what is required, it is adjusted as specified in SWRL rules or external programs; for example, the spatial relation between a helper and a patient obtained from the previous phase is converted to an assistance level.</p>
        <p>ADL/FIM Ontology: The Web Ontology Language (OWL) and the Semantic Web Rule Language (SWRL) are used to construct the ADL/FIM ontologies. An OWL ontology consists of individuals, properties, and classes. Individuals represent objects in the domain of interest; in this case, an individual is, e.g., a person, an artifact, or an activity. Properties are binary relations between two individuals. For example, the property hasCurrentActivity may link the individual Person1 and the individual Person1CurrentActivity.</p>
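      <p>The type conversion step described above (an action context rewritten into the posture context the ontology requires) can be sketched as follows; the rule table and function are our illustrative stand-ins for what a SWRL rule such as isSittingOn(?c, ?o) → hasPosture(?c, Sitting) would express:</p>

```python
# A sketch of context type conversion: when the ontology needs a posture
# context but only an action context arrived, rewrite it by rule.
# Property and value names mirror the examples in the text.
CONVERSION_RULES = {
    "isSittingOn": ("hasPosture", "Sitting"),
    "isLyingOn":   ("hasPosture", "Lying"),   # illustrative extra rule
}

def convert_context(prop: str, value: str) -> tuple:
    """Convert an action-level (property, value) pair to the required
    posture-level pair when a conversion rule exists; otherwise pass through."""
    if prop in CONVERSION_RULES:
        return CONVERSION_RULES[prop]
    return (prop, value)

print(convert_context("isSittingOn", "Chair"))   # ('hasPosture', 'Sitting')
print(convert_context("atLocation", "Kitchen"))  # ('atLocation', 'Kitchen')
```

      <p>In the actual system this rewriting runs inside the reasoner (SWRL) or in an external program through the OWL API.</p>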
        <p>In the ADL/FIM ontology, every person has one’s own
activity individual, and extracted semantic contexts are linked
to the activity individual with hasContext properties. An
example is shown in Figure 3. Also, a sequence of contexts
can be expressed using hasPrecedentContext or
hasSubsequentContext property as shown in Figure 4. This example
illustrates transitions from a living room to a restroom and
then to a private room. Since hasPrecedentContext is a
transitive property, the contexts at both ends are implicitly linked
with hasPrecedentContext.</p>
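      <p>The transitivity of hasPrecedentContext can be illustrated with a small transitive-closure sketch over the Figure 4 example (context names are our placeholders); a reasoner derives the implicit link between the two ends exactly this way:</p>

```python
# Sketch: a transitive property links the contexts at both ends of a chain.
# We compute the transitive closure of the explicit precedence links.
def transitive_closure(links):
    """Return all (a, b) pairs reachable through the given links."""
    closure = set(links)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# Living room -> restroom -> private room, as in the Figure 4 example.
explicit = {("LivingRoomCtx", "RestroomCtx"), ("RestroomCtx", "PrivateRoomCtx")}
print(("LivingRoomCtx", "PrivateRoomCtx") in transitive_closure(explicit))  # True
```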
        <p>Dealing with Incomplete Semantic Contexts: In sensor-based ADL recognition, as mentioned in (Tiberghien et al. 2012), missing or failing sensor readings caused by drained batteries, packet loss, and Wi-Fi disconnection are common but severe issues. However, an OWL DL reasoner does not provide any results when the required semantic contexts are incomplete. Such an approach obviously lacks efficiency and effectiveness, especially in real-life settings, where the environment is less controlled and where even partial results from incomplete data are in greater need. Hence, we introduce dummy contexts to make the reasoner output candidate classes. Using this design, we expect that the most likely result can be obtained by calculating a confidence value for each class. (Footnote 1: https://www.cms.gov/Medicare/Medicare-Fee-for-ServicePayment/InpatientRehabFacPPS/Downloads/IRFPAI-manual2012.pdf)</p>
        <p>A dummy context can be expressed by adding an isDummy property to a context. By linking dummy contexts to a person’s activity as its initial state, the reasoner can output candidate classes even if some semantic contexts never arrive. In the simplest case, a confidence value can be obtained as the proportion of non-dummy contexts among the contexts required by the activity class. Figure 6 shows a case in which an individual in the Activity class is inferred to be Eating-6 thanks to dummy contexts. This inference assumes that a smartwatch’s battery has run out and its data cannot be collected, while the other contexts have arrived as expected. In this case, the confidence value is 5/7 (≈ 0.71), since 5 of its 7 contexts are valid.</p>
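        <p>The confidence calculation just described can be sketched as follows; the context names are hypothetical, but the 5-of-7 arithmetic mirrors the Figure 6 scenario of a dead smartwatch battery:</p>

```python
# Sketch of the confidence value: the proportion of non-dummy contexts
# among the contexts required by a candidate activity class.
def confidence(required, arrived):
    """Fill the missing required contexts with dummies, then return the
    share of required contexts that actually arrived (the non-dummy ones)."""
    real = [c for c in required if c in arrived]       # arrived as expected
    dummy = [c for c in required if c not in arrived]  # filled in as dummies
    assert len(real) + len(dummy) == len(required)
    return len(real) / len(required)

# 7 required contexts; the two smartwatch-derived ones never arrive.
required = ["holdsSpoon", "armMotion", "sittingPosture", "atDiningTable",
            "mealTime", "helperAbsent", "bowlPresent"]
arrived = set(required) - {"holdsSpoon", "armMotion"}  # battery has run out
print(round(confidence(required, arrived), 2))  # 0.71
```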
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Evaluation Experiments</title>
      <sec id="sec-5-1">
        <title>Implementation and Data Preparation</title>
        <p>We implemented an ADL recognition and FIM scoring system to which the proposed methods were applied, and conducted experiments. An ADL/FIM ontology was created using Protégé 5.2, a widely used ontology editor. Figure 7 shows an excerpt of the ontology, with a part of its Classes, Object properties, and Individuals from left to right. The system was implemented in Java with the OWL API and the JFact reasoner.</p>
        <p>We also created a dataset in our laboratory (Numao Lab) environment. As shown in Figure 8, a subject wears smartwatches and an RFID-attached shirt, pants, and slippers. Furthermore, RFID tags are attached to objects and the environment, such as the floor, tables, eating utensils, and grooming utensils. We asked three subjects to act in accordance with prepared scenarios. The FIM items Eating, Grooming, and Transfers-BedChair were chosen for the experimental scenarios. We recorded the subjects’ performances with a 360-degree camera and later labeled the videos’ segments manually using the labels listed in Table 2. The scenarios used are as follows:</p>
        <p>Eating-7: The subject eats and drinks all by himself in a
safe and timely manner.</p>
        <p>Eating-4: The subject sometimes needs assistance when
scooping small pieces of the food.</p>
        <p>Eating-2: The helper gives hand-over-hand assistance to
scoop the food and bring the spoon to the subject’s mouth
so the subject can chew and swallow the food.</p>
        <p>Grooming-7: The subject does all grooming tasks by
himself in a safe and timely manner.</p>
        <p>Grooming-4: The subject is independent with three of the
four tasks (washing hands, combing, washing face) after
setup assistance by a helper.</p>
        <p>Grooming-2: The subject washes his hands by himself
but needs help with the rest of the grooming activities.</p>
        <p>Transfers-BedChair-7: The subject safely gets up to a
standing position from a regular chair, then safely transfers
from the chair to the bed independently. Also, the subject
safely gets up from the bed and sits on a regular chair
independently.</p>
        <p>Transfers-BedChair-4: The subject transfers into and out
of the bed to an armchair. The subject needs light support
in order to keep himself steady.</p>
        <p>Transfers-BedChair-2: The subject requires lifting and
lowering assistance to stand up and sit down.</p>
        <p>Although we actually collected sensor data, in these experiments we used only the labeled data created by annotating the recorded videos as input for the ontology-based logical reasoning, assuming for simplicity that the semantic contexts had been obtained by machine learning-based approaches.</p>
        <p>In this experiment, assistance levels were calculated based on the proportion of assistance time within the entire mealtime. When no helper was around the subject, the state was considered independent. When a helper was close to the subject, it was considered setup or supervision assistance. If a helper was touching the subject, which we assumed when the two were within an arm’s length of each other, it was considered minimal contact assistance. If the sensors observed the helper even closer than that, with the two appearing to partially overlap, it was treated as more assistance than touching. Confidence values were calculated based on the proportion of non-dummy contexts among the contexts required for the activity class.</p>
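        <p>The assistance-level heuristic above can be sketched as follows; the relation names and the mapping are our illustrative reading of the Egenhofer-style spatial relations, not the authors' exact thresholds:</p>

```python
# Sketch: per-moment spatial relations between helper and patient map to
# assistance levels, and the overall level follows the proportion of
# assisted time within the whole activity (e.g., the mealtime).
SPATIAL_TO_ASSIST = {
    "disjoint_far":  "independent",
    "disjoint_near": "setup_or_supervision",  # helper close by
    "meet":          "minimal_contact",       # helper touching (within arm's length)
    "overlap":       "more_than_touching",    # the two partially overlap
}

def assisted_ratio(relations):
    """Fraction of observed moments in which any assistance was given."""
    assisted = [r for r in relations if SPATIAL_TO_ASSIST[r] != "independent"]
    return len(assisted) / len(relations)

# Helper present for 2 of 10 sampled moments during the meal.
timeline = ["disjoint_far"] * 8 + ["meet", "overlap"]
print(assisted_ratio(timeline))  # 0.2
```

        <p>Such a time-proportion heuristic is also what makes the Eating-2 scenario fragile, as discussed in the results below: long independent stretches dilute the assisted fraction.</p>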
        <p>At the end of the experiments, we compared the inferred
results in both cases of with and without dummy contexts.
Dummy contexts were applied for all possible semantic
contexts except the location context. We also tested and
compared the cases where the semantic contexts which should
be obtained from a smartwatch were missing and checked
how the dummy contexts deal with it.</p>
      </sec>
      <sec id="sec-5-2">
        <title>Results and Analysis</title>
        <p>Table 3 shows the inferred results when dummy contexts are not used. The system correctly classifies six of the nine scenarios (Eating-7, Eating-4, Grooming-7, Transfers-BC-7, Transfers-BC-4, and Transfers-BC-2), but it misclassifies the Eating-2 scenario and outputs no results for the Grooming-4 and Grooming-2 scenarios. The Eating-2 scenario is misclassified because it fails at the assistance-level calculation phase. In this experiment, the assistance-level calculation is simply based on the proportion of assistance time within the entire mealtime. Due to the significantly long conversation between the helper and the subject, the mealtime was long and the assistance sparse, even though the subject in Eating-2 required hand-over-hand assistance in scooping food and bringing it to the mouth. Regarding the two grooming scenarios, the subjects did not perform at least one grooming activity, which results in missing contexts and no output from the reasoner.</p>
        <p>When dummy contexts are used, the Grooming-4 and Grooming-2 scenarios, which were not classified to any class in Table 3, generate inferred results, though the Grooming-4 scenario was misclassified. From these results, it can be said that using ontology-based logical reasoning in combination with machine learning-based approaches is promising for FIM measurement, and that using dummy contexts is effective for dealing with incomplete contexts. However, improvements are needed in the details of the FIM ontology, to handle more specific situations, and in the calculation method for the assistance level.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Conclusion and Future Work</title>
      <p>In this study, we propose a novel approach to measuring the functional independence of an aged person with a combination of machine learning and ontology-based logical reasoning. The combination makes a system more explanatory and more context-aware while keeping flexibility and robustness to noise and uncertainty in sensor data. The proposed approach uses semantic contexts as the interlayer connecting machine learning techniques and ontology-based logical reasoning. Furthermore, we explore the utilization of dummy contexts to handle the difficulties of reasoning with incomplete data. The evaluation experiments indicate the effectiveness of the proposed approach, which can infer activity classes and FIM scores even from incomplete semantic contexts. Considering that the experiments were limited to research laboratory settings, it is essential to test the same proposal in real-life environments and reconfirm its effectiveness. This study has been approved by the Human Research Ethics Committees of the University of Electro-Communications (registration number: 18042), and we are conducting a demonstration experiment this spring at a nursing home with the cooperation of St. Marianna University School of Medicine.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgement</title>
      <p>This work was supported by JSPS KAKENHI Grant
Number JP17H01823 “Development of Watching System by
Integration of Non-Restrictive Sensors”.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Khalil</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          <year>2011</year>
          .
          <article-title>Activity Recognition: Approaches, Practices and Trends</article-title>
          . Paris: Atlantis Press.
          <fpage>1</fpage>
          -
          <lpage>31</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Hoey</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Nugent</surname>
            ,
            <given-names>C. D.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Cook</surname>
            ,
            <given-names>D. J.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Yu</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          <year>2012</year>
          .
          <article-title>Sensor-based activity recognition</article-title>
          .
          <source>IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews)</source>
          <volume>42</volume>
          (
          <issue>6</issue>
          ):
          <fpage>790</fpage>
          -
          <lpage>808</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <surname>Egenhofer</surname>
            ,
            <given-names>M. J.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Franzosa</surname>
            ,
            <given-names>R. D.</given-names>
          </string-name>
          <year>1991</year>
          .
          <article-title>Point-set topological spatial relations</article-title>
          .
          <source>International Journal of Geographical Information Systems</source>
          <volume>5</volume>
          (
          <issue>2</issue>
          ):
          <fpage>161</fpage>
          -
          <lpage>174</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <surname>Gilpin</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Bau</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Yuan</surname>
            ,
            <given-names>B. Z.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Bajwa</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Specter</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Kagal</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <year>2018</year>
          .
          <article-title>Explaining explanations: An approach to evaluating interpretability of machine learning</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name>
            <surname>Jin</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Kumar</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Hong</surname>
            ,
            <given-names>J. I.</given-names>
          </string-name>
          <year>2018</year>
          .
          <article-title>Towards wearable everyday body-frame tracking using passive rfids</article-title>
          .
          <source>Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.</source>
          <volume>1</volume>
          (
          <issue>4</issue>
          ):
          <fpage>145:1</fpage>
          -
          <lpage>145:23</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <string-name>
            <surname>Monekosso</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Florez-Revuelta</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Remagnino</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <year>2015</year>
          .
          <article-title>Ambient assisted living [guest editors' introduction]</article-title>
          .
          <source>IEEE Intelligent Systems</source>
          <volume>30</volume>
          (
          <issue>4</issue>
          ):
          <fpage>2</fpage>
          -
          <lpage>6</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <surname>Morales</surname>
            ,
            <given-names>F. J. O.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Roggen</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>2016</year>
          .
          <article-title>Deep convolutional and lstm recurrent neural networks for multimodal wearable activity recognition</article-title>
          .
          <source>In Sensors.</source>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name>
            <surname>Noor</surname>
            ,
            <given-names>M. H. M.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Salcic</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>K. I.-K.</given-names>
          </string-name>
          <year>2016</year>
          .
          <article-title>Enhancing ontological reasoning with uncertainty handling for activity recognition</article-title>
          .
          <source>Knowledge-Based Systems</source>
          <volume>114</volume>
          :
          <fpage>47</fpage>
          -
          <lpage>60</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name>
            <surname>Yamada</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Sakamoto</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Kunito</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Isoda</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Yamazaki</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Tanaka</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <year>2007</year>
          .
          <article-title>Applying ontology and probabilistic model to human activity recognition from surrounding things</article-title>
          .
          <source>IPSJ Digital Courier</source>
          <volume>3</volume>
          :
          <fpage>506</fpage>
          -
          <lpage>517</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>