<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A Roadmap for Semantically-Enabled Human Device Interactions</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Madhawa Perera</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Armin Haller</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Matt Adcock</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Australian National University</institution>
          ,
          <addr-line>Canberra ACT 2601, AU https://cecs.anu.edu.au/</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>CSIRO</institution>
          ,
          <addr-line>Canberra ACT 2601, AU</addr-line>
        </aff>
      </contrib-group>
      <abstract>
        <p>With the evolving Internet of Things (IoT), the number of smart devices we interact with in our day-to-day life has significantly increased. The nature of human interaction with these devices must be carefully considered, because too much complexity risks making the IoT unattractive to users or causing a system to lose its efficiency, regardless of its potential. Therefore, it is important to address problems in Human Device Interactions to provide a high-quality User Experience (UX) in the emerging IoT. This paper proposes a roadmap to address the complexities associated with human smart device (sensor or actuator) interaction through a methodology that incorporates context awareness in Augmented Reality (AR) using semantic Web technologies. Further, we analyse the use of Natural User Interfaces (NUI), such as hand gestures and gaze, to provide noninvasive and intuitive interaction in optimising the user experience.</p>
      </abstract>
      <kwd-group>
        <kwd>Semantic Web</kwd>
        <kwd>Augmented Reality</kwd>
        <kwd>Internet of Things</kwd>
        <kwd>Smart Devices</kwd>
        <kwd>Natural User Interfaces</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        With the proliferation of the IoT [
        <xref ref-type="bibr" rid="ref63">63</xref>
        ], the number of sensors deployed around the
world has started to grow at a rapid pace. With the increased demand for
smart devices (hereafter the term smart device includes sensors as well as
actuators) [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], per-unit prices are decreasing drastically, which in turn
creates a high demand for building smart environments such as smart homes,
automobiles, digital farms, modern smart hotel rooms, future cities, etc. [
        <xref ref-type="bibr" rid="ref45">45</xref>
        ].
Statistics show that the total installed base of IoT-connected devices is projected
to reach 75.44bn worldwide by 2025, a fivefold increase in ten years [
        <xref ref-type="bibr" rid="ref63">63</xref>
        ], and
the number of network-connected smart devices per person around the world is
predicted to grow from 0.08 to 6.58 [
        <xref ref-type="bibr" rid="ref63">63</xref>
        ].
      </p>
      <p>
        Further, with this rapid evolution of the IoT [
        <xref ref-type="bibr" rid="ref31 ref50">50,31</xref>
        ], a human will often be
confronted with smart devices with which they are not familiar. The number
of smart devices that a human interacts with in day-to-day life has already
significantly increased [
        <xref ref-type="bibr" rid="ref45">45</xref>
        ]. We face situations where we have to interact
with smart devices whose context and connections we are not aware of, or whose
operation we have forgotten, owing to the rapid growth in the number of
devices that a modern human interacts with. Thus, a person must refer to
alternate information sources to adduce context and details, such as user manuals,
asking questions, or learning from an expert. This problem challenges the use of current
systems that are built to establish HSI (in this paper Human Sensor Interaction
(HSI) will be used synonymously with the phrase `Human Device Interaction').
A key element of HSI is to provide access to the diverse information of smart
devices that is relevant to a user in a given context, in the most convenient and
usable way. Today, there are smart sensor hubs such as Amazon Echo [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] and
Google Home [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ] which provide an HSI interface where users can use their voice
to operate in a `pre-configured' environment and where users know what sensors
and actuators to interact with and how. However, to interact with a specific
sensor, a user must memorize the exact label that they have given
to the sensor in order to establish an interaction through voice commands. In a
real-world situation, these smart devices are built for different usages by different
persons. This inability to provide user-specific information is where such systems have
failed to deliver a high-quality user experience. It remains a challenge and an
ongoing research area in the IoT.
      </p>
      <p>
        The above challenges can be further exemplified through a phenomenon in
the domain of tourism and leisure. Currently, the world is rapidly moving towards
automating customer experiences, especially in the hotel industry. Research and
surveys are conducted to alleviate the discomfort of being away from home and
to offer a better user experience in such situations [
        <xref ref-type="bibr" rid="ref42 ref49">42,49</xref>
        ]. As such, in the future,
hotel rooms will contain more sensors to provide a better UX. Even at the present
day, in a modern hotel, the guest must try out several switches, remotes
and potentially audio commands, read the hotel room manual, or call the hotel
reception to figure out how to operate smart doors and window blinds, adjust the
thermostat, and operate the audio system and the TV. This is a very tedious
task, and sometimes a user may give up on some of the available features and
will not gain the full experience that the environment offers. The key problem that
ensues is: how exactly are users going to interact with those smart devices?
      </p>
      <p>Therefore, to deliver a noninvasive interaction between humans and smart
devices, and to provide a better UX, we aim to utilize a blend of
semantic Web technologies and Augmented Reality technology along with Natural
User Interfaces (NUIs), such as gaze, voice or hand gestures, in HSI. In our
research roadmap, we are investigating how one could use eye gaze to detect, and
hand gestures or voice to interact with, these feature-unfamiliar smart devices.
That is, we try to eliminate the burden of memorizing devices and their
functionality in order to interact with them. By looking at a smart
device, the user should be able to read and comprehend the capabilities of
the device; these interactions should then be possible via hand gestures
or voice.</p>
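      <p>As a minimal sketch of this envisioned gaze-then-gesture flow, assuming entirely hypothetical device identifiers, capability names and gesture labels (none of these come from any HoloLens or IoT API), the interaction loop could be outlined as:</p>

```python
# Toy sketch of the proposed gaze-then-gesture interaction loop.
# All devices, capabilities and gesture names are invented examples.

DEVICES = {
    "blinds-1": {"label": "Window blinds", "capabilities": {"open", "close"}},
    "tv-1": {"label": "Television", "capabilities": {"power_on", "power_off"}},
}

def on_gaze(device_id):
    """Identify the gazed-at device and return the AR overlay text."""
    device = DEVICES[device_id]
    caps = ", ".join(sorted(device["capabilities"]))
    return f"{device['label']}: {caps}"

def on_gesture(device_id, gesture_to_command, gesture):
    """Interpret a recognised gesture and issue the mapped command."""
    command = gesture_to_command.get(gesture)
    if command in DEVICES[device_id]["capabilities"]:
        return f"send '{command}' to {device_id}"
    return "gesture not applicable to this device"
```

      <p>In this sketch, gazing at a device retrieves and displays its capabilities, so the user never has to memorize them; a subsequent gesture is mapped to a command only if the device actually supports it.</p>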
      <p>Since this research is in its investigation phase, we structured the paper as
follows. Section 2 provides a discussion of the use of AR technology, and we
review and analyse Microsoft's HoloLens capabilities with regard to the identified
challenges. Section 3 provides our definition of context and explains why context is an
important factor to consider. Section 4 discusses how we intend to incorporate
semantic Web technologies to model device capabilities and their context.
Section 5 contains an analysis of how semantic Web technologies could blend with
AR to provide better context-awareness. We propose a novel AR and semantic
Web technology blend to address this problem and conclude the paper with a
discussion section.</p>
    </sec>
    <sec id="sec-2">
      <title>AR as a tool to enable Human-Device Interactions</title>
      <sec id="sec-2-1">
        <title>Use of Augmented Reality</title>
        <p>
          Augmented Reality (AR) is a field of computer science which uses computer
vision-based techniques to superimpose interactive graphical content,
such as 2D and 3D multimedia content, on top of the view of real objects [
          <xref ref-type="bibr" rid="ref59">59</xref>
          ].
Therefore, AR could be used as a powerful visualization medium [
          <xref ref-type="bibr" rid="ref59">59</xref>
          ] to
conveniently facilitate HSI. Its potential to blend real and virtual objects has opened
up new opportunities for building interactive and engaging applications in
multiple application domains [
          <xref ref-type="bibr" rid="ref60">60</xref>
          ]. AR has already been used in various domains
to provide a better UX, for example, in education [
          <xref ref-type="bibr" rid="ref32 ref68 ref8">8,32,68</xref>
          ], marketing [
          <xref ref-type="bibr" rid="ref16 ref70">70,16</xref>
          ],
the military [
          <xref ref-type="bibr" rid="ref43 ref44">43,44</xref>
          ], medicine [
          <xref ref-type="bibr" rid="ref12 ref34 ref52">12,34,52</xref>
          ], tourism [
          <xref ref-type="bibr" rid="ref28 ref66">28,66</xref>
          ], entertainment [
          <xref ref-type="bibr" rid="ref26 ref3 ref39">3,26,39</xref>
          ],
etc.
        </p>
        <p>Looking at its widespread deployment in different use cases across multiple
domains, we aim to investigate how successful it will be in addressing
the aforementioned HSI challenges. In our research, we aim to utilize AR
technology to extend a user's view in order to present additional
information regarding a smart device in the form of virtual content, and then to provide a
mode of NUI to establish interaction between the smart device and the user.</p>
        <p>
          Presently, AR technology has shown remarkable progress in building
consumer-level hardware, and its use has spread rapidly in recent years [
          <xref ref-type="bibr" rid="ref22 ref38 ref5">22,5,38</xref>
          ].
It is estimated that by 2020 there will be 1 billion AR users and that AR revenues
will surpass Virtual Reality (VR) revenues [
          <xref ref-type="bibr" rid="ref47">47</xref>
          ]. Starting from Feiner et al.'s
Head Mounted Display (HMD) that was connected to a backpack containing a
laptop and sensors such as GPS and gyroscopes [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ], the technology has
recently shrunk to the size of handheld displays (HHD) such as mobile phones
(e.g. Google ARCore, https://developers.google.com/ar/discover/, and Apple ARKit,
https://developer.apple.com/documentation/arkit), which have widened access to AR
experiences [
          <xref ref-type="bibr" rid="ref56">56</xref>
          ]. Research conducted by
Billinghurst et al. utilizes both HMDs and HHDs to provide a better UX [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ].
Advancements in the processing and graphical performance of computing hardware and the
quickly growing bandwidth of mobile networks have been among the key reasons
behind this [
          <xref ref-type="bibr" rid="ref60">60</xref>
          ]. With these advances in the field of AR, the use of an AR HMD
such as the Microsoft HoloLens (HL) is proposed in this research to track gaze
actions and visualize content noninvasively to users. Further, this HMD is capable
of identifying gaze points and recognizing hand gestures and voice. Thus, we
will be utilizing these features in our proposed roadmap. The most critical
functionalities of an AR HMD are 1) identifying the smart device a user is gazing at;
2) displaying relevant information to the user; 3) identifying the user's hand
gestures; 4) interpreting the hand gestures; and 5) communicating the interpreted
information to the smart device. However, a key element of this research is how
to present contextual information to users. Thus, in section 2.2, we investigate
the HoloLens' capability to identify smart devices located in physical environments
with contextual information.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>Analysis of AR HMD capabilities</title>
        <p>Blending AR with natural user interaction methods like hand gestures, gaze and
voice is a potential approach to making HSI noninvasive and intuitive. Therefore, in
this section, we investigate the capabilities required of an AR HMD to
identify physical devices with their contextual knowledge.</p>
        <p>If a user could get instructions on how to operate a smart device by looking
at it (via visual instructions), we assume that this saves time and maximizes the
UX. If the user can then also interact with the same device via hand gestures or
voice, in use cases such as smart hotel rooms, this would further improve the UX.</p>
        <p>Yet, in each of these cases, the intermediate layer (which is the AR HMD)
must be able to detect and identify smart devices correctly. Then the HMD
should be able to provide relevant information according to the user's context.
For example, a frequent traveller to a specific hotel does not need to be shown
all the information anew every time; thus the HMD needs to be able to identify
the user. A TV in a suite and one in a regular double room might look similar but
might have different capabilities; thus the HMD needs to be able to identify
its location. Likewise, the interaction will vary depending on the contextual
information. Hence, looking at the problem holistically, we identified three main
questions to be answered.
1. How could an AR HMD device detect and identify physical sensors and
actuators within its context?
2. How could user preferences be extracted and contextually relevant
information be generated for a user?
3. How could this contextual information be added, maintained and altered
without the help of experts in the AR domain?</p>
        <p>
          As a contemporary example, we analysed Microsoft's HL to see whether we
can use an AR HMD alone, without integrating it with any other physical element,
to address these challenges. Looking at question 1, it seems it can be addressed
with computer vision-based object detection techniques [
          <xref ref-type="bibr" rid="ref64">64</xref>
          ] along with an AR
HMD. However, contextual information could be absent from these systems
without prior authoring (pre-configuration).
        </p>
        <p>
          Automatic detection and segmentation of unknown objects in unknown
environments is still work-in-progress for computer vision researchers [
          <xref ref-type="bibr" rid="ref36 ref69">36,69</xref>
          ]. Many
existing object detection and segmentation methods assume prior knowledge
about the object or human interference [
          <xref ref-type="bibr" rid="ref37">37</xref>
          ]. Even though techniques like
segmentation, zero-shot learning and self-supervised learning can lead to developing
the ability to predict an unknown object, predicting its context and associated
details is still challenging.
        </p>
        <p>Fig. 1. (a) Room 1 spatial data representation; (b) Room 2 spatial data representation.</p>
        <p>
          In an AR space, object recognition can be achieved either by 1) using a
physical marker (or with the help of another physical element like a Bluetooth
beacon), or by 2) direct object recognition (markerless AR). Markerless AR
techniques use a combination of dedicated sensors, depth cameras, object
recognition algorithms and environmental mapping algorithms to detect and map the
real-world environment with objects [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ]. However, Hammady et al. point to the
possibility of hybrid techniques as well. Further, the AR device has to know
its position in the world along with being aware of its physical space [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ]. For
this purpose, AR devices use a technique called spatial mapping (also called 3D
reconstruction), which maps the physical environment in order to blend the real
and virtual worlds. This mapping helps the device to differentiate its physical
locations and display virtual objects accordingly. It calculates this through the
spatial relationship between itself and multiple key points. This process is called
"Simultaneous Localization and Mapping (SLAM)" [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. Figure 1 shows a spatial
mapping mesh captured using a Microsoft HL 1. It is also important to note that
spatial mapping provides a detailed representation of real-world surfaces, which
creates a 3D map of the environment [
          <xref ref-type="bibr" rid="ref46">46</xref>
          ], and as the user manoeuvres through
the space or objects move around, the mesh is updated to reflect the boundaries
of the environment. Therefore, the device can understand and interact with the
real world accordingly.
        </p>
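        <p>The mesh-updating behaviour described above can be caricatured in a few lines. This is a deliberately simplified sketch (an occupancy-style grid, not an actual SLAM or HoloLens algorithm), and the cell size is an arbitrary illustrative choice:</p>

```python
# Toy illustration of spatial mapping: a "mesh" kept as a set of occupied
# grid cells that grows as new depth observations arrive. Real SLAM
# pipelines are far more involved; this only mirrors the idea that the
# map is refined while the user moves through the space.

def update_mesh(mesh, observed_points, cell_size=0.1):
    """Quantize observed 3D points into grid cells and merge them into
    the running map of the environment."""
    for x, y, z in observed_points:
        cell = (round(x / cell_size), round(y / cell_size), round(z / cell_size))
        mesh.add(cell)
    return mesh

mesh = set()
update_mesh(mesh, [(0.0, 0.0, 1.0), (0.02, 0.0, 1.0)])  # two nearby readings merge
update_mesh(mesh, [(1.0, 0.0, 1.0)])                    # user moves; new surface seen
```

        <p>Two nearby readings fall into the same cell, while a surface observed after the user moves adds a new cell, so the map grows to cover the explored environment.</p>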
        <p>
          Spatial mapping is identified as one of the capabilities that make the HL
stand out from other AR HMDs. Yet, developers do not have direct access to
the raw data of the mesh created by the HL [
          <xref ref-type="bibr" rid="ref41">41</xref>
          ]. It is, however, possible to view the
generated mesh, develop further on top of it, and save it against a particular
location as a visual representation [
          <xref ref-type="bibr" rid="ref41">41</xref>
          ]. However, the HL only operates with a 3D
mesh created by another HL and does not support 3D meshes created by
other devices or SLAM algorithms in general.
        </p>
        <p>The following section analyzes the capabilities of using Microsoft HL for
object detection with location awareness.</p>
        <p>Fig. 2. Scenario sets A (objects are known) and B (environment is mapped).</p>
        <p>Our analysis was conducted under four major conditions. HL capabilities
were analysed for detecting and interacting with a smart device when the HL is
in a 1) known environment with known smart devices; 2) known environment with
unknown devices; 3) unknown environment with known devices; and 4) unknown
environment with unknown devices.</p>
        <p>If the given environment is previously mapped (using an HL) and the smart
devices are known (refer to scenario B ∩ A in Figure 2), then the HL itself can
be used to interact with those objects (again) with contextual information such
as location. With the HL1 a gaze pointer can be used, whereas with the HL2 eye tracking
will be available (see the HoloLens 2 overview, features and specs:
https://www.microsoft.com/en-us/hololens/hardware) to make the interaction more natural (noninvasive).</p>
        <p>
          If the given environment is not mapped (using an HL) and the smart
devices are known (scenario B' ∩ A), then the HL can use object detection
algorithms (such as OpenCV [
          <xref ref-type="bibr" rid="ref54">54</xref>
          ], YOLO [
          <xref ref-type="bibr" rid="ref57">57</xref>
          ], etc.), but without their context
details. In this type of situation, objects could be labelled with physical markers
to bind context-specific information. Additionally, beacons are another option
which could be considered a solution for this type of situation to provide
context-awareness. In the AR domain, beacons are commonly used to aid AR
applications [
          <xref ref-type="bibr" rid="ref24 ref53 ref55">24,53,55</xref>
          ].
        </p>
        <p>Even if the environment is mapped (using an HL), the smart devices could
be unknown (that is, object detection algorithms have not previously been trained
with a specific object's data); then it is not possible to interact with the smart devices
by only using the HL (scenario A' ∩ B). In this situation, again, if the smart devices
can be physically labelled (using markers or trackers), the contextual information
could be attached to these markers.</p>
        <p>Finally, if the given environment is not mapped (using an HL) and the smart
devices are unknown (scenario A' ∩ B'), then it is not feasible to establish an
interaction with a smart device with contextual information by only using the HL.
In such situations the HL must pair with another physical element. Again, beacons
and markers could be used to address the problem.</p>
        <p>Table 1 summarizes the capabilities of the Microsoft HL to interact with a smart
device with contextual information, without the help of another physical element.
Except for B ∩ A, in which prior authoring is conducted, all three other
scenarios need the assistance of a beacon or physical marker, together with either
a semantic description or visual recognition of the object and/or its position,
i.e. semantic descriptions of its physical characteristics and/or its positional
characteristics.</p>
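        <p>The four scenarios can also be condensed into a small decision helper. This is only an illustrative restatement of the analysis above (it is not part of any HoloLens API, and the returned strings are invented labels):</p>

```python
def hl_interaction_strategy(env_mapped: bool, objects_known: bool) -> str:
    """Restate the scenario analysis: what is needed to interact with a
    smart device with contextual information, given whether the environment
    is mapped using a HL (set B) and whether the objects are known (set A)."""
    if env_mapped and objects_known:   # B ∩ A: prior authoring suffices
        return "HL alone"
    if objects_known:                  # B' ∩ A: detection works, context does not
        return "object detection plus marker or beacon for context"
    # A' ∩ B and A' ∩ B': unknown objects always need a physical label
    return "marker or beacon paired with the HL"
```

        <p>Only the fully authored case gets by with the headset alone; every other combination falls back on an auxiliary physical element, which is precisely the gap the semantic modeling in later sections targets.</p>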
        <table-wrap id="tab1">
          <label>Table 1</label>
          <table>
            <thead>
              <tr><th>Scenario</th><th>Environment is mapped using a HL</th><th>Objects are known</th><th>Interaction possible with HL alone</th></tr>
            </thead>
            <tbody>
              <tr><td>B ∩ A</td><td>yes</td><td>yes</td><td>yes (prior authoring)</td></tr>
              <tr><td>B' ∩ A</td><td>no</td><td>yes</td><td>no</td></tr>
              <tr><td>A' ∩ B</td><td>yes</td><td>no</td><td>no</td></tr>
              <tr><td>A' ∩ B'</td><td>no</td><td>no</td><td>no</td></tr>
            </tbody>
          </table>
        </table-wrap>
        <p>Looking at question 1 again, it is clear that for the HL to detect a smart device
with its context, a pre-authoring process is essential. Either physical
marker creation or configuring a beacon with contextual data will require deep
knowledge of AR and the suitable technology. This process includes the collection,
modeling, reasoning over, and distribution of context-related information in relation
to a smart device. This makes it a challenging and difficult task to build an
AR application which will suit any given indoor environment in general. Thus,
AR devices, applications and other relevant equipment need to be authored
to suit their application environment. This creates the necessity of an easy-to-use
authoring tool.</p>
        <p>
          The majority of currently available AR authoring tools, software, libraries
and frameworks provide rich capabilities, but require advanced programming
skills [
          <xref ref-type="bibr" rid="ref48">48</xref>
          ]. There are very few simple and easy-to-use
authoring tools for non-technical users [
          <xref ref-type="bibr" rid="ref48">48</xref>
          ]. Yet, only very limited research has been
conducted on building an easy-to-use AR authoring tool with the capability of
adding contextual awareness.
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Importance of Context</title>
      <p>
        Humans can glance at objects and instantly identify or recognize them along
with associated details, their location, and means and methods of interaction,
yet they can struggle when the objects are unknown or unfamiliar, or bear a close
resemblance to one another [
        <xref ref-type="bibr" rid="ref65">65</xref>
        ]. For instance, how often does one struggle to
locate the exact room key among an unlabelled bunch of keys? Therefore,
contextual knowledge of an object is important to establish effective interaction
with it.
      </p>
      <p>
        With the introduction of the term `ubiquitous computing' by Mark Weiser
in his seminal 1991 paper `The Computer for the 21st Century' [
        <xref ref-type="bibr" rid="ref67">67</xref>
        ],
context-aware computing became a popular research area [
        <xref ref-type="bibr" rid="ref51">51</xref>
        ]. The term `context-aware'
was first used by Schilit et al. [
        <xref ref-type="bibr" rid="ref51 ref61">51,61</xref>
        ] in 1994. Thereafter many researchers
attempted to address this concept in various applications and domains. According
to Abowd et al., "Context is any information that can be used to characterise
the situation of an entity. An entity is a person, place, or object that is
considered relevant to the interaction between a user and an application, including
the user and applications themselves" [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. In HSI, the main concern is to present
users with relevant and timely smart device data. Detecting sensors and then
identifying and segmenting them according to their relevancy to the context is
the best possible way of addressing this problem. To do this, we have to identify
each smart device relative to its context, because the same device can be used for
different purposes in different contexts. Therefore the presentation technique
(i.e. AR in our case) must identify smart devices within their context.
      </p>
      <p>Yet, we identified several challenges in addressing this problem:
1) how to identify the exact device that a user is gazing at; 2) how to distinguish
similar-looking devices from each other; 3) how to model device-specific
information; and 4) how to change the interaction based on user preferences,
location, time, etc. Therefore, to present appropriate information related to each
and every smart device, it is important to identify these smart devices within
their context, which incorporates aspects such as indoor/outdoor location, user
preferences, date and time, device capabilities, etc. Detecting a physical device
with its context is ongoing research in HSI and a complex interaction design
challenge. Based on a literature review, we identified that the use of semantic
Web technologies blended with AR provides a promising direction for solving this
problem.</p>
    </sec>
    <sec id="sec-4">
      <title>Incorporation of semantic Web technologies</title>
      <p>
        According to J. Manyika et al., interoperability among IoT systems is required
to capture 40% of the total potential value of the IoT [
        <xref ref-type="bibr" rid="ref45">45</xref>
        ].
According to their research, there is a potential economic impact of more than $4 trillion
per year from IoT use in 2025, out of the total potential impact of $11.1
trillion predicted [
        <xref ref-type="bibr" rid="ref45">45</xref>
        ]. Recently, semantic Web technologies have been integrated
into the IoT with the aim of addressing interoperability challenges and reducing
heterogeneity in the domain [
        <xref ref-type="bibr" rid="ref25 ref7">7,25</xref>
        ].
        ].
      </p>
      <p>
        The semantic Web, a term proposed by Tim Berners-Lee [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], "has been
conceived as an extension of the World Wide Web that allows computers to
intelligently search, combine and process Web content based on the meaning that this
content has to humans" [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ]. The semantic Web aims to provide a universal
framework that allows data to be shared and reused across systems. It decouples
applications from data through the use of an abstract model for knowledge
representation [
        <xref ref-type="bibr" rid="ref59">59</xref>
        ]. Therefore, any application or system that understands the model
can consume any data source that uses it, which in turn helps to address
the problem of heterogeneity. Given the vast number of manufacturers
in the IoT domain and their heterogeneity, this type of knowledge representation
model could provide a promising direction towards a better HSI.
      </p>
      <p>Using semantic Web technologies, we can endow smart devices with their
semantics (i.e. their intended use, capabilities and purpose). Combining
that information with contextual data allows us to specify which conclusions should
be drawn, and then what information should be augmented and visualized (via
an AR interface) so that users understand what a smart device is for and how
to interact with it.</p>
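      <p>As a toy illustration of this idea, assuming invented device identifiers, predicate names and context fields rather than any particular ontology, device semantics stored as subject-predicate-object triples can be filtered against the user's context before being shown in an AR overlay:</p>

```python
# Hypothetical device semantics as (subject, predicate, object) triples.
# Device, predicate and context names are invented for illustration.
TRIPLES = [
    ("room12/thermostat", "hasCapability", "setTemperature"),
    ("room12/thermostat", "intendedUse", "climate control"),
    ("room12/tv", "hasCapability", "playMedia"),
]

def describe(device, triples):
    """Collect every (predicate, object) pair asserted about a device."""
    return {(p, o) for s, p, o in triples if s == device}

def overlay_text(device, triples, user_context):
    """Augment only context-relevant information: a returning guest gets a
    shorter hint than a first-time guest, who also sees the intended use."""
    caps = sorted(o for p, o in describe(device, triples) if p == "hasCapability")
    if user_context.get("visited_before"):
        return f"Capabilities: {', '.join(caps)}"
    use = next(o for p, o in describe(device, triples) if p == "intendedUse")
    return f"This device is for {use}. Capabilities: {', '.join(caps)}"
```

      <p>Here the same device description yields different overlays depending on the user's context, which is the kind of conclusion-drawing the paragraph above envisions.</p>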
      <p>
        There is a growing interest in the area of blending semantics with the IoT and
AR. Rumiński et al.'s findings suggest that an application of semantic Web
techniques can be an efficient solution to search contextually described distributed
resources constituting interactive AR presentations [
        <xref ref-type="bibr" rid="ref60">60</xref>
        ]. Further, Rumiński et al.
have developed a semantic model for distributed AR services and built
ubiquitous dynamic AR presentations based on semantically described AR services in
a contextual manner. Yet, their work concerns integrating distributed services
in AR, and does not focus on addressing the problem of maximizing the user
experience in HSI when humans are compelled to use multiple unfamiliar
devices. However, the following studies addressed the user experience aspects
of interaction. FarmAR by Katsaros et al. exploits AR technology to identify
plants and to augment useful information for farmers. Their system is based on
a knowledge base consisting of an ontology that describes information
concerning a plant, such as its common scientific name, frequent plant diseases,
etc. [
        <xref ref-type="bibr" rid="ref35">35</xref>
        ]. Contreras et al. present a mobile application for searching places, people
and events within a university campus. In their work they leverage the semantic Web
and AR to provide an application with a high degree of query expressiveness and
an enhanced user experience [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. Both Katsaros's and Contreras's approaches
incorporate semantic Web technologies, yet they do not
consider contextual information. Further, they have used a handheld display
(HHD) instead of an AR HMD, which creates a different UX in HSI.
      </p>
      <p>
        L. Cheng et al.'s work shows that embedding semantic understanding in
Mixed Reality (MR) can greatly enhance the user experience by helping to
understand object-specific behaviours [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. L. Cheng et al. demonstrate a framework
for a material-aware prototype system for generating context-aware physical
interactions between real and virtual objects. However, the focus of their
research is on material understanding and its semantic fusion with the virtual
scene in an MR environment, hence it does not address HSI. Further, looking at
context awareness, Hoque et al. have proposed a generic context model based on
ontology and reasoning techniques in the smart home domain [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ]. Zhu J. et al.
have proposed a framework specifically designed for an assisted maintenance
system, in which they have incorporated context-aware AR together with
semantic Web technologies to provide information that is more useful to the
user [
        <xref ref-type="bibr" rid="ref71">71</xref>
        ]. Their main focus is a context-aware AR authoring tool.
Further, Flatt H. et al. proposed a framework for a context-aware assistance
system for maintenance applications in smart factories [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]. The central element
of this approach is an ontology-based context-aware framework, which
aggregates and processes data from different sources. Yet, their application targets
HHD AR and does not address the HSI challenges. Thus, it is observed that HSI
and improving UX are not a concern in their work.
      </p>
    </sec>
    <sec id="sec-5">
      <title>Blend of semantic Web technology with AR for context awareness</title>
      <p>
        As per our literature review, the use of semantic Web technologies has provided
a promising direction for adding meaningful contextual information to AR
presentations [
        <xref ref-type="bibr" rid="ref59">59</xref>
        ]. In this section, we explain the knowledge modeling approach.
      </p>
      <p>
        Seydoux et al.'s analysis of existing IoT-related ontologies concludes that
"some of the IoT ontologies cover most of the key concepts but none of them
covers them all" [
        <xref ref-type="bibr" rid="ref62">62</xref>
        ]. Therefore, in our investigation we consider a combination
of several ontologies: DogOnt [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], which "aims at offering a uniform,
extensible model for all devices being part of a local Internet of Things inside
a smart environment"; the Semantic Sensor Network (SSN) ontology [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ];
the IoT-Lite ontology [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], which "is a lightweight ontology to represent Internet
of Things (IoT) resources, entities and services. IoT-Lite is an instantiation of
the SSN ontology"; and OneM2M [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] and IoT-O [
        <xref ref-type="bibr" rid="ref62">62</xref>
        ], which are widely used in the IoT
domain.
      </p>
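The coverage gap this reuse strategy responds to can be sketched programmatically. The following Python fragment is illustrative only: the per-ontology concept sets are simplified assumptions for demonstration, not an authoritative reading of these ontologies.

```python
# Illustrative sketch: concept coverage per ontology as plain Python sets.
# The coverage values are assumptions, loosely inspired by the comparison
# the paper makes; they are NOT an exact transcription of Table 2.
COVERAGE = {
    "SSN":      {"sensor", "actuator", "observable_property", "actuatable_property"},
    "IoT-Lite": {"device", "sensor", "location"},
    "DogOnt":   {"device", "location", "device_state"},
    "IoT-O":    {"device", "sensor", "actuator", "lifecycle"},
}

REQUIRED = {"device", "sensor", "actuator", "observable_property",
            "actuatable_property", "location", "user_preferences", "hand_gestures"}

def combined_coverage(ontologies):
    """Union of the concepts covered by a set of reused ontologies."""
    covered = set()
    for name in ontologies:
        covered |= COVERAGE[name]
    return covered

def gaps(ontologies):
    """Concepts still unmodelled after merging the given ontologies."""
    return REQUIRED - combined_coverage(ontologies)

# No single ontology covers everything, and even the merged set leaves
# user preferences and hand gestures unmodelled.
print(gaps(COVERAGE))
```

Even with the full merge, `gaps` is non-empty, which mirrors the observation that reuse alone does not cover every required concept.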
      <p>Preserving the semantic Web best practice of reuse, instead of developing
an ontology from scratch we analysed existing IoT ontologies to assess the
suitability of reusing an existing knowledge model. Prior to this, as explained in
section 2.2, we analysed AR HMDs to identify their limitations when
handling contextual data, and then examined whether knowledge modeling
could address those limitations. Our conclusion was that, except for B \ A (as
described in Table 1), in all the other cases the AR HMD itself could not resolve
contextual data.</p>
      <p>Therefore, we first identified the main contextual information that we
need to blend with the AR application in order to provide a better UX in HSI. As
per our analysis, indoor/outdoor location, device capabilities and user
information (users) are the required high-level concepts. Second, we investigated
suitable methods/ontologies to model these concepts. Further,
this knowledge model will be decoupled from the AR application, which makes it
customizable to different use cases/scenarios without affecting the functionality
of the AR application.</p>
      <p>In our roadmap, we first need to model a smart device, which could be
a sensor, an actuator, or both. Therefore, we require both a combined and
a separate representation for sensors and actuators. Next, the device
capabilities, which could be either an observable property or an actuatable
property respectively, need to be modelled. Presenting these capabilities to a user
could vary based on the device location, user preferences and features of
interest; therefore, these three are the next required modelling concepts. Further,
in our investigation we are researching how to interact with a smart device
using NUIs such as gestures. Therefore, human gestures are another concept that we
need to consider.</p>
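As a minimal illustration of this modelling requirement, the following hypothetical Turtle fragment sketches a combined sensor/actuator device using the SOSA/SSN vocabulary; the paper does not commit to this particular encoding, and all names in the ex: namespace are placeholders.

```turtle
# Hypothetical instance data, assuming the SOSA/SSN vocabulary.
@prefix sosa: <http://www.w3.org/ns/sosa/> .
@prefix ssn:  <http://www.w3.org/ns/ssn/> .
@prefix ex:   <http://example.org/smart-home#> .

# A combined device: a smart thermostat that both senses and actuates.
ex:thermostat1 a sosa:Sensor , sosa:Actuator ;
    sosa:observes   ex:roomTemperature ;    # observable capability
    ssn:forProperty ex:targetTemperature .  # actuatable capability

ex:roomTemperature   a sosa:ObservableProperty ;
    ssn:isPropertyOf ex:livingRoom .        # feature of interest
ex:targetTemperature a sosa:ActuatableProperty .
ex:livingRoom        a sosa:FeatureOfInterest .
```

Because the same individual is typed as both sosa:Sensor and sosa:Actuator, the combined and the separated representations coexist in one graph.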
      <p>Based on these conceptual requirements, we designed a high-level concept
overlap diagram (see Figure 3) to identify the types of ontologies that we
need to consider. This diagram depicts the cluster of concepts related
to the IoT domain. Each circle indicates a required concept, and different
colours depict the current representation level of these concepts within existing
IoT ontologies. After identifying the required concepts, we analysed the
existing IoT ontologies to assess their adaptability. Table 2 below shows our analysis
results.</p>
      <p>Figure 3 groups the required concepts: devices (sensors and actuators), indoor/outdoor location, features of interest, human gestures, and observable properties.</p>
      <p>
Most of these ontologies are capable of modeling the knowledge speci c to a
device, sensor, actuator, and their capabilities and the location. However, user
preferences and human gestures are not addressed directly in any of these
ontologies. Even outside of the IoT domain, ontologies that de ne device users and
potential interaction locations are rare. For example, Nazer et al. have de ned
a user's pro le ontology, yet it is a use case ontology which aims at providing
personalized food and nutrition recommendations [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Thus, there is a necessity
for a global ontology to model user related and human hand gestures related
knowledge. Table 2 summarized the fact that, we can reuse and merge some IoT
ontologies to ful l part of our conceptual requirements, yet user preferences and
hand gestures modeling need to be further investigated to avoid rede nition as
much as possible.
      </p>
      <p>Table 2 compares SSN, IoT-Lite, OneM2M, DogOnt and IoT-O against these concepts
(*supported by an external ontology module).
In this section we discuss the high-level process flow and highlight some of the
potential challenges when establishing human device interactions in the IoT.</p>
      <p>As the intention of the research is to maximize the UX by enabling effective
human-device interaction when a user wears an AR HMD, the visualized content
has to be personalised to a specific user. For this, an AR device should be able
to identify its user and associated information such as the user's preferences. A
user study needs to assess natural interaction patterns, and the results need to be
encoded in an ontology; the study will examine how users naturally interact with unfamiliar
devices. The intention is to identify and generalize ways to make
human device interaction more intuitive and noninvasive. The user interaction
data itself will then be captured and stored/updated accordingly as ontology
instances. For a first-time user there will be no data recorded about previous
interactions; therefore, the AR application would not have the needed guidance
for that user. The aim of studying user interactions is to reduce redundancy
(by reusing previously stored interactions). When information related to a
user is processed, privacy and security are a concern. Proper authentication and
authorization mechanisms need to be used when querying user-specific data in
the knowledge model. Therefore, the specific requirements on privacy and security
need to be further analysed in the long run.</p>
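The first-time-user fallback described above can be sketched as follows. This is a minimal illustration with assumed names (the paper stores such data as ontology instances, not Python dicts), not the authors' implementation.

```python
# Minimal sketch of personalisation lookup with a first-time-user fallback.
# USER_PROFILES stands in for instances stored in the knowledge model;
# the keys and preference fields are illustrative assumptions.
DEFAULT_PREFERENCES = {"detail_level": "basic", "preferred_input": "gaze"}

USER_PROFILES = {
    "user-42": {"detail_level": "expert", "preferred_input": "hand_gesture"},
}

def preferences_for(user_id):
    """Return stored preferences for a known user, or safe defaults for a
    first-time user, for whom no previous interactions are recorded."""
    return USER_PROFILES.get(user_id, DEFAULT_PREFERENCES)
```

In a full system the lookup would be an authenticated query against the knowledge model rather than a dictionary access, for the privacy reasons noted above.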
      <p>Once the AR application is capable of identifying the user, the next concern
is the device identification process. Based on the object recognition efficiency of
the AR HMD, either a marker-based, a direct object recognition, or a hybrid
approach could be chosen; this needs to be further explored. If a marker-based
approach is selected, physical markers need to be pre-configured with a unique
device identifier and their location information, which creates the need for an
authoring task. In either case, whether with physical markers or direct object
detection, there is the potential of facing processing delays. These could
be due to the AR HMD's capability of recognizing a marker or an object, as well
as the size of the information stored in the knowledge model (query time). Looking
at the potential size of data sets, it is not feasible to store the data on the AR
device. Thus, the data needs to be stored in the cloud, which raises
the concern of network latency as well.</p>
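One common way to soften the query-time and network-latency cost is to cache marker resolutions on the AR device, so repeated recognitions of the same marker avoid a cloud round-trip. The sketch below is an assumption about how this could be done, not part of the proposed roadmap; all names are illustrative.

```python
import functools

# CLOUD_KNOWLEDGE_MODEL stands in for the remote triple store; CLOUD_CALLS
# counts simulated round-trips so the caching effect is observable.
CLOUD_KNOWLEDGE_MODEL = {
    "marker-007": {"device": "ceiling-light-3", "location": "kitchen"},
}

CLOUD_CALLS = 0

@functools.lru_cache(maxsize=256)
def resolve_marker(marker_id):
    """Resolve a marker ID to a device description; each cache miss costs
    one remote query, cache hits are served locally on the AR device."""
    global CLOUD_CALLS
    CLOUD_CALLS += 1
    # Return an immutable view so results are safe to cache.
    return tuple(sorted(CLOUD_KNOWLEDGE_MODEL[marker_id].items()))

resolve_marker("marker-007")
resolve_marker("marker-007")  # second lookup is served from the cache
```

A bounded cache like this trades a little device memory for fewer high-latency queries, which matters most for markers the user gazes at repeatedly.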
      <p>Once the smart device and user are identified, the relationships between the
device and the user, and their previous interactions, need to be identified. At this stage,
an identified device could fall into one of the following categories:
1. Previously seen but not interacted with
2. Previously seen and successfully interacted with
3. Previously seen and unsuccessfully interacted with
4. Previously unseen and not interacted with</p>
      <p>This information, along with location details, will be utilized when deciding
which content to display to the user.</p>
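For illustration, the four categories above can be expressed as a small decision function; the function name and flags are hypothetical, but the category numbering follows the list in the text.

```python
def interaction_category(seen_before, interacted, succeeded=False):
    """Map a device's interaction history to categories 1-4 from the text:
    1: previously seen but not interacted with
    2: previously seen and successfully interacted with
    3: previously seen and unsuccessfully interacted with
    4: previously unseen and not interacted with
    """
    if not seen_before:
        return 4
    if not interacted:
        return 1
    return 2 if succeeded else 3
```

The returned category, combined with location details, would feed the content-selection step described above.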
      <p>
        Once the content is displayed, users will start interacting with the smart
devices. The next concern is then the human gesture interpretation process.
The Microsoft HoloLens 1 has built-in functionality to recognize a restricted number of
gestures, and this has been extended in the HoloLens 2 [
        <xref ref-type="bibr" rid="ref40">40</xref>
        ]. Interpreting the
meaning of hand gestures in accordance with a device capability again requires
querying the part of the knowledge model related to human hand gestures. To the best of
our knowledge, there is no study or ontology available that describes natural
user behaviours when users are confronted with unfamiliar devices for the first
time. These behaviours most likely also differ culturally; switches,
for example, operate in opposite directions in different countries. Thus, human
gestures could change based on the personal preferences, geography and health
conditions of a user. Therefore, the knowledge model is more important than a
rigid mapping of device capabilities against fixed/defined hand gestures. Figure
4 summarises the overall process flow in the roadmap.
      </p>
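The point about context-dependent gesture meaning can be made concrete with a small sketch. The mappings below (including the inverted switch convention) are illustrative assumptions, not results from a user study.

```python
# Sketch of why a knowledge model beats a rigid gesture table: the same
# physical gesture can map to different device commands depending on the
# user's region, mirroring the wall-switch example in the text.
GESTURE_MODEL = {
    ("flick_down", "US"): "switch_off",
    ("flick_down", "AU"): "switch_on",  # inverted switch convention
    ("flick_down", "UK"): "switch_on",
}

def interpret(gesture, region, capabilities):
    """Resolve a gesture to a command, constrained to what the device can
    actually do (its actuatable capabilities)."""
    command = GESTURE_MODEL.get((gesture, region))
    if command in capabilities:
        return command
    return None  # the gesture has no meaning for this device/user context
```

In the roadmap, such mappings would live in the knowledge model and be queried per user, so personal preferences or health conditions can override the regional default.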
      <p>Figure 4 depicts five stages: (1) user identification, where the AR HMD identifies its user, and for an existing user the associated information such as preferences, in a privacy-preserving manner; (2) smart device recognition/identification, where the AR device recognises the smart device the user is gazing at, using marker-based AR techniques, direct object recognition, or a hybrid approach; (3) personalising content for the user, by identifying the relationship between the user and the associated information of a smart device; (4) user input recognition, where user gestures, which could vary for different reasons, are interpreted in accordance with device capabilities; and (5) establishing communication, for which peer-to-peer techniques such as Web Bluetooth are a potential option.</p>
      <p>This AR-based interaction technique needs to be evaluated against
commonly used voice-based human sensor interaction methods, such as Google Home
or the Amazon Echo Dot, to assess whether users would prefer to wear a pair of AR
glasses to interact with smart devices.</p>
      <p>It is important to note that lighting conditions and the distance between the
physical marker/device and the AR HMD can create delays when identifying a
device. Yet, these devices are rapidly evolving, and their enhanced capabilities are
addressing these limitations.</p>
      <p>
        Hardware is also becoming more user friendly, and there are already wearables
available that resemble a pair of sunglasses while offering AR features and
functionality [
        <xref ref-type="bibr" rid="ref58">58</xref>
        ]. There are also many ways of detecting hand gestures with
commercial equipment such as the Myo armband [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], the Leap Motion [
        <xref ref-type="bibr" rid="ref33">33</xref>
        ], hand-tracking
gloves, etc. These could be incorporated to reduce the invasiveness created by
the hardware designs.
      </p>
      </p>
      <p>Finally, an easy-to-use authoring tool is an additional benefit of this work. In
our future research we are planning to investigate how we can build such an
authoring tool so that a general user can configure their environment. Further,
merging real-time sensor data with AR HMD sensor readings could help
to provide real-time contextual data.</p>
    </sec>
    <sec id="sec-6">
      <title>Conclusion</title>
      <p>This paper presents a roadmap on how augmented reality could be used in
combination with semantic Web technologies as a powerful interaction technique
that yields new types of user experience with the Internet of Things. The
proposed methodology uses semantic Web technologies to produce context-aware
interactions in AR presentations. Our key insight is that building context awareness
through ontologies not only enhances the user experience through
device-specific behaviours but also paves the way for solving complex interaction
design challenges in HSI. We are planning to conduct quantitative and qualitative
evaluations of the proposed methodology, and based on the results of these
studies we intend to show how this framework could be further enhanced to provide
user-friendly authoring interfaces for creating context-aware AR
presentations.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Abowd</surname>
            ,
            <given-names>G.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dey</surname>
            ,
            <given-names>A.K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brown</surname>
          </string-name>
          , P.J.,
          <string-name>
            <surname>Davies</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Smith</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Steggles</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Towards a better understanding of context and context-awareness</article-title>
          .
          <source>In: Proc. of International symposium on handheld and ubiquitous computing</source>
          . pp.
          <volume>304</volume>
–
          <fpage>307</fpage>
          . Springer (
          <year>1999</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Al-Nazer</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Helmy</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Al-Mulhem</surname>
            ,
            <given-names>M.:</given-names>
          </string-name>
          <article-title>User's profile ontology-based semantic framework for personalized food and nutrition recommendation</article-title>
          .
          <source>Procedia Computer Science</source>
          <volume>32</volume>
          ,
          <issue>101</issue>
–
          <fpage>108</fpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Alha</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Koskinen</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Paavilainen</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hamari</surname>
          </string-name>
          , J.:
          <article-title>Why do people play location-based augmented reality games: A study on pokemon go</article-title>
          .
          <source>Computers in Human Behavior</source>
          <volume>93</volume>
          ,
          <issue>114</issue>
–
          <fpage>122</fpage>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Ali</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Samad</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mehmood</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ayaz</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Qazi</surname>
            ,
            <given-names>W.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Khan</surname>
            ,
            <given-names>M.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Asgher</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          :
          <article-title>Hand gesture based control of nao robot using myo armband</article-title>
          .
          <source>In: Proc. of 10th AHFE</source>
          . pp.
          <volume>449</volume>
–
          <fpage>457</fpage>
          . Springer (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Altinpulluk</surname>
          </string-name>
          , H.:
          <article-title>Determining the trends of using augmented reality in education between 2006- 2016</article-title>
          .
          <source>Education and Information Technologies</source>
          <volume>24</volume>
          (
          <issue>2</issue>
          ),
          <volume>1089</volume>
–
          <fpage>1114</fpage>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6. Andijakl:
          <article-title>Basics of AR: SLAM - simultaneous localization and mapping</article-title>
          (
          <year>Sep 2018</year>
          ), https://www.andreasjakl.com/basics-of-ar-slam-simultaneous-localization-and-mapping/
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Barnaghi</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Henson</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Taylor</surname>
          </string-name>
          , K.:
          <article-title>Semantics for the internet of things: early progress and back to the future</article-title>
          .
          <source>International Journal on Semantic Web and Information Systems (IJSWIS) 8</source>
          (
          <issue>1</issue>
          ),
          <volume>1</volume>
–
          <fpage>21</fpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Barrow</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Forker</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sands</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>OHare</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hurst</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          :
          <article-title>Augmented reality for enhancing life science education</article-title>
          .
          <source>In: Proc. of VISUAL</source>
          <year>2019</year>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Bermudez-Edo</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Elsaleh</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Barnaghi</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Taylor</surname>
          </string-name>
          , K.:
          <article-title>Iot-lite: a lightweight semantic model for the internet of things</article-title>
          .
          <source>In: Proc. of</source>
          IEEE UIC/ATC/ScalCom/CBDCom/IoP/SmartWorld conference. pp.
          <volume>90</volume>
–
          <fpage>97</fpage>
          .
          <string-name>
            <surname>IEEE</surname>
          </string-name>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Berners-Lee</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hendler</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lassila</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          , et al.:
          <article-title>The semantic web</article-title>
          . Scientific American
          <volume>284</volume>
          (
          <issue>5</issue>
          ),
          <volume>28</volume>
–
          <fpage>37</fpage>
          (
          <year>2001</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Biederman</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          :
          <article-title>Recognition-by-components: a theory of human image understanding</article-title>
          .
          <source>Psychological review 94(2)</source>
          ,
          <volume>115</volume>
          (
          <year>1987</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Birkfellner</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Figl</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Huber</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Watzinger</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wanschitz</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hummel</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hanel</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Greimel</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Homolka</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ewers</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , et al.:
          <article-title>A head-mounted operating binocular for augmented reality visualization in medicine-design and initial evaluation</article-title>
          .
          <source>IEEE Transactions on Medical Imaging</source>
          <volume>21</volume>
          (
          <issue>8</issue>
          ),
          <volume>991</volume>
–
          <fpage>997</fpage>
          (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Black</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Your complete guide to Amazon Echo</article-title>
          (
          <year>Jun 2019</year>
          ), https://www.techadvisor.co.uk/new-product/audio/amazon-echo-3584881/
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Bonino</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Corno</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Dogont-ontology modeling for intelligent domotic environments</article-title>
          .
          <source>In: Proc. of ISWC 2008</source>
          . pp.
          <volume>790</volume>
–
          <fpage>803</fpage>
          . Springer (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Budhiraja</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>G.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Billinghurst</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Using a hhd with a hmd for mobile ar interaction</article-title>
          .
          <source>In: Proc. of IEEE ISMAR</source>
          . pp.
          <volume>1</volume>
–
          <issue>6</issue>
          .
          <string-name>
            <surname>IEEE</surname>
          </string-name>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Bulearca</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tamarjan</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Augmented reality: A sustainable marketing tool</article-title>
          .
          <source>Global business and management research: An international journal 2(2)</source>
          ,
          <volume>237</volume>
–
          <fpage>252</fpage>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tang</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          , John, N.,
          <string-name>
            <surname>Wan</surname>
            ,
            <given-names>T.R.</given-names>
          </string-name>
          , Zhang, J.J.:
          <article-title>Context-aware mixed reality: A framework for ubiquitous interaction</article-title>
          . arXiv preprint arXiv:
          <year>1803</year>
          .
          <volume>05541</volume>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Compton</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Barnaghi</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bermudez</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Garcia-Castro</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Corcho</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cox</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Graybeal</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hauswirth</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Henson</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Herzog</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , et al.:
          <article-title>The ssn ontology of the w3c semantic sensor network incubator group</article-title>
          .
          <source>Web semantics: science, services and agents on the World Wide Web</source>
          <volume>17</volume>
          ,
          <issue>25</issue>
–
          <fpage>32</fpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Contreras</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chimbo</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tello</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Espinoza</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Semantic web and augmented reality for searching people, events and points of interest within of a university campus</article-title>
          .
          <source>In: Proc. of CLEI 2017</source>
. pp.
          <fpage>1</fpage>
          –
          <lpage>10</lpage>
          . IEEE
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Feiner</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
<string-name>
            <surname>MacIntyre</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Höllerer</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Webster</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
<article-title>A touring machine: Prototyping 3D mobile augmented reality systems for exploring the urban environment</article-title>
          .
          <source>Personal Technologies</source>
          <volume>1</volume>
          (
          <issue>4</issue>
          ),
<fpage>208</fpage>
          –
          <lpage>217</lpage>
          (
          <year>1997</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Flatt</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Koch</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          , Rocker,
          <string-name>
            <surname>C.</surname>
          </string-name>
          , Gunter,
          <string-name>
            <given-names>A.</given-names>
            ,
            <surname>Jasperneite</surname>
          </string-name>
          ,
          <string-name>
            <surname>J.:</surname>
          </string-name>
          <article-title>A context-aware assistance system for maintenance applications in smart factories based on augmented reality and indoor localization</article-title>
          .
          <source>In: Proc. of 20th IEEE ETFA</source>
. pp.
          <fpage>1</fpage>
          –
          <lpage>4</lpage>
          . IEEE
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
<surname>Garzón</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
<surname>Pavón</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Baldiris</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Systematic review and meta-analysis of augmented reality in educational settings</article-title>
          .
<source>Virtual Reality</source>
          pp.
          <fpage>1</fpage>
          –
          <lpage>13</lpage>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Gebhart</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
<article-title>Everything you need to know about Google Home</article-title>
          (May
          <year>2019</year>
), https://www.cnet.com/how-to/everything-you-need-to-know-about-google-home/
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Guillama</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Heath</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
<article-title>Personal augmented reality</article-title>
          (Apr 25
          <year>2019</year>
          ), US Patent App.
          <volume>16</volume>
          /165,823
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Gyrard</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Serrano</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Atemezing</surname>
            ,
            <given-names>G.A.</given-names>
          </string-name>
          :
          <article-title>Semantic web methodologies, best practices and ontology engineering applied to internet of things</article-title>
          .
          <source>In: Proc. of 2nd IEEE WF-IoT</source>
. pp.
          <fpage>412</fpage>
          –
          <lpage>417</lpage>
          . IEEE
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Hamari</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Malik</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Koski</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Johri</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
<article-title>Uses and gratifications of Pokémon Go: Why do people play mobile location-based augmented reality games</article-title>
          ?
<source>International Journal of Human–Computer Interaction</source>
          <volume>35</volume>
          (
          <issue>9</issue>
          ),
<fpage>804</fpage>
          –
          <lpage>819</lpage>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <surname>Hammady</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
,
          <string-name>
            <surname>Ma</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Powell</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
<article-title>User experience of markerless augmented reality applications in cultural heritage museums: MuseumEye as a case study</article-title>
          .
          <source>In: Proc. of Salento AVR 2018</source>
. pp.
          <fpage>349</fpage>
          –
          <lpage>369</lpage>
          . Springer (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <surname>Han</surname>
            ,
            <given-names>D.I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jung</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gibson</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
<article-title>Dublin AR: implementing augmented reality in tourism</article-title>
          .
          <source>In: Information and communication technologies in tourism 2014</source>
, pp.
          <fpage>511</fpage>
          –
          <lpage>523</lpage>
          . Springer (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          29.
          <string-name>
            <surname>Hitzler</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
<surname>Krötzsch</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rudolph</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Foundations of semantic web technologies</article-title>
          .
<source>Chapman and Hall/CRC</source>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          30.
          <string-name>
            <surname>Hoque</surname>
            ,
            <given-names>M.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kabir</surname>
            ,
            <given-names>M.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Thapa</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>S.H.</given-names>
          </string-name>
          :
          <article-title>Ontology-based context modeling to facilitate reasoning in a context-aware system: A case study for the smart home</article-title>
          .
          <source>International Journal of Smart Home</source>
          <volume>9</volume>
          (
          <issue>9</issue>
          ),
<fpage>151</fpage>
          –
          <lpage>156</lpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          31.
          <string-name>
            <surname>Howard</surname>
            ,
            <given-names>P.N.</given-names>
          </string-name>
          :
          <article-title>How big is the internet of things and how big will it get?</article-title>
          (
          <year>Jul 2016</year>
), https://www.brookings.edu/blog/techtank/2015/06/08/how-big-is-the-internet-of-things-and-how-big-will-it-get/
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
32.
          <string-name>
            <surname>Ibáñez</surname>
            ,
            <given-names>M.B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Delgado-Kloos</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
<article-title>Augmented reality for STEM learning: A systematic review</article-title>
          .
          <source>Computers &amp; Education</source>
          <volume>123</volume>
          ,
<fpage>109</fpage>
          –
          <lpage>123</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          33.
          <string-name>
            <surname>Jia</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tu</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Deng</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhao</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yi</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          :
          <article-title>Real-time hand gestures system based on leap motion</article-title>
          .
          <source>Concurrency and Computation: Practice and Experience</source>
          <volume>31</volume>
          (
          <issue>10</issue>
          ),
e4898
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          34.
          <string-name>
            <surname>Joda</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gallucci</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wismeijer</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
<string-name>
            <surname>Zitzmann</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          :
<article-title>Augmented and virtual reality in dental medicine: A systematic review</article-title>
          .
          <source>Computers in Biology and Medicine</source>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          35.
          <string-name>
            <surname>Katsaros</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
<string-name>
            <surname>Keramopoulos</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
<article-title>FarmAR, a farmer's augmented reality application based on semantic web</article-title>
          .
          <source>In: Proc. of SEEDA-CECNSM 2017</source>
. pp.
          <fpage>1</fpage>
          –
          <lpage>6</lpage>
          . IEEE
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          36.
          <string-name>
            <surname>Kikkawa</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sekiguchi</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tsuge</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Saito</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
<string-name>
            <surname>Bise</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Semi-supervised learning with structured knowledge for body hair detection in photoacoustic image</article-title>
          .
<source>In: Proc. of 16th IEEE ISBI 2019</source>
. pp.
          <fpage>1411</fpage>
          –
          <lpage>1415</lpage>
          . IEEE
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          37.
          <string-name>
            <surname>Kootstra</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , Bergstrom, N.,
          <string-name>
            <surname>Kragic</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Fast and automatic detection and segmentation of unknown objects</article-title>
          .
          <source>In: Proc. of 10th IEEE-RAS</source>
. pp.
          <fpage>442</fpage>
          –
          <lpage>447</lpage>
          . IEEE
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          38.
          <string-name>
            <surname>Kotane</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Znotina</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hushko</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Assessment of trends in the application of digital marketing</article-title>
          .
<source>Scientific Journal of Polonia University</source>
          <volume>33</volume>
          (
          <issue>2</issue>
          ),
<fpage>28</fpage>
          –
          <lpage>35</lpage>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          39.
          <string-name>
            <surname>Laine</surname>
            ,
            <given-names>T.H.</given-names>
          </string-name>
          ,
<string-name>
            <surname>Suk</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>Designing educational mobile augmented reality games using motivators and disturbance factors</article-title>
          .
          <source>In: Augmented Reality Games II</source>
          , pp.
<fpage>33</fpage>
          –
          <lpage>56</lpage>
          . Springer (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          40.
<string-name>
            <surname>Langston</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>HoloLens 2 gives Microsoft the edge in next generation of computing</article-title>
          (
          <year>Jul 2019</year>
          ), https://news.microsoft.com/innovation-stories/hololens-2/
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          41.
          <string-name>
<surname>Legiedz</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>A thorough look into spatial mapping with HoloLens</article-title>
          (
          <year>2017</year>
          ), https://solidbrain.com/2017/08/07/a-thorough-look-into-spatial-mapping-with-hololens/
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          42.
          <string-name>
            <surname>Leonidis</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Korozi</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Margetis</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Grammenos</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stephanidis</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>An intelligent hotel room</article-title>
          .
          <source>In: Proc. of International Joint Conference on Ambient Intelligence</source>
          . pp.
<fpage>241</fpage>
          –
          <lpage>246</lpage>
          . Springer (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          43.
          <string-name>
            <surname>Livingston</surname>
            ,
            <given-names>M.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosenblum</surname>
            ,
            <given-names>L.J.</given-names>
          </string-name>
          ,
<string-name>
            <surname>Brown</surname>
            ,
            <given-names>D.G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schmidt</surname>
            ,
            <given-names>G.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Julier</surname>
            ,
            <given-names>S.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Baillot</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Swan</surname>
            ,
            <given-names>J.E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ai</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Maassel</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Military applications of augmented reality</article-title>
          .
          <source>In: Handbook of augmented reality</source>
          , pp.
<fpage>671</fpage>
          –
          <lpage>706</lpage>
          . Springer (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>
          44.
          <string-name>
            <surname>Livingston</surname>
            ,
            <given-names>M.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosenblum</surname>
            ,
            <given-names>L.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Julier</surname>
            ,
            <given-names>S.J.</given-names>
          </string-name>
          ,
<string-name>
            <surname>Brown</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Baillot</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Swan</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gabbard</surname>
            ,
            <given-names>J.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hix</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , et al.:
          <article-title>An augmented reality system for military operations in urban terrain</article-title>
          .
          <source>Tech. rep.</source>
          , Naval Research Lab Washington DC Advanced Information Technology Branch (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref45">
        <mixed-citation>
          45.
          <string-name>
            <surname>Manyika</surname>
            ,
<given-names>J.</given-names>
          </string-name>
          :
          <article-title>The Internet of Things: Mapping the value beyond the hype</article-title>
          .
          <source>McKinsey Global Institute</source>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref46">
        <mixed-citation>
          46.
Microsoft: Spatial mapping - Mixed Reality, https://docs.microsoft.com/en-us/windows/mixed-reality/spatial-mapping
        </mixed-citation>
      </ref>
      <ref id="ref47">
        <mixed-citation>
          47.
<string-name>
            <surname>Moss</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>20 augmented reality stats to keep you sharp in 2019</article-title>
          (Jul
          <year>2019</year>
          ), https://techjury.net/stats-about/augmented-reality/
        </mixed-citation>
      </ref>
      <ref id="ref48">
        <mixed-citation>
          48.
          <string-name>
            <surname>Nebeling</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Speicher</surname>
            ,
<given-names>M.</given-names>
          </string-name>
          :
          <article-title>The trouble with augmented reality/virtual reality authoring tools</article-title>
          .
          <source>In: Proc. of IEEE ISMAR-Adjunct</source>
. pp.
          <fpage>333</fpage>
          –
          <lpage>337</lpage>
          . IEEE
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref49">
        <mixed-citation>
          49. Oracle:
<article-title>Hotel 2025 emerging technologies destined to reshape our business</article-title>
          (
          <year>2017</year>
          ), https://www.oracle.com/webfolder/s/delivery production/docs/FY16h1/doc31/Hotels-2025-v5a.pdf
        </mixed-citation>
      </ref>
      <ref id="ref50">
        <mixed-citation>
          50.
          <string-name>
            <surname>Panetta</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Gartner top strategic predictions for 2018 and beyond</article-title>
, https://www.gartner.com/smarterwithgartner/gartner-top-strategic-predictions-for-2018-and-beyond/
        </mixed-citation>
      </ref>
      <ref id="ref51">
        <mixed-citation>
          51.
          <string-name>
            <surname>Perera</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zaslavsky</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Christen</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Georgakopoulos</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Context aware computing for the internet of things: A survey</article-title>
          .
<source>IEEE Communications Surveys &amp; Tutorials</source>
          <volume>16</volume>
          (
          <issue>1</issue>
          ),
<fpage>414</fpage>
          –
          <lpage>454</lpage>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref52">
        <mixed-citation>
          52.
          <string-name>
            <surname>Peters</surname>
            ,
<given-names>T.M.</given-names>
          </string-name>
          :
          <article-title>Overview of mixed and augmented reality in medicine</article-title>
          .
          <source>In: Mixed and Augmented Reality in Medicine</source>
          , pp.
<fpage>1</fpage>
          –
          <lpage>13</lpage>
          . CRC Press (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref53">
        <mixed-citation>
          53.
          <string-name>
            <surname>Plescia</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hui</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
<article-title>Augmented reality background for use in live-action motion picture filming</article-title>
          (Jun 6
          <year>2019</year>
          ), US Patent App.
          <volume>16</volume>
          /210,951
        </mixed-citation>
      </ref>
      <ref id="ref54">
        <mixed-citation>
          54.
          <string-name>
            <surname>Pulli</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Baksheev</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kornyakov</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Eruhimov</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
<article-title>Real-time computer vision with OpenCV</article-title>
          .
          <source>Communications of the ACM</source>
          <volume>55</volume>
          (
          <issue>6</issue>
          ),
<fpage>61</fpage>
          –
          <lpage>69</lpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref55">
        <mixed-citation>
          55.
          <string-name>
            <surname>Rajeev</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wan</surname>
            ,
            <given-names>Q.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yau</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Panetta</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Agaian</surname>
            ,
<given-names>S.S.</given-names>
          </string-name>
          :
<article-title>Augmented reality-based vision-aid indoor navigation system in GPS denied environment</article-title>
.
          <source>In: Mobile Multimedia/Image Processing, Security, and Applications 2019</source>
          . vol.
          <volume>10993</volume>
          , p.
          <fpage>109930P</fpage>
          .
          <source>International Society for Optics and Photonics</source>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref56">
        <mixed-citation>
          56.
          <string-name>
            <surname>Rauschnabel</surname>
            ,
            <given-names>P.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Felix</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hinsch</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Augmented reality marketing: How mobile AR-apps can improve brands through inspiration</article-title>
          .
          <source>Journal of Retailing and Consumer Services</source>
          <volume>49</volume>
          ,
          <fpage>43</fpage>
          –
          <lpage>53</lpage>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref57">
        <mixed-citation>
          57.
          <string-name>
            <surname>Redmon</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Divvala</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Girshick</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Farhadi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>You only look once: Unified, real-time object detection</article-title>
          .
          <source>In: Proc. of the IEEE CVPR</source>
          . pp.
          <fpage>779</fpage>
          –
          <lpage>788</lpage>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref58">
        <mixed-citation>
          58.
          <string-name>
            <surname>Robertson</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>It's 2019 - where are our smart glasses?</article-title>
          (
          <year>Jun 2019</year>
          ), https://www.theverge.com/2019/6/28/18761633/augmented-reality-smart-glasses-google-glass-real-world-big-picture
        </mixed-citation>
      </ref>
      <ref id="ref59">
        <mixed-citation>
          59.
          <string-name>
            <surname>Ruminski</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Walczak</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Semantic model for distributed augmented reality services</article-title>
          .
          <source>In: Proc. of the 22nd Web3D Conference</source>
          . p.
          <fpage>13</fpage>
          . ACM (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref60">
        <mixed-citation>
          60.
          <string-name>
            <surname>Ruminski</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Walczak</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Large-scale distributed semantic augmented reality services – a performance evaluation</article-title>
          .
          <source>Graphical Models</source>
          p.
          <fpage>101027</fpage>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref61">
        <mixed-citation>
          61.
          <string-name>
            <surname>Schilit</surname>
            ,
            <given-names>B.N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Theimer</surname>
            ,
            <given-names>M.M.</given-names>
          </string-name>
          :
          <article-title>Disseminating active map information to mobile hosts</article-title>
          .
          <source>IEEE Network</source>
          (
          <year>1994</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref62">
        <mixed-citation>
          62.
          <string-name>
            <surname>Seydoux</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Drira</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hernandez</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Monteil</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>IoT-O, a core-domain IoT ontology to represent connected devices networks</article-title>
          .
          <source>In: European Knowledge Acquisition Workshop</source>
          . pp.
          <fpage>561</fpage>
          –
          <lpage>576</lpage>
          . Springer (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref63">
        <mixed-citation>
          63.
          <source>statista.com: IoT: number of connected devices worldwide 2012-2025</source>
          , https://www.statista.com/statistics/471264/iot-number-of-connected-devices-worldwide/
        </mixed-citation>
      </ref>
      <ref id="ref64">
        <mixed-citation>
          64.
          <string-name>
            <surname>Svensson</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Atles</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Object detection in augmented reality</article-title>
          .
          <source>Masters Theses in Mathematical Sciences</source>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref65">
        <mixed-citation>
          65.
          <string-name>
            <surname>Trafton</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , MIT News Office:
          <article-title>How the brain recognizes objects</article-title>
          (
          <year>Oct 2015</year>
          ), http://news.mit.edu/2015/how-brain-recognizes-objects-1005
        </mixed-citation>
      </ref>
      <ref id="ref66">
        <mixed-citation>
          66.
          <string-name>
            <surname>Wei</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          :
          <article-title>Research progress on virtual reality (VR) and augmented reality (AR) in tourism and hospitality: A critical review of publications from 2000 to 2018</article-title>
          .
          <source>Journal of Hospitality and Tourism Technology</source>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref67">
        <mixed-citation>
          67.
          <string-name>
            <surname>Weiser</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>The computer for the 21st century</article-title>
          .
          <source>Scientific American</source>
          <volume>265</volume>
          (
          <issue>3</issue>
          ),
          <fpage>66</fpage>
          –
          <lpage>75</lpage>
          (
          <year>1991</year>
          ), https://dl.acm.org/citation.cfm?doid=329124.329126
        </mixed-citation>
      </ref>
      <ref id="ref68">
        <mixed-citation>
          68.
          <string-name>
            <surname>Wojciechowski</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cellary</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          :
          <article-title>Evaluation of learners' attitude toward learning in ARIES augmented reality environments</article-title>
          .
          <source>Computers &amp; Education</source>
          <volume>68</volume>
          ,
          <fpage>570</fpage>
          –
          <lpage>585</lpage>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref69">
        <mixed-citation>
          69.
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Han</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhao</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Meng</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Leveraging prior-knowledge for weakly supervised object detection under a collaborative self-paced curriculum learning framework</article-title>
          .
          <source>International Journal of Computer Vision</source>
          <volume>127</volume>
          (
          <issue>4</issue>
          ),
          <fpage>363</fpage>
          –
          <lpage>380</lpage>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref70">
        <mixed-citation>
          70.
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Navab</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liou</surname>
            ,
            <given-names>S.P.</given-names>
          </string-name>
          :
          <article-title>E-commerce direct marketing using augmented reality</article-title>
          .
          <source>In: Proc. of IEEE ICME 2000</source>
          . vol.
          <volume>1</volume>
          , pp.
          <fpage>88</fpage>
          –
          <lpage>91</lpage>
          . IEEE (
          <year>2000</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref71">
        <mixed-citation>
          71.
          <string-name>
            <surname>Zhu</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ong</surname>
            ,
            <given-names>S.K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nee</surname>
            ,
            <given-names>A.Y.</given-names>
          </string-name>
          :
          <article-title>A context-aware augmented reality assisted maintenance system</article-title>
          .
          <source>International Journal of Computer Integrated Manufacturing</source>
          <volume>28</volume>
          (
          <issue>2</issue>
          ),
          <fpage>213</fpage>
          –
          <lpage>225</lpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>