<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Spatial Mapping for Visually Impaired and Blind using BLE Beacons</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Alan McGibney</string-name>
          <email>alan.mcgibney@cit.ie</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Roman Pospisil</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kevin O'Mahony</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Juan Francisco M</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Nimbus Centre, Cork Institute of Technology</institution>
          ,
          <addr-line>Bishopstown, Cork</addr-line>
          ,
          <country country="IE">Ireland</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2020</year>
      </pub-date>
      <fpage>15</fpage>
      <lpage>29</lpage>
      <abstract>
        <p>This paper describes the development of a set of software services called the Context Awareness Module to support the visually impaired and blind (ViB) to construct a spatial map of their environment through the provision of context information (contextual, directional and positional cues) relating to the surrounding environment. This information is captured through the interaction of the users' smart phone and the deployment of low-cost Bluetooth beacons within the environment to identify objects, landmarks or markers. The solution aims to supplement existing methods that support mobility and navigation through complex spaces by providing an additional layer of information that describes the space, location, object or any entity that a user might come in the vicinity of or interact with. Initial validation of the proposed solution was undertaken with members of the visually impaired community and tested with an example scenario where a visually impaired person is attending a meeting at an unknown building.</p>
      </abstract>
      <kwd-group>
        <kwd>Bluetooth</kwd>
        <kwd>Location Services</kwd>
        <kwd>Mapping</kwd>
        <kwd>Software</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Based on a detailed analysis of existing trends, global projections estimate a continued
increase in people with moderate and severe vision impairment from 237.1 million
people in 2020 to as high as 587.6 million people by 2050 [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. According to the definition
of visual impairment of the World Health Organisation, currently 1.6 million people
suffer from blindness in the EU and only 5% are fully autonomous in their daily
mobility. 40% of the visually impaired suffer head level accidents at least once a month,
and 30% suffer a fall accident at least once a month. As our cities evolve and populations
continue to expand, mobility is becoming an increasingly challenging task for all
citizens; it is even more significant for a person with a visual impairment or
disability. While the Irish Disability Act 2005 states that Government departments
and public bodies must work to improve the quality of life for people with
disabilities, public spaces are at the same time being designed around the concept of shared
spaces, where there is no kerb or level difference to segregate pedestrians and vehicles.
This design approach has resulted in unexpected challenges that are adversely affecting
vulnerable citizens. Removal of the clear demarcation between paths and roadways
makes mobility significantly more challenging as drivers, cyclists and pedestrians now
all occupy the same-shared space with pedestrians relying on the principle of mutual
eye contact to navigate safely. For people with sensorial or cognitive disabilities this is
not appropriate, and it further marginalises already vulnerable citizens. Similarly, in
indoor environments, architectural and visually appealing design can often result in the
challenges faced by ViB people being overlooked; this can limit the level of
independence, increase stress and add risk for the ViB person when moving in
unfamiliar spaces. Technology can play a role in improving how the ViB community
can experience the environment around them while also ensuring safety as they
navigate through a space.
      </p>
      <p>Several systems termed as Electronic Travel Aid (ETA) have been created to
improve the autonomous mobility for ViB people however the adoption rate remains very
low. Devices such as wearable solutions (sunglasses, gloves etc) are sometimes
considered as extra prosthesis, cumbersome and stigmatising. Inaccuracy of sensor systems
that rely on a single sensor technology can diminish the confidence of the user in the
benefits of the solutions, for example ultrasound is sensitive to multi-echo and can
easily lead to wrong detections. Perception is often limited to range sensing (of the nearest
target) and as a result most systems scan the environment without interpreting it; this
provides some additional support to the user, however it does not provide sufficient
detail to allow the visually impaired person to construct a representation and
understanding of their specific situation and environment. While existing ETA help a user
to navigate and detect obstacles, there is a need to provide mechanisms that can enhance
interaction with the surrounding environment for ViB users. It is proposed that by
leveraging low-cost Bluetooth beacons and the user's smart phone it is possible to add a
layer of cognition that will allow the user to build a spatial map of the surrounding
environment and ultimately enhance personal autonomy and accessibility rather than
just providing directional information for navigation. The solution is distinct from
wayfinding or navigation and should be considered as a platform that provides additional
context about the environment itself through direct or indirect interaction. The
remainder of the paper is structured as follows: Section 2 will provide an overview of existing
approaches for navigation and interaction with the user. Section 3 will present an
overview of the proposed solution. Section 4 will provide an overview of an example use
case for the developed technology and Section 5 will conclude the paper.
</p>
    </sec>
    <sec id="sec-2">
      <title>Spatial Mapping &amp; Navigation Support</title>
      <sec id="sec-2-1">
        <title>Spatial Mapping</title>
        <p>An individual generates a spatial map using a number of different sources, the main
source of information comes from the visual system, senses such as vision, smell,
movement and hearing are all used to infer a person's location within their environment
and as they move through it. It also allows a person to create a navigation path through
or a vector that represents a person’s position and direction, specifically in comparison
to an earlier reference point. Directional cues (e.g. signs, arrows, labels) and positional
landmarks (entrances, exits, meeting point) all provide valuable input to allow a person
create a spatial map, and can be used when an individual is static and determining
movement paths, as well as dynamically while a person is moving through the
space. Positional landmarks are generally used to compare the relative position of
specific objects, whereas directional cues give information about the shape and layout of
the environment itself. We rely heavily on our vision to map our environment and move
safely. A ViB person must rely on their other senses, with touch, hearing and
smell becoming the more dominant senses in mapping their environment, and they use
items such as a long cane as an obstacle detector or a guide dog as an obstacle avoider.
Wall edges and kerbs are used as navigational tools and support the straight-line principle.
In addition, over 80% of persons registered blind have some residual vision, and as
such colour contrast enhances perception and aids way finding. Textured surfaces can
act as a warning and indicate particular types of situations including pedestrian
crossings and location of stairs or escalators. For a ViB person, to create a mental model or
representation of the environment around them they must decode and aggregate
information about their relative location and leverage knowledge of attributes of the spatial
environment. This is generally built dynamically, firstly by creating a bearing map,
which represents space through self-movement and gradient cues; for example, using a
cane can create a rough 2D map of the environment. This can be combined with specific
positional cues to sketch a mental map, integrating specific objects or landmarks
with their relative locations to create a “mind's eye” view of the environment. The process
of navigating for ViB can be mentally exhausting particularly in unfamiliar
environments.
</p>
      </sec>
      <sec id="sec-2-2">
        <title>Navigation Support Tools</title>
        <p>Navigation and wayfinding GPS applications such as Google maps have been adopted
for many years by the mainstream for independent travel when mapping data and
satellite transmission is available. Whereas popular outdoor navigation apps such as
Ariadane1, GetThere and BlindSquare previously have been developed specifically for
people with visually impairments. Most smartphones and tablet devices are GPS and
Bluetooth enabled, therefore allowing developers to create applications, which take
advantages of location-based technologies and services. Taking advantage of mobile
devices that are already embedded with GPS and positional sensing technologies
(gyroscope, accelerometer, digital compass, IMU etc.) can be cost effective as it eliminates
the requirement to procure, install and maintain tracking and sensing technologies.
However, although GPS is the most widely used real-time location system, it relies on
continuous signal transmission from several satellite sources, and therefore does not work
well indoors or within closed environments where there is significant signal
interference. In addition, orientation supported by GPS can be inaccurate and disorientating
for the user as a result. Within a closed indoor setting and where navigational and
contextual audio-based information needs to be triggered at a more precise location and
time, alternative location tracking technologies and methods need to be considered. For
example, without precisely tracking a mobile device’s location and pose (proximity and
orientation) relative to a point of interest as the user moves, it would be difficult to
provide contextual audio-based information to be played at the
right time and moment. For indoor location tracking, most systems are based upon using
wireless technologies such as Wi-Fi, Bluetooth, ultra-wideband (UWB), and
radio-frequency identification (RFID). Most indoor location and positional tracking systems
use wireless sensor nodes such as tags that emit signals (beacons); typically, points of
interest or optimal communication areas are embedded with or have attached tags or badges
(iBeacons, RFID tags) that broadcast signals to receivers (mobile devices). There are
more accurate indoor tracking systems such as the Decawave DW-1000 UWB chip
which can achieve high precision tracking of between 10-30cm, however this
technology has not become widespread as the hardware is not yet positioned as low cost for
mainstream consumers and most smartphones are not UWB enabled. The selection of
technology is dependent on several factors: accuracy required for application specific
needs, battery lifetime, cost of installation and maintenance and ease of integration with
other processes or systems.
</p>
      </sec>
      <sec id="sec-2-3">
        <title>BLE Beacons</title>
        <p>
Bluetooth Low Energy (BLE) beacons have been widely used for indoor tracking:
once a receiver (mobile device) is in proximity of a beacon, content can be
triggered, and its position can be tracked if within range of two or more beacons by
processing the distance data. With BLE, location-tracking accuracy can vary but can
be &lt;1.5m, and the beacons are easy to install and maintain and affordable. Real-time
indoor location services (RTLS) have begun to gain wider traction in many
industry domains, where there are many examples from airports and hospitals taking
advantage of BLE beacons to help users navigate large indoor spaces, to retailers
providing directed, personalized marketing content to shoppers entering their stores. A
number of studies have focused on detailed analysis of BLE accuracy in indoor
environments and have demonstrated sub-meter accuracy can be achieved [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ], however this
can vary significantly across different environments and other aspects such as
positioning and orientation of the phone on a person's body can reduce the ability to achieve
fine grained positioning information. For the application under consideration providing
inaccurate positioning information has a much greater adverse effect on a user who is
ViB (from a safety perspective). As such, the focus of the work presented is not to
improve the accuracy of BLE localisation but rather to investigate how solutions can
leverage existing proximity data to trigger the provision of key information relating to the
surrounding environment for the ViB user. For spatial mapping BLE beacons provide
sufficient accuracy for satisfying the criteria to trigger contextual information when the
ViB person is within defined proximities of indoor areas (reception, halls, stairwells,
room and toilets) and points of interest (doors, signage as potential collision risks).
Proximity detection conditions can be determined by adjusting the beacon's antenna
power, so beacons can be set to varying proximity ranges (2m, 10m, 70m);
however, it has to be noted that if the beacon antenna is powered up for a longer
proximity range, the lifetime of the device is reduced to only several months, while
environmental factors (temperature, beacon placement) will also affect power
consumption and reliability of the beacons.
        </p>
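        <p>The proximity ranges above rest on an RSSI-to-distance relationship that is commonly approximated with the log-distance path-loss model. The sketch below is illustrative only (the function names, the calibrated 1&#160;m power and the zone thresholds are assumptions, not values from the deployed system) and shows how a raw RSSI reading could be mapped onto the coarse iBeacon-style zones:</p>

```python
def estimate_distance(rssi, tx_power=-59, path_loss_exponent=2.0):
    """Estimate distance in metres from an RSSI sample using the
    log-distance path-loss model; tx_power is the calibrated RSSI
    measured at 1 m from the beacon (assumed value here)."""
    return 10 ** ((tx_power - rssi) / (10 * path_loss_exponent))

def proximity_zone(rssi, tx_power=-59):
    """Map an RSSI reading onto coarse iBeacon-style zones;
    the distance thresholds are illustrative."""
    distance = estimate_distance(rssi, tx_power)
    if distance < 0.5:
        return "immediate"
    if distance < 3.0:
        return "near"
    return "far"
```

        <p>In practice the path-loss exponent varies with the environment (typically 2&#8211;4 indoors), which is one reason the module treats these zones as triggers for context information rather than as precise positions.</p>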
      </sec>
      <sec id="sec-2-4">
        <title>Other Tools</title>
        <p>
Markers and fiducials can be used to provide additional information. QR codes have
become widespread on products and adverts, where a person can use their mobile device
camera (QR reader) to access further information such as triggering information
exchange or even an interactive experience using mobile applications. Essentially,
marker-based applications use a device's camera to estimate the position of the device
(center point, orientation, range) based upon what it is “seeing”, such as the visual
information attained from the fiducial marker. Markers such as QR codes, have a unique
predefined shape and pattern that can be easily detected in low lighting conditions and
easily printed to be attached to a point of interest. Markers can be an inexpensive and
technically simple method for gathering the device's position and therefore provide a
very accurate positional cue. For example, BlindSquare has a QR reader built in to
their app, where they have developed a super-set of the QR barcode matrix purpose
built to be more accessible for ViB people when acquiring (scanning) a QR code. For
example, the app provides audible and haptic feedback to the user while they are
searching and acquiring a QR code. In use cases presented [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ] where the BlindSquare QR reader
is demonstrated, QR codes are printed and attached to doors; the user has to
find and scan the QR code on the door, and information associated with the room
(room name, purpose, members of staff who work there) is read aloud (VoiceOver TTS)
to the user. The QR codes are placed at optimal locations above the door handle on each
door as an early required skill for cane-travel is to trail walls, discern doors and locate
door handles, so placing illuminating information nearby is helpful. While
BlindSquare also aims to aid ViB people in finding and scanning QR codes through audio and
haptic cues, this still requires manual effort and explicit interaction that is not so
intuitive for the user. Natural feature tracking (NFT) is an image-based tracking method
that recognizes and tracks natural features (edges, corners, patterns etc) within a scene
or object (building, ornament etc.). Therefore, to the user this is a marker-less tracking
method, as there is no identifiable fiducial marker (QR
code, ID marker) to scan. NFT extracts key-point descriptors that are associated with an
image captured from a camera; these key points are then used to query a database to identify
matching images and thus infer a potential position. Using 3D object recognition
and augmented reality visioning systems the physical world and contextual information
can be rendered more visible to people with vision impairments, e.g. objects and
signage could be enhanced through rendering that increases the colour contrast, tone, dimensions
or brightness of images based upon a particular person's visual impairment condition
type. Augmented Reality glasses such as OxSight and AceSight have been developed
specifically for people with vision impairments. Simultaneous Localisation and
Mapping (SLAM) is a more complex and progressive computer vision method that is
currently a very popular topic within the computer vision community. Through a SLAM
system and process a device can create a map of its surroundings whilst at the same
time have the capability to localize (position and orientation) itself within the map.
        </p>
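        <p>The database query step of NFT can be illustrated with a toy matcher over binary key-point descriptors (of the kind produced by detectors such as ORB or BRIEF). This is a hypothetical pure-Python sketch, not the pipeline of any product named above: each query descriptor is matched to its nearest database entry by Hamming distance, and weak matches are discarded.</p>

```python
def hamming(d1: int, d2: int) -> int:
    """Hamming distance between two binary descriptors packed as ints."""
    return bin(d1 ^ d2).count("1")

def match_descriptors(query, database, max_dist=10):
    """For each query descriptor, find the closest (image_id, descriptor)
    entry in the database; keep only matches within max_dist bits."""
    matches = []
    for q in query:
        image_id, descriptor = min(database, key=lambda e: hamming(q, e[1]))
        if hamming(q, descriptor) <= max_dist:
            matches.append((q, image_id))
    return matches
```

        <p>A real system would match hundreds of descriptors per frame and then estimate the device pose from the geometry of the matched key points.</p>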
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Context Awareness Module</title>
      <p>The context awareness module is a set of software services that enables interaction
between Bluetooth Low Energy (BLE) devices deployed in the environment, the user via
a mobile application and the provision of audio feedback. The objective is to provide
positional and directional cues in a format that is easily configured, interpreted, and
used to build a spatial map of the surrounding space.
</p>
      <sec id="sec-3-1">
        <title>System Architecture</title>
        <p>Fig 1. provides a high-level representation of the context aware module. The module
provides common functionality for the interaction of existing BLE beacons and devices
while also provide an extension point for integration with other applications and
services. The context awareness modules are available across multiple platforms including
Android and iOS.
The base context services and libraries were developed using the Xamarin framework
which supports cross-platform compatibility. This included the development of a front
end to support testing and evaluation of the services. In addition, a separate set of
libraries was developed using Swift and Objective-C specifically for the iOS platform,
to support the integration of the modules with 3rd-party iOS applications. The
module consists of four main components. Firstly, all interaction is location driven,
so libraries to estimate the location of the devices were developed; once location is
established, the next component maps this to specific context data. The last two
components support management of the system and user interaction.
</p>
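        <p>The flow between these components can be sketched as a minimal pipeline; the class and field names below are illustrative only (the actual services are Xamarin and Swift libraries, not Python):</p>

```python
from dataclasses import dataclass

@dataclass
class LocationEstimate:
    """Output of the location component for one trusted beacon."""
    beacon_id: str
    proximity: str  # "immediate" | "near" | "far"

class ContextAwarenessModule:
    """Location estimates drive a lookup of context data, whose
    result is handed to the user-interaction component."""
    def __init__(self, context_map):
        self.context_map = context_map  # beacon_id -> context descriptor text

    def on_advertisement(self, estimate: LocationEstimate):
        text = self.context_map.get(estimate.beacon_id)
        if text is None:
            return None  # unregistered beacon: ignored
        return f"[{estimate.proximity}] {text}"
```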
      </sec>
      <sec id="sec-3-2">
        <title>Location Services</title>
        <p>Location services were developed to leverage existing location capabilities available
on smart phone platforms (iOS and Android), these include extracting sensor data such
as GPS, accelerometer, compass and other location services that may be available on
the mobile platforms. This data is fused with the scanning of BLE advertisement
packets using existing protocols (iBeacon and Eddystone) that are generated from devices
deployed in the surrounding environment and registered with the system. Leveraging
these raw data sets, a number of localisation algorithms were investigated and developed
to fuse various sources of data and to provide an estimate of the user's location (i.e.
proximity to the beacon). Localisation approaches generally incorporate prior knowledge of
the environment, sensor location, coverage fingerprinting and utilise techniques such
as map filtering to improve positioning accuracy. BLE provides less precision, however it
offers a sufficient level of accuracy in terms of proximity to the device (far, near,
immediate) utilising the received signal strength indicator (RSSI) and other metrics. If there are
multiple beacons present in the space, techniques such as triangulation can be used to
provide a more accurate estimate of position. While running initial tests with potential end
users, privacy was highlighted as a key requirement. To ensure user privacy is
maintained, the context-aware modules were developed with the following requirements: the
system does not record or maintain any historical data on location information; the
location estimation is calculated in real-time based on the live information extracted from
the environment. The services do not record any identifiable information relating to the
user or their personal devices, to protect user identity. Only pre-defined beacons are
used in processing the user’s location, i.e. only “trusted” beacons that have been
registered with the system are used for estimating the user’s proximity/location. The module
only operates in beacon mode so that it does not create any persistent connections to
external devices or services. From a data processing point of view, the processed data, i.e.
location information/history, is not stored locally or on a cloud server; once used to
provide context data it is purged from memory.
</p>
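        <p>A minimal sketch of this privacy-preserving scanning behaviour follows (names are illustrative; the real libraries additionally fuse GPS, compass and accelerometer data): only registered beacons contribute to the estimate, a short sliding window smooths the noisy RSSI, and all transient data is purged once used.</p>

```python
from collections import deque

class LocationService:
    """Processes BLE advertisements from trusted beacons only and
    keeps no history beyond a small smoothing window."""
    def __init__(self, trusted_ids, window=5):
        self.trusted = set(trusted_ids)
        self.window = window
        self.samples = {}  # beacon_id -> recent RSSI samples

    def on_scan(self, beacon_id, rssi):
        if beacon_id not in self.trusted:
            return None  # only "trusted" registered beacons are used
        buf = self.samples.setdefault(beacon_id, deque(maxlen=self.window))
        buf.append(rssi)
        return sum(buf) / len(buf)  # smoothed RSSI for proximity estimation

    def purge(self):
        """Discard all transient data once context has been delivered."""
        self.samples.clear()
```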
      </sec>
      <sec id="sec-3-3">
        <title>Context Services</title>
        <p>
The context services use the estimated location information to drive the provision of context
information, through a combination of predefined meta-data capturing beacon locations,
environmental layout and relevant environment/object descriptors. From a performance
perspective the application manages data by a combination of locally caching context
information and context services running in a cloud environment. The context services
essentially contain meta-data and information on the locations, e.g. buildings, floors or
areas and the beacons, their position and mapping of the context data or action (i.e. user
notification) to these devices. When defining the content of context descriptors, it is
important to consider how a person can build an image of the environment. The spatial
map can be characterised based on the following features of the environment: paths that
provide “straight lines” through a city or environment; edges such as walls, kerbs and
building boundaries that can be followed and guide a person; nodes which
represent focal points for people, such as crossing points, door entrances, exits or lifts; and
zones, large areas where people can congregate (meeting rooms, reception areas,
parks). While a cane can be used to detect an object, touch is the main source of
information and provides insight to the height, size, type of object that is in proximity.
People often rely on others to provide a description of a room or space to help construct a
representation of the zone; this can be static information about the layout of the room, the position
of tables, where sockets are located, things to avoid etc. Any potential risk that may
reside in a space needs to be highlighted to the ViB person, e.g. steps down, or what to
avoid on a circulation route. Generally, there is a need to provide information that enables the
user to feel safer and more confident, and this has to be driven by easier interaction with an
emphasis on simplicity. The context awareness module focuses on delivering spatial
contextual information to enhance wayfinding information. This is provided as the
person’s location is gathered, along with their proximity to points of interest and objects (potential
collision risks) and descriptions of their physical surroundings (space, layout,
location of furniture etc.). Spatial contextual awareness has been defined as information such
as an individual's location, activity, the time of day, and proximity to other people or
objects and devices [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]; our approach supplements this to also include a description of
the functionality of objects in the environment (e.g. opening configuration of doors,
width, height of objects). As such it aligns with the definition provided by [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ] that
specifies any information that can be used to characterize the situation of an entity,
where entity means a person, place, or object, which is relevant to the interaction
between a user and an application. Presenting contextual information to the person must
be relevant to the user’s current task and situation. Therefore, for a ViB person visiting
an unfamiliar environment for the first time it is necessary to provide spatial contextual
based information to enable them to build a mental representation of their surrounding
environmental features, while also providing usability information in order to complete
tasks (opening doors, lifts, using furniture etc).
        </p>
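        <p>The path/edge/node/zone characterisation above lends itself to a simple context-descriptor record. The sketch below is a hypothetical rendering of such a model, not the schema used by the implemented services:</p>

```python
from dataclasses import dataclass, field

@dataclass
class ContextDescriptor:
    """One entry of the context information model."""
    feature_type: str  # "path" | "edge" | "node" | "zone"
    name: str
    description: str
    risks: list = field(default_factory=list)  # hazards to announce explicitly

def describe(entry: ContextDescriptor) -> str:
    """Render a descriptor as the text passed to text-to-speech."""
    text = f"{entry.name}: {entry.description}"
    if entry.risks:
        text += " Caution: " + "; ".join(entry.risks) + "."
    return text
```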
      </sec>
      <sec id="sec-3-4">
        <title>User Interaction</title>
        <p>Once context information is constructed driven by a user’s location it must be provided
back to the user, the focus was on providing audio-based feedback via the user’s smart
phone. As such an application was developed that used text to speech which
automatically converted the context data to audio relayed to the user via headphones or speaker.
Through engagement with the ViB community it was highlighted that audio feedback
should not mask other sounds from the environment that are currently used for mobility
(e.g. listening for cars, signals at traffic lights), being aware of your surroundings during
outdoor environments specifically is a necessity for safe navigation. To address this
concern the use of bone conducting headphones to relay audio back to the user was
investigated. These headphones are positioned on the cheekbone and do not seal
the ear canal; this allows a wearer to hear other sounds or potential hazards coming
from the environment while also receiving the audio cues from the context awareness
services. It is envisaged that further modes of feedback, such as haptics, will also be used
to provide specific cues to the end user driven by the location information. To support
validation, the mobile application incorporated a map of the environment where beacons
are deployed, on which the estimate of the user's location is placed; a
list of beacons within proximity was also included to show the ID and quality of the
signal received, as well as the estimated proximity to each beacon. To simplify the
specification and collection of context data, a context information model was defined; this
provides a common representation of the data as it is captured, prioritised, and relayed to
the end user. The model enables more flexibility in how context information is defined
by the deployer and delivered to the end user, e.g. prioritise information based on
distance to an object. The model can be linked to different layers of the environment,
building, floor, regions, objects, or beacon proximities.
</p>
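        <p>The distance-based prioritisation mentioned above could, for example, be sketched as a sort that always speaks collision risks first and otherwise reads the nearest cue first (the dictionary keys are illustrative assumptions):</p>

```python
def prioritise_cues(cues):
    """Order pending context cues: collision risks first, then by
    increasing distance to the user."""
    return sorted(cues, key=lambda c: (not c.get("risk", False), c["distance"]))
```

        <p>Sorting on a (risk, distance) key keeps the rule explicit and easy to extend, e.g. with a per-layer priority for building, floor or object cues.</p>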
      </sec>
      <sec id="sec-3-5">
        <title>Content Management System</title>
        <p>To support the management of BLE infrastructure a web-based content management
system was developed, this allows the user (e.g. deployer of BLE beacons) to map the
real position of BLE beacons to locations mapped out in the environment the context
awareness module will operate in.
For example, for an indoor environment a user can define a set of beacons along typically
used paths and specify the type of interaction expected by the end user. Fig. 2 (top
screen) presents the user interface to define indoor destinations that are linked to a
particular building and floor. This information allows the interaction algorithms to not
only estimate the location from a coordinates perspective but to link the user's position
to a more descriptive representation of where in the environment they are, such as a room
number, name or area description. Fig. 2 (bottom screen) provides a view of the interface
listing the proximities or beacon identifiers; this captures the unique identifier of the
beacon and positions it within the environment, which can be used to infer the user's location
when an advertisement packet is received identifying a particular beacon. It also
provides the list of beacons that are considered by the application, so that not all beacons that
may be deployed in the environment are scanned; scanning is limited to
specific devices only.</p>
        <p>Fig. 3 shows how the user via the content management system captures the context
information model. The configuration is linked to a parent attribute (proximity, floor,
building etc.) and stored as part of the context-aware services. Entries can be updated at
any time and adjusted as needed by the user; the context services will refresh their cached
data automatically, meaning the ViB person will always have the
most up to date and relevant information regarding the environment. This flexibility is
essential particularly in scenarios where dynamic obstacles can be moved to new
locations or new configurations of spaces might be common (e.g. event or meeting room).
For indoor environments it is possible to define proximities within the structure using
local coordinates; this requires a geometric representation of the building or
environment where the beacons will be deployed. In addition, where the environment
description is not available, the beacon positions can be defined using GPS coordinates; the
positions can then be converted to local coordinates if a representation of the building
becomes available. These positions provide a visual context for the deployer to support
the planning and setting up of the context path in a site-specific scenario.
</p>
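        <p>The GPS-to-local conversion can be sketched with a flat-earth (equirectangular) approximation, which is adequate over the scale of a single site; the function name and origin handling are illustrative, not the implemented routine:</p>

```python
import math

def gps_to_local(lat, lon, origin_lat, origin_lon):
    """Convert a GPS fix to local (x, y) metres relative to a chosen
    building origin using an equirectangular approximation."""
    r = 6371000.0  # mean Earth radius in metres
    x = math.radians(lon - origin_lon) * r * math.cos(math.radians(origin_lat))
    y = math.radians(lat - origin_lat) * r
    return x, y
```

        <p>Over a few hundred metres the approximation error is negligible compared with beacon proximity ranges, so it is sufficient for placing beacons on a site plan.</p>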
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Use Case Example</title>
      <p>The following scenario was considered as an example of how the context-awareness
module in an indoor context. A person who has sight loss has confirmed that they will
be attending a meeting at a facility they have never been to before. They have contacted
the meeting coordinator who has scheduled the meeting and has also gathered any
requirements they may have to aid their appointment. Prior to the
meeting, the building administrator will use the content management system and application
to specify where beacons are deployed and provide the configuration needed to
facilitate the provision of audio messages (wayfinding instructions, meeting room contextual
information, collision risk alerts) to aid the ViB user’s visit within the unfamiliar indoor
environment. The objectives are as follows:
• Provide a mechanism that offers ViB users a customised, intuitive, and independent
way of getting around an indoor facility.
• Provide meaningful audio descriptors that inform the user about the environment's
characteristics and context (space/room function, size, layout, objects therein, etc.).
• Alert users to potential collision risks within the environment (head collision, slip
hazards).</p>
<p>Firstly, a spatial map is specified within the context of the target building (Fig. 4).
This outlines how a user may move through the space, the level of granularity required
for context information, and the types of interaction that may be needed. Directional
and positional cues are captured based on a review of the building, including
pre-existing cues such as tactile mats and the definition of entrances, doors, and
potential risks and hazards. Any mobile objects deployed in the environment are tagged
with a specific beacon. Potential navigation paths are outlined and generated based on
point-to-point trajectories between nodes, zones and landmarks. Contextual description
transcripts for the various indoor spaces (reception area, corridors, meeting room,
toilets etc.) were specified. Information relating to navigation followed the open
standard ITU-T F.921 (03/2017), Audio-based network navigation system for persons with
vision impairment, which provides recommendations on how audio-based navigation systems
can be designed to be inclusive and meet the needs of persons with visual impairments.
The placement of beacons and their proximity ranges need to be carefully considered and
optimised to ensure that appropriate contextual audio-based information is triggered at
the right time and location. For example, it would not be advisable to set a beacon's
trigger range to 20 m for contextual audio information related to a specific room door
in a large space containing many other doors, as it would be difficult for the ViB
visitor to determine which door the information relates to. Conversely, when approaching
a large building outdoors, a larger range can be preferable, for example triggering when
the visitor comes into proximity of the building or site. When configuring a building,
it is recommended to review the topology of the building's spaces and, for each area and
point of interest, determine the proximity ranges and conditions for triggering playback
of wayfinding and contextual audio-based information.
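The per-point-of-interest range tuning described above could be captured in a simple configuration table. This is a hypothetical sketch, not the module's actual data model; the type names and range values are illustrative assumptions:

```python
# Illustrative trigger ranges per point-of-interest type: tight ranges
# where many similar targets sit close together (room doors), wide ranges
# for outdoor approaches to a building or site.
TRIGGER_RANGE_M = {
    "room_door": 2.0,            # many doors nearby: keep the range tight
    "corridor_junction": 5.0,
    "reception_zone": 8.0,
    "building_entrance": 20.0,   # outdoor approach: a wide range is acceptable
}

def should_trigger(poi_type, estimated_distance_m):
    """Fire contextual audio only inside the range configured for this POI;
    fall back to a 5 m default for unconfigured types."""
    return TRIGGER_RANGE_M.get(poi_type, 5.0) >= estimated_distance_m
```

Keeping these values in configuration rather than code mirrors the deployer-driven setup described above, since ranges can then be revised after a site review without redeploying the application.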
Fig. 5 provides an example of a zone in the target building where beacons were deployed
to provide additional context information. The reception area is unmanned, and several
obstacles are present, including low-level furniture, plants, chairs and display cases,
that need to be highlighted to the user.
The context descriptors aligned to Fig. 5 are defined as follows; the audio is generated
based on proximity to the entrance and follows the flow of messages as the user enters
the main door of the building: 1. “You have arrived at the entrance of the [Building]
reception area heading towards the reception desk.” This provides both a positional cue
in terms of location and a directional cue. 2. “Please be aware of the carpet mat and
furniture just ahead of you, located in the centre of the reception area.” This informs
the user of a collision risk. 3. “The reception desk is located straight ahead. Located
to the left of the entrance are accessible toilets.” This message provides information
relating to the surroundings. 4. “Located directly left of the reception desk is a
secure double door leading to the corridor on the ground floor.” The final message
delivers information about the next possible course of action and highlights an
intersection point between zones/spaces that must be considered. Beacons are then
strategically placed at other points in the building, at the entrances to new spaces.
Emphasis was placed on providing context information relating to high-risk objects such
as stairwells and dynamic obstacles introduced into the environment. Beacons are
deployed in these zones and attached to obstacles (e.g. the floor sign depicted in the
bottom section of Fig. 4) and mapped to specific context descriptors such as “[Collision
Risk] Caution, wet floor sign directly ahead, proceed with caution.” The following flow
of events is enabled through the use of the context-awareness module:
• The building is already equipped with BLE beacons, which are already mapped to
specific contextual data as described above.
• The ViB person downloads and installs the mobile application on their smart phone
prior to arrival. The context services download the meta-data and context information
based on regional location.
• When the user arrives at the building, they come into range of a beacon and its
signal is received; an estimated location is calculated and the associated contextual
information is generated and provided to the ViB person (via headset or phone speaker).
• When the user comes into proximity of specific objects (doors, posters, tactile
indicators), they are provided with an audio descriptor; further interaction is
supported via user touch.
• The user can find the meeting room location. Furthermore, the user is provided with
contextual audio descriptors (where am I, describe surroundings) that allow them to
build a spatial map of their surroundings.
• The user can navigate and explore their environment confidently and independently.
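The flow of events above could be sketched as a single handler per beacon sighting. This is a minimal sketch assuming an in-memory context store keyed by beacon ID; the store contents, names, and the `speak` callback are illustrative assumptions, not the actual module API:

```python
# Hypothetical cached context data, downloaded ahead of the visit and
# keyed by beacon identifier.
CONTEXT_STORE = {
    "beacon-entrance": "You have arrived at the entrance of the reception area.",
    "beacon-mat": "[Collision Risk] Caution, carpet mat and furniture ahead.",
}

def on_beacon_sighting(beacon_id, rssi, speak=print):
    """Handle one beacon sighting: look up cached context for the detected
    beacon and deliver it as audio if found. Returns True if delivered."""
    message = CONTEXT_STORE.get(beacon_id)
    if message is None:
        return False      # no context configured for this beacon
    speak(message)        # in the real app: text-to-speech via headset/speaker
    return True

on_beacon_sighting("beacon-mat", -62)
```

In practice the `speak` callback would be the platform text-to-speech service, and the store would be refreshed by the context services as described earlier.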
As part of a user-centred design process, an initial qualitative evaluation of the
proposed solution was undertaken with a number of representative users (ViB
individuals) as part of an observational study. The users operated the system under
real conditions, allowing us to understand the benefit of the solution from a
technology and usability perspective. This provided valuable feedback that was used to
inform subsequent technology design iterations. Initial tests demonstrated the need to
reduce the amount of information being delivered to the user: the information was
initially very descriptive, but given users' mobility patterns the time needed to
deliver this level of detail exceeded the time they spent at a given location, so users
had often already moved to another part of the space and received information that was
no longer relevant to their current position. This also increased the cognitive load on
the user. This was partly addressed by modifying the triggers within the context model,
i.e. the administrator was able to provide short bursts of information at different
proximities (far, near and immediate) to the beacons, as well as prioritise critical
messages such as collision risks. It was also found that the responsiveness between the
user's actions and the provision of context data was influenced by the type of device
the user had and its location on their person (e.g. in a pocket, in hand). As such, it
is not possible to use BLE beacons alone to provide precision navigation steps;
however, they offer sufficient accuracy for the provision of additional descriptive
information that allows the user to understand how they could move through and interact
with the space they are in. This deployment provides a testbed environment for
evaluating the capabilities of the context-awareness module, and further tests will be
carried out in collaboration with ViB people to ensure the solution is useful and
reliable for the end user.</p>
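<p>The mitigation described above, short bursts at far/near/immediate proximities with collision risks prioritised, could be sketched as follows. This is a hedged illustration using the common log-distance path-loss model for BLE; the calibration constants, tier thresholds, and message-tagging convention are assumptions, not values from the deployment:

```python
TX_POWER_DBM = -59   # assumed RSSI measured at 1 m from the beacon
PATH_LOSS_N = 2.0    # assumed path-loss exponent (free-space-like)

def estimate_distance_m(rssi):
    """Rough distance estimate from RSSI via the log-distance model."""
    return 10 ** ((TX_POWER_DBM - rssi) / (10 * PATH_LOSS_N))

def proximity_tier(rssi):
    """Classify a sighting into the far / near / immediate tiers."""
    d = estimate_distance_m(rssi)
    if d > 10.0:
        return "far"
    if d > 2.0:
        return "near"
    return "immediate"

def next_message(queue):
    """Deliver collision-risk alerts before ordinary descriptors."""
    ranked = sorted(
        queue, key=lambda m: 0 if m.startswith("[Collision Risk]") else 1
    )
    return ranked[0]
```

Because RSSI is noisy and device-dependent, as the evaluation noted, such estimates would in practice be smoothed over several sightings rather than acted on per packet.</p>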
<p>Additional tests are required with a wider cohort of users from the ViB community
so that a broader performance assessment can be conducted with individuals who have
different capabilities, expectations and usage requirements, ensuring the solution can
adapt to their specific needs. Personalisation is therefore an important criterion, as
every individual has different capabilities and needs. This, however, raises another
critical consideration: protecting the privacy of the user. While personalisation is
required, it must be delivered in a privacy-preserving manner (e.g. leveraging edge
processing, anonymisation etc.), which will influence the system architecture.</p>
    </sec>
    <sec id="sec-5">
      <title>Conclusion and Future Work</title>
<p>The context-awareness module leverages low-cost BLE devices and existing
infrastructure to provide additional cues and information to a ViB person that can
support them in building a spatial map of the environment they are moving through. This
has the potential to give the user more confidence when moving through and interacting
with unfamiliar environments, and to offer a better experience in these spaces,
including greater awareness of their surroundings and safer mobility. Future work
includes the integration of the context-awareness module with other modes of
interaction and sensors, for example touch, that can generate events and automate
interaction with other smart connected systems (e.g. seamless access control). In
addition, the use of BLE has gained significant attention due to the COVID-19 pandemic:
it has obvious applications in contact tracing, and a number of protocols have emerged
that extend existing BLE and localisation approaches for this purpose in a
privacy-preserving manner. The solution proposed here can be extended to this
application. It also provides a mechanism to support spatial analysis and utilisation
management for indoor environments, i.e. it can be used to understand patterns of use
within buildings, to provide information to users on how to navigate and interact with
the environment under constraints such as social distancing rules, and to support
organisations in digitising space management, workflows, site access traceability,
etc.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgement</title>
<p>The work presented relates to EU project INSPEX, which received funding from the
EU's Horizon 2020 research and innovation programme under grant agreement No 730953,
and from the Swiss State Secretariat for Education, Research and Innovation (SERI)
under Grant 16.0136.</p>
    </sec>
  </body>
  <back>
  </back>
</article>