



Spatial Mapping for Visually Impaired and Blind using BLE Beacons

   Alan McGibney[0000-0002-0665-2005], Roman Pospisil, Kevin O’Mahony, Juan Francisco
                        Martinez and Susan Rea[0000-0002-4388-661X]

               Nimbus Centre, Cork Institute of Technology, Bishopstown, Cork, Ireland
                                   alan.mcgibney@cit.ie



Abstract. This paper describes the development of a set of software services called the Context Awareness Module to support the visually impaired and blind (ViB) in constructing a spatial map of their environment through the provision of context information (contextual, directional and positional cues) relating to the surrounding environment. This information is captured through the interaction of the user's smart phone with low-cost Bluetooth beacons deployed within the environment to identify objects, landmarks or markers. The solution aims to supplement existing methods that support mobility and navigation through complex spaces by providing an additional layer of information that describes the space, location, object or any entity that a user might come into the vicinity of or interact with. Initial validation of the proposed solution was undertaken with members of the visually impaired community and tested with an example scenario where a visually impaired person attends a meeting at an unknown building.

            Keywords: Bluetooth, Location Services, Mapping, Software.


 1          Introduction

Based on a detailed analysis of existing trends, global projections estimate a continued increase in people with moderate and severe vision impairment, from 237.1 million people in 2020 to as high as 587.6 million people by 2050 [1]. According to the World Health Organisation's definition of visual impairment, 1.6 million people currently suffer from blindness in the EU and only 5% are fully autonomous in their daily mobility; 40% of the visually impaired suffer head-level accidents at least once a month, and 30% suffer a fall at least once a month. As our cities evolve and populations continue to expand, mobility is becoming an increasingly challenging task for all citizens, but the challenge is even more significant if a person has a visual impairment or disability. While the Irish Disability Act 2005 states that Government departments and public bodies must work to improve the quality of life for people with disabilities, public spaces are increasingly being designed around the concept of shared spaces, where there is no kerb or level difference to segregate pedestrians and vehicles. This design approach has resulted in unexpected challenges that adversely affect vulnerable citizens. Removal of the clear demarcation between paths and roadways makes mobility significantly more challenging, as drivers, cyclists and pedestrians now all occupy the same shared space, with pedestrians relying on the principle of mutual eye contact to navigate safely. For people with sensorial or cognitive disabilities this is not appropriate, and it further marginalises already vulnerable citizens. Similarly, in indoor environments, architectural and visually appealing design can often result in the challenges faced by ViB being overlooked; this can limit independence, increase stress and add risk for the ViB person when moving in unfamiliar spaces. Technology can play a role in improving how the ViB community experience the environment around them while also ensuring their safety as they navigate through a space.
   Several systems, termed Electronic Travel Aids (ETAs), have been created to improve autonomous mobility for ViB people; however, the adoption rate remains very low. Wearable devices (sunglasses, gloves etc.) are sometimes considered an extra prosthesis, cumbersome and stigmatising. The inaccuracy of systems that rely on a single sensor technology can diminish the user's confidence in the benefits of the solution; for example, ultrasound is sensitive to multi-echo and can easily lead to false detections. Perception is often limited to range sensing (of the nearest target), and as a result most systems scan the environment without interpreting it; this provides some additional support to the user but does not provide sufficient detail to allow the visually impaired person to construct a representation and understanding of their specific situation and environment. While existing ETAs help a user to navigate and detect obstacles, there is a need for mechanisms that can enhance interaction with the surrounding environment for ViB users. It is proposed that by leveraging low-cost Bluetooth beacons and the user's smart phone, it is possible to add a layer of cognition that allows the user to build a spatial map of the surrounding environment and ultimately enhance personal autonomy and accessibility, rather than just providing directional information for navigation. The solution is distinct from wayfinding or navigation and should be considered a platform that provides additional context about the environment itself through direct or indirect interaction. The remainder of the paper is structured as follows: Section 2 provides an overview of existing approaches for navigation and interaction with the user; Section 3 presents an overview of the proposed solution; Section 4 provides an example use case for the developed technology; and Section 5 concludes the paper.


 2          Spatial Mapping & Navigation Support

 2.1        Spatial Mapping

An individual generates a spatial map using a number of different sources. The main source of information comes from the visual system, but senses such as vision, smell, movement and hearing are all used to infer a person's location within their environment as they move through it. These sources also allow a person to create a navigation path, or a vector that represents the person's position and direction, specifically in comparison to an earlier reference point. Directional cues (e.g. signs, arrows, labels) and positional landmarks (entrances, exits, meeting points) all provide valuable input that allows a person to create a spatial map, and can be used both when an individual is static and determining movement paths, and dynamically while a person is moving through the space. Positional landmarks are generally used to compare the relative position of specific objects, whereas directional cues give information about the shape and layout of the environment itself. We rely heavily on our vision to map our environment and move safely. A ViB person must instead rely on their other senses, with touch, hearing and smell becoming the dominant senses in mapping their environment, and they use items such as a long cane as an obstacle detector or a guide dog as an obstacle avoider. Wall edges and kerbs are used as a navigational tool and support the straight-line principle. In addition, over 80% of persons registered blind have some residual vision, so colour contrast enhances perception and aids wayfinding. Textured surfaces can act as a warning and indicate particular types of situations, including pedestrian crossings and the location of stairs or escalators. To create a mental model or representation of the environment around them, a ViB person must decode and aggregate information about their relative location and leverage knowledge of attributes of the spatial environment. This is generally built dynamically: first by creating a bearing map, which represents space through self-movement and gradient cues (for example, using a cane can create a rough 2D map of the environment); this can then be combined with specific positional cues to sketch a mental map, integrating specific objects or landmarks with their relative locations to create a "mind's eye" view of the environment. The process of navigating can be mentally exhausting for ViB people, particularly in unfamiliar environments.


 2.2      Navigation Support Tools

Navigation and wayfinding GPS applications such as Google Maps have been adopted for many years by the mainstream for independent travel where mapping data and satellite reception are available, while popular outdoor navigation apps such as Ariadne GPS (https://www.ariadnegps.eu/), GetThere and BlindSquare have been developed specifically for people with visual impairments. Most smartphones and tablet devices are GPS and Bluetooth enabled, allowing developers to create applications that take advantage of location-based technologies and services. Using mobile devices that already embed GPS and positional sensing technologies (gyroscope, accelerometer, digital compass, IMU etc.) can be cost effective, as it eliminates the requirement to procure, install and maintain dedicated tracking and sensing hardware. However, although GPS is the most widely used real-time location system, it relies on continuous signal transmission from several satellite sources and therefore does not work well indoors or within closed environments where there is significant signal interference. In addition, orientation derived from GPS can be inaccurate and, as a result, disorienting for the user. Within a closed indoor setting, where navigational and contextual audio-based information needs to be triggered at a more precise location and time, alternative location tracking technologies and methods need to be considered. For example, without precisely tracking a mobile device's location and pose (proximity and orientation) relative to a point of interest as the user moves, it would be difficult to play relevant contextual audio-based information at the right time and place. Most indoor location and positional tracking systems are based on wireless technologies such as Wi-Fi, Bluetooth, ultra-wideband (UWB) and radio-frequency identification (RFID), using wireless sensor nodes that emit signals (beacons): typically, points of interest or optimal communication areas are fitted with tags or badges (iBeacons, RFID tags) that broadcast signals to receivers (mobile devices). More accurate indoor tracking systems exist, such as the Decawave DW1000 UWB chip, which can achieve high-precision tracking of 10-30 cm; however, this technology has not become widespread, as the hardware is not yet low cost for mainstream consumers and most smartphones are not UWB enabled. The selection of technology depends on several factors: the accuracy required for application-specific needs, battery lifetime, cost of installation and maintenance, and ease of integration with other processes or systems.


 2.3        BLE Beacons

Bluetooth Low Energy (BLE) beacons have been widely used for indoor tracking: once a receiver (mobile device) is in proximity of a beacon, content can be triggered, and the device's position can be tracked, if it is within range of two or more beacons, by processing the distance data. With BLE, location-tracking accuracy can vary but can be better than 1.5 m, and beacons are affordable and easy to install and maintain. Real-time indoor location services (RTLS) have begun to gain wider traction across many industry domains, with examples ranging from airports and hospitals taking advantage of BLE beacons to help users navigate large indoor spaces, to retailers providing directed, personalised marketing content to shoppers entering their stores. A number of studies have focused on detailed analysis of BLE accuracy in indoor environments and have demonstrated that sub-metre accuracy can be achieved [5]; however, this can vary significantly across different environments, and other aspects, such as the positioning and orientation of the phone on a person's body, can reduce the ability to achieve fine-grained positioning information. For the application under consideration, inaccurate positioning information has a much greater adverse effect on a user who is ViB (from a safety perspective). As such, the focus of the work presented is not to improve the accuracy of BLE localisation but rather to investigate how solutions can leverage existing proximity data to trigger the provision of key information relating to the surrounding environment for the ViB user. For spatial mapping, BLE beacons provide sufficient accuracy to trigger contextual information when the ViB person is within defined proximities of indoor areas (reception, halls, stairwells, rooms and toilets) and points of interest (doors, and signage as potential collision risks). Proximity detection conditions can be determined by adjusting the beacon's antenna power; beacons can therefore be set to varying proximity ranges (2 m, 10 m, 70 m). However, it has to be noted that if the beacon antenna is powered up for a longer proximity range, the lifetime of the device is reduced to only several months, while environmental factors (temperature, beacon placement) will also affect the power consumption and reliability of the beacons.
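
To make the ranging mechanism concrete, the sketch below shows the log-distance path-loss model commonly used to turn a BLE received signal strength indicator (RSSI) reading into a rough distance estimate. This is illustrative only: the paper does not specify a ranging model, and the calibration constants (the reference RSSI at 1 m and the attenuation factor n) are assumptions that must be calibrated per deployment.

import Foundation

// Minimal sketch: log-distance path-loss model d = 10^((txPower - rssi) / (10 * n)).
// txPower is the calibrated RSSI at 1 m (advertised in iBeacon/Eddystone frames);
// n is an environment-dependent attenuation factor (~2.0 free space, 2-4 indoors).
func estimateDistance(rssi: Int, txPower: Int = -59, n: Double = 2.5) -> Double {
    guard rssi != 0 else { return -1.0 }   // an RSSI of 0 means no valid reading
    return pow(10.0, Double(txPower - rssi) / (10.0 * n))
}

// Example: a reading of -75 dBm against a -59 dBm reference gives roughly 4.4 m.
let approximateDistance = estimateDistance(rssi: -75)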


 2.4      Other Tools

Markers and fiducials can be used to provide additional information. QR codes have become widespread on products and adverts, where a person can use their mobile device's camera (QR reader) to access further information, triggering an information exchange or even an interactive experience using mobile applications. Essentially, marker-based applications use a device's camera to estimate the position of the device (centre point, orientation, range) based on what it is "seeing", i.e. the visual information obtained from the fiducial marker. Markers such as QR codes have a unique predefined shape and pattern that can be easily detected in low lighting conditions and easily printed and attached to a point of interest. Markers can be an inexpensive and technically simple method for determining the device's position and therefore provide a very accurate positional cue. For example, BlindSquare has a QR reader built into its app and has developed a super-set of the QR barcode matrix purpose-built to be more accessible for ViB people when acquiring (scanning) a QR code; the app provides audible and haptic feedback to the user while they are searching for and acquiring a QR code. In the use cases presented in [2], where the BlindSquare QR reader is demonstrated, QR codes are printed and attached to doors; the user has to find and scan the QR code on the door, and information associated with the room (room name, purpose, members of staff who work there) is read aloud (VoiceOver TTS) to the user. The QR codes are placed at optimal locations above the door handle on each door, as an early required skill for cane travel is to trail walls, discern doors and locate door handles, so placing such information nearby is helpful. While BlindSquare also aims to aid ViB people in finding and scanning QR codes through audio and haptic cues, this still requires manual effort and explicit interaction that is not intuitive for the user.
   Natural feature tracking (NFT) is an image-based tracking method that recognises and tracks natural features (edges, corners, patterns etc.) within a scene or object (building, ornament etc.). To the user this is therefore a marker-less tracking method, as there is no identifiable fiducial marker (QR code, ID marker) to scan. NFT extracts key-point descriptors from an image captured by a camera; these key points are then used to query a database for matching images, from which a potential position is inferred. Using 3D object recognition and augmented reality visioning systems, the physical world and contextual information can be rendered more visible to people with vision impairments, e.g. objects and signage could be enhanced by increasing the colour contrast, tone, dimensions or brightness of rendered images based on a particular person's type of visual impairment. Augmented reality glasses such as OxSight and AceSight have been developed specifically for people with vision impairments. Simultaneous Localisation and Mapping (SLAM) is a more complex and progressive method that is currently a very popular topic within the computer vision community; through a SLAM system, a device can create a map of its surroundings whilst at the same time localising itself (position and orientation) within that map.
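
As an illustration of the marker-based approach discussed above, the sketch below reads QR payloads from a camera frame using Apple's Vision framework. This is an assumed implementation for illustration only; BlindSquare's accessible QR super-set is proprietary, and the paper does not describe any QR implementation of its own.

import Vision

// Detect QR codes in a captured frame and return their payload strings.
// Illustrative only: a production travel aid would add the audio/haptic
// search cues described above to help a ViB user aim the camera.
func readQRCodes(in frame: CGImage) throws -> [String] {
    let request = VNDetectBarcodesRequest()
    request.symbologies = [.qr]                      // restrict to QR codes
    try VNImageRequestHandler(cgImage: frame).perform([request])
    return (request.results ?? []).compactMap { $0.payloadStringValue }
}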



 3          Context Awareness Module

The context awareness module is a set of software services that enables interaction between Bluetooth Low Energy (BLE) devices deployed in the environment and the user, via a mobile application and the provision of audio feedback. The objective is to provide positional and directional cues in a format that is easily configured, interpreted, and used to build a spatial map of the surrounding space.


 3.1        System Architecture

Fig. 1 provides a high-level representation of the context awareness module. The module provides common functionality for interaction with existing BLE beacons and devices, while also providing an extension point for integration with other applications and services. The context awareness modules are available across multiple platforms, including Android and iOS.




                        Fig. 1. Context Awareness Module Components

The base context services and libraries were developed using the Xamarin framework, which supports cross-platform compatibility. This included the development of a front end to support testing and evaluation of the services. In addition, a separate set of libraries was developed using Swift and Objective-C specifically for the iOS platform, to support the integration of the modules with third-party iOS applications. The module consists of four main components. All interaction is location driven, so libraries to estimate the location of the devices were developed first; once location is established, the next component maps it to specific context data; the last two components support management of the system and user interaction.


 3.2      Location Services

Location services were developed to leverage the existing location capabilities available on smart phone platforms (iOS and Android); these include extracting sensor data such as GPS, accelerometer and compass readings, and other location services that may be available on the mobile platform. This data is fused with scans of BLE advertisement packets, using existing protocols (iBeacon and Eddystone), generated by devices deployed in the surrounding environment and registered with the system. Leveraging these raw data sets, a number of localisation algorithms were investigated and developed to fuse the various sources of data and provide an estimate of the user's location (i.e. proximity to the beacon). Localisation approaches generally incorporate prior knowledge of the environment, sensor locations and coverage fingerprinting, and utilise techniques such as map filtering to improve positioning accuracy. BLE provides less precision but offers a sufficient level of accuracy in terms of proximity to the device (far, near, immediate), utilising the received signal strength indicator and other metrics. If multiple beacons are present in the space, techniques such as triangulation can be used to provide a more accurate estimate of position. While running initial tests with potential end users, privacy was highlighted as a key requirement. To ensure user privacy is maintained, the context-aware modules were developed with the following requirements: the system does not record or maintain any historical location data, and the location estimate is calculated in real time based on live information extracted from the environment; the services do not record any identifiable information relating to the user or their personal devices, to protect user identity; only pre-defined beacons are used in processing the user's location, i.e. only "trusted" beacons that have been registered with the system are used for estimating the user's proximity/location; and the module only operates in beacon mode, so that it does not create any persistent connections to external devices or services. From a data processing point of view, the processed data (i.e. location information/history) is not stored locally or on a cloud server; once used to provide context data, it is purged from memory.
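
As a minimal sketch of the proximity estimation step on iOS, the code below ranges a registered iBeacon UUID with CoreLocation and maps each reading to the far/near/immediate classes described above. The UUID is a placeholder, the hand-off function is hypothetical, and the paper's own multi-source fusion algorithms are not reproduced here.

import CoreLocation

// Minimal sketch: range a registered ("trusted") iBeacon UUID and map each
// reading to the coarse proximity classes used by the location services.
// No identifiers or location history are stored, per the privacy requirements.
final class BeaconRanger: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    private let constraint = CLBeaconIdentityConstraint(
        uuid: UUID(uuidString: "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0")!)  // placeholder

    func start() {
        manager.delegate = self
        manager.requestWhenInUseAuthorization()
        manager.startRangingBeacons(satisfying: constraint)
    }

    func locationManager(_ manager: CLLocationManager,
                         didRange beacons: [CLBeacon],
                         satisfying constraint: CLBeaconIdentityConstraint) {
        for beacon in beacons {
            switch beacon.proximity {          // derived by iOS from RSSI
            case .immediate, .near, .far:
                handle(beacon)                 // hand off to the context services
            case .unknown:
                break                          // discard unreliable readings
            @unknown default:
                break
            }
        }
    }

    private func handle(_ beacon: CLBeacon) {
        // Hypothetical hook: look up the context descriptor registered for
        // (beacon.major, beacon.minor) at this proximity and queue it for audio.
    }
}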


 3.3      Context Services

The context services use the estimated location information to drive the provision of context information, through a combination of predefined meta-data capturing beacon locations, the environmental layout and relevant environment/object descriptors. From a performance perspective, the application manages data through a combination of locally cached context information and context services running in a cloud environment. The context services essentially contain meta-data and information on the locations, e.g. buildings, floors or areas, and on the beacons, their positions and the mapping of context data or actions (i.e. user notifications) to these devices. When defining the content of context descriptors, it is important to consider how a person builds an image of the environment. The spatial map can be characterised based on the following features of the environment: paths, which provide "straight lines" through a city or environment; edges, such as walls, kerbs and building boundaries, which can be followed and guide a person; nodes, which represent focal points for people, such as crossing points, door entrances, exits or lifts; and zones, which can be large areas where people congregate (meeting rooms, reception areas, parks). While a cane can be used to detect an object, touch is the main source of information and provides insight into the height, size and type of an object in proximity. People often rely on others to provide a description of a room or space to help construct a representation of a zone; this can be static information about the layout of the room, the position of tables, where sockets are located, things to avoid etc. Any potential risk that may reside in a space needs to be highlighted to the ViB person, e.g. steps down, or what to avoid on a circulation route. Generally, there is a need to provide information that enables the user to feel safer and more confident, and this has to be driven by easier interaction with an emphasis on simplicity. The context awareness module focuses on delivering spatial contextual information to enhance wayfinding information. This is provided as the person's location is gathered, along with their proximity to points of interest and objects (potential collision risks) and a description of their physical surroundings (space, layout, location of furniture etc.). Spatial contextual awareness has been defined as information such as an individual's location, activity, the time of day, and proximity to other people, objects or devices [3]; our approach supplements this to also include a description of the functionality of objects in the environment (e.g. the opening configuration of doors, or the width and height of objects). As such, it aligns with the definition provided by [4], which specifies any information that can be used to characterise the situation of an entity, where an entity means a person, place or object that is relevant to the interaction between a user and an application. Contextual information presented to the person must be relevant to the user's current task and situation. Therefore, for a ViB person visiting an unfamiliar environment for the first time, it is necessary to provide spatial contextual information to enable them to build a mental representation of the surrounding environmental features, while also providing usability information to complete tasks (opening doors, using lifts, using furniture etc.).
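
One possible shape for the context information model described above is sketched below. This is a hypothetical data model: the paper does not publish its schema, so all type and field names here are assumptions chosen to mirror the layers (building, floor, region, object, beacon proximity) and the prioritisation described in the text.

import Foundation

// Hypothetical sketch of the context information model: the paper describes
// layered meta-data and prioritised descriptors but publishes no schema,
// so every name here is an assumption.
enum SpatialFeature: String, Codable {
    case path, edge, node, zone                // the four features named above
}

enum Priority: Int, Codable, Comparable {
    case collisionRisk = 0, directional, positional, descriptive
    static func < (lhs: Priority, rhs: Priority) -> Bool { lhs.rawValue < rhs.rawValue }
}

struct ContextDescriptor: Codable {
    let beaconID: String                       // registered ("trusted") beacon
    let layer: String                          // building / floor / region / object
    let feature: SpatialFeature
    let proximityZone: String                  // "far" | "near" | "immediate"
    let priority: Priority
    let transcript: String                     // text rendered to speech
}

// Critical messages (collision risks) are surfaced before descriptive ones,
// mirroring the prioritisation reported in the evaluation (Section 4).
func nextUtterance(from candidates: [ContextDescriptor]) -> ContextDescriptor? {
    candidates.min { $0.priority < $1.priority }
}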


 3.4        User Interaction

Once context information is constructed, driven by a user's location, it must be provided back to the user; the focus was on providing audio-based feedback via the user's smart phone. As such, an application was developed that used text-to-speech to automatically convert the context data to audio, relayed to the user via headphones or a speaker. Through engagement with the ViB community it was highlighted that audio feedback should not mask other sounds from the environment that are currently used for mobility (e.g. listening for cars, or signals at traffic lights); being aware of one's surroundings, specifically in outdoor environments, is a necessity for safe navigation. To address this concern, the use of bone-conducting headphones to relay audio back to the user was investigated. These headphones are positioned on the cheekbone and do not seal the ear canal; this allows the wearer to hear other sounds, or potential hazards, coming from the environment while also receiving audio cues from the context awareness services. It is envisaged that further modes of feedback, such as haptics, will also be used to provide specific cues to the end user driven by the location information. To support validation, the mobile application incorporated a map of the environment where beacons are deployed, on which the estimate of the user's location is placed, together with a list of beacons in proximity showing each beacon's ID, the quality of the received signal, and the estimated proximity to that beacon. To simplify the specification and collection of context data, a context information model was defined; this allows a common representation of how data is captured, prioritised and relayed to the end user. The model enables more flexibility in how context information is defined by the deployer and delivered to the end user, e.g. prioritising information based on the distance to an object. The model can be linked to different layers of the environment: building, floor, regions, objects, or beacon proximities.
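
A minimal sketch of the audio relay step on iOS is shown below, using AVSpeechSynthesizer. The .duckOthers option lowers, rather than masks, ambient audio, in line with the requirement that feedback must not block environmental sounds; the voice and session settings are illustrative assumptions, not the paper's configuration.

import AVFoundation

// Minimal sketch of the audio relay: convert a context transcript to speech.
// Ducking lowers ambient audio instead of masking it, so environmental
// sounds used for mobility remain audible.
final class ContextSpeaker {
    private let synthesizer = AVSpeechSynthesizer()

    func speak(_ transcript: String) {
        let session = AVAudioSession.sharedInstance()
        try? session.setCategory(.playback, options: [.duckOthers])
        try? session.setActive(true)

        let utterance = AVSpeechUtterance(string: transcript)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-IE")  // assumed voice
        utterance.rate = AVSpeechUtteranceDefaultSpeechRate
        synthesizer.speak(utterance)
    }
}

// Example:
// ContextSpeaker().speak("Caution, wet floor sign directly ahead.")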


 3.5      Content Management System
To support the management of the BLE infrastructure, a web-based content management system was developed; this allows the user (e.g. the deployer of BLE beacons) to map the real positions of BLE beacons to locations mapped out in the environment in which the context awareness module will operate.




                Fig. 2. UI to allow the definition of Buildings, Floors and Destinations




For example, for an indoor environment, a user can define a set of beacons along typically used paths and specify the type of interaction expected by the end user. Fig. 2 (top screen) presents the user interface to define indoor destinations that are linked to a particular building and floor. This information allows the interaction algorithms not only to estimate the location from a coordinate perspective, but to link the user's position to a more descriptive representation of where in the environment they are, such as a room number, name or area description. Fig. 2 (bottom screen) provides a view of the interface listing the proximities, or beacon identifiers; this captures the unique identifier of each beacon and positions it within the environment, which can be used to infer the user's location when an advertisement packet identifying a particular beacon is received. It also provides the list of beacons that are considered by the application, so that not all beacons that may be deployed in the environment are scanned; scanning is limited to specific devices only.
   Fig. 3 shows how the user captures the context information model via the content management system. The configuration is linked to a parent attribute (proximity, floor, building etc.) and stored as part of the context-aware services. The entries can be updated and adjusted at any time as needed by the user, and the context services will refresh their cached data automatically, meaning the ViB person will always have the most up-to-date and relevant information regarding the environment. This flexibility is essential, particularly in scenarios where dynamic obstacles can be moved to new locations or new configurations of spaces are common (e.g. an event or meeting room). For indoor environments it is possible to define proximities within the structure using local coordinates; this requires a geometric representation of the building or environment where the beacons will be deployed. Where an environment description is not available, the beacon positions can be defined using GPS coordinates; these can then be converted to local coordinates if a representation of the building becomes available. The positions provide visual context for the deployer to support the planning and setting up of the context path in a site-specific scenario.
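
The GPS-to-local conversion mentioned above could take the form sketched below: an equirectangular approximation around a chosen building origin, which is adequate at building scale. This is an assumed implementation; the paper does not specify its conversion method, and the function and constant names are illustrative.

import CoreLocation

// Assumed sketch: project a beacon's GPS fix onto planar coordinates
// (metres east/north of a chosen building origin) using an equirectangular
// approximation. A surveyed floor plan would replace this once a geometric
// representation of the building becomes available.
func localCoordinates(of beacon: CLLocationCoordinate2D,
                      origin: CLLocationCoordinate2D) -> (x: Double, y: Double) {
    let metresPerDegreeLat = 111_320.0
    let metresPerDegreeLon = metresPerDegreeLat * cos(origin.latitude * .pi / 180)
    return ((beacon.longitude - origin.longitude) * metresPerDegreeLon,
            (beacon.latitude - origin.latitude) * metresPerDegreeLat)
}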




                             Fig. 3. Context meta-data definition




 4        Use Case Example

The following scenario was considered as an example of how the context-awareness module can be used in an indoor context. A person who has sight loss has confirmed that they will be attending a meeting at a facility they have never been to before. They have contacted the meeting coordinator, who has scheduled the meeting and has also gathered, prior to the visit, any requirements they may have to aid their appointment. Before the meeting, the building administrator uses the content management system and application to specify where beacons are deployed and to provide the configuration needed for the provision of audio messages (wayfinding instructions, meeting-room contextual information, collision risk alerts) to aid the ViB user's visit to the unfamiliar indoor environment. The objectives are as follows:

 • Provide a mechanism that offers ViB users a customised, intuitive, and independent
   way of getting around an indoor facility.
• Provide meaningful audio descriptors that inform the user about the environment's characteristics and context (space/room function, size, layout, objects therein etc.).
 • Alert users to potential collision risks within the environment (head collision, slip
   hazards).

Firstly, a spatial map is specified within the context of the target building (Fig. 4); this outlines how a user may move through the space, in order to understand the level of granularity needed for context information and the types of interaction that may be required. Directional and positional cues are captured based on a review of the building; this includes pre-existing cues such as tactile mats, and the definition of entrances, doors, and potential risks and hazards. Any mobile objects deployed in the environment are tagged with a specific beacon. Potential navigation paths are outlined and generated based on point-to-point trajectories between nodes, zones and landmarks. Contextual description transcripts for the various indoor spaces (reception area, corridors, meeting room, toilets etc.) were specified; information relating to navigation followed the open standard ITU-T F.921 (03/2017), "Audio-based network navigation system for persons with vision impairment", which provides recommendations on how audio-based navigation systems can be designed to ensure that they are inclusive and meet the needs of persons with visual impairments. The placement of beacons and their proximity ranges need to be carefully considered and optimised to ensure that appropriate contextual audio-based information is triggered at the right time and location. For example, it would not be advisable to set the trigger range of a beacon to 20 m for contextual audio information related to a specific room door in a large space containing many other doors, as it would be difficult for the ViB visitor to determine which door the information relates to. Conversely, when approaching a large building from outdoors, it can be better to set a larger range, e.g. triggering when the visitor is in proximity to the building or site. When setting up a building, it is recommended to review the topology of the building's spaces and, for each area and point of interest, determine the proximity ranges and conditions for triggering playback of wayfinding and contextual audio-based information.






                   Fig. 4. Spatial Layout of Target Building over two floors


Fig. 5 provides an example of a zone in the target building in which beacons were deployed to provide additional context information. The reception area is unmanned, and several obstacles are present, including low-level furniture, plants, chairs and display cases, that need to be highlighted to the user.








Fig. 5. Example area with many obstacles in which BLE beacons were deployed

The context descriptors aligned to Fig. 5 are defined as follows. The audio is generated based on proximity to the entrance and follows this flow of messages as the user enters the main door of the building:

1. "You have arrived at the entrance of the [Building] reception area, heading towards the reception desk." This provides both a positional cue, in terms of location, and a directional cue.
2. "Please be aware of the carpet mat and furniture just ahead of you, located in the centre of the reception area." This provides information on a collision risk to the user.
3. "The reception desk is located straight ahead. Located to the left of the entrance are accessible toilets." This message provides information relating to the surroundings.
4. "Located directly left of the reception desk is a secure double door leading to the corridor on the ground floor." The final message delivers information about the next possible course of action and highlights an intersection point between zones/spaces that must be considered.

Beacons are then strategically placed at other points in the building, at the entrances to new spaces. The emphasis was placed on providing context information relating to high-risk objects, such as stairwells, and to dynamic obstacles that are introduced into the environment. Beacons are deployed in these zones and attached to obstacles (e.g. the floor sign depicted in the bottom section of Fig. 4) and mapped to specific context descriptors such as "[Collision Risk] Caution, wet floor sign directly ahead, proceed with caution." The following flow of events is enabled through the use of the context-awareness module:

• The building is already equipped with BLE beacons, and these beacons are already mapped to specific contextual data as described above.
• The ViB person downloads and installs the mobile application on their smart phone prior to arrival. The context services download the meta-data and context information based on the regional location.



• When the user arrives at the building, they come into range of a beacon; when the beacon signal is received, an estimated location is calculated and the associated contextual information is generated and provided to the ViB person (via headset or phone speaker).
• When the user comes into proximity of specific objects (doors, posters, tactile indicators), they are provided with an audio descriptor; further interaction is supported via user touch.
• The user can find the meeting room location. Furthermore, the user is provided with contextual audio descriptors ("where am I", "describe surroundings") to allow them to build a spatial map of their surroundings.
• The user can navigate and explore their environment confidently and independently.

As part of a user-centred design process, an initial qualitative evaluation of the proposed solution was undertaken with a number of representative users (ViB individuals) as part of an observational study. The users operated the system under real conditions, allowing us to understand the benefit of the solution from a technology and usability perspective. This provided valuable feedback that was used to inform subsequent technology design iterations. Initial tests demonstrated the need to reduce the amount of information being delivered to the user: the information was initially very descriptive, but due to the mobility patterns of users, the time available to deliver this level of detail was too short and the user had often already moved to another part of the space, resulting in them receiving data that was not relevant to their current position. This also had an impact on the cognitive load of the user. It was somewhat addressed by modifying the triggers within the context model, i.e. the administrator was able to provide short bursts of information at different proximities (far, near and immediate) to the beacons, as well as prioritise critical messages such as collision risks. It was also found that the responsiveness between a user's action and the provision of context data was influenced by the type of device the user had and its location on their person (e.g. in a pocket, in hand); as such, it is not possible to use BLE beacons alone to provide precision navigation steps, but they offer sufficient accuracy for the provision of additional descriptive information that allows the user to understand how they can move through and interact with the space they are in. This deployment provides a testbed environment to evaluate the capabilities of the context awareness module, and further tests will be carried out in collaboration with ViB people to ensure the solution is useful and reliable for the end user.
   Additional tests are required with a wider cohort of users from the ViB community, so that a broader performance assessment can be conducted with individuals who have different capabilities, expectations and usage requirements, to ensure the solution can adapt to their specific needs. Personalisation is therefore an important criterion, as every individual has different capabilities and needs; however, this emphasises another critical consideration: protecting the privacy of the user. While personalisation is required, it must be delivered in a privacy-preserving manner (e.g. leveraging edge processing, anonymisation etc.), which will impact the system architecture.






 5        Conclusion and Future Work

The context-awareness module leverages low-cost BLE devices and existing infrastructure to provide additional cues and information to a ViB person that can support them in building a spatial map of the environment they are moving through. This has the potential to give the user more confidence when moving through and interacting with environments that are unfamiliar to them, and to offer a better experience in these spaces, including greater awareness of their surroundings and safer mobility. Future work includes the integration of the context-awareness module with other modes of interaction and sensors, for example touch, that can generate events and automate interaction with other smart connected systems (e.g. seamless access control). In addition, the use of BLE has gained significant attention due to the COVID-19 pandemic: it has obvious applications in supporting contact tracing, and a number of protocols have emerged extending existing BLE and localisation approaches for this purpose in a privacy-preserving manner. The solution proposed here can be extended to this application; it also provides a mechanism to support spatial analysis and utilisation management for indoor environments, i.e. it can be used to understand patterns of use within buildings, to provide information to users on how to navigate and interact with the environment considering constraints such as social distancing rules, and potentially to support organisations in digitising space management, workflows and site access traceability.


 Acknowledgement

The work presented relates to the EU project INSPEX, which received funding from the EU's Horizon 2020 research and innovation programme under grant agreement No 730953, and from the Swiss Secretariat for Education, Research and Innovation (SERI) under Grant 16.0136.


 References
1. Bourne, R.R.A., Flaxman, S.R., Braithwaite, T., Cicinelli, M.V., Das, A., Jonas, J.B., et al.; Vision Loss Expert Group: Magnitude, temporal trends, and projections of the global prevalence of blindness and distance and near vision impairment: a systematic review and meta-analysis. Lancet Glob Health 5(9), e888–e897 (2017).
2. BlindSquare: Pioneering accessible navigation – indoors and outdoors. https://www.blindsquare.com (2019). Accessed 25 January 2020.
3. Chen, G., Kotz, D.: A Survey of Context-Aware Mobile Computing Research. Dartmouth Computer Science Technical Report TR2000-381 (2000).
4. Dey, A.K.: Understanding and Using Context. Personal and Ubiquitous Computing 5, 4–7. Springer, London (2001).
5. Phutcharoen, K., Chamchoy, M., Supanakoon, P.: Accuracy Study of Indoor Positioning with Bluetooth Low Energy Beacons. In: 2020 Joint International Conference on Digital Arts, Media and Technology, Pattaya, Thailand, pp. 24–27 (2020). doi: 10.1109/ECTIDAMTNCON48261.2020.9090691.
