<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Building Augmented You-are-here Maps through Collaborative Annotations for the Visually Impaired</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Limin Zeng</string-name>
          <email>limin.zeng@tu-dresden.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gerhard Weber</string-name>
          <email>gerhard.weber@tu-dresden.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Technische Universität Dresden, Institut für Angewandte Informatik</institution>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2012</year>
      </pub-date>
      <fpage>7</fpage>
      <lpage>12</lpage>
      <abstract>
        <p>For the visually impaired, it is important to acquire different kinds of spatial knowledge from non-visual maps, especially while walking with mobile devices. In this article, we present an augmented audio-haptic You-are-here (YAH) map system based on a novel pin-matrix device. With the proposed system, users can acquire not only basic geographic information and their own location, but also augmented accessibility attributes of geographic features contributed through user annotations. Furthermore, we discuss for the first time a taxonomy of annotations on geographic accessibility, towards building a systematic methodology.</p>
      </abstract>
      <kwd-group>
        <kwd>haptic interaction</kwd>
        <kwd>social interaction</kwd>
        <kwd>annotation</kwd>
        <kwd>taxonomy</kwd>
        <kwd>you-are-here map</kwd>
        <kwd>outdoor</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Apart from basic geographic information, visually impaired users expect to know
additional geographic accessibility information that is specific to them. For
example, blind pedestrians want to know whether there is a barrier-free sidewalk to the entrance of a
nearby POI and where it is, or the type of a door (e.g., automatic or manual). However,
it is time-consuming and costly for one or a few organizations to collect such
accessibility data all over the world.</p>
      <p>Moreover, due to the lack of accessible location-aware YAH maps for the visually
impaired, it is hard for them to explore their surroundings while walking
outside, even though mainstream GPS-based navigation systems announce
where users are, e.g. the name of the street or a nearby point of interest. In this
paper, in addition to the acquisition of basic geographic information, we investigate
which other kinds of information can be acquired from location-based YAH
maps by the visually impaired, such as the user&#8217;s location and augmented accessibility
information. We present a tactile You-are-here map system on a portable pin-matrix device
(PMD) and propose a collaborative approach to gather accessibility information of
geographic features from user annotations. Furthermore, we discuss users&#8217;
annotation taxonomy systematically, from its definition to the data model.</p>
    </sec>
    <sec id="sec-2">
      <title>Acquisition of spatial knowledge from map exploration</title>
      <sec id="sec-2-1">
        <title>Basic geographic knowledge</title>
        <p>
          Basic geographic knowledge comprises the spatial layout of map elements, the names
and categories of geographic features, and further map elements (e.g. scale, north
direction). Although swell-paper maps offer good touch perception, they can only
represent a small amount of brief, static information, together with the map legend in
Braille. To present many more map elements with detailed descriptions,
acoustic output has been employed in recent decades, from auditory icons and
sonification to text-to-speech (TTS) synthesis, as in [
          <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
          ]. However, it is hard to
learn the precise layout from such virtual maps. To obtain explicit touch
perception at the same time, haptic-audio maps have been proposed on touch-screen
tablets [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ] and pin-matrix devices [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ].
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>Spatial relationship to users on location-aware maps</title>
        <p>
          In addition to rendering basic geographic information, location-aware maps state the
user&#8217;s current position and the spatial relationship between the user and the surrounding
environment. TP3 [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] allows users to discover the spatial relationship to nearby points of
interest on a mobile phone, e.g. distance and orientation. In particular, the novel
spatial tactile feedback in SpaceSense [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] explicitly represents the orientation from the user&#8217;s
location to the destination. However, it remains challenging to let users
explore their surroundings explicitly.
        </p>
      </sec>
      <sec id="sec-2-3">
        <title>Augmented geographic accessibility</title>
        <p>
          For people with special needs, the kind of geographic information stored in current
map databases is not sufficient, because they have additional requirements of their own. For
example, the online services Wheelmap1 and Access Together2 collect the accessibility
features of POIs in cities for wheelchair users through user-contributed annotations.
However, the visually impaired have more specific requirements than disabled
people who are sighted. In addition to avoiding obstacles they are not aware of, they
want to know accessibility information about geographic features while on the move. The
&#8220;Look-and-Listen-Map&#8221;3 project prepares free accessible geo-data for the
visually impaired, such as traffic signals with or without sound. Due to the lack of an
accessible platform, however, the visually impaired can hardly benefit from the project at present.
Such collaboratively collected accessible geo-data can also improve
the performance of related applications, such as personalized route planning [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ].
1 www.wheelmap.org
2 www.accesstogether.org
3 http://www.blind.accessiblemaps.org/
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Augmented tactile YAH maps through collaborative annotations</title>
      <p>In this section, we describe a tactile YAH map system on a mobile pin-matrix device.
With the proposed system, users can not only explore maps and learn about
their location context, but also acquire knowledge about geographic accessibility in
their cities through collaborative annotations.</p>
      <sec id="sec-3-1">
        <title>The System Architecture</title>
        <p>As shown in Figure 1, the system is a typical client/server (C/S) system. Its
server stores map data, annotation data and user information, and responds to the map
client, e.g. by sending map data. Ubiquitous mobile internet enables the client to
connect to the server from anywhere and to present a tactile map for the current location
context. Additionally, it is convenient to read annotations on the go.</p>
        <p>Figure 2 (left) gives an overview of the prototype, consisting of a portable
PMD, a WiiCane, a smartphone, an earphone with a microphone and a portable
computer. The WiiCane is built by mounting a Wii remote controller on top of a
normal white cane. The smartphone is mounted on one shoulder and provides the involved
sensors, such as GPS, a digital compass and Bluetooth. Users listen to related
annotations through the earphone. The computer, carried in a backpack, runs the main
application and connects all the other devices via a Bluetooth or USB interface.</p>
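        <p>The client-server exchange described above can be summarized with a short sketch. The following Python snippet is a minimal illustration only: the endpoint names, the server address and the JSON payloads are assumptions, since the concrete protocol is not specified here.</p>
        <preformat>
# Minimal sketch of the map client's requests to the map/annotation server.
# Endpoints, server address and payload format are illustrative assumptions.
import json
import urllib.request

SERVER = "http://example.org/aga"  # placeholder server address

def fetch_map_data(lat, lon, zoom):
    """Request map data around the user's current GPS position."""
    url = f"{SERVER}/map?lat={lat}&amp;lon={lon}&amp;zoom={zoom}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def fetch_annotations(feature_id):
    """Request user-contributed accessibility annotations for one geographic feature."""
    url = f"{SERVER}/annotations?feature={feature_id}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Example: refresh the tactile map and read annotations of a nearby bus stop.
tile = fetch_map_data(51.0504, 13.7373, zoom=17)
notes = fetch_annotations(feature_id="bus_stop_42")
        </preformat>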
        <p>In addition, we designed a set of tactile map symbols and YAH symbols made of raised
pins to represent the YAH maps, see Figure 2 (right). In particular, the set of YAH
symbols contains eight symbols, each pointing in one direction, e.g. south,
southwest, etc. The YAH symbols thus present the user&#8217;s location and
heading simultaneously on the map, both while walking and while standing still.</p>
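        <p>A minimal sketch of how one of the eight YAH symbols could be selected from the compass heading is given below; the symbol identifiers and the 45-degree binning are illustrative assumptions, while the actual pin patterns are those shown in Figure 2 (right).</p>
        <preformat>
# Sketch: choose one of the eight YAH symbols from the compass heading.
# Symbol identifiers are illustrative; the real pin patterns are shown in Fig. 2.
from enum import Enum

class YahSymbol(Enum):
    NORTH = "yah_n"
    NORTHEAST = "yah_ne"
    EAST = "yah_e"
    SOUTHEAST = "yah_se"
    SOUTH = "yah_s"
    SOUTHWEST = "yah_sw"
    WEST = "yah_w"
    NORTHWEST = "yah_nw"

def select_yah_symbol(heading_deg):
    """Map a compass heading (0-360 degrees, 0 = north) to the nearest YAH symbol."""
    symbols = list(YahSymbol)
    index = int(((heading_deg % 360) + 22.5) // 45) % 8
    return symbols[index]

assert select_yah_symbol(10) is YahSymbol.NORTH
assert select_yah_symbol(200) is YahSymbol.SOUTH
        </preformat>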
        <p>While interacting with the YAH map, users can press the buttons on the WiiCane
for panning and zooming. Thanks to the touch-sensitive surface of the PMD, users can
obtain auditory descriptions by touching the respective map symbols with one finger. When the user asks
"Where am I", the YAH symbol is presented in the center of the display
automatically. With the help of the YAH symbols, users can explore the surroundings and
discover the spatial relationship between themselves and nearby geographic features,
such as the orientation and distance to a bus stop or a building.</p>
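        <p>The distance and orientation announced for a nearby feature follow directly from the user&#8217;s GPS position and the feature&#8217;s coordinates. The sketch below uses the standard haversine and initial-bearing formulas; the function names are illustrative and not taken from the prototype.</p>
        <preformat>
# Sketch: distance and bearing from the user's position to a nearby feature
# (e.g. a bus stop), using standard haversine/initial-bearing formulas.
from math import radians, degrees, sin, cos, asin, sqrt, atan2

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 positions."""
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing (0 = north, clockwise) from point 1 to point 2."""
    p1, p2, dl = radians(lat1), radians(lat2), radians(lon2 - lon1)
    y = sin(dl) * cos(p2)
    x = cos(p1) * sin(p2) - sin(p1) * cos(p2) * cos(dl)
    return (degrees(atan2(y, x)) + 360) % 360

# Example: the result could be rendered as speech, e.g. "bus stop, 80 m to the north-east".
        </preformat>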
        <p>Although there are a few systems that collect enhanced accessible
geo-information for people with special needs via user-contributed annotation data,
from the perspective of scientific research a comprehensive
investigation of an annotation taxonomy is still lacking. Thus, in this section we focus on systematically
discussing how to utilize collaboratively annotated data to enhance the accessibility of geographic
features in the real world, from its definition to the data model and involved applications.
Annotations can be presented through various accessible methods, from text, audio media, videos and pictures to haptic/tactile feedback, which
can help end users learn about the involved accessibility features.</p>
        <p>Fourthly, to give the term geographic feature in the definition a stricter
scope, it covers the various existing geo-referenced objects that can be
digitized and stored in the digital world, rather than all components on the Earth. In
addition, the annotations are made not only by the end users, but also by the
volunteer community that is concerned with accessibility.</p>
      </sec>
      <sec id="sec-3-2">
        <title>AGa Annotation Data Model</title>
        <p>
          Unlike ubiquitous annotation systems [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] that address general requirements by linking
virtual information to existing objects in both physical and digital space, the
dimensions of AGa annotations focus on accessibility features while reading, writing and
sharing annotations. We list nine dimensions that characterize AGa annotations; a sketch of a corresponding data model follows the list.
1. Categories of Geographic Features: different accessibility features for different
categories;
2. Location Dimension: the annotation&#8217;s location in the physical world;
3. Temporal Dimension: the creation/editing time;
4. Content Dimension: whether the annotation&#8217;s body is objective mapping information or
a subjective user experience;
5. Structure Dimension: structured accessibility attributes and quantified user
annotations, e.g. ratings, or an unstructured description;
6. Source Dimension: explicit annotations taken directly from users&#8217; descriptions and
implicit annotations derived from digital sensor data;
7. Presentation Dimension: accessible user interfaces;
8. Editing &amp; Sharing Dimension: annotations can be edited and shared between user groups;
9. Privacy Dimension: involved personal data.
        </p>
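        <p>As a concrete illustration, the nine dimensions could be mapped onto a single annotation record. The following sketch shows only one possible schema under that assumption; the field names are illustrative and not prescribed by the data model.</p>
        <preformat>
# Minimal sketch of an AGa annotation record covering the nine dimensions.
# Field names are illustrative assumptions, not a prescribed schema.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class AgaAnnotation:
    feature_category: str            # 1. category of the annotated geographic feature
    latitude: float                  # 2. location dimension
    longitude: float
    created_at: datetime             # 3. temporal dimension
    content: str                     # 4. objective mapping info or subjective experience
    structured: bool                 # 5. structured attributes/rating vs. free-form text
    rating: Optional[int] = None
    source: str = "explicit"         # 6. explicit (user description) or implicit (sensor data)
    presentation: List[str] = field(default_factory=lambda: ["speech"])  # 7. output modalities
    shared_with: List[str] = field(default_factory=list)   # 8. editing and sharing (user groups)
    contains_personal_data: bool = False                    # 9. privacy dimension
        </preformat>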
        <p>As illustrated in Figure 3, visually impaired users can access the
accessibility map information (e.g. bus stops, entrances) and the annotations of others. Even though
the nine dimensions are described separately, they are correlated
and influence each other in practical applications.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Discussion &amp; Conclusion</title>
      <p>Unlike rendering colored maps for the sighted through the visual channel,
the 2D PMD does not allow overlapping map symbols formed by raised pins. Thus, it is
important to find a suitable strategy for rendering the YAH maps on a limited portable
PMD. For the visually impaired, the cognitive mental maps generated while reading
location-aware maps on the move might differ from those formed with a desktop-based map
before the journey. However, what exactly these differences are is not yet clear. Moreover,
beyond the nine dimensions mentioned above, which other dimensions are useful for
the visually impaired?</p>
      <p>To let the visually impaired acquire more spatial knowledge on the
move, this paper introduces a proposal for building augmented
non-visual you-are-here maps through collaborative annotations on a portable PMD. In
future work, the prototype should be evaluated with end users who are visually
impaired.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Parente</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bishop</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>BATs: The blind audio tactile mapping system</article-title>
          .
          <source>Proc. of ACMSE</source>
          , Savannah, GA, March
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Heuten</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wichmann</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boll</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Interactive 3D sonification for the exploration of city maps</article-title>
          ,
          <source>Proc. of NordiCHI</source>
          <year>2006</year>
          , pp.
          <fpage>155</fpage>
          -
          <lpage>164</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Miele</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Talking TMAP: Automated generation of audio-tactile maps using Smith-Kettlewell's TMAP software</article-title>
          ,
          <source>British Journal of Visual Impairment</source>
          ,
          <volume>24</volume>
          (
          <issue>2</issue>
          ),
          <year>2006</year>
          ,
          <fpage>93</fpage>
          -
          <lpage>100</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Zeng</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Weber</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Audio-Haptic browser for a geographical information system</article-title>
          ,
          <source>Proc. of ICCHP</source>
          <year>2010</year>
          , pp.
          <fpage>466</fpage>
          -
          <lpage>473</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Park</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , et al.:
          <article-title>Supporting spatial awareness and independent wayfinding for pedestrians with visual impairments</article-title>
          ,
          <source>Proc. of ASSETS</source>
          <year>2011</year>
          , pp.
          <fpage>27</fpage>
          -
          <lpage>34</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Yatani</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Banovic</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Truong</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>SpaceSense: Representing geographical information to visually impaired people using spatial tactile feedback</article-title>
          ,
          <source>Proc. of CHI</source>
          <year>2012</year>
          , pp.
          <fpage>415</fpage>
          -
          <lpage>424</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Völkel</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Weber</surname>
          </string-name>
          , G.:
          <article-title>RouteCheckr: personalized multicriteria routing for mobility impaired pedestrians</article-title>
          .
          <source>Proc. of ASSETS</source>
          <year>2008</year>
          , ACM Press, pp.
          <fpage>185</fpage>
          -
          <lpage>192</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Hansen</surname>
            ,
            <given-names>F.A.</given-names>
          </string-name>
          :
          <article-title>Ubiquitous annotation systems: technologies and challenges</article-title>
          .
          <source>Proc. of HYPERTEXT</source>
          <year>2006</year>
          , pp.
          <fpage>121</fpage>
          -
          <lpage>132</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>