<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>UML Modeling for Visually-Impaired Persons</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name><given-names>Brad</given-names> <surname>Doherty</surname></string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name><given-names>Betty H.C.</given-names> <surname>Cheng</surname></string-name>
        </contrib>
        <aff id="aff0">
          <institution>Department of Computer Science and Engineering, Michigan State University</institution>
          ,
          <addr-line>East Lansing, Michigan 48824</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <fpage>4</fpage>
      <lpage>10</lpage>
      <abstract>
        <p>Software modeling is generally a collaborative activity and typically involves graphical diagrams. The Unified Modeling Language (UML) is the de facto standard for modeling object-oriented software. It provides notations for modeling a system's structural information (e.g., databases, sensors, controllers) and its behavior, depicting the functionality of the software. Because UML relies heavily on graphical information, visually impaired persons (VIPs) frequently face challenges conceptualizing the often complex graphical layouts involving numerous graphical objects. The overall objective of the PRISCA project is to facilitate collaborative modeling between VIPs and other project team members. Towards this end, this paper describes preliminary PRISCA work on developing software that automatically generates a haptic 3D representation of UML diagrams from the output of an existing UML diagram editor. In addition, textual annotations for the models are converted to Braille and printed in 3D atop the respective graphical objects. Research and human factor challenges are reviewed in an effort to raise the MDE community's awareness of this important area of work.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>
        While UML and other graphical modeling approaches
have become key components of the modern software
design process, the heavy use of visual features as
descriptions has created unintentional obstacles for
software developers with visual impairments. Recreating the
diagrams as haptic models has been demonstrated to be a viable
solution for addressing this problem [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] (e.g., using
pushpins and string on a sheet of cardboard to represent
graphical objects); however, no automatic method exists for
producing such models directly from the editing source
used to create the models for sighted developers. Moreover, requiring software developers to
recreate the models in a 3D modeling environment is
not feasible given time and effort constraints imposed by
project deadlines. This paper describes PRISCA, a
toolchain that automatically renders a 3D-printable haptic
UML model from the output of an existing UML modeling tool.
        <fn id="fn1">
          <label>1</label>
          <p>This project is inspired in part by Priscilla McKinley, a VIP who
was an advocate for making computing and technology accessible to
VIPs [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].</p>
        </fn>
      </p>
      <p>
        Presenting a 2D visual object to an individual with
a visual impairment can be difficult as verbal-based
descriptions become less effective as the complexity of
an image increases. The TeDUB software [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]
automates the description process by translating a UML
diagram into a navigable XML-based format that can be
processed by a standard text reader, thus providing a
more efficient process for describing models to VIPs than
verbal-based descriptions [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. A drawback of such
a tool, however, is its inability to fully capture
visually-dependent descriptions, such as the relative spacing of
diagram components, relationships depicted by differing
connector types, etc., all of which are features commonly
used in UML and other software modeling. The use
of haptic translations of such models is one approach
to overcoming such challenges. A system of cardstock,
plastic strips, and pins developed by Brookshire [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]
demonstrates the usefulness of haptic models.
Brookshire's method, however, requires significant effort from
a sighted person to manually assemble the models, which
is usually not feasible under time and effort constraints.
Automatically creating haptic displays has seen some
success with a refreshable tactile pin display developed
by Roberts et al. [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. This system has the added benefit
of being able to easily produce Braille descriptions along
with the tactile graphic, but the low resolution of the pins
is a limiting factor.
      </p>
      <p>This paper presents preliminary results for PRISCA,
a project whose overall objective is to facilitate
software modeling collaboration between VIP developers
and other developers by using haptic representations of
software models. Specifically, PRISCA currently supports
a proof of concept tool-chain that takes output from
an existing UML modeling tool and transforms the
models into corresponding 3D representations defined in
terms of the input language for an existing 3D printer.
PRISCA also generates a 3D Braille representation of the
textual annotations included in the graphical models. In
addition, reusability and extensibility served as guiding
principles in developing the PRISCA translation and
rendering components, in order to handle multiple types
of diagrams.</p>
      <p>
        An overarching goal is to make the technology
accessible, without requiring developers to purchase expensive
software or hardware, or incur development time to
recreate software models in a 3D format. As such,
PRISCA automatically processes the XML output from
Visual Paradigm [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], a commonly-used commercial
modeling tool, to produce an STL (STereoLithography)
file, the input language for the MakerBot® 3D printer
(“affordable and powerful consumer 3D printer” [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]).
The tool-chain incorporates techniques to develop 3D
representations of the diagram elements using 3D
modeling libraries. PRISCA also includes a utility for
translating the textual annotations (e.g., class names and
attributes) into a 3D representation of the corresponding
Braille text. Thus far, as proof of concept of the
approach and given their frequent use in practice, we have
focused on producing UML class and sequence diagrams
(including example Braille text annotations) to capture
structural and behavioral information, respectively.
      </p>
      <p>
        Our tool chain has been validated on sample
models created with Visual Paradigm UML [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], including
models created for projects developed in collaboration
with industrial partners. The remainder of this paper is
organized as follows. Based on the literature and our
experiences with VIP students, Section II highlights three
key obstacles to making graphical artifacts accessible
to VIPs. In Section III, we overview the approach we
used to develop PRISCA, including the key elements
of PRISCA and examples of PRISCA renderings. Next,
in Section IV, we discuss open issues and research
challenges with developing modeling support for VIPs
through our work with PRISCA. Section V summarizes
the work and discusses future investigations.
      </p>
    </sec>
    <sec id="sec-2">
      <title>II. POSITION STATEMENT</title>
      <p>Based on a review of the literature and feedback from
VIP students studying computer science and software
engineering, three key obstacles make graphical artifacts
difficult to access by visually-impaired persons (VIPs).</p>
      <sec id="sec-2-1">
        <title>A. Technology Limitations</title>
        <p>First, there exists limited technology for translating
graphical artifacts into an accessible form for VIPs.
Three complementary approaches have been developed
to provide modeling support for VIPs: text-based
descriptions, smart interfaces with voice-over textual
descriptions, and haptic recreations of models. Each of
these is briefly reviewed below, including its limitations.</p>
        <p>
          The TeDUB diagram interface [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] relies on verbal
descriptions of diagrams to convey the contents to visually
impaired persons. The process begins with the input of
a bitmap image of a diagram [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. TeDUB then translates
the bit map image into a semantically enriched format
and uses this XML-like structure to produce a navigable
hierarchy that is accessible through an interface [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. This
tool relies on automatic recognition of nodes and other
hierarchical structures found on the diagram [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]. In
a user study of the system, all of the users found that the
software provided a sufficient means to learn how to read
UML diagrams, though a few criticisms were noted
regarding the representation of hierarchically structured
diagrams [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ].
        </p>
        <p>
          Another approach is to provide audio translations of
the diagrams directly. Research and pedagogical
activities (e.g., PLUMB [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ], [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]), along with companies
like ViewPlus [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ] and TouchGraphics [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ] have made
progress in creating audio translations of graphical
artifacts. Smart devices, such as the Apple® iPAD® [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ]
have voiceover features that can also provide audio
descriptions of graphical artifacts. As with the text-based
presentation of the diagrams, it can be challenging to
establish spatial and hierarchical relationships between
graphical icons and maintain a conceptual model as it
evolves. Unfortunately, without a mental model of a
given graphical artifact, it can be extremely
challenging to fully comprehend the syntax and the intended
semantics of the models, despite detailed text-based
descriptions [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]. On this point, the National Federation
of the Blind (NFB) has emphasized that while text-based
approaches to modeling are helpful in making graphics
more accessible to VIPs, the lack of tactile interfaces is
still a significant limiting factor [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ].
        </p>
        <p>
          Limited work has explored the development of tactile
representations, but they have been shown to be effective
in describing graphical relationships, including
hierarchical structures. One such example was used to teach
database diagrams to visually impaired students [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ].
This approach used cardstock cards to represent classes,
plastic strips for connectors, and push pins to describe
cardinality. The objects were fastened to a cardboard
mounting surface, thus providing a haptic diagram
representation. Upon evaluation, students admittedly
appreciated the tactile design over an auditory graphics system,
and collaboration with sighted students was seen as an
advantage of such a design [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. This system is limited by
the amount of manual assembly required, thus making
it less feasible in an environment constrained by time
and effort. A similar approach was used to teach UML
class diagrams at Michigan State University. Specifically,
a sheet of cardboard with push pins and string was
used as a means to “display” a simple model to a
VIP in the Computer Science and Engineering (CSE)
Department [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. While these approaches might be useful
for illustrating a specific model, they are clearly neither scalable
nor practical even for a collaborative semester project
involving sighted developers.
        </p>
        <p>
          Automatic tactile displays can be used to decrease the
amount of effort required to perform manual assembly of
diagrams. The use of a refreshable tactile graphic display
described by Roberts et al. [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ], provides a reusable
surface for rendering haptic displays. A graphical image
of the model was scanned to produce a sequence of
actuation signals for a “bed of nails” to represent the
graphical artifact images [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ]. Specifically, the system
used an array of pins that could be raised or lowered
individually to create an image in a similar mode to
pixels on a screen. The resolution was limited, as
only 3,600 nails were used as actuators. These displays
can be navigated by sense of touch, much like the card
and plastic strip system described previously, providing
a similar tactile medium for diagram descriptions.
Refreshable tactile displays are limited, however, in the
detail they can provide, as the pin density is only ten pins
per inch [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ], which may cause difficulties in providing
optimal resolution for some diagrams.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>B. Project Resource Constraints</title>
        <p>A second obstacle is the limited resources in terms
of time and money, which prohibit collaborators from
redrawing models produced in a collaborative modeling
environment using a 3D modeling tool as a means to
produce tactile output representations of the models. In
order for the VIP to contribute to modeling activities,
they must be working with the same version of the model
as the sighted team members. If major changes are made
to the models, then those changes have to be recreated
and updated in the 3D modeling tool. This additional
burden makes participation in such a team unattractive
for both the sighted and the VIP team members.
Eventually, the VIP's role in the modeling efforts will
decrease as the project progresses.</p>
      </sec>
      <sec id="sec-2-3">
        <title>C. Misperceptions</title>
        <p>
          The third obstacle is the lack of education and
misperception of the capabilities of VIPs to contribute
to modeling [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ]. Studies have shown that when one
of the main senses is lost, the remaining senses become
heightened and even more developed than those of persons
without the disability [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ]. For people with visual disabilities,
their auditory, tactile, and language processing skills are
particularly advanced [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ]. As such, by making existing
modeling platforms accessible to VIPs, we gain access to
these people and their skills that would otherwise not be
available to a team. Despite tenacity and their pioneering
spirit, many VIPs have encountered the above-mentioned
obstacles [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ], [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ], [
          <xref ref-type="bibr" rid="ref25">25</xref>
          ].
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>III. APPROACH</title>
      <p>This section describes the PRISCA project. We start
by overviewing our objectives in developing PRISCA.
Then we describe the enabling technologies and process
used to support the PRISCA tool-chain. Finally,
we demonstrate the use of PRISCA on examples and
highlight human factor issues that motivated our
specific design decisions.</p>
      <sec id="sec-3-1">
        <title>A. Objectives</title>
        <p>The main objective of PRISCA is to facilitate VIP
collaboration in software modeling. PRISCA does this by
automatically translating the output of a UML
diagramming tool into a 3D printer format by leveraging and
extending existing parsing and 3D rendering libraries.
PRISCA also handles the textual
annotations within a UML diagram by converting them into 3D
Braille rendered atop the graphical elements. Priorities in
the design of PRISCA include a focus on reusability of
diagram features (e.g., similar shapes and line types),
extensibility to other diagram types, and finally
extensibility to a range of modeling tools and 3D printers.</p>
      </sec>
      <sec id="sec-3-2">
        <title>B. Modeling and Printing Facilities</title>
        <p>
          As mentioned before, an overarching goal of this
project is accessibility of the technology to VIPs. To
this end, we selected a relatively accessible
modeling tool and an affordable 3D printer. Specifically,
we selected Visual Paradigm [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] given its differential
licensing program, with special consideration for
academic institutions, its XML output format, its availability
for use on multiple platforms, and its support for multiple
diagramming notations in addition to UML.
        </p>
        <p>
          For producing the 3D representations of the 2D UML
diagrams, we make use of a graphics library from
OpenSCAD, an open source computer aided design
software commonly used to create 3D CAD models [
          <xref ref-type="bibr" rid="ref26">26</xref>
          ].
OpenSCAD is built atop multiple software libraries and
is available for use on multiple platforms. As such, it
provides a rich set of primitives that we can use,
including a scripting language to create 3D representations of
the 2D UML diagram elements.
        </p>
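As a concrete illustration of using such primitives, the Python sketch below (our own example, not PRISCA's actual code; the function name and dimensions are invented) generates OpenSCAD source for a raised, hollow rectangle such as might represent a UML class box:

```python
def class_box_scad(x, y, width, height, wall=1.5, depth=2.0):
    """Emit OpenSCAD code for a raised, hollow rectangle (a UML class
    box outline) at diagram position (x, y); units are millimetres.

    The hollow outline is produced by subtracting a smaller cube from
    a larger one, so the box boundary can be traced by touch."""
    return (
        f"translate([{x}, {y}, 0])\n"
        f"  difference() {{\n"
        f"    cube([{width}, {height}, {depth}]);\n"
        f"    translate([{wall}, {wall}, -1])\n"
        f"      cube([{width - 2*wall}, {height - 2*wall}, {depth + 2}]);\n"
        f"  }}\n"
    )

if __name__ == "__main__":
    print(class_box_scad(0, 0, 40, 25))
```

The `cube`, `translate`, and `difference` primitives used here are standard OpenSCAD operations; a full translator would emit one such fragment per diagram element.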
      <p>In order to produce the physical, haptic 3D
representation of the model, we use the MakerBot®
Replicator 2 3D printer. This printer is affordable for individual
users (i.e., less than $3000 USD), and it is becoming
increasingly popular in academic settings for student
and faculty use. Of particular interest for our project, it
has an option to accept input in an ASCII representation
of STL (STereoLithography), a language commonly used
in many software packages for 3D printing,
CAD modeling, and rapid prototyping. A key feature
for this project is its simplicity: STL represents only the
surface geometry of a 3D object, without support for
color or texture, neither of which is needed for our
current purposes.</p>
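To make the "surface geometry only" nature of ASCII STL concrete, the following sketch (our own illustration, not part of the PRISCA tool-chain) serializes a single triangular facet using the standard ASCII STL keywords; a full model is simply a list of such facets:

```python
def ascii_stl(name, facets):
    """Serialize triangles into ASCII STL.  Each facet is a
    (normal, (v1, v2, v3)) pair of 3-tuples.  STL stores nothing but
    this surface geometry -- no color, texture, or topology."""
    lines = [f"solid {name}"]
    for normal, verts in facets:
        lines.append("  facet normal {:.6e} {:.6e} {:.6e}".format(*normal))
        lines.append("    outer loop")
        for v in verts:
            lines.append("      vertex {:.6e} {:.6e} {:.6e}".format(*v))
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)

# One facet of a model: a right triangle in the z=0 plane.
tri = ((0.0, 0.0, 1.0), ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))
print(ascii_stl("demo", [tri]))
```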
      </sec>
      <sec id="sec-3-3">
        <title>C. Overview</title>
        <p>
          This section overviews the PRISCA approach. Based
on classroom and industrial experiences, our intent is
for a development team to use Visual Paradigm [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] to
create a UML diagram and then export it as a standard
XML file. PRISCA processes the XML file to produce
an STL file that can be sent to a MakerBot® 3D printer
to produce a 3D, haptic representation of the UML
diagram.
        </p>
      <p>Figure 1 gives a data flow diagram (DFD) of the
PRISCA tool chain. Here, rectangles denote external
entities, circles describe processes, two parallel lines
delimit a grammar file (i.e., persistent data), and arrows
represent data flows. Starting with the modeling tool
Visual Paradigm, a UML diagram can be exported as
an XML document (1). PRISCA parses the XML file
to produce the corresponding 3D representations of the
graphical UML objects, expressed as rendering
instructions defined in terms of the OpenSCAD library utilities.
Then PRISCA uses the OpenSCAD utility to generate the
ASCII STL format (2) from the OpenSCAD instructions.
The STL file generated by PRISCA can then be sent to
a 3D printer to render the final diagram (3).</p>
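The tool-chain steps above can be sketched as follows. This is a simplified illustration: the XML element and attribute names are hypothetical stand-ins (Visual Paradigm's actual export schema differs), and the final step assumes the `openscad` command-line binary is available on the PATH:

```python
import subprocess
import xml.etree.ElementTree as ET

def xml_to_scad(xml_text):
    """Steps 1-2: parse the exported diagram XML and emit OpenSCAD
    instructions, one raised cube per diagram shape."""
    root = ET.fromstring(xml_text)
    parts = []
    for shape in root.iter("Shape"):  # hypothetical element name
        x, y = float(shape.get("x")), float(shape.get("y"))
        w, h = float(shape.get("width")), float(shape.get("height"))
        parts.append(f"translate([{x}, {y}, 0]) cube([{w}, {h}, 2]);")
    return "\n".join(parts)

def scad_to_stl(scad_path, stl_path):
    """Step 3: have OpenSCAD export the ASCII STL sent to the printer."""
    subprocess.run(["openscad", "-o", stl_path, scad_path], check=True)

example = '<Diagram><Shape x="10" y="20" width="40" height="25"/></Diagram>'
print(xml_to_scad(example))
```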
      </sec>
      <sec id="sec-3-4">
        <title>D. Example Diagrams</title>
        <p>This section provides examples of UML
diagrams processed by PRISCA. In order to provide
optimal contrast in the images, the STL as viewed in
OpenSCAD’s viewer is shown in lieu of a
photographic image of the physical 3D “print-out”. Figure 2
shows a UML class diagram created in Visual Paradigm
(in blue) and the PRISCA rendering below it (in yellow).
Figure 3 shows a Visual Paradigm sequence diagram (in
blue) and its PRISCA rendering (in yellow).</p>
      </sec>
      <sec id="sec-3-5">
        <title>E. Human Factor Issues</title>
        <p>
          These examples help to illustrate the challenges we
faced when trying to teach UML diagramming to a VIP
student in a project course involving industrial
collaborators. In an earlier course, the student had previously
used the cardboard with push pins and string to learn
the syntax of a class diagram [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. With that
haptic-based information, she was able to actively contribute
to the discussions associated with the class diagram.
But the sequence diagram was taught only through oral
instruction (using descriptive explanations of shapes,
connectors, layouts, etc.) and from textbook descriptions.
Based on a debriefing session with the student after the
course completed, despite the simplicity of the sequence
diagram notation, the student never gained a proficient
understanding of the object lifelines and the messages
between them. As a result, the student was only able to
contribute at a conceptual level to discussions related to
sequence diagram modeling. The sequence diagram
information seemed abstract, lacking concrete connection
to the project (including the class diagram elements).
Now with PRISCA, the student has access to a haptic
representation of the sequence diagram and can create
a mental model of the spatial relationships between the
graphical elements. Furthermore, PRISCA enables VIPs
to obtain access to models (and their revisions) without
incurring any additional modeling efforts from the rest
of the development team.
        </p>
        <p>A key human factor issue that was raised with this
experience is how to effectively communicate with VIPs
(or others with disabilities). In part due to lack of lead
time in realizing that a VIP was taking the course (i.e.,
the VIP student approached the instructor at the end of
the first day of class to ask for an electronic copy of
the syllabus and the other materials that were distributed
to the class), it was not possible to rewrite the course
materials to be more descriptive for a VIP. As such,
an attempt was made to add impromptu descriptive
explanations during the live delivery of lectures. The
VIP student was asked at the end of each lecture if there
were any questions regarding the lecture material, or if
additional examples could be reviewed with the VIP to
illustrate the modeling syntax and/or semantics. In each
case, the VIP declined any additional assistance. During
the debriefing after the conclusion of the course, the
VIP student informed the instructor that she had thought
she understood the modeling materials sufficiently; it was
only when taking an exam containing questions about the
models that the gaps in her understanding became apparent.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>IV. OPEN ISSUES AND RESEARCH CHALLENGES</title>
      <p>This section discusses several challenges uncovered
during the course of the development of PRISCA.</p>
      <p>Handling Textual Annotations: Handling the
textual annotations found on UML diagrams (e.g.,
association labels, class attributes, and operations) has been a
challenge in the development of PRISCA; we
are continuing to investigate better ways to provide
textual descriptions in a form that both assists VIPs
and supports collaboration. The original plan for the
textual annotations was to convert the text to Braille and
print the 3D model with Braille descriptions presented
in the same manner as the original diagram’s textual
annotations. The main challenge with translating the text
directly to Braille is that the minimum size of Braille
text is too large relative to the 3D print area, thus not
allowing the same amount of textual description found
in a standard UML diagram (see Figure 4). While this
limitation might ultimately rule out translation to and
production of all the diagram annotations in Braille, the
most effective medium for relaying textual annotations
will be determined through interactive feedback with
VIPs and further investigation of emerging technologies.</p>
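The size constraint above can be made concrete with a short calculation. The sketch below is our own illustration: the dot patterns cover only a few letters of a Grade-1 (letter-for-letter) mapping, and the cell pitch is an approximate figure, since exact dimensions vary by Braille standard:

```python
# Partial Grade-1 Braille map: letter -> raised-dot numbers (1-6),
# dots numbered down the left column (1-3) then the right (4-6).
BRAILLE_DOTS = {
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
}

CELL_PITCH_MM = 6.2   # approximate center-to-center cell spacing

def braille_width_mm(text):
    """Physical width of `text` printed as one line of Braille cells."""
    return len(text) * CELL_PITCH_MM

def fits(text, print_area_mm):
    """Does the annotation fit across the given print width?"""
    return braille_width_mm(text) <= print_area_mm

# A 20-character class name needs well over 100 mm of width -- wider
# than many class boxes on a build plate, which is why full UML
# annotations cannot always be reproduced verbatim in Braille.
print(braille_width_mm("a" * 20))
```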
      <p>Representing Connector Endpoint Information:
A key advantage of diagramming, particularly
well illustrated by the UML class diagram, is the ability
to represent a rich amount of information in a concise
fashion. For example, consider the potential pieces of
information that can be represented with a binary
association between two classes (e.g., association label,
multiplicities, link attributes, role names, directed
association, aggregation, etc.). Positioning connector endpoints
more effectively, optimizing connector thickness, and
determining the best method for scaling the diagram
also present challenges for PRISCA development. These
challenges are being addressed through trial and error
as we continue to apply PRISCA to a variety of diagram
layouts and through extending the PRISCA translation
procedures.</p>
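As an illustration of the endpoint-positioning problem (not PRISCA's actual algorithm), one simple strategy is to anchor a connector where the center-to-center line between two boxes crosses the source box's border:

```python
def border_point(cx, cy, w, h, tx, ty):
    """Point where the ray from a box's center (cx, cy) toward a
    target point (tx, ty) crosses the border of the w-by-h box.
    This anchors a connector on the box edge rather than its center."""
    dx, dy = tx - cx, ty - cy
    if dx == 0 and dy == 0:
        return cx, cy
    # Smallest scale factor at which the ray hits a vertical or
    # horizontal edge of the box.
    candidates = []
    if dx != 0:
        candidates.append((w / 2) / abs(dx))
    if dy != 0:
        candidates.append((h / 2) / abs(dy))
    t = min(candidates)
    return cx + t * dx, cy + t * dy

# Box centered at (0, 0), 40 wide, 20 tall; target directly right.
print(border_point(0, 0, 40, 20, 100, 0))  # -> (20.0, 0.0)
```

Multiplicities and arrowheads would then be placed relative to this edge point; choosing offsets that remain distinguishable by touch is part of the trial-and-error process described above.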
      <p>Applicability and extensibility of PRISCA: As we
continue to explore more diagramming notations, we
may discover that the current graphics library/utilities
are insufficient to capture the level of detail needed, or
that the STL language is insufficient to capture the level
of sophistication needed. For example, different textures
and/or thickness variations may be one way to capture
coloring schemes commonly used in diagramming tools.</p>
      <p>Another challenge is how to make our techniques
portable to other modeling tools. While XML is intended
to be a standard output format for UML modeling
languages, we have found that the XML output from
one UML modeling tool is not directly usable by another
UML modeling tool. Previously, we explored the
development of an “XML-interchange” language that could
be a common format to which various vendor-specific
or tool-specific XML languages could be mapped.</p>
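The "XML-interchange" idea can be sketched as follows. Both export vocabularies here are invented stand-ins, since actual tool schemas differ; the point is that each vendor-specific format is mapped onto one common intermediate representation:

```python
import xml.etree.ElementTree as ET

def normalize(xml_text):
    """Map two (invented) vendor-specific export vocabularies onto one
    common intermediate representation of the model's classes."""
    root = ET.fromstring(xml_text)
    classes = []
    for el in root.iter():
        if el.tag == "Class":            # hypothetical "tool A" export
            classes.append({"name": el.get("name")})
        elif el.tag == "classifier":     # hypothetical "tool B" export
            classes.append({"name": el.findtext("label")})
    return classes

a = '<Model><Class name="Sensor"/></Model>'
b = '<model><classifier><label>Sensor</label></classifier></model>'
print(normalize(a), normalize(b))
```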
      <p>Finally, while the current work applies to UML
modeling, our longer-term goal is to extend these
capabilities to other commonly-used diagramming notations,
such as those used for mathematical graphing, MATLAB, etc.</p>
      <p>Collaborative Modeling: As mentioned earlier,
VIPs have heightened capabilities, such as memory and
touch, that compensate for their visual impairment. We are
continuing to explore the most effective process for
using PRISCA to produce 3D representations of the
UML diagrams; in particular, what degree and types
of model changes warrant generating an updated haptic
representation. During the course of creating a model for
a software project, the beginning stages might involve
dramatic changes from one iteration to the next as the
development team gains a better understanding of the
requirements and design constraints from the customer.
As the project progresses, the changes become more
fine-grained and occur less frequently. We will work with
VIPs to determine what is the most effective means to
convey the changes in order to enable them to effectively
provide feedback on the models.</p>
    </sec>
    <sec id="sec-6">
      <title>V. CONCLUSION</title>
      <p>This paper describes PRISCA, a proof of concept
project to make progress in enabling VIPs to collaborate
with other (sighted) developers by working with 3D
“print-outs” of software models. Automatically creating
haptic 3D diagrams from the tools used by other
developers enables PRISCA to be practically implemented in a
setting constrained by time and effort. Currently, it takes
about 1.5-2 hours to produce a 3D representation of a
UML diagram with 10-15 elements. (The upper
timeframe is needed when producing the sequence diagrams
since the object lifelines and their labels are printed atop
a supporting sheet of plastic; the 3D “print-out” of the
diagram is too fragile to handle otherwise.) The
conversion of text to Braille further facilitates comprehension
of a model.</p>
      <p>The original motivation for the objective of PRISCA
was to teach UML modeling to visually-impaired
computer science students to enable their collaboration on
industry-sponsored team projects. As such, introduction
to the syntactic elements and their (spatial) relationships
was an important capability. As with sighted developers,
proficiency with modeling for students and industry staff
will come with practice and exposure to a variety of
examples. PRISCA enables VIPs to gain at least a cursory
understanding of UML modeling notations beyond what
might be gained from auditory and/or textual
description-based instruction.</p>
      <p>
        Our future work will pursue several dimensions of the
PRISCA project, including the aforementioned research
challenges. The focus on reusability of diagram elements
facilitates our ability to extend PRISCA to handle other
diagram types, beyond the sequence and class UML
diagrams presented in the paper. We will continue to
work with VIP developers to expand PRISCA to address
the mentioned research challenges and to facilitate
modeling collaboration between VIPs and other development
team members. We will also explore and leverage the
emerging technology associated with the use of haptic
representations for images on websites [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ], innovative
user interface designs specifically for VIPs [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ], and 3D
art with touch sensors that trigger auditory descriptions
and explanations [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ].
      </p>
    </sec>
    <sec id="sec-7">
      <title>ACKNOWLEDGMENTS</title>
      <p>This work has been supported in part by NSF grants
CNS-1305358 and DBI-0939454, Michigan State
University EnSURE Program, General Motors Research,
and Ford Motor Company. The authors gratefully
acknowledge the early work with the OpenSCAD libraries
for the PRISCA project done by Marcos Botros. Jordyn
Castor has been an inspiration for the project, answering
questions and providing feedback for the PRISCA project.
Finally, we greatly appreciate the detailed comments
provided by the reviewers.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>C. B.</given-names>
            <surname>Owen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Coburn</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Castor</surname>
          </string-name>
          , “
          <article-title>Teaching modern object-oriented programming to the blind: An instructor and student experience</article-title>
          ,”
          <source>in Proceedings of 2014 ASEE Annual Conference</source>
          , (Indianapolis, Indiana),
          <source>ASEE Conferences</source>
          ,
          <year>June 2014</year>
          . Available from website https://peer.asee.org/23100.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R. G.</given-names>
            <surname>Brookshire</surname>
          </string-name>
          , “
          <article-title>Teaching UML database modeling to visually impaired students</article-title>
          ,”
          <source>Issues in Information Systems</source>
          , vol.
          <volume>7</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>98</fpage>
          -
          <lpage>101</lpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>P. L.</given-names>
            <surname>McKinley</surname>
          </string-name>
          , “
          <article-title>Literacy in the Lives of the Blind: An ethnographic study in the San Francisco Bay area</article-title>
          .”
          <source>PhD Dissertation</source>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>H.</given-names>
            <surname>Petrie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Schlieder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Blenkhorn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. G.</given-names>
            <surname>Evans</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>King</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.-M.</given-names>
            <surname>O'Neill</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. T.</given-names>
            <surname>Ioannidis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Gallagher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Crombie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Mager</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Alafaci</surname>
          </string-name>
          , “
          <article-title>Tedub: A system for presenting and exploring technical drawings for blind people</article-title>
          ,”
          <source>in Proceedings of the 8th International Conference on Computers Helping People with Special Needs</source>
          , ICCHP '02, (London, UK), pp.
          <fpage>537</fpage>
          -
          <lpage>539</lpage>
          , Springer-Verlag,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
            <surname>Horstmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lorenz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Watkowski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Ioannidis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Herzog</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>King</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. G.</given-names>
            <surname>Evans</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Hagen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Schlieder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.-M.</given-names>
            <surname>Burn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>King</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Petrie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Dijkstra</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Crombie</surname>
          </string-name>
          , “
          <article-title>Automated interpretation and accessible presentation of technical diagrams for blind people</article-title>
          ,”
          <source>New Review of Hypermedia and Multimedia</source>
          , vol.
          <volume>10</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>141</fpage>
          -
          <lpage>163</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>King</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Blenkhorn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Crombie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Dijkstra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Evans</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Wood</surname>
          </string-name>
          ,
          <article-title>Presenting UML software engineering diagrams to blind people</article-title>
          . Springer,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>D.</given-names>
            <surname>Miller</surname>
          </string-name>
          , “
          <article-title>Accessible diagram interfaces</article-title>
          ,”
          <year>2006</year>
          . Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.104.6892&amp;rep=rep1&amp;type=pdf.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>D.</given-names>
            <surname>Miller</surname>
          </string-name>
          , “
          <article-title>Can we work together?</article-title>
          ,”
          <source>PhD thesis</source>
          , University of North Carolina,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>W. M. H.</given-names>
            <surname>Oosting</surname>
          </string-name>
          , “
            <article-title>Giving Visually Impaired People an Extra Dimension: Designing a Tactile Graphics Handling Braille Display</article-title>
          ,”
          <source>in 1st Twente Student Conference on IT</source>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          Visual Paradigm,
          <source>Visual Paradigm Quick Start</source>
          , 12.1 ed., http://www.visual-paradigm.com/features, April
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          CNET Staff, “
          <article-title>CNET MakerBot Replicator Review</article-title>
          ,”
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>R. F.</given-names>
            <surname>Cohen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Meacham</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Skaff</surname>
          </string-name>
          , “
          <article-title>PLUMB: Displaying Graphs to the Blind Using an Active Auditory Interface</article-title>
          ,”
          <source>in Proceedings of the 7th International ACM SIGACCESS Conference on Computers and Accessibility</source>
          , pp.
          <fpage>182</fpage>
          -
          <lpage>183</lpage>
          , ACM,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>R. F.</given-names>
            <surname>Cohen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Meacham</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Skaff</surname>
          </string-name>
          , “
          <article-title>Teaching graphs to visually impaired students using an active auditory interface</article-title>
          ,”
          <source>in Proceedings of the 37th SIGCSE Technical Symposium on Computer Science Education</source>
          , pp.
          <fpage>279</fpage>
          -
          <lpage>282</lpage>
          , ACM,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14] ViewPlus, “Delivering Sense Ability.”
          <article-title>Tiger Software Suite for translating text to Braille</article-title>
          . Website: https://viewplus.com/software/.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15] TouchGraphics, “
          <article-title>Tactile Design for Universal Access</article-title>
          .”
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          Apple Inc., “Chapter 11.
          <article-title>Using VoiceOver Gestures</article-title>
          .” Description of features on website: https://www.apple.com/voiceover/info/guide/1137.html.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>J.</given-names>
            <surname>Castor</surname>
          </string-name>
          , “
          <article-title>Experiences with modeling, from high school to college</article-title>
          .” Personal Communication,
          <year>Fall 2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>D.</given-names>
            <surname>Brauner</surname>
          </string-name>
          and
          <string-name>
            <given-names>E.</given-names>
            <surname>Summers</surname>
          </string-name>
          , “
          <article-title>iOS Accessibility: Teaching the Teachers</article-title>
          ,”
          <year>2013</year>
          . Retrieved from website: https://nfb.org/images/nfb/publications/fr/fr32/1/fr320112.htm.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>J.</given-names>
            <surname>Roberts</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Slattery</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>O'Doherty</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T.</given-names>
            <surname>Comstock</surname>
          </string-name>
          , “
          <article-title>37.2: A New Refreshable Tactile Graphic Display Technology for the Blind and Visually Impaired</article-title>
          ,”
          <source>SID Symposium Digest of Technical Papers</source>
          , vol.
          <volume>34</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>1148</fpage>
          -
          <lpage>1151</lpage>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          National Institute of Standards and Technology, “
          <article-title>NIST ‘pins’ down imaging system for the blind</article-title>
          ,”
          <source>ScienceDaily</source>
          ,
          <year>November 13, 2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          National Federation of the Blind of Washington, “
          <article-title>About blindness</article-title>
          .” Retrieved from website: http://www.nfbw.org/blindness.html.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>M.</given-names>
            <surname>Bates</surname>
          </string-name>
          , “
          <article-title>Super powers for the blind and deaf: The brain rewires itself to boost the remaining senses</article-title>
          ,”
          <source>Scientific American</source>
          ,
          <year>September 2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <article-title>“Priscilla McKinley Obituary</article-title>
          .”
          <source>Mitchell County Globe Gazette</source>
          ,
          <year>December 2010</year>
          . http://globegazette.com/mcpress/obituaries/priscilla-mckinley/article_d60b3c1e-0c8a-11e0-9c41-001cc4c002e0.html.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>P. L.</given-names>
            <surname>McKinley</surname>
          </string-name>
          , “
          <article-title>Baby Steps, Long Strides, and Elephant Seal Humps</article-title>
          .”
          <source>Braille Monitor</source>
          ,
          <year>2004</year>
          . Available from website: https://nfb.org/images/nfb/publications/bm/bm04/bm0410/bm041003.htm.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>J.</given-names>
            <surname>Castor</surname>
          </string-name>
          , “
          <article-title>Pursuing a Dream and Beating the Odd</article-title>
          .” Michigan State University Newsletter. Retrieved from website: http://givingto.msu.edu/stories/story.cfm?id=33.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>M.</given-names>
            <surname>Kintel</surname>
          </string-name>
          ,
          <source>About OpenSCAD</source>
          . OpenSCAD, 2015.03 ed., March
          <year>2015</year>
          . Retrieved from http://www.openscad.org/about
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>A.</given-names>
            <surname>Bajcsy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.-S.</given-names>
            <surname>Li-Baboud</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Brady</surname>
          </string-name>
          , “
          <article-title>Depicting web images for the blind and visually impaired</article-title>
          ,”
          <source>SPIE website</source>
          ,
          <year>2013</year>
          . Retrieved from http://spie.org/x104896.xml.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>D.</given-names>
            <surname>Schnelle-Walka</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Mühlhäuser</surname>
          </string-name>
          , “
          <article-title>User interfaces for brainstorming meetings with blind and sighted persons</article-title>
          .” Darmstadt University website. Available at https://www.tk.informatik.tu-darmstadt.de/de/research/talk-touch-interaction/uiblindmeeting/.
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>S.</given-names>
            <surname>Schlinker</surname>
          </string-name>
          , “
          <article-title>Accessible Art at the Broad: Exhibit Combines Braille, Poetry and Painting</article-title>
          ,”
          <source>MSUToday Newspaper</source>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>