=Paper=
{{Paper
|id=Vol-1419/paper0042
|storemode=property
|title=What is Lost in Translation from Visual Graphics to Text for Accessibility
|pdfUrl=https://ceur-ws.org/Vol-1419/paper0042.pdf
|volume=Vol-1419
|dblpUrl=https://dblp.org/rec/conf/eapcogsci/Coppin15
}}
==What is Lost in Translation from Visual Graphics to Text for Accessibility==
Peter Coppin (pcoppin@faculty.ocadu.ca)
Dept. of Industrial Design, Faculty of Design, OCAD University, Toronto, ON M5T 1W1 CANADA
Dept. of Mechanical and Industrial Engineering, University of Toronto, Toronto, ON M5S 3G8 CANADA
Abstract

Many blind and low-vision individuals are unable to access digital graphics visually. Currently, the solution to this accessibility problem is to produce text descriptions of visual graphics, which are then translated via text-to-speech screen reader technology. However, if a text description can accurately convey the meaning intended by an author of a visualization, then why did the author create the visualization in the first place? This essay critically examines this problem by comparing the so-called graphic–linguistic distinction to similar distinctions between the properties of sound and speech. It also presents a provisional model for identifying visual properties of graphics that are not conveyed via text-to-speech translations, with the goal of informing the design of more effective sonic translations of visual graphics.
Graphics Without Visual Perception
Consider the experience of a blind or low-vision individual
who uses a screen reader to access pictures, diagrams,
charts, and graphs. Unlike a user who accesses graphical media through visual perception, the screen reader user usually accesses these graphics via text-to-speech “descriptions,” essentially interpretations of what the person who produced the text descriptions deemed most relevant about the author’s intended meaning. For example, Figure 1a presents a financial chart with rising and falling stock prices over time, where time is shown on the horizontal axis and monetary value is shown on the vertical axis. Figure 1d presents a text description of the chart compliant with the Web Content Accessibility Guidelines (WCAG), using text to describe the rising and falling monetary values over time. The next sections compare and contrast how these presentations are experienced.

In a text description of a visual graphic (Figure 1d), all of the information is conveyed via text (or text-to-speech, when conveyed via screen reader technology). But in the original chart (Figure 1a), only some of the information is conveyed via text, predominantly numerical values and labels (Figure 1c); the shape of the shaded contour (Figure 1b) is not conveyed via text: the visually perceived shapes are picked up “more directly,” and the features of shapes are translated to text descriptions. However, important properties of visually perceived shape information (Figure 1b) are lost in translation and are instead conveyed via text (Figure 1e). This shape information is needed to provide the unique affordances that are often associated with “visual” representations relative to text.

Figure 1. The chart (a) is composed of visually perceived shape contours (b) and text labels (c). Accessibility practices translate b–c to text (d), with shapes described via text (e). (Adapted from “Web Accessibility Best Practices: Graphs” by Campus Information Technologies and Educational Services (CITES) and Disability Resources and Educational Services (DRES), University of Illinois at Urbana/Champaign. Copyright 2005 by University of Illinois at Urbana/Champaign.)

Many scholars have explored the differences between graphics and text, often referred to as the so-called “graphic–linguistic distinction” (Shimojima, 1999). In addition, researchers have investigated how so-called “non-linguistic sonification” can be employed to make charts and graphs more accessible (e.g., Edwards, 2010). This essay examines the graphic–linguistic distinction in order to better understand how it could correspond to a similar distinction between properties of non-linguistic sonification compared to speech, providing a means to identify what is lost when graphics are translated to text-to-speech. An increased understanding could inform the design of new approaches for conveying properties of graphically represented shapes via sound.
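As a concrete illustration of this flattening, the translation that current accessibility practice performs can be sketched in a few lines of code. The data, function name, and wording below are invented for illustration (they are not the WCAG example of Figure 1d): every labeled value survives the translation into a linear sequence of text, but the contour of the shaded shape does not.

```python
# Hypothetical sketch: flatten a chart's data into a WCAG-style linear
# text description. All names, data, and phrasing are invented here;
# the point is that labels and values survive, but shape does not.

def chart_to_text_description(series):
    """Render (label, value) pairs as a single linear text description."""
    lo = min(series, key=lambda p: p[1])   # lowest labeled value
    hi = max(series, key=lambda p: p[1])   # highest labeled value
    summary = f"Values range from {lo[1]} ({lo[0]}) to {hi[1]} ({hi[0]})."
    points = "; ".join(f"{label}: {value}" for label, value in series)
    return f"{summary} {points}."

quarters = [("Q1", 10.0), ("Q2", 15.0), ("Q3", 12.0), ("Q4", 9.0)]
print(chart_to_text_description(quarters))
# Every number is recoverable from the text, but the rise-and-fall
# contour must now be inferred rather than perceived.
```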
2D Versus Sequential

According to Larkin and Simon (1987), a diagrammatic representation can be defined as a “data structure in which information is indexed by two-dimensional location,” whereas a sentential representation can be defined as “a data structure in which elements appear in a single sequence.” An advantage of diagrams is that they “preserve explicitly the information about the topographical and geometric relations among the components of the problem.” For the purposes of this essay, the text description in Figure 1e is classified as sentential because the text is composed of marks arranged in a linear sequence and the marks are taken to refer to words with linguistic meanings (linguistically conveyed elements). In contrast, Figure 1a is classified as a diagram because the financial values are indicated via (textually) labeled points or lines (elements) that are indexed to a graphical grid. The visually processed spatial relations among these labeled marks yield powerful affordances: by processing the contours of lines or the relative positions of marks scattered across the two-dimensional graphical surface, the viewer can infer values and trends that are not explicitly conveyed via labels (cf. Barwise & Etchemendy, 1990).

Implications for sonic charts and graphs

Sonic sentential properties. Text-to-speech (the current standard for WCAG accessibility) would seem to be the obvious candidate for the sonic version of what Larkin and Simon referred to as a sentential structure, where elements are arranged in a linear sequence. In the case of visually processed written sentences composed of word forms printed on a page, the sequential properties result from the linear arrangement of characters and word forms on the printed surface. In the case of sonic sentential structures, the sequential properties are temporal, presented as a sequence of sounds that are perceptually processed as words that refer to intended meanings. Larkin and Simon did not define what the elements (that are arranged in sequence) are composed of. For the purpose of this subsection, let us assume that the elements are some combination of properties that, when sequentially processed as words, refer to intended items.

Sonic diagrammatic properties. To present diagrammatic properties in a way that can be perceived aurally, designers would need to exploit properties of sound that can convey topological and geometric relations. People use stereo, echo, and the Doppler effect to determine the spatial locations of sound-producing objects in physical environments (cf. Nasir & Roberts, 2007). Designers could exploit these cues to convey geometric and topological relations among elements that are indexed to a 2D plane (cf. Brown, Ramloll, Burton, & Riedel, 2003; Hermann, Hunt, & Neuhoff, 2011).

Figure 2 shows how left and right arrow keys could move an “audio cursor” to different positions on an x-axis of a computationally generated 2D space. The position of the sonically conveyed cursor on the x-axis could be indicated via stereo (cf. Zhao, Plaisant, Shneiderman, & Lazar, 2008). For a simple spark line graph, the sonic cursor can alter the pitch of the sound if “scrubbed” to different points on the x-axis, so that higher pitches correspond to points that intersect with the cursor at higher elevations (Figure 2, right) and lower pitches correspond to points that intersect with the cursor at lower elevations, thereby allowing blind or low-vision users to perceive the contours of the graph (cf. Brown, Ramloll, Burton, & Riedel, 2003).

Relation Symbols and Object Symbols

According to Russell (1923), in sentences “words which mean relations are not themselves relations,” whereas in graphical representations like maps, “a relation is represented by a relation.” An example of the latter is the financial chart (e.g., Figure 1a), where higher monetary values are conveyed via marks at higher elevations of the graphic, whereas lower monetary values are conveyed via marks at lower elevations. This convention allows the visually perceived spatial relationships among the marks to represent relationships among monetary values over time.

Implications for sonic charts and graphs

Graphical relations could be conveyed sonically. Consider two tones with different pitches: Tone A and Tone B (Figure 2, right). If Tone A is at a lower frequency than Tone B, then the sonic relation between the two tones is the perceptible difference in pitch between the tones. For example, if Tone A refers to a stock price at an earlier point in time, and Tone B refers to a stock price at a later point in time, then the perceptible difference between the pitches of the tones can convey the difference in price over time. Moving the sonic cursor from left to right would correspond to a change (increase) in pitch, conveying the change in stock price over time via a sonic relation.

Figure 2. By scrubbing a “sonic cursor” along an axis, audiences could access sonically conveyed relations through changes in pitch and via stereo.

Analog Versus Digital

The classic distinction between analog versus digital, where analog refers to visual properties of a graphic and digital refers to linguistic properties, is most commonly associated with Goodman (1968). Shimojima (1999) illustrated this distinction using the example of a speedometer dial. The analog aspect of the dial is the perceived orientation of the speedometer needle relative to the numerically labeled marks on the dial. The digital aspect is the numerical magnitude (speed) that the user extrapolates by perceptually processing the orientation of the needle relative to the marks representing numerical values.
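The scrubbing scheme sketched around Figure 2 can be made concrete in code. The mapping below is a minimal sketch under assumed conventions (a 220–880 Hz pitch range, and pan values from -1.0 for hard left to +1.0 for hard right); it is not drawn from any of the cited systems.

```python
def sonify_sparkline(values, f_min=220.0, f_max=880.0):
    """Map each point of a spark line to a (frequency_hz, stereo_pan) pair.

    Higher y-values yield higher pitches; the cursor's x-position is
    conveyed by stereo pan, sweeping from left (-1.0) to right (+1.0).
    """
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0            # avoid dividing by zero on flat data
    n = len(values)
    mapping = []
    for i, v in enumerate(values):
        # Exponential interpolation keeps equal value steps sounding
        # like roughly equal pitch intervals.
        frequency = f_min * (f_max / f_min) ** ((v - lo) / span)
        pan = -1.0 + 2.0 * i / (n - 1) if n > 1 else 0.0
        mapping.append((frequency, pan))
    return mapping

# A rising-then-falling stock price, "scrubbed" left to right:
for freq, pan in sonify_sparkline([10.0, 12.5, 15.0, 11.0, 9.0]):
    print(f"{freq:7.1f} Hz  pan {pan:+.2f}")
```

Feeding these pairs to any stereo synthesizer would let a listener track the contour: the peak is heard as the highest pitch, and the final decline as a falling pitch drifting toward the right of the stereo field.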
Implications for sonic charts and graphs

The analog versus digital distinction appears to involve two interrelated capabilities: lower-level perceptual capabilities to process geometric and topological properties (e.g., those shown on the speedometer dial); and higher-level capabilities to process, filter, and interpret how those perceptually processed features fall into conceptual categories (e.g., the numerically represented velocity) (Mandler, 2006; Figure 3). For instance, to discern the values shown on a visual financial chart, a user must perceptually process the light reflected from the surface of the chart, observing lines in relation to dots that are labeled using textually conveyed numerical values and/or company names. To discern topological and geometric features using sound perception, a user would need the same set of interrelated capabilities: lower-level capabilities to process varying sound frequencies, timbre, etc., as well as higher-level capabilities to identify the linguistic meanings of the sounds. The current text-to-speech approach only exploits the digital properties of language, but designers could produce more effective translations by recruiting “pre-categorized” analog properties of sound such as pitch, echo, stereo, and timbre to convey geometric and topological properties.

Figure 3. A perception–reaction system is hierarchically organized to process lower-level perceptual structures and categorize them into higher-level conceptual categories.

Intrinsic Versus Extrinsic Constraints

For brevity, the following discussion will use the classic characterization provided by Barwise and Etchemendy (1990) because it is compact and intuitive:

Diagrams are physical situations. They must be, since we can see them. As such, they obey their own set of constraints . . . By choosing a representational scheme appropriately, so that the constraints on the diagrams have a good match with the constraints on the described situation, the diagram can generate a lot of information that the user never need infer. Rather, the user can simply read off facts from the diagram as needed. This situation is in stark contrast to sentential inference, where even the most trivial consequence needs to be inferred explicitly.

To illustrate how “diagrams are physical situations,” consider the illustration shown in Figure 2 (left). A text (or text-to-speech) description might go as follows: “A is below B and both A and B are to the left of C.” Another textual description might read: “B is between A and C and is above both A and C.” Each text description conveys a different interpretation of what is shown visually and therefore affords different inferences. In contrast, a diagram can convey many other relationships because of how it conveys topological and geometric information through visual perception: Barwise and Etchemendy referred to this as a diagram’s ability to present “countless facts.”

Implications for sonic charts and graphs

When Barwise and Etchemendy (1990) referred to diagrams as “physical situations,” they were referring to the properties (and affordances) of diagrams that emerge through interaction via a human visual perception system. The challenge for designers who seek to extend the affordances of visual diagrams to the sonic domain is to identify properties or dimensions of sound that similarly (i.e., using human perceptual processing of sound) make use of “physical situations” to present “countless facts.” Thus, a hybrid stereo–varying frequency interface (see Figure 3) should enable a user to “hear the shape” of a contour. Indexing text-to-speech labels to contours should allow users to form multiple sentences (countless facts) about the geometric and/or topological relations among the labeled elements.

Extending the Graphic–Linguistic Distinction into the Sonic Domain

Let us now extend the various graphic–linguistic distinctions to consider sonic versions of visual charts and graphs.

1. Extending the diagrammatic versus sentential distinction, text-to-speech can be considered a sonic version of what Larkin and Simon referred to as a sentential structure and is the current WCAG approach to web accessibility. In contrast, spatial sound can be exploited to convey 2D sonic diagrammatic external representations.

2. Extending the analog versus digital distinction, text-to-speech uses language to convey digital properties sonically. The analog properties of sound, such as tone, timbre, stereo, and echo, could afford the communication of spatial, geometric, or topological information.

3. Extending the distinction between relation symbols and object symbols, the current text-to-speech approach uses words to convey relations. Because relations among elements represented by analog and spatial properties of sound are themselves relations, analog and spatial properties of sound could be recruited to map numerical values to perceptual dimensions.

4. Extending the distinction between intrinsic and extrinsic constraints, producing sonic versions of visual graphics would require identifying “physical situations” that
naturally emerge during human perceptual processing of sound to present “countless facts.”

Perceptual and Conceptual Graphic Relations

This section integrates these extensions and proposes how the graphic–linguistic distinction could be extended to sonic external representations. First, let us recruit and expand on the distinction between lower-level perceptually processed topological and geometric features of an environment versus the recognition, categorization, and linguistic communication of those features.

Visual and aural sentential structures and relations are detected and perceptually processed via lower-level sensory receptors and perceptual categories (Figure 3, left). In written text or text-to-speech, what is most relevant is the higher-level conceptual category (Figure 3, right) that a given feature (such as perceptually processed printed text on a page or text-to-speech) is taken to fall under. What is needed is a way to convey topological and geometric relations among elements by exploiting lower-level perceptually processed features of a visual graphic or sonic structure (Figure 3, left). Let us refer to these perceptually processed features as perceptual properties, to the perceptually processed relations among elements as perceptual relations, and to relations that are communicated via text as text-described relations.

Perceptual Relations vs. Text-Described Relations

We are now ready to build on previous work by Coppin (2014) to provide a theoretical foundation for distinguishing perceptual relations versus text-described relations.

The model is based on the idea that an individual’s perception–reaction loop (cf. Gibson, 1986) enables survival and prosperity within a dynamic environment composed of change and variation. This requires capabilities to predict, anticipate, and simulate (Barsalou, 1999) dynamic change and variation. For example, reaching for and grasping an item such as a cup requires capabilities to perceptually process features from the proximal surface of the item and also to predict, anticipate, and simulate features of the distal surface of the item.

These simulations are constructed from the memory traces of past perception–reactions (conjunctive neurons), so simulation involves many of the same neural systems used during perception (Kosslyn, Ganis, & Thompson, 2001). For example, as I perceive the cup, I am also informing potential action (reaching for and grasping the proximal and distal sides of the cup). Thus, perception and simulation are integrated aspects of perception–reaction within a physical environment, and each act of perception–reaction leaves memory traces in the form of conjunctive neurons across lower-level association areas (Figure 3).

At lower-level association areas, which are more tightly coupled with sensory receptors, simulated prototypes fall under perceptual categories. At higher-level association areas (see Figure 3, right), conjunctive neurons converge in zones across multiple sensory modes. These “convergence zones” (Damasio, 1989; Simmons & Barsalou, 2003) enable simulated prototypes of possible perception–reactions that are not as easily described in terms of a specific perceptual mode or a reenactment of a specific prior perception–action. Instead, these simulated prototypes fall under more general categories of possible perception–actions (Barsalou, 2003). These are not only more amodal, but have been described as more filtered, interpreted (Pylyshyn, 1973), conceptual (Barsalou, 2003, 2005), or abstract (Barsalou, 2003). For example, a child who takes a bite out of what turns out to be a rotten apple might later reenact this experience when she perceives another rotten apple with common properties. Over time, she will develop an understanding of ‘rotten’ as a category that can include apples, as well as many other objects and experiences.

Similarly, a child can learn to associate sounds with certain intended meanings (learning a language), or to associate marks with intended meaning (learning to read). The abstract concept of ‘square’ can apply to a shape on a raised surface that is touched but not seen, as well as to a drawing on a piece of paper that is seen and not touched. These “less modally specific” simulations have been described as more “interpreted” or “conceptual,” while more perceptually based simulations are considered to be more “concrete.” The next section applies this interpretation to external graphic representation.

Back to charts and graphs. In a financial chart (and many other kinds of diagrams), relations are conveyed via lower-level perceptual processing of the geometrical and topological properties of the marked physical surface (Table 1). In contrast, in text descriptions (sentential structures), relations are conceptual and conveyed linguistically (see Table 1); although visual properties of printed text or aural properties of text-to-speech are also picked up by sensory receptors, what is meaningful about them is the conceptual relation that is conveyed linguistically.

Table 1. Diagrams are composed of perceptually processed relations among linguistically conveyed conceptual objects; sentences are composed of linguistically conveyed conceptual relations among linguistically conveyed conceptual objects (adapted from Coppin, 2014).

                    Diagrammatic    Sentential
  Relations         Perceptual      Conceptual
  Objects or Items  Conceptual      Conceptual

Perceptual Specificity is Lost in Translation

The idea of “specificity” is central to understanding what is lost in translation, so let us begin by clarifying what is meant by “more or less specific” in this context. Consider the line shown in Figure 4b. Relative to the line of Figure 4c, we have more knowledge about the location of a point in a one-dimensional space, due to the shaded red marker. This means we have more certainty (or more information) about
the specified location of the point in Figure 4b than we do about the location of the point in Figure 4c.

Figure 4. The left vertical line (b) refers to the limited range of perceptual structures conveyed via a given graphic. The right line (c) refers to the wider range of possible conceptual categories that the perceptual structures could fall under. The model predicts that when perceptual specificity is high (b), conceptual specificity is low (c).

Extending the line example to discuss perceptual relations, Figure 4b refers to marks or sounds intentionally configured by an author to cause intended audience percepts (the diagram in Figure 4a). However, the perceptual relations of Figure 4a can be processed, filtered, and interpreted to fall under a range of possible relational categories (that can be text-described), indicated by the highlighted segment of the right line in Figure 4c (as shown in Figure 4d: “A is below B and both A and B are to the left of C” or “B is between A and C and is above both A and C”). In other words, although perceptual specificity is high, conceptual specificity of the intended relation is low because the perceptual relations can fall under numerous conceptual categories. However, the reverse is also true, and this reversal exposes the heart of what is lost during the translation process.

Conceptual Specificity is Perceptually Ambiguous

Extending the line example to discuss the perceptual ambiguity of text-described (conceptual) relations, the right highlighted line in Figure 5c refers to a specific (sentential) text description authored to convey intended conceptual relations (Figure 5d). However, numerous perceptual relations (Figure 5a) can fall under the text-described conceptual relations, indicated by the highlighted segment of the left line in Figure 5b. In other words, although conceptual specificity is high, perceptual specificity of the intended relations is low, because numerous perceptual relations can fall under the text-described conceptual relations.

Figure 5. The model predicts that when conceptual specificity is high (c), perceptual specificity is low (b).

Application to an Example Design Problem

Let us now return to the WCAG text description example from Figure 1 in order to demonstrate what is lost in translation and how what is lost could be conveyed via non-linguistic sound. In the text description (Figure 1d), the problem is that all content is conveyed conceptually (via text-to-speech), whereas the original visual graphic that the text description is based on conveys much of the content (the contour of the shape) perceptually: perceptual relations are lost and replaced by conceptual relations, generating perceptual ambiguity. If the objective is to present Figure 1a sonically, how can a designer decide which aspects should be conveyed via conceptual properties (text-to-speech) and which aspects should be conveyed via perceptual sonic properties (such as spatial sound)?

Recall the perceptual distinction, where perceptual properties are predicted to afford the communication of concrete structures more effectively compared with conceptual properties, and an aspect of a graphic can be identified as “more concrete” if it produces a perceptual structure that corresponds to what could be picked up and perceptually processed from a physical environment. In this account, the graphically represented shape contour (Figure 1b) is primarily perceptual, and is therefore more appropriate for translation to sonic properties that can use spatial sound to convey geometric and topological relations among conceptually conveyed objects.

To determine which aspects of a graphic should be conveyed via text-to-speech, recall the conceptual distinction: text is predicted to afford the communication of abstract conceptual categories more effectively compared with perceptual properties, and a concept can be identified as more abstract if it is more amodal. In other words, it is less easily mapped back to a structure that could be picked up and perceptually processed from a physical environment. Under this account, the numbers that label increments on the x and y axes (Figure 1a) are more conceptual because they cannot be mapped back to a perceptual structure that could be picked up from a physical environment.

Conclusion

This essay proposes a provisional model to underpin the various accounts of the graphic–linguistic distinction described in the literature, as a means to extend the graphic–linguistic distinction into aural domains. The model makes the distinction in terms of lower-level perceptual capabilities that enable perceivers to perceptually process concrete structures (e.g., geometric and topological features) on the one hand, and higher-level capabilities that enable perceivers to process and interpret how those perceptually processed structures fall under more abstract conceptual categories on the other.

Due to these distinctions, the model predicts that perceptual relations (conveyed via graphics or non-linguistic sonification) afford the communication of concrete relations more effectively than conceptual relations (conveyed via text or text-to-speech). In addition, the model predicts that conceptual relations (conveyed via text or text-to-speech) afford the communication of abstract relations more effectively than perceptual relations conveyed via graphics or non-linguistic sonification. This could be tested, for example, by observing whether perceivers can identify visual data sets more accurately using sonification or text descriptions.

In addition, the model streamlines accounts that distinguish diagrammatic from sentential structures to (1) characterize sentential structures as composed of conceptual relations among conceptual objects on the one hand, and (2) diagrammatic structures as perceptually represented relations among conceptual objects on the other. Under this account, (3) a sonic diagram is conceptualized as sonically conveyed relations among linguistically conveyed (via text-to-speech) objects.

This model is useful within a design context because designers lack clear models or guidelines for converting visual graphics into non-visual perceptual modes. This can be seen in the WCAG text description example, which ignores the pictorial properties of graphics.

By reverse engineering the classic graphic–linguistic distinction to more fundamental perceptual principles, this model provides a way to understand how the distinction applies to sonic representations. This approach could also be applied to haptic representations, but the focus of this paper was on sound for its ubiquity in the consumer market.

Acknowledgements

This research was supported in part by grants from the Centre for Innovation in Data-Driven Design and the Graphics Animation and New Media Centre for Excellence. I would like to thank Research Assistant Ambrose Li for his assistance in the preparation of this essay and Dr. David Steinman for the many fruitful conversations that helped inform the ideas explored in the work described here.

References

Barsalou, L. W. 1999. Perceptual symbol systems. Behavioral & Brain Sciences, 22, 577–660.

Barsalou, L. W. 2003. Abstraction in perceptual symbol systems. Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences, 358(1435), 1177–1187. doi:10.1098/rstb.2003.1319

Barsalou, L. W. 2005. Abstraction as dynamic interpretation in perceptual symbol systems. In L. Gershkoff-Stowe & D. Rakison (Eds.), Carnegie Symposium Series: Building object categories (pp. 389–431). Mahwah, NJ: Erlbaum.

Barsalou, L. W. 2009. Simulation, situated conceptualization, and prediction. Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences, 364(1521), 1281–1289. doi:10.1098/rstb.2008.0319

Barwise, J., & Etchemendy, J. 1990. Visual information and valid reasoning. In W. Zimmerman (Ed.), Visualization in mathematics (pp. 8–23). Washington, DC: Mathematical Association of America.

Brown, L. M., Brewster, S. A., Ramloll, S. A., Burton, R., & Riedel, B. 2003. Design guidelines for audio presentation of graphs and tables. International Conference on Auditory Display.

Coppin, P. W. 2014. Perceptual-cognitive properties of pictures, diagrams, and sentences: Toward a science of visual information design (Doctoral dissertation, University of Toronto, Toronto, Canada). Retrieved from https://tspace.library.utoronto.ca/handle/1807/44108

Damasio, A. R. 1989. The brain binds entities and events by multiregional activation from convergence zones. Neural Computation, 1(1), 123–132.

Edwards, A. D. N. 2010. Auditory display in assistive technology. In T. Hermann & A. Hunt (Eds.), The Sonification Handbook (pp. 431–453). Berlin: Logos Verlag.

Gibson, J. J. 1986. The ecological approach to visual perception. Hillsdale, NJ: Lawrence Erlbaum.

Goodman, N. 1968. Languages of art: An approach to a theory of symbols. Indianapolis, IN: Bobbs-Merrill Company.

Kosslyn, S. M., Ganis, G., & Thompson, W. L. 2001. Neural foundations of imagery. Nature Reviews Neuroscience, 2(9), 635–642. doi:10.1038/35090055

Larkin, J. H., & Simon, H. A. 1987. Why a diagram is (sometimes) worth ten thousand words. Cognitive Science, 11, 65–99. doi:10.1111/j.1551-6708.1987.tb00863.x

Mandler, J. M. 2006. Categorization, development of. In Encyclopedia of Cognitive Science. doi:10.1002/0470018860.s00516

Nasir, T., & Roberts, J. C. 2007. Sonification of spatial data. In 13th International Conference on Auditory Display (ICAD 2007) (pp. 112–119). ICAD.

Palmer, S. E. 1978. Fundamental aspects of cognitive representation. In E. Rosch & B. B. Lloyd (Eds.), Cognition and categorization (pp. 259–303). Hillsdale, NJ: Lawrence Erlbaum Associates.

Pylyshyn, Z. W. 1973. What the mind’s eye tells the mind’s brain: A critique of mental imagery. Psychological Bulletin, 80(1), 1.

Russell, B. 1923. Vagueness. Australasian Journal of Psychology and Philosophy, 1(2), 84–92. doi:10.1080/00048402308540623

Shimojima, A. 1999. The graphic-linguistic distinction: Exploring alternatives. Artificial Intelligence Review, 13(4), 313–335.

Simmons, W. K., & Barsalou, L. W. 2003. The similarity-in-topography principle: Reconciling theories of conceptual deficits. Cognitive Neuropsychology, 20, 451–486.

Spence, C. 2011. Crossmodal correspondences: A tutorial review. Attention, Perception, & Psychophysics, 73(4), 971–995. doi:10.3758/s13414-010-0073-7

Zhao, H., Plaisant, C., Shneiderman, B., & Lazar, J. 2008. Data sonification for users with visual impairment: A case study with georeferenced data. ACM Transactions on Computer-Human Interaction (TOCHI), 15(1), 4.