<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">A new Molyneux&apos;s problem: Sounds, shapes and arbitrary crossmodal correspondences</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Ophelia</forename><surname>Deroy</surname></persName>
							<affiliation key="aff0">
								<orgName type="department" key="dep1">Centre for the Study of the Senses</orgName>
								<orgName type="department" key="dep2">Institute of Philosophy</orgName>
								<orgName type="institution">University of London</orgName>
								<address>
									<country key="GB">UK</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Malika</forename><surname>Auvray</surname></persName>
							<affiliation key="aff1">
								<orgName type="laboratory">LIMSI</orgName>
								<orgName type="institution">CNRS</orgName>
								<address>
									<settlement>Orsay</settlement>
									<country key="FR">France</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">A new Molyneux&apos;s problem: Sounds, shapes and arbitrary crossmodal correspondences</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">B64459436A6481D547206CFDB5D5195F</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T09:27+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Crossmodal correspondences</term>
					<term>Audition</term>
					<term>Touch</term>
					<term>Molyneux&apos;s problem</term>
					<term>Amodal invariants</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Several studies in the cognitive sciences have highlighted the existence of privileged and universal psychological associations between shape attributes, such as angularity, and auditory dimensions, such as pitch. These results add a new puzzle to the list of arbitrary-looking crossmodal matching tendencies whose origin is hard to explain. The puzzle is all the more general in the case of shape because shape-sound correspondences have a wide set of documented effects on perception and behaviour: Sounds can, for instance, influence the way a certain shape is perceived <ref type="bibr" target="#b14">(Sweeny et al., 2012)</ref>. In this paper, we suggest that the study of these crossmodal correspondences can be related to the more classical cases of crossmodal transfer of shape between vision and touch documented as part of Molyneux's question, and we reveal the role that movement plays as an amodal invariant in explaining the variety of multimodal associations around shape.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Introduction: A contemporary version of Molyneux's problem</head><p>How do shapes sound? The question does not seem to make sense metaphysically: Shapes are not endowed with auditory properties. In addition, similarities or differences in shape do not directly correlate with differences in sound, given that crucial factors such as density, size, and material properties will make similarly shaped objects sound very different when they are struck in the same way. For instance, a small dense sphere might make the same sound as a bigger, less dense cylinder when both are struck in a similar way; and the rich repertoire of drums should convince us that shape is not all that matters in determining how objects sound.</p><p>If the question 'how do shapes sound?' needs to be dismissed, then, a milder version of it might be more resistant: Supposing that shapes had sounds, what would those sounds be? Surprisingly, several studies in the cognitive sciences have highlighted convergent and stable responses to this question, showing the existence of privileged psychological associations between shape attributes and auditory dimensions, such as pitch. When asked which of two shapes, one rounded and the other angular, should be called 'Takete' and which should be called 'Maluma', most participants answer that the angular shape should be 'Takete' and the rounded one 'Maluma' <ref type="bibr">(Köhler, 1929</ref><ref type="bibr">, 1947</ref>; see also <ref type="bibr">Ramachandran &amp; Hubbard, 2001a</ref> and figure <ref type="figure">1</ref>).</p><p>Figure <ref type="figure">1</ref>. 
Three examples of crossmodal correspondences, documented between (a) sounds and size by <ref type="bibr" target="#b11">Sapir (1929)</ref>; (b) sounds and shape (angularity) by <ref type="bibr">Köhler (1929</ref><ref type="bibr" target="#b4">, 1947)</ref> and <ref type="bibr">Ramachandran &amp; Hubbard (2001)</ref>; and (c) sounds and shape (aspect ratio) by <ref type="bibr" target="#b14">Sweeny et al. (2012)</ref>. This crossmodal association between shapes and sounds might look surprising at first, but a body of evidence shows it to be present across cultures <ref type="bibr" target="#b1">(Bremner et al., 2013)</ref> and from an early age (i.e., four months, see <ref type="bibr">Ozturk et al., 2012</ref>; see also <ref type="bibr" target="#b7">Maurer et al., 2006</ref>, for evidence in 2- to 2.5-year-olds). While neurological investigations are beginning to unveil a specific pattern of activity in the superior / intraparietal regions as well as in frontal areas corresponding to the shape-sound associations <ref type="bibr" target="#b4">(Kovic et al., 2009;</ref><ref type="bibr">Peiffer-Smadja, 2010</ref>; see also <ref type="bibr">Bien et al., 2012, for an EEG/TMS study, and</ref> <ref type="bibr" target="#b11">Sadaghiani et al., 2009</ref>, for an fMRI study of related arbitrary audio-visual correspondences), associations between shapes and sounds are absent in individuals with damage to the angular gyrus <ref type="bibr" target="#b11">(Ramachandran &amp; Hubbard, 2001b)</ref>, suggesting that this is a robust neuropsychological phenomenon.</p><p>What's more, shape-sound correspondences have recently been shown to have behavioural consequences, as the visual perception of briefly presented shapes can be affected by certain types of sounds <ref type="bibr" target="#b14">(Sweeny et al., 2012;</ref> see also Spence &amp; Deroy, 2012a, for a discussion). 
Sweeny and his colleagues have indeed shown that oval shapes, whose aspect ratio (relating width to height) varied on a trial-by-trial basis, were rated as looking wider when a /woo/ sound was presented at the same time, and as looking taller when a /wee/ sound was presented instead. By contrast, the perceived shape was not affected by other natural sounds such as bird or engine sounds, showing that a specific crossmodal effect was at play between these sounds and these shapes. On the one hand, these findings add to a growing body of evidence demonstrating that audiovisual correspondences can have perceptual (as well as decisional) effects (see <ref type="bibr" target="#b14">Parise &amp; Spence, 2012;</ref><ref type="bibr" target="#b1">Deroy &amp; Spence, 2013</ref>, for a review). On the other hand, the results concerning sound-shape correspondences add a new puzzle to the list of arbitrary-looking crossmodal matching tendencies whose origin is hard to explain.</p><p>The puzzle is all the more general in the case of shape because the shape-sound correspondences have a wide set of documented effects and applications. Besides the aforementioned bias in shape perception, they have been shown to facilitate language learning <ref type="bibr" target="#b3">(Imai et al., 2008)</ref> and to be exploited in various audio-visual mapping technologies, such as music visualization software representing sounds as shapes or sensory substitution devices encoding shapes as sounds (see <ref type="bibr" target="#b1">Deroy &amp; Auvray, 2012)</ref>.</p><p>In the present paper, we suggest that the study of these multimodal associations surrounding shape can be related to the more classical cases of crossmodal transfer of shape between vision and touch documented as part of Molyneux's question (part 2). 
We review the dominant explanations offered for shape-sound correspondences, in terms of conceptual mediation (part 3) and of innate hyper-connectivity that perceptual learning fails to eliminate (part 4), before arguing that the hypotheses of associative learning and common neurological representations, proposed to explain the tactile-visual transfer of shape in Molyneux's problem, can also explain the shape-sound crossmodal matchings. In conclusion, we stress that the hypotheses currently investigated for shape matchings in touch and vision benefit from being extended to the more arbitrary-looking cases of matching shapes between audition and vision, thereby stressing the multimodal dimension of shape.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">A new Molyneux problem</head><p>Arbitrary-looking crossmodal matchings, as they are called <ref type="bibr" target="#b6">(Maurer &amp; Mondloch, 2005;</ref><ref type="bibr">Spence &amp; Deroy, 2012b)</ref>, can be defined as tendencies to associate distinct sensory features that do not obviously co-occur in experience or in the environment. For instance, moving away from sound-shape pairings for a moment, the tendency to pair higher-pitched sounds with brighter visual surfaces has also been shown to be present in adults <ref type="bibr" target="#b5">(Marks, 1974)</ref> and in infants <ref type="bibr" target="#b7">(Maurer et al., 2006)</ref>. So has the tendency to match higher-frequency sounds with higher visual locations (e.g., Evans &amp; Treisman, 2010; see Spence, 2011, for a review). These pairings occur although brighter objects and animals do not (at least straightforwardly) emit higher-pitched sounds than their darker counterparts, and although higher-pitched sounds do not regularly come from higher locations in space. The same lack of environmental grounding holds for the correspondence between shapes and sounds: Unless it should turn out that angular objects give rise to sounds that are relevantly different from rounded objects when, for example, they are explored haptically (e.g., <ref type="bibr">Guzman-Martinez et al., 2012)</ref>, there seems to be no straightforward environmental correlation between the shapes and sounds of objects either.</p><p>Crossmodal correspondences between sounds and shapes (or between pitch, brightness and elevation) are difficult to square with the currently popular view that crossmodal associations need to be learned from the natural multisensory statistics of the environment (see <ref type="bibr">Spence, 2011)</ref>. Their origin therefore prompts a series of questions. How do such crossmodal correspondences come to be present in humans and other animals? 
Do they have any ecological value? Determining whether these sound-shape associations are innate <ref type="bibr">(Ludwig et al., 2011;</ref><ref type="bibr" target="#b6">Maurer &amp; Mondloch, 2005;</ref><ref type="bibr" target="#b7">Maurer et al., 2012)</ref> or acquired, and, if acquired, how they are acquired (see <ref type="bibr" target="#b5">Martino &amp; Marks, 1999;</ref><ref type="bibr">Spence, 2011;</ref><ref type="bibr">Walker et al., in press</ref>), raises, as we shall see, a new Molyneux's problem, which teaches us new lessons about the multimodal nature of shape.</p><p>The core of Molyneux's problem, raised initially by Molyneux back in the 17th century, in the heat of the rationalist-empiricist controversies (Locke, 1690; see also <ref type="bibr" target="#b9">Morgan, 1977)</ref>, is still very much relevant today (e.g., <ref type="bibr">Held et al., 2011</ref><ref type="bibr" target="#b14">, Streri, 2012)</ref>. The question is to determine whether the crossmodal matching observed between felt and seen shapes at a very early age is acquired through exposure and associative learning, or whether it pre-exists exposure instead. To put it in a philosophical way, the question consists in deciding whether the crossmodal matching of shapes is a priori or a posteriori. To put it in a psychological way: is the tactile-visual connection for shapes innate / hardwired, or acquired?</p><p>All past and current replies to Molyneux's problem have been framed on the basis that the matching between tactile and visual shapes targets one and the same environmental property (that is, shape is viewed as an objective or primary quality. Note that <ref type="bibr">Berkeley (1948)</ref> is one of the few philosophers who seem to have accepted that tactile shapes and visual shapes can constitute different objective properties). 
This objective grounding is what gives the crossmodal matching of tactile shapes and visual shapes a form of necessity and rationality of interest to philosophers. Now, necessity, rationality, and objectivity are precisely what become problematic when we turn to arbitrary crossmodal matchings between sounds and shapes, as these obviously do not target one and the same environmental feature. Certain shapes do not necessarily go with certain sounds. For instance, associating the sound 'Bouba' with a rounded rather than an angular shape looks irrational, and this association does not seem to inform us about an objective regularity. So why would we pair sounds with shapes? Due to these key differences, the mainstream proposals developed for the Molyneux-type crossmodal associations have not been thought to be relevant to this question.</p><p>The crossmodal correspondences between shapes and sounds are called arbitrary precisely because scientists have had a hard time pinning down a regular environmental correlation between the property of being of a certain shape and the property of emitting a sound of a certain pitch. Even harder to explain are the crossmodal correspondences between shapes and flavours <ref type="bibr" target="#b1">(Deroy &amp; Valentin, 2011)</ref> or between symbolic shapes and smells <ref type="bibr">(Seo et al., 2010)</ref>, which also do not receive a straightforward explanation as internalised statistics of the environment. These other matchings might deserve a separate treatment, but they stress the crux of the problem: If shapes and the other properties are not necessarily or regularly correlated, how could these matchings be learned by association? 
And if they are not learned by exposure, how could one make sense of the fact that we have evolved to have hard-wired or a priori connections between the representations of shapes and these apparently unrelated properties in our mind / brain?</p><p>The competing options that have recently been proposed to explain arbitrary crossmodal matchings between shapes and sounds, as we shall see below, recycle the ones that were once proposed for Molyneux's case but were subsequently rejected. On the one hand is the idea, initially proposed for Molyneux's cases (see Locke, 1690; or <ref type="bibr" target="#b9">Morgan, 1977</ref>, for a review), that matchings across sensory modalities take place through an association made at the level of ideas or concepts; on the other hand is the idea that they are fully present at birth (i.e., that they are a priori; see Kant, 1998).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Sound-shape correspondences as conceptually mediated</head><p>The idea that crossmodal matchings require a conceptual mediation is very much the way Molyneux's cases were discussed in the time of Locke and Berkeley, when the connection was supposed to be established between the 'idea' of shape prompted by vision and the 'idea' of shape prompted by touch. Conceptual mediation is, however, no longer considered an appropriate explanation of early crossmodal matchings of visual and tactile shapes. Yet in the case of arbitrary crossmodal matchings, this hypothesis is pursued by a growing number of researchers: Correspondences between pitch, brightness, and angularity, for instance, have been explained by the cognitive capacity that observers have to represent various sensory features, or dimensions, on a common scale <ref type="bibr" target="#b5">(Martino &amp; Marks, 1999;</ref><ref type="bibr" target="#b14">Walker et al., 2012)</ref>, to metaphorically map one conceptual domain onto another <ref type="bibr" target="#b13">(Shen, 1997;</ref><ref type="bibr">Shen &amp; Eisemann, 2008;</ref><ref type="bibr" target="#b14">Williams, 1976)</ref>, or to reason analogically <ref type="bibr">(Premack &amp; Premack, 2003</ref>; see also <ref type="bibr" target="#b1">Deroy &amp; Spence, 2013;</ref><ref type="bibr">Spence, 2011</ref>, for a discussion). Now, the main problem for these conceptual solutions lies in explaining the presence of crossmodal matchings at a very early age (e.g., as early as 4 months for shapes and sounds, see <ref type="bibr">Ozturk et al., 2012)</ref> and the difference between the neurological activations observed for crossmodal correspondences and those for semantic or analogical reasoning <ref type="bibr">(Sadaghiani et al., 2009)</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Shape-sound correspondences as remnants of non-functional innate connections</head><p>The nativist idea that crossmodal matchings could be present from birth was abandoned long ago - at least in the case of non-arbitrary matchings - in favour of the less radically nativist claim that they come from innate learning mechanisms guided by amodal or redundant representations of time, space, and intensity in the brain (see Bahrick &amp; Lickliter, 2012, for a review). The strong nativist option is, however, still very much present when it comes to explaining arbitrary crossmodal correspondences, as shown by the growing popularity of the so-called 'neonatal synaesthesia hypothesis' (see <ref type="bibr">Maurer et al., 2012, for a review)</ref>. The idea here is that these correspondences come from a lack of differentiation of the infant's perceptual apparatus, and persist into adulthood due to a lack of pruning of, or inhibitory feedback on, some of these non-functional connections <ref type="bibr">(Maurer &amp; Mondloch, 1995;</ref><ref type="bibr" target="#b7">Maurer et al., 2012)</ref>. Now, there are good reasons not to go back to strong nativist hypotheses, even to explain the arbitrary crossmodal matchings evidenced in infants. 
The putative functional role of arbitrary crossmodal matchings as coupling priors in multisensory learning <ref type="bibr" target="#b1">(Ernst, 2007;</ref><ref type="bibr">Spence, 2011)</ref> and multisensory integration <ref type="bibr" target="#b14">(Parise &amp; Spence, 2012)</ref>, or as a kind of crossmodal Gestalt grouping principle (namely, a kind of crossmodal grouping by similarity; see Spence, submitted), together with neurological differences <ref type="bibr" target="#b11">(Sadaghiani et al., 2009;</ref><ref type="bibr">Spence &amp; Parise, in press)</ref>, is sufficient to distinguish them from the non-functional associations that can exist in synaesthetes (no matter whether they are adults or children; see <ref type="bibr">Ward, 2012)</ref>. This adds to the fact that nativist explanations in general are now hard to support in the face of the demand that innate traits be traced back to their genetic encoding (a demand which is not easy to meet for most nativist hypotheses, see <ref type="bibr" target="#b5">Lewkowicz, 2011)</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Updating the associative learning and common coding hypotheses to explain sound-shape correspondences</head><p>In this section, we want to argue that the alternative between explaining arbitrary crossmodal correspondences by late conceptual mediation and taking them to be innate is wrongly limited. A first step here consists in stressing that explanations in terms of statistical learning and/or common neural coding have been too swiftly excluded.</p><p>The claim that pairings - between, for instance, angularity and high-pitched sounds - are not regularly experienced by infants is largely an ungrounded assumption. It rather appears to be the default conclusion once one cannot come up with a plausible environmental source for the correlation. It should be more thoroughly investigated by taking into account precise measurements of exposure. Audiovisual correspondences between shapes and sounds might also come from a specific domain, namely speech. The mouth movements observed when someone utters speech sounds like 'Takete' or 'wee' are more stretched (angular / narrow) than the wider, rounded movements observed when one utters 'Maluma' or 'woo', suggesting a regular correlation between pitch and shapes. This restores the plausibility of an associative-learning account, which is especially compatible with the idea that infants are particularly attentive to face / voice or mouth / sound pairings in the first months of their life (see the perceptual narrowing hypothesis, <ref type="bibr" target="#b5">Lewkowicz, 2002)</ref>.</p><p>The second assumption, that the neurological representation of visual brightness and auditory pitch cannot have anything in common, also appears to rely on a predetermined view of what the legitimate common amodal representations in the brain are (i.e., space and time, plus or minus number / magnitude and quantity / intensity; see <ref type="bibr" target="#b5">Marks, 1978)</ref>. 
This assumption does not consider other possibilities which are being investigated in recent work in cognitive neuroscience, namely that movement <ref type="bibr">(Held et al., 2011)</ref> and embodiment could act as common sensibles (note that movement was considered as such by Aristotle and Locke).</p><p>Once related to speech, the correspondences between sound and shape can also be explained not merely in terms of audiovisual associations, but also in terms of audiomotor associations, linking the sounds that one hears to the automatic articulatory movements generated when listening to speech <ref type="bibr">(Galantucci, Fowler, &amp; Turvey, 2006)</ref>. If the latter account were correct, this crossmodal correspondence would be embodied <ref type="bibr">(Pezzulo et al., 2011)</ref>, grounded in sensorimotor associations, rather than based on an external association between two sensory experiences whose resemblance would be processed in an amodal manner.</p><p>One way to distinguish between the statistical and embodied accounts here would be to test whether this correspondence exists only in cases or species where vocalising follows the 'Takete'-sharp mouth movement rule. Note that this can be contrasted with the sound-size correspondence, which can be found across species independently of their rules of vocalization (see <ref type="bibr">Ludwig et al., 2011)</ref>.</p><p>It will further be interesting to determine whether the sound-shape and sound-size crossmodal correspondences are related, and whether the latter has multiple origins (perhaps originating in both external and embodied underlying factors). Understanding the role of embodied vs. 
external associations would certainly help to link Sweeny et al.'s <ref type="bibr" target="#b2">(2012)</ref> results to others showing that the shapes we see -and respond to -can also influence the pitch (or fundamental frequency) of the speech sounds we utter <ref type="bibr" target="#b9">(Parise &amp; Pavani, 2011)</ref> or that making a mouth movement (consistent with 'ba' or 'da') can give rise to a McGurk effect <ref type="bibr">(McGurk &amp; MacDonald, 1976)</ref> when listening to speech sounds, just as when actually viewing someone else's mouth movements uttering those sounds (see <ref type="bibr" target="#b11">Sams, Mottonen, &amp; Sihvonen, 2005)</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>In this concluding section, we want to insist on the importance of focusing on shape-sound correspondences when thinking about shapes, especially in a multidisciplinary approach. From a global / philosophical perspective, these correspondences encourage a broadening of the investigation of Molyneux's problem, initially focused on tactile and visual shapes, to more contingent associations which can come to matter as much for linguistic and perceptual behaviour. Interestingly, the associative and commonality hypotheses framed here to account for correspondences between shapes and auditory attributes are also currently being pursued for 'non-arbitrary' matchings of visual and tactile shapes, raising important questions as to how these two kinds of shape matching might interact, and how situations of single vs. distinct properties can come to differ.</p><p>From a more specific and empirical perspective, crossmodal correspondences between shapes and sounds have a role in language acquisition and linguistic intuitions <ref type="bibr" target="#b3">(Imai et al., 2008)</ref>. They can also explain the use of crossmodal adjectives to talk about sounds (e.g., sharp sounds). 
But mostly, as we want to highlight, they show all their importance when thinking about the optimization of auditory-visual translations, be it the 'auditory' translation of visual shapes - as in sensory substitution devices which aim to compensate for the loss of sight through a coding / decoding scheme, such as the vOICe <ref type="bibr" target="#b5">(Meijer, 1992)</ref> or the Vibe <ref type="bibr" target="#b2">(Hanneton et al., 2010</ref>; see also <ref type="bibr" target="#b0">Auvray &amp; Myin, 2009</ref>, for a review) - or the visual translation of sounds, as in musical composition software.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="2,135.37,216.70,327.51,245.64" type="bitmap" /></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">Correspondence to: Ophelia Deroy, Centre for the Study of the Senses, Institute of Philosophy, University of London, Malet Street, London, UK. ophelia.deroy@sas.ac.uk</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">The sound of size: Crossmodal binding in pitch-size synesthesia: A combined TMS, EEG, and psychophysics study</title>
		<author>
			<persName><forename type="first">M</forename><surname>Auvray</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Myin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">E</forename><surname>Bahrick</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Lickliter</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The Works of George Berkeley</title>
				<editor>
			<persName><forename type="middle">A A</forename><surname>Bishop Of Cloyne</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">T</forename><forename type="middle">E</forename><surname>Luce</surname></persName>
		</editor>
		<editor>
			<persName><surname>Jessop</surname></persName>
		</editor>
		<meeting><address><addrLine>Oxford, UK; Berkeley, G.; London</addrLine></address></meeting>
		<imprint>
			<publisher>Thomas Nelson and Sons</publisher>
			<date type="published" when="1948">2009. 2012. 1948-1957. 2012</date>
			<biblScope unit="volume">33</biblScope>
			<biblScope unit="page" from="663" to="672" />
		</imprint>
	</monogr>
	<note>NeuroImage</note>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Bouba and Kiki in Namibia? Western shape-symbolism does not extend to taste in a remote population</title>
		<author>
			<persName><forename type="first">A</forename><surname>Bremner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Caparos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Davidoff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>De Fockert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Linnell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Spence</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Deroy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Auvray</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Deroy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Crisinel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">;</forename><forename type="middle">O</forename><surname>Deroy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Valentin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">O</forename><surname>Ernst</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">K</forename><surname>Evans</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Treisman</surname></persName>
		</author>
		<author>
			<persName><surname>Galantucci</surname></persName>
		</author>
		<idno type="DOI">10.3389/fpsyg.2012.00457</idno>
		<ptr target="http://dx.doi.org/10.3758/s13423-013-0387-2" />
	</analytic>
	<monogr>
		<title level="m">Crossmodal correspondences between odours and contingent features: Odours, musical notes, and arbitrary shapes</title>
				<editor>
			<persName><forename type="first">J</forename><surname>Mossbridge</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Suzuki</surname></persName>
		</editor>
		<imprint>
			<publisher>Psychonomic Bulletin</publisher>
			<date type="published" when="2006">2013. 2012. 2013. 2011. 2007. 2010. 2006</date>
			<biblScope unit="volume">126</biblScope>
			<biblScope unit="page" from="361" to="377" />
		</imprint>
	</monogr>
	<note>Psychonomic Bulletin &amp; Review</note>
</biblStruct>


				</listBibl>
			</div>
		</back>
	</text>
</TEI>
