<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>Towards Grounding Conceptual Spaces in Neural Representations</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Lucas Bechberger</string-name>
          <email>lucas.bechberger@uni-osnabrueck.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kai-Uwe Kühnberger</string-name>
          <email>kai-uwe.kuehnberger@uni-osnabrueck.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Institute of Cognitive Science, Osnabrück University</institution>
          ,
          <addr-line>Osnabrück</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2017</year>
      </pub-date>
      <abstract>
        <p>The highly influential framework of conceptual spaces provides a geometric way of representing knowledge. It aims at bridging the gap between symbolic and subsymbolic processing. Instances are represented by points in a high-dimensional space and concepts are represented by convex regions in this space. In this paper, we present our approach towards grounding the dimensions of a conceptual space in latent spaces learned by an InfoGAN from unlabeled data.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        The cognitive framework of conceptual spaces [
        <xref ref-type="bibr" rid="ref14 ref15">14, 15</xref>
        ] attempts to bridge the
gap between symbolic and subsymbolic AI by proposing an intermediate
conceptual layer based on geometric representations. A conceptual space is a
high-dimensional space spanned by a number of quality dimensions representing
interpretable features. Convex regions in this space correspond to concepts. Abstract
symbols can be grounded by linking them to concepts in a conceptual space
whose dimensions are based on subsymbolic representations.
      </p>
      <p>
        The framework of conceptual spaces has been highly influential in the last
15 years within cognitive science and cognitive linguistics [
        <xref ref-type="bibr" rid="ref11 ref13 ref20">11, 13, 20</xref>
        ]. It has also
sparked considerable research in various subfields of artificial intelligence,
ranging from robotics and computer vision [<xref ref-type="bibr" rid="ref5 ref6 ref7">5-7</xref>] through the semantic web and ontology
integration [
        <xref ref-type="bibr" rid="ref1 ref10">1, 10</xref>
        ] to plausible reasoning [
        <xref ref-type="bibr" rid="ref19 ref9">9, 19</xref>
        ].
      </p>
      <p>Although this framework provides means for representing concepts, it does
not consider the question of how these concepts can be learned from mostly
unlabeled data. Moreover, the framework assumes that the dimensions spanning
the conceptual space are already given a priori. In practical applications of the
framework, they thus often need to be handcrafted by a human expert.</p>
      <p>
        In this paper, we argue that by using neural networks, one can automatically
extract the dimensions of a conceptual space from unlabeled data. We propose
that latent spaces learned by an InfoGAN [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] (a special class of Generative
Adversarial Networks [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]) can serve as domains in the conceptual spaces
framework. We further propose to use a clustering algorithm in these latent spaces in
order to discover meaningful concepts.
      </p>
      <p>Copyright © 2017 for this paper by its authors. Copying permitted for private and academic purposes.</p>
      <p>The remainder of this paper is structured as follows: Section 2 presents the
framework of conceptual spaces and Section 3 introduces the InfoGAN
framework. In Section 4, we present our idea of combining these two frameworks.</p>
      <p>Section 5 gives an illustrative example and Section 6 concludes the paper.</p>
    </sec>
    <sec id="sec-2">
      <title>Conceptual Spaces</title>
      <p>
        A conceptual space [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] is a high-dimensional space spanned by so-called
"quality dimensions". Each of these dimensions represents an interpretable way in
which two stimuli can be judged to be similar or different. Examples for quality
dimensions include temperature, weight, time, pitch, and hue. A domain is a set
of dimensions that inherently belong together. Different perceptual modalities
(like color, shape, or taste) are represented by different domains. The color
domain, for instance, can be represented by the three dimensions hue, saturation,
and brightness.<sup>1</sup> Distance within a domain is measured by the Euclidean metric.
      </p>
      <p>The overall conceptual space is defined as the product space of all dimensions.
Distance within the overall conceptual space is measured by the Manhattan
metric over the intra-domain distances. The similarity of two points in a conceptual
space is inversely related to their distance: the closer two instances are in the
conceptual space, the more similar they are considered to be.</p>
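      <p>This distance measure can be made concrete in a few lines of code. The following sketch is purely illustrative: the domain layout, the coordinate values, and the exponentially decaying similarity function (one common choice in the conceptual-spaces literature) are all assumptions, not part of the framework's formal definition:</p>
      <preformat>
```python
import numpy as np

# Toy conceptual space with two domains: color (3 dims) and shape (2 dims).
# All coordinates below are made up for illustration.
domains = {"color": slice(0, 3), "shape": slice(3, 5)}

def distance(x, y):
    # Euclidean metric inside each domain ...
    per_domain = [np.linalg.norm(x[s] - y[s]) for s in domains.values()]
    # ... combined across domains with the Manhattan metric.
    return float(sum(per_domain))

def similarity(x, y, sensitivity=1.0):
    # Similarity decays exponentially with distance.
    return float(np.exp(-sensitivity * distance(x, y)))

x = np.array([0.9, 0.2, 0.1, 0.5, 0.5])
y = np.array([0.8, 0.3, 0.1, 0.4, 0.6])
print(round(distance(x, y), 3), round(similarity(x, y), 3))
```
      </preformat>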
      <p>The framework distinguishes properties like "red", "round", and "sweet"
from full-fledged concepts like "apple" or "dog": Properties are represented as
regions within individual domains (e.g., color, shape, taste), whereas full-fledged
concepts span multiple domains. Reasoning within a conceptual space can be
done based on geometric relationships (e.g., betweenness and similarity) and
geometric operations (e.g., intersection or projection).</p>
      <p>
        Recently, Balkenius &amp; Gärdenfors [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] have argued that population coding in
the human brain can give rise to conceptual spaces. They discuss the connection
between neural and conceptual representations from a neuroscience/psychology
perspective, whereas we take a machine learning approach in this paper.
      </p>
    </sec>
    <sec id="sec-3">
      <title>Representation Learning with InfoGAN</title>
      <p>
        Within the research area of neural networks, there has been substantial
work on learning compressed representations of a given feature space. Bengio et
al. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] provide a thorough overview of different approaches in the representation
learning area. They define representation learning as "learning representations
of the data that make it easier to extract useful information when building
classifiers or other predictors". We will focus our discussion here on one specific
approach that is particularly fitting to our proposal, namely InfoGAN [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
InfoGAN is an extension of the GAN (Generative Adversarial Network) framework
[
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] which has been applied to a variety of problems (e.g., [<xref ref-type="bibr" rid="ref12">12</xref>, <xref ref-type="bibr" rid="ref18">18</xref>, <xref ref-type="bibr" rid="ref21 ref22 ref23">21-23</xref>]). We
first describe the original GAN framework before moving on to InfoGAN.
      </p>
      <sec id="sec-3-1">
        <p>(Footnote 1: Of course, one can also use other color spaces, e.g., the CIE L*a*b* space.)</p>
        <p>The GAN framework (depicted in the left part of Figure 1) consists of two
networks, the generator and the discriminator. The generator is fed with a
low-dimensional vector of noise values. Its task is to create high-dimensional data
vectors that have a similar distribution as real data vectors taken from an
unlabeled training set. The discriminator receives a data vector that was either
created by the generator or taken from the training set. Its task is to distinguish
real inputs from generated inputs. Although the discriminator is trained on a
classification task, the overall system works in an unsupervised way. The overall
architecture can be interpreted as a two-player game: The generator tries to fool
the discriminator by creating realistic inputs and the discriminator tries to avoid
being fooled by the generator. When the GAN framework converges, the
discriminator is expected to make predictions only at chance level and the generator is
expected to create realistic data vectors. Although the overall framework works
quite well, the dimensions of the input noise vector are usually not interpretable.</p>
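        <p>This two-player game can be sketched with a deliberately tiny numeric example. Everything below is our own toy assumption, not the original GAN setup: one-dimensional Gaussian data, a linear generator, a logistic discriminator, and hand-derived gradients; a realistic GAN would use deep networks on both sides.</p>
        <preformat>
```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Real data the generator should imitate: a 1-D Gaussian around 3.0.
def real_batch(n):
    return rng.normal(3.0, 0.5, n)

# Generator g(z) = a*z + b, discriminator D(x) = sigmoid(w*x + c);
# two parameters each, so the hand-derived gradients fit in a few lines.
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr = 0.05

for _ in range(2000):
    z = rng.normal(0.0, 1.0, 32)
    xr, xf = real_batch(32), a * z + b
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    w += lr * np.mean((1 - dr) * xr - df * xf)
    c += lr * np.mean((1 - dr) - df)

    # Generator step: ascend the non-saturating objective log D(fake).
    df = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - df) * w * z)
    b += lr * np.mean((1 - df) * w)

fake = a * rng.normal(0.0, 1.0, 1000) + b
print(round(float(fake.mean()), 1))  # drifts towards the real mean of 3.0
```
        </preformat>
        <p>At convergence, the discriminator's gradients vanish because its predictions approach chance level, which is exactly the equilibrium described above.</p>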
        <p>
          Chen et al. [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] have extended the original framework by introducing latent
variables: In the InfoGAN framework (shown in the right part of Figure 1),
the generator receives an additional input vector. The entries of this vector are
values of latent random variables, selected based on some probability distribution
that was de ned a priori (e.g., uniform or Gaussian). The discriminator has the
additional task to reconstruct these latent variables.2 Chen et al. argue that
this ensures that the mutual information between the latent variable vector and
the generated data vector is high. They showed that after training an InfoGAN,
the latent variables tend to have an interpretable meaning. For instance, in
an experiment on the MNIST data set, the latent variables corresponded to
type of digit, digit rotation and stroke thickness. InfoGANs can thus provide a
bidirectional mapping between observable data vectors and interpretable latent
dimensions: One can both extract interpretable dimensions from a given data
vector and create a data vector from an interpretable latent representation.
2 This introduces a structure similar to an autoencoder (with the latent variables as
input/output and the generated data vector as hidden representation).
        </p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Using Representation Learning to Ground Domains</title>
      <p>For some domains of a conceptual space, a dimensional representation is
already available. For instance, the color domain can be represented by the
three-dimensional HSB space. For other domains, however, it is quite unclear how to
represent them based on a handful of dimensions. One prominent example is
the shape domain: To the best of our knowledge, there are no widely accepted
dimensional models for describing shapes.</p>
      <p>We propose to use the InfoGAN framework in order to learn such a
dimensional representation based on an unlabeled data set: Each of the latent
variables can be interpreted as one dimension of the given domain of interest. For
instance, the latent variables learned on a data set of shapes can be interpreted
as dimensions of the shape domain. Three important properties of domains in
a conceptual space are the following: interpretable dimensions, a distance-based
notion of similarity, and a geometric way of describing semantic betweenness. We
think that the latent space of an InfoGAN is a good candidate for representing
a domain of a conceptual space, because it fulfills all of the above requirements:</p>
      <p>
        As described before, Chen et al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] found that the individual latent
variables tend to have an interpretable meaning. Although this is only an empirical
observation, we expect it to generalize to other data sets and thus to other domains.
      </p>
      <p>
        Moreover, the smoothness assumption used in representation learning (cf. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]
and [16, Ch. 15]) states that points with small distance in the input space should
also have a small distance in the latent space. This means that a distance-based
notion of similarity in the latent space is meaningful.
      </p>
      <p>
        Finally, Radford et al. [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] found that linear interpolations between points
in the latent space of a GAN correspond to a meaningful "morph" between
generated images in the input space. This indicates that geometric betweenness in
the latent space can represent semantic betweenness.
      </p>
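      <p>The interpolation itself is straightforward; in the sketch below, the two latent codes are invented, and in an actual experiment each intermediate code would be passed through a trained generator to render the morph:</p>
      <preformat>
```python
import numpy as np

# Linear interpolation between two latent codes; in a trained GAN, each
# intermediate code would be fed to the generator to render the "morph".
def interpolate(z0, z1, steps=5):
    ts = np.linspace(0.0, 1.0, steps)
    return np.array([(1 - t) * z0 + t * z1 for t in ts])

z0, z1 = np.array([0.0, 1.0]), np.array([1.0, 0.0])
path = interpolate(z0, z1)
print(path[2])  # the midpoint lies geometrically between z0 and z1
```
      </preformat>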
      <p>There are two important hyperparameters to the approach of grounding
domains in InfoGANs: the number of latent variables (i.e., the dimensionality of
the learned domain) and the type of distribution used for the latent variables
(e.g., uniform vs. Gaussian). Note that one would probably aim for the
lowest-dimensional representation that still describes the domain sufficiently well.</p>
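      <p>Making these two hyperparameters explicit, a latent prior could be configured as follows; the function name and parameterization are our own illustration, not part of the InfoGAN reference implementation:</p>
      <preformat>
```python
import numpy as np

rng = np.random.default_rng(42)

# The two hyperparameters made explicit: the dimensionality of the learned
# domain (n_dims) and the prior over each latent variable.
def sample_latents(n_samples, n_dims, prior="uniform"):
    if prior == "uniform":
        return rng.uniform(-1.0, 1.0, (n_samples, n_dims))
    if prior == "gaussian":
        return rng.normal(0.0, 1.0, (n_samples, n_dims))
    raise ValueError("unknown prior: " + prior)

z = sample_latents(64, 3, prior="gaussian")  # e.g., a 3-dimensional shape domain
print(z.shape)
```
      </preformat>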
      <p>Finally, we would like to address a critical aspect of this proposal: How can
one make sure that the representation learned by the neural network only
represents information from the target domain (e.g., shape) and not anything related
to other domains (e.g., color)? In our opinion, there are two complementary
methods to "steer" the network towards the desired representation:</p>
      <p>The first option consists of selecting only such inputs for the training set that
do not exhibit major differences with respect to other domains. For instance, a
training set for the shape domain should only include images of shapes that have
the same color (e.g., black shapes on a white background). If there is only very small
variance in the data set with respect to other domains (e.g., color), the network
is quite unlikely to incorporate this information into its latent representation.</p>
      <p>The second option concerns modifications of the network's loss function: One
could for instance introduce an additional term into the loss function which
measures the correlation between the learned latent representation and dimensions
from other (already defined) domains. This would cause a stronger error signal if
the network starts to re-discover already known dimensions from other domains
and therefore drive the network away from learning redundant representations.</p>
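      <p>One hypothetical form of such a term is the mean squared cross-correlation between the latent codes and the known domain's dimensions; the function below and its synthetic data are a sketch of this idea, not a proposal the paper commits to:</p>
      <preformat>
```python
import numpy as np

# Hypothetical extra loss term: the mean squared cross-correlation between
# the learned latent codes and the dimensions of an already defined domain
# (e.g., HSB color). Minimizing it pushes the network away from
# re-discovering known dimensions; all data here is synthetic.
def correlation_penalty(latents, known_dims):
    z = (latents - latents.mean(0)) / latents.std(0)
    k = (known_dims - known_dims.mean(0)) / known_dims.std(0)
    corr = z.T @ k / len(z)          # cross-correlation matrix
    return float(np.mean(corr ** 2)) # high when latents mirror known dims

rng = np.random.default_rng(0)
color = rng.uniform(0, 1, (200, 3))                          # known color coordinates
redundant = color[:, :2] + 0.01 * rng.normal(size=(200, 2))  # re-discovers color
fresh = rng.uniform(0, 1, (200, 2))                          # unrelated latents
print(correlation_penalty(redundant, color) > correlation_penalty(fresh, color))
```
      </preformat>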
      <p>
        A simple proof-of-concept implementation for the shape domain could be
based on a data set of simple 2D shapes (circles, triangles, rectangles, etc.) in
various orientations and locations. For a more thorough experiment, one could
for instance use ShapeNet<sup>3</sup> [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], a database of over 50,000 3D models for more
than 50 categories of objects. One could render these 3D models from various
perspectives in order to get 2D inputs (for learning to represent 2D shapes) or
work on a voxelized 3D input (for learning representations of 3D shapes).
      </p>
    </sec>
    <sec id="sec-5">
      <title>An Illustrative Example</title>
      <p>Figure 2 illustrates a simplified example of our envisioned overall system. Here,
we consider only two domains: color and shape. Color can be represented by the
HSB space using the three dimensions hue, saturation, and brightness. This is
an example of a hard-coded domain. The representation of the shape domain,
however, needs to be learned. The artificial neural network depicted in Figure 2
corresponds to the discriminator of an InfoGAN trained on a data set of shapes.</p>
      <sec id="sec-5-1">
        <p>(Footnote 3: https://www.shapenet.org/)</p>
        <p>Let us consider two example concepts: The concept of an apple can be
described by the "red" region in the color domain and the "round" region in the
shape domain. The concept of a banana can be represented by the "yellow"
region in the color domain and the "cylindrical" region in the shape domain.</p>
        <p>If the system makes a new observation (e.g., an apple as depicted in Figure
2), it will convert this observation into a point in the conceptual space. For the
color domain, this is done by a hard-coded conversion to the HSB color space.
For the shape domain, the observation is fed into the discriminator and its latent
representation is extracted, resulting in the coordinates for the shape domain.
Now, in order to classify this observation, the system needs to check whether
the resulting data point is contained in any of the defined regions. If the data
point is an element of the apple region in both domains (which is the case in our
example), this observation should be classified as an apple. If the data point is
an element of the banana region, the object should be classified as a banana.</p>
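        <p>This classification step can be sketched as follows. As a simplification, each convex region is replaced by an axis-aligned box per domain; all regions and coordinates are invented, and the shape coordinates stand in for the InfoGAN's latent code:</p>
        <preformat>
```python
import numpy as np

# Simplified stand-in for convex regions: one axis-aligned box (low, high)
# per domain. All regions and coordinates are invented for illustration.
concepts = {
    "apple":  {"color": ([0.8, 0.0], [1.0, 0.3]), "shape": ([0.4], [0.6])},
    "banana": {"color": ([0.1, 0.6], [0.3, 0.9]), "shape": ([0.0], [0.2])},
}

def contains(box, point):
    low, high = np.asarray(box[0]), np.asarray(box[1])
    return bool(np.all(point >= low) and np.all(high >= point))

def classify(observation):
    # observation: dict mapping each domain name to coordinates in that domain
    for name, regions in concepts.items():
        if all(contains(regions[d], np.asarray(p)) for d, p in observation.items()):
            return name
    return None

obs = {"color": [0.9, 0.1], "shape": [0.5]}  # reddish and roundish
print(classify(obs))
```
        </preformat>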
        <p>Based on a new observation, the existing concepts can also be updated: If
the observation was classified as an apple, but it is not close to the center of the
apple region in one of the domains, this region might be enlarged or moved a bit,
such that the observed instance is better matched by the concept description.
If the observation does not match any of the given concepts at all, even a new
concept might be created. This means that concepts can not only be applied
for classification, but they can also be learned and updated. Note that this can
take place without explicit label information, i.e., in an unsupervised way. Our
overall research goal is to develop a clustering algorithm that can take care of
incrementally updating the regions in such a conceptual space.</p>
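        <p>The simplest conceivable update rule, again using box-shaped regions as a stand-in for convex regions, enlarges a region just enough to cover an observation that was classified into the concept but fell outside the box; the actual clustering algorithm we aim for would be considerably more sophisticated:</p>
        <preformat>
```python
import numpy as np

# Minimal sketch of the envisioned update: if an observation belongs to a
# concept but falls outside one of its (here box-shaped) regions, enlarge
# the box just enough to cover the new point.
def update_box(low, high, point):
    low, high, point = (np.asarray(v, dtype=float) for v in (low, high, point))
    return np.minimum(low, point), np.maximum(high, point)

low, high = [0.4], [0.6]
low, high = update_box(low, high, [0.7])  # a slightly unusual, elongated shape
print(low, high)  # the box grows to cover the new observation
```
        </preformat>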
        <p>Please note that the updates considered above only concern the connections
between the conceptual and the symbolic layer. The connections between the
subsymbolic and the conceptual layer remain fixed. The neural network thus
only serves as a preprocessing step in our approach: It is trained before the
overall system is used and remains unchanged afterwards. Simultaneous updates
of both the neural network and the concept description might be desirable, but
would probably introduce a great amount of additional complexity.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Conclusion and Future Work</title>
      <p>In this paper, we outlined how neural representations can be used to ground
the domains of a conceptual space in perception. This is especially useful for
domains like shape, where handcrafting a dimensional representation is difficult.
We argued that the latent representations learned by an InfoGAN have suitable
properties for being combined with the conceptual spaces framework. In future
work, we will implement the proposed idea by giving a neural grounding to the
domain of simple 2D shapes. Furthermore, we will devise a clustering algorithm
for discovering and updating conceptual representations in a conceptual space.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>1. Benjamin Adams and Martin Raubal. Conceptual Space Markup Language (CSML): Towards the Cognitive Semantic Web. 2009 IEEE International Conference on Semantic Computing, Sep 2009.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>2. Christian Balkenius and Peter Gärdenfors. Spaces in the Brain: From Neurons to Meanings. Frontiers in Psychology, 7:1820, 2016.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>3. Y. Bengio, A. Courville, and P. Vincent. Representation Learning: A Review and New Perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798-1828, Aug 2013.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>4. Angel X. Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu. ShapeNet: An Information-Rich 3D Model Repository. December 2015.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>5. Antonio Chella, Haris Dindo, and Ignazio Infantino. Anchoring by Imitation Learning in Conceptual Spaces. AI*IA 2005: Advances in Artificial Intelligence, pages 495-506, 2005.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>6. Antonio Chella, Marcello Frixione, and Salvatore Gaglio. Conceptual Spaces for Computer Vision Representations. Artificial Intelligence Review, 16(2):137-152, 2001.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>7. Antonio Chella, Marcello Frixione, and Salvatore Gaglio. Anchoring Symbols to Conceptual Spaces: The Case of Dynamic Scenarios. Robotics and Autonomous Systems, 43(2-3):175-188, May 2003.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>8. Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets. In Advances in Neural Information Processing Systems, 2016.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>9. Joaquín Derrac and Steven Schockaert. Inducing Semantic Relations from Conceptual Spaces: A Data-Driven Approach to Plausible Reasoning. Artificial Intelligence, 228:66-94, Nov 2015.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>10. Stefan Dietze and John Domingue. Exploiting Conceptual Spaces for Ontology Integration. In Data Integration Through Semantic Technology (DIST2008) Workshop at 3rd Asian Semantic Web Conference (ASWC 2008), 2008.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>11. Igor Douven, Lieven Decock, Richard Dietz, and Paul Égré. Vagueness: A Conceptual Spaces Approach. Journal of Philosophical Logic, 42(1):137-160, Nov 2011.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Ishan</surname>
            <given-names>Durugkar</given-names>
          </string-name>
          , Ian Gemp, and
          <string-name>
            <given-names>Sridhar</given-names>
            <surname>Mahadevan</surname>
          </string-name>
          .
          <article-title>Generative Multi-Adversarial Networks</article-title>
          .
          <source>November</source>
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Sandro R. Fiorini</surname>
          </string-name>
          , Peter Gardenfors, and Mara Abel.
          <article-title>Representing Part-Whole Relations in Conceptual Spaces</article-title>
          .
          <source>Cognitive Processing</source>
          ,
          <volume>15</volume>
          (
          <issue>2</issue>
          ):
          <volume>127</volume>
          {
          <fpage>142</fpage>
          ,
          <string-name>
            <surname>Oct</surname>
          </string-name>
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>14. Peter Gärdenfors. Conceptual Spaces: The Geometry of Thought. MIT Press, 2000.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>15. Peter Gärdenfors. The Geometry of Meaning: Semantics Based on Conceptual Spaces. MIT Press, 2014.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>16. Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016. http://www.deeplearningbook.org.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>17. Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative Adversarial Networks. June 2014.</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Alec</surname>
            <given-names>Radford</given-names>
          </string-name>
          , Luke Metz, and
          <string-name>
            <given-names>Soumith</given-names>
            <surname>Chintala</surname>
          </string-name>
          .
          <article-title>Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks</article-title>
          .
          <source>November</source>
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>19. Steven Schockaert and Henri Prade. Interpolation and Extrapolation in Conceptual Spaces: A Case Study in the Music Domain. Lecture Notes in Computer Science, pages 217-231, 2011.</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>20. Massimo Warglien, Peter Gärdenfors, and Matthijs Westera. Event Structure, Conceptual Spaces and the Semantics of Verbs. Theoretical Linguistics, 38(3-4), Jan 2012.</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>21. Jiajun Wu, Chengkai Zhang, Tianfan Xue, Bill Freeman, and Josh Tenenbaum. Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 82-90. Curran Associates, Inc., 2016.</mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>22. Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based Generative Adversarial Network. September 2016.</mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>23. Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks. March 2017.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>