On Representing Concepts in High-Dimensional Linear
                     Spaces

     Agnese Augello3 , Salvatore Gaglio1,3 , Gianluigi Oliveri2,3 , and Giovanni Pilato3
                                1
                                  DICGIM- Università di Palermo
                    Viale delle Scienze, Edificio 6 - 90128, Palermo - ITALY
                 2
                   Dipartimento di Scienze Umanistiche - Università di Palermo
                   Viale delle Scienze, Edificio 12 - 90128, Palermo - ITALY
                          3
                             ICAR - Italian National Research Council
                     Viale delle Scienze - Edificio 11 - 90128 Palermo, Italy
                              {gaglio,oliveri}@unipa.it
                         {augello,pilato}@icar.pa.cnr.it



         Abstract. Producing a mathematical model of concepts is a very important issue
         in artificial intelligence, because such a model, besides being a very interesting
         result in its own right, would also contribute to the emergence of what we could
         call the ‘mathematics of thought.’ One of the most interesting attempts made in
         this direction is P. Gärdenfors’ theory of conceptual spaces, a theory which is
         mostly presented by its author in an informal way. The main aim of the present
         article is to contribute to Gärdenfors’ theory of conceptual spaces by discussing
         some of the advantages which derive from the possibility of representing concepts
         in high-dimensional linear spaces.


1      Introduction

Producing a mathematical model of concepts is a very important issue in artificial in-
telligence, because such a model, besides being a very interesting result in its own
right, would also contribute to the emergence of what we could call the ‘mathematics
of thought.’ One of the most interesting attempts made in this direction is P. Gärdenfors’
theory of conceptual spaces, a theory which is mostly presented by its author in an
informal way.
     The main aim of the present article is to contribute to Gärdenfors’ theory of concep-
tual spaces by discussing some of the advantages which derive from the possibility of
representing concepts in high-dimensional linear spaces.
     In what follows, section 2 is dedicated to providing the main features of and moti-
vations behind Gärdenfors’s theory of conceptual spaces.
     Section 3 discusses the application of high-dimensional linear spaces to the repre-
sentation of concepts, without engaging in a preliminary discussion of the several at-
tempts to formalize conceptual spaces present in the literature.4 (Such a survey is
omitted because of constraints on article length.)
 4
     See on this [Chella et al., 1997], [Chella et al., 1998], [Raubal, 2004], [Raubal, 2009],
     [Rickard, 2006], [Rickard et al., 2007], [Augello et al., 2013].
    In section 4 we examine some of the advantages and limitations of our approach to
conceptual spaces; and, eventually, section 5 brings the paper to a close by providing
a very short résumé of what we think we have achieved in this article.


2      Gärdenfors’ conceptual spaces

Gärdenfors in [Gärdenfors, 2004] describes a cognitive architecture for modelling rep-
resentations. In this architecture an intermediate level, called ‘geometrical conceptual
space’, is introduced between a linguistic-symbolic level and an associationistic sub-
symbolic level to produce a mathematical representation of concepts and their use.5
    According to Gärdenfors, conceptual spaces represent concepts exploiting geomet-
rical structure rather than symbols or connections between neurons. This geometrical
representation is based on the existence/construction of a space endowed with a number
of what Gärdenfors calls ‘quality dimensions,’ quality dimensions whose main function
is that of representing different qualities of objects such as brightness, temperature,
height, width, depth. For him, many important conceptual spaces are metric spaces, i.e.
they are sets of points on which a distance function is defined.
    Within conceptual spaces, objects are represented as points and concepts as regions.
These regions may have various shapes, although to some concepts — those which refer
to natural kinds or natural properties6 — correspond regions which are characterized
by convexity.7 According to Gärdenfors, this latter type of region is strictly related to
the notion of prototype, i.e., to those entities that may be regarded as the archetypal
representatives of a given category of objects (the centroids of the convex regions).8
    For him, judgments of similarity play a crucial rôle in cognitive processes, and he
conjectures that the smaller the distance between the representations of two given
objects (in a conceptual metric space), the more similar to each other the represented
objects are.
    Gärdenfors’s motivation for the introduction of conceptual spaces is that, even if the
symbolic approach to modeling representations is very rich and expressive, it has some
 5
   On Gärdenfors’ tripartite cognitive architecture see: [Augello et al., 2013],
   [Augello et al., 2014], [Augello et al., 2015].
 6
   [Gärdenfors, 2004], Chapter 3, §3.5, p. 71:

          CRITERION P A natural property is a convex region of a domain in a conceptual
       space.

 7
     [Gärdenfors, 2004], Chapter 3, §3.4, p. 69:

          DEFINITION 3.3 A subset C of a conceptual space S is said to be convex if, for all
       points x and y in C, all points between x and y are also in C.

 8
     [Gärdenfors, 2004], Chapter 3, §3.9, p. 88:

           [A]ssuming that a Euclidean metric is defined on the subspace that is subject to
       categorization, a set of prototypes will by this method [Voronoi tessellation] generate a
       unique partitioning of the subspace into convex regions.
intrinsic limitations represented, for example, by the ‘symbol grounding problem,’9 and
by the well known A.I. ‘frame problem’.10 On the other hand, the associationist ap-
proach suffers from its low-level nature, which makes it unsuited for modeling complex
representations.
    According to Gärdenfors, conceptual spaces strike a happy medium between the two
above mentioned systems used for modeling representations, a happy medium given by
the consideration that:

 1. conceptual spaces, in contrast with associationist approaches, are suited for model-
    ing complex representations;
 2. within conceptual spaces problems affecting the symbolic approach, like the sym-
    bol grounding problem, can be solved.

     Let us now consider a simple example of a conceptual space. Assume the existence
of what we are going to call ‘ground space,’ that is, the space in which a cognitive agent
A acts according to certain rules or by using given tools; that such a space is none other
than R2 (see Figure 1); and that A can operate in R2 using pencil, straightedge, and
compasses.
     Now, using compasses, A can draw circles of any centre p ∈ R2 , and any finite
radius r ∈ R, for 0 < r; and can also measure the length of the radius of the circles
he draws. Since ‘x has a radius of the same length as y’ is an equivalence relation on
the collection C of all circles in R2 , it follows that C is partitioned by it into mutually
disjoint equivalence classes in such a way that any two circles in C having radii of the
same length belong to the same equivalence class.
     Note that the equivalence classes of congruent circles in R2 can be represented as
points of the one-dimensional conceptual space [C] with coordinates x ∈ (0, ∞) ⊆ R.
This is a conceptual space which, for obvious reasons, we are going to call circle. In
circle, any x ∈ (0, ∞) expresses the length of the radius of the circles belonging to the
equivalence class with coordinate x. (We use x as the index of the equivalence class of
which x is the coordinate.)
     If we define the distance in circle between any two equivalence classes as the abso-
lute value of the difference between the lengths of the radii of any two of their represen-
tatives, then circle is a metric space, and, clearly, the smaller the distance between two
 9
     The following quotation from [Harnad, 1990] is to be found in [Gärdenfors, 2004], Chapter 2,
     §2.2.2, p. 38:

           How can the semantic interpretation of a formal symbol system be made intrinsic
       to the system, rather than just parasitic on the meaning in our heads? How can the
       meanings of the meaningless symbol tokens, manipulated solely on the basis of their
       (arbitrary) shapes, be grouped in anything but other meaningless symbols?

10
     [Gärdenfors, 2004], Chapter 2, §2.2.2, p. 37:

           The frame problem can be defined as the problem of specifying on the symbolic
       level what changes and what stays constant in the particular domain when a particular
       action is performed.
                              Fig. 1. From Ground to Conceptual Space


points [cx ] and [cy ] of circle, the more similar to each other any s ∈ [cx ] and l ∈ [cy ]
will be. Other interesting mathematical examples of conceptual spaces are rectangle,
and direction.11
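As a minimal illustration (in Python, with function names of our own choosing, not part of the theory), the metric on circle amounts to comparing radius lengths:

```python
# A sketch of the one-dimensional conceptual space "circle": each point is
# the equivalence class of all circles sharing a radius length, so a class
# is fully identified by a positive real coordinate x in (0, infinity).

def circle_distance(x: float, y: float) -> float:
    """Distance between two equivalence classes of congruent circles,
    i.e. the absolute value of the difference of their radius lengths."""
    assert x > 0 and y > 0, "radius lengths must be positive"
    return abs(x - y)

# The smaller the distance, the more similar any two representative circles are.
print(circle_distance(2.0, 2.5))  # 0.5
print(circle_distance(1.0, 1.0))  # circles in the same class: 0.0
```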
     What we intend to do in the remaining part of this paper is to contribute to Gärden-
fors’s theory of conceptual spaces by:
11
     The conceptual space rectangle is discussed by Gärdenfors in [Gärdenfors, 2004], Chapter 3,
     §3.10, pp. 93–94. With regard to the conceptual space direction, if you define the parallelism
     relation on R2 in such a way that any straight line l in R2 is parallel to itself, then the paral-
     lelism relation becomes an equivalence relation on the set of straight lines S in R2 . From this
     we have that this equivalence relation partitions S into mutually disjoint equivalence classes.
        Now, if [S] is the set of the above mentioned equivalence classes, let τ be the function

                                            τ : [S] → [0, π)

     that associates each [s] ∈ [S] to the angle made by each s ∈ [s] with the x-axis of R2 . (Note
     that τ is injective and surjective.) If τ ([s]) = θ and τ ([l]) = λ, and we put d([s], [l]) =
     d(τ ([s]), τ ([l])), where d(τ ([s]), τ ([l])) is the usual metric on [0, π), we have:

                                     d([s], [l]) = d(τ ([s]), τ ([l]))                             (1)
                                                = d(θ, λ)                                          (2)
                                                = |θ − λ|.                                         (3)

     Therefore, [S], with the above metric, is the 1-dimensional conceptual metric space direction.
        Consider that in direction each [s] ∈ [S] is a direction, and that in it we can distinguish a
     region R1 , the points of which are equivalence classes of straight lines with an angle θ with
     the x-axis such that θ ∈ (0, π2 ) (which have a positive derivative); from the region R2 , the
     points of which are equivalence classes of straight lines with an angle θ with the x-axis such
     that θ ∈ ( π2 , π) (which have a negative derivative). Clearly, R1 and R2 are convex sets.
        Lastly, for any [s], [l] ∈ direction, if d([s], [l]) → 0 then also |θ − λ| → 0 and, conse-
     quently, any s ∈ [s] and l ∈ [l] will be more similar to each other in the sense that they will
     tend to become parallel to each other.
(a) providing plausible similarity measures between patterns12 using a particular ex-
    ample of conceptual spaces — high-dimensional linear spaces — and the so-called
    ‘kernels’ (see next section);
(b) individuating a general mathematical strategy for solving, within high-dimensional
    linear spaces, the problem concerning the possibility of assigning perceptual input
    either to the region of a conceptual space representing a given concept or to its com-
    plement when exemplars of patterns falling under the concept (positive exemplars)
    and of patterns not falling under the concept (negative exemplars) are provided.

   Both these questions are of vital importance for the theory of conceptual spaces,
because:
(i) much of what Gärdenfors intends to do (with his theory) hangs on the existence of
     an effective measure of similarity between patterns;
(ii) the meaningfulness and usefulness of a concept in general and, in particular, of a
     concept given in terms of ‘positive’ and ‘negative’ exemplars, rest on the possibility
     of recognizing whether perceptual input falls under it or not.


3      Kernels and high-dimensional linear spaces

Imagine you are on a farm and, while you are walking about its grounds, you come
across a group of horses. These horses are either ponies or normal height horses. You
are not an expert on horses and, although you recognize some of them as ponies and
others as normal height horses, you would like to have a criterion which, exploiting the
few ponies and normal height horses you recognize, might enable you to assign to each
horse in the group either the label ‘pony’ or the label ‘normal height horse.’
     Given the present state of your knowledge about horses, your wish can be granted
only if you are able to establish, for each horse in the group, that either this is more
similar to the ponies you recognize than to the normal height horses you recognize
or vice versa. Of course, this ability presupposes the possibility of defining a relevant
similarity measure on the group of horses.
     We can give a mathematical representation of this problem in the following way.
Call D, domain, the group of horses, and let P and Q be the two distinguished subsets of
D whose elements are, respectively, the positive (ponies you recognize) and the negative
(normal height horses you recognize) exemplars.
     The problem of finding a (relevant) similarity measure on the group of horses now
becomes the problem of finding a function k from D × D into the reals, k : D × D →
R, which associates to any ordered pair of horses (a, b) ∈ D × D the real number
expressing how similar the first and the second element of the pair are to one another
(with regard to the relevant feature). The function k above is known in the literature
as a kernel.
     It is important to say that similarity measures (kernels) vary according to the type
of elements we find in the domain D, e.g. horses, cats, dogs, pigs, lawyers, politicians,
12
     Here the concept of pattern is more general than that of object. For the concepts of pattern and
     object see: [Resnik, 1981], [Dennett, 1991], [Oliveri, 1997], [Oliveri, 1998], [Shapiro, 2000],
     [Resnik, 2001], [Oliveri, 2007], [Oliveri, 2012], [Bombieri, 2013].
etc., and that the choice of the right/relevant similarity measure for the elements of a
certain domain is not, in general, a trivial matter.
    However, to see a particularly simple example of kernels at work as similarity mea-
sures, let us assume that there exists a function σ which embeds D (our group of horses)
into a real inner product linear space W, which we are going to call feature space;13 and
that we can define the appropriate kernel on D × D using the canonical dot product in
W 14 in the following way:
for any a, b ∈ D,

                                   k(a, b) = (σ(a) · σ(b))                                   (4)
                                           = (wa · wb ).                                     (5)
     Indeed, if, for wa , wb ∈ W, we consider the unit vectors ŵa = wa /‖wa ‖ and
ŵb = wb /‖wb ‖, we have that ‖ŵa ‖ = ‖ŵb ‖ = 1, and that (ŵa · ŵb ) = cos θ, where θ
is the angle between wa and wb . And to see how the dot product in W defines a
similarity measure k on D, it is sufficient to observe that when θ → 0 the distance
between the two unit vectors ŵa and ŵb , d(ŵa , ŵb ) = ‖ŵa − ŵb ‖, tends to 0 as well
and, therefore, as a consequence of the embedding σ, a and b become more and more
similar to one another.
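This cosine construction can be sketched as follows; it is a minimal illustration in which we assume the embedding σ has already produced the feature vectors, and the function names are ours:

```python
import math

def dot(v, w):
    """Canonical dot product on R^n."""
    return sum(vi * wi for vi, wi in zip(v, w))

def norm(v):
    """Euclidean length of a vector."""
    return math.sqrt(dot(v, v))

def cosine_kernel(wa, wb):
    """Similarity of two feature vectors as the cosine of the angle
    between them: the dot product of their normalizations."""
    return dot(wa, wb) / (norm(wa) * norm(wb))

# Nearly parallel vectors: cosine close to 1 (high similarity).
print(cosine_kernel([1.0, 0.0], [1.0, 0.1]))
# Orthogonal vectors: cosine 0 (no similarity along this feature).
print(cosine_kernel([1.0, 0.0], [0.0, 1.0]))
```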
     Having seen how kernels can offer a satisfactory similarity measure on D (point (a)
of section 2), let us now illustrate an example of a simple algorithm for solving, within
high-dimensional linear spaces, the problem concerning the possibility of assigning
perceptual input either to the region of a conceptual space representing a given concept
or to its complement when positive and negative exemplars are provided (point (b) §2).
     Assume that D, k, W, P, Q and σ are as above, and that wx1 , . . . , wxn ∈ σ(P )
are the vectors in W representing the positive exemplars (the ponies you recognize),
whereas wy1 , . . . , wym ∈ σ(Q) are the vectors in W representing the negative exem-
plars (the normal height horses you recognize).
     Now, given an arbitrary horse h ∈ D, and its vectorial representation wh in W, de-
termine the mean vector wx of wx1 , . . . , wxn , and the mean vector wy of wy1 , . . . , wym .
Having done so, calculate the distance of wh from wx and from wy ; h is a pony if and
only if kwh − wx k < kwh − wy k; and, of course, h is a normal height horse if and only
if kwh − wy k < kwh − wx k.
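This mean-vector decision rule can be sketched as follows; the feature values are invented for illustration, and the helper names are ours:

```python
def mean_vector(vectors):
    """Componentwise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(v, w):
    """Euclidean distance between two vectors."""
    return sum((vi - wi) ** 2 for vi, wi in zip(v, w)) ** 0.5

def classify(wh, positives, negatives):
    """Label wh 'pony' iff it is closer to the mean of the positive
    exemplars than to the mean of the negative ones, and vice versa."""
    wx, wy = mean_vector(positives), mean_vector(negatives)
    return "pony" if distance(wh, wx) < distance(wh, wy) else "normal height horse"

# Toy feature vectors (say, height and girth) for the recognized exemplars.
ponies = [[1.2, 1.5], [1.3, 1.4]]
horses = [[1.7, 2.0], [1.8, 2.1]]
print(classify([1.25, 1.45], ponies, horses))  # pony
print(classify([1.75, 2.05], ponies, horses))  # normal height horse
```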
     As a matter of fact, it is possible to show that the hyperplane, in the feature space
W, separating the vectorial representations of ponies from the vectorial representa-
tions of normal height horses depends only on a proper subset Sv of the set of vectors
σ(P ) ∪ σ(Q) whose elements are the vectorial representations of the positive and the
negative exemplars. The elements of Sv are called ‘support vectors,’ and the idea here
is that the separating hyperplane is determined only by those vectors representing the
positive and negative exemplars which are closest to it.
     A last important consideration about kernels is that, if we are dealing with classifi-
cations which are separable by hyperplanes (linearly separable), and make the right
13
   Here the fact that, as a consequence of the embedding σ, D is isomorphic to σ(D) is of crucial
   importance.
14
   If W is an n-dimensional vector space and v, w ∈ W, where v = (v1 , . . . , vn ) and
   w = (w1 , . . . , wn ), then (v · w) = v1 w1 + · · · + vn wn .
choice of high-dimensional feature space W, it is possible to show that there exists an
optimal separating hyperplane in W, that is, a hyperplane which is ‘distinguished by
the maximum margin of separation between any [vector representing a] training point
[a positive or negative exemplar] and the hyperplane’ ([Scholkopf, 2001], §1.4, p. 10);
and that ‘[b]y the use of a kernel function . . . it is possible to compute the separating
hyperplane without explicitly carrying out the [embedding] map into the feature space’
([Scholkopf, 2001], §1.4, p. 13).
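This last point can be illustrated with a toy example of our own choosing: for the homogeneous polynomial kernel k(a, b) = (a · b)² on R², the associated explicit embedding is φ(x1, x2) = (x1², √2·x1x2, x2²), and the kernel returns the feature-space dot product without ever constructing φ(a) or φ(b):

```python
import math

def phi(x):
    """Explicit embedding of R^2 into the 3-dimensional feature space
    associated with the kernel k(a, b) = (a . b)^2."""
    x1, x2 = x
    return (x1 * x1, math.sqrt(2) * x1 * x2, x2 * x2)

def dot(v, w):
    """Canonical dot product."""
    return sum(vi * wi for vi, wi in zip(v, w))

def k(a, b):
    """Polynomial kernel, computed entirely in the input space R^2."""
    return dot(a, b) ** 2

a, b = (1.0, 2.0), (3.0, 1.0)
# The kernel value agrees with the dot product taken after the embedding:
print(k(a, b), dot(phi(a), phi(b)))  # both equal (1*3 + 2*1)^2 = 25.0
```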
    To see the relevance of kernels to the possibility of giving a solution to problem
(b) of §2, consider that, if the vectorial representations of the ponies and of the normal
height horses belonging to our group of horses are linearly separable within a given
high-dimensional real linear space W, then W well deserves to be called ‘conceptual
space.’ For, since it is possible to find in W an optimal hyperplane which separates the
region of vectors representing ponies from that representing normal height horses, it
follows that:
(i) it is also possible to draw (in W ) a sharp distinction between the concepts ‘x is a
     pony belonging to our group of horses,’ and ‘x is a normal height horse belonging
     to our group of horses;’
(ii) the region of vectors representing ponies and that representing normal height horses
     (and the hyperplane) are clearly convex regions of W .
     Secondly, the existence of a kernel function which gives us the possibility of com-
puting the separating hyperplane in W provides us with an effective procedure for solv-
ing the problem concerning the possibility of assigning perceptual input either to the
region of a conceptual space representing a given concept or to its complement when
exemplars of patterns falling under the concept (positive exemplars) and of patterns not
falling under the concept (negative exemplars) are provided (problem (b), §2).


4      Kernels and conceptual spaces
In attempting to assess the relevance of the application of kernel methods to conceptual
spaces, besides what we said at the end of section 2 with regard to the fact that kernels
provide: (α) effective similarity measures between patterns, and (β) algorithms for
deciding whether or not perceptual input falls under a concept given in terms of positive
and negative exemplars, we need to consider the following points.
    First, there is a strong connection between the mathematical character of our ap-
proach — consisting of an application of ideas and techniques belonging to linear al-
gebra and geometry — and Gärdenfors’ declared main aim in developing his theory of
conceptual spaces: describing the geometry of thought.
    Secondly, in contrast with some of Gärdenfors’ original ideas, pattern-recognition
algorithms based on support vectors do not appeal to the rather controversial notion
of prototype. We find the notion of prototype controversial because the prototypical
horse, the prototypical bird, etc. are just fictitious entities produced by the imagination
of artists, and so decision algorithms based on prototypes15 are bound to be deeply
flawed.
15
     An example of decision algorithm based on prototypes is the following: let h and p be, respec-
     tively, the representations, within conceptual space C, of the normal height horses, and of the
    On the other hand, this is not the case with decision algorithms based on support
vectors: since the support vectors are some of the positive and negative exemplars
produced/exhibited to generate a concept, and/or to train a device in recognizing
certain patterns, they can hardly be thought to be fictitious.
    Thirdly, our attempt to model conceptual spaces is particularly relevant to that stage
in the activity of a cognitive agent known as learning concepts from exemplars. Learn-
ing concepts from exemplars is very important for the theory of conceptual spaces,
because it provides, among other things, a convincing account of a possible way in
which concepts and, therefore, conceptual spaces come about.
    Fourthly, if we identify the input domain D of §3 with the set of measures taken by
the sensors of a given artificial agent, or with the perceptual input of a human being,
such an input is highly non-linear in the sense that if we increase (or decrease) the stim-
ulation beyond certain threshold values, the perception of the agent does not increase
(or decrease) accordingly. An important advantage of the use of kernels consists in the
linearization of the input, a linearization which makes possible the use of vector spaces.
    Lastly, there seem to be cases in which our approach to modeling conceptual spaces
is not applicable. One of these is represented by those conceptual spaces which are not
metric spaces, and another is that family of conceptual spaces in which concepts are not
generated from positive and negative exemplars.


5     Conclusions

In writing this paper we intended to contribute to Gärdenfors’ theory of conceptual
spaces (§2) by showing how it is possible to use linear algebra to give a mathematical
representation of conceptual spaces.
    In particular, by exploiting kernel functions and high-dimensional linear spaces, we
addressed two problems which are central to the theory of conceptual spaces:

 1. Is it possible to provide similarity measures for input patterns?
 2. Are there decision procedures for input patterns falling under concepts
    given/learned by means of positive and negative exemplars? (§3)

   We also discussed some of the merits and defects of our approach to formalizing
conceptual spaces (§4). The questions we touched upon were:

(i) the harmony existing between our way of formalizing conceptual spaces, and Gär-
     denfors’ main aim in introducing his theory of conceptual spaces, which is that of
     providing a description of the geometry of thought;
(ii) the superiority of our pattern-recognition algorithms, which exploit support vectors,
     over those based on prototypes;
(iii) the relevance of our approach to conceptual spaces to the important problem of
     learning concepts from exemplars;

    ponies prototypes. If the representation of horse x in C is closer to h than it is to p, the horse
    represented by x is a normal height horse, etc. Gärdenfors discusses this type of algorithm in
    [Gärdenfors, 2004], Chapter 3, §3.9, p. 87.
(iv) the importance of using kernels for applying linear algebra to the problem of pro-
     viding a fruitful mathematical representation of perceptual input;
(v) some of the limitations present in our way of dealing mathematically with concep-
     tual spaces.
    A word of warning concerning point (iv) above, and the general framework within
which this paper should be read. Our approach to conceptual spaces is meant to provide
‘a’ possible way of formalizing them, alongside many others which have been offered.
This is a way of formalizing conceptual spaces which proves to be particularly effective
in circumstances like those we have discussed.


References
[Augello et al., 2013] Augello, A., Gaglio, S., Oliveri, G., Pilato, G.: 2013, ‘An algebra for the
     manipulation of conceptual spaces in cognitive agents’, Biologically Inspired Cognitive Ar-
     chitectures, vol. 6, pp. 23–29.
[Augello et al., 2014] Augello, A., Gaglio, S., Oliveri, G., Pilato, G.: 2014, ‘Mathematical Pat-
     terns and Cognitive Architectures’, in: Lieto, A., Radicioni, D. P., Cruciani, M. (eds.), Artifi-
     cial Intelligence and Cognition 2015, CEUR Workshops Proceedings, vol. 1510, pp. 68–82,
     Aachen: University of Aachen.
[Augello et al., 2015] Augello, A., Gaglio, S., Oliveri, G., Pilato, G.: 2015, ‘Pattern-Recognition:
     A Foundational Approach’, in: Lieto, A., Battaglino, C., Radicioni, D. P., Sanguinetti, M.
     (eds.), Artificial Intelligence and Cognition 2014, CEUR Workshops Proceedings, vol. 1315,
     pp. 134–139, Aachen: University of Aachen.
[Bombieri, 2013] Bombieri, E.: 2013, ‘The shifting aspects of truth in mathematics’, Euresis,
     vol. 5, pp. 249–272.
[Chella et al., 1997] Chella, A., Frixione, M., Gaglio, S.: 1997, ‘A cognitive architecture for
     artificial vision’, Artificial Intelligence, vol. 89, pp. 73–111.
[Chella et al., 1998] Chella, A., Frixione, M., Gaglio, S.: 1998, ‘An Architecture for Autonomous
     Agents Exploiting Conceptual Representations’, Robotics and Autonomous Systems, vol. 25,
     pp. 231–240.
[Dales & Oliveri, 1998] Dales, H.G. & Oliveri, G. (eds.): 1998, Truth in Mathematics, Oxford
     University Press, Oxford.
[Dennett, 1991] Dennett, D.: 1991, ‘Real Patterns’, The Journal of Philosophy, Vol. 88, No. 1,
     pp. 27-51.
[Gärdenfors, 2004] Gärdenfors, P.: 2004, Conceptual Spaces: The Geometry of Thought, MIT
     Press, Cambridge, Massachusetts.
[Harnad, 1990] Harnad, S.: 1990, ‘The symbol grounding problem’, Physica D: Nonlinear Phe-
     nomena, vol. 42, pp. 335–346.
[Oliveri, 1997] Oliveri, G.: 1997, ‘Mathematics. A Science of Patterns?’, Synthese, vol. 112,
     issue 3, pp. 379–402.
[Oliveri, 1998] Oliveri, G.: 1998, ‘True to the Pattern’, in: [Dales & Oliveri, 1998], pp. 253–269.
[Oliveri, 2007] Oliveri, G.: 2007, A Realist Philosophy of Mathematics, College Publications,
     London.
[Oliveri, 2012] Oliveri, G.: 2012, ‘Object, Structure, and Form’, Logique & Analyse, vol. 219,
     pp. 401-442.
[Raubal, 2004] Raubal, M.: 2004, ‘Formalizing conceptual spaces’, in: Varzi, A. C., Vieu, L.
     (eds.), Formal Ontology in Information Systems, Proceedings of the Third International
     Conference (FOIS 2004), pp. 153–164.
[Raubal, 2009] Adams, B., Raubal, M.: 2009, ‘A metric conceptual space algebra’, in: Hornsby,
     K. S., Claramunt, C., Denis, M., Ligozat, G. (eds.), Proceedings of the 9th International
     Conference on Spatial Information Theory (COSIT’09), Springer-Verlag, Berlin, Heidelberg,
     pp. 51–68.
[Resnik, 1981] Resnik, M.D.: 1981, ‘Mathematics as a Science of Patterns: Ontology and Ref-
    erence’, Noûs XV, pp. 529-550.
[Resnik, 2001] Resnik, M.D.: 2001, Mathematics as a Science of Patterns, Clarendon Press,
    Oxford.
[Rickard, 2006] Rickard, J. T.: 2006, ‘A concept geometry for conceptual spaces’, Fuzzy Opti-
     mization and Decision Making, vol. 5, no. 4, pp. 311–329.
[Rickard et al., 2007] Rickard, J. T., Aisbett, J., Gibbon, G.: 2007, ‘Reformulation of the theory
     of conceptual spaces’, Information Sciences, vol. 177, no. 21, pp. 4539–4565.
[Scholkopf, 2001] Schölkopf, B., Smola, A. J.: 2001, Learning with Kernels: Support Vector
     Machines, Regularization, Optimization, and Beyond, MIT Press, Cambridge, MA.
[Shapiro, 2000] Shapiro, S.: 2000, Philosophy of Mathematics. Structure and Ontology, Oxford
    University Press, Oxford.