<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>How can we reduce the gulf between artificial and natural intelligence?</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Aaron Sloman</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Invited talk at AIC 2014</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>School of Computer Science</institution>
          ,
          <addr-line>Birmingham</addr-line>
          ,
          <country country="UK">UK</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>AI and robotics have many impressive successes, yet there remain huge chasms between artificial systems and forms of natural intelligence in humans and other animals. Fashionable “paradigms” offering definitive answers come and go (sometimes reappearing with new labels). Yet no AI or robotic systems come close to modelling or replicating the development from helpless infant over a decade or two to a competent adult. Human and animal developmental trajectories vastly outstrip, in depth and breadth of achievement, products of artificial learning systems, although some AI products demonstrate super-human competences in restricted domains. I'll outline a very long-term multi-disciplinary research programme addressing these and other inadequacies in current AI, cognitive science, robotics, psychology, neuroscience, philosophy of mathematics and philosophy of mind. The project builds on past work by actively seeking gaps in what we already understand, and by looking for very different clues and challenges: the Meta-Morphogenesis project, partly inspired by Turing's work on morphogenesis, outlined here: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html</p>
      </abstract>
      <kwd-group>
        <kwd>evolution</kwd>
        <kwd>information-processing</kwd>
        <kwd>meta-morphogenesis</kwd>
        <kwd>Turing</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
      <p>There are many impressive successes of AI and robotics, some of them
summarised at http://aitopics.org/news. Yet there remain huge chasms between
artificial systems and forms of natural intelligence in humans and other animals
– including weaver-birds, elephants, squirrels, dolphins, orangutans, carnivorous
mammals, and their prey.1</p>
      <p>Fashionable “paradigms” offering definitive answers come and go, sometimes
reappearing with new labels, and often ignoring previous work, such as the
impressive survey by Marvin Minsky over 50 years ago [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], long before computers
with suitable powers were available.</p>
    </sec>
    <sec id="sec-2">
      <title>-</title>
      <p>1 Nest-building cognition of a weaver bird can be sampled here:
http://www.youtube.com/watch?v=6svAIgEnFvw</p>
      <p>Despite advances over several decades, accelerated recently by availability of
smaller, cheaper, faster computing mechanisms, with very much larger memories
than in the past, no AI or robotic systems come close to modelling or replicating
the development from helpless infant over a decade or two to plumber, cook,
trapeze artist, bricklayer, seamstress, dairy farmer, shop-keeper, child-minder,
professor of philosophy, concert pianist, mathematics teacher, quantum physicist,
waiter in a busy restaurant, etc. Human and animal developmental trajectories
vastly outstrip, in depth and breadth of achievement, the products of artificial
learning systems, although AI systems sometimes produce super-human
competences in restricted domains, such as proving logical theorems, winning at chess
or Jeopardy.2</p>
      <p>I’ll outline a very long-term multi-disciplinary research programme
addressing these and other inadequacies in current AI, robotics, psychology,
neuroscience and philosophy of mathematics and mind, in part by building on past
and ongoing work in AI, and in part by looking for very different clues and
challenges: the Meta-Morphogenesis project, partly inspired by Turing’s work
on morphogenesis.3</p>
      <sec id="sec-2-1">
        <title>First characterise the gulf accurately</title>
        <p>We need to understand what has and has not been achieved in AI. The former
(identifying successes) gets most attention, though in the long run the latter
task (identifying gaps in our knowledge) is more important for future progress.</p>
        <p>
          There are many ways in which current robots and AI systems fall short of
the intelligence of humans and other animals, including their ability to reason
about topology and continuous deformation (for examples see [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] and
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/torus.html). Don’t
expect any robot (even with soft hands and compliant joints) to be able to dress a
two year old child (safely) in the near future, a task that requires understanding
of both topology and deformable materials, among other things.4
        </p>
        <p>Getting machines to understand why things work or don’t work lags even
further behind programmed or trained abilities to perform tasks. For example,
understanding why it’s not a good idea to start putting on a shirt by inserting a
hand into a cuff and pulling the sleeve up over the arm requires a combination
of topological and metrical reasoning – a type of mathematical child-minding
theorem, not taught in schools but understood by most child-minders, even if
they have never articulated the theorem and cannot articulate the reasons why
it is true. Can you? Merely pointing at past evidence showing that attempts to
dress a child that way always fail does not explain why it is impossible.</p>
        <p>2 Though it’s best not to believe everything you see in advertisements:
http://www.youtube.com/watch?v=tIIJME8-au8</p>
        <p>3 http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html
This project is unfunded and I have no plans to apply
for funding, though others may do so if they wish.</p>
        <p>4 As illustrated in this video: http://www.youtube.com/watch?v=WWNlgvtYcEs</p>
        <p>In more obviously mathematical domains, where computers are commonly
assumed to excel, the achievements are narrowly focused on branches of
mathematics using inference methods based on arithmetic, algebra, logic, probability
and statistical theory.</p>
        <p>However, mathematics is much broader than that, and we lack models of
the reasoning (for instance geometrical and topological reasoning) that enabled
humans to come up with the profoundly important and influential mathematical
discoveries reported in Euclid’s Elements 2.5 millennia ago – arguably the single
most important book ever written on this planet. The early pioneers could not
have learnt from mathematics teachers. How did they teach themselves, and
each other? What would be required to enable robots to make similar discoveries
without teachers?</p>
        <p>Those mathematical capabilities seem to have deep, but mostly unnoticed,
connections with animal abilities to perceive practically important types of
affordance, including use of mechanisms that are concerned not only with the
perceiver’s possibilities for immediate action but more generally with what is
and is not possible in a physical situation and how those possibilities and
impossibilities can change, for example if something is moved. A child could learn
that a shoelace threaded through a single hole can be removed from the hole by
pulling the left end of the lace or by pulling the right end. Why does combining
two successful actions fail in this case, whereas in other cases a combination
improves success (e.g. A pushing an object and B pushing the object in the same
direction)? Collecting examples of explanations of impossibilities that humans
understand but current robots do not yet is one way to investigate gaps in what
has been achieved so far. It is also a route toward understanding the nature of
human mathematical competences, which I think start to develop in children
long before anyone notices.</p>
        <p>Many animals, including pre-verbal humans, need to be able to perceive and
think about what is and is not possible in a situation, though in most cases
without having the ability to reflect on their thinking or to communicate the
thoughts to someone else. The meta-cognitive abilities evolve later in the history
of a species and develop later in individuals.</p>
        <p>Thinking about what would be possible in various possible states of affairs
is totally different from abilities to make predictions about what will happen,
or to reason probabilistically. It’s one thing to try repeatedly to push a shirt
on a child by pushing its hand and arm in through the end of a sleeve and
conclude from repeated failures that success is improbable. It’s quite another
thing to understand that if the shirt material cannot be stretched, then success
is impossible (for a normally shaped child and a well-fitting shirt), though if the
material could be stretched as much as needed then it could be done. Additional
reasoning powers might enable the machine to work out that starting by pushing
the head in through the largest opening could require least stretching, and to
work this out without having to collect statistics from repeated attempts.</p>
      </sec>
      <sec id="sec-2-2">
        <title>Shallow statistical vs deep knowledge</title>
        <p>It is possible to have a shallow (statistical) predictive capability based on
observed regularities while lacking deeper knowledge about the set of possibilities
sampled in those observations. An example is the difference between (a) having
heard and remembered a set of sentences and noticed some regular
associations between pairs of words in those sentences and (b) being aware of the
generative grammar used by the speakers, or having acquired such a grammar
unconsciously. The grasp of the grammar, using recursive modes of composition,
permits a much richer and more varied collection of utterances to be produced
or understood. Something similar is required for visual perception of spatial
configurations and spatial processes that are even richer and more varied than
sentences can be. Yet it seems that we share that more powerful competence
with other species, including squirrels and nest-building birds.</p>
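        <p>The difference between (a) and (b) can be made concrete with a toy
generative grammar. The sketch below is my own illustration, not from the paper:
a few recursive rules generate dozens of distinct phrases at a small nesting depth,
and deeper recursion yields unboundedly many more, which no memorised list of
word-pair associations can match.</p>
        <preformat>
```python
import itertools

# Hypothetical toy grammar (illustration only): noun phrases may nest via "of",
# so finitely many rules generate unboundedly many distinct phrases.
GRAMMAR = {
    "NP": [["Det", "N"], ["Det", "N", "of", "NP"]],
    "Det": [["the"]],
    "N": [["box"], ["lid"], ["corner"]],
}

def expand(symbol, depth):
    """Yield every phrase derivable from symbol, nesting at most depth times."""
    if symbol not in GRAMMAR:        # terminal word
        yield [symbol]
        return
    for rule in GRAMMAR[symbol]:
        if depth == 0 and "NP" in rule:
            continue                 # cut off further recursion
        for parts in itertools.product(*(expand(s, depth - 1) for s in rule)):
            yield [word for part in parts for word in part]

phrases = [" ".join(p) for p in expand("NP", 2)]
# 39 distinct phrases already at nesting depth 2, e.g. "the lid of the box";
# a memoriser of word pairs has no finite representation of the whole set.
```
        </preformat>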
        <p>
          This suggests that abilities to acquire, process, store, manipulate, and use
information about spatial structures evolved before capabilities that are unique
to humans, such as use of spoken language. But the spatial information requires
use of something like grammatical structures to cope with scenes of varying
complexity, varying structural detail, and varying collections of possibilities for
change. In other words, visual perception, along with planning and acting on the
basis of what is seen, requires the use of internal languages that have many of
the properties previously thought unique to human communicative languages.
Finding out what those languages are, how they evolved, how they can vary
across species, across individuals, and within an individual during development
is a long term research programme, with potential implications for many aspects
of AI/Robotics and Cognitive Science – discussed further in [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ].
        </p>
        <p>Conceivably a robot could be programmed to explore making various
movements combining a shirt and a flexible, child-shaped doll. It might discover one
or more sequences of moves that successfully get the shirt on, provided that the
shirt and doll are initially in one of the robot’s previously encountered starting
states. This could be done by exploring the space of sequences of possible moves,
whose size would depend on the degree of precision of its motion and control
parameters. For example, if from every position of the hands there are 50 possible
3-D directions of movement and the robot tries 20 steps after each starting
direction, then the number of physical trajectories from the initial state to be
explored is</p>
        <p>5020 = 9536743164062500000000000000000000
and if it tries a million new moves every second, then it could explore that space
in about 302408000000000000 millennia. Clearly animals do something di↵erent
when they learn to do things, but exactly how they choose things to try at each
moment is not known.</p>
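        <p>The arithmetic can be checked mechanically. A minimal sketch (my own
sanity check, using a 365-day year as an approximation):</p>
        <preformat>
```python
# Back-of-envelope check of the estimate in the text: 50 movement directions,
# 20 steps deep, a million new moves tried per second.
branching, depth = 50, 20
trajectories = branching ** depth          # 50^20
seconds = trajectories // 10 ** 6          # at 1e6 moves per second
millennia = seconds / (365 * 24 * 3600) / 1000

print(trajectories)        # 9536743164062500000000000000000000
print(f"{millennia:.3e}")  # 3.024e+17, i.e. roughly 302408000000000000 millennia
```
        </preformat>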
        <p>The “generative grammar” of spatial structures and processes is rich and
deep, and is not concerned only with linear sequences or discrete sequences. In
fact there are multiple overlapping space-time grammars, involving different
collections of objects assembled, disassembled, moved, repaired, etc. and used, often
for many purposes and in many ways. Think of what structures and processes
are made possible by different sorts of children’s play materials and construction
kits, including plasticine, paper and scissors, meccano, lego, tinkertoys, etc. The
sort of deep knowledge I am referring to involves grasp of the structure of a
construction-kit with generative powers, and the ability to make inferences about
what can and cannot be built with that kit, by assembling more and more parts,
subject to the possibilities and constraints inherent in the kit.5</p>
        <p>There are different overlapping subsets of spatio-temporal possibilities, with
different mathematical structures, including Euclidean and non-Euclidean
geometries (e.g. the geometry of the surface of a hand, or face, is non-Euclidean)
and various subsets of topology. Mechanisms for acquiring and using these
“possibility subsets”, i.e. possible action sequences and trajectories, seem to be used
by pre-verbal children and other animals. That suggests that those abilities
must have evolved before linguistic capabilities. They seem to be at work in
young children playing with toys before they can understand or speak a human
language. The starting capabilities, extended through much spatial exploration,
provide much of the subject matter (semantic content) for many linguistic
communications.</p>
        <p>Some of the early forms of reasoning and learning in young humans, and
corresponding subsets in other animals, are beyond the scope of current AI theorem
provers, planners, reasoners, or learning systems that I know of. Some of those
forms seem to be used by non-human intelligent animals that are able to perceive
both possibilities and constraints on possibilities in spatial configurations. Betty,
a New Caledonian crow, made headline news in 2002 when she surprised Oxford
researchers by making a hook from a straight piece of wire, in order to lift a
bucket of food out of a vertical glass tube. Moreover, in a series of repeated
challenges she made multiple hooks, using at least four very different strategies,
taking advantage of different parts of the environment, all apparently in full
knowledge of what she was doing and why – as there was no evidence of random
trial and error behaviour. Why did she not go on using the earlier methods, which
all worked? Several of the videos showing the diversity of techniques are still
available here: http://users.ox.ac.uk/~kgroup/tools/movies.shtml. The absence
of trial-and-error processes in the successful episodes suggests that Betty had a
deep understanding of the range of possibilities and constraints on possibilities
in her problem solving situations.</p>
        <p>5 An evolving discussion note on this topic can be found here:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/construction-kits.html</p>
        <p>It is very unlikely that you have previously encountered and solved the
problem posed below the following image, yet many people very quickly think
of a solution.</p>
        <p>In order to think of a strategy you do not need to know the exact, or even
the approximate, sizes of the objects in the scene, how far away they are from
you, exactly what force will be required to lift the mug, and so on. It may occur
to you that if the mug is full of liquid and you don’t want to spill any of it, then
a quite different solution is required. (Why? Is there a solution?)</p>
        <p>The two pictures in Figure 3 present another set of example action strategies
for changing a situation from one configuration to another. At how many different
levels of abstraction can you think of the process, where the levels differ in the
amount of detail (e.g. metrical detail) of each intermediate stage? For example,
when you first thought about the problem did you specify which hands or which
fingers would be used at every stage, or at which location you would need to
grasp each item? If you specified the locations used to grasp the cup, the saucer
and the spoon, what else would have to change to permit those grasps? The point
about all this is that although you do not normally think of using mathematics
for tasks like this, if you choose a location at which to grasp the cup using
finger and thumb of your left hand, that will mathematically constrain the 3-D
orientation of the gap between finger and thumb, if you don’t want
the cup to be rotated by the fact of bringing finger and thumb together. A
human can think about the possible movements and the orientations required,
and why those orientations are required, without actually performing the action,
and can answer questions about why certain actions will fail, again without doing
anything.</p>
        <p>These are examples of “offline intelligence”, contrasted with the “online
intelligence” used in actually manipulating objects, where information required
for servo-control may be used transiently then discarded and replaced by new
information. My impression is that a vast amount of recent AI/Robotic research
has aimed at providing online intelligence with complete disregard for the
requirements of offline intelligence. Offline intelligence is necessary for achieving
complex goals by performing actions extended over space and time, including the
use of machines that have to be built to support the process, and in some cases
delegating portions of the task to others. The designer or builder of a skyscraper
will not think in terms of his/her own actions, but in terms of what motions of
what parts and materials are required.</p>
        <p>Limitations of sensory-motor intelligence:
when you think about such things, even with fairly detailed constraints on the
possible motions, you will not be thinking about the nervous signals sent
to the muscles involved, nor about the patterns of retinal stimulation that will be
provided – and in fact the same actions can produce different retinal processes
depending on the precise position of the head, and the direction of gaze of the
eyes, and whether and how the fixation changes during the process. Probably
the fixation requirements will be more constrained for a novice at this task than
for an expert.</p>
        <p>
          However, humans, other animals, and intelligent robots do not need to
reason about sensory-motor details if they use an ontology of 3-D structures and
processes, rather than an ontology of sensory and motor nerve signals. Contrast
this with the sorts of assumptions discussed in [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ], and with the many other attempts
to build theories of cognition on the basis of sensory-motor control loops.
        </p>
        <p>
          As John McCarthy pointed out in [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ] it would be surprising if billions of
years of evolution failed to provide intelligent organisms with the information
that they are in a world of persisting 3-D locations, relationships, objects and
processes – a discovery that, in a good design, could be shared across many
types of individuals with very different sensors and motors, and sensory-motor
patterns. Trying to make a living on a planet like this, whose contents extend
far beyond the skin of any individual, would be messy and highly inefficient
if expressed entirely in terms of possible sensory-motor sequences, compared
with using unchanging representations for things that don’t change whenever
sensory or motor signals change. Planning a short cut home, with reference to
roads, junctions, bus routes, etc. is far more sensible than attempting to deal,
at any level of abstraction, with the potentially infinite variety of sensory-motor
patterns that might be relevant.
        </p>
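        <p>The point about planning over persisting locations rather than sensory-motor
patterns can be illustrated with a few lines of graph search. The road map and
place names below are hypothetical, invented purely for illustration:</p>
        <preformat>
```python
from collections import deque

# A tiny map of persisting places (hypothetical). Each entry lists adjacent
# places; nothing here mentions sensor readings or motor signals.
ROADS = {
    "home": ["junction1"],
    "junction1": ["home", "shop", "junction2"],
    "junction2": ["junction1", "work"],
    "shop": ["junction1"],
    "work": ["junction2"],
}

def route(start, goal):
    """Breadth-first search returning a shortest route as a list of places."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in ROADS[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(route("work", "home"))  # ['work', 'junction2', 'junction1', 'home']
```
        </preformat>
        <p>The same plan remains valid however the agent’s gaze, posture, or retinal
input varies along the way, which is the point of the abstraction.</p>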
        <p>This ability to think about sequences of possible alterations in a physical
configuration without actually doing anything, and without having full metrical
information, inspired much early work in AI, including the sorts of symbolic
planning used by Shakey, the Stanford robot, and Freddy, the Edinburgh robot,
over four decades ago, though at the time the available technology (including
computer power) was grossly inadequate for the task, including ruling
out visual servo-control of actions.</p>
        <p>Any researcher claiming that intelligent robots require only the right physical
mode of interaction with the environment, along with mechanisms for finding
patterns in sensory-motor signals, must disregard the capabilities and
information-processing requirements that I have been discussing.</p>
      </sec>
      <sec id="sec-2-3">
        <title>Inflating what “passive walkers” can do</title>
        <p>
          Some (whom I’ll not mention to avoid embarrassing them) have attempted
to support claims that only interactions with the immediate environment are
needed for intelligence by referring to or demonstrating “passive walkers”,6
without saying what will happen if a brick is in the way of a passive walker, or
if part of the walking route starts to slope uphill. Such toys are interesting and
entertaining but do not indicate any need for a “New artificial intelligence”, using
labels such as “embodied”, “enactivist”, “behaviour based”, and “situated”, to
characterise their new paradigm. Those new approaches are at least as selective
as the older reasoning-based approaches that they criticised, though in different
ways. (Some of that history is presented in Boden’s survey [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ].)
        </p>
        <p>6 E.g. http://www.youtube.com/watch?v=N64KOQkbyiI</p>
        <p>The requirements for perception and action mechanisms di↵er according to
which “central” layers the organism has. For instance, for an organism able to
use deliberative capabilities to think of, evaluate, and select multi-step plans,
where most of the actions will occur in situations that do not exist yet, it is not
enough to identify objects and their relationships (pencil, mug, handle of mug,
book, window-frame, etc.) in a current visual percept. It is also necessary to be
able to “think ahead” about possible actions at a suitable level of abstraction,
including consideration of objects not yet known, requiring a potentially infinite
variety of possible sensory and motor patterns.</p>
      </sec>
      <sec id="sec-2-4">
        <title>The birth of mathematics</title>
        <p>The ability to reason about possible actions at a level of generality that abstracts
from metrical details seems to be closely related to the abilities of ancient Greeks,
and others, to make mathematical discoveries about possible configurations of
lines and circles and the consequences of changing those configurations, without
being tied to particular lengths, angles, curvatures, etc., in Euclidean geometry
or topology. As far as I know, no current robot can do this, and neuroscientists
don’t know how brains do it. Some examples of mathematical reasoning that
could be related to reasoning about practical tasks and which are currently
beyond what AI reasoners can do, are presented on my web site.7,8</p>
        <p>
In 1971 I presented a paper at IJCAI, arguing that the focus solely on
logic-based reasoning, recommended by McCarthy and Hayes in [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ], could hold up
progress in AI, because it ignored forms of spatial reasoning that had proved
powerful in mathematics and practical problem solving. I did not realise then
how difficult it would be to explain exactly what the alternatives were and how
they worked – despite many conferences and journal papers on diagrammatic
reasoning since then.
        </p>
        <p>
          There have also been several changes of fashion promoted by various AI
researchers (or their critics) including use of neural nets, constraint nets,
evolutionary algorithms, dynamical systems, behaviour-based systems, embodied
cognition, situated cognition, enactive cognition, autopoiesis, morphological
computation, statistical learning, Bayesian nets, and probably others that I have
not encountered, often accompanied by hand-waving and hyperbole without
much science or engineering. In parallel with this there has been continued
research advancing older paradigms for symbolic and logic based, theorem proving,
planning, and grammar based language processing. Several of the debates are
analysed in [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ].
        </p>
      </sec>
      <sec id="sec-2-5">
        <title>-</title>
        <p>7 http://www.cs.bham.ac.uk/research/projects/cogaff/misc/torus.html
8 http://www.cs.bham.ac.uk/research/projects/cogaff/misc/triangle-sum.html</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Other inadequacies</title>
      <p>There are many other inadequacies in current AI, including, for example, the
lack of an agreed framework for relating information-processing architectures
to requirements in engineering contexts or to explanatory models in scientific
contexts. For example, attempts to model emotions or learning capabilities,
in humans or other animals, are often based on inadequate descriptions of
what needs to be explained: for instance, poor theories of emotions that focus
only on emotions with characteristic behavioural expressions (a small subset of
phenomena requiring explanation), or poor theories of learning that focus only on
a small subset of types of learning (e.g. reinforcement learning where learners
have no understanding of what’s going on). That would exclude the kind of
learning that goes on when people make mathematical discoveries or learn to
program computers or learn to compose music.</p>
      <p>Moreover, much AI research uses a seriously restricted set of forms of
representation (means of encoding information) partly because of the educational
backgrounds of researchers – as a result of which many of them assume that
spatial structures must be represented using mechanisms based on Cartesian
coordinates – and partly because of a failure to analyse in sufficient detail the
variety of problems overcome by many animals in their natural environments.</p>
      <p>Standard psychological research techniques are not applicable to the study
of learning capabilities in young children and other animals because there is so
much individual variation, but the widespread availability of cheap video cameras
has led to a large and growing collection of freely available examples.</p>
      <sec id="sec-3-1">
        <title>More on offline and online intelligence</title>
        <p>Researchers have to learn what to look for. For example, online intelligence
requires highly trained precisely controlled responses matched to fine details
of the physical environment, e.g. catching a ball, playing table tennis, picking
up a box and putting it on another. In contrast, offline intelligence involves
understanding not just existing spatial configurations but also the possibilities
for change and constraints on change, and for some tasks the ability to find
sequences of possible changes to achieve a goal, where some of the possibilities
are not specified in metrical detail because they do not yet exist, but will exist
after part of the plan has been carried out.</p>
        <p>This requires the ability to construct relatively abstract forms of
representation of perceived or remembered situations to allow plans to be constructed with
missing details that can be acquired later during execution. You can think about
making a train trip to another town without having information about where
you will stand when purchasing your ticket or which coach you will board when
the train arrives. You can think about how to rotate a chair to get it through a
doorway without needing information about the precise 3-D coordinates of parts
of the chair or knowing exactly where you will grasp it, or how much force you
will need to apply at various stages of the move.</p>
        <p>There is no reason to believe that humans and other animals have to use
probability distributions over possible precise metrical values in all planning
contexts where precise measurements are not available. Even thinking about
such precise values probabilistically is highly unintelligent when reasoning about
topological relationships or partial orderings (nearer, thinner, a bigger angle,
etc.) is all that’s needed.9 Unfortunately, the mathematically sophisticated, but
nevertheless unintelligent, modes of thinking are used in many robots, after much
statistical learning (to acquire probability distributions) and complex
probabilistic reasoning that is potentially explosive. That is in part a consequence of
the unjustified assumption that all spatial properties and relations have to be
expressed in Cartesian coordinate systems. Human mathematicians did not know
about them when they proved their first theorems about Euclidean geometry, or
built their first shelters.</p>
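        <p>Reasoning with partial orderings needs no numbers at all. A minimal
sketch (my illustration, with invented objects and relations): given a few
“nearer than” facts, transitive closure derives orderings that were never
stated, without any metrical values or probability distributions.</p>
        <preformat>
```python
# Qualitative spatial knowledge: nearer(a, b) means "a is nearer than b".
# The facts are hypothetical, for illustration only.
nearer = {("cup", "saucer"), ("saucer", "spoon"), ("spoon", "door")}

def closure(pairs):
    """Transitive closure: every ordering entailed by the given facts."""
    derived = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(derived):
            for (c, d) in list(derived):
                if b == c and (a, d) not in derived:
                    derived.add((a, d))
                    changed = True
    return derived

facts = closure(nearer)
# ("cup", "door") is derived although no distance was ever measured.
```
        </preformat>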
      </sec>
      <sec id="sec-3-2">
        <title>Speculations about early forms of cognition</title>
        <p>It is clear that the earliest spatial cognition could not have used full Euclidean
geometry, including its uniform metric. I suspect that the metrical version of
geometry was a result of a collection of transitions adding richer and richer
non-metrical relationships, including networks of partial orderings of size, distance,
angle, speed, curvature, etc.</p>
        <p>Later, indefinitely extendable partial metrics were added: distance between X
and Y is at least three times the distance between P and Q and at most five times
that distance. Such procedures could allow previously used standards to be
subdivided with arbitrarily increasing precision. At first this must have been applied
only to special cases, then later somehow (using what cognitive mechanisms?)
extrapolated indefinitely, implicitly using a Kantian form of potential infinity
(long before Kant realised the need for this).</p>
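        <p>Such partial metrical knowledge can be represented as ratio intervals and
composed without ever assigning absolute sizes. A minimal sketch (my
illustration; the particular bounds are invented):</p>
        <preformat>
```python
from fractions import Fraction

# "dist(X,Y) is at least 3 and at most 5 times dist(P,Q)" as the interval (3, 5).
def compose(ab, bc):
    """If d1 is ab[0]..ab[1] times d2, and d2 is bc[0]..bc[1] times d3,
    then d1 is ab[0]*bc[0]..ab[1]*bc[1] times d3."""
    return (ab[0] * bc[0], ab[1] * bc[1])

xy_over_pq = (Fraction(3), Fraction(5))   # dist(X,Y) relative to dist(P,Q)
pq_over_rs = (Fraction(2), Fraction(4))   # dist(P,Q) relative to dist(R,S)

print(compose(xy_over_pq, pq_over_rs))    # (Fraction(6, 1), Fraction(20, 1))
```
        </preformat>
        <p>Subdividing a standard for greater precision just means narrowing such
intervals, which matches the gradual refinement described above.</p>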
        <p>Filling in the details of such a story, and relating it to varieties of cognition
not only in the ancestors of humans but also many other existing species will
be a long term multi-disciplinary collaborative task, with deep implications for
neuroscience, robotics, psychology, philosophy of mathematics and philosophy
of mind. (Among others.)</p>
        <p>
          Moreover, human toddlers appear to be capable of making proto-mathematical
discoveries (“toddler theorems”) even if they are unaware of what they have
done. The learning process starts in infancy, but seems to involve different kinds
of advance at different stages of development, involving different domains as
suggested by Karmiloff-Smith in [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ].
        </p>
        <p>For example, I recently saw an 11-month-old infant discover, apparently with
great delight, that she could hold a ball between her upturned foot and the palm
of her hand. That sort of discovery could not have been made by a one-month-old
child. Why not? (A growing list of toddler theorems, and discussions of their
requirements, can be found in
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/toddlertheorems.html)</p>
        <p>Animal abilities to perceive and use complex novel affordances appear to be
closely related to the ability to make mathematical discoveries, as I have tried
to illustrate in
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/changingaffordances.html
Compare the abilities to think about changes of configurations involving ropes or
strings and the mathematical ability to think about continuous deformation of
closed curves in various kinds of surface.</p>
        <p>Current computational models, and current psychology and neuroscience,
do not seem to come close to describing these competences accurately, or to
explaining them – especially if we consider not only simple numerical
mathematics, on which many psychological studies of mathematics seem to focus,
but also topological and geometrical reasoning, and the essentially mathematical
ability to discover a generative grammar closely related to the verbal patterns a
child has experienced in her locality, where the grammar is very different from
those discovered by children exposed to thousands of other languages.</p>
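        <p>To make the grammatical point concrete, here is a toy sketch (not a model of child language learning: the grammar, words and "heard" patterns are all invented for illustration) of what it means for a small generative grammar to cover, and go beyond, a set of experienced verbal patterns.</p>
        <preformat>
```python
import itertools

# A tiny context-free grammar: each non-terminal maps to a list of
# possible expansions (sequences of symbols).
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["dog"], ["ball"]],
    "V":  [["sees"], ["wants"]],
}

def generate(symbol="S"):
    """Yield every word sequence the grammar derives from `symbol`."""
    if symbol not in GRAMMAR:       # terminal word
        yield [symbol]
        return
    for expansion in GRAMMAR[symbol]:
        parts = [list(generate(s)) for s in expansion]
        for combo in itertools.product(*parts):
            yield [w for part in combo for w in part]

sentences = {" ".join(s) for s in generate()}
heard = {"the dog sees the ball", "the ball wants the dog"}
print(heard <= sentences)  # the grammar generates all heard patterns, and more
```
        </preformat>
        <p>The hard, and essentially mathematical, achievement the text points to is the reverse direction: discovering such a generative structure from the heard patterns alone, for whichever language the child happens to be exposed to.</p>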
        <p>
There seem to be key features of some of those developmental trajectories
that could provide clues, including some noticed by Piaget in his last two books,
on Possibility and Necessity, and by his former colleague Annette Karmiloff-Smith
[
          <xref ref-type="bibr" rid="ref3">3</xref>
          ].
        </p>
      </sec>
      <sec id="sec-3-3">
        <title>The Meta-Morphogenesis project</title>
        <p>Identifying gaps in our knowledge requires a great deal of careful observation of
many forms of behaviour in humans at various stages of development and many
other species, always asking: “what sort of information-processing mechanism
(or mechanisms) could account for that?”</p>
        <p>
          Partly inspired by one of Alan Turing’s last papers on Morphogenesis [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ],
I proposed the Meta-Morphogenesis (M-M) project in [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]: a very long-term
collaborative project for building up an agreed collection of explanatory tasks. Here I
present some ideas about what has been missed in most proposed explanatory
theories.
        </p>
        <p>Perhaps researchers who disagree, often fruitlessly, about what the answers
are can collaborate fruitfully on finding out what the questions are, since much of
what needs to be explained is far from obvious. There are unanswered questions
about uses of vision, varieties of motivation and affect, human and animal
mathematical competences, information-processing architectures required for all
the different sorts of biological competences to be combined, and questions about
how all these phenomena evolved across species, and develop in individuals. This
leads to questions about what the universe had to be like to support the forms
of evolution and the products of evolution that have existed on this planet.
The Meta-Morphogenesis project is concerned with trying to understand what
varieties of information processing biological evolution has achieved, not only in
humans but across the spectrum of life. Many of the achievements are far from
obvious.</p>
        <p>A more detailed, but still evolving, introduction to the project can
be found here:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/metamorphogenesis.html</p>
        <p>Unfortunately, researchers all too often mistake impressive new developments
for steps in the right direction. I am not sure there is any way to change this
without radical changes in our educational systems and research funding systems.</p>
        <p>But those are topics for another time. In the meantime I hope many more
researchers will join the attempts to identify gaps in our knowledge, including
things we know happen but which we do not know how to explain, and in the
longer term by finding gaps we had not previously noticed. I think one way to
do that is to try to investigate transitions in biological information processing
across evolutionary time-scales, since it is clear that the types of information used,
the types of uses of information, and the purposes for which information is used
have changed enormously since the earliest organisms floated in a sea of chemicals.</p>
        <p>Perhaps some of the undiscovered intermediate states in evolution will turn
out to be keys to unnoticed features of the current most sophisticated biological
information processors, including humans.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Boden</surname>
            ,
            <given-names>M.A.</given-names>
          </string-name>
          :
          <article-title>Mind As Machine: A history of Cognitive Science (Vols 1-2</article-title>
          ). Oxford University Press, Oxford (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Clark</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Whatever next? Predictive brains, situated agents, and the future of cognitive science</article-title>
          .
          <source>Behavioral and Brain Sciences</source>
          <volume>36</volume>
          (
          <issue>3</issue>
          ),
          <fpage>1</fpage>
          -
          <lpage>24</lpage>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Karmiloff-Smith</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Beyond Modularity: A Developmental Perspective on Cognitive Science</article-title>
          . MIT Press, Cambridge, MA (
          <year>1992</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>McCarthy</surname>
            ,
            <given-names>J.:</given-names>
          </string-name>
          <article-title>The well-designed child</article-title>
          .
          <source>Artificial Intelligence</source>
          <volume>172</volume>
          (
          <issue>18</issue>
          ),
          <fpage>2003</fpage>
          -
          <lpage>2014</lpage>
          (
          <year>2008</year>
          ), http://www-formal.stanford.edu/jmc/child.html
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>McCarthy</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hayes</surname>
            ,
            <given-names>P.:</given-names>
          </string-name>
          <article-title>Some philosophical problems from the standpoint of AI</article-title>
          . In: Meltzer,
          <string-name>
            <given-names>B.</given-names>
            ,
            <surname>Michie</surname>
          </string-name>
          ,
          <string-name>
            <surname>D</surname>
          </string-name>
          . (eds.)
          <source>Machine Intelligence</source>
          <volume>4</volume>
          , pp.
          <fpage>463</fpage>
          -
          <lpage>502</lpage>
          . Edinburgh University Press, Edinburgh, Scotland (
          <year>1969</year>
          ), http://www-formal.stanford.edu/jmc/mcchay69/mcchay69.html
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Minsky</surname>
            ,
            <given-names>M.L.</given-names>
          </string-name>
          :
          <article-title>Steps toward artificial intelligence</article-title>
          . In: Feigenbaum,
          <string-name>
            <given-names>E.</given-names>
            ,
            <surname>Feldman</surname>
          </string-name>
          ,
          <string-name>
            <surname>J</surname>
          </string-name>
          . (eds.) Computers and Thought, pp.
          <fpage>406</fpage>
          -
          <lpage>450</lpage>
          .
          <string-name>
            <surname>McGraw-Hill</surname>
          </string-name>
          , New York (
          <year>1963</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Sauvy</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sauvy</surname>
            ,
            <given-names>S.:</given-names>
          </string-name>
          <article-title>The Child's Discovery of Space: From hopscotch to mazes - an introduction to intuitive topology</article-title>
          .
          <source>Penguin Education</source>
          ,
          <string-name>
            <surname>Harmondsworth</surname>
          </string-name>
          (
          <year>1974</year>
          ),
          translated from the French by Pam Wells
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Sloman</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Evolution of minds and languages. What evolved first and develops first in children: Languages for communicating, or languages for thinking (Generalised Languages: GLs)? (</article-title>
          <year>2008</year>
          ), http://www.cs.bham.ac.uk/research/projects/cosy/papers/#pr0702
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Sloman</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Virtual machinery and evolution of mind (part 3) meta-morphogenesis: Evolution of information-processing machinery</article-title>
          . In: Cooper, S.B.,
          <string-name>
            <surname>van Leeuwen</surname>
            ,
            <given-names>J</given-names>
          </string-name>
          . (eds.) Alan Turing - His
          <source>Work and Impact</source>
          , pp.
          <fpage>849</fpage>
          -
          <lpage>856</lpage>
          . Elsevier, Amsterdam (
          <year>2013</year>
          ), http://www.cs.bham.ac.uk/research/projects/cogaff/11.html#1106d
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Turing</surname>
            ,
            <given-names>A.M.</given-names>
          </string-name>
          :
          <article-title>The Chemical Basis of Morphogenesis</article-title>
          .
          <source>Phil. Trans. R. Soc. London B</source>
          <volume>237</volume>
          ,
          <fpage>37</fpage>
          -
          <lpage>72</lpage>
          (
          <year>1952</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>