<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>International Conference on Artificial Intelligence</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Morphognosis: the shape of knowledge in space and time</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Thomas E. Portegys</string-name>
          <aff>Ernst &amp; Young LLP, New York, USA</aff>
          <email>tom.portegys@ey.com</email>
        </contrib>
      </contrib-group>
      <fpage>9</fpage>
      <lpage>14</lpage>
      <abstract>
        <p>Artificial intelligence research to a great degree focuses on the brain and behaviors that the brain generates. But the brain, an extremely complex structure resulting from millions of years of evolution, can be viewed as a solution to problems posed by an environment existing in space and time. The environment generates signals that produce sensory events within an organism. Building an internal spatial and temporal model of the environment allows an organism to navigate and manipulate the environment. Higher intelligence might be the ability to process information coming from a larger extent of space-time. In keeping with nature's penchant for extending rather than replacing, the purpose of the mammalian neocortex might then be to record events from distant reaches of space and time and render them, as though yet near and present, to the older, deeper brain whose instinctual roles have changed little over eons. Here this notion is embodied in a model called morphognosis (morpho = shape and gnosis = knowledge). Its basic structure is a pyramid of event recordings called a morphognostic. At the apex of the pyramid are the most recent and nearby events. Receding from the apex are less recent and possibly more distant events. A morphognostic can thus be viewed as a structure of progressively larger chunks of space-time knowledge. A set of morphognostics forms long-term memories that are learned by exposure to the environment. A cellular automaton is used as the platform to investigate the morphognosis model, using a simulated organism that learns to forage in its world for food, build a nest, and play the game of Pong.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>The human brain is the seat of intelligence. Thus when we
attempt to craft intelligence, naturally we turn to it as a guide.
Fortunately, neuroscience is proceeding at an astounding
pace [Kaiser, 2014; Stetka, 2016], methodically unpacking its
mysteries. Yet the complexity of the brain, with billions of
neurons and trillions of synapses, remains daunting. Teasing
apart which aspects and features of the brain are essential to
the function of intelligence and which are incidental is a
crucial and difficult task. Krakauer et al. [2017] recommend that
neuroscience work take place after the study of related
behaviors.</p>
      <p>
        Unfortunately, the prospects of understanding complex
systems through examination and dissection are questionable
[Jonas and Kording, 2016]. And as for constructing a
complete precise brain model, it is possible, as John von
Neumann believed [Mühlenbein, 2009], that at a certain level of
complexity the simplest precise description of a thing is the
thing itself. In reaction to this, some efforts, such as The
Human Brain Project [
        <xref ref-type="bibr" rid="ref17">2015</xref>
        ] and Numenta [Hawkins, 2004;
White paper, 2011], have taken the position that analysis
must be complemented with synthesis and simulation to
achieve a satisfactory level of understanding.
      </p>
      <p>From an artificial intelligence (AI) viewpoint, we must
keep in mind that the purpose of a brain is to allow an
organism to navigate and manipulate its environment. Thus it is a
solution to problems posed by the environment. While the
earlier days of AI seemed more focused on this viewpoint,
recently neuroscience has assumed perhaps an outsized role
in directing AI, even to the extent of governmental
encouragement [Vogelstein, 2014].</p>
      <p>
        Some researchers maintain that the environment largely
consists of a body for the brain to interact with. The embodied
brain will thus leverage the sensory and motor capabilities of
a body that are adapted to an environment. Robotics
researchers such as Brooks [
        <xref ref-type="bibr" rid="ref2">1999</xref>
        ], Hoffmann and Pfeifer [2011] have
argued that true artificial intelligence can only be achieved by
machines that have sensory and motor skills and are
connected to the world through a body. However, this approach
merely shifts the problem, since the body, like the brain, is also a
solution to its environment.
      </p>
      <p>Determining a model of an organism’s environment is
more tractable than creating a brain model of that
environment. But it requires settling on what in the world
produces sensory events and reacts to motor responses.
Confounding this is that we of course must use our brains to do
this. There is a common and somewhat ironic tendency to
describe AI inputs and outputs in human cognitive terms, i.e.
post-processed brain output, such as symbolic variables.</p>
      <p>
        Hoffman [
        <xref ref-type="bibr" rid="ref5">2009</xref>
        ] argues that evolution has shaped our
senses and perceptual machinery to only provide information
on events that are ancestrally significant, such as finding food
and safety. Other events in the environment that we cannot
directly sense must be mapped through technology onto our
sensory capabilities. For example, in the age of science the
existence and use of X-rays is important, but we sense them
only indirectly, as shadows on photographic film. Indeed,
Hoffman argues that reality may be more radically alien than
we can imagine.
      </p>
      <p>Epistemological offerings would seem at best too abstract
to be useful for framing a sensory-response environment, and
at worst useless, as in the cases of nihilism and solipsism.
And physics has in recent times become increasingly muddier
on the “true” nature of reality:</p>
      <p>• The arrow of time may be related to the perception of
entropy [Halliwell, 1994].</p>
      <p>• String theory demands a number of extra infinitesimal
dimensions [Rickles, 2014].</p>
      <p>• The perception of space may be a holographic projection
[Bousso, 2002].</p>
      <p>• Reality could be a cellular automaton [Wolfram, 2002],
a graph [Wolfram, 2015], or a simulation [Moskowitz,
2016].</p>
      <p>Despite these hazards, people universally experience the
environment as a space-time structure. And even if there is a
different underlying substructure, the model is empirically
effective. The presence of mammalian brain structures for
mapping spatial events [Vorhees and Williams, 2014] provides
evidence for the processing of this type of information.
Similarly, brain structures for sensing the passage of time have
also found support [Sanders, 2015].</p>
      <p>Using space-time as a model, it can be speculated that
higher intelligence is the ability to process information
arising from a larger extent of space-time. And in keeping with
nature’s penchant for extending rather than replacing, the
purpose of the mammalian neocortex might then be to record
events from distant reaches of space and time and render
them, as though yet near and present, to the older, deeper
brain whose instinctual roles have changed little over eons. If
this is so, these structures would be repurposed to embody
language and abstract concepts.</p>
      <p>Building an internal spatial and temporal model of the
environment allows an organism to navigate and manipulate the
environment. This paper introduces a model called
morphognosis (morpho = shape and gnosis = knowledge). Its basic
structure is a pyramid of event recordings called a
morphognostic, as shown in Figure 1. At the apex of the pyramid are
the most recent and nearby events. Receding from the apex
are less recent and possibly more distant events.</p>
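<p>As a rough illustration, the pyramid can be sketched in code as a stack of neighborhoods whose spatial extent and time span grow with level. The sketch below is ours, not the MoxWorx implementation, and the doubling schedule is an assumed parameter choice:</p>

```java
// Minimal sketch of a morphognostic: a stack of neighborhoods whose
// spatial extent and time span double at each level (an assumed
// scaling; in the model this is a parameter).
import java.util.ArrayList;
import java.util.List;

public class MorphognosticSketch {
    // One level of the pyramid: events aggregated over a square of
    // cells and an interval of time.
    static class Neighborhood {
        final int spatialExtent;  // cells on a side
        final int duration;       // time steps aggregated
        Neighborhood(int spatialExtent, int duration) {
            this.spatialExtent = spatialExtent;
            this.duration = duration;
        }
    }

    // Build a pyramid: the apex (level 0) holds the most recent,
    // nearby events; each deeper level covers more space and time.
    static List<Neighborhood> buildPyramid(int levels) {
        List<Neighborhood> pyramid = new ArrayList<>();
        for (int level = 0; level < levels; level++) {
            int extent = 1 << level;    // 1, 2, 4, ... cells
            int duration = 1 << level;  // 1, 2, 4, ... time steps
            pyramid.add(new Neighborhood(extent, duration));
        }
        return pyramid;
    }
}
```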
      <p>Morphognosis is partially inspired by an abstract
morphogenesis model called Morphozoic [Portegys et al., 2017].
Morphogenesis is the process of generating complex
structures from simpler ones within an environment. Morphozoic
is based on hierarchically nested neighborhoods within a
cellular automaton. Morphozoic was found to be robust and
noise tolerant in reproducing a number of
morphogenesis-like phenomena, including Turing reaction-diffusion
systems [Turing, 1952], gastrulation, and neuron pathfinding. It is
also capable of image reconstruction tasks.</p>
    </sec>
    <sec id="sec-2">
      <title>Description</title>
      <p>The morphognosis model is demonstrated in three 2D cellular
environments: (1) a food foraging task, (2) a nest building
task, and (3) the game of Pong. The food foraging task is used
as a venue to further define the model.</p>
    </sec>
    <sec id="sec-3">
      <title>Food foraging</title>
      <p>In this task a virtual creature called a mox finds itself in a 2D
cellular world, as shown in Figure 2. To find food the mox
must navigate around obstacles of various types
(colors).</p>
      <p>Figure 2 – Mox food foraging in a 2D cellular world.</p>
      <p>Metamorph “execution” consists of generating a
morphognostic for the current mox position and orientation,
then finding the closest morphognostic contained in the learned
metamorph set, where:</p>
      <p>distance(m<sub>1</sub>, m<sub>2</sub>) = Σ<sub>l</sub> Σ<sub>x,y</sub> Σ<sub>v</sub> | density<sub>m1</sub>(l, x, y, v) − density<sub>m2</sub>(l, x, y, v) |</p>
      <p>where density<sub>m</sub>(l, x, y, v) is the density of event
value v recorded in the sector at (x, y) of neighborhood level l
of morphognostic m.</p>
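<p>A minimal sketch of metamorph execution under this distance, assuming the morphognostic densities have been flattened into one array per metamorph (class and field names here are illustrative, not the MoxWorx API):</p>

```java
// Sketch: find the stored metamorph whose morphognostic is closest
// to the current one. Distance is the summed absolute difference of
// event-value densities over all levels, sectors, and value types,
// flattened here into one array per morphognostic.
import java.util.List;

public class MetamorphLookup {
    static class Metamorph {
        final double[] densities; // flattened morphognostic densities
        final int response;       // learned response
        Metamorph(double[] densities, int response) {
            this.densities = densities;
            this.response = response;
        }
    }

    // Summed absolute difference of two flattened morphognostics.
    static double distance(double[] a, double[] b) {
        double d = 0.0;
        for (int i = 0; i < a.length; i++) {
            d += Math.abs(a[i] - b[i]);
        }
        return d;
    }

    // Return the response of the nearest stored metamorph
    // (assumes at least one learned metamorph).
    static int execute(double[] current, List<Metamorph> learned) {
        Metamorph best = null;
        double bestDist = Double.POSITIVE_INFINITY;
        for (Metamorph m : learned) {
            double d = distance(current, m.densities);
            if (d < bestDist) {
                bestDist = d;
                best = m;
            }
        }
        return best.response;
    }
}
```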
    </sec>
    <sec id="sec-4">
      <title>Artificial neural network implementation</title>
      <p>In a complex environment, generating a large number of
metamorphs may be prohibitive in terms of storage and search
processing. Alternatively, metamorphs can be used to train an
artificial neural network (ANN), as shown in Figure 4, to
learn responses associated with morphognostic inputs.
During operation, a current morphognostic can be input to the
ANN to produce a learned response. The ANN also has these
advantages:</p>
      <sec id="sec-4-1">
        <title>Faster.</title>
      </sec>
      <sec id="sec-4-2">
        <title>More compact.</title>
      </sec>
      <sec id="sec-4-3">
        <title>More noise tolerant.</title>
        <p>2.1.5</p>
      </sec>
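<p>As a sketch of this alternative, a minimal single-layer classifier can be trained on (morphognostic, response) pairs with the delta rule; the multilayer network of Figure 4 would replace this toy model, and the layer sizes, learning rate, and epoch count here are illustrative choices, not the paper's:</p>

```java
// Sketch: a minimal single-layer classifier trained on
// (flattened morphognostic densities -> response) pairs with the
// delta rule, standing in for the ANN of Figure 4.
public class MetamorphAnn {
    final double[][] weights; // [response][input]
    final double rate;        // learning rate (illustrative)

    MetamorphAnn(int inputs, int responses, double rate) {
        this.weights = new double[responses][inputs];
        this.rate = rate;
    }

    // Linear score for each candidate response.
    double[] scores(double[] x) {
        double[] s = new double[weights.length];
        for (int r = 0; r < weights.length; r++)
            for (int i = 0; i < x.length; i++)
                s[r] += weights[r][i] * x[i];
        return s;
    }

    // Learned response: the highest-scoring output.
    int predict(double[] x) {
        double[] s = scores(x);
        int best = 0;
        for (int r = 1; r < s.length; r++)
            if (s[r] > s[best]) best = r;
        return best;
    }

    // Delta-rule update toward a one-hot target for the stored response.
    void train(double[] x, int response) {
        double[] s = scores(x);
        for (int r = 0; r < weights.length; r++) {
            double target = (r == response) ? 1.0 : 0.0;
            double err = target - s[r];
            for (int i = 0; i < x.length; i++)
                weights[r][i] += rate * err * x[i];
        }
    }
}
```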
    </sec>
    <sec id="sec-5">
      <title>Results</title>
      <p>The mox were trained in worlds featuring a number of
randomly placed obstacles of various types. Training was done
by “autopiloting” the mox along an optimal path to the food.
This generated a set of metamorphs suitable for testing. Table
1 shows the results of varying the neighborhood hierarchy
depth in a 10x10 world. Success indicates the mean amount
of food eaten, so 1 is a perfect score. It can be observed that
more obstacles tend to improve performance. This is because
they tend to form unique landmark configurations to guide
the mox. Larger neighborhoods also tend to improve
performance.</p>
      <p>Table 1 – Foraging in a 10x10 world.</p>
      <p>The next test examines how well the model performs when
the test world is not a duplicate of a training world, but is
similar to a set of training worlds. Thus for this, multiple
training runs are used. Before each training run, the cell types
of all the cells are probabilistically modified to a random
value. A successful test run must then rely on a composite of
multiple training runs. The results are shown in Table 2. Of
note is how performance only begins to falter under heavy
noise and few training runs.</p>
      <p>Table 2 – Foraging with noise: success scores for noise
levels of 0.1, 0.25, and 0.5 combined with 1, 5, and 10
training runs.</p>
    </sec>
    <sec id="sec-6">
      <title>Nest building</title>
      <p>This task illustrates how the morphognosis model can be used
to not only navigate but also manipulate the environment.</p>
      <p>Figure 5 – Nest building with gathered stones. Left:
scattered stones. Right: completed nest.</p>
      <p>For this task, the mox is capable of sensing the presence of a
stone immediately in front of it, and sensing the elevation
gradient both laterally and in the forward-backward direction. In
addition to the forward and turning movements used by the
foraging task, the mox is capable of picking up a stone in
front of it and dropping the stone onto an unoccupied cell in
front of it. An internal sense allows the mox to know whether
it is carrying a stone.</p>
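<p>The senses and responses above can be sketched as follows; the enum names and helper methods are illustrative, not the MoxWorx code:</p>

```java
// Sketch of the nest-building mox's senses and responses: it senses a
// stone ahead and elevation gradients, and can pick up and drop
// stones, with an internal carry sense.
public class NestMox {
    enum Sense { STONE_AHEAD, LATERAL_GRADIENT, FORWARD_GRADIENT, CARRYING_STONE }
    enum Response { FORWARD, TURN_LEFT, TURN_RIGHT, PICK_UP_STONE, DROP_STONE }

    private boolean carryingStone = false;

    // Pick up a stone from the cell ahead; the carry sense flips on.
    boolean pickUp(boolean stoneAhead) {
        if (stoneAhead && !carryingStone) {
            carryingStone = true;
            return true;
        }
        return false;
    }

    // Drop the carried stone onto an unoccupied cell ahead.
    boolean drop(boolean cellAheadOccupied) {
        if (carryingStone && !cellAheadOccupied) {
            carryingStone = false;
            return true;
        }
        return false;
    }

    boolean isCarrying() { return carryingStone; }
}
```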
      <p>Training was done by running the mox through 10
repetitions on “autopilot” to build a set of metamorphs. The
environment was then reset and the mox tested to discover
whether it is capable of building the nest. Over 50 trials were
performed with 100% success. Internally, the sensory
information from the stone, gradient and stone carry states were
sufficient to achieve success with a neighborhood hierarchy
of only one level.</p>
    </sec>
    <sec id="sec-8">
      <title>Pong game</title>
      <p>Much of the real world is nondeterministic, taking the form
of unpredictable or probabilistic events that must be acted
upon. If AIs are to engage such phenomena, then they must
be able to learn how to deal with nondeterminism. In this task
the game of Pong poses a nondeterministic environment. The
learner is given an incomplete view of the game state and
underlying deterministic physics, resulting in a
nondeterministic game. This task has been found to be challenging for
conventional machine learning algorithms [Portegys, 2015].</p>
    </sec>
    <sec id="sec-9">
      <title>2.3.1 Game details</title>
      <p>The goal of the game is to vertically move a paddle to prevent
a bouncing ball from striking the right wall, as shown in
Figure 6.</p>
      <p>• Ball and paddle move in a cellular grid.</p>
      <p>• Unseen deterministic physics moves the ball in the
grid.</p>
      <p>• Cell state: (ball state, paddle state).</p>
      <p>• Ball state: (empty, present, moving
left/right/up/down).</p>
      <p>• Paddle state: (true | false).</p>
      <p>• Learner orientation: (north, south, east, west).</p>
      <p>• Responses: (wait, forward, turn right/left).</p>
      <p>• If the paddle is present and the orientation is north or
south, then a forward response moves the paddle also.</p>
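<p>The state tuples above can be encoded directly; a minimal sketch (the type names are ours, not the MoxWorx code):</p>

```java
// Sketch of the Pong task's state encoding: each cell pairs a ball
// state with a paddle flag, and the learner has an orientation and a
// small response set.
public class PongState {
    enum Ball { EMPTY, PRESENT, MOVING_LEFT, MOVING_RIGHT, MOVING_UP, MOVING_DOWN }
    enum Orientation { NORTH, SOUTH, EAST, WEST }
    enum Response { WAIT, FORWARD, TURN_LEFT, TURN_RIGHT }

    // Cell state: (ball state, paddle state).
    static final class Cell {
        final Ball ball;
        final boolean paddle;
        Cell(Ball ball, boolean paddle) {
            this.ball = ball;
            this.paddle = paddle;
        }
    }

    // A forward response also moves the paddle when the learner is on
    // the paddle and oriented north or south.
    static boolean forwardMovesPaddle(Cell cell, Orientation o) {
        return cell.paddle && (o == Orientation.NORTH || o == Orientation.SOUTH);
    }
}
```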
    </sec>
    <sec id="sec-10">
      <title>2.3.2 Procedure and results</title>
      <p>The learner was trained with multiple randomly generated
initial ball velocities.</p>
      <p>• When the ball moved left or right, the learner moved
with the ball.</p>
      <p>• When the ball moved up or down, the learner moved to
the paddle and moved it up or down. This was the
challenge: remembering the ball state while traversing
empty cells to the paddle so as to move it correctly, then
turning and returning to the ball for the next input.</p>
      <p>Testing on random games was 100% successful.</p>
    </sec>
    <sec id="sec-11">
      <title>Conclusion</title>
      <p>This is an early exploration of the morphognosis model. The
novelty of the model is both the method for integrating
knowledge of events occurring in space and time dimensions
in linear complexity, and the method of expressing the
behavioral interplay of responses and sensory events. The goal of
this project is to model the environment as something that
could plausibly be in turn modeled by an artificial brain.</p>
      <p>The positive results on the three tasks prompt future
investigation. Moving up the ladder of animal intelligence,
possible next tasks include:</p>
      <p>• Web building. Can space-time memories of building
one or more training webs allow a web to be built in a
quasi-novel environment?</p>
      <p>• Food foraging social signaling. Bees retain memories of
foraged food sources that they communicate to other
bees through instinctive dancing. Can this task be cast
into the model?</p>
      <p>The Java code is available at
https://github.com/portegys/MoxWorx</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <source>[Bousso</source>
          ,
          <year>2002</year>
          ]
          <string-name>
            <given-names>R.</given-names>
            <surname>Bousso</surname>
          </string-name>
          .
          <article-title>The holographic principle</article-title>
          .
          <source>Reviews of Modern Physics</source>
          .
          <volume>74</volume>
          (
          <issue>3</issue>
          ):
          <fpage>825</fpage>
          -
          <lpage>874</lpage>
          . arXiv:hepth/0203101.
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <source>[Brooks</source>
          , 1999]
          <string-name>
            <given-names>Rodney</given-names>
            <surname>Brooks</surname>
          </string-name>
          .
          <source>Cambrian Intelligence: The Early History of the New AI</source>
          . Cambridge MA: The MIT Press.
          <source>ISBN 0-262-52263-2</source>
          .
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <source>[Halliwell</source>
          , 1994]
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Halliwell</surname>
          </string-name>
          .
          <source>Physical Origins of Time Asymmetry</source>
          . Cambridge University Press.
          <source>ISBN 0-521- 56837-4</source>
          .
          <year>1994</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <source>[Hawkins</source>
          , 2004]
          <string-name>
            <given-names>Jeff</given-names>
            <surname>Hawkins</surname>
          </string-name>
          .
          <source>On Intelligence</source>
          (1 ed.).
          <source>Times Books</source>
          . p.
          <fpage>272</fpage>
          .
          <source>ISBN 0805074562</source>
          .
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <source>[Hoffman</source>
          , 2009]
          <string-name>
            <given-names>D. D.</given-names>
            <surname>Hoffman</surname>
          </string-name>
          .
          <article-title>The interface theory of perception: Natural selection drives true perception to swift extinction</article-title>
          .
          <source>In: Object Categorization: Computer and Human Vision</source>
          Perspectives. Ed.:
          <string-name>
            <given-names>S.J.</given-names>
            <surname>Dickinson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Leonardis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Schiele</surname>
          </string-name>
          &amp;
          <string-name>
            <surname>M.J. Tarr</surname>
          </string-name>
          . Cambridge, Cambridge University Press:
          <fpage>148</fpage>
          -
          <lpage>165</lpage>
          .
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <source>[Hoffmann and Pfeifer</source>
          , 2011]
          <string-name>
            <given-names>M.</given-names>
            <surname>Hoffmann</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Pfeifer</surname>
          </string-name>
          .
          <article-title>The implications of embodiment for behavior and cognition: animal and robotic case studies</article-title>
          , in W. Tschacher &amp; C. Bergomi, ed., '
          <article-title>The Implications of Embodiment: Cognition and Communication'</article-title>
          , Exeter: Imprint Academic, pp.
          <fpage>31</fpage>
          -
          <lpage>58</lpage>
          .
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <source>[Human Brain Project</source>
          ,
          <year>2015</year>
          ]
          <article-title>Human Brain Project, Framework Partnership Agreement</article-title>
          . https://www.humanbrainproject.eu/documents/10180/538356/FPA++
          <source>Annex+1+Part+B/41c4da2e-0e69-4295-8e98- 3484677d661f</source>
          .
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <source>[Jonas and Kording</source>
          , 2016]
          <string-name>
            <given-names>E.</given-names>
            <surname>Jonas</surname>
          </string-name>
          and
          <string-name>
            <given-names>K.</given-names>
            <surname>Kording</surname>
          </string-name>
          .
          <article-title>Could a neuroscientist understand a microprocessor?</article-title>
          . bioRxiv 055624; doi: https://doi.org/10.1101/055624.
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <source>[Kaiser</source>
          , 2014]
          <string-name>
            <given-names>U. B.</given-names>
            <surname>Kaiser</surname>
          </string-name>
          . Editorial:
          <article-title>Advances in Neuroscience: The BRAIN Initiative and Implications for Neuroendocrinology</article-title>
          .
          <source>Molecular Endocrinology</source>
          .
          <volume>28</volume>
          (
          <issue>10</issue>
          ),
          <fpage>1589</fpage>
          -
          <lpage>1591</lpage>
          . http://doi.org/10.1210/me.2014-
          <fpage>1288</fpage>
          .
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name>
            <surname>[Krakauer</surname>
          </string-name>
          , et al.,
          <year>2017</year>
          ]
          <string-name>
            <given-names>J. W.</given-names>
            <surname>Krakauer</surname>
          </string-name>
          , et al.
          <source>Neuroscience Needs Behavior: Correcting a Reductionist Bias. Neuron</source>
          . Volume
          <volume>93</volume>
          , Issue 3, pp.
          <fpage>480</fpage>
          -
          <lpage>490</lpage>
          .
          <year>2017</year>
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <source>[Moskowitz</source>
          , 2016]
          <string-name>
            <given-names>C.</given-names>
            <surname>Moskowitz</surname>
          </string-name>
          .
          <source>Are We Living in a Computer Simulation? Scientific American</source>
          .
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <source>[Mühlenbein</source>
          , 2009]
          <string-name>
            <given-names>H.</given-names>
            <surname>Mühlenbein</surname>
          </string-name>
          . Computational Intelligence:
          <article-title>The Legacy of Alan Turing and John von Neumann</article-title>
          , in Computational Intelligence Collaboration, Fusion and Emergence. Editors: Mumford,
          <string-name>
            <surname>C. L.</surname>
          </string-name>
          (Ed.) Volume
          <volume>1</volume>
          of the series Intelligent Systems Reference Library pp
          <fpage>23</fpage>
          -
          <lpage>43</lpage>
          .
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <source>[Turing</source>
          ,
          <year>1952</year>
          ]
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Turing</surname>
          </string-name>
          .
          <article-title>The chemical basis of morphogenesis</article-title>
          .
          <source>Phil. Trans. Roy. Soc. London B237</source>
          ,
          <fpage>37</fpage>
          -
          <lpage>72</lpage>
          .
          <year>1952</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          <source>[Vogelstein</source>
          ,
          <year>2014</year>
          ]
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Vogelstein</surname>
          </string-name>
          .
          <article-title>Machine Intelligence from Cortical Networks (MICrONS) Workshop</article-title>
          . Intelligence Advanced Research Projects Activity (IARPA). https://www.iarpa.gov/index.php/research-programs/microns.
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          <source>[Vorhees and Williams</source>
          , 2014]
          <string-name>
            <given-names>C. V.</given-names>
            <surname>Vorhees</surname>
          </string-name>
          and
          <string-name>
            <given-names>M. T.</given-names>
            <surname>Williams</surname>
          </string-name>
          .
          <article-title>Assessing Spatial Learning and Memory in Rodents</article-title>
          .
          <source>ILAR Journal</source>
          .
          <volume>55</volume>
          (
          <issue>2</issue>
          ),
          <fpage>310</fpage>
          -
          <lpage>332</lpage>
          . http://doi.org/10.1093/ilar/ilu013.
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          <source>[Wolfram</source>
          , 2002]
          <string-name>
            <given-names>S.</given-names>
            <surname>Wolfram</surname>
          </string-name>
          .
          <source>A New Kind of Science</source>
          . Wolfram Media. ISBN-10: 1579550088.
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          <source>[Wolfram</source>
          , 2015]
          <string-name>
            <given-names>S.</given-names>
            <surname>Wolfram</surname>
          </string-name>
          .
          <article-title>What Is Spacetime, Really?</article-title>
          Stephen Wolfram Blog. http://blog.stephenwolfram.com/2015/12/what-is-spacetime-really/.
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>