CABot3: A Simulated Neural Games Agent

Christian Huyck, Roman Belavkin, Fawad Jamshed, Kailash Nadh, Peter Passmore, Middlesex University, c.huyck@mdx.ac.uk
Emma Byrne, University College London
Dan Diaper, DDD Systems

Abstract

CABot3, the third Cell Assembly roBot, is an agent implemented entirely in simulated neurons. It is situated in a virtual 3D environment and responds to commands from a user in that environment. It parses the user's natural language commands to set goals, uses those goals to drive its planning system, views the environment, moves through it, and learns a spatial cognitive map of it. Some systems (e.g. parsing) perform perfectly, but others (e.g. planning) are not always successful. So, CABot3 acts as a proof of concept, showing that a simulated neural agent can function in a 3D environment.

1 Introduction

CABot3, the third Cell Assembly roBot, is a video game agent implemented entirely in simulated neurons. It assists a user in the game: viewing the 3D environment; processing natural language commands; making simple plans; and moving through, modifying, and learning about the environment. As its name suggests, CABot3 makes extensive use of Cell Assemblies (CAs), reverberating circuits of neurons that are the basis of short and long-term memories [Hebb, 1949]. CABot3 represents symbolic knowledge in a neural network by CAs. Simple rules are implemented by simple state transitions, with a particular set of active CAs leading to the activation of a new set of CAs, and complex rules are implemented by variable binding combined with state transitions.

CABot3 is a virtual robot that creates and uses plans with a neural implementation of a Maes net [Maes, 1989], while natural language parsing is based around a standard linguistic theory [Jackendoff, 2002]. All agent calculations are done with Fatiguing Leaky Integrate and Fire (FLIF) neurons (see Section 2.1), and some of the network structure can be related to brain areas (see Section 4.2). The agent learns a spatial cognitive map of the rooms in the video game.

Two components of the CABots have been evaluated as cognitive models. The Natural Language Parser [Huyck, 2009] parses in human-like times, creates compositional semantic structures, and uses semantics to resolve prepositional phrase attachment ambiguities. It also learned the meaning of the verb centre from environmental feedback, closely related to a probability matching task [Belavkin and Huyck, 2010].

2 The Structure of CABot3

Due to space constraints, a complete description of CABot3 is not possible, though an almost complete description of an earlier version, CABot1, is available [Huyck and Byrne, 2009], and the code is available on http://www.cwa.mdx.ac.uk/cabot/cabot3/CABot3.html. The neural model is described next, followed by a description of the subnetworks used, and a brief description of how those subnetworks are connected to generate CABot3's functionality.

2.1 FLIF Neurons

FLIF neurons are a modification of the relatively commonly used LIF model [Amit, 1989]. When a neuron has sufficient activation, it fires, and sends activation to the neurons to which it is connected, proportional to the weight w_ji of the synapse from the firing pre-synaptic neuron j to the post-synaptic neuron i. That weight can be negative. The simulations use discrete cycles, so the activation that is sent from a neuron that fires in a cycle is not collected by the post-synaptic neuron until the next cycle. If a neuron fires, it loses all its activation, but if it does not fire, it retains some, while some activation leaks away (decay); this is the leaky component and is modelled by a factor D > 1, where the activation is divided by D to get the initial activation at the next step. In CABot3, the activation A_{it} of neuron i at time t is defined by Equation 1, where V_i is the set of all neurons connected to i that fired at t - 1.

    A_{it} = \frac{A_{i(t-1)}}{D} + \sum_{j \in V_i} w_{ji}    (1)

Additionally, FLIF neurons fatigue. Each cycle a neuron fires, its fatigue level is increased by a constant; when it does not fire, the fatigue level is reduced by another constant, though never below 0. The neuron fires at time t if its activation A minus its fatigue F is at least the threshold, as in Equation 2.

    A_{it} - F_{it} \geq \theta    (2)

FLIF neurons are a relatively faithful model of neurons, though relatively simple compared to compartmental models [Hodgkin and Huxley, 1952]. If each cycle is considered to take ten ms, it has been shown that 90% of the spikes emitted fall within one cycle of the spikes of real neurons on the same input [Huyck, 2011]. Aside from their biological fidelity, another benefit is that 100,000 FLIF neurons with a 10 ms cycle can be simulated in real time on a standard PC.

Neurons are grouped into CAs, either manually by the developer, or through emergent connectivity. A given neuron may be part of one or more CAs.
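To make the dynamics of Equations 1 and 2 concrete, the following is a minimal sketch in Python; it is not from the CABot3 code base, and the decay factor, fatigue constants and threshold are illustrative assumptions rather than CABot3's actual parameter values.

    # Illustrative FLIF neuron update (Equations 1 and 2); the
    # parameter values are assumptions, not those used in CABot3.
    class FLIFNeuron:
        def __init__(self, decay=1.5, f_up=0.5, f_down=0.2, theta=4.0):
            self.D = decay          # leak: activation divided by D each cycle
            self.f_up = f_up        # fatigue increase per firing cycle
            self.f_down = f_down    # fatigue recovery per silent cycle
            self.theta = theta      # firing threshold
            self.A = 0.0            # activation
            self.F = 0.0            # fatigue

        def step(self, incoming):
            """incoming: sum of w_ji over pre-synaptic neurons that fired
            on the previous cycle. Returns True if the neuron fires."""
            self.A = self.A / self.D + incoming        # Equation 1
            fired = (self.A - self.F) >= self.theta    # Equation 2
            if fired:
                self.A = 0.0                 # firing loses all activation
                self.F += self.f_up          # fatigue accumulates
            else:
                self.F = max(0.0, self.F - self.f_down)
            return fired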
2.2 SubNetworks

The FLIF neurons in CABot3 are grouped into 36 subnetworks. Each subnet is an array of neurons, and each may have different FLIF parameters and learning parameters, including no learning. In CABot3, connectivity within a subnet is always sparse, but it varies between subnets; this connectivity may have some degree of randomness, but in some cases it is tightly specified by the developer to guarantee particular behaviour. Subnets may also be connected to each other, with neurons from one sending synapses to others; these types of connections vary similarly. These reflect differences, possibly caused in part by genetics, between different types of biological neuron.

Apart from biological fidelity, another advantage of subnets is that they facilitate software engineering. Tasks can be partitioned, with one developer working on one net, or a set of nets for a particular subsystem. Communication with other subsystems may take place via only one subnet, allowing a degree of modularity (though this modularity may conflict with actual brain topology).

2.3 Gross Topology

CABot3 can be divided into a series of subsystems, each consisting of subnets (Figure 1). Arrows show directed connections from one subsystem to another, each, aside from the game, representing a large number of synapses. Verb learning is not tested in CABot3, thus its connection is represented with a dotted line and is omitted in later diagrams. Also, for clarity in later diagrams, due to the prevalence of connections from control, connections from the control subsystem to other subsystems are omitted.

[Figure 1: Gross Topology of CABot3. Boxes represent subsystems of subnets; the oval represents the environment. The diagram connects the Game to the NLP and Vision subsystems, with the Control, Plan, Verb Learning and Cognitive Map subsystems linked below.]

The basic subsystems are described below. Section 3.1 describes the game and the control subsystem; the game receives simple commands from the agent. Section 3.2 describes the vision subsystem; 3.3 the planning subsystem; 3.4 the natural language processing (NLP) subsystem; 3.5 verb learning; and Section 3.6 describes the spatial cognitive map learning subsystem. Connections between the subsystems are also described in these sections. Section 4 summarizes the evaluation of CABot3.

3 Subsystems

Each subsystem is explained below, concentrating on those that have not been explained elsewhere.

3.1 Communication, Control and the Game

The game was developed using the Crystal Space [Crystal Space, 2008] games engine. It is a black and white 3D environment with an agent, a user, four rooms connected by four corridors, and a unique object in each room (see Figure 4); the objects were vertically or horizontally striped pyramids or stalactites (downward facing pyramids). The agent and user can move around the rooms independently. The game provides the input to the vision system using a dynamically updated picture of the game from the agent's perspective. The user issues text commands as input to the NLP system. The game also has a bump sensor, which ignites a CA in the fact subnet in the planning system (see Section 3.3) when the agent bumps into a wall. Similarly, the game takes commands from the agent's planning system to turn left or right, or move forward or backward.

The control subsystem consists of one subnet, the control subnet, which in turn consists of five orthogonal CAs (a neuron in an orthogonal CA belongs to that and only that CA). These CAs mark the state of the agent: parsing, clearing a parse, setting a goal, clearing the goal, or a stub. The initial state is turned on at agent start up, and one state is always on.

In the first state, the system is waiting for input or parsing a sentence. This state has connections to most of the NLP subnets to facilitate the spread of activation. When the last grammar rule ignites, it forces the control state to move on.

Most of the CAs involved in parsing and planning, and all of the control CAs, are orthogonal oscillators. When active, they oscillate from having one half of their neurons firing to having the other half firing, then back to the first set. This allows the CA to avoid fatigue, as its neurons only fire half the time. This is not biologically accurate, but enables precise behaviour with relatively few neurons.

When it has finished parsing, control moves to the clear parse state. This changes the instance counters in the NLP subsystem, preparing it for the next sentence. After a few steps, activation accumulates in the set goal state, causing it to ignite and suppress the clear parse state.

In the third state, the goal in the planning system is set from the semantics of the parse via the intermediate goal set subnet. In the fourth state, information is cleared from the NLP system after the goal is met, and the fifth state is a stub.

The control allows the system to parse while still processing a goal. Vision remains active at all times.
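A toy calculation may clarify why oscillation keeps a CA ignited: a CA whose neurons fire every cycle accumulates fatigue until Equation 2 fails, while alternating halves let fatigue recover on the off cycles. This Python sketch is illustrative only; the constants are assumptions, not CABot3's values.

    # Toy contrast between a constantly firing CA and an orthogonal
    # oscillator; all constants are illustrative assumptions.
    F_UP, F_DOWN, THETA, A = 0.5, 0.5, 4.0, 5.0

    def first_failure_cycle(period, max_cycles=100):
        """period=1: neurons fire every cycle; period=2: each half
        fires every other cycle. Returns the first cycle on which the
        CA cannot fire, or None if it stays active."""
        fatigue = 0.0
        for t in range(max_cycles):
            firing_turn = (t % period == 0)
            if firing_turn and A - fatigue < THETA:
                return t
            fatigue = fatigue + F_UP if firing_turn else max(0.0, fatigue - F_DOWN)
        return None

    print(first_failure_cycle(1))   # 3: constant firing fatigues quickly
    print(first_failure_cycle(2))   # None: oscillating halves persist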
3.2 Vision

The visual system of CABot3 consists of six subnets: visual input, retina, V1, gratings, V1Lines, and object recognition. The retina, V1, gratings, and V1Lines subnets share some similarities with their human counterparts, but are much simplified models. Higher-level object recognition in CABot3 is not biologically plausible and does not mimic known mechanisms in the human visual system. It does, however, carry out two important functions of the visual system: the simultaneous identification of what is seen and where it is in the visual field.

The visual input, retina, V1 and object recognition nets have been described elsewhere and are only slightly modified [Huyck et al., 2006]. The most important modification is the addition of grating cells that mimic known properties of the primate visual system, in that they respond selectively to textures of a certain orientation and frequency [DeValois et al., 1979].

The visual input subnet is a 50x50 network of FLIF neurons that do not fatigue. Input to this subnet is clamped to the external stimulus, so activation is constant until the agent's point of view changes. Each neuron in the 50x50 subnet corresponds to an identically located "cell" in a 50x50 grid of light levels from the environment.

The CABot1 retina subnet contains six 50x50 grids of FLIF neurons. Each grid contains retinotopic receptive fields of a single size and polarity: 3x3 receptive fields with a single-cell centre; 6x6 receptive fields with a 2x2 cell centre; and 9x9 receptive fields with a 3x3 cell centre. For each of these sizes there is a subnet with an on-centre/off-surround polarity (neurons fire when the centre of the receptive field is stimulated and the surround is not) and an off-centre/on-surround polarity.

In the V1 area of the human visual system there are neurons, known as simple cells, that are tuned to specific edge and angle orientations. These simple cells are location specific. In the CABot3 V1 and V1Lines subnets, FLIF neurons have been connected to replicate this behaviour. V1 and V1Lines were split for engineering convenience. Weighted connections feed activation from on-centre and off-centre cells in the retina subnet. There are eight orientation-specific edge detectors and four angle detectors.

The edge detectors in V1Lines also have recurrent connections to grating detector subnets. Grating detector cells identify repeated patterns of edges of a given orientation and frequency. These grating detectors allow CABot3 to recognise textures in the environment, and thus to distinguish between objects of the same shape that are 'painted' with different textures.

The object recognition net is the least biologically plausible of the visual subnets. There are five modules in the subnet, made up of a number of overlapping cell assemblies. These specialise to recognise pyramids, stalactites, door jambs, doors, or unknown objects. The same modules also carry the "where" (position), as each subnet is a retinotopic representation of the visual field.
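The on-centre/off-surround computation can be sketched directly. The following Python fragment, an assumption-laden illustration rather than CABot3's retina code, shows the smallest (3x3, single-cell centre) case over a 50x50 luminance grid; the weighting and threshold are invented for the example.

    import numpy as np

    # Illustrative on-centre/off-surround response for 3x3 receptive
    # fields; weights and threshold are assumptions, not CABot3's.
    def on_centre_response(image, threshold=4.0):
        """image: 50x50 array of light levels. A unit fires when its
        centre cell is bright and its 8-cell surround is dark."""
        h, w = image.shape
        fires = np.zeros((h, w), dtype=bool)
        for r in range(1, h - 1):
            for c in range(1, w - 1):
                centre = image[r, c]
                surround = image[r-1:r+2, c-1:c+2].sum() - centre
                # excitatory centre, inhibitory surround
                if 8 * centre - surround >= threshold:
                    fires[r, c] = True
        return fires

    img = np.zeros((50, 50)); img[25, 25] = 1.0   # a single bright spot
    print(on_centre_response(img)[25, 25])         # True: centre on, surround off

A uniformly bright patch leaves such a unit silent, since the surround inhibition cancels the centre excitation; only contrast drives it, which is what makes the edge and grating detectors downstream possible.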
3.3 Planning

The planning system is basically a Maes net [Maes, 1989]. The gross topology is shown in Figure 2. All subsystems link to the planning subsystem. Its primary entry point is from the NLP subsystem, which sets the goal. The primary outcome is to the game: the CAs in the action subnet are polled and a symbolic command is emitted to the game.

[Figure 2: Gross Topology of the Planning Subsystem. Boxes represent subnets: Goal, Module and Action, connected to the NLP, Fact, CogMap and Vision subnets and to Crystal Space.]

This subnet structure was used throughout CABot1, 2 and 3, and a simple example is the command Move forward. When parsing is completed, the control subnet, in combination with the NLP subnets, causes an orthogonal oscillating CA in the goal net to ignite. This is equivalent to a goal being set in the Maes net. With a simple action, this goal CA causes the corresponding module subnet CA to ignite, which in turn causes the corresponding CA in the action subnet to ignite. The action CA is then polled to emit the command to the game. Backward inhibition extinguishes the goal and module CAs, and accumulated fatigue causes the action CA to stop.

Simple movements do not require any facts, but actions are often predicated on facts that are set by the environment. For example, an environmentally sensitive command is Turn toward the pyramid. In this case, the vision system ignites a fact CA expressing the target's location in the visual field, for instance "target on left". The combination of activity from the fact net and the goal net causes the appropriate module CA to ignite, which in turn causes the appropriate action CA to ignite. This is an example of two (or more) CAs being needed to ignite a third: when only one source CA is ignited, the activation of the neurons in the third CA rises but stays below threshold; the second CA then provides enough activation to ignite the third.

Note that the full Maes net has a concept of Maes module activation. In CABot3, the module CAs are either on or off, and there is no activation level (but see Sections 3.4 and 5).

The system executes 21 commands: four primitives (e.g. Turn right); two compounds (e.g. Move left, which executes a turn left then a move forward); turn toward pyramid or stalactite; go to one of seven objects; explore; stop; and move before one of four objects. The seven objects are the door, and pyramids or stalactites that are either (vertically) barred, (horizontally) striped, or unspecified.

Moving to an object may require several steps. CABot3 centres the object in the visual field and then moves toward it until the object fills the visual field, possibly centring again along the way. Any command can be stopped by the Stop command.

The most sophisticated thing the system does, in response to the Explore command, is to explore the four rooms and memorize the objects in them (see Section 3.6). To test that the system has correctly memorized the map, a command such as Move before the striped pyramid may be used. The system then moves to the room before the striped pyramid and stops without having seen it again, showing it has memorized its location (see Section 4.1).

In all, the goal subnet contains 26 CAs, including subgoals. The fact subnet has 66 CAs, the module subnet seven, and the action subnet six, including two error conditions.
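The two-CAs-to-ignite-a-third mechanism reduces to sub-threshold summation, which the following Python sketch illustrates; the weights and threshold are assumptions chosen so that either source alone is insufficient.

    # Illustrative sketch of two CAs being needed to ignite a third:
    # either source alone leaves the target below threshold; together
    # they exceed it. Weights and threshold are assumptions.
    THETA = 4.0
    W_GOAL, W_FACT = 2.5, 2.5   # each source alone is sub-threshold

    def module_ignites(goal_active, fact_active):
        activation = W_GOAL * goal_active + W_FACT * fact_active
        return activation >= THETA

    print(module_ignites(True, False))   # False: goal alone is not enough
    print(module_ignites(True, True))    # True: goal plus fact ignites module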
3.4 Natural Language Processing

The stackless parser has been described elsewhere [Huyck, 2009]. Input is provided symbolically from Crystal Space; each word is associated with an orthogonal set of neurons in the input net, and these are clamped on while the particular word is being processed.

The subnets involved follow Jackendoff's Tripartite theory, with NLP broken into three main systems, lexicon, syntax and semantics, and these systems communicate via subsystems.

Stackless parsing is done by activation levels, with the number of neurons in a CA firing in a cycle reflecting the CA's activity. In practice, this is done by a tightly specified topology in which the number of neurons firing in a CA decays over time; activation levels thus reflect the order of items.

Semantics are handled by an overlapping encoding derived from WordNet. This could be useful in resolving parsing ambiguities, though this is not implemented in CABot3.

Grammar rule CAs are selected by activation of their component (lexical or higher order category) CAs. Variable binding is done with short-term potentiation [Hempel et al., 2000], and this is how instances store their semantics. Noun instances represent noun phrases, and verb instances represent verb phrases, including their arguments. A case frame is generated for each parse, and the slots are bound to other instances or to the semantics of words. These bindings are learned but decay over time. By the next time they are used, two parses later, the instance frames have been erased by automatic weight decay.
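How decaying firing counts can encode order may be easier to see numerically. This toy Python sketch is an invented illustration of the idea, not the parser's topology; the CA size and decay rate are assumptions.

    # Toy sketch of order-by-activation: each item's CA starts with all
    # its neurons firing, and the firing count decays each cycle, so
    # comparing counts recovers arrival order. DECAY is an assumption.
    DECAY = 0.8

    def firing_counts(arrival_cycles, now, ca_size=100):
        """Approximate neurons firing per item CA at cycle `now`."""
        return {item: ca_size * DECAY ** (now - t)
                for item, t in arrival_cycles.items()}

    counts = firing_counts({"the": 0, "striped": 1, "pyramid": 2}, now=3)
    # The most recent item has the most neurons firing, the oldest the fewest.
    print(sorted(counts, key=counts.get, reverse=True))
    # ['pyramid', 'striped', 'the']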
3.5 Motivation and Reinforcement Learning

Hebbian learning strengthens the connections between CAs as well as within a CA. CAs are associated with some atomic propositions, and more complex propositions (such as implication rules) are represented by groups (e.g. pairs) of associated CAs. However, Hebbian rules do not differentiate between learning 'good' and 'bad' propositions. After several atomic propositions or symbols have been learnt in the form of corresponding CAs, the main problem is to learn the correct or favourable propositions from these.

This problem was solved by a motivational system that is used to control Hebbian learning so that propositions with higher utility values or rewards are reinforced [Belavkin and Huyck, 2008]. The mechanism uses two specialised subnets: utility and explore. Neurons in the utility network output signals corresponding to a reward or payoff obtained from the environment. Neurons in the explore network output signals that represent random noise, and they can be connected to any set of CAs that needs to be randomised to allow stochastic exploration of their interrelations. The utility network has inhibitory connections to the explore network, so that high values of utility correspond to a low level of randomness at the output of the explore network.

It has been demonstrated previously that the mechanism described above can be used to learn simple sets of rules in a CA-based architecture [Belavkin and Huyck, 2008], and that it can be used to model probability matching observed in animals and people [Belavkin and Huyck, 2010]. The mechanism was used by CABot2 to learn the verb centre and the corresponding action associated with a visual stimulus. It is unplugged in the currently available version of CABot3.
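The utility-inhibits-explore relationship amounts to noise whose amplitude falls as reward rises. The following Python sketch is an illustrative abstraction of that coupling, not the subnets themselves; the scaling is an assumption.

    import random

    # Illustrative sketch of the utility/explore interaction: the
    # explore net injects noise whose magnitude falls as utility
    # rises, because utility inhibits explore. Scaling is assumed.
    MAX_NOISE = 1.0

    def explore_noise(utility):
        """Noise amplitude shrinks as reward (utility in [0, 1]) grows."""
        amplitude = MAX_NOISE * (1.0 - utility)   # inhibition from utility
        return random.uniform(-amplitude, amplitude)

    random.seed(0)
    print(abs(explore_noise(utility=0.1)))  # low reward: large random drive
    print(abs(explore_noise(utility=0.9)))  # high reward: little exploration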
3.6 Spatial Cognitive Map Learning

Spatial cognitive mapping is the psychological process of recording, recollecting and acting on locations and objects in a physical environment [Downs and Stea, 1973]. CABot3 implements a simple version of this complex process based on the authors' previous work [Huyck and Nadh, 2009]; the CABot3 agent explores the rooms, learns the objects and the associations between them, and navigates to specific rooms.

Figure 3 shows the subnets involved. Room1 and room2 encode adjacent rooms that the agent moves through, where room1 is the prior room and room2 is the current room. The sequence net encodes the associations between the rooms and the objects in them. The counter net supports the order.

[Figure 3: Subnets involved in spatial cognitive mapping: room1 and room2 linked through the sequence net, with the counter and plan nets connected to both.]

On receiving the Explore command, the agent goes around the environment, room by room, learning the objects it sees. When an object is in its visual field, for instance a striped pyramid, the current room in association with that object is encoded as a CA in room1. The object in view is recognised from activity in the fact net, and learning lasts 200 cycles, as this has been observed to be the minimum number of cycles required for CAs to be learnt. When the agent moves to the next room, the same routine happens, but as it has come from an adjacent room, the current room is also encoded in room2. The previous room CA in room1 is still active, the current room CA in room2 ignites, and the association between the two rooms is learnt as a CA in the sequence net. Learning in the sequence subnet happens via co-activation with the two active room CAs in the two room nets, again lasting 200 cycles. This in essence creates individual CAs representing the rooms and their constituent objects in the two room nets, and the associations between the rooms the agent passes through in sequence. Counter keeps track of the room the agent is currently in. When the agent is done exploring, room1 and room2 each have a CA associated with the item in the fact net, and the sequence net has five CAs representing the association between each room and its adjacent room.

After exploration, when the agent is issued with a command such as Move before the striped pyramid, the relevant fact, "striped pyramid", ignites in the fact net (Figure 2). Fact in turn ignites the learnt CA in room2 representing the room with the striped pyramid. As the sequence net has encoded the association between rooms, the active CA in room2 activates the associated room in room1, which is the room before the room in room2 that the agent entered while exploring. Thus the agent deduces the target room from its simple learnt cognitive map. With the target room active, the agent starts moving, and when it reaches the target room, activity in the goal subnet informs it of task completion.
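The content of the learnt map can be mirrored symbolically: room CAs associated with objects, and sequence CAs associating each room with the room entered before it. This Python sketch is a symbolic stand-in for the neural implementation, with invented room and object names, used only to show the recall logic for a "move before" command.

    # Toy symbolic mirror (not neurons) of the cognitive map's contents.
    object_in_room = {}   # learnt room CAs: room -> object
    prior_room = {}       # learnt sequence CAs: room -> previous room

    def explore(route, objects):
        for i, room in enumerate(route):
            object_in_room[room] = objects[room]     # co-activation with fact
            if i > 0:
                prior_room[room] = route[i - 1]      # room1/room2 association

    def room_before(target_object):
        """'Move before the striped pyramid': recall the prior room."""
        for room, obj in object_in_room.items():
            if obj == target_object:
                return prior_room.get(room)

    explore(["room1", "room2", "room3", "room4"],
            {"room1": "striped pyramid", "room2": "barred pyramid",
             "room3": "striped stalactite", "room4": "barred stalactite"})
    print(room_before("barred pyramid"))   # room1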
4 Evaluation

The evaluation of a CABot3 agent is a complex process. Many of the components have been evaluated separately. For the purposes of testing CABot3 itself, parsing, for example, consists of a few dozen grammar rules that it uses to parse all of the acceptable commands correctly, so as to set an appropriate goal. In parsing, all of the connections are deterministic, and the parsing subnets are insulated by layers of connections from the more stochastic areas.

The evaluation of the planning and cognitive mapping systems is briefly described in Section 4.1. The control system is a simple finite state automaton that switches states when other systems reach certain states; for example, when the parser finishes, the control state changes. This system largely switches states when appropriate; occasional errors do occur, but these are largely self correcting. However, it occasionally gets into states from which it cannot recover.

The vision system works robustly for a limited range of textures. There are two orientations and a limited range of spatial frequencies that the grating cells can accommodate, due to the size and resolution of the retinal nets. Within these limitations, however, the system identifies textures reliably. Where objects are presented clearly on the retina (that is, where the viewing angles are not extreme), the visual system robustly identifies the objects in the 3D world.

4.1 Explore Evaluation

The planning system is responsible for a relatively wide range of activities. Most of these it performs entirely correctly; for example, the command Turn left always works correctly. The most sophisticated physical task the agent performs is to explore all of the rooms, making use of vision and spatial cognitive mapping (see Section 3.6). This exploration is relatively simple, though it can take several hundred moves. An example is shown in Figure 4.

[Figure 4: Forward moves of CABot3 while exploring the rooms, starting at S, with moves marked by dots; both axes run from 0 to 60.]

CABot3 initially tries to identify the room it is in by the unique object it sees. In the case of Figure 4, it sees the striped pyramid, and this is put into its spatial cognitive map. It then finds the corridor, which it can see at a distance. It moves to the front of the corridor keeping to the left edge, stopping when it bumps into the edge of the corridor. It then turns right and moves through the corridor along the edge. At the end of the corridor it turns right to see the object in the next room. It can see there is an object, but the agent is not close enough to identify it. It moves toward the object, in this case the barred pyramid, until it can identify it. It then puts that in the cognitive map, and repeats the process for the next two rooms, stopping when it identifies the object in the initial room.

Explore works about half the time. It appears cognitive mapping works each time, and all of the failures are due to navigation problems.

4.2 Subnet Evaluation

The subnet topology is important both for software engineering and for relating to brain areas. From the software engineering perspective, the method has been successful. Breaking the full network into subnets has enabled the development of systems to be partitioned, with one developer working on one task (e.g. vision) in isolation. The systems have then been combined to work together in the full CABot3 agent.

The brain did not evolve this way, so it is also important to see how different subnets might map to brain areas. There is a strong correlation between CABot3's early vision areas and biological vision areas, with both accounting for similar behaviour. There is a looser correlation between the explore subnet in reinforcement learning and the basal ganglia. However, in most cases the subnets have little correlation with brain areas. None the less, the basic subnet topology could be used to closely mimic known brain area topology and behaviour. As subnets still have connections to and from other subnets, CABot3 is one large network.

5 Conclusion

Many researchers thought that implementing AI systems with simulated neurons was too complex (e.g. [Smolensky, 1988]). Perhaps this was true a few decades ago, but the authors believe that CABot3 shows that this fear has passed.

The mere implementation of a relatively simple agent may miss the point that many connectionists hope to make: that the neural level is not the correct level to study the brain. While the authors would agree that many complex behaviours, such as attractor dynamics and supervised learning, are being effectively studied with non-neural connectionist systems, this does not mean that the same problems cannot be effectively studied in neural systems.
Moreover, simulated neural systems have an important advantage over connectionist systems when it comes to studying AI: existing intelligent agents (humans and other animals) use neurons to think, and the neural and cognitive behaviour of these animals is being studied. Simulated neural systems, which match sensible intermediate behaviour, can be developed as milestones on the way to full fledged AI systems.

During the project, it was shown that in general a network of CAs, and in particular a network of FLIF neuron CAs, was Turing complete [Byrne and Huyck, 2010]. In some sense, this makes the implementation of CABot3 unsurprising. While CABot3 is obviously not a neuron by neuron simulation of a human brain, it does have a series of links to neurobiological and cognitive behaviour that increase its validity. The base neural model is a relatively accurate, if simplified, model of neurons. In CABot3, some subnets are reasonable approximations of brain areas. The use of CAs for long and short-term memories and as the basis of symbols is neuropsychologically supported, and provides a bridge between subsymbolic and symbolic processing. Cognitive models provide solid links from a neural system to psychological behaviour.

While it is possible to continue to program new and improved neural systems, the authors believe the key is to have the system learn its behaviour. Thus, a vast range of future work is possible, such as: improving existing systems; adding new sensory modalities, for example sound detection and speech recognition; moving from virtual to physical robots; improving the fit with biological data, for example with more neurons, more realistic topologies, and more accurate neural models; new and more sophisticated cognitive models; and improving computation, for example by use of specialised neural hardware. Simulated CAs themselves could also be improved, so that a single CA could be learned and persist for an appropriate duration. More radical improvements also present themselves, including improved learning, for example at the CA level and in combination with variable binding; improved understanding of dual attractor dynamics; integration of attention; and experiments with agents that continue to improve over several days or longer.

CABot3 is an agent in an environment functioning in real time, implemented in simulated neurons. It is a solid step in the development of agents implemented in simulated neurons, and it is intended that more sophisticated agents will be derived from it. Building systems like this will involve trade offs between biological and psychological fidelity, and computational constraints. By building more biologically and psychologically plausible systems that perform more tasks, significant advancements in the understanding of general cognition can be made.

Acknowledgements: This work was supported by EPSRC grant EP/D059720.

References
[Amit, 1989] D. Amit. Modelling Brain Function: The World of Attractor Neural Networks. Cambridge University Press, 1989.

[Belavkin and Huyck, 2008] R. Belavkin and C. Huyck. Emergence of rules in cell assemblies of fLIF neurons. In The 18th European Conference on Artificial Intelligence, 2008.

[Belavkin and Huyck, 2010] R. Belavkin and C. Huyck. Conflict resolution and learning probability matching in a neural cell-assembly architecture. Cognitive Systems Research, 12:93–101, 2010.

[Byrne and Huyck, 2010] E. Byrne and C. Huyck. Processing with cell assemblies. Neurocomputing, 74:76–83, 2010.

[Crystal Space, 2008] Crystal Space. http://www.crystalspace3d.org/main/main page, 2008.

[DeValois et al., 1979] K. DeValois, R. DeValois, and E. Yund. Responses of striate cortex cells to grating and checkerboard patterns. The Journal of Physiology, 291(1):483–505, 1979.

[Downs and Stea, 1973] R. M. Downs and D. Stea. Cognitive maps and spatial behaviour: process and products, pages 8–26. Aldine, Chicago, 1973.

[Hebb, 1949] D. O. Hebb. The Organization of Behavior: A Neuropsychological Theory. J. Wiley & Sons, 1949.

[Hempel et al., 2000] C. Hempel, K. Hartman, X. Wang, G. Turrigiano, and S. Nelson. Multiple forms of short-term plasticity at excitatory synapses in rat medial prefrontal cortex. Journal of Neurophysiology, 83:3031–3041, 2000.

[Hodgkin and Huxley, 1952] A. Hodgkin and A. Huxley. A quantitative description of membrane current and its application to conduction and excitation in nerve. Journal of Physiology, 117:500–544, 1952.

[Huyck and Byrne, 2009] C. Huyck and E. Byrne. CABot1: Technical report. Technical report, Middlesex University, 2009.

[Huyck and Nadh, 2009] C. Huyck and K. Nadh. Multi-associative memory in fLIF cell assemblies. In A. Howes, D. Peebles, and R. Cooper, editors, 9th International Conference on Cognitive Modeling - ICCM2009, pages 81–87, Manchester, UK, 2009.

[Huyck et al., 2006] C. Huyck, D. Diaper, R. Belavkin, and I. Kenny. Vision in an agent based on fatiguing leaky integrate and fire neurons. In Proceedings of the Fifth International Conference on Cybernetic Intelligent Systems, 2006.

[Huyck, 2009] C. Huyck. A psycholinguistic model of natural language parsing implemented in simulated neurons. Cognitive Neurodynamics, 3(4):316–330, 2009.

[Huyck, 2011] C. Huyck. Parameter values for FLIF neurons. In Complexity, Informatics and Cybernetics: IMCIC 2011, 2011.

[Jackendoff, 2002] R. Jackendoff. Foundations of Language: Brain, Meaning, Grammar, Evolution. Oxford University Press, 2002.

[Maes, 1989] P. Maes. How to do the right thing. Connection Science, 1(3):291–323, 1989.

[Smolensky, 1988] P. Smolensky. On the proper treatment of connectionism. Behavioral and Brain Sciences, 11(1):1–22, 1988.