<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Reinforcement Learning for Autonomous Agents Exploring Environments: an Experimental Framework and Preliminary Results</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Nassim</forename><surname>Habbash</surname></persName>
							<email>n.habbash@campus.unimib.it</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Informatics, Systems and Communication (DISCo)</orgName>
								<orgName type="institution">University of Milano-Bicocca</orgName>
								<address>
									<settlement>Milan</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Federico</forename><surname>Bottoni</surname></persName>
							<email>f.bottoni1@campus.unimib.it</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Informatics, Systems and Communication (DISCo)</orgName>
								<orgName type="institution">University of Milano-Bicocca</orgName>
								<address>
									<settlement>Milan</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Giuseppe</forename><surname>Vizzari</surname></persName>
							<email>giuseppe.vizzari@unimib.it</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Informatics, Systems and Communication (DISCo)</orgName>
								<orgName type="institution">University of Milano-Bicocca</orgName>
								<address>
									<settlement>Milan</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Reinforcement Learning for Autonomous Agents Exploring Environments: an Experimental Framework and Preliminary Results</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">BAD133CDF978D96952A5049A47250EFA</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T16:33+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>agent-based modeling and simulation</term>
					<term>reinforcement learning</term>
					<term>complex-systems</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Reinforcement Learning (RL) is being growingly investigated as an approach to achieve autonomous agents, where the term autonomous has a stronger acceptation than the current most widespread one. On a more pragmatic level, recent developments and results in the RL area suggest that this approach might even be a promising alternative to current agent-based approaches to the modeling of complex systems. This work presents an investigation of the level of readiness of a state-of-the-art model to tackle issues of orientation and exploration of a randomly generated environment, as a toy problem to evaluate the adequacy of the RL approach to provide support to modelers in the area of complex systems simulation, and in particular pedestrian and crowd simulation. The paper presents the adopted approach, the achieved results, and discusses future developments on this line of work.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Reinforcement Learning (RL) <ref type="bibr" target="#b0">[1]</ref> is being increasingly investigated as an approach to implement autonomous agents, where the meaning of the term "autonomous" is closer to Russell and Norvig's <ref type="bibr" target="#b1">[2]</ref> than the most widely adopted ones in agent computing. Russell and Norvig state that:</p><p>A system is autonomous to the extent that its behavior is determined by its own experience. A certain amount of initial knowledge (in an analogy to built-in reflexes in animals and humans) is reasonable, but it should be accompanied by the ability to learn. RL approaches, reinvigorated by the energy, efforts, and promises brought by the deep learning revolution, seem one of the most promising ways to investigate how to provide an agent with this kind of autonomy. On a more pragmatic level, recent developments and results in the RL area suggest that this approach might even be a promising alternative to current agent-based approaches to the modeling of complex systems <ref type="bibr" target="#b2">[3]</ref>: whereas currently behavioral models for agents are carefully hand-crafted, often following a complicated interdisciplinary effort involving different roles and types of knowledge, as well as validation processes based on the acquisition and analysis of data describing the studied phenomenon, RL could simplify this work, focusing on the definition of an environment representation, the definition of a model for agent perception and action, and the definition of a reward function. The learning process could, in theory, explore the potential space of policies (i.e. agent behavioral specifications) and converge to the desired decision-making model. 
While the definition of a model of the environment, as well as of agent perception and action, and the definition of a reward function are tasks requiring substantial knowledge about the studied domain and phenomenon, the learning process could significantly simplify the modeler's work, and at the same time it could solve issues related to model calibration.</p><p>The present work is set in this scenario: in particular, we want to explore the level of readiness of state-of-the-art models to tackle issues of orientation and exploration of an environment <ref type="bibr" target="#b3">[4]</ref> by an agent that has no prior knowledge about its topology. The environment is characterised by the presence of randomly generated obstacles, and by a target for the agent's movement, a goal that must be reached while, at the same time, avoiding obstacles. This represents a toy problem allowing us to investigate the adequacy of the RL approach to support modelers in the area of complex systems simulation, and in particular pedestrian and crowd simulation <ref type="bibr" target="#b4">[5]</ref>. We adopted Proximal Policy Optimization (PPO) <ref type="bibr" target="#b5">[6]</ref> and trained agents in the type of environment introduced above: the achieved decision-making model was evaluated in new environments analogous to the ones employed for training, but we also evaluated the adequacy of the final model to guide agents in different types of environment, less random and more similar to human-built environments (i.e. including rooms, corridors, passages), to start evaluating whether agents simulating typical pedestrians could be trained through an RL approach.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Problem Statement</head><p>The main focus of this paper is the automatic exploration of environments without a-priori knowledge of their topology. This is modeled through a single-agent system, where an agent is encouraged to look for a target placed randomly in a semi-randomly generated environment. The environment presents an arbitrary number of obstacles placed randomly in its space, and can be seen as a rough approximation of natural, mountainous terrain, or of artificial, post-disaster terrain, such as a wrecked room. The agent can observe the environment through its front-mounted sensors and move on the XY Cartesian plane. In order to solve the problem of automatic exploration in an ever-changing, obstacle-ridden environment, the main task is to generalize the exploration procedure, so as to achieve an agent able to explore environments different from the ones it was trained on.</p><p>In this paper we develop a framework around this task using Reinforcement Learning. Section 3 provides a definition of the agent, the environment and their interactions. Section 4 goes through Reinforcement Learning and the specific technique adopted for this work. Section 5 describes the architecture of the system, with some details on the tools used. Section 6 reports the experimental results obtained, and Section 7 provides some considerations on the work and possible future developments.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Agent and Environment</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Agent model</head><p>The agent is modeled after a simplified rover robot with omnidirectional wheels, capable of moving on the ground in all directions. The location of the agent in the environment is described by the triple (𝑥, 𝑦, 𝜗), where (𝑥, 𝑦) denotes its position on the XY plane, and 𝜗 denotes its orientation. The agent size is 1x1x1 units.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Observation space</head><p>The agent can observe the environment through a set of LIDARs that create an array of surveying rays: these are time-of-flight sensors which provide information on both the distance between the agent and the collided object and the object's type. If a ray is not long enough to reach an object because it is too far away, the over-maximum-range information is provided to the agent. The standard LIDAR length is 20 units. The agent is equipped with 14 LIDARs, placed radially on a plane starting from the middle of its front-facing side, giving it a field of view of [−2𝜋/3, 2𝜋/3] for 20 units of range. More formally, we can define an observation or state as a set of tuples, as follows:</p><formula xml:id="formula_0">𝑠 = {(𝑥 1 , 𝑜 1 ), (𝑥 2 , 𝑜 2 ), ..., (𝑥 𝑛 , 𝑜 𝑛 )}, 𝑠 ∈ 𝑆<label>(1)</label></formula><p>Where 𝑛 is the number of LIDARs on the agent, 𝑥 𝑖 represents the distance from a certain LIDAR to a colliding object in range, and 𝑜 𝑖 is the type of said object, with 𝑜 𝑖 ∈ {𝑜𝑏𝑠𝑡𝑎𝑐𝑙𝑒, 𝑡𝑎𝑟𝑔𝑒𝑡, ∅}.</p><p>The observation (or state) space is hence defined as 𝑆, the set of all possible states 𝑠.</p></div>
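As an illustrative sketch of the observation space just defined (in Python, with names and dummy values that are our assumptions, not the paper's code), a state can be represented as a list of (distance, type) tuples, one per LIDAR ray:

```python
import random

LIDAR_RANGE = 20.0   # standard LIDAR length, in units
NUM_LIDARS = 14      # rays placed radially across the field of view

# o_i ∈ {obstacle, target, ∅}; None plays the role of ∅ (nothing in range)
OBJECT_TYPES = ("obstacle", "target", None)

def make_observation(rng=random):
    """Build one dummy state s = {(x_1, o_1), ..., (x_n, o_n)}."""
    obs = []
    for _ in range(NUM_LIDARS):
        o = rng.choice(OBJECT_TYPES)
        # a ray that hits nothing reports the over-maximum-range value
        x = LIDAR_RANGE if o is None else rng.uniform(0.0, LIDAR_RANGE)
        obs.append((x, o))
    return obs
```

In the real system these readings come from the simulated time-of-flight sensors rather than a random generator; the sketch only fixes the shape of the data the agent perceives.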
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Action space</head><p>The possible actions that the agent can perform are:</p><p>• Move forward or backward • Move to the left or to the right, stepping aside (literally), without changing orientation • Rotate counterclockwise or clockwise (yaw)</p><p>The agent can also combine these actions, for example going forward-right while rotating counterclockwise.</p><p>More formally, we can define the action space as:</p><formula xml:id="formula_2">𝐴 = {𝐹 𝑜𝑟𝑤𝑎𝑟𝑑, 𝑆𝑖𝑑𝑒, 𝑅𝑜𝑡𝑎𝑡𝑖𝑜𝑛}<label>(2)</label></formula><p>Where 𝐹 𝑜𝑟𝑤𝑎𝑟𝑑, 𝑆𝑖𝑑𝑒 and 𝑅𝑜𝑡𝑎𝑡𝑖𝑜𝑛 represent the movement on the associated axes, and their value can be either −1, 0 or 1, where 0 represents no movement, and −1 and 1 represent movement towards one or the other reference point on the axis.</p><p>Figure <ref type="figure">1</ref>: Environment: the blue cube is the agent, the checkerboard-pattern cubes are the obstacles and the orange cube is the target. The environment is generated considering the distance between agent and target (white line connecting both) and the minimum spawn distance (the white sphere around objects) to ensure a parametric distribution of objects in the environment. The agent collects observations through its LIDAR array (white rays exiting the agent). At the top, the episode's cumulative reward is shown on the left and the current number of collisions on the right.</p></div>
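Since each component of Equation 2 takes a value in {−1, 0, 1}, the combined action space can be enumerated directly; a small sketch (names are illustrative, not the paper's code):

```python
from itertools import product

# Each action is a triple (Forward, Side, Rotation), each component in {-1, 0, 1}.
ACTION_VALUES = (-1, 0, 1)

def all_actions():
    """Enumerate every combined action (Forward, Side, Rotation)."""
    return list(product(ACTION_VALUES, repeat=3))
```

Enumerating the triples gives 3³ = 27 combined actions, including the no-op (0, 0, 0) and combinations such as moving forward-right while rotating.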
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">Environment model</head><p>The environment is a flat area of 50x50 units, bounded at its extremities by walls 1 unit tall. A set of gray cubes of 3x3x3 units each is randomly placed on this area as obstacles. The target -the goal the agent must reach -is positioned randomly between this set of obstacles, and is an orange cube of 3x3x3 units. The interactions provided between agent and environment are collisions. The agent collides with another object in the environment if there is an intersection between the bounding boxes of the two entities. The floor is excluded from collisions. If a collision happens between agent and obstacle, the agent suffers a penalty, while if a collision happens between agent and target, the episode ends successfully, as the agent has achieved its goal.</p><p>The environment is regenerated every time an episode ends, successfully or not, so no two identical episodes are played by the agent. This generation is parametric, allowing for a more or less dense obstacle distribution in the environment, or a longer or shorter distance between agent and target.</p></div>
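The parametric generation can be sketched as follows, under our own assumptions about parameter names (number of obstacles, minimum spawn distance, agent-target distance); the actual implementation lives in Unity and differs in detail:

```python
import math
import random

AREA = 50.0  # the environment is a flat 50x50 area

def generate(n_obstacles, min_spawn_dist, target_dist, rng=random):
    """Place agent, target and obstacles, respecting a minimum spawn distance."""
    agent = (rng.uniform(0, AREA), rng.uniform(0, AREA))
    placed = [agent]

    def far_enough(p):
        return all(math.dist(p, q) >= min_spawn_dist for q in placed)

    # place the target at (roughly) the requested distance from the agent
    while True:
        angle = rng.uniform(0, 2 * math.pi)
        target = (agent[0] + target_dist * math.cos(angle),
                  agent[1] + target_dist * math.sin(angle))
        if 0 <= target[0] <= AREA and 0 <= target[1] <= AREA and far_enough(target):
            break
    placed.append(target)

    # scatter obstacles, each keeping the minimum distance from everything placed
    obstacles = []
    while len(obstacles) < n_obstacles:
        p = (rng.uniform(0, AREA), rng.uniform(0, AREA))
        if far_enough(p):
            placed.append(p)
            obstacles.append(p)
    return agent, target, obstacles
```

Varying `n_obstacles`, `min_spawn_dist` and `target_dist` reproduces the "more or less dense" and "longer or shorter distance" knobs the text describes.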
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Reinforcement Learning</head><p>In the past couple of years Reinforcement Learning has seen many successful and remarkable applications in the robotics and locomotion fields, such as <ref type="bibr" target="#b6">[7]</ref> and <ref type="bibr" target="#b7">[8]</ref>. This approach provides many benefits: experimentation can be done in a safe, simulated environment, and it is possible to train models through millions of iterations of experience to learn an optimal behaviour. In some fields -such as robot movement -the RL approach currently outperforms classic heuristic and evolutionary methods <ref type="bibr" target="#b0">[1]</ref>.</p><p>Reinforcement Learning is a technique where an agent learns by interacting with the environment. The agent ought to take actions that maximize a reward, selecting them from its past experiences (exploitation) and from completely new choices (exploration), making this essentially a trial-and-error learning strategy. After sufficient training, an agent can generalize an optimal strategy, allowing it to actively adapt to the environment and maximize future rewards. Generally, an RL algorithm is composed of the following components:</p><p>1. A policy function, which is a mapping between the state space and the action space of the agent 2. A reward signal, which defines the goal of the problem, and is sent by the environment to the agent at each time-step 3. A value function, which defines the expected future reward the agent can gain from the current and all subsequent future states 4. A model, which defines the behaviour of the environment At any time, an agent is in a given state of the overall environment 𝑠 ∈ 𝑆 (that it should be able to perceive; from now on, we consider the state to be the portion of the environment that is perceivable by the agent), and it can choose to take one of many actions 𝑎 ∈ 𝐴, to cause a change of state to another one with a given probability 𝑃 . 
Given an action 𝑎 chosen by the agent, the environment returns a reward signal 𝑟 ∈ 𝑅 as feedback on the goodness of the action. The behaviour of the agent is regulated by what is called a policy function 𝜋, which can be defined as</p><formula xml:id="formula_3">𝜋 Θ (𝑎|𝑠) = 𝑃 (𝐴 𝑡 = 𝑎|𝑆 𝑡 = 𝑠)<label>(3)</label></formula><p>and represents a distribution over actions given states at time 𝑡 with parameters Θ -in this case the policy function is stochastic, as it maps onto probabilities. What follows is a brief introduction to the two main techniques used in this work.</p></div>
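A stochastic policy 𝜋 Θ (𝑎|𝑠) as in Equation 3 can be sketched as a softmax over action preferences; the linear map and its random parameters below are purely illustrative stand-ins for a learned network (the shapes, 14 LIDAR readings in and 27 combined actions out, match the spaces defined in Section 3):

```python
import numpy as np

rng = np.random.default_rng(0)
Theta = rng.normal(size=(14, 27))  # illustrative policy parameters

def policy(state_features):
    """Return a probability distribution pi_Theta(.|s) over the 27 actions."""
    logits = state_features @ Theta
    exp = np.exp(logits - logits.max())  # subtract max for numerical stability
    return exp / exp.sum()
```

Sampling an action from this distribution, rather than always taking the argmax, is what makes the policy stochastic.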
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Proximal Policy Optimization</head><p>RL presents a plethora of different approaches. Proximal Policy Optimization (PPO) <ref type="bibr" target="#b5">[6]</ref>, the one used in this work, is a policy gradient algorithm which works by learning the policy function 𝜋 directly. These methods have better convergence properties compared to dynamic programming methods, but need a more abundant set of training samples. Policy gradients work by learning the policy's parameters through a policy score function, 𝐽(Θ), through which it is then possible to apply gradient ascent to maximize the score of the policy with respect to the policy's parameters, Θ. A common way to define the policy score function is through a loss function:</p><formula xml:id="formula_4">𝐿 𝑃 𝐺 (Θ) = 𝐸 𝑡 [𝑙𝑜𝑔 𝜋 Θ (𝑎 𝑡 |𝑠 𝑡 ) 𝐴 𝑡 ]<label>(4)</label></formula><p>which is the expected value of the log probability of taking action 𝑎 𝑡 at state 𝑠 𝑡 times the advantage function 𝐴 𝑡 , representing an estimate of the relative value of the taken action. As such, when the advantage estimate is positive, the gradient will be positive as well, and through gradient ascent the probability of taking the correct action will increase; in the other case, the probabilities of the actions associated with negative advantage will decrease. The main issue with this vanilla policy gradient approach is that gradient ascent might eventually lead out of the range of states where the current experience data of the agent has been collected, completely changing the policy. One way to solve this issue is to update the policy conservatively, so as to not move too far in one single update. This is the solution applied by the Trust Region Policy Optimization algorithm <ref type="bibr" target="#b8">[9]</ref>, which forms the basis of PPO. PPO implements this update constraint in its objective function through what it calls the Clipped Surrogate Objective. 
First, it defines a probability ratio between new and old policy, 𝑟 𝑡 (Θ), which tells whether an action for a state is more or less likely to happen after the policy update, and is defined as</p><formula xml:id="formula_5">𝑟 𝑡 (Θ) = 𝜋 Θ (𝑎 𝑡 |𝑠 𝑡 ) / 𝜋 Θ 𝑜𝑙𝑑 (𝑎 𝑡 |𝑠 𝑡 )</formula><p>. PPO's loss function is then defined as:</p><formula xml:id="formula_6">𝐿 𝐶𝐿𝐼𝑃 (Θ) = 𝐸 𝑡 [𝑚𝑖𝑛(𝑟 𝑡 (Θ)𝐴 𝑡 , 𝑐𝑙𝑖𝑝(𝑟 𝑡 (Θ), 1 − 𝜖, 1 + 𝜖)𝐴 𝑡 )]<label>(5)</label></formula><p>The Clipped Surrogate Objective presents two probability ratios, one non-clipped, which is the default objective as expressed in Equation 4 in terms of policy ratio, and one clipped to a range. The function presents two cases depending on whether the advantage function is positive or negative:</p><p>1. 𝐴 &gt; 0: the action taken had a better than expected effect, therefore the new policy is encouraged to take this action in that state; 2. 𝐴 &lt; 0: the action had a negative effect on the outcome, therefore the new policy is discouraged from taking this action in that state.</p><p>In both cases, because of the clipping, the probability of an action can only increase or decrease by a factor of at most 1 ± 𝜖, preventing the policy from being updated too much, while allowing the gradient updates to undo bad updates (e.g. the action was good but it was accidentally made less probable) by choosing the non-clipped objective when it is lower than the clipped one. Note that the final loss function for PPO adds two other terms to be optimized at the same time; we refer the reader to the original paper <ref type="bibr" target="#b5">[6]</ref> for a more complete overview of PPO.</p></div>
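A minimal numerical sketch of Equation 5, assuming per-step probability ratios 𝑟 𝑡 and advantage estimates 𝐴 𝑡 are already computed (in a real implementation they come from the rollout buffer and an advantage estimator):

```python
import numpy as np

def clipped_surrogate(ratio, advantage, eps=0.2):
    """L_CLIP = E_t[min(r_t * A_t, clip(r_t, 1 - eps, 1 + eps) * A_t)]."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.minimum(unclipped, clipped).mean()
```

With eps = 0.2, a ratio of 1.5 on a positive-advantage step contributes at most 1.2·𝐴 𝑡 (the update is capped), while on a negative-advantage step the full unclipped value 1.5·𝐴 𝑡 is kept, which is exactly what lets the gradient undo a bad update.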
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Curiosity</head><p>Reward sparseness is one of the main issues with RL. If an environment is defined with a sparse reward function, the agent does not get any feedback about whether its actions at the current time step are good or bad, but only at the end of the episode, when it has either succeeded or failed at the task. This means that the reward signal is 0 most of the time, and is positive in only a few states and actions. One simple example is the game of chess: the reward may be obtained only at the end of the match, but at the beginning, when the reward might be 10, 50, or 100 time steps away, if the agent cannot receive feedback for its current actions it can only move randomly until, by sheer luck, it manages to get a positive reward; long-range dependencies must then be learned, leading to a complicated and potentially overly long learning process. There are many ways to address reward sparseness, such as reward shaping, which requires domain-specific knowledge of the problem, or intrinsic reward signals, additional reward signals that mitigate the sparseness of extrinsic ones.</p><p>Curiosity <ref type="bibr" target="#b9">[10]</ref> falls into the second category, and its goal is to make the agent actively seek out and explore states of the environment that it would not otherwise explore. This is done by supplementing the default reward signal with an additional intrinsic component computed by a curiosity module. This module comprises a forward model, which takes in 𝑠 𝑡 and 𝑎 𝑡 , and tries to predict the features of the next state the agent will find itself in, Φ̂(𝑠 𝑡+1 ). 
The more this prediction differs from the features of the real next state, Φ(𝑠 𝑡+1 ), the higher the intrinsic reward.</p><p>To avoid getting stuck in unexpected states produced by random processes not influenced by the agent, the module also comprises an inverse model, which takes Φ(𝑠 𝑡 ) and Φ(𝑠 𝑡+1 ) and tries to predict the action 𝑎̂ 𝑡 that was taken to get from 𝑠 𝑡 to 𝑠 𝑡+1 . By training the encoder (Φ) together with the inverse model, it is possible to make the extracted features ignore those states and events that are impossible to influence, retaining only features actually influenced by the agent's actions.</p></div>
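The intrinsic reward computed by the forward model can be sketched as a prediction error on next-state features. The linear "forward model" below is a stand-in for the learned neural network of the actual curiosity module; all shapes and the scale factor are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
FEATURE_DIM = 4
# maps the concatenation [phi(s_t), a_t] to a prediction phi_hat(s_t+1)
W_forward = rng.normal(size=(FEATURE_DIM + 1, FEATURE_DIM))

def intrinsic_reward(phi_s, action, phi_s_next, scale=0.5):
    """Reward surprise: squared error between predicted and real features."""
    inp = np.concatenate([phi_s, [action]])
    phi_pred = inp @ W_forward  # forward-model prediction phi_hat(s_t+1)
    return scale * np.sum((phi_pred - phi_s_next) ** 2)
```

Transitions the forward model predicts well yield little intrinsic reward, so the agent is pushed toward surprising states; the inverse model, omitted here, is what shapes Φ to ignore unpredictable-but-irrelevant events.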
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">System Architecture</head><p>The system has been developed using Unity3D as the main simulation platform and ML-Agents for Reinforcement Learning <ref type="foot" target="#foot_0">1</ref> .</p><p>Unity3D is a well-established game engine. It provides many out-of-the-box functionalities, including tools to assemble a scene, the 3D engine to render it, a physics engine to physically simulate object interaction under physical laws, and many plugins and utilities. In Unity an object is defined as a GameObject, and it can have different components attached according to necessity, such as RigidBodies for physical computations, Controllers for movement, decision-making and elements of the learning system (Agent, Academy and others). An object's life-cycle starts with an initialization followed by a cyclic refresh of its state, and the engine provides handler methods for these phases, customizable through event-driven programming.</p><p>Unity keeps track of time and events on a stratified pipeline: physics, game logic and scene rendering logic are each computed sequentially and asynchronously:</p><p>1. Objects initialization. 2. Physics cycle (triggers, collisions, etc.). May happen more than once per frame if the fixed time-step is less than the actual frame update time. 3. Input events 4. Game logic, co-routines 5. Scene rendering 6. Decommissioning (objects destruction) One notable caveat is that physics updates may happen at a different rate than game logic. In a game development scenario this is sometimes handled by buffering either inputs or events, resulting in smoother physics handling, while for simulations in which a precise correspondence between simulated and simulation time is necessary this might pose a slight inconvenience. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1.">System design</head><p>Figure <ref type="figure" target="#fig_0">2</ref> represents the main actors of the system and their interactions. At the start of each episode the environment is randomly generated according to its parameters, placing the obstacles, the target and the agent. The agent can then perform its actions, moving around the environment, while collecting observations through its sensors and receiving rewards according to the goodness of its actions. Physical collisions trigger negative or positive instantaneous rewards according to the type of collision: obstacle collisions produce negative rewards, while target collisions produce positive rewards and end the episode, as the task is successful. The agent class, ExplorationAgent, is responsible for the agent's initialization, observation collection, collision detection and physical movement according to the decisions received through the Brain interface, which provides the actions produced by the model to the agent class. The environment comprises two classes: ExplorationArea is responsible for resetting and starting every episode, rendering the UI, logging information on the simulation process and placing every object in the environment according to its parameters, while the Academy works in tandem with the Brain to regulate the learning process, acting as a hub routing information -whether observations from the environment or inferences made by the models -between the system and the RL algorithms under the hood.</p><p>Once the training phase ends, ML-Agents generates a model file which can be connected to the Brain and used directly for inference. Figure <ref type="figure" target="#fig_1">3</ref> shows how the learning system is structured between Unity and ML-Agents.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2.">Reward Signal</head><p>The main RL algorithm used in this work is PPO. The reward signal is composed of:</p><p>• Extrinsic signal: the standard reward produced by the environment according to the goodness of the agent's actions • Intrinsic signal: the Curiosity of the agent The extrinsic signal presents some reward shaping, and is defined as: 𝑟 = 5 * 𝑐 𝑡 − 𝑝. The 𝑝 term stands for penalty, and is a negative reward formulated as 𝑝 = 𝑐 𝑜 * 0.1 + 𝑡𝑖𝑚𝑒 * 0.001. Every time the agent reaches a target, indicated by 𝑐 𝑡 (target collisions), the episode ends and the positive reward is 5, while if it hits an obstacle, indicated by 𝑐 𝑜 (obstacle collisions), it receives a penalty of 0.1 for each collision. The agent is also penalized as time passes, receiving a 0.001 negative reward for each timestep, to incentivize the agent to solve the task faster.</p><p>The intrinsic signal is the Curiosity module which, as Section 4.2 describes, provides an additional reward signal to encourage the exploration of unseen states.</p></div>
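The extrinsic reward shaping above can be written as a small helper (a direct transcription of the stated formula; the function name is ours):

```python
def extrinsic_reward(target_collisions, obstacle_collisions, timesteps):
    """r = 5 * c_t - p, with p = 0.1 * c_o + 0.001 * time."""
    penalty = 0.1 * obstacle_collisions + 0.001 * timesteps
    return 5.0 * target_collisions - penalty
```

For example, reaching the target (𝑐 𝑡 = 1) after 500 timesteps with 2 obstacle hits yields 5 − (0.2 + 0.5) = 4.3.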
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Experiments</head><p>Different scenarios for the analysis of the effectiveness of the proposed system have been investigated.</p><p>The main differences between scenarios consist in:</p><p>1. Curriculum environment parameters 2. Penalty function 3. Observation source (LIDAR or camera) 4. Knowledge transferability to structured environments</p><p>Comparison between the scenarios is conducted on two aspects: the first depends on the canonical RL charts built at training time in order to assess the reward, its trend over time, its balance and other information about the system; the other aspect is an environmental performance comparison, conducted through three performance measures pertaining to the investigated setting: CPM (collisions per minute), measuring the mean number of collisions of the agent with obstacles; TPM (targets per minute), measuring the mean number of goal targets successfully reached by the agent; and CPT (collisions per target), measuring the mean number of collisions the agent makes before getting to a target. As the models in the different scenarios have been trained with varying curricula and environments, these measures estimate the performance of every model on a shared environment, so that the comparison between models happens under the same circumstances. These environmental performances have been measured in a parallel fashion in order to gather more accurate data.</p><p>An interactive demo of the system is also available to allow visual comparison of the different scenarios<ref type="foot" target="#foot_1">2</ref> .</p></div>
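The three environmental measures follow directly from run totals; a sketch (the example numbers below are hypothetical, chosen to match the baseline figures reported in the next section):

```python
def performance(collisions, targets, minutes):
    """Compute (CPM, TPM, CPT) from totals over a measurement run."""
    cpm = collisions / minutes                                # collisions per minute
    tpm = targets / minutes                                   # targets per minute
    cpt = collisions / targets if targets else float("inf")   # collisions per target
    return cpm, tpm, cpt
```

For instance, 25 collisions and 20 targets over 5 minutes gives CPM = 5.0, TPM = 4.0 and CPT = 1.25.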
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.1.">Baseline</head><p>The first experiment acts as a baseline for the other variations, and was conducted on the following parameters: The parameters in this setting generate fairly open environments with at most 10 obstacles. The minimum distance between obstacles is 2 units, while the target spawns at a distance of 45 units, which is more than half of the environment size.</p><p>Table <ref type="table" target="#tab_2">1</ref> shows how the model collides roughly 5 times per minute and manages to reach a target about 4 times per minute, making roughly 1.25 collisions per target reached.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.2.">Curriculum</head><p>The second experiment introduced curriculum learning into the training pipeline. The curriculum was structured in seven lessons scaling along with the cumulative reward. Following are its settings: The parameters in this setting generate an increasingly harder environment, with the target getting farther from the agent, and the obstacles getting more cluttered and closer together.</p><p>Table <ref type="table" target="#tab_2">1</ref> shows how the model collides roughly 10.6 times per minute and manages to reach a target about 9.6 times per minute, making roughly 1.09 collisions per target reached. This is a significant improvement compared to the baseline: not only is the agent able to reach the target faster, reaching 2.6 times more targets, but it also does so with fewer collisions per target.</p></div>
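The seven-lesson, threshold-based progression can be sketched as follows; the seven-lesson structure matches the text, but the threshold values themselves are our assumptions (the paper only says they were set manually):

```python
import bisect

# 6 cumulative-reward boundaries -> 7 lessons (illustrative values)
THRESHOLDS = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]

def lesson_for(cumulative_reward):
    """Return the lesson index (0 = easiest, 6 = hardest)."""
    return bisect.bisect_right(THRESHOLDS, cumulative_reward)
```

Each lesson index would then select progressively harder environment parameters (target distance, obstacle count, spacing). Such manual thresholding is also the suspected cause of the early plateaus discussed in Section 6.6.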
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.3.">Harder penalty</head><p>The third experiment implemented curriculum learning into the training pipeline, and added a harsher penalty for the agent. The curriculum is structured as in the above experiment, with the exception of the new parameter Penalty offset. Following are its settings:  As in the previous situation, the parameters generate a harder environment as the cumulative reward increases, but this time the penalty function too increases in difficulty. The rationale of this experiment is that, as the agent learns how to move to reach the target, it should also learn not to collide frequently, but instead search the environment for the target smoothly. Figure <ref type="figure" target="#fig_3">4</ref> shows how the different penalties relate to each other and the Cumulative Reward lower limit without considering the time decrease (the same in all penalties).</p><p>Table <ref type="table" target="#tab_2">1</ref> shows how the model collides roughly 4.5 times per minute and manages to reach a target about 3.9 times per minute, making roughly 1.18 collisions per target reached. This model obtains results similar to the baseline, while staying below the performances obtained by the curriculum model. This may be due to the harsher penalty not giving the model time to adapt to an optimal policy.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.4.">Camera sensors</head><p>The fourth experiment implemented the same curriculum and penalty as the second experiment. The main difference consists in the use of a camera sensor instead of the LIDAR array, thus generating images as observations. 1. Curriculum parameters: same as the second experiment 2. Penalty function: same as the second experiment 3. Observations source: Camera 84x84 RGB The model performs significantly worse than the others. This is plausibly due to the low number of steps taken in the training of the model (74k) compared to the others (700k), which did not let the algorithm converge to an optimal policy. We must add that the choice not to investigate a longer training phase is due to the fact that the need to analyse this heavier form of input, which nonetheless might not necessarily be more informative, made the training much more expensive in terms of computation time: whereas the steps are fewer than in the other experiments, the overall computation time for learning is very similar. The model manages to reach a target with roughly 2.5 obstacle hits each time, but taking roughly 3 times as long as the optimal curriculum model.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.5.">Structured environment transferability</head><p>Taking the best performing model (i.e. Curriculum), the experiment consisted in testing how well the model generalizes its task to structured environments, that is, how well what the model learned during training in the chaotic environments transfers to structured ones, which consist in:</p><p>1. "Rooms": two rooms are linked by a tight opening in a wall, and the agent has to pass through the opening to reach the target; 2. "Corridor": a long environment, literally a corridor, that the agent has to run across to reach the target; 3. "Corridor tight": similar to the previous environment but tighter; 4. "Turn": a corridor including a 90° left turn; 5. "Crossroad": two ways cross, and the agent has either to go straight on, to turn left or to turn right.</p><p>These environments tested the capability of the agent to follow directions and to look for the target in every part of the scene. The model does not perform equally well in all environments. This seems to be due to the fact that the strategy the model has learned as optimal for the task at hand is essentially random exploration; this point will be addressed later on. The agent seems able to follow a linear path if the environment is wide enough to let it stay away from the borders, as is apparent when comparing the Corridor and Corridor tight outcomes: the second experiment has fewer targets and more collisions per minute than the first one. The model's performances in Crossroad and Turn are similar, but in the first case the agent has to change path more often than in the second one, so collisions happen more frequently. The Rooms experiment has both a low TPM and a low CPM because the agent tends to stay in the spawn room, without attempting to pass through the door.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.6.">Performance measures comparison</head><p>The models have comparable performances: the curriculum, hardpenalty and camera models reach a similar cumulative reward, with the camera model ending on top, followed by the curriculum and then the hardpenalty model. This comparison shows the first caveat of the experiments: cumulative reward notwithstanding, the curriculum model achieved considerably higher environmental performances than the other two models. The comparison is even more telling when including the baseline model, which converged almost perfectly on a higher cumulative reward: even then, the curriculum model achieved better performances despite a lower cumulative reward. We also performed experiments in a 3D version of the environment, whose complete description is omitted for the sake of space, but in which (intuitively) actions included the possibility to maneuver along the Z axis as well, significantly increasing the dimension of the state space. Such a 3D maneuvering model took a very long time to train, and its lack of progress in the curriculum shows that it still has not converged to an optimal policy.</p><p>For the models employing curriculum learning, we observed a tendency to reach a plateau early on in terms of steps. This may be due to the manual thresholding setup, which does not accurately increase difficulty. The curriculum model is the only one whose learning process managed to reach the last difficulty level.</p><p>It is of note how, for the models capable of obtaining a working policy, the distribution of reward shifts over time from the curiosity reward to the environmental reward, marking a significant transition from curiosity-driven exploration to exploitation of the strategy learnt by the policy.</p></div>
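The manual thresholding mentioned above can be sketched as a simple rule that advances the lesson once the smoothed cumulative reward crosses the next threshold; the values come from the curriculum tables, while the advancement logic and names are our simplification, not the ML-Agents implementation:

```python
# Curriculum parameters from the experiment tables.
REWARD_THRESHOLDS = [1, 2, 2.5, 2.8, 3, 3.5, 4]
NUM_OBSTACLES = [8, 10, 13, 15, 17, 18, 20]


def lesson_for(cumulative_reward: float) -> int:
    """Index of the current lesson: one step up for every threshold
    the smoothed cumulative reward has passed, clamped to the last lesson."""
    passed = sum(1 for t in REWARD_THRESHOLDS if cumulative_reward >= t)
    return min(passed, len(NUM_OBSTACLES) - 1)


# A smoothed reward of 2.9 has passed thresholds 1, 2, 2.5 and 2.8,
# so the environment spawns the lesson-4 obstacle count.
print(NUM_OBSTACLES[lesson_for(2.9)])  # 17
```

A fixed schedule like this is exactly why a model can plateau: if the reward stalls just below a threshold, the difficulty never increases and learning stagnates at that lesson.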
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">Conclusions</head><p>The empirical results show a certain decoupling between the environmental measures and the classical performance measures. Measuring performance in reinforcement learning settings is well known to be tricky, as cumulative rewards, policy losses and other low-level measures are not always able to capture the effectiveness of the agent's behaviour in the setting.</p><p>In the proposed setting, the experiments showed how curriculum learning can be an effective solution for improving the generalization capabilities of the model, significantly improving how the agent behaves.</p><p>The proposed learning system shows that the policy most commonly converged to is essentially a random search strategy: the agent randomly explores the environment to find the target. This is demonstrated by the behaviours of the different models, at different levels of performance, which show the agent randomly moving between obstacles, revisiting previously seen areas until it manages to get the target within the range of its sensors. This is probably due to the random nature of the environment generation: since no two episodes present the same environment, the agent is not able to memorize the layout of the environment (or portions of it), but can only generally try to avoid obstacles until the target comes into sight.</p><p>This consideration represents a reasonable way to interpret results in environments whose structure is closer to the human-built environment: whenever looking around to see if the target is finally in sight, and moving towards it, is possible without excessive risk of hitting an obstacle, the process yields good results; otherwise it leads to failure. While this does not represent a negative result per se, it is a clear warning that the learning procedure, and the environments employed to shape the learning process, can lead to unforeseen and undesirable results. 
It must be stressed that, whereas this is a sort of adversarial exploitation of a behavioural model that was trained on some types of environments and is being tested in different situations, other state-of-the-art approaches specifically devised and trained to achieve proper pedestrian dynamics still do not produce results that are competitive with hand-crafted models <ref type="bibr" target="#b10">[11]</ref>.</p><p>Possible future developments on the RL model side are:</p><p>1. Implement memory: adding memory to the agent (in the form of an RNN module) might allow it to form a sort of experience buffer for the current episode and to explore the environment in a non-random fashion. 2. Rework the reward and penalty functions: the proposed reward and penalty are rather simplistic; a possible enhancement to the penalty could be implementing soft collisions, that is, scaling the negative reward obtained by the agent in a collision according to the collision velocity, so that safe, soft touches can be allowed. 3. Compare different RL algorithms: different reinforcement learning algorithms (A3C, DQN) might offer different insights on the optimal way to implement intelligent agents in the proposed setting.</p><p>On the other hand, to more specifically evaluate the plausibility of applying this RL approach to pedestrian modeling and simulation, a different training procedure should be defined, including specific environments aimed at representing a sort of grammar of the built environment, which could be used to define a curriculum for training specifically pedestrian agents.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Schematized system architecture, showing the main interactions between the actors in the system</figDesc><graphic coords="8,89.29,84.19,416.69,220.48" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Learning system: the agent receives the actions from the Brain interface, which communicates with the Academy to send and receive observations and actions. The Academy communicates through an external communicator with the ML-Agents back-end, which wraps Tensorflow and holds the various RL models involved, receiving training/inference data and returning the models' outputs</figDesc><graphic coords="9,89.29,84.19,416.71,212.94" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>2 .</head><label>2</label><figDesc>Penalty function: p = c_o * 0.1 + time * 0.001; 3. Observations source: LIDAR set</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Penalty comparison: Standard is the penalty used in most settings in the project, Offset1 and Offset2 are the penalties from the third experiment at different stages of difficulty, and the Cumulative Reward lower limit shows the number of collisions before the episode aborts: Offset1 allows at most 5 collisions, Offset2 a maximum of 3</figDesc><graphic coords="12,160.13,84.19,275.02,189.30" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Environmental performances comparisons between the different random experiments</figDesc><graphic coords="13,160.13,84.19,275.03,261.04" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: Environmental performances comparisons between the different structured experiments</figDesc><graphic coords="15,160.13,84.19,275.03,267.70" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="4,110.13,84.19,375.04,208.79" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>1. Dynamic parameters:</figDesc><table><row><cell>Reward thresholds</cell><cell>1</cell><cell>2</cell><cell>2.5</cell><cell>2.8</cell><cell>3</cell><cell>3.5</cell><cell>4</cell></row><row><cell>Number of obstacles</cell><cell>8</cell><cell>10</cell><cell>13</cell><cell>15</cell><cell>17</cell><cell>18</cell><cell>20</cell></row><row><cell>Min spawn distance</cell><cell>6</cell><cell>6</cell><cell>4</cell><cell>4</cell><cell>3</cell><cell>3</cell><cell>2</cell></row><row><cell>Target distance</cell><cell>25</cell><cell>28</cell><cell>30</cell><cell>33</cell><cell>35</cell><cell>37</cell><cell>40</cell></row></table><note>2. Penalty function: p = c_o * 0.1 + time * 0.001; 3. Observations source: LIDAR set</note></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head></head><label></label><figDesc>2. Harder penalty function: p = p_offset + c_o + time * 0.001; 3. Observations source: LIDAR set</figDesc><table><row><cell>Reward thresholds</cell><cell>1</cell><cell>2</cell><cell>2.5</cell><cell>2.8</cell><cell>3</cell><cell>3.5</cell><cell>4</cell></row><row><cell>Number of obstacles</cell><cell>8</cell><cell>10</cell><cell>13</cell><cell>15</cell><cell>17</cell><cell>18</cell><cell>20</cell></row><row><cell>Min spawn distance</cell><cell>6</cell><cell>6</cell><cell>4</cell><cell>4</cell><cell>3</cell><cell>3</cell><cell>2</cell></row><row><cell>Target distance</cell><cell>25</cell><cell>28</cell><cell>30</cell><cell>33</cell><cell>35</cell><cell>37</cell><cell>40</cell></row><row><cell>Penalty offset</cell><cell>0.5</cell><cell>1.5</cell><cell>2</cell><cell>2.5</cell><cell>2.5</cell><cell>2.5</cell><cell>2.5</cell></row></table><note>1. Curriculum parameters:</note></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 1</head><label>1</label><figDesc>Comparison of performance of the different experimented training approaches.</figDesc><table><row><cell>Training approach</cell><cell>CPM</cell><cell>TPM</cell><cell>CPT</cell></row><row><cell>Baseline</cell><cell>4.984</cell><cell>3.96</cell><cell>1.25</cell></row><row><cell>Curriculum</cell><cell>10.568</cell><cell>9.616</cell><cell>1.09</cell></row><row><cell>Hardpenalty</cell><cell>4.592</cell><cell>3.896</cell><cell>1.18</cell></row><row><cell>Camera</cell><cell>7.264</cell><cell>3.048</cell><cell>2.38</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 2</head><label>2</label><figDesc>Summary of the performance of the curriculum based model (achieved by training in random scenarios) applied to structured environments.</figDesc><table><row><cell>Environment</cell><cell>CPM</cell><cell>TPM</cell><cell>CPT</cell></row><row><cell>Rooms</cell><cell>7.2</cell><cell>1.8</cell><cell>4</cell></row><row><cell>Corridor</cell><cell>4.8</cell><cell>14.2</cell><cell>0.34</cell></row><row><cell>Corridor tight</cell><cell>29.6</cell><cell>2.2</cell><cell>13.45</cell></row><row><cell>Turn</cell><cell>10.4</cell><cell>10.4</cell><cell>1</cell></row><row><cell>Crossroad</cell><cell>23.4</cell><cell>9.6</cell><cell>2.44</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">The project's source code is available on Github: https://github.com/nhabbash/autonomous-exploration-agent.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">Demo at https://nhabbash.github.io/autonomous-exploration-agent/</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">Reinforcement Learning: an Introduction</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">S</forename><surname>Sutton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">G</forename><surname>Barto</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2018">2018</date>
			<publisher>MIT press Cambridge</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">J</forename><surname>Russell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Norvig</surname></persName>
		</author>
		<title level="m">Artificial Intelligence: A Modern Approach</title>
				<imprint>
			<publisher>Pearson</publisher>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
	<note>4th ed.</note>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Agent based modeling and simulation: An informatics perspective</title>
		<author>
			<persName><forename type="first">S</forename><surname>Bandini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Manzoni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Vizzari</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Artificial Societies and Social Simulation</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page">4</biblScope>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Environment as a first class abstraction in multiagent systems</title>
		<author>
			<persName><forename type="first">D</forename><surname>Weyns</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Omicini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Odell</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Autonomous Agents Multi-Agent Systems</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="page" from="5" to="30" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">An agent-based model for plausible wayfinding in pedestrian simulation</title>
		<author>
			<persName><forename type="first">G</forename><surname>Vizzari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Crociani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Bandini</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.engappai.2019.103241</idno>
		<ptr target="https://doi.org/10.1016/j.engappai.2019.103241" />
	</analytic>
	<monogr>
		<title level="j">Eng. Appl. Artif. Intell</title>
		<imprint>
			<biblScope unit="volume">87</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Schulman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Wolski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Dhariwal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Radford</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Klimov</surname></persName>
		</author>
		<idno>CoRR abs/1707.06347</idno>
		<ptr target="http://arxiv.org/abs/1707.06347" />
		<title level="m">Proximal policy optimization algorithms</title>
				<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">Emergence of locomotion behaviours in rich environments</title>
		<author>
			<persName><forename type="first">N</forename><surname>Heess</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Tb</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Sriram</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Lemmon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Merel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Wayne</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Tassa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Erez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M A</forename><surname>Eslami</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Riedmiller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Silver</surname></persName>
		</author>
		<idno>CoRR abs/1707.02286</idno>
		<ptr target="http://arxiv.org/abs/1707.02286" />
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Learning symmetric and low-energy locomotion</title>
		<author>
			<persName><forename type="first">W</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Turk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">K</forename><surname>Liu</surname></persName>
		</author>
		<idno type="DOI">10.1145/3197517.3201397</idno>
		<ptr target="https://doi.org/10.1145/3197517.3201397" />
	</analytic>
	<monogr>
		<title level="j">ACM Trans. Graph</title>
		<imprint>
			<biblScope unit="volume">37</biblScope>
			<biblScope unit="page">12</biblScope>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Trust region policy optimization</title>
		<author>
			<persName><forename type="first">J</forename><surname>Schulman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Levine</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Abbeel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">I</forename><surname>Jordan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Moritz</surname></persName>
		</author>
		<ptr target="http://proceedings.mlr.press/v37/schulman15.html" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 32nd International Conference on Machine Learning, ICML 2015</title>
				<editor>
			<persName><forename type="first">F</forename><forename type="middle">R</forename><surname>Bach</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><forename type="middle">M</forename><surname>Blei</surname></persName>
		</editor>
		<meeting>the 32nd International Conference on Machine Learning, ICML 2015<address><addrLine>Lille, France</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2015-07-11">6-11 July 2015. 2015</date>
			<biblScope unit="volume">37</biblScope>
			<biblScope unit="page" from="1889" to="1897" />
		</imprint>
	</monogr>
	<note>JMLR Workshop and Conference Proceedings</note>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Curiosity-driven exploration by selfsupervised prediction</title>
		<author>
			<persName><forename type="first">D</forename><surname>Pathak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Agrawal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">A</forename><surname>Efros</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Darrell</surname></persName>
		</author>
		<ptr target="http://proceedings.mlr.press/v70/pathak17a.html" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 34th International Conference on Machine Learning, ICML 2017</title>
				<editor>
			<persName><forename type="first">D</forename><surname>Precup</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Y</forename><forename type="middle">W</forename><surname>Teh</surname></persName>
		</editor>
		<meeting>the 34th International Conference on Machine Learning, ICML 2017<address><addrLine>Sydney, NSW, Australia; PMLR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2017-08-11">6-11 August 2017. 2017</date>
			<biblScope unit="volume">70</biblScope>
			<biblScope unit="page" from="2778" to="2787" />
		</imprint>
	</monogr>
	<note>Proceedings of Machine Learning Research</note>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Emergent behaviors and scalability for multi-agent reinforcement learning-based pedestrian models</title>
		<author>
			<persName><forename type="first">F</forename><surname>Martinez-Gil</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lozano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Fernández-Rebollo</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.simpat.2017.03.003</idno>
	<ptr target="https://doi.org/10.1016/j.simpat.2017.03.003" />
	</analytic>
	<monogr>
		<title level="j">Simul. Model. Pract. Theory</title>
		<imprint>
			<biblScope unit="volume">74</biblScope>
			<biblScope unit="page" from="117" to="133" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
