<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Towards Fast Visual Explanations of Local Path Planning with LIME and GAN</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Amar</forename><surname>Halilović</surname></persName>
							<email>amar.halilovic@uni-ulm.de</email>
							<affiliation key="aff0">
								<orgName type="department">Institute of Artificial Intelligence</orgName>
								<orgName type="institution">Ulm University</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Senka</forename><surname>Krivić</surname></persName>
							<email>senka.krivic@etf.unsa.ba</email>
							<affiliation key="aff1">
								<orgName type="department">Faculty of Electrical Engineering</orgName>
								<orgName type="institution">University of Sarajevo</orgName>
							</affiliation>
						</author>
						<title level="a" type="main">Towards Fast Visual Explanations of Local Path Planning with LIME and GAN</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">268BB0DB5A29506B3F16516130172388</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:47+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Robotics</term>
					<term>Path Planning</term>
					<term>Explainable Artificial Intelligence</term>
					<term>Explainability</term>
					<term>Interpretability</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>As robots become a more significant part of humans' daily lives, bridging the gap between robot actions and human understanding of what robots do and how they make their decisions becomes challenging. We present an approach to local navigation explanation based on Local Interpretable Model-agnostic Explanations (LIME), a popular approach from the Explainable Artificial Intelligence (XAI) community for explaining individual predictions of black-box models. We show how LIME can be applied to a robot's local path planner. Moreover, we show how a Generative Adversarial Network (GAN) can be trained and used for fast explanation generation. We also analyze the quality and runtime of GAN explanations and present a tool for visualizing these explanations online as the robot navigates.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Robots in social environments raise the requirement for explainability of robot behavior <ref type="bibr" target="#b0">[1]</ref>. As robots' presence in society grows, this requirement becomes more pronounced. The introduction of the "Right to explanation" <ref type="bibr" target="#b1">[2]</ref> in the European Union as a part of the General Data Protection Regulation (GDPR) <ref type="bibr" target="#b2">[3]</ref> underlines the human right to explanation in the face of machines making decisions that affect humans. Current decision-making methods in robotics largely lack explainability and thus slow the adoption of robots in important tasks. A lack of explainability can also become a safety issue when robots behave unexpectedly, putting humans in highly sensitive environments at risk.</p><p>We address explainability in robotics by focusing on explainable robot navigation in social environments: Imagine a robot navigating in a known environment with the possibility of encountering humans and obstacles. Local path planners allow robots to follow a global path plan while dynamically reacting to unexpected occurrences. Some of the robot's decisions may require abrupt stops, changes of direction, or path deviations, surprising or even scaring nearby people. This can lead to trust loss, which needs to be mitigated <ref type="bibr" target="#b3">[4]</ref>. One mitigation strategy is explanation: we want to mitigate trust loss by enabling robots to explain their navigational choices. Using Local Interpretable Model-agnostic Explanations (LIME) <ref type="bibr" target="#b4">[5]</ref>, an established method from Explainable Artificial Intelligence (XAI) <ref type="bibr" target="#b5">[6]</ref>, we demonstrate how a robot can generate visual explanations of its local decision-making in path planning and obstacle avoidance. 
To approach explanation generation in real time, we train a Generative Adversarial Network (GAN) <ref type="bibr" target="#b6">[7]</ref> model on a dataset produced by LIME. We demonstrate how the trained GAN model generates visual explanations of local path plans.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Technical background</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Local Interpretable Model-agnostic Explanations (LIME)</head><p>LIME <ref type="bibr" target="#b4">[5]</ref> is a model-agnostic local XAI technique that explains predictions of a black-box model by learning an interpretable model around the instance of interest. The instance of interest can be anything that is an input to an AI model, be it text, numerical data, or images. We focus on visual explanations and use an image (viz., the local costmap, see below) as the instance of interest. LIME for images <ref type="foot" target="#foot_0">1</ref> takes the input image and partitions it into segments, called superpixels, thereby creating interpretable features. Then, it perturbs the interpretable features, turning them off to generate perturbed samples (perturbations) in the neighborhood of the instance of interest. For every perturbation, LIME queries the black-box model and thereby builds a local dataset of perturbed neighbor images and the respective black-box predictions. On this dataset, LIME trains an interpretable model, viz., a weighted linear regression model. The explanation is obtained by interpreting the coefficients of the trained linear model: The importance of each segment in the image for the behavior of the black-box model is represented by one coefficient in the linear model. Depending on the sign of the coefficient, the interpretable feature (viz., the segment in the image) positively or negatively affects the black-box model's prediction. Hence, to apply LIME to explain local navigation visually, one needs to provide a suitable method for computing a segmentation of a local costmap (viz., the interpretable features). Moreover, the output of the local planner has to be interpreted as the prediction of some black-box model.</p></div>
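The segment-perturb-query-fit loop described above can be sketched in a few lines. This is an illustrative reimplementation, not the lime library itself; the proximity kernel, the sample count, and the zero value used for "turned off" segments are simplifying assumptions:

```python
import numpy as np

def lime_image_explain(image, segments, black_box, n_samples=200,
                       kernel_width=0.25, seed=0):
    """Sketch of the LIME-for-images loop: perturb segments, query the
    black box, fit a weighted linear model, return its coefficients."""
    rng = np.random.default_rng(seed)
    n_seg = int(segments.max()) + 1
    # Binary interpretable features: 1 = segment kept, 0 = turned off.
    z = rng.integers(0, 2, size=(n_samples, n_seg))
    z[0] = 1  # keep the unperturbed instance in the local dataset
    preds = np.empty(n_samples)
    weights = np.empty(n_samples)
    for i, row in enumerate(z):
        perturbed = image.copy()
        for s in range(n_seg):
            if row[s] == 0:
                perturbed[segments == s] = 0.0  # replace with "free space"
        preds[i] = black_box(perturbed)
        # Proximity kernel: samples closer to the original weigh more.
        dist = 1.0 - row.mean()
        weights[i] = np.exp(-(dist ** 2) / kernel_width ** 2)
    # Weighted least squares (intercept column appended).
    X = np.column_stack([z, np.ones(n_samples)])
    sw = np.sqrt(weights)
    coef, *_ = np.linalg.lstsq(X * sw[:, None], preds * sw, rcond=None)
    return coef[:n_seg]  # one importance value per segment
```

A positive coefficient means the segment pushes the black-box output up, a negative one means it pushes it down, mirroring the sign-based interpretation above.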
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Generative Adversarial Networks (GANs)</head><p>GANs were introduced by Ian Goodfellow et al. <ref type="bibr" target="#b6">[7]</ref> as a deep learning framework for the estimation of generative models. Estimation is done by an adversarial process in which a generative model, the Generator (G), and a discriminative model, the Discriminator (D), are trained concurrently. G generates new samples by learning the training data distribution, while D estimates whether a provided sample comes from the training data or was produced by G. Mirza and Osindero <ref type="bibr" target="#b7">[8]</ref> introduced the conditional GAN (cGAN), where G and D are conditioned on some information. Isola and colleagues <ref type="bibr" target="#b8">[9]</ref> show how cGAN can be used for image-to-image translation by conditioning on images. G is trained to learn the translation between input and output images and fool D, while D learns to classify output images as real (coming from the training dataset) or fake (generated by G). In our work, we employ their pix2pix cGAN architecture<ref type="foot" target="#foot_1">2</ref> to achieve fast explanation generation for local navigation decisions. D is trained by minimizing the negative log-likelihood of identifying real and fake images conditioned on input images, while G is trained using the adversarial loss of D (whether it fools the discriminator or not) and an L1 loss (mean absolute per-pixel difference between real and fake images), which are combined into a composite loss function. We condition G and D on local costmaps (see Fig. <ref type="figure" target="#fig_1">1b</ref>,1f,1j) as inputs and explanation images (LIME outputs) (see Fig. <ref type="figure" target="#fig_1">1c</ref>,1g,1k) as outputs. Both input and output images include (besides obstacle information) the robot's location, the local plan, and the global plan.</p></div>
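The composite generator objective described above (adversarial term plus weighted L1 term) can be written out as a small numerical sketch. The function name is ours, and the non-saturating log form of the adversarial term plus the default weighting of 100 are assumptions in the spirit of pix2pix, not code from its implementation:

```python
import numpy as np

def generator_loss(d_fake, fake_img, real_img, lam=100.0):
    """Composite pix2pix-style generator objective: an adversarial term
    (G is rewarded when D scores its fakes as real) plus a
    lambda-weighted L1 distance to the ground-truth explanation."""
    eps = 1e-12  # numerical guard for the logarithm
    adv = -np.mean(np.log(d_fake + eps))       # push D(fake) towards 1
    l1 = np.mean(np.abs(fake_img - real_img))  # mean absolute pixel error
    return adv + lam * l1
```

The L1 term keeps the generated explanation close to the LIME ground truth pixel by pixel, while the adversarial term keeps it looking like a plausible explanation image overall.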
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Experiment I: Explanations with LIME</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Technical Set-Up</head><p>Our set-up is situated in the context of the ROS navigation stack <ref type="bibr" target="#b9">[10]</ref>. A global path planner has generated a global path plan for the robot to navigate to a specified goal position. For path following and obstacle avoidance, a local path planner takes the local costmap and the global path as input and outputs a local path (in terms of a velocity vector) for the robot to execute. For LIME to be applicable, the black-box behavior needs to be deterministic. Therefore, we do not employ sampling-based planners, such as DWA or RRT, but instead employ the TEB planner <ref type="bibr" target="#b10">[11]</ref>. To use LIME for generating visual explanations of local path plans in terms of obstacles in the robot's local neighborhood, we use the local costmap as the instance of interest and the TEB planner as the black box that takes that costmap as input and outputs a local path. LIME first segments the local costmap into obstacle segments, the interpretable features, using the SLIC <ref type="bibr" target="#b11">[12]</ref> segmentation algorithm. In the second step, LIME perturbs the segmented costmap by turning segments off, i.e., replacing them with free space. The perturbed local costmap, together with the global plan, the robot's footprint, and its current velocities, forms the input to the TEB planner, which then outputs a local plan for the perturbation at hand. The deviation of the so-calculated local plan from the global plan is taken as the target for the interpretable model and is calculated as the sum of the minimal point-to-point L2 distances between the local and the global plan.</p><p>We get an explanation image for each local navigation decision by coloring segments based on their LIME coefficients. 
The sign of the coefficient dictates the color: positively weighted segments are colored green, and negatively weighted segments are colored red. A green-colored segment contributed positively to the deviation; that is, green indicates "without that segment, the local plan would deviate less from the global plan". Conversely, a red-colored segment indicates "without that segment, the local plan would deviate more from the global plan". Color intensity is proportional to the coefficient's magnitude, with intensities in the range [0, 255] in RGB color space.</p></div>
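The deviation target and the coefficient-to-color mapping described in this section might be computed along these lines. The function names and the normalization by the maximal coefficient magnitude are our illustrative assumptions:

```python
import numpy as np

def plan_deviation(local_plan, global_plan):
    """Deviation target fed to LIME: sum over local-plan points of the
    minimal L2 distance to any global-plan point."""
    local_plan = np.asarray(local_plan, dtype=float)
    global_plan = np.asarray(global_plan, dtype=float)
    diffs = local_plan[:, None, :] - global_plan[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)   # pairwise point distances
    return dists.min(axis=1).sum()          # closest global point per local point

def segment_color(coef, coef_max):
    """Map a LIME coefficient to RGB: green for positive (increases
    deviation), red for negative, intensity proportional to magnitude."""
    intensity = int(round(255 * min(1.0, abs(coef) / coef_max)))
    if coef >= 0:
        return (0, intensity, 0)   # green segment
    return (intensity, 0, 0)       # red segment
```

A local plan that coincides with the global plan yields a deviation of zero, so every colored segment visualizes how much an obstacle pulls the local plan away from that baseline.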
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Qualitative results</head><p>Figures <ref type="figure" target="#fig_1">1a, 1e</ref>, and 1i show three characteristic local navigation cases (C1, C2, and C3) in our lab. The robot (a TIAGo from PAL Robotics) tries to follow the global plan that leads it through the doorway. Figures <ref type="figure" target="#fig_1">1b, 1f, and 1j</ref> show the local costmaps for the three cases, with obstacles in black, the robot's location in white, and free space in grey. In C1, the local plan (yellow dots) mostly coincides with the global plan (blue dots), while in C2, the starting and ending points of the local plan could not be connected into a joint trajectory. In C3, the local plan deviates from the global plan. The LIME explanation explains how obstacles and/or parts of obstacles contribute to the deviation. From the explanation images 1c, 1g, and 1k we read:</p><p>• C1: "The right (green) wall segment increases the deviation, while the left (red) wall segment decreases it, squeezing the local plan towards the doorway."</p><p>• C2: "The (green) obstacle increases the deviation because if it were not there, the robot would follow the global plan. If the (red) wall were not there, the local path planner could create a connected local plan and deviate from the global plan."</p><p>• C3: "Both obstacles increase the deviation, but the round one does so more significantly. If it were not there, the robot would follow the global plan. If the rectangular obstacle were not there, the robot would still deviate, but less."</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Quantitative Results</head><p>We analyze explanation runtime. LIME's runtime is generally high and increases linearly with the number of perturbations, as shown in Fig. <ref type="figure" target="#fig_3">2a</ref>, where the runtimes of the most important parts of LIME are plotted. The planner total time accounts for the largest share of the total explanation runtime and includes the preparation of input data for the planner (TEB), the planner's calculation time for the paths of all perturbations, and the collection of the planner's outputs. The planner calculation time, in turn, dominates the planner total time. As segmentation only needs to be done once per explanation, its runtime is unaffected by the number of perturbations. LIME thus has a clear limitation: on its own, it cannot be used for real-time explanations. Fast-changing and socially complex environments like streets or places with people might require real-time explanations, yet even with a small number of perturbations (which reduces explanation quality), not every TEB call (every 200 ms) can be explained in real time.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Experiment II: LIME Explanations with GAN</head><p>The first experiment showed that LIME can generate meaningful visual explanations but that the generation procedure is too slow for online usage. To approach explanation generation in real time, we use a GAN as the explanation method. Our main idea is to use LIME only offline to generate a dataset of pairs of local costmaps and their respective explanations. With this dataset, we train a GAN for image-to-image translation. This way, explanation generation becomes independent of the number of perturbation samples.</p><p>We trained an image-to-image GAN for 200 epochs with 240 training image pairs, 60 validation image pairs, and 60 test image pairs. The image-pairs dataset was generated using LIME with the configuration outlined in the description of our first experiment; see Section 3.1. Among the GAN settings, resnet_9blocks is used as the generator architecture; other settings are kept as in the standard pix2pix implementation. The trained GAN model generates an explanation image by taking as input a local costmap with the robot's position and the local and global plans plotted on it. In the following, we refer to the trained GAN model simply as GAN.</p></div>
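pix2pix-style training pipelines typically consume aligned image pairs. A minimal sketch of how costmap/explanation pairs could be assembled; the side-by-side concatenation convention is an assumption based on the standard pix2pix aligned-dataset format, and the helper name is ours:

```python
import numpy as np

def make_aligned_pair(costmap_img, explanation_img):
    """Join a costmap (GAN input) and its LIME explanation (GAN target)
    side by side into one aligned training image, as pix2pix-style
    aligned datasets expect. Both images must share shape."""
    a = np.asarray(costmap_img)
    b = np.asarray(explanation_img)
    assert a.shape == b.shape, "input and target must match in size"
    return np.concatenate([a, b], axis=1)  # width-wise: [A | B]
```

Each LIME run thus contributes one aligned image to the training, validation, or test split.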
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Results</head><p>We assess the quality of the visual explanations generated by the GAN through human visual examination, a recommended practice <ref type="bibr" target="#b12">[13,</ref><ref type="bibr" target="#b13">14]</ref>. Figures <ref type="figure" target="#fig_1">1d, 1h, and 1l</ref> show GAN explanations for the use cases C1, C2, and C3, respectively. One can see distortions in the GAN explanations, which, however, do not harm the conveyed meaning. In C1, the colors of the colored segments are not as sharp as in LIME, but the explanation does not suffer qualitatively. The GAN explanation for C2 has similar properties, with somewhat different coloring of the less important segments on the right wall. Still, this does not hamper the explanation much, as the main contributors are still clearly distinguished. In the GAN explanation for C3, the green color is somewhat duller and blurred compared to LIME, but the contributions of the segments remain visible. We report a mean GAN calculation runtime of 0.25 seconds and a mean GAN model loading runtime of 0.36 seconds. Hence, once the GAN model is loaded, it can output four explanations per second.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Demonstrator: Visual Explanations with RViz</head><p>We demonstrate how GAN explanations can be visualized in RViz in real time in Fig. <ref type="figure" target="#fig_3">2b</ref>. The GAN output is published as a PointCloud2 message and overlaid on the map view in RViz as a local explanation layer. The GAN model is loaded once at the beginning of navigation and called periodically with every new local plan produced by TEB, allowing the local explanation layer to refresh at 4 Hz. This tool thus enables humans to observe which parts of the environment the robot considers important for its navigational decisions. We envision the tool being used for inspection and debugging, for teaching path planning, and for demonstrating the robot's internal reasoning processes to interested laypeople.</p></div>
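Mapping colored explanation pixels into map-frame points for such an overlay might look as follows. The helper name, the grid-to-world convention, and the z offset are hypothetical, and the actual PointCloud2 packing via sensor_msgs is omitted here:

```python
import numpy as np

def explanation_to_points(expl_rgb, origin_xy, resolution, z=0.05):
    """Turn colored explanation pixels into (x, y, z, r, g, b) rows,
    ready to be packed into a PointCloud2-style message. Uncolored
    (all-zero) pixels are skipped so free space stays transparent."""
    h, w, _ = expl_rgb.shape
    rows = []
    for i in range(h):
        for j in range(w):
            r, g, b = expl_rgb[i, j]
            if r == 0 and g == 0 and b == 0:
                continue  # nothing to overlay for this cell
            # Cell center in the map frame, given costmap origin/resolution.
            x = origin_xy[0] + (j + 0.5) * resolution
            y = origin_xy[1] + (i + 0.5) * resolution
            rows.append((x, y, z, int(r), int(g), int(b)))
    return rows
```

Publishing such rows at every new local plan would yield the 4 Hz refresh of the explanation layer described above.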
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.">Discussion</head><p>The GAN achieves large runtime savings compared to LIME and approaches the upper real-time performance limit of 200 ms. Most importantly, explanations generated by the GAN do not depend on any image segmentation preprocessing, and the performance-hungry process of replanning the local path for every input image perturbation is no longer needed. This makes near real-time explanations possible even when many obstacles are considered as potential explanations, and thus allows for explanation generation in highly dynamic environments.</p><p>One drawback of the GAN model is occasional distortion in the visual explanations. However, these distortions are not too harmful, as they are local and do not significantly affect the colors and shades of the segments. A limitation of our work is that we have not systematically analyzed how well the GAN explanations generalize to very complex environments. The GAN explanation procedure makes no assumptions about the robot platform and its kinodynamic constraints. It also does not assume a specific underlying local planner; it generates the explanation based only on an image containing the local obstacles along with the local and the global plan. Thus, it may turn out that the GAN has to be retrained for every robotic platform. Another limitation is that our explanation approach relies on the underlying local planner being deterministic. This is necessary because the procedure must be certain that a variation in the local path is due to the obstacles in the surroundings rather than random fluctuations. In the future, we will also investigate how non-deterministic path planners can be explained.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head></head><label></label><figDesc>(a) C1: robot (b) C1: costmap (c) C1: LIME expl. (d) C1: GAN expl. (e) C2: robot (f) C2: costmap (g) C2: LIME expl. (h) C2: GAN expl. (i) C3: robot (j) C3: costmap (k) C3: LIME expl. (l) C3: GAN expl.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: C1: A free doorway allows TIAGo to follow the initial trajectory and move through the doorway. Because TIAGo is too close to the right wall, it has to adjust its position and proceed through the doorway. C2: The same doorway is blocked with a chair, so TIAGo cannot progress through the doorway. It stops and rotates in place to try to go left. C3: A table, box, and wall form two doorways where the right doorway through which TIAGo should go is suddenly blocked by the trash can. The robot must deviate from the initial trajectory, traversing through the free doorway.</figDesc><graphic coords="4,200.14,295.98,90.01,75.01" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: a) Total runtimes of different parts of the explanation method. b) Visualization of the visual explanations in RViz in real time, as generated by the GAN: TIAGo is adapting its initial trajectory, deviating from the global plan, which would lead to a collision with the wall.</figDesc><graphic coords="5,92.21,225.09,204.18,141.73" type="bitmap" /></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://github.com/marcotcr/lime</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">shorturl.at/giFUX</note>
		</body>
		<back>


			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Explaining robot actions</title>
		<author>
			<persName><forename type="first">M</forename><surname>Lomas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Chevalier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">V</forename><surname>Cross</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">C</forename><surname>Garrett</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Hoare</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kopack</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction</title>
				<meeting>the seventh annual ACM/IEEE international conference on Human-Robot Interaction</meeting>
		<imprint>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="187" to="188" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">The right to explanation, explained</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">E</forename><surname>Kaminski</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Berkeley Tech. LJ</title>
		<imprint>
			<biblScope unit="volume">34</biblScope>
			<biblScope unit="page">189</biblScope>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">The EU General Data Protection Regulation (GDPR): A Practical Guide, 1st ed.</title>
		<author>
			<persName><forename type="first">P</forename><surname>Voigt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Von Dem Bussche</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Cham</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page" from="10" to="5555" />
			<date type="published" when="2017">2017</date>
			<publisher>Springer International Publishing</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Taxonomy of trust-relevant failures and mitigation strategies</title>
		<author>
			<persName><forename type="first">S</forename><surname>Tolmeijer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Weiss</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hanheide</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Lindner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Powers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Dixon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Tielman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of HRI 2020</title>
				<meeting>HRI 2020</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">&quot;Why should I trust you?&quot; Explaining the predictions of any classifier</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">T</forename><surname>Ribeiro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Singh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Guestrin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining</title>
				<meeting>the 22nd ACM SIGKDD international conference on knowledge discovery and data mining</meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="1135" to="1144" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Explanation in artificial intelligence: Insights from the social sciences</title>
		<author>
			<persName><forename type="first">T</forename><surname>Miller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial intelligence</title>
		<imprint>
			<biblScope unit="page" from="1" to="38" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Generative adversarial nets</title>
		<author>
			<persName><forename type="first">I</forename><surname>Goodfellow</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Pouget-Abadie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Mirza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Warde-Farley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ozair</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Courville</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Bengio</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Advances in neural information processing systems</title>
		<imprint>
			<biblScope unit="volume">27</biblScope>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><surname>Mirza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Osindero</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1411.1784</idno>
		<title level="m">Conditional generative adversarial nets</title>
				<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Image-to-image translation with conditional adversarial networks</title>
		<author>
			<persName><forename type="first">P</forename><surname>Isola</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J.-Y</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">A</forename><surname>Efros</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE Conference on Computer Vision and Pattern Recognition</title>
				<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">ROS navigation: concepts and tutorial, Robot Operating System (ROS): The Complete Reference</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">L</forename><surname>Guimarães</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">S</forename><surname>De Oliveira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Fabro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Becker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">A</forename><surname>Brenner</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="121" to="160" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Trajectory modification considering dynamic constraints of autonomous robots</title>
		<author>
			<persName><forename type="first">C</forename><surname>Rösmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Feiten</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Wösch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Hoffmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Bertram</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ROBOTIK 2012; 7th German Conference on Robotics</title>
				<imprint>
			<publisher>VDE</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="1" to="6" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">SLIC superpixels compared to state-of-the-art superpixel methods</title>
		<author>
			<persName><forename type="first">R</forename><surname>Achanta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Shaji</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Smith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Lucchi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Fua</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Süsstrunk</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE transactions on pattern analysis and machine intelligence</title>
		<imprint>
			<biblScope unit="volume">34</biblScope>
			<biblScope unit="page" from="2274" to="2282" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Improved techniques for training GANs</title>
		<author>
			<persName><forename type="first">T</forename><surname>Salimans</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Goodfellow</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Zaremba</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Cheung</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Radford</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Chen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Advances in neural information processing systems</title>
		<imprint>
			<biblScope unit="volume">29</biblScope>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Pros and cons of GAN evaluation measures</title>
		<author>
			<persName><forename type="first">A</forename><surname>Borji</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computer Vision and Image Understanding</title>
		<imprint>
			<biblScope unit="volume">179</biblScope>
			<biblScope unit="page" from="41" to="65" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
