<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">A Novel Semantic SLAM Framework for Humanlike High-Level Interaction and Planning in Global Environment</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Sumaira</forename><surname>Manzoor</surname></persName>
							<email>sumaira11@skku.edu</email>
							<affiliation key="aff0">
								<orgName type="department">College of Information and Communication Engineering</orgName>
								<orgName type="institution">Sungkyunkwan University</orgName>
								<address>
									<country key="KR">South Korea</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Sung-Hyeon</forename><surname>Joo</surname></persName>
							<email>sh.joo@skku.edu</email>
							<affiliation key="aff0">
								<orgName type="department">College of Information and Communication Engineering</orgName>
								<orgName type="institution">Sungkyunkwan University</orgName>
								<address>
									<country key="KR">South Korea</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Yuri</forename><forename type="middle">Goncalves</forename><surname>Rocha</surname></persName>
							<email>yurirocha@skku.edu</email>
							<affiliation key="aff0">
								<orgName type="department">College of Information and Communication Engineering</orgName>
								<orgName type="institution">Sungkyunkwan University</orgName>
								<address>
									<country key="KR">South Korea</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Hyun-Uk</forename><surname>Lee</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">College of Information and Communication Engineering</orgName>
								<orgName type="institution">Sungkyunkwan University</orgName>
								<address>
									<country key="KR">South Korea</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Tae-Yong</forename><surname>Kuc</surname></persName>
							<email>tykuc@skku.edu</email>
							<affiliation key="aff0">
								<orgName type="department">College of Information and Communication Engineering</orgName>
								<orgName type="institution">Sungkyunkwan University</orgName>
								<address>
									<country key="KR">South Korea</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">A Novel Semantic SLAM Framework for Humanlike High-Level Interaction and Planning in Global Environment</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">E3B7F747549D161CCE397F6D78451E79</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T12:04+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In this paper, we propose a novel semantic SLAM framework based on human cognitive skills and capabilities that endows the robot with high-level interaction and planning in a real-world dynamic environment. Our framework aims at two contributions: 1) a semantic map resulting from the integration of SLAM with the Triplet Ontological Semantic Model (TOSM); 2) a human-like robotic perception system, optimal and biologically plausible, for place and object recognition in dynamic environments using a semantic descriptor and a CNN. We demonstrate the effectiveness of our proposed framework using a mobile robot with a ZED camera (3D sensor) and a laser range finder (2D sensor) in a real-world indoor environment. Experimental results demonstrate the practical merit of our proposed framework.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Building an autonomous mobile robot with human-like intelligence for semantic map construction and cognitive vision-based perception are two of the most significant challenges for long-term planning and high-level interaction in indoor environments.</p><p>The problem of determining an appropriate method for building and maintaining a map that encodes both causal and world knowledge has become an active research area in robotics. Many studies in the last decades have focused on spatial representation of the environment for building metric, topological and appearance-based maps. However, semantic mapping of the environment has not been studied as intensively. The information provided by conventional mapping approaches assists only in robot navigation, while qualitative information about the structure of the environment for task planning is not generated. For instance, a metric map that contains a geometric representation of the environment provides the shape of a room without any semantic understanding to indicate whether it is an office or a lecture room. Our proposed framework tackles this issue by constructing a map that combines spatial representation with semantic knowledge of the environment and provides the robot with autonomous navigation for performing high-level tasks without human intervention in a global dynamic environment.</p><p>The semantic interpretation of the environment also plays an essential role in improving the perception ability of the robot for performing real-world operations, such as object and place recognition, in a more reliable and intelligent manner.</p><p>The 1st International Workshop on the Semantic Descriptor, Semantic Modeling and Mapping for Humanlike Perception and Navigation of Mobile Robots toward Large Scale Long-Term Autonomy (SDMM19)</p><p>
Nowadays, approaches for robotic perception range from traditional computer vision using handcrafted features to advanced deep learning with convolutional neural networks, or a combination of both. However, these artificial vision algorithms have practical limitations for real-time processing <ref type="bibr">[bohg17]</ref>. Therefore, biologically plausible algorithms combined with analogies of artificial perception are gaining attention. Our proposed framework handles these challenges by developing an effective solution that gives the robot the potential of human-like vision for recognizing objects and places using semantic perception. The primary goal of our semantic framework is twofold: to develop a semantic perception system and to enable the robot to incrementally build a consistent semantic map while simultaneously determining its location within that map.</p><p>Our proposed semantic SLAM framework makes an original contribution to three important research areas in robotics with the following characteristics:</p><p>• A human-like brain GPS system for building semantic maps, with emphasis on a qualitative description of the robot's surroundings</p><p>• A human-cognition-based TOSM with deeper domain knowledge acquired through semantic, topological and geometric properties of objects, providing the robot a higher degree of autonomy and intelligence</p><p>• A bio-inspired semantic perception system combined with object and place recognition that allows the robot to relate what it perceives using a semantic descriptor</p><p>This paper is organized as follows. In Section II, we provide an extensive literature review of semantic mapping, semantic SLAM and perception systems for autonomous mobile robots. In Section III, we explain the key features of our proposed framework with complete details of the major components of the TOSM and the recognition model. 
In Section IV, we examine the effectiveness of our proposed framework in a simulated environment as an illustration of its contents. Finally, we conclude our work and discuss future directions in Section V.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Related Work</head><p>We focus our review on studies of the four topics that we consider most closely related to our work: a) semantic SLAM, b) ontology, c) semantic perception for object and place recognition, and d) semantic descriptors.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Semantic SLAM</head><p>This section gives an understanding of SLAM and explains the structure of semantic SLAM, its concepts, and related work in this area.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A. Semantic Mapping</head><p>In the last few years, embedding the map with semantic information has become an active research area, motivated by human-like robot interaction and understanding of the environment. High-level features in a semantic map are used to model human concepts about objects, places and the relationships between them <ref type="bibr" target="#b1">[Capobianco15]</ref>. Semantic mapping has recently attracted much attention in the research community, which divides semantic mapping approaches into three groups based on objects, appearance and activity <ref type="bibr" target="#b2">[Pendleton17]</ref>. Object-based semantic mapping <ref type="bibr" target="#b3">[Vasudevan08]</ref> methods depend on the occurrence of key objects to perform object recognition and classification tasks through semantic understanding of the environment. Appearance-based semantic mapping approaches take sensor readings and interpret them to construct semantic information about the environment. Some studies use geometric features <ref type="bibr" target="#b4">[Burgard07]</ref> and vision fused with LIDAR data for world understanding and classification <ref type="bibr" target="#b5">[Nüchter08]</ref> tasks. Activity-based semantic mapping <ref type="bibr" target="#b6">[Xie13]</ref> techniques use information about external activities (e.g., sidewalks versus roads) around the robot for semantic understanding and contextual classification of the environment. These techniques are at a formative stage compared to the other two semantic mapping methods.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>B. Semantic SLAM: Concepts</head><p>The large number of concepts, and the relationships among them, in a real-world environment lead to several task-driven decisions that depend on the level of semantic organization and the context of the environment in which the robot performs its task. The literature shows two major aspects of constructing semantic relationships <ref type="bibr">[cadena16]</ref>: detail and organization. The detail of a semantic concept significantly affects the complexity of the problem at different levels. For example, a robot needs only coarse categories such as rooms, doors and corridors to perform the task "go from the first room to the second room", while for another task, "pick up the glass", it needs to know finer categories such as table, glass or any other object. Semantic concepts are not limited, because a single entity or object in a real-world environment has many properties or concepts. For example, "movable" and "sittable" are properties of a chair, while "movable" and "unsittable" are properties of a table. Both the table and the chair belong to the same class, "Furniture"; they share the "movable" property but differ in usability. This multiplicity of concepts is handled by a flat or hierarchical organization of properties.</p></div><div xmlns="http://www.tei-c.org/ns/1.0"><head>C. Semantic SLAM: Object/Place Recognition</head><p>Semantics is added to SLAM by incorporating human spatial concepts into the maps. Humans locate themselves by object-centric concepts instead of metric information, and they use reference points rather than global coordinates.</p><p>The initial research into semantic mapping uses a direct approach <ref type="bibr" target="#b8">[Lowry16]</ref> in which a metric map built by a traditional SLAM system is segmented into semantic concepts. 
An early work in <ref type="bibr" target="#b9">[Sabourin10]</ref> develops a system for scene understanding via semantic analysis using image segmentation techniques, and its SLAM algorithm is driven by object recognition using human spatial concepts. The work shows that semantic concepts are organized in a coarse-to-fine manner for indoor environments. An online semantic mapping framework <ref type="bibr" target="#b11">[Pronobis12]</ref> for indoor environments combines object observations such as shape, size and room appearance, built on three layers of reasoning, to address the detection and learning of novel properties and room categories for fully self-extendable semantic mapping. The data association problem also arises in metric and semantic SLAM when building a map of an environment with a large number of objects of the same or different classes and scales. This problem is addressed in <ref type="bibr" target="#b12">[Bowman17]</ref> by coupling geometric and semantic observations and taking advantage of object recognition to provide meaningful scene interpretation with semantically labeled landmarks.</p></div>
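The flat versus hierarchical organization of properties discussed above (the chair/table example) can be sketched in a few lines of Python. The class and property names here are illustrative, not taken from the paper's ontology:

```python
# Hypothetical sketch of a hierarchical organization of semantic properties:
# both Chair and Table inherit "movable" from Furniture but differ in
# "sittable", as in the example in the text.

class Entity:
    properties = {"movable": False}

    @classmethod
    def all_properties(cls):
        # Hierarchical organization: walk the class hierarchy and merge
        # inherited properties, letting subclasses override their parents.
        merged = {}
        for base in reversed(cls.__mro__[:-1]):  # skip `object`
            merged.update(getattr(base, "properties", {}))
        return merged

class Furniture(Entity):
    properties = {"movable": True}

class Chair(Furniture):
    properties = {"sittable": True}

class Table(Furniture):
    properties = {"sittable": False}

print(Chair.all_properties())  # {'movable': True, 'sittable': True}
print(Table.all_properties())  # {'movable': True, 'sittable': False}
```

A flat organization would instead attach the full property set to every entity; the hierarchical form states "movable" once on Furniture and overrides only where needed.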
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Ontology</head><p>In recent years, reducing the semantic gap using ontologies has been studied by many researchers. An early study <ref type="bibr" target="#b13">[Durand07]</ref> introduced an ontology-based object recognition approach that assigns semantic meaning to objects through a matching process between concepts and objects. The work in <ref type="bibr" target="#b14">[Ji12]</ref> handles robot task planning in domestic environments at a high symbolic level by combining classical AI approaches with semantic knowledge representation. Its framework is based on a semantic knowledge ontology that represents the robot's primitive actions and a description of the environment. A study in <ref type="bibr" target="#b15">[Riazuelo15]</ref> describes the RoboEarth project, which uses a knowledge-based system to provide web and cloud services to multiple robots. Its semantic mapping system is based on visual SLAM and an ontology that describes the concepts and relations in maps and objects. A robotic system with advanced abilities leads to complexity in its software development. A case study presented in <ref type="bibr" target="#b16">[Saigol15]</ref> addresses this issue by using an ontology as the central data store for processing all information, and shows that a knowledge base makes the robotic system easier to develop, modify and understand. In the last few years, a variety of approaches have been investigated to process sensory information in a dynamic world. Among them, OnPercept <ref type="bibr" target="#b17">[Azevedo18]</ref> is a recent approach based on a cognitive ontology for performing HRI tasks by modeling sensory information. A study <ref type="bibr" target="#b18">[Lee18]</ref> proposes a context query-processing framework using a spatio-temporal context ontology, enabling indoor service robots to adapt to dynamic changes sensed in highly complex environments.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">Perception</head><p>The perception system enables the robot to perceive and reason about its environment. An autonomous mobile robot can perform complex tasks such as object and place recognition, collision avoidance, task planning, decision making, mapping, dynamic interaction, localization, and intelligent reasoning with high accuracy if perception information is carefully processed. A recent study <ref type="bibr" target="#b19">[Sünderhauf18]</ref> has highlighted the fact that robotic perception differs from conventional computer vision: in computer vision, the image output is treated as information, whereas a robotic perception system translates that information into decisions and actions in the real-world environment. Therefore, perception plays a vital role in the success of a goal-driven robotic system. Despite this difference, robot perception incorporates techniques from computer vision, and it is evolving particularly rapidly with recent developments in deep learning networks.</p><p>In real-world applications, endowing a robot with human-like perception for navigation is a challenging task: the robot must recognize scenes and objects while navigating through a dynamic, complex environment and building a 3D map by observing its surroundings. Therefore, regardless of the selected navigation system, object identification and place recognition play a vital role in environment representation and modeling.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A. Object Recognition</head><p>Reliable object recognition is an important early step for a mobile robot to achieve its goal. Real-time object recognition systems work in two stages: offline and online. The offline stage aims at reducing execution time without affecting system efficiency; image pre-processing, feature extraction, segmentation and training are performed in this stage. The online stage runs in real time to ensure high-level interaction between the robot and its surrounding environment; image retrieval, classification, object detection and recognition are examples of processes carried out at this stage.</p><p>A key issue in this context is interaction with objects of different shapes and sizes. Despite significant achievements and the advent of digital cameras, accurate object detection and recognition remains a challenging task in real-world environments. The reasons for this difficulty are occlusions, complex object shapes, variations in geometric and photometric pose, noise and illumination changes.</p><p>Early efforts <ref type="bibr" target="#b20">[Zou19]</ref> to handle this issue are based on template matching. Later approaches include statistical classifiers such as SVM, AdaBoost and neural networks. On the other hand, computationally simple and efficient approaches based on local features, such as scale-invariant descriptors (e.g., SURF, SIFT) and Haar-like features, also exist. However, these methods have limitations: their accuracy depends on the number of features describing an image, segmentation becomes highly complex in real-world scenarios, and they are not robust to relatively large affine transformations. 
In the literature, an alternative is to use Object Action Complexes (OACs) <ref type="bibr" target="#b21">[Petrick08]</ref>, which combine action, object and the learning process to deal with representational difficulties in diverse areas.</p><p>The perception-action relationship based on cognitive understanding has been explored in <ref type="bibr" target="#b22">[Yan14]</ref> by linking both tasks through a memory component. In that study, the perception system uses three sensor modalities (vision, audio and touch), whose data are passed to the memory module to generate motor control signals, and an action unit translates them into robot responses. This intermediate process acts as the robot's brain, improving the recognition task when the mobile robot navigates in an unknown environment. The study of an attention-based cognitive architecture in <ref type="bibr" target="#b23">[Palomino16]</ref> uses reasoning as a bridge between perception and action. The core of this work is the selection of the active task based on context data, and the accomplishment of a task depends on the presence of a specific element in the scene. However, object-based visual attention systems still require considerable effort to accurately detect and categorize different objects. A recent study <ref type="bibr" target="#b24">[Ye17]</ref> presents a vision system for assistive robots that detects and recognizes objects from visual input in real time by computing color, motion and shape cues and combining them in a probabilistic manner.</p><p>However, despite the vast analysis of existing perceptual systems for autonomous mobile robots, a semantic recognition system for robust object recognition in real-world scenarios remains to be addressed.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>B. Place Recognition</head><p>Visual place recognition becomes very challenging in real-world scenarios. Visual place recognition algorithms must therefore enable the autonomous mobile robot to robustly handle variations in the visual environment that occur due to dynamic, geographical and categorical changes <ref type="bibr" target="#b25">[Martinez17]</ref>. The visual appearance of places varies due to illumination changes (day and night) and the moving of furniture or other objects from one place to another. The same place (a room or corridor) might look different from different viewpoints, despite sharing some common visual features. Humans can recognize a room (an office or a kitchen) because of their ability to build categorical models of places; it is much harder for a robot to recognize rooms based on their distinctive features and categories.</p><p>The literature <ref type="bibr" target="#b26">[Ullah08]</ref> shows that contextual understanding of a place is very important for an autonomous mobile robot to perform its task effectively. A mobile robot can interact effectively with its environment if it recognizes the place and has a functional understanding of the area.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.4">Semantic Descriptor</head><p>There have been few empirical investigations into recognizing objects that have semantic similarities in their shapes. A recent study <ref type="bibr" target="#b27">[Tasse16]</ref> addresses this challenge by computing semantic similarities between shapes, images and depth maps using semantic-based descriptors. The central idea is to combine labeled 3D shapes with the semantic information in their labels to generate a semantic-based 3D shape descriptor. An early study <ref type="bibr" target="#b28">[Zen12]</ref> uses enhanced semantic descriptors for complex video scene understanding by embedding semantic information in the visual words. Recent developments in robot localization and mapping have heightened the need for semantic descriptors. A seminal study <ref type="bibr" target="#b29">[Panphattarasap18]</ref> uses a 4-bit binary semantic descriptor (BSD) for robot localization in a 2D map and performs semantic matching. Semantic features such as gaps between buildings and road junctions are detected using a CNN in an urban environment. The purpose of the BSD is to endow the robot with an ability akin to human map reading.</p></div>
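As a rough illustration of how a compact binary semantic descriptor supports localization, the sketch below matches a 4-bit observation against stored map descriptors by Hamming distance. The bit meanings and the single-descriptor matching rule are our simplifying assumptions; the cited BSD system matches sequences of descriptors along a route and detects the features with a CNN.

```python
# Hypothetical sketch of matching 4-bit binary semantic descriptors against
# map locations. Each bit flags one semantic feature a classifier might
# detect, e.g. (illustratively): gap-on-left, gap-on-right, junction-ahead,
# junction-behind.

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two 4-bit descriptors."""
    return bin(a ^ b).count("1")

def localize(observed: int, map_descriptors: dict) -> str:
    """Return the map location whose stored descriptor best matches."""
    return min(map_descriptors,
               key=lambda loc: hamming(observed, map_descriptors[loc]))

# Toy map: location -> 4-bit descriptor.
city_map = {"A": 0b0000, "B": 0b1010, "C": 0b0111}
print(localize(0b1011, city_map))  # "B" (only one bit differs)
```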
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Framework</head><p>Our proposed framework adds semantic techniques to SLAM to cope with the challenges of dynamic environments, providing the robot with advanced perception closer to human vision and improving its world-understanding capabilities for carrying out high-level navigation tasks in complex, unstructured environments. Our framework provides a closer representation of the global environment by defining the Triplet Ontological Semantic Model (TOSM), in which relations between concepts are described to explain the semantic interoperability of the environment.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">TOSM: Triplet Ontological Semantic Model</head><p>We accelerate the implementation of a cognitive system in an autonomous mobile robot by developing the Triplet Ontological Semantic Model (TOSM), which is based on the cognitive process of human perception and the brain's GPS model from neuroscience and physiology research. The main characteristics of the TOSM are:</p><p>• To endow the robot with semantic mapping of the environment based on cognitive architecture modeling</p><p>• To define the relations between domain concepts (knowledge) and their attributes (properties) with a high level of abstraction, and rules to reason based on the task and the environment</p><p>• To model the sensory information for performing task planning</p><p>Our TOSM approach, consisting of three major components for effective representation of domain knowledge and information retrieval in indoor environments, is shown in Figure <ref type="figure" target="#fig_0">1</ref>. The unique characteristics of these three components represent relationship information between different objects that have spatial and non-spatial properties for performing a specific task in the overall robotic environment. The spatial properties represent the concepts of position, shape and size of objects in the robotic environment, while the non-spatial properties determine the object category. We describe complete domain knowledge using the spatial representation of objects. Our proposed TOSM approach enables the robot to semantically map objects and their positions in an unexplored environment by defining explicit, implicit and symbolic models, shown in Figure <ref type="figure" target="#fig_0">1</ref>.</p></div><div xmlns="http://www.tei-c.org/ns/1.0"><head>A. Explicit Model</head><p>The explicit model specifies the spatial representation of entities, such as the shape of an object and its position in the domain (global environment), by extracting all the geometric features of that object and retrieving its physical information from sensors.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>B. Implicit Model</head><p>The implicit model describes the behavior of the robot and the series of actions, such as robot navigation, required to perform a task. This representation also defines the intrinsic relations between entities, gives a semantic interpretation of the environment that cannot be obtained using sensors, and processes fuzzy information to provide effective interaction of the mobile robot with its surroundings along with planning capabilities. Introducing this model in our framework also enables the robot to make high-level decisions by understanding the semantic concepts that constitute task success; for example, it allows the robot to interpret the semantics of an automatic door by understanding its salient events: the door opens and closes automatically on sensing the approach of a person.</p></div><div xmlns="http://www.tei-c.org/ns/1.0"><head>C. Symbolic Model</head><p>We use the symbolic model to encode domain knowledge, describing the semantic descriptions, sequences of actions and complex capabilities of our environment in a language-oriented way. The robot uses this knowledge through relations represented by links between existing entities. Based on the integrated implicit, explicit and symbolic models, the TOSM approach, coexisting with SLAM, allows the robot to perceive, learn, understand and interact with its surroundings based on geometric and semantic information.</p></div>
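The three TOSM models can be pictured as one record per entity. The sketch below is a hypothetical data layout (the field names are ours, not the paper's), showing how the explicit, implicit and symbolic models of the automatic-door example might sit side by side:

```python
# Illustrative TOSM triplet for one entity: the explicit model holds
# sensor-derived geometry, the implicit model holds relations and behavior
# not observable by sensors, and the symbolic model holds the
# language-oriented description.
from dataclasses import dataclass, field

@dataclass
class ExplicitModel:          # spatial representation from sensors
    shape: str
    position: tuple           # (x, y, z) in the global frame

@dataclass
class ImplicitModel:          # semantics not obtainable from sensors
    relations: dict = field(default_factory=dict)
    behaviors: list = field(default_factory=list)

@dataclass
class SymbolicModel:          # language-oriented domain knowledge
    label: str
    description: str

@dataclass
class TOSMEntity:
    explicit: ExplicitModel
    implicit: ImplicitModel
    symbolic: SymbolicModel

auto_door = TOSMEntity(
    ExplicitModel(shape="rectangular", position=(4.0, 1.5, 0.0)),
    ImplicitModel(relations={"connectedTo": ["corridor1", "room1"]},
                  behaviors=["opens and closes on sensing an approaching person"]),
    SymbolicModel(label="AutomaticDoor",
                  description="door that opens automatically"),
)
print(auto_door.symbolic.label)  # AutomaticDoor
```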
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">TOSM on-demand database</head><p>We design a robot-mounted on-demand database that constructs a semantic model of the environment, providing the robot with semantic mapping and perception closer to human cognitive skills using the TOSM. Our TOSM on-demand database approach has three main practical advantages:</p><p>• It eliminates the need to store several different maps</p><p>• It generates maps only when they are required for the robot to perform the assigned task in the global dynamic environment</p><p>• It semantically enriches the database by adding conceptual meaning to data and relationships</p><p>We store environmental and behavioral information, together with robot knowledge and map data, in the on-demand database. The robot uses a cloud database to plan behavioral actions and the on-demand database to build a dynamic driving map according to the assigned task in the operating environment. If the robot needs to download additional information from the network or the cloud database to perform a specific task, this information is merged with the robot's current knowledge and the on-demand database is concurrently updated. The on-demand database based on the TOSM describes the semantics of the domain with a set of relations. We have developed it using the Protégé tool to explicitly represent the class hierarchy for each individual. Individuals, also called instances, represent a specific object in a class. For instance, an automatic door is an individual of the 'Door' class, as shown in Figure <ref type="figure">2(a)</ref>. We describe our ontological model by creating individuals (instances) in the corresponding classes, connecting them with typed literals and defining relationships between objects of different classes. 
The TOSM for the on-demand database is composed of three main components: classes, object properties and data properties.</p></div>
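The on-demand behavior described above, with no stored map set, task-specific maps generated on request, and updates when cloud knowledge is merged, can be sketched as follows. The dictionary-based store and the `relevantTo` filter are illustrative stand-ins for the OWL/Protégé database:

```python
# Sketch of the on-demand idea: instead of storing several prebuilt maps,
# generate a task-specific map from the TOSM knowledge store only when
# requested, merging downloaded cloud knowledge first. All names are
# illustrative.

class OnDemandDB:
    def __init__(self, knowledge):
        self.knowledge = knowledge      # local TOSM facts
        self._cache = {}                # maps already generated

    def merge_from_cloud(self, cloud_facts):
        # Downloaded information is merged and the database updated.
        self.knowledge.update(cloud_facts)
        self._cache.clear()             # stale maps must be regenerated

    def map_for_task(self, task):
        # Build (and cache) a map containing only entities the task needs.
        if task not in self._cache:
            self._cache[task] = {k: v for k, v in self.knowledge.items()
                                 if task in v.get("relevantTo", [])}
        return self._cache[task]

db = OnDemandDB({"door1": {"class": "AutomaticDoor", "relevantTo": ["delivery"]},
                 "desk1": {"class": "Table", "relevantTo": ["cleaning"]}})
print(db.map_for_task("delivery"))   # only door1 appears in the delivery map
```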
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A. Classes</head><p>We use classes to describe concepts through collections or types of objects that share common properties in the indoor environment. Our ontological model consists of five classes: Map, MathematicalStructure, Time, Behavior and EnvironmentElement. Each class represents an abstract group of objects that belong to that class. The TOSM allows classes to have single inheritance (one parent) or multiple inheritance. For example, the Object, Occupant, Robot and Place subclasses of the EnvironmentElement class have single inheritance, while the AutomaticDoor class has multiple inheritance; thus, all the properties of the parent classes (Door, Object and EnvironmentElement) are inherited by the child class (AutomaticDoor). The TOSM uses subclasses to represent concepts more specifically than superclasses. Figure <ref type="figure">2</ref>(a) also shows that we have developed our class hierarchy with a systematic top-down view of the domain, in which we define the most general concepts of an entity at the high level (superclass) and more specific concepts at the low level (subclass).</p></div><div xmlns="http://www.tei-c.org/ns/1.0"><head>B. Object Properties</head><p>These properties explain the relationships between classes based on their instances. The category of object and the set of properties determine the type of relationship between them. Figure <ref type="figure">3</ref> shows the expression of a 3D geometric relation between two classes, "Room1 hasBoundary Boundary1", in which the object property "hasBoundary" links the individual "boundary1" of the MathematicalStructure class to the individual "room1" of the EnvironmentElement class. This geometric relation is inferred from visual perception and the semantic map. We divide the object properties into describedInMap, mathematicalProperty, spatialRelationKnowledge and temporalKnowledge. 
Figure <ref type="figure">2(b)</ref> shows that mathematicalProperty includes hasBoundary, relativeToFrame and transformedBy, whereas spatialRelationKnowledge includes connectedTo and directionalRelations, which is divided into inFrontOf, insideOf and nextTo. Finally, temporalKnowledge includes the isAvailableAt and Timeinterval properties.</p></div><div xmlns="http://www.tei-c.org/ns/1.0"><head>C. Data Properties</head><p>These properties specify object parameters as typed literals, also called datatypes (string, int, float). We retrieve individuals by connecting them with specified literal values using placeSemanticKnowledge, temporalSemanticKnowledge, objectSemanticKnowledge, explicitModel and symbol, which are defined as data properties.</p></div>
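Object and data properties can both be viewed as triples: an object property links two individuals ("room1 hasBoundary boundary1"), while a data property links an individual to a typed literal. A minimal sketch (property names follow the text where given; the individuals and literal values are illustrative):

```python
# Tiny triple-store sketch of the two property kinds described above.
triples = [
    # object properties: (subject individual, property, object individual)
    ("room1", "hasBoundary", "boundary1"),
    ("room1", "connectedTo", "corridor1"),
    ("door1", "inFrontOf", "room1"),
    # data properties: (subject individual, property, typed literal)
    ("room1", "placeSemanticKnowledge", "lecture room"),
    ("door1", "symbol", "automatic door"),
]

def objects_of(subject, prop):
    """All values linked to `subject` via `prop`."""
    return [o for s, p, o in triples if s == subject and p == prop]

print(objects_of("room1", "hasBoundary"))  # ['boundary1']
print(objects_of("door1", "symbol"))       # ['automatic door']
```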
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Semantic descriptor-based Learning and Recognition</head><p>Our proposed framework introduces real-time and accurate object detection and place recognition approaches that mimic the human visual system using semantic descriptor-based learning. The overview of our recognition model, inspired by the human visual cortex and the semantic descriptor, is illustrated in Figure <ref type="figure" target="#fig_3">4</ref>. When an autonomous mobile robot explores a complex indoor environment to perform a task, the perception module recognizes objects and places by extracting data from sensors and retrieving information from the on-demand TOSM database. It continuously updates the symbolic state of the task based on the semantic information of newly obtained sensor data and adds implicit data about novel objects and places by identifying their classes in the knowledge base.</p><p>Our framework allows open-ended learning, which enables the robot to adapt to a new environment by acquiring knowledge in an incremental fashion and accumulating conceptualizations of new object categories. Even with extensive training data, a robot may still be confronted with unknown objects and places in its operating environment. Our framework handles this issue by processing visual information continuously and performing learning and recognition simultaneously. Our recognition model performs object detection and place recognition using a convolutional neural network (CNN) and a semantic descriptor based on the human perception system.</p><p>Our proposed recognition model consists of two stages: a training stage and a testing stage.</p></div>
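The open-ended learning loop sketched below illustrates the idea of recognizing what the knowledge base already covers and enrolling novel categories incrementally. The dict-based knowledge base and the `perceive` function are illustrative stand-ins for the on-demand TOSM database and the perception module, not the authors' API.

```python
# Hedged sketch of open-ended learning: known categories are recognized,
# unknown ones are added to the knowledge base instead of causing failure.

def perceive(knowledge_base, detections):
    """Return recognized labels; enroll unknown categories incrementally."""
    recognized, novel = [], []
    for label, descriptor in detections:
        if label in knowledge_base:
            recognized.append(label)
        else:
            # Open-ended learning: store an implicit model of the new
            # object/place category for future recognition.
            knowledge_base[label] = {"descriptor": descriptor}
            novel.append(label)
    return recognized, novel

kb = {"door": {"descriptor": [0.9, 0.1]}}
rec, new = perceive(kb, [("door", [0.8, 0.2]), ("vending_machine", [0.1, 0.7])])
print(rec, new)                 # ['door'] ['vending_machine']
print("vending_machine" in kb)  # True
```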
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A. Training Stage</head><p>At the training stage, we use a CNN to train the object detection and place recognition model on our own indoor dataset, making predictions from sensory input data and the on-demand database. This stage is composed of three major components: semantic analysis, the semantic descriptor, and training the recognition model. We perform semantic analysis for the explicit and implicit models to obtain the semantic information and characteristics of each object. Two major operations are involved in this step: preprocessing of the visual data and feature extraction. Preprocessing improves the performance of the recognition model by reducing noise in the data for better local and global feature extraction and detection. We then extract semantic object features, both global and local, from the processed visual data. Global features (edges, corners, and color) capture the overall properties of each object, while local features capture its salient regions.</p><p>For semantic analysis, geometric features such as edges, lines, corners, and shape, together with metric information related to object size and pose estimation, are extracted and integrated into the explicit model of our framework as global features. We store object properties and the relationships between them as sensory input data, while actions of an object, such as movability, are stored as behavior information in the on-demand database.</p><p>The result of object analysis at the semantic level is the extraction of semantic descriptions consistent with human perception. Thus, we reduce the semantic gap by combining the visual features extracted at a low level with information at a high level using the semantic descriptor. We pass feature vectors containing the geometric properties of the objects, such as edges, instead of the whole image to train our recognition model.</p></div>
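The idea of compressing an image into a compact global-feature vector before training can be sketched as follows. The specific features (finite-difference edge strength, per-channel color statistics) and the 8-dimensional layout are assumptions for illustration, not the paper's descriptor specification.

```python
import numpy as np

# Illustrative semantic-descriptor sketch: a small feature vector built
# from global features (edges and color) replaces the whole image.

def semantic_descriptor(image):
    """image: H x W x 3 uint8 array -> 1-D feature vector of length 8."""
    gray = image.mean(axis=2)
    # Edge strength via finite differences (stand-in for edge features).
    gx = np.abs(np.diff(gray, axis=1)).mean()
    gy = np.abs(np.diff(gray, axis=0)).mean()
    # Color features: per-channel mean and standard deviation.
    color = np.concatenate([image.mean(axis=(0, 1)), image.std(axis=(0, 1))])
    return np.concatenate([[gx, gy], color])

img = np.zeros((32, 32, 3), dtype=np.uint8)
img[:, 16:] = 255               # image with one vertical edge
vec = semantic_descriptor(img)
print(vec.shape)                # (8,)
```

Passing such a vector, rather than raw pixels, is what keeps the downstream model computationally light, as the section describes.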
<div xmlns="http://www.tei-c.org/ns/1.0"><head>B. Testing Stage</head><p>At this stage, we run our recognition model in the real world by performing semantic analysis on the visual data and passing the resulting feature vector to our trained CNN model for object detection and place recognition. Computational simplicity and minimal storage requirements are the major motivations for passing extracted feature vectors instead of whole images to the recognition model. It also endows the robot with human-like perception and semantic understanding of the environment.</p></div>
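At test time, classification operates on the extracted feature vector alone. As a stand-in for the trained CNN, the sketch below classifies a descriptor by nearest neighbor against stored class prototypes; the prototypes, labels, and dimensions are illustrative, not the paper's model.

```python
import math

# Nearest-neighbor stand-in for the trained recognition model: a feature
# vector, not a raw image, is matched against per-class prototypes.
prototypes = {"door": [0.9, 0.1, 0.2], "column": [0.1, 0.8, 0.3]}

def classify(descriptor):
    """Return the label whose prototype is closest to the descriptor."""
    return min(prototypes,
               key=lambda label: math.dist(prototypes[label], descriptor))

print(classify([0.85, 0.15, 0.25]))  # door
```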
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Experiment</head><p>We performed real-world experiments in a convention center to evaluate the performance of our proposed semantic SLAM framework and to extract information about the environment and its objects. These evaluations were conducted on an Intel Core i7-4712MQ 2.30 GHz CPU, an NVIDIA GeForce 840M GPU, and 12 GB of RAM. Our recognition module uses a ZED camera to detect objects and places, while localization and mapping use data obtained from a 2D laser range finder.</p><p>We use TOSM to represent semantic information by establishing concepts and linking the conceptual and physical objects of the environment. Figure <ref type="figure" target="#fig_4">5</ref> shows the model of our environment, in which the operating area is highlighted in red. Figure <ref type="figure">6</ref> shows all the steps involved in our real-world experiment. The robot localizes itself using a topological map that endows it with spatial awareness. We build the semantic map by establishing semantic relationships between a place node in the topological map and its concepts. After that, we associate the objects recognized in a specific place with their topological nodes in the semantic map. The robot connects to the database, which stores the semantic information and properties of objects and places, in order to match the relations. Figure <ref type="figure">6</ref>(a) shows the objects recognized by our recognition model.</p><p>Our semantic map describes the structure of the environment at a higher level of abstraction that is closer to the human mapping system.</p></div>
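The association between place nodes and recognized objects can be sketched as a small graph. The place and object names mirror the environment described in the experiment, but the dict-based map and the `place_of` query are illustrative stand-ins for the semantic map database, not the authors' system.

```python
# Sketch of the semantic map: topological place nodes, the objects
# associated with each place, and connectivity relations between places.
semantic_map = {
    "corridor-1": {"objects": ["elevator", "column"],
                   "connectedTo": ["corridor-2"]},
    "corridor-2": {"objects": ["vending_machine"],
                   "connectedTo": ["corridor-1", "corridor-3"]},
    "corridor-3": {"objects": ["tri-column"],
                   "connectedTo": ["corridor-2"]},
}

def place_of(obj):
    """Answer 'where is obj?' by matching relations stored in the map."""
    return [p for p, data in semantic_map.items() if obj in data["objects"]]

print(place_of("vending_machine"))                # ['corridor-2']
print(semantic_map["corridor-1"]["connectedTo"])  # ['corridor-2']
```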
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Conclusion</head><p>In our semantic SLAM framework, we have presented a central idea for endowing a mobile robot with intelligent behavior. We have introduced a biological vision-based perception system for object and place recognition using a CNN and a semantic descriptor. Furthermore, we have proposed a semantic mapping system inspired by the human brain to modulate the robot's behavior as it navigates the environment to perform a task. Moreover, our TOSM approach represents knowledge about the elements in the map. The experimental results indicate the feasibility of our proposed framework in a real-world indoor environment. In the future, we plan to investigate building </p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Triplet Ontological Semantic Model (TOSM)</figDesc><graphic coords="6,224.71,54.05,166.19,113.39" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2:</head><label>2</label><figDesc>Figure 2: TOSM Properties for on-demand database. (a) Class Properties; (b) Object Properties; (c) Data Properties</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3:</head><label>3</label><figDesc>Figure 3: Geometric Relation between Two Classes</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Semantic Descriptor-based Learning and Recognition Model</figDesc><graphic coords="8,94.77,386.89,426.07,283.48" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Experiential Real-world Environment</figDesc><graphic coords="9,188.23,528.65,239.15,141.74" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head></head><label></label><figDesc>Figure 6(b) shows the semantic representation of the environment, in which places are represented by rectangular boxes and objects by circles. The blue circles indicate the columns, the orange circles show the vending machine, the green circles represent tri-columns, and the red boxes are places. Figure 6(c) shows the database that stores the ontology information for the robot mapping system and the properties of the physical objects recognized as the robot navigates the environment. Our topological map, shown in Figure 6(d), represents the environment by linking geometrical information and relating semantic information to the edges and nodes of a relation graph. The proposed relation graph is focused on the environment mapping task and demonstrates semantic knowledge with a conceptual and spatial hierarchy. It represents the relationships between the information of corridor-1, containing elevators and columns, and the objects in corridor-2 and corridor-3 that the robot knows. We extract the semantic map of our environment model based on an occupancy grid, as shown in Figure 7, and add semantic concepts such as corridors and spatial relations such as connectivity between different objects in the environment.</figDesc><graphic coords="10,73.91,54.02,467.79,255.16" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 7:</head><label>7</label><figDesc>Figure 7: Semantic Map with Occupancy Grid Map</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>The 1 st International Workshop on the Semantic Descriptor, Semantic Modelingand Mapping for Humanlike Perceptionand Navigation of Mobile Robots toward Large Scale Long-Term Autonomy (SDMM19)</figDesc><table /></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This research was supported by the Korea Evaluation Institute of Industrial Technology (KEIT), funded by the Ministry of Trade, Industry &amp; Energy (MOTIE) (No. 1415162366 and No. 141562820).</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Interactive perception: Leveraging action in perception and perception in action</title>
		<author>
			<persName><forename type="first">J</forename><surname>Bohg</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Trans. Robot</title>
		<imprint>
			<biblScope unit="volume">33</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="1273" to="1291" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">A proposal for semantic map representation and evaluation</title>
		<author>
			<persName><forename type="first">R</forename><surname>Capobianco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Serafin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Dichtl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Grisetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Iocchi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Nardi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the European Conference on Mobile Robots (ECMR) 2015</title>
				<imprint>
			<date type="published" when="2015">2015. 2015</date>
			<biblScope unit="page" from="1" to="6" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Perception, Planning, Control, and Coordination for Autonomous Vehicles</title>
		<author>
			<persName><forename type="first">S</forename><surname>Pendleton</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Machines</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="issue">1</biblScope>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Bayesian space conceptualization and place classification for semantic maps in mobile robotics</title>
		<author>
			<persName><forename type="first">S</forename><surname>Vasudevan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Siegwart</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Robotics and Autonomous Systems</title>
		<imprint>
			<biblScope unit="volume">56</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="522" to="537" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Supervised semantic labeling of places using information extracted from sensor data</title>
		<author>
			<persName><forename type="first">W</forename><surname>Burgard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Jensfelt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Triebel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ó</forename><surname>Martínez Mozos</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Robotics and Autonomous Systems</title>
		<imprint>
			<biblScope unit="volume">55</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="391" to="402" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Towards semantic maps for mobile robots</title>
		<author>
			<persName><forename type="first">A</forename><surname>Nüchter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Hertzberg</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Robotics and Autonomous Systems</title>
		<imprint>
			<biblScope unit="volume">56</biblScope>
			<biblScope unit="issue">11</biblScope>
			<biblScope unit="page" from="915" to="926" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Inferring &quot;dark matter&quot; and &quot;dark energy&quot; from videos</title>
		<author>
			<persName><forename type="first">D</forename><surname>Xie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Todorovic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">C</forename><surname>Zhu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Conference on Computer Vision</title>
				<imprint>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="2224" to="2231" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age</title>
		<author>
			<persName><forename type="first">Cesar</forename><surname>Cadena</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on robotics</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="1309" to="1332" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
	<note>IEEE Trans. Robot.</note>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Visual Place Recognition: A Survey</title>
		<author>
			<persName><forename type="first">S</forename><surname>Lowry</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Robotics</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="1" to="19" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Towards Human Inspired Semantic SLAM</title>
		<author>
			<persName><forename type="first">C</forename><surname>Sabourin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Madani</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 7th International Conference on Informatics in Control, Automation and Robotics</title>
				<meeting>the 7th International Conference on Informatics in Control, Automation and Robotics</meeting>
		<imprint>
			<date type="published" when="2010">2010. 2010</date>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="360" to="363" />
		</imprint>
	</monogr>
</biblStruct>


<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Large-scale semantic mapping and reasoning with heterogeneous modalities</title>
		<author>
			<persName><forename type="first">A</forename><surname>Pronobis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Jensfelt</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Conference on Robotics and Automation</title>
				<imprint>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="3515" to="3522" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Probabilistic Data Association for Semantic SLAM</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">L</forename><surname>Bowman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">J</forename><surname>Pappas</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Conference on Robotics and Automation (ICRA)</title>
				<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="1722" to="1729" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Ontology-based object recognition for remote sensing image interpretation</title>
		<author>
			<persName><forename type="first">N</forename><surname>Durand</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">19th IEEE International Conference on Tools with Artificial Intelligence (ICTAI)</title>
				<imprint>
			<date type="published" when="2007">2007. 2007</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="472" to="479" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Towards automated task planning for service robots using semantic knowledge representation</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Ji</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE 10th International Conference on Industrial Informatics</title>
				<imprint>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="1194" to="1201" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">RoboEarth Semantic Mapping: A Cloud Enabled Knowledge-Based Approach</title>
		<author>
			<persName><forename type="first">L</forename><surname>Riazuelo</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Automation Science and Engineering</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="432" to="443" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">The Benefits of Explicit Ontological Knowledge-Bases for Robotic Systems Towards Autonomous Robotic Systems</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Saigol</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Ridder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">M</forename><surname>Lane</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="229" to="235" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">OntPercept: A Perception Ontology for Robotic Systems</title>
		<author>
			<persName><forename type="first">H</forename><surname>Azevedo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">P</forename><surname>Ribeiro Belo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">A F</forename><surname>Romero</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Latin American Robotic Symposium, Brazilian Symposium on Robotics (SBR) and 2018 Workshop on Robotics in Education (WRE)</title>
				<imprint>
			<date type="published" when="2018">2018. 2018</date>
			<biblScope unit="page" from="469" to="475" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">A robotic context query-processing framework based on spatio-temporal context ontology</title>
		<author>
			<persName><forename type="first">S</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Kim</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Sensors</title>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="volume">18</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">The limits and potentials of deep learning for robotics</title>
		<author>
			<persName><forename type="first">N</forename><surname>Sünderhauf</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Robotics Research</title>
		<imprint>
			<biblScope unit="volume">37</biblScope>
			<biblScope unit="issue">4-5</biblScope>
			<biblScope unit="page" from="405" to="420" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<author>
			<persName><forename type="first">Z</forename><surname>Zou</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1905.05055</idno>
		<title level="m">Object Detection in 20 Years: A Survey</title>
				<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="1" to="39" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Representation and Integration: Combining Robot Control, High-Level Planning, and Action Learning</title>
		<author>
			<persName><forename type="first">R</forename><surname>Petrick</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 6th International Cognitive Robotics Workshop</title>
				<imprint>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page" from="32" to="41" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">A Survey on Perception Methods for Human-Robot Interaction in Social Robots</title>
		<author>
			<persName><forename type="first">H</forename><surname>Yan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">H</forename><surname>Ang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">N</forename><surname>Poo</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Social Robotics</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="85" to="119" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">A new cognitive architecture for bidirectional loop closing</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">J</forename><surname>Palomino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Marfil</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">P</forename><surname>Bandera</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bandera</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Second Iberian Robotics Conference</title>
				<imprint>
			<date type="published" when="2015-11">2015. November. 2016</date>
			<biblScope unit="volume">418</biblScope>
			<biblScope unit="page" from="721" to="732" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">What can i do around here? Deep functional scene understanding for cognitive robots</title>
		<author>
			<persName><forename type="first">Chengxi</forename><surname>Ye</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Conference on Robotics and Automation (ICRA)</title>
				<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="4604" to="4611" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Object detection and recognition for assistive robots: Experimentation and implementation</title>
		<author>
			<persName><forename type="first">E</forename><surname>Martinez-Martin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">P</forename><surname>Del Pobil</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Robotics &amp; Automation Magazine</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="123" to="138" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Towards robust place recognition for robot localization</title>
		<author>
			<persName><forename type="first">Muhammad</forename><forename type="middle">Muneeb</forename><surname>Ullah</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Conference on Robotics and Automation</title>
				<imprint>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page" from="530" to="537" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Shape2vec: Semantic-based descriptors for 3D shapes, sketches and images</title>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">P</forename><surname>Tasse</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Dodgson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Transactions on Graphics (TOG)</title>
		<imprint>
			<biblScope unit="volume">35</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="1" to="12" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Enhanced semantic descriptors for functional scene categorization</title>
		<author>
			<persName><forename type="first">Gloria</forename><surname>Zen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 21st International Conference on Pattern Recognition (ICPR 2012)</title>
				<imprint>
			<date type="published" when="2012">ICPR2012. 2012</date>
			<biblScope unit="page" from="1985" to="1988" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Automated Map Reading: Image Based Localisation in 2-D Maps Using Binary Semantic Descriptors</title>
		<author>
			<persName><forename type="first">P</forename><surname>Panphattarasap</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Calway</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)</title>
				<imprint>
			<date type="published" when="2018">2018. 2018</date>
			<biblScope unit="page" from="6341" to="6348" />
		</imprint>
	</monogr>
</biblStruct>


				</listBibl>
			</div>
		</back>
	</text>
</TEI>
