=Paper=
{{Paper
|id=Vol-2525/paper17
|storemode=property
|title=RLACS: Robotic Lab Assistant Cognitive System for interactive robot programming
|pdfUrl=https://ceur-ws.org/Vol-2525/ITTCS-19_paper_32.pdf
|volume=Vol-2525
|authors=Aydar Akhmetzyanov,Alexandr Klimchik,Mikhail Ostanin,Skvortsova Valeria
|dblpUrl=https://dblp.org/rec/conf/ittcs/AkhmetzyanovKOV19
}}
==RLACS: Robotic Lab Assistant Cognitive System for interactive robot programming==
RLACS: Robotic Lab Assistant Cognitive System for interactive robot programming*

Aydar Akhmetzyanov, Alexandr Klimchik, Mikhail Ostanin, Valeria Skvortsova
Innopolis University, Innopolis, Russia
a.ahmetzyanov@innopolis.university, a.klimchik@innopolis.ru, m.ostanin@innopolis.ru, v.skvortsova@innopolis.university

Abstract

Integrating a robotic system is a challenging task that requires specific skills, including domain knowledge, programming, and robotics engineering experience. The programming process can be improved with advanced interfaces, for instance a mixed reality device or a cognitive dialog system. Our previous work addressed interactive robot programming with the mixed reality HoloLens device. To achieve efficient interactive robot control, the scope of the project should be extended to the cognition system of the robot; efficiency can be gained by improving the autonomy of the system. To develop a system that unifies cognition and human-computer interaction, we must define strong requirements for the artificial cognitive system. In this work we define requirements for cognitive properties such as perception, anticipation, adaptation, motivation, knowledge representation, and autonomy. In addition, we propose a high-level architecture of the cognitive system that represents the relationships between the inner modules of the robotic system.

1 Introduction

Our task is to develop a cognitive robot for an industrial-process laboratory. Lab processes include many different tasks, so developing a lab helper is a complicated undertaking that may involve anything from providing and processing information to acting, for example connecting related components. The main goal for lab robots is usually to perform the same actions that a human already performs.
Such a robot can perform common tasks more quickly and safely than its human counterparts. Furthermore, some work is dangerous for humans (toxic chemicals or radiation), so performing such work with a robot is reasonable. Artificial cognitive systems theory covers the high-level architecture of complex intelligent systems. Cognition, both natural and artificial, is about anticipating the need for action and developing the capacity to predict the outcome of those actions [1]. The theory covers embodiment, autonomy, anticipation, adaptation, and other aspects of cognitive systems. The goal of our work is to develop a cognitive architecture for an intelligent robotic lab assistant and to provide an overview of how its required components interconnect.

* Copyright © 2019 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

Our cognitive agent could be a general-purpose industrial manipulator, or a set of manipulators, that performs requested tasks. Tasks could be formulated with gestures as well as with natural-language voice commands. Hoenig et al. [2] demonstrated the potential of mixed reality for robotics applications. In our previous work, we developed an interactive programming system that combines an industrial robot with the Microsoft HoloLens mixed reality device to provide direct robot control with intuitive gestures (Figure 1) [3]. In addition, we developed a system for the exploration of obscured environments [4], which controls a hidden robot that generates a mixed reality representation of the environment for the HoloLens device and provides the human with an egocentric view of that environment.

Figure 1: Mixed Reality robot paths example

Below, we present the list of desiderata for our robotic cognitive assistant.

1. Natural language processing.
For example, if a scientist or engineer works on a project and performs experiments, they can request the required robot in real time and have it perform an action immediately.

2. Utilization of online resources and services. Combining online component databases with the lab inventory makes it possible to give the robot high-level command capabilities: the robot analyzes the equipment and performs the required task without explicit human guidance.

3. Anticipating the need for action. If a human performs a potentially dangerous operation with the robotic system, the artificial assistant should prevent that operation to keep the human and the inventory safe.

4. Learning from human actions. Commands should not only be formally specified; the robot should also generate an execution plan and formalize human actions, producing an editable, repeatable routine based on previous human interaction with the equipment.

These desiderata serve as a good starting point for discussing the possible implementations and application scenarios of the lab assistant.

2 Overview of the cognitive architecture

RLACS (Robotic Lab Assistant Cognitive System) is an autonomous hybrid system; its overall architecture is shown in Figure 2.

Figure 2: A diagrammatic description of the architecture

One of the main components is the knowledge database, which can be extended over time and gives the system the ability to solve tasks and anticipate future events and results. The knowledge database consists of three types of memory: semantic, episodic, and procedural. The computational system component finds the best solutions by means of planning and, in some cases, produces new knowledge that can be saved and reused in the future. Another part is the interaction system, which gives the system the ability to receive tasks, provide results, and notify about task completion.
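The three memory types of the knowledge database could be sketched as a minimal data structure. This is an illustrative sketch only; the class, method, and field names are our assumptions and do not appear in the paper:

```python
from dataclasses import dataclass, field

# Illustrative sketch of the knowledge database described above.
# All names here are assumptions made for this example.

@dataclass
class KnowledgeDatabase:
    # Semantic memory: facts about the world, e.g. expected sensor values.
    semantic: dict = field(default_factory=dict)
    # Episodic memory: a log of past tasks and their outcomes.
    episodic: list = field(default_factory=list)
    # Procedural memory: named routines the robot knows how to execute.
    procedural: dict = field(default_factory=dict)

    def record_episode(self, task, outcome):
        """Store a completed task so it can be reused or analyzed later."""
        self.episodic.append({"task": task, "outcome": outcome})

    def learn_routine(self, name, steps):
        """Save a new editable routine, e.g. one formalized from human actions."""
        self.procedural[name] = list(steps)
```

A routine learned from a demonstration, such as `kb.learn_routine("connect_component", ["locate", "grasp", "insert"])`, would then be stored in procedural memory and each execution logged through `record_episode`, matching the extend-and-reuse behavior described above.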
The sensory processing system gives the agent the ability to adapt to the current situation and plays a significant role in making judgments: it makes the system change its behavior in order to succeed. It can also make judgments about values, since it compares the expected data stored in the knowledge database with the actual data received from sensors. The system is designed so that it can anticipate and adapt; it is also expected to improve its own results and performance through feedback loops that provide the information needed to trace potential failures.

The cognitive architecture is hybrid because it combines a symbolic representation of the world model with a process of enaction [5]. The system uses a symbolic language to understand the commands it receives and stores knowledge to complete its tasks. It also converts the data it receives from its sensors into a symbolic knowledge representation, which means that the symbol grounding problem is addressed within the system. We can call this system enactive because it has the following properties:

1. Autonomy - in the way it gains knowledge and experience from achieved results and past events, and in the methods by which it tries to find solutions to its tasks.

2. Embodiment - as it uses actuators and sensors (as parts of a real body) to solve tasks, analyze results, draw conclusions, and gain knowledge.

3. Emergence - as it combines the actual knowledge about expected results, obtained from sensors and computations, with methods to achieve them by means of planning or search.

4. Experience - as it gains experience after each task and then uses it to solve tasks it has completed before; failures tell it to look for additional knowledge and methods, which in the end extends its experience and knowledge. However, trials and unexpected results cannot affect the system structure; they only add experience, so they do not violate its autonomy or learning.
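The judgment step described above, comparing expected values from the knowledge database against actual sensor readings and feeding the result back into behavior, could be sketched as follows. This is a minimal sketch under our own assumptions; the function names and the tolerance parameter are illustrative, not from the paper:

```python
# Hedged sketch of the judgment mechanism: compare sensor readings
# against expected values stored in the knowledge database.

def judge(expected: dict, actual: dict, tolerance: float = 0.05) -> dict:
    """Return per-sensor verdicts: True if the reading matches expectation."""
    verdicts = {}
    for sensor, expected_value in expected.items():
        reading = actual.get(sensor)
        if reading is None:
            verdicts[sensor] = False  # missing data counts as a failure
        else:
            verdicts[sensor] = abs(reading - expected_value) <= tolerance
    return verdicts

def needs_adaptation(verdicts: dict) -> bool:
    """The feedback loop triggers adaptation if any expectation was violated."""
    return not all(verdicts.values())
```

In this sketch, a violated expectation is exactly the signal the feedback loops above would use to trace a potential failure and change the system's behavior.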
5. Sense-making - as it is motivated to learn and gain experience by the events that happen to it (successful and unsuccessful jobs, expected and unexpected results, or a task taking too long, which can motivate it to look for faster solutions).

Since the system has properties of both the cognitivist [1] and the emergent [1] approaches, we conclude that it is hybrid.

3 Functional characteristics of components

Here we describe the components from the point of view of goals, as opposed to the agent's activity and computations. The input and output components of the system compose the interaction system. All functional characteristics except the nature of knowledge representation are given not only for a specific task but for any reasonable application. Inputs are given in symbolic representation and may be:

1. Entered through a software interface between the laboratory computer and the cognitive system;
2. Recognized through the agent's vision system;
3. Received as voice commands;
4. Perceived as gestures.

Upon finishing a task, the system sends a message to the laboratory computer through the software interface, reporting completion together with a concise version of the log; throughout processing, a detailed log is recorded. First, the computational system (the inference mechanism) is run. Then data about exactly how to move the physical elements of the system are taken from procedural memory. Then physical actions are performed while expected values, taken from semantic memory, are checked against actual values received from sensors. A planner connects the input, the semantic and episodic memory, and the executor; the executor in turn connects the actuators with the planner and procedural memory. Finally, a judgment module connects the sensors, semantic memory, and actuators to achieve precision in performing physical activity.

4 Review of the cognitive architecture

We provide an extensive review of the cognitive architecture's properties in Table 1.
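The processing sequence just described, running the inference mechanism, fetching the routine from procedural memory, and executing it while checking semantic-memory expectations against sensor values, could be sketched as a single loop. The paper gives no implementation; every name below, and the use of plain dicts and lists for the memories, is an assumption made for this sketch:

```python
def process_task(task, knowledge_db, sensors, actuators, log):
    """Illustrative pipeline: inference, routine lookup, checked execution.
    `knowledge_db` holds "procedural" and "semantic" sub-dicts; `sensors`
    maps step names to readings; `actuators` collects issued commands."""
    log.append(f"received task: {task}")
    # 1. Run the computational system (inference): look up a known routine.
    plan = knowledge_db["procedural"].get(task)
    if plan is None:
        log.append("no routine found; would query online resources here")
        return False
    # 2. Execute each step, checking expected values from semantic memory
    #    against actual values received from sensors.
    for step in plan:
        actuators.append(step)  # stand-in for real actuation
        expected = knowledge_db["semantic"].get(step)
        actual = sensors.get(step)
        if expected is not None and actual != expected:
            log.append(f"step {step}: mismatch, adapting")
            return False
        log.append(f"step {step}: ok")
    # 3. Report completion with a concise log entry, as described above.
    log.append("task complete; concise log sent to lab computer")
    return True
```

The detailed log grows on every step while only the final entry stands in for the concise completion message sent to the laboratory computer.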
In addition, we provide an interaction abstraction diagram in Figure 3.

Table 1: Characteristics of the cognitive architecture

- Embodiment (+): The system has sensors and actuators to perform actions and analyze their results. This is done by physical components, which are clearly part of the system.
- Perception (×): The system gets data from sensors and receives tasks through the interaction system. That gives the system the ability to understand its tasks, its current progress, and its success by understanding what exactly this data means.
- Action (×): RLACS is designed to act; it is expected to carry out lab operations successfully. However, if it makes a mistake, it should be able to find a way to fix it, or at least try not to repeat it.
- Anticipation (×): Machine learning models and the knowledge database give the system the ability to predict future events. The system can perform statistical analysis to anticipate future tasks, and neural network computation can also provide information about the future.
- Adaptation (+): Data from sensors, compared with expected values stored in the knowledge database, give the system the ability to make judgments and adapt to the current situation in order to perform better and achieve better results.
- Motivation (+): When the system does not succeed at a task, it tries to find solutions and succeed next time. It also tries to improve its performance as it gains new knowledge, which gives it additional ways to solve problems.
- Autonomy (+): If the system faces missing knowledge, it tries to use data search to fill its knowledge database; another autonomy property is that it verifies its results by itself, since it stores expected values for many final states.

Figure 3: A diagram showing the interaction at the abstraction level

5 Conclusion

In this work we derived the requirements for our project from cognitive systems theory.
To provide an intuitive and intelligent interface to a robotic system, the system should have the properties of anticipation and cognition. The degree and strength of its autonomy are crucial for improving the efficiency of the operator. We provided an extensive architecture of the system with component descriptions and interaction diagrams, and we covered the properties of embodiment, perception, anticipation, adaptation, motivation, and autonomy in the context of a robotic system. This architecture allows us to continue our development with further integration of different interaction mechanisms, data sources, and data processing algorithms to achieve cognitive behavior of our system. The source code of our project is available on GitHub [6].

References

1. Vernon, David. Artificial Cognitive Systems: A Primer. MIT Press, 2014.
2. Hoenig, Wolfgang, et al. "Mixed Reality for Robotics." 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2015.
3. Ostanin, M., and A. Klimchik. "Interactive Robot Programing Using Mixed Reality." IFAC-PapersOnLine 51.22 (2018): 50-55.
4. Akhmetzyanov, Aydar, et al. "Exploration of Underinvestigated Indoor Environment Based on Mobile Robot and Mixed Reality." International Conference on Human Interaction and Emerging Technologies. Springer, Cham, 2019.
5. Vernon, David. "Enaction as a Conceptual Framework for Developmental Cognitive Robotics." Paladyn, Journal of Behavioral Robotics 1.2 (2010): 89-98.
6. HoloRobotic project source code, https://github.com/MOstanin/HoloRobotic