                         1st International Workshop on Actionable Knowledge
                         Representation and Reasoning for Robots (AKR³)
                         Michael Beetz1 , Philipp Cimiano2 , Michaela Kümpel1 , Enrico Motta3 , Ilaria Tiddi4 and
                         Jan-Philipp Töberg2
                         1
                           Institute for Artificial Intelligence, University of Bremen, Bremen, Germany
                         2
                           Cluster of Excellence Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
                         3
                           Knowledge Media Institute, The Open University, Milton Keynes, United Kingdom
                         4
                           Knowledge Representation and Reasoning Group, Vrije Universiteit Amsterdam, The Netherlands

                         Keywords
                         Knowledge Representation, Reasoning, Cognitive Robotics, Web Knowledge, Actionable Knowledge




                         1. Introduction
                         The “Actionable Knowledge Representation and Reasoning for Robots (AKR³)” workshop is dedicated
                         to Knowledge Representation and Reasoning (KRR) in the area of cognitive robotics, with the focus on
                         acquiring knowledge from the Web and making it actionable for robotic applications in the sense that
                         robots can use acquired knowledge for action execution and understand what they are doing. We aim
                         to bring together the European communities specialising in KRR and robotics to increase collaboration
                         and accelerate advancements in the field.
                            Household robots are still not able to autonomously prepare meals, set or clean the table or do other
                         chores besides vacuum cleaning. Much of the knowledge needed to refine vague task instructions and
                         transfer them to new task variations is contained in instruction web sites like WikiHow, encyclopedic
                         web sites like Wikipedia, and many other web-based information sources. We argue that such knowledge
                         can be used to teach robots to perform new task variations, similarly to how humans can use Web
                         information.
                            Given the availability of a plethora of sources and datasets of common sense knowledge on the Web
                         (e.g. ConceptNet [1] or OMICS [2]) as well as recent advances in language modelling, it is a timely
                         research question to investigate which methods and approaches can enable robots to take advantage
                         of this existing common sense knowledge to reason on how to perform tasks in the real world. The
                         main issue to be addressed in particular is how to allow robots to perform tasks flexibly and adaptively,
                         gracefully handling contextually determined variance in task execution. We expect this line of research
                         to contribute to the generalisability and robustness of robots operating in everyday environments.
                            For this first edition of the workshop we received six submissions, all of which were accepted and
                         presented. Roughly 20 participants attended, not counting the organisers and the invited speaker.


                         2. Program Overview
                         The workshop began with a short introduction by Philipp Cimiano, who motivated the decision to
                         organise a workshop at the intersection of knowledge representation and reasoning and robotics.
                         He also presented the Best Paper Award and introduced the invited speaker: Lars Kunze1 .


                          ESWC 2024 Workshops and Tutorials Joint Proceedings, May 26-27, Heraklion, Greece
                          $ beetz@cs.uni-bremen.de (M. Beetz); cimiano@techfak.uni-bielefeld.de (P. Cimiano); michaela.kuempel@uni-bremen.de
                          (M. Kümpel); enrico.motta@open.ac.uk (E. Motta); i.tiddi@vu.nl (I. Tiddi); jtoeberg@techfak.uni-bielefeld.de (J. Töberg)
                           0000-0002-7888-7444 (M. Beetz); 0000-0002-4771-441X (P. Cimiano); 0000-0002-0408-3953 (M. Kümpel);
                          0000-0003-0015-1952 (E. Motta); 0000-0001-7116-9338 (I. Tiddi); 0000-0003-0434-6781 (J. Töberg)
                                     © 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
                         1
                             https://ori.ox.ac.uk/people/lars-kunze/

CEUR Workshop Proceedings (ceur-ws.org), ISSN 1613-0073
   Afterwards, Lars Kunze gave his invited talk on the topic of Making Robots Explainable and
Trustworthy. He introduced three current challenges for actionable understanding: i) mastering
everyday tasks, ii) dealing with change and iii) explainability and trustworthiness. For each
challenge, he outlined potential solutions, showing what has already been achieved and what remains
open. For the first challenge he focused on understanding tasks, robots and the environment; for the
second, on lifelong learning, in particular understanding objects; and for the third, on the semantic
interpretation of spatio-temporal observations to generate contextualised explanations.
   After the invited talk, the Best Paper Award-winning paper Towards Improving Large Language
Models’ Planning Capabilities on WoT Thing Descriptions by Generating Python Objects as
Intermediary Representation was presented by Lukas Kinder. The work equips LLMs with domain
knowledge for planning tasks through Web of Things (WoT) thing descriptions: the descriptions are
first translated into Python classes using LLMs, and action sequences are then generated based on
the task description and the participating things.
   After the coffee break, Michaela Kümpel presented the paper Steps Towards Generalized Ma-
nipulation Action Plans - Tackling Mixing Task on behalf of her colleagues. In this work, the
authors present a theoretical model for guiding the creation of adaptable action plans using the CRAM
cognitive architecture. Each model consists of an action designator, pre- and postconditions as well as
task-specific requirements. Their theoretical model is exemplified for the task of Mixing.
   The third paper, The SPA Ontology: Towards a Web of Things Ready for Robotic Agents,
was presented by Michael Freund and also addressed WoT thing descriptions. The SPA ontology
enhances these descriptions by additionally modelling preconditions and interaction effects. From
the enhanced descriptions, a PDDL problem description can be derived and solved, and the resulting
plan is then mapped back into suitable WoT plans.
   In the fourth paper called Towards a Knowledge Engineering Methodology for Flexible Robot
Manipulation in Everyday Tasks, Jan-Philipp Töberg presented a knowledge engineering method-
ology and its application on the concrete manipulation task of cutting fruits and vegetables. The
methodology is semi-automatic and focuses on dispositions & affordances, task-specific object proper-
ties as well as action groups & their operational properties.
   Afterwards, Diego Reforgiato Recupero presented the paper Towards Seamless Human-Robot
Dialogue through a Robot Action Ontology, which enables a robot to listen to spoken instructions
and either perform an action or answer a posed question using ChatGPT. Before performing an action,
an ontology is consulted to decide whether the robot can and should perform it.
   In the last presentation, Lobna Joualy presented the paper KB4RL: Towards a Knowledge Base
for automatic creation of State and Action Spaces for Reinforcement Learning, which uses a
knowledge base to support the creation of the state and action space in reinforcement learning tasks
based on the task to learn and the robot type.


Acknowledgments
The workshop is organized by the SAIL Network in collaboration with the Joint Research Center on
Cooperative and Cognition-enabled AI (CoAI JRC).


References
[1] R. Speer, J. Chin, C. Havasi, ConceptNet 5.5: An Open Multilingual Graph of General Knowledge,
    AAAI 31 (2017). doi:10.1609/aaai.v31i1.11164.
[2] R. Gupta, M. J. Kochenderfer, Common Sense Data Acquisition for Indoor Mobile Robot, in:
    Proceedings of the 19th National Conference on Artificial Intelligence, AAAI’04, AAAI Press, San
    Jose, California, 2004, pp. 605–610. URL: http://alumni.media.mit.edu/~rgupta/pdf/aaai04.pdf.