=Paper=
{{Paper
|id=Vol-3794/paper02
|storemode=property
|title=How to use a cognitive architecture for a dynamic person model
with a social robot in human collaboration
|pdfUrl=https://ceur-ws.org/Vol-3794/paper2.pdf
|volume=Vol-3794
|authors=Thomas Sievers,Nele Russwinkel
|dblpUrl=https://dblp.org/rec/conf/rfh/SieversR24
}}
==How to use a cognitive architecture for a dynamic person model with a social robot in human collaboration==
Thomas Sievers*, Nele Russwinkel
Institute of Information Systems, University of Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
Abstract
The use of cognitive architectures is promising in order to achieve more human-like reactions and behavior in social robots. For
example, ACT-R can be used to create a dynamic cognitive person model of a human cooperation partner of the robot. A proof-of-
concept for a direct and easy-to-implement integration of ACT-R with the humanoid social robot Pepper is described in this work. An
exemplary setup of the system consisting of cognitive architecture and robot application and the type of connection between ACT-R
and the robot is explained. Furthermore, an idea is outlined of how the cognitive person model of the human cooperation partner in
ACT-R is updated with dynamic data from the real world using the example of emotion recognition by the robot.
Keywords
ACT-R, cognitive architecture, human-robot interaction, social robotics
1. Introduction

The development of situated human-aware agents that interact with human partners is a new field of research in terms of using a cognitive architecture for controlling the application and modeling human-like interaction. The use of cognitive architectures is promising in order to achieve more human-like reactions and behavior in social robots. Adaptability to changing situations in human-robot dialog, and the comprehensibility and thus the acceptance of robots, even in environments that are sensitive and anxiety-inducing for humans, could also be improved as a result. This work attempts to make a first step towards the utilization of different cognitive concepts (e.g. situation understanding, prediction and adaptation to the emotional state of the partner, flexible task anticipation) by describing a proof-of-concept for the integration of a cognitive architecture with the humanoid social robot Pepper and preparing a technical basis for a more human-like perception of human interaction partners. In this context, we have carried out an initial study with the application scenario of a public authority [1]. However, a detailed evaluation and further studies that could confirm an effective benefit are still pending.

Cognitive architectures refer both to a theory about the structure of the human mind and to a computational realization of such a theory. Their formalized models can be used to further refine a comprehensive theory of cognition in order to provide common ground for working towards a specific goal, to flexibly react to actions of the human collaboration partner, and to develop situation understanding for adequate reactions. Well-known and successfully used cognitive architectures are ACT-R (Adaptive Control of Thought - Rational) and SOAR [2].

Like any cognitive architecture, ACT-R as a theory for simulating and understanding human cognition aims to define the basic and irreducible cognitive and perceptual operations that enable the human mind. In theory, each task that humans can perform should consist of a series of these discrete operations. Most of ACT-R's basic assumptions are also inspired by the progress of cognitive neuroscience, and ACT-R can be seen and described as a way of specifying how the brain itself is organized to produce cognition [3].

For an envisioned scenario, this cognitive architecture can generate flexible task knowledge and build mental representations of the relevant information about the individual with whom the robot is collaborating, the state of the task to be accomplished together and/or the person model of the human. If at some point it turns out that the intention of the human cooperation partner cannot be achieved directly because, for example, some relevant information is missing, this person will probably be frustrated. When something fails in completing the desired task, the human perception of the robot can be a critical component for the acceptance of social robots in general. Greater autonomy of the robot can lead to greater blame if something goes wrong. In their workshop report, Förster et al. provide a comprehensive overview of all the things that can go wrong in conversations between humans and robots, including a detailed analysis of failures [4]. Appropriate reactions need to be retrieved by the robot to relate to possible failures, e.g. to find an alternative solution. Frustration on the part of the human counterpart should be avoided as far as possible [5].

After giving some examples from previous research on connections between ACT-R and robots, we present our exemplary system setup, which consists of the cognitive architecture and a robot application programmed for the purpose of a direct connection between ACT-R and the robot. The standalone application of ACT-R we use is available for the main computer platforms Linux, macOS and Windows. We show a dynamic update of the cognitive person model of the human cooperation partner in ACT-R with data from the real world using the example of emotion recognition by the robot.

Workshop Robots for Humans 2024, Advanced Visual Interfaces, Arenzano, June 3rd, 2024
*Corresponding author.
sievers@uni-luebeck.de (T. Sievers); russwinkel@uni-luebeck.de (N. Russwinkel)
0000-0002-8675-0122 (T. Sievers); 0000-0003-2606-9690 (N. Russwinkel)
© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (ceur-ws.org), ISSN 1613-0073

2. Related work

A coupling of ACT-R as a cognitive architecture with different types of robots has already been realized and used for various purposes. For example, an interactive narrative system is described in which the characters in the story are interpreted by humanoid robots, which is achieved by defining suitable cognitive models [6]. These robots use the NarRob framework [7].

A storytelling robot controlled by ACT-R is able to adopt different persuasion techniques and ethical stances while talking about certain topics [8]. In this case, the cognitive ACT-R architecture is connected to a Unity 3D engine.
An adaption of the ACT-R architecture for embodiment, called Adaptive Character of Thought-Rational/Embodied (ACT-R/E), was created to function in the embodied world, placing an additional constraint on cognition, namely that cognition occurs within a physical body that must navigate in real surroundings, as well as perceive the world and manipulate objects [9].

ACT-R is also used in human-robot collaboration (HRC) for mobile service robots, connecting and integrating modules of human, robot, perception, HRI, and HRC in the ACT-R architecture [10].

The inner voice of a robot cooperating with human partners is made audible via ACT-R integrated in the Robot Operating System (ROS) [11]. Also, an implementation of a robotic self-recognition method by inner speech is demonstrated using ACT-R [12].

The distinctive feature of our approach is that the robot is directly connected to the ACT-R environment via Wi-Fi without using a special framework. It is therefore not necessary to install the ACT-R application on the robot in order to run the model. In this way, there is no need to deal with the specific requirements of a particular framework.

3. Connect ACT-R to a Pepper robot

The ability of ACT-R as a system to perform a wide range of human cognitive tasks can be directly combined with a social robot that interacts with humans. The assumption behind these efforts is that this could make a conversation between a robot and a human more human-like on the part of the robot and thus more pleasant for the human.

3.1. ACT-R

The basic mechanism of ACT-R consists of the main components modules, buffers and pattern matcher [13]. There are two types of modules: perceptual-motor modules forming the interface with the real world (motor module and visual module), and the memory modules comprising declarative memory consisting of facts and procedural memory consisting of productions. Productions represent knowledge about how something should be done. Figure 1 gives an overview of the main components.

Figure 1: ACT-R modules, buffers and pattern matcher

ACT-R accesses its modules (with the exception of the procedural memory) via special buffers. A buffer forms the interface to its module. The buffer contents represent the state of ACT-R over time. The pattern matcher attempts to find a production that corresponds to the current state of the buffers. Only one production can be executed at a time. Productions can modify the buffers during execution and thus change the state of the system. Cognition is therefore represented in ACT-R as a sequence of production firings.

In our approach, we do not use the visual and motor modules to provide input to the system. The buffers are used directly to exchange information between the real world of the robot and the ACT-R model.

3.2. Humanoid robot Pepper

The social humanoid robot Pepper [14], as seen in Figure 2, developed by Aldebaran, is 120 centimeters tall and optimized for human interaction. It is able to engage with people through conversation, gestures and its touch screen. Pepper can focus on, identify, and recognize people. Speech recognition and dialog are available in 15 languages. Beyond that, Pepper manages to perceive basic human emotions. The robot features an open and fully programmable platform so that developers can program their own applications to run on Pepper.

Since research has generally shown that trust is the basis for successful communication tasks and trust in robots is increased by anthropomorphism, a humanoid social robot like Pepper is a good choice for social interaction and the provision of services when dealing with customers. A human face, the possibility of human-like expressions and body language, and the use of voice are seen as beneficial for the trust of customers in the robot [15]. It has the advantage over a chatbot that it also shows physical gestures, which makes communication much more vivid and strengthens a personal relationship. The Pepper robot is already being used in many HRI projects and has also been tested in real production use.

3.3. The robot application

We developed an application that controls the robot's reactions to what the human conversation partner says. To do this, we used Android Studio with the Kotlin programming language and the Pepper SDK for Android [16], which enables the robot to be controlled via an app from its Android tablet. The Pepper SDK as an Android Studio plug-in provides a set of graphical tools and a Java or Kotlin library, the QiSDK, so that specific functionalities of Pepper's operating system can be used in a straightforward way directly from an Android application, e.g. for focusing upon a person, listening, talking and chatting, as well as movements of head and arm to stress what has been said.
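To make the production cycle of Section 3.1 concrete, the following toy sketch shows how a pattern matcher selects one matching production per cycle and lets it fire, modifying the buffers. This is purely illustrative (Python for brevity; the dict-based buffers and the sample production are invented and far simpler than the real ACT-R machinery):

```python
# Toy sketch of one ACT-R-style production cycle: buffers hold the
# current state, exactly one matching production fires per cycle,
# and firing it modifies the buffers. Not the real ACT-R implementation.

def matches(condition, buffers):
    """A production's condition: required slot values in the buffers."""
    return all(buffers.get(slot) == value for slot, value in condition.items())

def run_cycle(buffers, productions):
    """Fire the first matching production; return False if none matched."""
    for condition, action in productions:
        if matches(condition, buffers):
            buffers.update(action)   # the production modifies the buffers
            return True
    return False                     # no production matched

# Hypothetical goal buffer and one production, mirroring the paper's naming
buffers = {"mood": "content", "state": "pepper-changes-mood"}
productions = [
    ({"mood": "content", "state": "pepper-changes-mood"},
     {"pepper_out": "pepper-content", "state": "pepper-changed-mood"}),
]

run_cycle(buffers, productions)
print(buffers["pepper_out"])   # -> pepper-content
```

After the production fires, the changed `state` slot prevents it from firing again, which is how a sequence of production firings unfolds over successive cycles.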
3.3.1. Listen and talk

Pepper's native speech recognition capabilities and speech output with the – in our case German – language pack are used for speech input and output, and Pepper's Chat feature [17] is utilized to conduct the dialog. The chat feature allows the robot to understand individual words and short phrases even if they are spoken as part of a longer sentence. Words and phrases that the robot should understand, as well as the corresponding answers, are stored in topic files in the form of dictionaries and dialog branches. The flexible options for using variables or randomly selected parts of sentences in the robot's responses enable a natural dialog flow. The Pepper SDK also provides parameters for using pauses, intonation and voice modulation to further enhance a human-like dialog.

With regard to controlling the reactions and statements of the robot by an ACT-R model, which is supplied with relevant data for interaction from the real world, the use of these topic files offers the robot the possibility to make statements adapted to the current situation by referring to the appropriate sections in the topic file. Figure 3 shows a schematic diagram of the topic file process within the robot application.

Figure 2: Humanoid Robot Pepper

Figure 3: Schematic diagram of the topic file process

3.3.2. Animation

Robot gesture animation depending on a specific context can be used to support what is said depending on the situation. These animations increase anthropomorphism and comprehensibility through the indirect effect of body language. Groups of suitable animations can be defined, of which a randomly selected one is executed at certain points of the interaction, e.g. when greeting, in response to a question from the human, when the robot asks a question, etc. These animations support the interaction with the human as they emphasize the robot's statements.

Depending on the course of the conversation and the findings about the emotional state of the human counterpart, for example, the ACT-R model can be used to control the robot's gestures in conjunction with the robot's utterances.

3.4. System setup for ACT-R and the robot

The standalone version of ACT-R is used for this work, i.e. the application provided at https://act-r.psy.cmu.edu/ instead of running the Lisp sources. To establish a remote connection from the robot to ACT-R, the remote interface – the dispatcher – has to be used, which is implemented by a central command server. The ACT-R core software connects to this dispatcher to provide access to its commands, and the dispatcher accepts TCP/IP socket connections that allow clients to access these commands and provide their own commands for use. The commands available via the dispatcher can be used wherever a Lisp function was formerly required. By default, the standalone version forces the dispatcher to use the localhost IP address of the computer on which it is running for connections instead of an external IP address. This means that only programs on the same computer can establish a connection, and once ACT-R has been started, this can no longer be changed. To disable this behavior, the file force-local.lisp must be removed from the ACT-R/patches directory before the application is executed. ACT-R will then use the machine's real IP address for the dispatcher's connections, and setting *allow-external-connections* in the model file will let other machines connect. Another option is to place the model file in the ACT-R/user-loads directory; external connections are then always permitted. The address and port used by the dispatcher are displayed at the top of the ACT-R terminal window. This information must be used on the remote computer for connection.

The Pepper application contains a program section for the remote connection to the dispatcher. This client connection can be used to start and control an ACT-R model that maps the cognitive processes for controlling human-robot interaction. The client is able to interact directly with the model by calling commands. The run-full-time command, for example, together with a number of seconds, starts and runs the model for the specified time. The evaluate method is used to evaluate commands from the dispatcher. It requires the name of the command to evaluate.
3.4.1. The ACT-R model

The ACT-R model created in Lisp for this proof-of-concept study uses a goal slot pepper_out for sending commands to the client application using ACT-R productions. This goal slot is evaluated via a permanently running while loop using the buffer-slot-value command, which gets the value of a slot from the chunk in a buffer of the current model. The buffer-slot-value is sent as a string in JSON format via the TCP/IP socket stream. Each evaluation command is assigned a unique ID. This ID is used to identify the correct part of the data in the stream received by the socket. The permanent evaluation of the content of the goal slot pepper_out in the client application is used to create special commands for the robot depending on this slot content, e.g. to execute a certain animation or to make a corresponding utterance.

To illustrate the syntax, the following lines show an example of using the evaluate method for the retrieval of a goal slot as a control signal from the model using the buffer-slot-value command in a while loop, and a production in the Lisp code of the ACT-R model using a goal slot pepper_out for sending such a signal to the client application.

Client application with buffer-slot-value command:

while (true) {
    {method:evaluate, params:[buffer-slot-value, nil, goal, pepper_out], id:10}
}

ACT-R production with pepper_out goal slot:

(p checking-intention
   =goal>
     isa goal
==>
   =goal>
     pepper_out pepper-checks-intention
)

Figure 4: Information exchange between robot and ACT-R

Table 1
Transformation matrix to get the basic emotions

ExcitementState \ PleasureState    Positive    Neutral    Negative
Calm                               Content     Neutral    Sad
Excited                            Joyful      Neutral    Angry

To transmit information from the robot application to the ACT-R model, the client uses the overwrite-buffer-chunk command to copy a chunk into the goal buffer. The model has predefined goal chunks in its declarative memory. If a predefined chunk matches the chunk from the client, all information from this model chunk is placed in the buffer and can be used to trigger a production in the model. Figure 4 illustrates the exchange of information between the robot and ACT-R.
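The matching of replies to requests by their unique ID, as described above, can be sketched as follows (Python for illustration; the exact reply format and the EOT separation of messages in the stream are simplifying assumptions about the dispatcher protocol, since several JSON results may arrive in a single socket read):

```python
import json

def parse_replies(stream_data, wanted_id):
    """Split a received byte stream into JSON messages (assumed to be
    separated by ASCII EOT) and return the result whose id matches."""
    for part in stream_data.split(b"\x04"):
        if not part.strip():
            continue
        reply = json.loads(part)
        if reply.get("id") == wanted_id:
            return reply.get("result")
    return None

# Two interleaved replies as they might arrive on the socket; only the
# one with id 10 (the buffer-slot-value request) is of interest here.
data = (json.dumps({"result": ["pepper-checks-intention"], "id": 10}).encode() + b"\x04" +
        json.dumps({"result": [True], "id": 50}).encode() + b"\x04")
print(parse_replies(data, 10))   # -> ['pepper-checks-intention']
```

Fixed IDs per command type (10 for polling pepper_out, 50 for overwrite-buffer-chunk) keep this demultiplexing simple in the client.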
4. Combining emotion recognition and ACT-R

Pepper has the ability to interpret the basic emotion of the human in front of the robot via facial recognition using the ExcitementState and PleasureState characteristics [18]. The ExcitementState can have the values calm or excited, the PleasureState the values positive, neutral or negative. Based on the work of the psychologist James Russell [19], whose work focuses on emotions, the transformation matrix shown in Table 1 is used for the conversion of these states into the basic emotions neutral, content, joyful, sad and angry. These basic emotions should provide a sufficient basis for adapting the robot's behavior and statements to the emotional state of the human conversation partner.

The idea is to pass these findings on to an ACT-R model, which in turn draws conclusions within the framework of the human-like cognitive architecture and controls the robot application via feedback. The ACT-R model therefore controls the verbal reaction of the robot and/or an animation in the interaction with the human and adapts it to the emotion that has just been recognized. A combination of the possibilities of ACT-R with a humanoid social robot interacting directly with humans could be a way to improve the dialog between a human and a robot and make the robot appear more compassionate and empathetic.

A socket connection via the WLAN network from a robot application as a client to the dispatcher of the ACT-R application running on a PC or laptop, as described in Section 3.4, enables an ACT-R model to receive and process the basic emotion values shown in Table 1 transmitted by the robot's emotion recognition. Feedback from the model to Pepper controls the robot's further behavior and the dialog. Figure 5 depicts the emotion recognition and processing by the robot and ACT-R.

Figure 5: Emotion recognition with Pepper and ACT-R

For transmitting a recognized emotion, the overwrite-buffer-chunk command is used to trigger the right productions of the ACT-R model. How the model handles the information about the person's current emotion depends on the structure of the ACT-R model with its productions and the respective application. Predefined goal chunks in the declarative memory of the model enable productions to be fired depending on the emotion values transmitted. Examples of such goal chunks, which are prepared in the Lisp code of the ACT-R model, and an example production that fills a pepper_out goal slot with a value that is evaluated in the client application of the robot, can be found in the following lines:

(add-dm
  (mood-content-chunk isa goal mood content state pepper-changes-mood)
  (mood-joyful-chunk isa goal mood joyful state pepper-changes-mood)
  (mood-sad-chunk isa goal mood sad state pepper-changes-mood)
  (mood-angry-chunk isa goal mood angry state pepper-changes-mood)
)

(p pepper-content
   =goal>
     isa goal
     mood content
     state pepper-changes-mood
==>
   =goal>
     pepper_out pepper-content
     state pepper-changed-mood
)

The robot's statements, which are controlled via the Chat feature of the client application and saved in dialog topic files as explained in Section 3.3.1, can be influenced in this way. Depending on the goal slot value, different dialogs, responses and/or animations can be triggered. The while loop that runs continuously in the client application essentially contains the following functionalities and simple IF queries for assigning the basic emotions from Pepper's emotion recognition to model chunks, evaluating the goal slot pepper_out of the ACT-R model and selecting the corresponding text passage in the topic file:

while (true) {
    // Set chunks by Pepper's ExcitementState and PleasureState characteristics
    if ((MainActivity.humanPleasure == "POSITIVE") && (MainActivity.humanExcitement == "CALM")) {
        pepperMoodAction = "mood-content-chunk"
    } else if ((MainActivity.humanPleasure == "POSITIVE") && (MainActivity.humanExcitement == "EXCITED")) {
        pepperMoodAction = "mood-joyful-chunk"
    } else if ((MainActivity.humanPleasure == "NEGATIVE") && (MainActivity.humanExcitement == "CALM")) {
        pepperMoodAction = "mood-sad-chunk"
    } else if ((MainActivity.humanPleasure == "NEGATIVE") && (MainActivity.humanExcitement == "EXCITED")) {
        pepperMoodAction = "mood-angry-chunk"
    }
    ...
    // Copy a chunk into the goal buffer and trigger the right productions
    // of the ACT-R model using the overwrite-buffer-chunk command
    {method:evaluate, params: [overwrite-buffer-chunk, nil, goal, pepperMoodAction], id:50}
    ...
    // Permanent evaluation of goal slot pepper_out
    {method:evaluate, params: [buffer-slot-value, nil, goal, pepper_out], id:10}
    ...
    // A variable bufferSlotValueOut contains the current value of the goal slot
    // pepper_out transmitted by the ACT-R model and sets a corresponding
    // variable in the client application
    if (bufferSlotValueOut == "PEPPER-CONTENT") {
        MainActivity.modelMood = "CONTENT"
    } else if (bufferSlotValueOut == "PEPPER-JOYFUL") {
        MainActivity.modelMood = "JOYFUL"
    } else if (bufferSlotValueOut == "PEPPER-SAD") {
        MainActivity.modelMood = "SAD"
    } else if (bufferSlotValueOut == "PEPPER-ANGRY") {
        MainActivity.modelMood = "ANGRY"
    }
    ...
    // React to the model and go to a bookmark section in the topic file
    // to speak the appropriate text
    if (MainActivity.modelMood == "CONTENT") {
        qiChatbot.async()?.goToBookmark(topic.bookmarks["intention_content"])
    } else if (MainActivity.modelMood == "JOYFUL") {
        qiChatbot.async()?.goToBookmark(topic.bookmarks["intention_joyful"])
    } else if (MainActivity.modelMood == "SAD") {
        qiChatbot.async()?.goToBookmark(topic.bookmarks["intention_sad"])
    } else if (MainActivity.modelMood == "ANGRY") {
        qiChatbot.async()?.goToBookmark(topic.bookmarks["intention_angry"])
    }
}

5. Conclusion

Our proof-of-concept application shows that a coupling of ACT-R and a social robot is possible and relatively easy to implement, and that the transmission of emotion data, their evaluation by an ACT-R model, and the control of the robot via the ACT-R model all work. This was achieved by directly connecting the robot application to ACT-R without using additional frameworks.
The fact that the robot can be controlled via a cognitive architecture opens up a wide range of possibilities that these architectures offer in terms of better situated human perception and improved adaptability to the behavior of a human conversation partner. However, it remains important to consider whether the effort required for implementation, modeling and resilience is appropriate in relation to the achievable functionality.

6. Prospects and further ideas

The use of a cognitive architecture in conjunction with a social robot offers far-reaching possibilities for the joint creation of added value in terms of robot behavior that is as easy as possible for humans to understand and comprehend. A dynamic person model, which reacts flexibly and as accurately as possible to changes in the behavior of a human interaction partner and adapts based on human-like cognitive rules and experiences, enables interaction experiences on common ground between humans and robots.

Enriching the cognitive model with real-world data, which the robot perceives via its sensors, in turn enables the model to react to the outside world. The robot's body serves as the executive organ of the cognitive model. Ultimately, the overall result can only be as good as the quality of perception by the sensors and the possibilities offered by the robot. The Pepper robot's emotion recognition via facial expression and voice tone is always a snapshot and not perfectly reliable. Sometimes it is simply wrong or misinterprets a brief irritation on the part of the human. Therefore, ways and means must be devised for the cognitive person model to deal with these possibly contradictory impressions and draw appropriate conclusions from them.

Our first test study in the assumed scenario of a public authority with changing courses has shown that the participants perceive changes in the robot's behavior from case to case depending on the course and the emotional reactions of the participant. The next steps would be to develop a more extensive scenario and a more sophisticated ACT-R model in order to conduct more detailed studies.

Another promising idea might be the use of large language models (LLMs) such as ChatGPT, with their ability to generate human-sounding answers to almost any question, for interaction and collaboration between humans and machines. Prompt generation is the key to successful use. It is conceivable to generate prompts for LLMs with the help of a cognitive architecture from an ACT-R model. This would combine human-like cognition with human-like language skills and could – in combination with emotion recognition – perhaps evoke something like empathetic reactions from the robot and make an interaction on the path to real understanding even more pleasant for the human.

References

[1] A. Werk, S. Scholz, T. Sievers, N. Russwinkel, How to provide a dynamic cognitive person model of a human collaboration partner to a pepper robot, Society for Mathematical Psychology, 2024, forthcoming.
[2] J. R. Anderson, D. Bothell, M. D. Byrne, S. Douglass, C. Lebiere, Y. Qin, An integrated theory of the mind, Psychological Review 111 (2004) 1036–1060. doi:10.1037/0033-295X.111.4.1036.
[3] F. E. Ritter, F. Tehranchi, J. D. Oury, ACT-R: A cognitive architecture for modeling cognition, WIREs Cognitive Science 10 (2019). doi:10.1002/wcs.1488.
[4] F. Förster, M. Romeo, P. Holthaus, L. Wood, C. Dondrup, J. Fischer, F. Liza, S. Kaszuba, J. Hough, B. Nesset, D. Hernandez Garcia, D. Kontogiorgos, J. Williams, Working with troubles and failures in conversation between humans and robots: workshop report, Frontiers in Robotics and AI 10 (2023). doi:10.3389/frobt.2023.1202306.
[5] A. Weidemann, N. Russwinkel, The role of frustration in human–robot interaction – what is needed for a successful collaboration?, Frontiers in Psychology 12 (2021). doi:10.3389/fpsyg.2021.640186.
[6] A. Bono, A. Augello, G. Pilato, F. Vella, S. Gaglio, An ACT-R based humanoid social robot to manage storytelling activities, Robotics 9 (2020) 25. doi:10.3390/robotics9020025.
[7] A. Augello, G. Pilato, An annotated corpus of stories and gestures for a robotic storyteller, 2019, pp. 630–635. doi:10.1109/IRC.2019.00127.
[8] A. Augello, G. Città, M. Gentile, A. Lieto, A storytelling robot managing persuasive and ethical stances via ACT-R: An exploratory study, International Journal of Social Robotics (2021). doi:10.1007/s12369-021-00847-w.
[9] G. Trafton, L. Hiatt, A. Harrison, F. Tanborello, S. Khemlani, A. Schultz, ACT-R/E: An embodied cognitive architecture for human-robot interaction, Journal of Human-Robot Interaction 2 (2013) 30–55. doi:10.5898/JHRI.2.1.Trafton.
[10] S. Xu, D. Tu, Y. He, S. Tan, M. Fang, ACT-R-typed human–robot collaboration mechanism for elderly and disabled assistance, Robotica 32 (2014) 711–721. doi:10.1017/S0263574713001094.
[11] A. Pipitone, A. Chella, What robots want? Hearing the inner voice of a robot, iScience 24 (2021) 102371. doi:10.1016/j.isci.2021.102371.
[12] A. Pipitone, A. Chella, Robot passes the mirror test by inner speech, Robotics and Autonomous Systems 144 (2021) 103838. doi:10.1016/j.robot.2021.103838.
[13] R. Budiu, ACT-R / About, Technical Report, 2024. URL: http://act-r.psy.cmu.edu/about/.
[14] Aldebaran, United Robotics Group and Softbank Robotics, Pepper, Technical Report, 2024. URL: https://www.aldebaran.com/en/pepper.
[15] J. Fink, Anthropomorphism and Human Likeness in the Design of Robots and Human-Robot Interaction, Springer Berlin Heidelberg, 2012. doi:10.1007/978-3-642-34103-8_20.
[16] Aldebaran, United Robotics Group and Softbank Robotics, Pepper SDK for Android, Technical Report, 2024. URL: https://qisdk.softbankrobotics.com/sdk/doc/pepper-sdk/index.html.
[17] QiSDK, Chat, Technical Report, 2024. URL: https://qisdk.softbankrobotics.com/sdk/doc/pepper-sdk/ch4_api/conversation/reference/chat.html.
[18] QiSDK, Mastering Emotion detection, Technical Report, 2024. URL: https://qisdk.softbankrobotics.com/sdk/doc/pepper-sdk/ch4_api/perception/tuto/basic_emotion_tutorial.html.
[19] J. A. Russell, Emotion, core affect, and psychological construction, Cognition and Emotion 23 (2009) 1259–1283. doi:10.1080/02699930902809375.