<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">How to use a cognitive architecture for a dynamic person model with a social robot in human collaboration</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Thomas</forename><surname>Sievers</surname></persName>
							<email>sievers@uni-luebeck.de</email>
							<affiliation key="aff0">
								<orgName type="department">Institute of Information Systems</orgName>
								<orgName type="institution">University of Lübeck</orgName>
								<address>
									<addrLine>Ratzeburger Allee 160</addrLine>
									<postCode>23562</postCode>
									<settlement>Lübeck</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Nele</forename><surname>Russwinkel</surname></persName>
							<email>russwinkel@uni-luebeck.de</email>
							<affiliation key="aff0">
								<orgName type="department">Institute of Information Systems</orgName>
								<orgName type="institution">University of Lübeck</orgName>
								<address>
									<addrLine>Ratzeburger Allee 160</addrLine>
									<postCode>23562</postCode>
									<settlement>Lübeck</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">How to use a cognitive architecture for a dynamic person model with a social robot in human collaboration</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">2DA4F361E8628108D6EAF0285766D1BE</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T17:20+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>ACT-R</term>
					<term>cognitive architecture</term>
					<term>human-robot interaction</term>
					<term>social robotics</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The use of cognitive architectures is promising for achieving more human-like reactions and behavior in social robots. For example, ACT-R can be used to create a dynamic cognitive person model of a human cooperation partner of the robot. A proof-of-concept for a direct and easy-to-implement integration of ACT-R with the humanoid social robot Pepper is described in this work. An exemplary setup of the system, consisting of the cognitive architecture and a robot application, and the type of connection between ACT-R and the robot are explained. Furthermore, an idea is outlined of how the cognitive person model of the human cooperation partner in ACT-R is updated with dynamic data from the real world, using the example of emotion recognition by the robot.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The development of situated human-aware agents that interact with human partners is a new field of research in terms of using a cognitive architecture for controlling the application and modeling human-like interaction. The use of cognitive architectures is promising in order to achieve more human-like reactions and behavior in social robots. Adaptability to changing situations in human-robot dialog, as well as the comprehensibility and thus the acceptance of robots, even in environments that are sensitive and anxiety-inducing for humans, could also be improved as a result. This work attempts to make a first step towards the utilization of different cognitive concepts (e.g. situation understanding, prediction and adaptation to the emotional state of the partner, flexible task anticipation) by describing a proof-of-concept for the integration of a cognitive architecture with the humanoid social robot Pepper and preparing a technical basis for a more human-like perception of human interaction partners. In this context, we have carried out an initial study with the application scenario of a public authority <ref type="bibr" target="#b0">[1]</ref>. However, a detailed evaluation and further studies that could confirm an effective benefit are still pending.</p><p>Cognitive architectures refer both to a theory about the structure of the human mind and to a computational realization of such a theory. Their formalized models can be used to further refine a comprehensive theory of cognition, to provide common ground for working towards a specific goal, to flexibly react to actions of the human collaboration partner, and to develop situation understanding for adequate reactions. 
Well-known and successfully used cognitive architectures are ACT-R (Adaptive Control of Thought-Rational) and SOAR <ref type="bibr" target="#b1">[2]</ref>.</p><p>Like any cognitive architecture, ACT-R as a theory for simulating and understanding human cognition aims to define the basic and irreducible cognitive and perceptual operations that enable the human mind. In theory, each task that humans can perform should consist of a series of these discrete operations. Most of ACT-R's basic assumptions are also inspired by the progress of cognitive neuroscience, and ACT-R can be seen and described as a way of specifying how the brain itself is organized to produce cognition <ref type="bibr" target="#b2">[3]</ref>.</p><p>For an envisioned scenario, this cognitive architecture can generate flexible task knowledge and build mental representations of the relevant information about the individual with whom the robot is collaborating, the state of the task to be accomplished together and/or the person model of the human. If at some point it turns out that the intention of the human cooperation partner cannot be achieved directly because, for example, some relevant information is missing, this person will probably be frustrated. When something fails in completing the desired task, the human perception of the robot can be a critical component for the acceptance of social robots in general. Greater autonomy of the robot can lead to greater blame if something goes wrong. In their workshop report, Förster et al. provide a comprehensive overview of the things that can go wrong in conversations between humans and robots, including a detailed analysis of failures <ref type="bibr" target="#b3">[4]</ref>. The robot needs to retrieve appropriate reactions to such failures, e.g. to find an alternative solution. Frustration on the part of the human counterpart should be avoided as far as possible <ref type="bibr" target="#b4">[5]</ref>.</p><p>After giving some examples from previous research on connections between ACT-R and robots, we present our exemplary system setup, which consists of the cognitive architecture and a robot application programmed for the purpose of a direct connection between ACT-R and the robot. The standalone application of ACT-R we use is available for the main computer platforms Linux, macOS and Windows. We show a dynamic update of the cognitive person model of the human cooperation partner in ACT-R with data from the real world, using the example of emotion recognition by the robot.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related work</head><p>A coupling of ACT-R as a cognitive architecture with different types of robots has already been realized and used for various purposes. For example, an interactive narrative system is described in which the characters in the story are interpreted by humanoid robots, which is achieved by defining suitable cognitive models <ref type="bibr" target="#b5">[6]</ref>. These robots use the NarRob framework <ref type="bibr" target="#b6">[7]</ref>.</p><p>A storytelling robot controlled by ACT-R is able to adopt different persuasion techniques and ethical stances while talking about certain topics <ref type="bibr" target="#b7">[8]</ref>. In this case, the cognitive ACT-R architecture is connected to a Unity 3D engine.</p><p>An adaptation of the ACT-R architecture for embodiment, called Adaptive Character of Thought-Rational/Embodied (ACT-R/E), was created to function in the embodied world. It places an additional constraint on cognition, namely that cognition occurs within a physical body that must navigate in real surroundings, as well as perceive the world and manipulate objects <ref type="bibr" target="#b8">[9]</ref>.</p><p>ACT-R is also used in human-robot collaboration (HRC) for mobile service robots, connecting and integrating modules of human, robot, perception, HRI, and HRC in the ACT-R architecture <ref type="bibr" target="#b9">[10]</ref>.</p><p>The inner voice of a robot cooperating with human partners is made audible via ACT-R integrated in the Robot Operating System (ROS) <ref type="bibr" target="#b10">[11]</ref>. An implementation of a robotic self-recognition method based on inner speech has also been demonstrated using ACT-R <ref type="bibr" target="#b11">[12]</ref>.</p><p>The distinctive feature of our approach is that the robot is directly connected to the ACT-R environment via Wi-Fi without using a special framework. 
It is therefore not necessary to install the ACT-R application on the robot in order to run the model. In this way, there is no need to deal with specific requirements of a particular framework.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Connect ACT-R to a Pepper robot</head><p>The ability of ACT-R as a system to perform a wide range of human cognitive tasks can be directly combined with a social robot that interacts with humans. The assumption behind these efforts is that this could make a conversation between a robot and a human more human-like on the part of the robot and thus more pleasant for the human.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">ACT-R</head><p>The basic mechanism of ACT-R consists of the main components modules, buffers and pattern matcher <ref type="bibr" target="#b12">[13]</ref>. There are two types of modules: Perceptual-motor modules forming the interface with the real world (motor module and visual module), and the memory modules comprising declarative memory consisting of facts and procedural memory consisting of productions. Productions represent knowledge about how something should be done. Figure <ref type="figure" target="#fig_0">1</ref> gives an overview of the main components.</p><p>ACT-R accesses its modules (with the exception of the procedural memory) via special buffers. The buffers form the interface to this module. The buffer content represents the state of ACT-R over time. The pattern matcher attempts to find a production that corresponds to the current state of the buffers. Only one production can be executed at a time. Productions can modify the buffers during execution and thus change the state of the system. Cognition is therefore represented in ACT-R as a sequence of production firings.</p><p>In our approach, we do not use the visual and motor modules to provide input to the system. The buffers are used directly to exchange information between the real world of the robot and the ACT-R model.</p></div>
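The production-firing cycle described above can be made concrete with a small toy sketch (in Python, not ACT-R itself): buffers are modeled as dictionaries, and a pattern matcher fires at most one matching production per cycle until no production matches. The production names and slot values here are purely illustrative assumptions.

```python
# Toy sketch of ACT-R's match-fire cycle (not the real architecture):
# buffers hold chunk-like dicts, productions are condition/action pairs.
def run(buffers, productions, max_cycles=10):
    trace = []
    for _ in range(max_cycles):
        # the pattern matcher finds a production matching the buffer state
        firing = next((p for p in productions if p["condition"](buffers)), None)
        if firing is None:
            break  # no production matches: the model halts
        trace.append(firing["name"])
        firing["action"](buffers)  # productions modify buffers when they fire
    return trace

# Two hypothetical productions: greet once, then finish.
productions = [
    {"name": "greet",
     "condition": lambda b: b["goal"].get("state") == "start",
     "action": lambda b: b["goal"].update(state="greeted")},
    {"name": "finish",
     "condition": lambda b: b["goal"].get("state") == "greeted",
     "action": lambda b: b["goal"].update(state="done")},
]

buffers = {"goal": {"state": "start"}}
print(run(buffers, productions))  # → ['greet', 'finish']
```

Cognition is thus represented as the sequence of production firings returned in the trace, with the buffer contents as the system state between firings.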
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Humanoid robot Pepper</head><p>The social humanoid robot Pepper <ref type="bibr" target="#b13">[14]</ref>, as seen in Figure <ref type="figure" target="#fig_1">2</ref>, developed by Aldebaran, is 120 centimeters tall and optimized for human interaction. It is able to engage with people through speech, gestures and its tablet. Since research has generally shown that trust is the basis for successful communication tasks and that trust in robots is increased by anthropomorphism, a humanoid social robot like Pepper is a good choice for social interaction and the provision of services when dealing with customers. A human face, the possibility of human-like expressions and body language and the use of voice are seen as beneficial for the trust of customers in the robot <ref type="bibr" target="#b14">[15]</ref>. Compared to a chatbot, it has the advantage that it also shows physical gestures, which makes communication much more vivid and strengthens a personal relationship. The Pepper robot is already being used in many HRI projects and has also been tested in real production use.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">The robot application</head><p>We developed an application that controls the robot's reactions to what the human conversation partner says. To do this, we used Android Studio with the Kotlin programming language and the Pepper SDK for Android <ref type="bibr" target="#b15">[16]</ref>, which enables the robot to be controlled via an app from its Android tablet. The Pepper SDK, as an Android Studio plug-in, provides a set of graphical tools and a Java and Kotlin library, the QiSDK, so that specific functionalities of Pepper's operating system can be used in a straightforward way directly from an Android application, e.g., for focusing on a person, listening, talking and chatting as well as movements of the head and arms to stress what has been said.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.1.">Listen and talk</head><p>Pepper's native speech recognition capabilities and a speech output with the (in our case German) language pack are used for speech input and output, and Pepper's Chat feature <ref type="bibr" target="#b16">[17]</ref> is utilized to conduct the dialog. With the Chat feature, words and phrases that the robot should understand, as well as the corresponding answers, are stored in topic files in the form of dictionaries and dialog branches. The flexible options for using variables or randomly selected parts of sentences in the robot's responses enable a natural dialog flow. The Pepper SDK also provides parameters for using pauses, intonation and voice modulation to further enhance a human-like dialog.</p><p>With regard to controlling the reactions and statements of the robot by an ACT-R model, which is supplied with relevant data for interaction from the real world, the use of these topic files offers the robot the possibility to make statements adapted to the current situation by referring to the appropriate sections in the topic file. Figure <ref type="figure" target="#fig_2">3</ref> shows a schematic diagram of the topic file process within the robot application.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.2.">Animation</head><p>Context-dependent robot gesture animations can be used to support what is said in a given situation. These animations increase anthropomorphism and comprehensibility through the indirect effect of body language. Groups of suitable animations can be defined, of which a randomly selected one is executed at certain points of the interaction, e.g. when greeting, in response to a question from the human, when the robot asks a question, etc. These animations support the interaction with the human as they emphasize the robot's statements.</p><p>Depending on the course of the conversation and the findings about the emotional state of the human counterpart, the ACT-R model can, for example, be used to control the robot's gestures in conjunction with the robot's utterances. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">System setup for ACT-R and the robot</head><p>The standalone version of ACT-R is used for this work, i.e. the application provided at https://act-r.psy.cmu.edu/ instead of running the Lisp sources. To establish a remote connection from the robot to ACT-R, the remote interface, the dispatcher, has to be used, which is implemented by a central command server. The ACT-R core software connects to this dispatcher to provide access to its commands, and the dispatcher accepts TCP/IP socket connections that allow clients to access these commands and provide their own commands for use. The commands available via the dispatcher can be used wherever a Lisp function was formerly required. By default, the standalone version forces the dispatcher to use the localhost IP address of the computer on which it is running for connections instead of an external IP address. This means that only programs on the same computer can establish a connection, and once ACT-R has been started, this can no longer be changed. To disable this behavior, the file force-local.lisp must be removed from the ACT-R/patches directory before the application is executed. Then the dispatcher will use the machine's real IP address for its connections, and setting *allow-external-connections* in the model file will let other machines connect. Another option is to place the model file in the ACT-R/user-loads directory; external connections are then always permitted. The address and port used by the dispatcher are displayed at the top of the ACT-R terminal window. This information must be used on the remote computer for the connection.</p><p>The Pepper application contains a program section for the remote connection to the dispatcher. This client connection can be used to start and control an ACT-R model that maps the cognitive processes for controlling human-robot interaction. The client is able to interact directly with the model by calling commands. 
The run-full-time command, for example, together with a number of seconds, starts and runs the model for the specified time. The evaluate method is used to evaluate commands from the dispatcher. It requires the name of the command to evaluate.</p></div>
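As a sketch of such a client, the following Python fragment builds dispatcher messages in the {method, params, id} JSON shape used in this work and sends a run-full-time call over a TCP/IP socket. The end-of-transmission terminator byte (0x04), the nil/None model parameter, and the host and port values are assumptions for illustration, not details confirmed by this paper.

```python
import json
import socket

EOT = b"\x04"  # assumed message terminator of the ACT-R remote interface

def build_message(method, params, msg_id):
    """Encode one dispatcher message in the {method, params, id} shape."""
    return json.dumps({"method": method, "params": params, "id": msg_id}).encode() + EOT

def run_model(host, port, seconds):
    """Ask the dispatcher to run the currently loaded model for `seconds`."""
    with socket.create_connection((host, port)) as sock:
        # evaluate takes the command name, a model (None/nil), then arguments
        sock.sendall(build_message("evaluate", ["run-full-time", None, seconds], 1))
        reply = b""
        while not reply.endswith(EOT):
            reply += sock.recv(4096)
        return json.loads(reply[:-1])

if __name__ == "__main__":
    # hypothetical address and port, as displayed at the top of
    # the ACT-R terminal window on the machine running ACT-R
    print(run_model("192.168.0.10", 2650, 10.0))
```

The same build_message helper can carry any other dispatcher command in place of run-full-time.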
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.1.">The ACT-R model</head><p>The ACT-R model created in Lisp for this proof-of-concept study uses a goal slot pepper_out for sending commands to the client application using ACT-R productions. This goal slot is evaluated via a permanently running while loop using the buffer-slot-value command, which gets the value of a slot from the chunk in a buffer of the current model. The buffer-slot-value is sent as a string in JSON format via the TCP/IP socket stream. Each evaluation command is assigned a unique ID. This ID is used to identify the correct part of the data in the stream received by the socket. The permanent evaluation of the content of the goal slot pepper_out in the client application is used to create special commands for the robot depending on this slot content, e.g. to execute a certain animation or to make a corresponding utterance.</p><p>To illustrate the syntax, the following lines show an example of using the evaluate method for the retrieval of a goal slot as a control signal from the model using the buffer-slot-value command in a while loop, and a production in the Lisp code of the ACT-R model using a goal slot pepper_out for sending such a signal to the client application.</p><p>Client application with buffer-slot-value command: To transmit information from the robot application to the ACT-R model, the client uses the overwrite-buffer-chunk command to copy a chunk into the goal buffer. The model has predefined goal chunks in its declarative memory. If a predefined chunk matches the chunk from the client, all information from this model chunk is placed in the buffer and can be used to trigger a production in the model. Figure <ref type="figure" target="#fig_4">4</ref> illustrates the exchange of information between the robot and ACT-R.</p></div>
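One step of the polling loop described above can be sketched as follows; the reply format ({result, id}), the 0x04 stream terminator, and the starting id value are assumptions made for illustration, and the socket object is interchangeable with any stream that supports sendall/recv.

```python
import itertools
import json

EOT = b"\x04"  # assumed message terminator of the dispatcher stream
_ids = itertools.count(10)  # unique id per evaluation command

def poll_pepper_out(sock):
    """One step of the client while loop: request the goal slot pepper_out
    via buffer-slot-value and match the dispatcher's reply by its id."""
    msg_id = next(_ids)
    request = {"method": "evaluate",
               "params": ["buffer-slot-value", None, "goal", "pepper_out"],
               "id": msg_id}
    sock.sendall(json.dumps(request).encode() + EOT)
    data = b""
    while not data.endswith(EOT):
        data += sock.recv(4096)
    # the stream may interleave several messages; pick ours by id
    for part in data.split(EOT):
        if part:
            reply = json.loads(part)
            if reply.get("id") == msg_id:
                return reply.get("result")
    return None
```

The returned slot value can then be mapped to a robot command, e.g. selecting an animation or a topic file section.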
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Combining emotion recognition and ACT-R</head><p>Pepper has the ability to interpret the basic emotion of the human in front of the robot via facial recognition using the ExcitementState and PleasureState characteristics <ref type="bibr" target="#b17">[18]</ref>. The ExcitementState can have the values calm or excited, the PleasureState the values positive, neutral or negative. Based on the work of psychologist James Russell <ref type="bibr" target="#b18">[19]</ref>, whose work focuses on emotions, a transformation matrix shown in Table <ref type="table" target="#tab_0">1</ref> is used for the conversion of these states into the basic emotions neutral, content, joyful, sad and angry. These basic emotions should provide a sufficient basis for adapting the robot's behavior and statements to the emotional state of the human conversation partner. The idea is to pass these findings on to an ACT-R model, which in turn draws conclusions within the framework of the human-like cognitive architecture and controls the robot application via feedback. The ACT-R model therefore controls the verbal reaction of the robot and/or an animation in the interaction with the human and adapts it to the emotion that has just been recognized. A combination of the possibilities of ACT-R with a humanoid social robot interacting directly with humans could be a way to improve the dialog between a human and a robot and make the robot appear more compassionate and empathetic.</p><p>A socket connection via the WLAN network from a robot application as a client to the dispatcher of the ACT-R application running on a PC or laptop, as described in Section 3.4, enables an ACT-R model to receive and process the basic emotion values shown in Table <ref type="table" target="#tab_0">1</ref> transmitted by the robot's emotion recognition. Feedback from the model to Pepper controls the robot's further behavior and the dialog. 
Figure <ref type="figure" target="#fig_5">5</ref> depicts the emotion recognition and processing by the robot and ACT-R.</p><p>For transmitting a recognized emotion, the overwrite-buffer-chunk command is used to trigger the right productions of the ACT-R model. How the model handles the information about the person's current emotion depends on the structure of the ACT-R model with its productions and the respective application. Predefined goal chunks in the declarative memory of the model enable productions to be fired depending on the emotion values transmitted. Examples of such goal chunks, which are prepared in the Lisp code of the ACT-R model, and an example production that fills a pepper_out goal slot with a value that is evaluated in the client application of the robot, can be found in the following lines: The robot's statements, which are controlled via the Chat feature of the client application and saved in dialog topic files as explained in Section 3.3.1, can be influenced in this way. Depending on the goal slot value, different dialogs, responses and/or animations can be triggered. The while loop that runs continuously in the client application essentially contains the following functionalities and simple IF queries for assigning the basic emotions from Pepper's emotion recognition to model chunks, evaluating the goal slot pepper_out of the ACT-R model and selecting the corresponding text passage in the topic file: </p></div>
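A minimal sketch of the Table 1 transformation and of assigning a recognized emotion to a predefined goal chunk follows; the chunk naming pattern mirrors the mood-content-chunk example from the model's add-dm call, but the helper function itself is a hypothetical illustration, not the paper's client code.

```python
# Table 1 transformation: (ExcitementState, PleasureState) pairs from
# Pepper's perception API mapped to the five basic emotions.
BASIC_EMOTION = {
    ("calm",    "positive"): "content",
    ("calm",    "neutral"):  "neutral",
    ("calm",    "negative"): "sad",
    ("excited", "positive"): "joyful",
    ("excited", "neutral"):  "neutral",
    ("excited", "negative"): "angry",
}

def to_goal_chunk(excitement, pleasure):
    """Name of the predefined goal chunk for a recognized emotion,
    e.g. 'mood-content-chunk' as in the model's add-dm call
    (naming convention assumed for illustration)."""
    emotion = BASIC_EMOTION.get((excitement.lower(), pleasure.lower()), "neutral")
    return f"mood-{emotion}-chunk"

print(to_goal_chunk("Calm", "Positive"))  # → mood-content-chunk
```

The chunk name returned here would be passed to overwrite-buffer-chunk so that the matching production fires in the model.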
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>Our proof-of-concept application shows that a coupling of ACT-R and a social robot is possible and relatively easy to implement and that the transmission of emotion data and their evaluation by an ACT-R model as well as a control of the robot via the ACT-R model works. This was achieved by directly connecting the robot application to ACT-R without using additional frameworks.</p><p>The fact that the robot can be controlled via a cognitive architecture opens up a wide range of possibilities that these architectures offer in terms of better situated human perception and improved adaptability to the behavior of a human conversation partner. However, it remains important to consider whether the effort required for implementation, modeling and resilience is appropriate in relation to the achievable functionality.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Prospects and further ideas</head><p>The use of a cognitive architecture in conjunction with a social robot offers far-reaching possibilities for the joint creation of added value in terms of robot behavior that is as easy as possible for humans to understand and comprehend. A dynamic person model, which reacts flexibly and as accurately as possible to changes in the behavior of a human interaction partner and adapts based on human-like cognitive rules and experiences, enables interaction experiences on common ground between humans and robots.</p><p>Enriching the cognitive model with real-world data, which the robot perceives via its sensors, in turn enables the model to react to the outside world. The robot's body serves as the executive organ of the cognitive model. Ultimately, the overall result can only be as good as the quality of perception by the sensors and the possibilities offered by the robot. The Pepper robot's emotion recognition via facial expression and voice tone is always a snapshot and not perfectly reliable. Sometimes it is simply wrong or misinterprets a brief irritation on the part of the human. Therefore, ways and means must be devised for the cognitive person model to deal with these possibly contradictory impressions and draw appropriate conclusions from them.</p><p>Our first test study in the assumed scenario of a public authority with varying courses of interaction has shown that participants perceive changes in the robot's behavior from case to case, depending on the course of the interaction and the emotional reactions of the participant. 
The next steps would be to develop a more extensive scenario and a more sophisticated ACT-R model in order to conduct more detailed studies.</p><p>Another promising idea might be the use of large language models (LLMs) such as ChatGPT, with their ability to generate human-sounding answers to almost any question, for interaction and collaboration between humans and machines. Prompt generation is the key to successful use. It is conceivable to generate prompts for LLMs with the help of a cognitive architecture from an ACT-R model. This would combine human-like cognition with human-like language skills and could, in combination with emotion recognition, perhaps evoke something like empathetic reactions from the robot and make an interaction on the path to real understanding even more pleasant for the human.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: ACT-R modules, buffers and pattern matcher</figDesc><graphic coords="2,309.59,65.60,213.69,207.52" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Humanoid Robot Pepper</figDesc><graphic coords="3,72.00,65.61,213.68,279.57" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Schematic diagram of the topic file process</figDesc><graphic coords="3,309.59,65.61,213.68,200.32" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head></head><label></label><figDesc>Client while loop: while (true) { {method: evaluate, params: [buffer-slot-value, nil, goal, pepper_out], id: 10} }, followed in the model code by an ACT-R production with a pepper_out goal slot.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Information exchange between robot and ACT-R</figDesc><graphic coords="4,309.59,65.61,213.68,192.31" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Emotion recognition with Pepper and ACT-R</figDesc><graphic coords="5,72.00,65.61,213.68,243.75" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Transformation matrix to get the basic emotions</figDesc><table><row><cell>ExcitementState</cell><cell cols="3">PleasureState</cell></row><row><cell></cell><cell>Positive</cell><cell>Neutral</cell><cell>Negative</cell></row><row><cell>Calm</cell><cell>Content</cell><cell>Neutral</cell><cell>Sad</cell></row><row><cell>Excited</cell><cell>Joyful</cell><cell>Neutral</cell><cell>Angry</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" xml:id="foot_0">(add-dm (mood-content-chunk isa goal mood content state pepper-changes-mood))</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">How to provide a dynamic cognitive person model of a human collaboration partner to a pepper robot</title>
		<author>
			<persName><forename type="first">A</forename><surname>Werk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Scholz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Sievers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Russwinkel</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Society for Mathematical Psychology</title>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
	<note>forthcoming</note>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">An integrated theory of the mind</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">R</forename><surname>Anderson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Bothell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">D</forename><surname>Byrne</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Douglass</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Lebiere</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Qin</surname></persName>
		</author>
		<idno type="DOI">10.1037/0033-295X.111.4.1036</idno>
		<imprint>
			<date type="published" when="2004">2004</date>
			<biblScope unit="volume">111</biblScope>
			<biblScope unit="page" from="1036" to="1060" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">ACT-R: A cognitive architecture for modeling cognition</title>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">E</forename><surname>Ritter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Tehranchi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">D</forename><surname>Oury</surname></persName>
		</author>
		<idno type="DOI">10.1002/wcs.1488</idno>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="volume">10</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Working with troubles and failures in conversation between humans and robots: workshop report</title>
		<author>
			<persName><forename type="first">F</forename><surname>Förster</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Romeo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Holthaus</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Wood</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Dondrup</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Fischer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Liza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kaszuba</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Hough</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Nesset</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Hernandez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Garcia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kontogiorgos</surname></persName>
		</author>
		<author>
			<persName><surname>Williams</surname></persName>
		</author>
		<idno type="DOI">10.3389/frobt.2023.1202306</idno>
	</analytic>
	<monogr>
		<title level="j">Frontiers in Robotics and AI</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">The role of frustration in human-robot interaction - what is needed for a successful collaboration?</title>
		<author>
			<persName><forename type="first">A</forename><surname>Weidemann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Russwinkel</surname></persName>
		</author>
		<idno type="DOI">10.3389/fpsyg.2021.640186</idno>
	</analytic>
	<monogr>
		<title level="j">Frontiers in Psychology</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">An ACT-R based humanoid social robot to manage storytelling activities</title>
		<author>
			<persName><forename type="first">A</forename><surname>Bono</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Augello</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Pilato</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Vella</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gaglio</surname></persName>
		</author>
		<idno type="DOI">10.3390/robotics9020025</idno>
		<ptr target="https://doi.org/10.3390/robotics9020025" />
	</analytic>
	<monogr>
		<title level="j">Robotics</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page">25</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">An annotated corpus of stories and gestures for a robotic storyteller</title>
		<author>
			<persName><forename type="first">A</forename><surname>Augello</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Pilato</surname></persName>
		</author>
		<idno type="DOI">10.1109/IRC.2019.00127</idno>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="630" to="635" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">A storytelling robot managing persuasive and ethical stances via ACT-R: An exploratory study</title>
		<author>
			<persName><forename type="first">A</forename><surname>Augello</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Città</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gentile</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Lieto</surname></persName>
		</author>
		<idno type="DOI">10.1007/s12369-021-00847-w</idno>
	</analytic>
	<monogr>
		<title level="j">International Journal of Social Robotics</title>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">ACT-R/E: An embodied cognitive architecture for human-robot interaction</title>
		<author>
			<persName><forename type="first">G</forename><surname>Trafton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Hiatt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Harrison</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Tamborello</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Khemlani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Schultz</surname></persName>
		</author>
		<idno type="DOI">10.5898/JHRI.2.1.Trafton</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Human-Robot Interaction</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="30" to="55" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">ACT-R-typed human-robot collaboration mechanism for elderly and disabled assistance</title>
		<author>
			<persName><forename type="first">S</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Tu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Tan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Fang</surname></persName>
		</author>
		<idno type="DOI">10.1017/S0263574713001094</idno>
	</analytic>
	<monogr>
		<title level="j">Robotica</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="page" from="711" to="721" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">What robots want? Hearing the inner voice of a robot</title>
		<author>
			<persName><forename type="first">A</forename><surname>Pipitone</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Chella</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.isci.2021.102371</idno>
	</analytic>
	<monogr>
		<title level="j">iScience</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="page">102371</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Robot passes the mirror test by inner speech</title>
		<author>
			<persName><forename type="first">A</forename><surname>Pipitone</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Chella</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.robot.2021.103838</idno>
	</analytic>
	<monogr>
		<title level="j">Robotics and Autonomous Systems</title>
		<imprint>
			<biblScope unit="volume">144</biblScope>
			<biblScope unit="page">103838</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">ACT-R / About</title>
		<author>
			<persName><forename type="first">Raluca</forename><surname>Budiu</surname></persName>
		</author>
		<ptr target="http://act-r.psy.cmu.edu/about/" />
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
	<note type="report_type">Technical Report</note>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<author>
			<persName><surname>Aldebaran</surname></persName>
		</author>
		<ptr target="https://www.aldebaran.com/en/pepper" />
		<title level="m">United Robotics Group and Softbank Robotics, Pepper</title>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
	<note type="report_type">Technical Report</note>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<title level="m" type="main">Anthropomorphism and Human Likeness in the Design of Robots and Human-Robot Interaction</title>
		<author>
			<persName><forename type="first">J</forename><surname>Fink</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-642-34103-8_20</idno>
		<imprint>
			<date type="published" when="2012">2012</date>
			<publisher>Springer</publisher>
			<pubPlace>Berlin Heidelberg</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<author>
			<persName><surname>Aldebaran</surname></persName>
		</author>
		<ptr target="https://qisdk.softbankrobotics.com/sdk/doc/pepper-sdk/index.html" />
		<title level="m">United Robotics Group and Softbank Robotics, Pepper SDK for Android</title>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
	<note type="report_type">Technical Report</note>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">Chat</title>
		<author>
			<persName><surname>QiSDK</surname></persName>
		</author>
		<ptr target="https://qisdk.softbankrobotics.com/sdk/doc/pepper-sdk/ch4_api/conversation/reference/chat.html" />
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
	<note type="report_type">Technical Report</note>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<title level="m" type="main">Mastering Emotion detection</title>
		<author>
			<persName><surname>QiSDK</surname></persName>
		</author>
		<ptr target="https://qisdk.softbankrobotics.com/sdk/doc/pepper-sdk/ch4_api/perception/tuto/basic_emotion_tutorial.html" />
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
	<note type="report_type">Technical Report</note>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Emotion, core affect, and psychological construction</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Russell</surname></persName>
		</author>
		<idno type="DOI">10.1080/02699930902809375</idno>
	</analytic>
	<monogr>
		<title level="j">Cognition and Emotion</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page" from="1259" to="1283" />
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
