<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Explainability and self-disclosure for robot ethical introspection</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Valeria</forename><surname>Seidita</surname></persName>
							<email>valeria.seidita@unipa.it</email>
							<affiliation key="aff0">
								<orgName type="department">Dipartimento di Ingegneria</orgName>
								<orgName type="institution">Università degli Studi di Palermo</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Antonio</forename><surname>Chella</surname></persName>
							<email>antonio.chella@unipa.it</email>
							<affiliation key="aff0">
								<orgName type="department">Dipartimento di Ingegneria</orgName>
								<orgName type="institution">Università degli Studi di Palermo</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="institution">ICAR-CNR National Research Council</orgName>
								<address>
									<settlement>Palermo</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff2">
								<orgName type="department">Workshop on Multidisciplinary Perspectives on Human-AI Team</orgName>
								<address>
									<addrLine>Dec 04</addrLine>
									<postCode>2023</postCode>
									<settlement>Gothenburg</settlement>
									<country key="SE">Sweden</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Explainability and self-disclosure for robot ethical introspection</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">A655C79E0ADD28473A4FFA164AD93858</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:36+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Ethics</term>
					<term>Self-Disclosure</term>
					<term>Ethical Introspection</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Human-robot or human-AI interaction systems require a high degree of autonomy, proactivity, and adaptivity. The decisions that intelligent systems must make are highly dependent on the application context, and trust is an essential element in task assignment. Explainability and ethical introspection capabilities are important in building trust and understanding in artificial processes. In this paper, we present our ongoing work aimed at equipping robots with ethical introspection capabilities when interacting with humans by designing and implementing explainability and self-disclosure capabilities. Using a computational model of ethical introspection that incorporates theories from psychology, ethics, and AI, we build robots that examine and reflect on their actions in order to evaluate and validate them. We use the Belief-Desire-Intention (BDI) agent paradigm and related programming languages, along with the speech act mechanism, to improve and extend the robot's ethical values so as to better guide its decision-making process and the impact it has on humans.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Robots that emulate human behavior and assist individuals in their daily lives have long been a dream. A pioneer of this concept was Isaac Asimov, whose Three Laws of Robotics encapsulated the notion of robots interacting with humans. Asimov's three laws provide a valuable foundation; however, they are not sufficient when addressing contexts where robots must exhibit autonomy, proactivity, and self-adaptation.</p><p>The scenarios under consideration pertain to human-robot team interaction (HRTI) <ref type="bibr" target="#b0">[1]</ref>[2] <ref type="bibr" target="#b2">[3]</ref>. Within HRTI, the focal point is the collaboration between teams of humans and robots to achieve common goals. Irrespective of the nature of the task, whether it is purely social, such as providing companionship to an elderly individual, or high-risk, as in military settings, humans and robots must exchange information regarding the objectives, mission, their respective limitations, capabilities, and the work environment.</p><p>Typically, drawing upon their knowledge of the aforementioned elements, each team member selects an action to fulfill their mission. Nevertheless, this process is not strictly individualistic. The choice of action is also contingent upon the presence and actions of other team members, their knowledge, and competencies. Each team member must possess the ability to comprehend and anticipate the actions of their peers, as well as make decisions regarding which actions in the plan they can execute. Ultimately, each team member must decide which actions to undertake personally and which to delegate.</p><p>Several factors come into play in this decision-making process, with trust in one another being a pivotal element. One of the key factors that engenders or enhances trust in a fellow team member is the ability to expound on the rationale behind their actions. 
In a complex environment characterized by high dynamism, leading to uncertainty and decision-making challenges, such as in healthcare or military contexts, the selection of the optimal action is contingent upon the ability to evaluate the outcomes of actions in relation to predefined goals and conditions.</p><p>In some of our previous work, we have explored strategies for enhancing a robot's decision-making abilities through the concept of 'anticipation' <ref type="bibr" target="#b3">[4]</ref>[5] <ref type="bibr" target="#b5">[6]</ref>. We have developed a tool that allows the robot to transparently present its decision-making process. The robot selects an action given at design time and simulates the outcome before execution. If the simulated result aligns with the post-conditions of the goal, the mission is deemed successful; otherwise, the robot must opt for an alternative action. Concurrently, the robot provides an explanation to its human companion regarding its actions.</p><p>In any complex scenario involving the autonomy of a robot in the interactive domain, considerations concerning 'ethics' also come to the forefront. For instance, the potential invasion or violation of privacy when employing robots to assist patients necessitates that robots adhere to two fundamental principles: (i) making decisions aligned with the ethical standards of the society in which they operate, and (ii) articulating the rationale behind their actions to build human trust and influence human decisions. For example, in the context of a robot assisting a medical doctor, the robot can suggest an action to the doctor, justifying it from an ethical perspective, thereby stimulating thoughtful consideration. This process supports not only efficient decision-making but also provokes reflection in the human user.</p><p>Our endeavor revolves around the exploration of the roles of introspection and self-disclosure in ethical deliberation and social influence. 
In this paper, we present an initial hypothesis on how to formulate a computational model of ethical introspection in robots, building upon prior work on anticipation and trust <ref type="bibr" target="#b3">[4]</ref>[7] <ref type="bibr" target="#b7">[8]</ref>.</p></div>
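The simulate-then-explain loop described above (select an action, simulate its outcome, check the goal's post-conditions, otherwise try an alternative, and explain the choice) can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the function name, the candidate actions, and the predicted-collision simulator are all hypothetical.

```python
# Illustrative sketch of the anticipate-then-act loop: the robot simulates each
# candidate action and only executes one whose simulated outcome satisfies the
# goal's post-conditions, explaining every decision to its human companion.

def anticipate_and_act(candidate_actions, simulate, postconditions, explain):
    """Return the first action whose simulated outcome meets every post-condition."""
    for action in candidate_actions:
        outcome = simulate(action)  # predict the result, do not execute yet
        if all(check(outcome) for check in postconditions):
            explain(f"Selected '{action}': simulated outcome {outcome} "
                    f"satisfies the mission post-conditions.")
            return action
        explain(f"Discarded '{action}': simulated outcome {outcome} "
                f"violates a post-condition.")
    explain("No candidate action satisfies the mission; replanning is needed.")
    return None

# Toy usage: a hypothetical delivery robot choosing between two routes,
# where the simulator predicts the number of collisions on each route.
log = []
chosen = anticipate_and_act(
    candidate_actions=["corridor_route", "crowded_hall_route"],
    simulate=lambda a: {"corridor_route": 0, "crowded_hall_route": 3}[a],
    postconditions=[lambda collisions: collisions == 0],
    explain=log.append,
)
```

The explanation callback is deliberately separated from the decision logic, so the same loop can report to a human teammate, a log, or another agent.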
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Enabling Ethical Reasoning in Artificial Agents: a Model</head><p>Ethical introspection is an activity that focuses on mental states and mental processes as they occur or have just occurred. Our concept of endowing a robot with human-driven ethical introspection is to integrate it with the robot's self-awareness. Self-awareness implies a critical examination of one's own actions, intending to comprehend the ethical ramifications of those actions. In our approach, we draw inspiration from psychological, philosophical, and neuroscientific studies on ethical introspection, with the work of Sullins serving as a primary reference <ref type="bibr" target="#b8">[9]</ref>[10] <ref type="bibr" target="#b10">[11]</ref>.</p><p>Sullins proposed a theoretical framework based on the concept of 'artificial phronesis,' which pertains to human practical reasoning abilities and the virtue of rational thinking. 'Phronesis' represents philosophical wisdom in practical ethics and serves as the foundation for constructing machines capable of emulating human moral and practical reasoning.</p><p>Sullins asserts that ethical decision-making cannot always be reduced to a simple 'if situation (x), then action (y)' formula. Ethical decisions are predicated on the agent's habits, practices, and a comprehensive analysis of all elements of an ethical problem and potential consequences of actions. The framework proposed by Sullins involves two interacting agents: one for problem classification and one for case analysis. The former examines the situation to extract critical features, such as context and urgency. The latter collaborates in searching for analogous ethical cases within specific repositories and online sources to enhance the analysis.</p><p>Our objective is to fuse computational models of introspection, self-awareness, justification, and anticipation with mechanisms for ethical reasoning. 
In Figure <ref type="figure" target="#fig_0">1</ref>, we present an integration of Sullins' model with our work spanning several years. In structuring the decision-making process, we also take internal states into account. To this end, we have conducted experiments and opted to employ BDI (Belief-Desire-Intention) agent technology <ref type="bibr" target="#b11">[12]</ref>[13], utilizing the JaCaMo implementation framework <ref type="bibr" target="#b13">[14]</ref> <ref type="bibr" target="#b14">[15]</ref>. BDI agents aptly embody practical thinking, encompassing the process of agents determining how to translate their intentions into actionable steps within their environment.</p><p>Practical reasoning encompasses actions such as planning, resource allocation, and action sequence management, all of which consider agent constraints, environmental conditions, and potential consequences. This aligns well with the underlying concept of 'artificial phronesis.' Ethical reasoning can involve multiple agents, including humans. In Figure <ref type="figure" target="#fig_0">1</ref>, we depict three artificial agents: the problem classification agent (PC), the case analysis agent (CA), and the planner agent (P). Upon encountering an ethical quandary, the first two agents engage in continuous interaction at the outset of the reasoning process. The initial step involves inspecting the environment and situation to formulate the problem and context. The CA leverages this context to seek analogous situations, potentially engaging with humans. A novel scenario is then created and analyzed, potentially prompting a reevaluation of the problem. Eventually, the CA and PC concur on the context and contact the planner, who selects a plan and executes actions, which may also be simulated. These actions impact the environment, initiating a new situation and reinitiating the cycle. 
Actions receive positive or negative feedback, signifying their contribution to or deviation from the desired goal. This cycle embodies a sensing-decision-action process, with key points focused on ethical introspection through self-disclosure.</p><p>Figure <ref type="figure" target="#fig_1">2</ref> illustrates the reasoning cycle of each agent of Figure <ref type="figure" target="#fig_0">1</ref>, highlighting the points at which we introduce extensions in order to: (i) generate a self-model, (ii) explain actions, and (iii) facilitate ethical introspection. Our goal is to adapt the reasoning cycle of the agents featured in Figure <ref type="figure" target="#fig_0">1</ref>, as depicted in Figure <ref type="figure" target="#fig_1">2</ref>. To create an agent with ethical introspection capabilities, we prioritize the need for explanations, even before the agent constructs a self-model.</p><p>From a technical standpoint, actions can be explained as a function of the pair &lt;belief, capabilities&gt;. The agent decomposes the plan for mission execution into actions, closely associated with its knowledge of those actions and its ability to execute them. Consequently, the agent maintains a self-model and justifies action outcomes. We propose that, for each action, a rehearsal function can be defined, closely tied to the self-model. The rehearsal function facilitates this self-examination: the practice of rehearsing has been implemented in previous works through the mechanism of speech acts <ref type="bibr" target="#b15">[16]</ref>. A speech act <ref type="bibr">[17][18]</ref> embodies the communicative action of an agent expressing its own actions, thus enabling self-revelation capabilities within an agent or an agent system. Its effect on the reasoning process is indirect: speech acts change the agents' knowledge, and this change, combined with agent technology, allows the agent to autonomously produce self-disclosure and hence explainability. </p>
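The explanation mechanism sketched in this section, in which an action is explained through the (belief, capability) pair and an "inform" speech act updates the listener's knowledge, can be illustrated in a minimal form. This is a hypothetical sketch, not the authors' Jason/JaCaMo implementation: the class, attribute names, and scenario are all assumptions.

```python
# Illustrative sketch: an agent whose actions are explained as a function of
# the (belief, capability) pair, and whose "inform" speech acts indirectly
# change a listener's knowledge, yielding self-disclosure.

class IntrospectiveAgent:
    def __init__(self, name):
        self.name = name
        self.beliefs = {}        # what the agent currently holds true
        self.capabilities = {}   # action name mapped to whether the agent can do it

    def explain(self, action):
        """Explain an action through the (belief, capability) pair."""
        belief = self.beliefs.get(action, "no relevant belief")
        able = self.capabilities.get(action, False)
        return (f"I chose '{action}' because I believe '{belief}' "
                f"and I {'can' if able else 'cannot'} perform it.")

    def inform(self, listener, action):
        """Speech act: disclosing an action updates the listener's beliefs."""
        utterance = self.explain(action)
        listener.beliefs[f"{self.name}_said"] = utterance  # indirect knowledge change
        return utterance

# Toy scenario: an assistive robot discloses an action to its human teammate.
robot = IntrospectiveAgent("robot")
human = IntrospectiveAgent("human")
robot.beliefs["fetch_medicine"] = "the patient asked for it and consented"
robot.capabilities["fetch_medicine"] = True
said = robot.inform(human, "fetch_medicine")
```

The point of the sketch is the indirection: the speech act does not alter the listener's reasoning procedure, only its beliefs, which is the channel through which self-disclosure produces explainability.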
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Conclusion</head><p>This paper presents our ongoing research focused on the implementation of an ethical introspection model within robots endowed with self-awareness. Our research endeavors encompass computational models of introspection, self-awareness, justification, and anticipation, integrated with processes conducive to ethical deliberation. The principal objective underlying this research has been the facilitation of ethics within the decision-making framework of artificial agents. Initial progress was achieved through the integration of Sullins' model with our prior advancements, utilizing a Belief-Desire-Intention (BDI) agent-based paradigm. This model inherently captures the agent's practical reasoning process, placing significant emphasis on aspects such as planning, resource allocation, and the management of action sequences.</p><p>Furthermore, we have introduced an extended facet to the model, integrating ethical practice through the medium of speech acts. This augmentation empowers the agent to articulate and elucidate its actions. This communicative process catalyzes ethical evaluation and self-awareness, with the overarching aim of augmenting the decision-making capabilities and ethical conduct of these agents.</p><p>In the future, our research trajectory will involve further refinement of the model, drawing inspiration from in-depth interdisciplinary studies spanning the realms of psychology, philosophy, and neuroscience, all of which delve into the domains of ethics and introspection.</p><p>Collectively, our work lays the foundation for an in-depth exploration of ethics and introspection within the domain of autonomous robots, holding substantial potential for application in diverse domains, including healthcare and assistive robotics, among others. 
The incorporation of ethical models into robots represents a pivotal stride towards the responsible utilization of technology in complex and dynamic contexts, thereby enhancing the synergy between machines and human agents.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: The reasoning process of a system equipped with Artificial Phronesis</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Towards the implementation of self-disclosure and ethical introspection. The reasoning cycle of each agent and the points in which we add functions at the interpreter level (low implementation level) for realizing self-modeling, explainability and introspection.</figDesc><graphic coords="4,130.96,412.05,333.34,121.50" type="bitmap" /></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>International Exchanges 2022 Cost Share (Italy only) IEC\R2\222031 -Joint Research Program on Assistive Robots using Theory of Mind</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Human-robot teaming: Concepts and components for design</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">Mingyue</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Fong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">J</forename><surname>Micire</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">K</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Feigh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Field and Service Robotics: Results of the 11th International Conference</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="649" to="663" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<author>
			<persName><forename type="first">T</forename><surname>Chakraborti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kambhampati</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Scheutz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhang</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1707.04775</idno>
		<title level="m">AI challenges in human-robot cognitive teaming</title>
				<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">A cognitive architecture for human-robot teaming interaction</title>
		<author>
			<persName><forename type="first">A</forename><surname>Chella</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Lanza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Seidita</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 6th International Workshop on Artificial Intelligence and Cognition</title>
				<meeting>the 6th International Workshop on Artificial Intelligence and Cognition<address><addrLine>Palermo</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Endowing robots with self-modeling abilities for trustful human-robot interactions</title>
		<author>
			<persName><forename type="first">C</forename><surname>Castelfranchi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Chella</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Falcone</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Lanza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Seidita</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CEUR WORKSHOP PROCEEDINGS</title>
				<imprint>
			<publisher>CEUR-WS</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="volume">2404</biblScope>
			<biblScope unit="page" from="22" to="28" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Decision Process in Human-Agent Interaction: Extending Jason Reasoning Cycle</title>
		<author>
			<persName><forename type="first">A</forename><surname>Chella</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Lanza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Seidita</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-25693-7_17</idno>
	</analytic>
	<monogr>
		<title level="s">Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)</title>
		<imprint>
			<biblScope unit="volume">11375</biblScope>
			<biblScope unit="page" from="320" to="339" />
			<date type="published" when="2019">2019</date>
			<publisher>Springer</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Inside the robot&apos;s mind during human-robot interaction</title>
		<author>
			<persName><forename type="first">V</forename><surname>Seidita</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Diliberto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Zanardi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Chella</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Lanza</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">7th International Workshop on Artificial Intelligence and Cognition, AIC 2019</title>
				<imprint>
			<publisher>CEUR-WS</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="volume">2483</biblScope>
			<biblScope unit="page" from="54" to="67" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Robot&apos;s inner speech effects on human trust and anthropomorphism</title>
		<author>
			<persName><forename type="first">A</forename><surname>Pipitone</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Geraci</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>D&apos;Amico</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Seidita</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Chella</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Social Robotics</title>
		<imprint>
			<biblScope unit="page" from="1" to="13" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Toward virtuous machines: When ethics meets robotics</title>
		<author>
			<persName><forename type="first">A</forename><surname>Chella</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Pipitone</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Lanza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Seidita</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Ethics in Research: Principles and Practical Considerations</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="81" to="91" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">The role of consciousness and artificial phronēsis in AI ethical reasoning</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">P</forename><surname>Sullins</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">AAAI spring symposium: towards conscious AI systems</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">P</forename><surname>Sullins</surname></persName>
		</author>
		<title level="m">Artificial phronesis, Science, Technology, and Virtues: Contemporary Perspectives</title>
				<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page">136</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Competent moral reasoning in robot applications: Inner dialog as a step towards artificial phronesis</title>
		<author>
			<persName><forename type="first">A</forename><surname>Chella</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Pipitone</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">P</forename><surname>Sullins</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Trolley Crash: Approaching Key Metrics for Ethical AI Practitioners, Researchers, and Policy Makers</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">BDI agents: from theory to practice</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">S</forename><surname>Rao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">P</forename><surname>Georgeff</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ICMAS</title>
		<imprint>
			<biblScope unit="volume">95</biblScope>
			<biblScope unit="page" from="312" to="319" />
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">BDI agent architectures: A survey</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">De</forename><surname>Silva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">R</forename><surname>Meneguzzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Logan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 29th International Joint Conference on Artificial Intelligence (IJCAI)</title>
				<meeting>the 29th International Joint Conference on Artificial Intelligence (IJCAI)<address><addrLine>Japan</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Multi-agent oriented programming with JaCaMo</title>
		<author>
			<persName><forename type="first">O</forename><surname>Boissier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">H</forename><surname>Bordini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">F</forename><surname>Hübner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ricci</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Santi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Science of Computer Programming</title>
		<imprint>
			<biblScope unit="volume">78</biblScope>
			<biblScope unit="page" from="747" to="761" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<title level="m" type="main">Programming multi-agent systems in AgentSpeak using Jason</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">H</forename><surname>Bordini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">F</forename><surname>Hübner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Wooldridge</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2007">2007</date>
			<publisher>John Wiley &amp; Sons</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Agent talks about itself: an implementation using Jason, CArtAgO and speech acts</title>
		<author>
			<persName><forename type="first">V</forename><surname>Seidita</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M P</forename><surname>Sabella</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Lanza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Chella</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Intelligenza Artificiale</title>
		<imprint>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="page" from="7" to="18" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">Speech acts: An essay in the philosophy of language</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">R</forename><surname>Searle</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1969">1969</date>
			<publisher>Cambridge university press</publisher>
			<biblScope unit="volume">626</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">L</forename><surname>Austin</surname></persName>
		</author>
		<title level="m">How to do things with words</title>
				<imprint>
			<publisher>Oxford university press</publisher>
			<date type="published" when="1975">1975</date>
			<biblScope unit="volume">88</biblScope>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
