<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Defining an Adaptable Framework for Behaviour Support Agents in Default Logic</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Johanna</forename><surname>Wolff</surname></persName>
							<email>j.d.wolff@utwente.nl</email>
							<idno type="ORCID">0009-0005-0178-9570</idno>
							<affiliation key="aff0">
								<orgName type="institution">University of Twente</orgName>
								<address>
									<addrLine>Drienerlolaan 5</addrLine>
									<postCode>7522 NB</postCode>
									<settlement>Enschede</settlement>
									<country key="NL">The Netherlands</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Victor</forename><surname>De Boer</surname></persName>
							<email>v.de.boer@vu.nl</email>
							<idno type="ORCID">0000-0001-9079-039X</idno>
							<affiliation key="aff1">
								<orgName type="institution">Vrije Universiteit Amsterdam</orgName>
								<address>
									<addrLine>De Boelelaan 1105</addrLine>
									<postCode>1081 HV</postCode>
									<settlement>Amsterdam</settlement>
									<country key="NL">The Netherlands</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Dirk</forename><surname>Heylen</surname></persName>
							<email>d.k.j.heylen@utwente.nl</email>
							<idno type="ORCID">0000-0003-4288-3334</idno>
							<affiliation key="aff0">
								<orgName type="institution">University of Twente</orgName>
								<address>
									<addrLine>Drienerlolaan 5</addrLine>
									<postCode>7522 NB</postCode>
									<settlement>Enschede</settlement>
									<country key="NL">The Netherlands</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">M</forename><forename type="middle">Birna</forename><surname>Van Riemsdijk</surname></persName>
							<email>m.b.vanriemsdijk@utwente.nl</email>
							<idno type="ORCID">0000-0001-9089-5271</idno>
							<affiliation key="aff0">
								<orgName type="institution">University of Twente</orgName>
								<address>
									<addrLine>Drienerlolaan 5</addrLine>
									<postCode>7522 NB</postCode>
									<settlement>Enschede</settlement>
									<country key="NL">The Netherlands</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Defining an Adaptable Framework for Behaviour Support Agents in Default Logic</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">C8D7C1B479AAD32DD6D3E004309F5D7D</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T19:52+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Default Logic</term>
					<term>Belief Revision</term>
					<term>Behaviour Support Agent</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In order to provide personalised advice, behaviour support agents need to consider the user's needs and preferences. This user model should be easily adaptable as the user's requirements will change during the long-term use of the agent. We propose a formal framework for such an agent in which the knowledge and the beliefs of the agent are represented explicitly and can be updated directly. Our framework is based on ordered default logic as defeasible reasoning allows the agent to infer additional information based on possibly incomplete knowledge about the world and the user. We also define updates on each component of the agent's framework and demonstrate how these updates can be used to resolve potential misalignments between the agent and the user. Throughout the paper we illustrate our work using a simplified example of a behaviour support agent intended to assist the user in finding a suitable form of exercise.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The rise of artificial assistants has led to an increased interest in behaviour change support agents <ref type="bibr" target="#b0">[1]</ref>, which can support the user in establishing new routines and finding ways to achieve their goals consistently. In order for these agents to support each user as effectively as possible, they need to model the user's desires, needs and preferences as accurately as possible <ref type="bibr" target="#b1">[2]</ref>. Since the agent should offer support over longer periods of time, it is likely that both the user and the surrounding context will go through changes throughout the agent's use <ref type="bibr" target="#b2">[3]</ref>. Based on the emerging design principles of hybrid intelligence <ref type="bibr" target="#b3">[4]</ref>, we propose that the agent and the user should be able to collaborate in order to identify and implement the updates that are necessary to adapt the agent over time. This means that the user is in control of the agent's knowledge and beliefs <ref type="bibr" target="#b4">[5]</ref>, but the agent should be able to assist the user in determining how each change can be realised and in explaining the effects that this will have.</p><p>While data-driven approaches, such as machine learning, can be used to create a detailed and accurate user model <ref type="bibr" target="#b5">[6]</ref>, these models can be hard to adapt when the user's needs change <ref type="bibr" target="#b6">[7]</ref>. The concepts captured in these user models are often not explicitly represented, which in turn means that they cannot be updated directly. This also makes it difficult for the user to understand exactly how their changes will affect the agent's output <ref type="bibr" target="#b7">[8]</ref>. 
By using knowledge-driven methods, we can formalise changes to the user model within the framework, similar, for example, to the work in <ref type="bibr" target="#b6">[7]</ref>. In particular, we use default logic to model an agent with both dynamic knowledge and beliefs.</p><p>In this paper, we introduce a formal framework for a behaviour support agent which includes a model of the world and the user (Section 3.1). We use this framework to represent the agent's knowledge and beliefs explicitly within a default theory (Section 3.2). We use the defeasible nature of default logic to express beliefs about both the user and the world, which allows the agent to reason with incomplete knowledge and provide advice based on this. In order to make changes to the agent's knowledge and beliefs possible, we define updates to our formal framework (Section 4). These updates are based on existing work on belief revision updates for default logic <ref type="bibr" target="#b8">[9]</ref>. We then compare this to previous work on user-agent misalignment <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b10">11]</ref> and showcase how the formal updates can be used to resolve potential misalignment scenarios (Section 5). Throughout the paper we illustrate the framework using a simple running example of a support agent intended to assist the user in finding a suitable exercise based on their needs. We discuss the relevant background in Section 2, including the ordered default logic which forms the basis of our framework in Section 3.2. We also present the belief revision operators that we will be using in Section 4.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Ordered Default Logic</head><p>Default logic was first introduced in <ref type="bibr" target="#b11">[12]</ref> to formalise inference rules which are usually true but allow for exceptions. This is done using default rules 𝛿 of the form Prerequisite ∶ Justification / Consequent.</p><p>Such a rule states that if the prerequisite is proven and it is consistent to assume the justification, then the consequent is inferred.</p><p>In the work of <ref type="bibr" target="#b12">[13]</ref> there is additionally a strict partial ordering 𝛿 1 &lt; 𝛿 2 on these default rules which expresses that 𝛿 1 should only be applied if 𝛿 2 has already been applied or is inapplicable. This results in an ordered default theory of the form (𝐾 , 𝐷, &lt;) in which 𝐾 is a set of sentences, 𝐷 is a set of default rules and &lt; is an ordering on the default rules in 𝐷. Intuitively, we understand the sentences in 𝐾 to describe our, possibly incomplete, knowledge of the world while the default rules in 𝐷 allow us to derive additional information based on our beliefs. The ordering &lt; may be used to express either preferences or priorities between these beliefs. A theory of this ordered default logic can be translated into standard default logic, allowing for an implementation in theorem provers for standard default logic <ref type="bibr" target="#b12">[13]</ref>.</p><p>When working with default theories, we are interested in the complete views of the world that are consistent with the initial theory, which we refer to as extensions. For an ordered default theory 𝑇 = (𝐾 , 𝐷, &lt;) and any set of sentences 𝑆, we define Γ(𝑆) to be the smallest set satisfying the following properties:</p><formula xml:id="formula_0">1. 𝐾 ⊆ Γ(𝑆) 2. 𝑇 ℎ(Γ(𝑆)) = Γ(𝑆) 3. for all default rules 𝛼 ∶ 𝛽 / 𝛾 ∈ 𝐷, if 𝛼 ∈ Γ(𝑆) and ¬𝛽 ∉ 𝑆 then 𝛾 ∈ Γ(𝑆)</formula><p>Here 𝑇 ℎ(Γ(𝑆)) stands for the deductive closure of Γ(𝑆). 
We call a set of sentences 𝐸 an extension of the theory 𝑇 if 𝐸 = Γ(𝐸). In the following, we will discuss only the consistent extensions of a theory. If we restrict ourselves to normal default rules, in which the justification and the consequent are the same, then a consistent extension is guaranteed to exist <ref type="bibr" target="#b11">[12]</ref>. We will therefore only consider default rules of this form. Definition 1. We define ℰ (𝑇 ) to be the set of all consistent extensions of the default theory 𝑇 = (𝐾 , 𝐷, &lt;).</p><p>The consistent extensions we have defined above do not yet take the ordering &lt; into account. To include this we use the notion of &lt;-preserving extensions from <ref type="bibr" target="#b12">[13]</ref>.</p><p>Definition 2. We define 𝑃𝑟𝑒𝑟𝑒𝑞(Δ), 𝐽 𝑢𝑠𝑡𝑖𝑓 (Δ) and 𝐶𝑜𝑛𝑠𝑒𝑞(Δ) to be the sets of prerequisites, justifications and consequents of the default rules 𝛿 in Δ. We take 𝐺𝐷(𝐷, 𝐸) to be the set of default rules which generate the extension 𝐸 and a grounded enumeration (𝛿 𝑖 ) 𝑖∈𝐼 of 𝐺𝐷(𝐷, 𝐸) to be an order in which these rules can be applied.</p><p>For a theory 𝑇 = (𝐾 , 𝐷, &lt;), an extension 𝐸 ∈ ℰ (𝑇 ) is &lt;-preserving if there is a grounded enumeration (𝛿 𝑖 ) 𝑖∈𝐼 of 𝐺𝐷(𝐷, 𝐸) so that for all 𝑖, 𝑗 ∈ 𝐼 and 𝛿 ∈ 𝐷 ∖ 𝐺𝐷(𝐷, 𝐸) it holds that</p><formula xml:id="formula_1">1. if 𝛿 𝑖 &lt; 𝛿 𝑗 then 𝑗 &lt; 𝑖 and 2. if 𝛿 𝑖 &lt; 𝛿 then 𝑃𝑟𝑒𝑟𝑒𝑞(𝛿) ∉ 𝐸 or 𝐾 ∪ 𝐶𝑜𝑛𝑠𝑒𝑞({𝛿 0 , … , 𝛿 𝑖−1 }) ⊢ ¬𝐽 𝑢𝑠𝑡𝑖𝑓 (𝛿).</formula><p>Even if we know that ℰ (𝑇 ) is not empty, this does not ensure that a &lt;-preserving extension of 𝑇 = (𝐾 , 𝐷, &lt;) exists. Intuitively, this is because lower ranked default rules may have a consequent which can be used to infer the prerequisite of otherwise inapplicable, higher ranked default rules. This means that a higher ranked rule may be applied after the application of a lower ranked rule. 
As a result, the grounded enumeration of 𝐺𝐷(𝐷, 𝐸) will not satisfy the first condition from Definition 2.</p><p>In <ref type="bibr" target="#b13">[14]</ref> these inference relationships between the default rules of a theory are formalised using the dependency graph of the theory. The dependency graph 𝒢 (𝐷, 𝐾 ) captures whether default rules influence the applicability of other default rules, either positively by inferring the prerequisite or negatively by inferring the negation of the justification. We take 𝒢 (𝐷, 𝐾 ) to be the set of directed edges between the default rules in 𝐷.</p><p>In <ref type="bibr" target="#b12">[13]</ref> this is used to specify conditions under which an ordered default theory has a &lt;-preserving extension. For this, a default theory is considered even if all cycles of the dependency graph have an even number of negative relations. Intuitively this means that the application of a default rule does not negatively influence its own applicability. The ordering &lt; specifies that a lower ranked rule is only applicable after all higher ranked rules have been applied. This means that for each relation (𝛿 &lt; 𝛿 ′ ), we want to ensure that 𝛿 does not affect the applicability of 𝛿 ′ . Proposition 1. As proven in <ref type="bibr" target="#b12">[13]</ref>, an ordered default theory 𝑇 = (𝐾 , 𝐷, &lt;) is guaranteed to have a &lt;-preserving extension if the dependency graph 𝒢 (𝐷, 𝐾 ) 1. is even and 2. including the ordering &lt; does not create new cycles, so for all cycles</p><formula xml:id="formula_2">𝒞 of 𝒢 (𝐷, 𝐾 ) ∪ {(𝛿 ′ , 𝛿) | 𝛿 &lt; 𝛿 ′ }, 𝒞 is a cycle of 𝒢 (𝐷, 𝐾 ).</formula><p>Since the ordering &lt; is not necessarily total, it is possible that there are multiple &lt;-preserving extensions.</p></div>
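To make the notion of extensions concrete, the following is a small illustrative sketch (our own, not part of the formalism above) that enumerates the consistent extensions of a normal default theory restricted to propositional literals, where consistency checking reduces to the absence of complementary literals. Since every exhaustive application order of normal defaults yields an extension, the sketch explores all orders and collects the distinct results; the function names and the string encoding of literals (`~` for negation) are our own choices.

```python
def neg(lit):
    """Negate a propositional literal given as a string."""
    return lit[1:] if lit.startswith('~') else '~' + lit

def extensions(K, D):
    """All consistent extensions of a normal default theory (K, D),
    restricted to literals: K is a set of literals and each default in D
    is a pair (prerequisites, consequent) standing for the normal rule
    prerequisites : consequent / consequent."""
    results = set()

    def expand(S, remaining):
        fired = False
        for rule in remaining:
            prereqs, con = rule
            # applicable: prerequisites derived, negated consequent absent
            if set(prereqs) <= S and neg(con) not in S:
                fired = True
                expand(S | {con}, remaining - {rule})
        if not fired:  # no rule applicable any more: S is an extension
            results.add(frozenset(S))

    expand(set(K), frozenset(D))
    return results

# the classic example: a penguin is a bird; birds fly by default,
# penguins do not, so the theory has two extensions
K = {'bird', 'penguin'}
D = {(('bird',), 'flies'), (('penguin',), '~flies')}
exts = extensions(K, D)
```

Running this yields exactly the two mutually inconsistent views of the world, mirroring the fact that a default theory can have several extensions until an ordering singles one out.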
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Belief Revision</head><p>The field of belief revision is concerned with formalising changes to knowledge and belief bases. Since the knowledge and beliefs of a behaviour support agent are subject to change over time, we want to use update operations from belief revision to reflect this.</p><p>In general, belief revision is used to update a set of sentences 𝑆. We will be working with theory base revision operators <ref type="bibr" target="#b14">[15]</ref>, which do not require 𝑆 to be deductively closed, as opposed to AGM operators <ref type="bibr" target="#b15">[16]</ref>. Specifically, we will use the operator 𝑆 * 𝜑 to add a sentence 𝜑 to 𝑆 while ensuring the resulting set remains consistent and the operator 𝑆 ÷ 𝜑 to remove sentences from 𝑆 until 𝜑 can no longer be inferred.</p><p>There is a range of work specifically concerned with integrating belief revision methods into default logic, such as <ref type="bibr" target="#b16">[17,</ref><ref type="bibr" target="#b17">18,</ref><ref type="bibr" target="#b18">19]</ref>. In our work we will use the operators defined in <ref type="bibr" target="#b8">[9]</ref>, which include updates to the knowledge base and the default rules of a default theory.</p><p>If we use the theory base revision operators 𝐾 * 𝜑 and 𝐾 ÷ 𝜑 on the knowledge base 𝐾 of a default theory 𝑇, <ref type="bibr" target="#b8">[9]</ref> shows that this can be used to either add 𝜑 to all extensions of 𝑇 or to remove 𝜑 from 𝑇 ℎ(𝐾 ).</p><p>Additionally, <ref type="bibr" target="#b8">[9]</ref> introduces updates on the set of default rules 𝐷. We use 𝐷 ÷ 𝛿 = 𝐷 ∖ {𝛿} as an operator which removes the default rule 𝛿 from 𝐷 and 𝐷 * 𝛿 = 𝐷 ∪ {𝛿} which adds a default rule 𝛿 to 𝐷.</p></div>
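The two base operators can be illustrated with a minimal sketch for the special case where the base 𝑆 contains only propositional literals; in this restricted setting both revision and contraction reduce to simple set operations, since a literal is inferable from a literal base exactly when it is a member. The names `revise` and `contract` are our own, not operators defined in the cited works.

```python
def neg(lit):
    """Negate a propositional literal given as a string."""
    return lit[1:] if lit.startswith('~') else '~' + lit

def revise(S, phi):
    """S * phi for literal bases: add phi and drop whatever
    contradicts it, so the result stays consistent."""
    return (S - {neg(phi)}) | {phi}

def contract(S, phi):
    """S ÷ phi for literal bases: remove sentences until phi can no
    longer be inferred (here: simply drop phi itself)."""
    return S - {phi}

base = {'p', '~q'}
revised = revise(base, 'q')       # 'q' added, the conflicting '~q' removed
contracted = contract(base, 'p')  # 'p' no longer inferable
```

For full propositional bases the operators additionally have to search for maximal consistent subsets, which this sketch deliberately omits.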
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Behaviour Support Agent</head><p>In the following section we introduce our framework that can be used to formalise a behaviour support agent. The agent will be able to select a suitable goal for the user to pursue based on the context that the user is currently in. The agent will then recommend actions which result in this goal being achieved, based on the user's preferences.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Syntax</head><p>We define a support agent for a set of possible actions 𝐴 that the agent can recommend, the set of goals 𝐺 the user may have and a set of contexts 𝐶 that may affect the user's goals and actions. Definition 3 (Atoms). We define the following sets of propositional atoms:</p><formula xml:id="formula_3">• 𝐴 = {𝑎 1 , … , 𝑎 𝑛 } describing the possible actions, • 𝐺 = {𝑔 1 , … , 𝑔 𝑚 } describing the goals, • 𝐶 = {𝑐 1 , … , 𝑐 𝑙 } describing different contexts and • 𝐴𝑡𝑜𝑚𝑠 = 𝐴 ∪ 𝐺 ∪ 𝐶.</formula><p>Definition 4 (Language). Let 𝑂 = {⊤, ¬, ∧, ∨, →} be a standard set of logical operators. We introduce the following propositional languages, defined over the operators 𝑂 and sets of atoms in the usual way:</p><p>• The action language ℒ 𝐴 over 𝑂 and atoms 𝐴 • The goal language ℒ 𝐺 over 𝑂 and atoms 𝐺 • The context language ℒ 𝐶 over 𝑂 and atoms 𝐶 • The agent language ℒ over 𝑂 and atoms 𝐴𝑡𝑜𝑚𝑠 A plan for a goal 𝑔 ∈ 𝐺 is a tuple of the form (𝑔, 𝜑), in which 𝜑 is a formula from ℒ 𝐴 describing the actions that must be taken or avoided to achieve the goal 𝑔. Definition 5. The set of all possible plans 𝐿𝑃 is defined as follows: 𝐿𝑃 = {(𝑔, 𝜑) | 𝑔 ∈ 𝐺, 𝜑 ∈ ℒ 𝐴 , 𝜑 ≢ ⊥}.</p><p>We introduce several types of rules which allow the agent to infer information based on its initial knowledge. Each rule is represented as a tuple (𝜑, 𝜓 ) in which 𝜑 is the prerequisite and 𝜓 is the consequent. These rules will capture a form of defeasible reasoning in which we only infer the consequent if it is consistent with all other information. This means that if 𝜑 is true and nothing suggests otherwise, then 𝜓 is inferred. For all types of rules 𝜑 may be ⊤ to signify that the rule has no prerequisite.</p><p>Context assumption rules are of the form (𝜑, 𝜓 ) with 𝜑, 𝜓 ∈ ℒ 𝐶 describing aspects of the context. 
We can use these rules to make assumptions about the standard context that the user is in or to represent the beliefs the agent has about the relation between different contexts.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 6. The set of all possible context assumption rules is defined as</head><formula xml:id="formula_4">ℛ 𝐶 = {(𝜑, 𝜓 ) | 𝜑, 𝜓 ∈ ℒ 𝐶 }.</formula><p>Goal selection rules are of the form (𝜑, 𝑔) with 𝜑 ∈ ℒ 𝐶 describing the context and 𝑔 ∈ 𝐺 describing the goal that should be achieved in this context. These are used to describe which goal the user should strive for in a certain context, if possible.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 7. The set of all possible goal selection rules is defined as</head><formula xml:id="formula_5">ℛ 𝐺 = {(𝜑, 𝑔) | 𝜑 ∈ ℒ 𝐶 , 𝑔 ∈ 𝐺}.</formula><p>Action selection rules are of the form (𝜑, 𝜓 ) with 𝜑 ∈ ℒ and 𝜓 ∈ ℒ 𝐴 . Here 𝜑 describes the circumstances in which the actions described in 𝜓 may be taken, if they are possible. These circumstances can include certain context factors, the selected goals and other selected actions depending on the application.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 8. The set of action selection rules is defined as</head><formula xml:id="formula_6">ℛ 𝐴 = {(𝜑, 𝜓 ) | 𝜑 ∈ ℒ , 𝜓 ∈ ℒ 𝐴 }.</formula><p>We use ℛ = ℛ 𝐶 ∪ ℛ 𝐺 ∪ ℛ 𝐴 to refer to all rules collectively. In order to be able to reason with these rules, we assign each rule 𝑟 ∈ ℛ a unique name 𝑛(𝑟). For this, we define an injective naming function 𝑛 from the set of all rules ℛ to a set of names 𝑁. We use these names to define an ordering on the rules. For simplicity of notation we will use the name and the rule itself interchangeably.</p><p>We represent the current state of the agent through its configuration, a tuple which specifies the formulas, rules and orderings that the agent reasons with. Definition 9. The configuration of an agent is a tuple Conf = (𝑊 , 𝐶𝐶, 𝑃, 𝐷 𝐶 , 𝐷 𝐺 , 𝐷 𝐴 , &lt; 𝐶 , &lt; 𝐺 , &lt; 𝐴 ) where 𝑊 ⊆ ℒ is the world knowledge, 𝐶𝐶 ⊆ 𝐿(𝐶) is a set of literals describing the current context, 𝑃 ⊆ 𝐿𝑃 is a set of plans, 𝐷 𝐶 ⊆ ℛ 𝐶 is a set of context assumption rules, 𝐷 𝐺 ⊆ ℛ 𝐺 is a set of goal selection rules, 𝐷 𝐴 ⊆ ℛ 𝐴 is a set of action selection rules and &lt; 𝐶 , &lt; 𝐺 , &lt; 𝐴 are acyclic orderings on 𝐷 𝐶 , 𝐷 𝐺 and 𝐷 𝐴 .</p><p>We also specify that for each goal 𝑔 ∈ 𝐺, there is only one plan 𝑝 = (𝑔, 𝜑) ∈ 𝑃. If there are multiple ways to achieve a goal this should be specified through disjunctions in 𝜑, rather than separate plans.</p><p>To illustrate the use of each component of the agent's configuration we introduce a simplified example. We consider an agent which can give the user advice on how to lead a healthier lifestyle based on the user's medical data. For our purposes we assume that the agent should recommend one exercise for the user each day, but if the user's blood pressure is elevated, this should be a higher-intensity workout. 
The agent knows of two types of low-intensity exercises, namely walking and yoga, and two types of higher-intensity exercises, namely going for a run and weight training.</p><p>Example 1. The agent is defined for the context factor 𝐶 = {𝐵𝑃} which indicates that the user's blood pressure is elevated, the set of goals 𝐺 = {𝐿𝐼 , 𝐻 𝐼 } which stand for low-intensity or higher-intensity workouts and the set of actions 𝐴 = {Walk, Yoga, Run, Weights} which are available.</p><p>The world knowledge is given by 𝑊 = {𝜑 1𝑔 , 𝜑 1𝑎 }, in which the formulas 𝜑 1𝑔 , 𝜑 1𝑎 express that at most one goal and one action proposition can be true at the same time and therefore included in the agent's advice. This is needed to ensure that the agent only gives one recommended action each day. The current context 𝐶𝐶 contains the blood pressure information of the user. In this example we will assume that the blood pressure is high, so 𝐶𝐶 = {𝐵𝑃}. The plans corresponding to the goals are 𝑃 = {(𝐿𝐼 , 𝑊 𝑎𝑙𝑘 ∨ 𝑌 𝑜𝑔𝑎), (𝐻 𝐼 , 𝑅𝑢𝑛 ∨ 𝑊 𝑒𝑖𝑔ℎ𝑡𝑠)}.</p><p>We assume that if we have no information suggesting otherwise, the user's blood pressure is normal. Therefore 𝐷 𝐶 = {(⊤, ¬𝐵𝑃)}. The goal of a higher intensity workout should only be selected if 𝐵𝑃 is true, but the goal of a lower intensity workout can be selected in any situation so 𝐷 𝐺 = {(𝐵𝑃, 𝐻 𝐼 ), (⊤, 𝐿𝐼 )}. For the sake of this example we assume that all the considered actions can be done in any context, which gives us the action selection rules 𝐷 𝐴 = {(⊤, 𝑊 𝑎𝑙𝑘), (⊤, 𝑌 𝑜𝑔𝑎), (⊤, 𝑅𝑢𝑛), (⊤, 𝑊 𝑒𝑖𝑔ℎ𝑡𝑠)}.</p><p>Since we only have one context assumption rule, we do not specify any ordering on this type of rule. 
The goal of a higher intensity workout, if applicable, is more important than the lower intensity workout, so we have (⊤, 𝐿𝐼 ) &lt; 𝐺 (𝐵𝑃, 𝐻 𝐼 ). The user has expressed that they prefer Yoga over Walking and Running over Weights, so we specify (⊤, 𝑊 𝑎𝑙𝑘) &lt; 𝐴 (⊤, 𝑌 𝑜𝑔𝑎) and (⊤, 𝑊 𝑒𝑖𝑔ℎ𝑡𝑠) &lt; 𝐴 (⊤, 𝑅𝑢𝑛).</p><p>This results in the agent configuration 𝐸𝑥 = (𝑊 , 𝐶𝐶, 𝑃, 𝐷 𝐶 , 𝐷 𝐺 , 𝐷 𝐴 , &lt; 𝐶 , &lt; 𝐺 , &lt; 𝐴 ).</p></div>
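The configuration 𝐸𝑥 of Example 1 can be written down as plain data. The encoding below (dictionary keys, `'T'` for ⊤, `'~'` for negation, plan disjunctions as strings, placeholder names for the at-most-one formulas) is a hypothetical choice of ours, not a format defined in the paper.

```python
# A plain-data sketch of Example 1's configuration Ex.
Ex = {
    'W':   ['phi_1g', 'phi_1a'],          # at most one goal / one action true
    'CC':  ['BP'],                        # elevated blood pressure observed
    'P':   {'LI': 'Walk | Yoga',          # one plan per goal; alternatives
            'HI': 'Run | Weights'},       # are expressed as disjunctions
    'D_C': [('T', '~BP')],                # assume normal blood pressure
    'D_G': [('BP', 'HI'), ('T', 'LI')],   # goal selection rules
    'D_A': [('T', 'Walk'), ('T', 'Yoga'),
            ('T', 'Run'), ('T', 'Weights')],
    # orderings as pairs (lower-ranked rule, higher-ranked rule)
    '<_C': [],
    '<_G': [(('T', 'LI'), ('BP', 'HI'))],
    '<_A': [(('T', 'Walk'), ('T', 'Yoga')),
            (('T', 'Weights'), ('T', 'Run'))],
}

# sanity check: each goal has exactly one plan, as Definition 9 requires
assert set(Ex['P']) == {'LI', 'HI'}
```

Keeping the configuration as explicit data like this is what makes the update operations of Section 4 straightforward to apply component by component.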
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Determining the Agent's Advice</head><p>For a given configuration Conf of the agent, we define a corresponding theory of ordered default logic 𝑇 = (𝐾 , 𝐷, &lt;). We define the knowledge base 𝐾 based on 𝑊, 𝐶𝐶 and 𝑃, the set of default rules based on 𝐷 𝐶 , 𝐷 𝐺 and 𝐷 𝐴 and the ordering &lt; based on &lt; 𝐶 , &lt; 𝐺 and &lt; 𝐴 . We take the sentences in 𝐾 to describe the agent's, possibly incomplete, knowledge of the world while the default rules in 𝐷 allow the agent to derive additional information based on its beliefs. The ordering &lt; provides a way to prioritise between these beliefs, either based on other beliefs about the world or based on the user's preferences.</p><p>For this we translate every plan 𝑝 ∈ 𝑃 of the form 𝑝 = (𝑔, 𝜑) into a formula 𝑔 → 𝜑 ∈ ℒ. We write 𝑇 𝑟(𝑃) = {𝑔 → 𝜑 | (𝑔, 𝜑) ∈ 𝑃} for the set of all such translated plans. We also translate each rule 𝑟 = (𝜑, 𝜓 ) ∈ 𝐷 𝑖 for 𝑖 ∈ {𝐶, 𝐺, 𝐴} in the agent's configuration to a default rule of the form</p><formula xml:id="formula_7">𝜑 ∶ 𝜓 / 𝜓 (𝑟).</formula><p>We take the transitive closures &lt; + 𝐴 , &lt; + 𝐺 , &lt; + 𝐶 of the orderings to obtain strict partial orderings and define &lt; as the series composition partial order of 𝐷 𝐶 , 𝐷 𝐺 and 𝐷 𝐴 . This means that, in addition to the ordering given in Conf, we also consider all rules regarding the context to be ranked higher than goal and action selection rules and we rank all goal selection rules higher than the action selection rules. We do this to make sure that the agent first considers the context it is in, then selects a goal for the user to pursue and then selects actions based on this. Definition 10. 
We define the ordered default theory 𝐷𝐿(Conf) corresponding to the agent whose configuration is given by Conf = (𝑊 , 𝐶𝐶, 𝑃, 𝐷 𝐶 , 𝐷 𝐺 , 𝐷 𝐴 , &lt; 𝐶 , &lt; 𝐺 , &lt; 𝐴 ) to be 𝐷𝐿(Conf) = (𝐾 , 𝐷, &lt;) where 𝐾 = 𝑊 ∪ 𝐶𝐶 ∪ 𝑇 𝑟(𝑃) is the knowledge base, 𝐷 = {𝜑 ∶ 𝜓 /𝜓 | (𝜑, 𝜓 ) ∈ 𝐷 𝐶 ∪ 𝐷 𝐺 ∪ 𝐷 𝐴 } is the set of default rules that make up the belief base and &lt; is the series partial order of 𝐷 𝐶 , 𝐷 𝐺 and 𝐷 𝐴 .</p><p>Based on our definition of a configuration, it is not yet guaranteed that this ordered default theory has a consistent extension. We define valid configurations to be those which do.</p><formula xml:id="formula_8">Definition 11. A configuration Conf is valid if 𝑊 ∪ 𝐶𝐶 ∪ 𝑇 𝑟(𝑃) is consistent.</formula><p>Proposition 2. For an ordered default theory 𝐷𝐿(Conf) = (𝐾 , 𝐷, &lt;) based on a valid configuration Conf of an agent, the unordered default theory (𝐾 , 𝐷) has at least one consistent extension.</p><p>This follows directly from <ref type="bibr" target="#b11">[12]</ref> as, by definition, 𝐾 is consistent and all default rules in 𝐷 are normal. However, as discussed in Section 2, this does not yet guarantee the existence of a &lt;-preserving extension. For this we define the notion of an effective configuration. Definition 12. An agent configuration is effective if it is valid and the ordered default theory 𝐷𝐿(Conf) fulfils the requirements from Proposition 1.</p><p>We argue that an agent which is defined in an intuitively sensible way will fulfil these conditions. If the dependency graph of the theory 𝐷𝐿(Conf) is not even, or goes against the ordering &lt;, then this signifies an implicit inconsistency in the reasoning formalised in the agent. However, these conditions are difficult to formalise for the configuration of the agent, as they require us to consider the default theory. 
In future work we hope to determine clear requirements for agent configurations which guarantee the existence of a &lt;-preserving extension.</p><p>We use the &lt;-preserving extensions of the default theory 𝐷𝐿(Conf) based on the agent's configuration to determine the advice that the agent should give the user. If there are multiple suitable extensions of 𝐷𝐿(Conf) then the agent requires a way to choose one of these extensions. This requires a meta-logic above the default logic that we have specified, so we will simply assume that such a selection can be made. Definition 13. For an agent with the effective configuration Conf = (𝑊 , 𝐶𝐶, 𝑃, 𝐷 𝐶 , 𝐷 𝐺 , 𝐷 𝐴 , &lt; 𝐶 , &lt; 𝐺 , &lt; 𝐴 ) which is translated into the ordered default theory 𝐷𝐿(Conf) with the &lt;-preserving extension 𝐸, the agent's advice consists of the set of selected goals 𝒜 𝐺 = 𝐺 ∩ 𝐸 and the set of recommended actions 𝒜 𝐴 = 𝐿(𝐴) ∩ 𝐸.</p><p>We showcase how the advice is obtained from the configuration of an agent by going through the configuration from Example 1.</p><p>Example 2. The configuration 𝐸𝑥 as defined in Example 1 is translated into the ordered default theory 𝑇 = (𝐾 , 𝐷, &lt;) with</p><formula xml:id="formula_9">• 𝐾 = {𝜑 1𝑔 , 𝜑 1𝑎 , 𝐵𝑃, 𝐻 𝐼 → (𝑅𝑢𝑛 ∨ 𝑊 𝑒𝑖𝑔ℎ𝑡𝑠), 𝐿𝐼 → (𝑊 𝑎𝑙𝑘 ∨ 𝑌 𝑜𝑔𝑎)}, • 𝐷 = { ⊤ ∶ ¬𝐵𝑃 / ¬𝐵𝑃 (𝛿 1 ), 𝐵𝑃 ∶ 𝐻 𝐼 / 𝐻 𝐼 (𝛿 2 ), ⊤ ∶ 𝐿𝐼 / 𝐿𝐼 (𝛿 3 ), ⊤ ∶ 𝑊 𝑎𝑙𝑘 / 𝑊 𝑎𝑙𝑘 (𝛿 4 ), ⊤ ∶ 𝑌 𝑜𝑔𝑎 / 𝑌 𝑜𝑔𝑎 (𝛿 5 ), ⊤ ∶ 𝑅𝑢𝑛 / 𝑅𝑢𝑛 (𝛿 6 ), ⊤ ∶ 𝑊 𝑒𝑖𝑔ℎ𝑡𝑠 / 𝑊 𝑒𝑖𝑔ℎ𝑡𝑠 (𝛿 7 ) } and • &lt; = 𝐷 𝐶 ; 𝐷 𝐺 ; 𝐷 𝐴 = {(𝛿 𝑖 , 𝛿 1 ) | 𝑖 = 2, … , 7} ∪ {(𝛿 𝑖 , 𝛿 𝑗 ) | 𝑖 = 4, … , 7; 𝑗 = 2, 3} ∪ {(𝛿 3 , 𝛿 2 )} ∪ {(𝛿 4 , 𝛿 5 ), (𝛿 7 , 𝛿 6 )}</formula><p>The default theory (𝐾 , 𝐷) has four possible extensions. We write only the relevant parts of each extension. These are 𝐸 1 = {𝐻 𝐼 , 𝑅𝑢𝑛}, 𝐸 2 = {𝐻 𝐼 , 𝑊 𝑒𝑖𝑔ℎ𝑡𝑠}, 𝐸 3 = {𝐿𝐼 , 𝑌 𝑜𝑔𝑎} and 𝐸 4 = {𝐿𝐼 , 𝑊 𝑎𝑙𝑘}. However, only 𝐸 1 is &lt;-preserving. Therefore the agent's advice consists of the selected goal 𝐻 𝐼 and the recommended action 𝑅𝑢𝑛.</p></div>
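The reasoning of Example 2 can be reproduced with a brute-force sketch: formulas are encoded as Python boolean expressions over the seven atoms, consistency and entailment are checked by truth-table enumeration, and the normal defaults are applied greedily from highest to lowest rank. For this particular theory the greedy pass yields the unique &lt;-preserving extension 𝐸 1; note this is an illustration of the example under our own encoding, not a general decision procedure for ordered default logic.

```python
from itertools import product

ATOMS = ['BP', 'LI', 'HI', 'Walk', 'Yoga', 'Run', 'Weights']

def holds(phi, m):
    # formulas are written as Python boolean expressions over the atoms
    return eval(phi, {}, m)

# knowledge base K of Example 2: current context, constraints, plans
K = [
    'BP',                                     # current context CC
    'not (LI and HI)',                        # phi_1g: at most one goal
    'sum([Walk, Yoga, Run, Weights]) <= 1',   # phi_1a: at most one action
    '(not HI) or Run or Weights',             # Tr(plan for HI)
    '(not LI) or Walk or Yoga',               # Tr(plan for LI)
]

def models(sentences):
    for vals in product([False, True], repeat=len(ATOMS)):
        m = dict(zip(ATOMS, vals))
        if all(holds(s, m) for s in sentences):
            yield m

def consistent(sentences):
    return any(True for _ in models(sentences))

def entails(sentences, phi):
    return all(holds(phi, m) for m in models(sentences))

# normal defaults (prerequisite, consequent), listed from highest to
# lowest rank in the series order <: context, then goals, then actions
defaults = [
    ('True', 'not BP'),  # delta_1
    ('BP', 'HI'),        # delta_2
    ('True', 'LI'),      # delta_3
    ('True', 'Yoga'),    # delta_5, preferred over Walk
    ('True', 'Walk'),    # delta_4
    ('True', 'Run'),     # delta_6, preferred over Weights
    ('True', 'Weights'), # delta_7
]

def preferred_extension():
    """Apply defaults greedily from highest to lowest rank; for this
    theory the result coincides with the <-preserving extension E1."""
    S = list(K)
    progress = True
    while progress:
        progress = False
        for pre, con in defaults:
            if con not in S and entails(S, pre) and consistent(S + [con]):
                S.append(con)
                progress = True
                break  # restart from the top-ranked rule
    return S

E = preferred_extension()
goals = [g for g in ('LI', 'HI') if entails(E, g)]
actions = [a for a in ATOMS[3:] if entails(E, a)]
```

Tracing it by hand: 𝛿 1 is blocked because 𝐵𝑃 ∈ 𝐾, 𝛿 2 adds 𝐻 𝐼, 𝛿 3 and the low-intensity actions are then blocked by the at-most-one constraints together with the plan for 𝐻 𝐼, and 𝛿 6 adds 𝑅𝑢𝑛, so the advice is the goal 𝐻 𝐼 with the recommended action 𝑅𝑢𝑛.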
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Agent Updates</head><p>In the previous section we defined the configuration of a behaviour support agent and detailed how this determines the advice that the agent gives. In practice, the knowledge and beliefs of the agent change over time, so we need to be able to adapt the configuration of the agent. In this section, we define update operations on the agent's configuration which allow us to add or remove information from each component individually.</p><p>For each of these components, we want the updates to be defined in such a way that the knowledge base 𝑊 ∪ 𝐶𝐶 ∪ 𝑇 𝑟(𝑃) remains consistent. This is necessary to ensure that we obtain a valid configuration as the result of the update. Unfortunately, we cannot always guarantee that the new configuration will also be effective due to the requirements from Definition 12. We will formally define the updates and also highlight such possible problems.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Updates to the Knowledge Base</head><p>The knowledge base of the agent is made up of the world knowledge 𝑊, the current context information 𝐶𝐶 and the set of plans 𝑃. We want to be able to update these parts individually, but as explained above we have to consider them all to ensure the updates yield a valid configuration.</p><p>We can update the world knowledge 𝑊 of the agent by adding a sentence 𝜑 ∈ ℒ using the following update.</p><p>Definition 14. For a configuration Conf = (𝑊 , 𝐶𝐶, 𝑃, 𝐷 𝐶 , 𝐷 𝐺 , 𝐷 𝐴 , &lt; 𝐶 , &lt; 𝐺 , &lt; 𝐴 ) and a formula 𝜑 ∈ ℒ so that {𝜑} ∪ 𝐶𝐶 ∪ 𝑇 𝑟(𝑃) is consistent, we define the update operation Conf * 𝑊 𝜑 = (𝑊 ′ , 𝐶𝐶, 𝑃, 𝐷 𝐶 , 𝐷 𝐺 , 𝐷 𝐴 , &lt; 𝐶 , &lt; 𝐺 , &lt; 𝐴 ) with 𝑊 ′ = (𝑊 * ({𝜑} ∪ 𝐶𝐶 ∪ 𝑇 𝑟(𝑃))) ∖ (𝐶𝐶 ∪ 𝑇 𝑟(𝑃)) to add 𝜑 to 𝑊.</p><p>This means we use the theory base revision operator on 𝑊 and update it with 𝜑 but also with 𝑇 𝑟(𝑃) and 𝐶𝐶. While we remove 𝑇 𝑟(𝑃) and 𝐶𝐶 again afterwards, this approach guarantees that 𝑊 ′ ∪ 𝐶𝐶 ∪ 𝑇 𝑟(𝑃) will be consistent.</p><p>If we want to remove a formula from the world knowledge 𝑊, this is unproblematic for the consistency of the knowledge base.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 15. For a configuration Conf</head><p>= (𝑊 , 𝐶𝐶, 𝑃, 𝐷 𝐶 , 𝐷 𝐺 , 𝐷 𝐴 , &lt; 𝐶 , &lt; 𝐺 , &lt; 𝐴 ) and a formula 𝜑 ∈ ℒ, we define the update operation Conf ÷ 𝑊 𝜑 = ((𝑊 ÷ 𝜑), 𝐶𝐶, 𝑃, 𝐷 𝐶 , 𝐷 𝐺 , 𝐷 𝐴 , &lt; 𝐶 , &lt; 𝐺 , &lt; 𝐴 ) to remove 𝜑 from 𝑊 and its deductive closure 𝑇 ℎ(𝑊 ).</p><p>We note that by defining the operator in this way, it is possible that 𝜑 is still contained in an extension 𝐸 of 𝐷𝐿(Conf ′ ) due to the information in 𝐶𝐶 ∪ 𝑇 𝑟(𝑃) and the rules in 𝐷 𝐶 , 𝐷 𝐺 and 𝐷 𝐴 .</p><p>In order to update the current context 𝐶𝐶 we use the following operators, which are similar to the ones for 𝑊.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 16. For a configuration Conf</head><p>= (𝑊 , 𝐶𝐶, 𝑃, 𝐷 𝐶 , 𝐷 𝐺 , 𝐷 𝐴 , &lt; 𝐶 , &lt; 𝐺 , &lt; 𝐴 ) and context information 𝜑 ∈ 𝐿(𝐶) so that {𝜑} ∪ 𝑊 ∪ 𝑇 𝑟(𝑃) is consistent, we define the update operation Conf * 𝐶𝐶 𝜑 = (𝑊 , 𝐶𝐶 ′ , 𝑃, 𝐷 𝐶 , 𝐷 𝐺 , 𝐷 𝐴 , &lt; 𝐶 , &lt; 𝐺 , &lt; 𝐴 ) with 𝐶𝐶 ′ = (𝐶𝐶 * ({𝜑} ∪ 𝑊 ∪ 𝑇 𝑟(𝑃))) ∖ (𝑊 ∪ 𝑇 𝑟(𝑃)) to add 𝜑 to 𝐶𝐶.</p><p>Definition 17. For a configuration Conf = (𝑊 , 𝐶𝐶, 𝑃, 𝐷 𝐶 , 𝐷 𝐺 , 𝐷 𝐴 , &lt; 𝐶 , &lt; 𝐺 , &lt; 𝐴 ) and context information 𝜑 ∈ 𝐶 ∪ ¬𝐶, we define the update operation Conf ÷ 𝐶𝐶 𝜑 = (𝑊 , (𝐶𝐶 ÷ 𝜑), 𝑃, 𝐷 𝐶 , 𝐷 𝐺 , 𝐷 𝐴 , &lt; 𝐶 , &lt; 𝐺 , &lt; 𝐴 ) to remove 𝜑 from 𝐶𝐶.</p><p>In order to update the plans 𝑃 by adding or removing a plan 𝜋 = (𝑔, 𝜑) we use similar updates as for the knowledge base. When adding a new plan to 𝑃 we need to ensure that the resulting set of plans contains at most one plan per goal. This means we have to remove any previous plan (𝑔, 𝜓 ) for the goal 𝑔 before adding (𝑔, 𝜑) to 𝑃. We also make sure that the new information is consistent with the other components of the knowledge base.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 18. For a configuration Conf</head><p>= (𝑊 , 𝐶𝐶, 𝑃, 𝐷 𝐶 , 𝐷 𝐺 , 𝐷 𝐴 , &lt; 𝐶 , &lt; 𝐺 , &lt; 𝐴 ) and a plan 𝜋 = (𝑔, 𝜑) ∈ 𝐿𝑃 so that {𝑇 𝑟(𝜋)} ∪ 𝑊 ∪ 𝐶𝐶 is consistent, we define the update operation Conf * 𝑃 𝜋 = (𝑊 , 𝐶𝐶, 𝑃 ′ , 𝐷 𝐶 , 𝐷 𝐺 , 𝐷 𝐴 , &lt; 𝐶 , &lt; 𝐺 , &lt; 𝐴 ) with 𝑃 ′ = (𝑃 ∖ {(𝑔, 𝜓 )}) ∪ {𝜋}, where (𝑔, 𝜓 ) is the previous plan for 𝑔 if one exists, to add 𝜋 to 𝑃.</p><p>If we remove a plan 𝜋 = (𝑔, 𝜑) from 𝑃, this will result in a valid configuration. However, it is possible that the goal 𝑔 is still the result of a goal selection rule and may be contained in an extension of 𝐷𝐿(Conf). This means the agent may advise the user to pursue the goal, despite there not being any action recommendation which corresponds to this. For this reason, this update should usually not be performed in isolation in practice.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 19. For a configuration Conf</head><p>= (𝑊 , 𝐶𝐶, 𝑃, 𝐷 𝐶 , 𝐷 𝐺 , 𝐷 𝐴 , &lt; 𝐶 , &lt; 𝐺 , &lt; 𝐴 ) and a plan 𝜋 = (𝑔, 𝜑) ∈ 𝐿𝑃, we define the update operation Conf ÷ 𝑃 𝜋 = (𝑊 , 𝐶𝐶, 𝑃 ′ , 𝐷 𝐶 , 𝐷 𝐺 , 𝐷 𝐴 , &lt; 𝐶 , &lt; 𝐺 , &lt; 𝐴 ) with 𝑃 ′ = 𝑃 ∖ {𝜋} to remove 𝜋 from 𝑃.</p><p>Proof (of Lemma 1). This follows directly from the definitions of * 𝜑 and ÷ 𝜑 and Proposition 2.</p><p>Unfortunately, we cannot make the same claim regarding &lt;-preserving consistent extensions. This is because any update to the knowledge base of a theory will affect the dependency graph 𝒢 (𝐷, 𝐾 ) of the theory 𝐷𝐿(Conf).</p></div>
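The knowledge-base updates of Definitions 14-19 can be pictured as pure functions over an immutable configuration. The following is a minimal sketch, not the paper's formalism: the configuration is restricted to 𝑊, 𝐶𝐶 and 𝑃, formulas are simplified to literals, and the revision operator is approximated by set union plus a consistency guard rather than full belief revision. All names (`Config`, `add_w`, `add_plan`, …) are our own.

```python
from dataclasses import dataclass, replace

# Sketch of a configuration restricted to the components updated by
# Definitions 14-19: world knowledge W, current context CC and plan set P
# (the rule sets and orderings are omitted for brevity).  Formulas are
# modelled as literals ("Run" / "-Run"); a plan is a pair (goal, body)
# where body is a tuple of literals.
@dataclass(frozen=True)
class Config:
    W: frozenset
    CC: frozenset
    P: frozenset

def consistent(literals):
    # Literal-level consistency: no atom occurs both plain and negated.
    return not any(("-" + l) in literals for l in literals if not l.startswith("-"))

def tr(P):
    # Stand-in for Tr(P): collect the literals of the plan bodies.
    return frozenset(l for (_, body) in P for l in body)

def add_w(conf, phi):
    # Conf *_W phi (Definition 14): phi must be consistent with CC and Tr(P).
    assert consistent({phi} | conf.CC | tr(conf.P)), "precondition violated"
    return replace(conf, W=conf.W | {phi})

def remove_w(conf, phi):
    # Conf ÷_W phi (Definition 15): drop phi from W; phi may still reappear
    # in an extension via CC, Tr(P) or the default rules.
    return replace(conf, W=conf.W - {phi})

def add_plan(conf, plan):
    # Conf *_P plan (Definition 18): at most one plan per goal, so any
    # previous plan for the same goal is replaced.
    goal, _ = plan
    return replace(conf, P=frozenset(p for p in conf.P if p[0] != goal) | {plan})
```

Updates to 𝐶𝐶 (Definitions 16-17) follow the same pattern with the roles of 𝑊 and 𝐶𝐶 swapped.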
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Updates to the Beliefs</head><p>The beliefs of the agent are made up of the context assumption rules 𝐷 𝐶 , the goal selection rules 𝐷 𝐺 and the action selection rules 𝐷 𝐴 . Since all types of rules and their respective orderings are defined and translated in the same way, we will only go through the updates of the context assumption rules in detail; the rest are analogous.</p><p>When adding a new context assumption rule to the agent's belief base, it is likely that this belief should also be integrated into the ordering &lt; 𝐶 . However, this is not mandatory and can be done separately with the update operator on &lt; 𝐶 that we introduce below.</p><p>Definition 20. For a configuration Conf = (𝑊 , 𝐶𝐶, 𝑃, 𝐷 𝐶 , 𝐷 𝐺 , 𝐷 𝐴 , &lt; 𝐶 , &lt; 𝐺 , &lt; 𝐴 ) and a context assumption rule 𝑟 = (𝜑, 𝜓 ) ∈ ℛ 𝐶 we define the update Conf * 𝐷 𝐶 𝑟 = (𝑊 , 𝐶𝐶, 𝑃, 𝐷 ′ 𝐶 , 𝐷 𝐺 , 𝐷 𝐴 , &lt; 𝐶 , &lt; 𝐺 , &lt; 𝐴 ) where</p><formula xml:id="formula_10">𝐷 ′ 𝐶 = 𝐷 𝐶 ∪ {𝑟}.</formula><p>When an existing context assumption rule 𝑟 needs to be removed from 𝐷 𝐶 , we have to remove it from the ordering &lt; 𝐶 as well. This follows from the requirement that the ordering &lt; 𝐶 be defined on the set 𝐷 𝐶 .</p><p>Proof (of Lemma 2). This follows directly from the definition and Proposition 2, since all default rules are still normal and the knowledge base is still consistent.</p><p>Unfortunately, when adding a new rule we cannot guarantee the existence of a &lt;-preserving extension, as this rule could generate new cycles in the dependency graph that might not be even. Removing a rule, however, does result in a configuration Conf ′ for which 𝐷𝐿(Conf ′ ) has a &lt;-preserving extension. Proposition 3. 
For an effective configuration Conf = (𝑊 , 𝐶𝐶, 𝑃, 𝐷 𝐶 , 𝐷 𝐺 , 𝐷 𝐴 , &lt; 𝐶 , &lt; 𝐺 , &lt; 𝐴 ) where 𝐷𝐿(Conf) has a &lt;-preserving extension and a rule 𝑟 ∈ ℛ 𝐶 , the ordered default theory 𝐷𝐿(Conf ′ ) with Conf ′ = Conf ÷ 𝐷 𝐶 𝑟 also has a &lt;-preserving extension.</p><p>Proof. To see this, we check the conditions of Proposition 1. Removing a rule from the configuration, and thereby a default rule from the default theory, cannot create any new cycles in the dependency graph. Since Conf is an effective configuration, we know that all existing cycles are even, which means that the dependency graph of 𝐷𝐿(Conf ′ ) is even too. Additionally, any cycles that are removed from the dependency graph by removing 𝑟 are also removed from the ordering &lt;, so the ordering cannot introduce any new cycles.</p></div>
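The belief updates of Definitions 20-21 can be sketched as set operations on the rule set and its ordering; the key point is that removal restricts the ordering to the remaining rules. A minimal illustrative sketch (rules are opaque hashable objects, the names are our own):

```python
# Adding and removing a context assumption rule (Definitions 20-21).
# The ordering <_C is represented as a set of (smaller, larger) pairs
# over the rule set D_C.

def add_rule(D_C, order_C, r):
    # Conf *_{D_C} r: the rule is added and the ordering is left untouched;
    # integrating r into <_C is a separate update on the ordering.
    return D_C | {r}, set(order_C)

def remove_rule(D_C, order_C, r):
    # Conf ÷_{D_C} r: remove r and restrict <_C to the remaining rules,
    # i.e. <'_C = <_C restricted to D'_C.
    D2 = D_C - {r}
    return D2, {(a, b) for (a, b) in order_C if a in D2 and b in D2}
```

As in the proof of Proposition 3, `remove_rule` can only delete edges from the ordering, never add them, which is why removal preserves acyclicity.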
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.">Updates to the Ordering</head><p>The ordering of the agent is made up of the orderings &lt; 𝐶 , &lt; 𝐺 and &lt; 𝐴 on 𝐷 𝐶 , 𝐷 𝐺 and 𝐷 𝐴 respectively. We can update each of these orderings individually and only need to ensure that the resulting ordering is acyclic. Since all three orderings of the agent are defined in the same way, we will only go through the updates to the context ordering in detail; the others are analogous.</p><p>We can add a relation to &lt; 𝐶 using the following update. When removing a relation (𝑟 1 , 𝑟 2 ) from &lt; 𝐶 , we ideally want to remove it from the transitive closure &lt; + 𝐶 to make sure it does not appear in 𝐷𝐿(Conf). However, this may require removing multiple relations from &lt; 𝐶 , and since there are multiple ways to do so, we do not include this in the update. If necessary, the ordering has to be updated multiple times to fully remove the relation from &lt; + 𝐶 .</p><p>Definition 23. For a configuration Conf = (𝑊 , 𝐶𝐶, 𝑃, 𝐷 𝐶 , 𝐷 𝐺 , 𝐷 𝐴 , &lt; 𝐶 , &lt; 𝐺 , &lt; 𝐴 ) and a relation (𝑟 1 , 𝑟 2 ) with 𝑟 1 , 𝑟 2 ∈ 𝐷 𝐶 we define the update Conf ÷ &lt; 𝐶 (𝑟 1 , 𝑟 2 ) = (𝑊 , 𝐶𝐶, 𝑃, 𝐷 𝐶 , 𝐷 𝐺 , 𝐷 𝐴 , &lt; ′ 𝐶 , &lt; 𝐺 , &lt; 𝐴 ) where &lt; ′ 𝐶 =&lt; 𝐶 ∖{(𝑟 1 , 𝑟 2 )}.</p><p>Lemma 3. The update operators Conf * &lt; 𝐶 (𝑟 1 , 𝑟 2 ) and Conf ÷ &lt; 𝐶 (𝑟 1 , 𝑟 2 ) are well-defined. The resulting ordering &lt; ′ 𝐶 is acyclic. If the default theory 𝐷𝐿(Conf) has a consistent extension, then the updated theory will also have a consistent extension.</p><p>Proof. This follows directly from the definitions, as the knowledge base is still consistent and the default rules are still normal.</p><p>When adding a new relation to the ordering &lt; 𝐶 , this may create new cycles when combined with the dependency graph of 𝐷𝐿(Conf), which means we cannot guarantee that the resulting configuration will be effective. 
Removing a relation, on the other hand, does not have this issue, meaning that an effective configuration is updated to another effective configuration.</p></div>
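The acyclicity condition of Definition 22, that (𝑟 1 , 𝑟 2 ) may only be added when (𝑟 2 , 𝑟 1 ) is not in the transitive closure, can be checked naively. A small sketch under our own naming; the closure computation is the textbook fixed-point iteration, not anything prescribed by the paper:

```python
# Updates to the ordering <_C (Definitions 22-23).  The ordering is a set
# of (smaller, larger) pairs; a pair may only be added when its reverse is
# not already in the transitive closure, which keeps the ordering acyclic.

def transitive_closure(order):
    # Fixed-point iteration: repeatedly add composed pairs until stable.
    closure = set(order)
    changed = True
    while changed:
        extra = {(a, d) for (a, b) in closure for (c, d) in closure if b == c}
        changed = not extra <= closure
        closure |= extra
    return closure

def add_relation(order, r1, r2):
    # Conf *_{<_C} (r1, r2): refuse the update if it would create a cycle.
    if (r2, r1) in transitive_closure(order):
        raise ValueError("adding (r1, r2) would make <_C cyclic")
    return order | {(r1, r2)}

def remove_relation(order, r1, r2):
    # Conf ÷_{<_C} (r1, r2): the pair may survive in the transitive closure
    # via other relations, so several removals can be needed, as noted above.
    return order - {(r1, r2)}
```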
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Resolving Misalignments</head><p>With the framework we introduced, the agent is able to reason about a user model and a world model in order to provide personalised support to the user. We chose default logic for this purpose because the explicit representation allows the user to interact with and adapt the agent's reasoning process directly, using the updates that we have defined in the previous section. A revision of the agent's reasoning process is necessary if the agent's advice does not align with the needs and wants of the user, that is, if the advice contains an action 𝑎 or a goal 𝑔 that the user does not agree with. In the following, we refer to these situations as misalignment scenarios, based on <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b10">11]</ref>. In this section we discuss the causes of misalignment identified in <ref type="bibr" target="#b9">[10]</ref> and how each of them can be resolved using the update operators defined in the previous section.</p><p>The three causes of misalignment differentiated in <ref type="bibr" target="#b9">[10]</ref> are the reasoning process of the agent being wrong, the agent's user model being wrong, or something having changed in such a way that the agent needs to adapt to the new situation. For our purposes, we do not need to differentiate whether a misalignment occurs due to a change or because of a mistake in the initialisation of the agent; formally, both are handled the same way in this framework. We will discuss how each of these scenarios can be addressed by updating the configuration of the agent. 
We will give examples of potential misalignments with the advice provided by the agent introduced in Example 1 and show how the realignment updates affect that configuration.</p><p>Throughout this section we assume that the agent and the user are able to identify the exact cause of the misalignment together. This is not a trivial assumption and remains a topic of active research, but it cannot be achieved purely within the logical framework of an agent, which places it outside the scope of this paper. For simplicity, we also assume that there is only one misalignment at a time.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1.">Incorrect Reasoning</head><p>The reasoning process of the agent is based on logical inference, which cannot be incorrect by itself. However, if the world model of the agent is incorrect, then the agent may draw the wrong conclusions even if the user model is correct. This may refer to either the knowledge or the beliefs about the world, the latter including the prioritisation of these beliefs.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1.1.">Incorrect World Knowledge</head><p>The first misalignment scenario we consider is the situation in which the agent has incorrect knowledge about the world. This means that there is either a sentence 𝜑 ∉ 𝑊 that the agent does not know or a sentence 𝜑 ∈ 𝑊 that the agent incorrectly accepts as known.</p><p>If the agent is missing the information 𝜑, we can update the configuration Conf of the agent using Conf * 𝑊 𝜑. By definition, this update requires 𝜑 to be consistent with 𝐶𝐶 ∪ 𝑇 𝑟(𝑃). If we assume that 𝜑 is the only cause of misalignment, then this requirement also makes sense intuitively. For the agent to be able to give advice as well, the additional requirements of an effective configuration have to hold. While we have explained above that these requirements are reasonable, they might be hard for the user to understand, especially as the agent becomes more complex. In future work we hope to look into ways to identify problematic cycles in the agent's configuration and assist the user in resolving them.</p><p>Example 3. In Example 2, we have identified that the agent's advice would be to pursue higher intensity exercising and specifically to go for a run. However, the user may be unable to go for a run because their regular running route is under construction. Although this is related to a certain context, which we discuss later on, we can treat it as direct information about the world. This means we update the agent's configuration with 𝐸𝑥 ′ = 𝐸𝑥 * 𝑊 ¬𝑅𝑢𝑛. As a result, {𝐻 𝐼 , 𝑅𝑢𝑛} is no longer an extension of 𝐸𝑥 ′ , and the agent's advice will instead be based on the extension 𝐸 ′ = {𝐻 𝐼 , 𝑊 𝑒𝑖𝑔ℎ𝑡𝑠}.</p><p>Next we consider the case where the agent has wrongly identified the current context 𝐶𝐶. If the mistake concerns a context factor 𝑐 that is already in 𝐶, this can easily be resolved using the updates Conf * 𝐶𝐶 𝑐 and Conf ÷ 𝐶𝐶 𝑐, similar to the updates to the knowledge base described above. 
However, if the user thinks that a new context factor should be considered which is not yet in the language of the agent, then simply adding it to the current context 𝐶𝐶 is not enough. We likely need to add a context assumption rule which specifies whether this context factor is normally assumed to be true or false. Additionally, we probably want to include this context factor in the relevant goal and action selection rules. Since we do not have an update that can modify individual rules, this has to be achieved by deleting the original rule, adding the modified rule and lastly reinstating the relevant orderings.</p><p>Example 4. We consider that the user does not want to go for a run because it is raining. The original configuration of the agent did not account for the context of rain, so we need to perform a series of updates to include this. We begin by adding 𝑅𝑎𝑖𝑛 to the description of the current context using 𝐸𝑥 1 = 𝐸𝑥 * 𝐶𝐶 𝑅𝑎𝑖𝑛. We then add the context assumption rule 𝛿 8 = (⊤ ∶ ¬𝑅𝑎𝑖𝑛/¬𝑅𝑎𝑖𝑛), specifying that unless we have other knowledge, we assume it is not raining, through the update 𝐸𝑥 2 = 𝐸𝑥 1 * 𝐷 𝐶 𝛿 8 . We remove the action selection rule 𝛿 6 , which is concerned with running, through 𝐸𝑥 3 = 𝐸𝑥 2 ÷ 𝐷 𝐴 𝛿 6 . We add the modified action selection rule 𝛿 9 = (¬𝑅𝑎𝑖𝑛 ∶ 𝑅𝑢𝑛/𝑅𝑢𝑛) and obtain 𝐸𝑥 4 = 𝐸𝑥 3 * 𝐷 𝐴 𝛿 9 . Finally, we restore the ordering by including 𝛿 7 &lt; 𝛿 9 , which gives us the final updated configuration 𝐸𝑥 ′ = 𝐸𝑥 4 * &lt; 𝐴 (𝛿 7 , 𝛿 9 ). The resulting default theory 𝐷𝐿(𝐸𝑥 ′ ) has only one &lt;-preserving extension, 𝐸 ′ = {𝐻 𝐼 , 𝑊 𝑒𝑖𝑔ℎ𝑡𝑠}.</p></div>
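The effect of a world-knowledge update such as the one in Example 3 can be reproduced with a naive extension computation for normal default theories. The sketch below is purely illustrative: the default rules are a hypothetical fragment chosen to mirror the running/weights choice, consistency is checked at the literal level, and the enumeration over rule orderings is exponential, intended only for toy examples.

```python
from itertools import permutations

def neg(l):
    # Literal negation: "Run" <-> "-Run".
    return l[1:] if l.startswith("-") else "-" + l

def consistent(s):
    return not any(neg(l) in s for l in s)

def extensions(W, defaults):
    # Normal defaults (prerequisite, consequent): apply them in every order,
    # firing a rule only if its prerequisite holds and its consequent is
    # consistent with what has been derived so far.  "T" stands for the
    # trivially true prerequisite.
    result = set()
    for order in permutations(defaults):
        E = set(W)
        changed = True
        while changed:
            changed = False
            for (pre, cons) in order:
                if (pre == "T" or pre in E) and cons not in E and consistent(E | {cons}):
                    E.add(cons)
                    changed = True
        result.add(frozenset(E))
    return result

# Hypothetical fragment of the running example: high intensity (HI) allows
# running or weights; after the update of Example 3 the agent knows -Run,
# so only the weights recommendation survives.
demo = extensions({"HI", "-Run"}, [("HI", "Run"), ("HI", "Weights")])
```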
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1.2.">Incorrect Beliefs about the World</head><p>The agent's beliefs about the world, modelled as context assumption rules in 𝐷 𝐶 , may also be incorrect.</p><p>If a new belief 𝛿 needs to be adopted, this can be done using the update Conf * 𝐷 𝐶 𝛿. This will change the dependency graph of the agent, which may mean that the resulting configuration is not effective. This raises similar problems as a change in the world knowledge.</p><p>A belief 𝛿 can be removed from the agent's configuration with the update Conf ÷ 𝐷 𝐶 𝛿. While this will produce an effective configuration, it may make the agent's advice less specific to the user's context.</p><p>The beliefs about the world may also need to be prioritised differently, by updating the ordering &lt; 𝐶 . While we have introduced the updates Conf * &lt; 𝐶 (𝛿 𝑖 , 𝛿 𝑗 ) and Conf ÷ &lt; 𝐶 (𝛿 𝑖 , 𝛿 𝑗 ) to add or remove a single relation, in practice we will likely want to make more complex changes. These can all be broken down into multiple applications of the two updates we have defined, but this may be too complicated for the user to oversee. Additionally, we need to make sure that the ordering remains acyclic and does not contradict the implicit ordering of the default theory modelled in the dependency graph.</p><p>Example 5. So far the agent's configuration has included the context assumption rule 𝛿 1 , stating that unless other information is available, the user's blood pressure is assumed to be normal. To change this assumption, we remove the original rule 𝛿 1 using 𝐸𝑥 1 = 𝐸𝑥 ÷ 𝐷 𝐶 𝛿 1 and then add the new rule (⊤ ∶ 𝐵𝑃/𝐵𝑃) through 𝐸𝑥 ′ = 𝐸𝑥 1 * 𝐷 𝐶 (⊤ ∶ 𝐵𝑃/𝐵𝑃). This means that even if the agent does not know the blood pressure levels of the user, so 𝐵𝑃 ∉ 𝐶𝐶, it will still recommend higher intensity exercises.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2.">Incorrect User Model</head><p>The user model of our agent contains information about the user's goals, the user's possible actions and the preferences regarding these. While humans may have goals and preferences that are not strictly logical, the formal framework of the agent requires the goals to be consistent with the current context and the knowledge about the world, and the dependency graph to fulfil the requirements of Proposition 1. The agent will need to collaborate with the user to ensure that the user model is as accurate as possible while still meeting these formal requirements.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2.1.">Incorrect Goals</head><p>The goals of the user are what motivates the advice the agent gives, but they are also subject to change as the needs and desires of the user develop. Each goal 𝑔 ∈ 𝐺 has to correspond to a plan 𝜋 ∈ 𝑃, so that the agent knows how each goal can be achieved. Additionally, a goal should occur in the consequent of a goal selection rule, otherwise it cannot be considered in the agent's advice. Any changes to the goals of the user therefore have to be captured in the set of plans and the goal selection rules.</p><p>If a new goal 𝑔 is added, this goal needs a corresponding plan 𝜋, which can be added with the update Conf * 𝑃 𝜋. Usually we will also add a goal selection rule 𝛿 𝑔 = (𝜑 ∶ 𝑔/𝑔), for a sentence 𝜑 ∈ ℒ 𝐶 describing the context in which the goal can be selected, by Conf * 𝐷 𝐺 𝛿 𝑔 . The goal selection rule will then likely need to be prioritised adequately, by adding relations to the ordering &lt; 𝐺 using Conf * &lt; 𝐺 and Conf ÷ &lt; 𝐺 . Each of these updates will affect the dependency graph, which means there is a risk that the resulting configuration is not effective.</p><p>If the user no longer wants to pursue a goal 𝑔, then the relevant goal selection rules as well as their orderings need to be removed using the appropriate updates.</p><p>If a plan or a goal selection rule needs to be changed, the original rule has to be deleted and the new version added in separate updates, as for updates to the world model.</p><p>Example 6. We want the agent to consider the additional goal Rest when giving advice. This goal is achieved if no exercise is done, so the plan is 𝜋 = (Rest, ¬𝑊 𝑎𝑙𝑘 ∧ ¬𝑌 𝑜𝑔𝑎 ∧ ¬𝑅𝑢𝑛 ∧ ¬𝑊 𝑒𝑖𝑔ℎ𝑡𝑠). For now we do not have any context requirements for this goal to be selected, but we prioritise it above the other goal selection rules. We begin by including the plan 𝜋 in the set of plans using 𝐸𝑥 1 = 𝐸𝑥 * 𝑃 𝜋. 
We then include the goal selection rule 𝛿 𝑔 = (⊤ ∶ 𝑅𝑒𝑠𝑡/𝑅𝑒𝑠𝑡) with the update 𝐸𝑥 2 = 𝐸𝑥 1 * 𝐷 𝐺 𝛿 𝑔 . Lastly, we include the relations (𝛿 2 , 𝛿 𝑔 ) and (𝛿 3 , 𝛿 𝑔 ) in the ordering &lt; 𝐺 through the updates 𝐸𝑥 ′ = 𝐸𝑥 2 * &lt; 𝐺 (𝛿 2 , 𝛿 𝑔 ) * &lt; 𝐺 (𝛿 3 , 𝛿 𝑔 ). This results in an effective configuration 𝐸𝑥 ′ with the &lt;-preserving extension 𝐸 ′ = {𝑅𝑒𝑠𝑡, ¬𝑊 𝑎𝑙𝑘, ¬𝑌 𝑜𝑔𝑎, ¬𝑅𝑢𝑛, ¬𝑊 𝑒𝑖𝑔ℎ𝑡𝑠}.</p></div>
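The three update steps of Example 6 can be traced with plain set operations. In the sketch below the pre-existing plan and the stand-ins for 𝛿 2 and 𝛿 3 are hypothetical placeholders of our own, not taken from the paper; only the sequence of steps mirrors the example.

```python
# Hypothetical starting point: one plan and two goal selection rules
# (stand-ins for delta_2 and delta_3 from the running example).
P = {("HI", ("Run",))}
D_G = {("Sun", "HI"), ("Rain", "LI")}
order_G = set()

# Step 1 (Ex_1 = Ex *_P pi): add the plan for the new goal Rest,
# replacing any previous plan for that goal (Definition 18).
pi = ("Rest", ("-Walk", "-Yoga", "-Run", "-Weights"))
P = {p for p in P if p[0] != pi[0]} | {pi}

# Step 2 (Ex_2 = Ex_1 *_{D_G} delta_g): add the rule (T : Rest / Rest).
delta_g = ("T", "Rest")
D_G = D_G | {delta_g}

# Step 3 (Ex' = Ex_2 *_{<_G} ...): prioritise delta_g above the existing
# rules, adding one relation at a time with a naive acyclicity guard.
for other in [("Sun", "HI"), ("Rain", "LI")]:
    assert (delta_g, other) not in order_G, "would create a cycle"
    order_G.add((other, delta_g))
```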
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2.2.">Incorrect Actions</head><p>The actions that are recommended by the agent are determined by the plans for the selected goals and the action selection rules. The user may want to change the context prerequisites for selecting certain actions, add a new action selection rule or remove an existing one. These can each be achieved using the updates Conf * 𝐷 𝐴 and Conf ÷ 𝐷 𝐴 . If the preferences of the user regarding the actions need to be changed, this can be handled analogously to changes of the ordering &lt; 𝐶 , using the updates Conf * &lt; 𝐴 and Conf ÷ &lt; 𝐴 .</p><p>Example 7. In Example 3, the user was not able to go for a run and we added ¬𝑅𝑢𝑛 to the agent's world knowledge. This time we will remove the action selection rule for 𝑅𝑢𝑛 instead. By performing the update 𝐸𝑥 ′ = 𝐸𝑥 ÷ 𝐷 𝐴 𝛿 6 , the action selection rule and its corresponding ordering in &lt; 𝐴 are removed. This leads to the same &lt;-preserving extension as in Example 3, 𝐸 ′ = {𝐻 𝐼 , 𝑊 𝑒𝑖𝑔ℎ𝑡𝑠}. While these updates formally have the same result, they intuitively mean different things. In Example 3 the user is not able to run due to outside circumstances that may be resolved at some point; once they are, we can remove ¬𝑅𝑢𝑛 from the knowledge base 𝑊 and continue to use the previous user model. The update in this example, on the other hand, removes running as a possible action from the user model, indicating that the user no longer views this as an option.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Discussion</head><p>We have introduced a formal framework which can be used to specify the configuration of a behaviour support agent. The configuration can be translated into a theory of ordered default logic, and the &lt;-preserving extensions of this theory determine the advice that the agent presents to the user. We have also defined updates on the configuration of the agent which add or remove information from each of its components. These updates can be used to resolve misalignments between the user and the agent.</p><p>In order to use the updates for realignment, it is necessary for the agent and the user to accurately identify the precise cause of the misalignment. While this problem needs to be addressed through communication between the agent and the user <ref type="bibr" target="#b9">[10]</ref>, we want to facilitate this process using the formal framework of the agent. In future work we hope to study whether the structure of the framework is understandable to users, how we can formally identify potential causes of misalignment, and how we can explain problematic cycles in the dependency graph to assist the user in resolving them.</p><p>So far we have only included the basic updates which add or remove information from each component of the agent's configuration. However, <ref type="bibr" target="#b8">[9]</ref> also considers other updates on default theories, such as introducing a possibility by ensuring that there is at least one consistent extension which contains a sentence 𝜑. It would be interesting to see whether these updates can be adapted for ordered default logic and what they would mean for the agent's configuration. Furthermore, each of the updates we have used so far is a permanent change to the agent's configuration. In practice there may be situations which require different advice in the moment but should not be considered in the future. 
These might require different, temporary types of updates.</p><p>In order to further demonstrate the potential of our framework, we also hope to implement the example agent we have presented in this paper. Since ordered default logic can be translated into regular default logic using the process described in <ref type="bibr" target="#b12">[13]</ref>, we can use existing solvers for default logic to implement the reasoning of the agent. By combining this with implementations of belief revision operators, we can study how our framework behaves in a real application. For this, we will likely also need to optimise the agent to reduce the computational complexity. A first step is to consider the work of <ref type="bibr" target="#b13">[14]</ref>, which discusses specific types of default theories for which an extension can be found in polynomial time.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Definition 14 .</head><label>14</label><figDesc>For a configuration Conf = (𝑊 , 𝐶𝐶, 𝑃, 𝐷 𝐶 , 𝐷 𝐺 , 𝐷 𝐴 , &lt; 𝐶 , &lt; 𝐺 , &lt; 𝐴 ) and a formula 𝜑 ∈ ℒ with {𝜑} ∪ 𝐶𝐶 ∪ 𝑇 𝑟(𝑃) consistent, we define the update operation Conf * 𝑊 𝜑 = (𝑊 ′ , 𝐶𝐶, 𝑃, 𝐷 𝐶 , 𝐷 𝐺 , 𝐷 𝐴 , &lt; 𝐶 , &lt; 𝐺 , &lt; 𝐴 ) with 𝑊 ′ = (𝑊 * ({𝜑} ∪ 𝐶𝐶 ∪ 𝑇 𝑟(𝑃))) ∖ (𝐶𝐶 ∪ 𝑇 𝑟(𝑃)).</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Lemma 1 .</head><label>1</label><figDesc>The updates Conf * 𝐾 𝜑,Conf * 𝑊 𝜑, Conf ÷ 𝑊 𝜑, Conf * 𝐶𝐶 𝜑, Conf ÷ 𝐶𝐶 𝜑 Conf * 𝑃 𝜋 and Conf ÷ 𝑃 𝜋, are welldefined. Additionally, if the default theory 𝐷𝐿(Conf) has a consistent extension, then the updated default theory 𝐷𝐿(Conf ′ ) will also have a consistent extension.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>Definition 21. For a configuration Conf = (𝑊 , 𝐶𝐶, 𝑃, 𝐷 𝐶 , 𝐷 𝐺 , 𝐷 𝐴 , &lt; 𝐶 , &lt; 𝐺 , &lt; 𝐴 ) and a context assumption rule 𝑟 = (𝜑, 𝜓 ) ∈ ℛ 𝐶 we define the update Conf ÷ 𝐷 𝐶 𝑟 = (𝑊 , 𝐶𝐶, 𝑃, 𝐷 ′ 𝐶 , 𝐷 𝐺 , 𝐷 𝐴 , &lt; ′ 𝐶 , &lt; 𝐺 , &lt; 𝐴 ) where 𝐷 ′ 𝐶 = 𝐷 𝐶 ∖ {𝑟} and &lt; ′ 𝐶 =&lt; 𝐶 | 𝐷 ′ 𝐶 . Lemma 2. The update operators Conf * 𝐷 𝐶 𝑟 and Conf÷ 𝐷 𝐶 𝑟 are well-defined. If the default theory 𝐷𝐿(Conf) has a consistent extension, then the updated theory will also have a consistent extension.</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head></head><label></label><figDesc>Definition 22. For a configuration Conf = (𝑊 , 𝐶𝐶, 𝑃, 𝐷 𝐶 , 𝐷 𝐺 , 𝐷 𝐴 , &lt; 𝐶 , &lt; 𝐺 , &lt; 𝐴 ) and a relation (𝑟 1 , 𝑟 2 ) with 𝑟 1 , 𝑟 2 ∈ 𝐷 𝐶 and (𝑟 2 , 𝑟 1 ) ∉ &lt; + 𝐶 we define the update Conf * &lt; 𝐶 (𝑟 1 , 𝑟 2 ) = (𝑊 , 𝐶𝐶, 𝑃, 𝐷 𝐶 , 𝐷 𝐺 , 𝐷 𝐴 , &lt; ′ 𝐶 , &lt; 𝐺 , &lt; 𝐴 ) where &lt; ′ 𝐶 =&lt; 𝐶 ∪{(𝑟 1 , 𝑟 2 )}.</figDesc><table /></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>This research was partly funded by the Hybrid Intelligence Center, a 10-year programme funded by the Dutch Ministry of Education, Culture and Science through the Netherlands Organisation for Scientific Research, https:// hybrid-intelligence-centre.nl, grant number 024.004.022.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Behavior change support systems: A research model and agenda</title>
		<author>
			<persName><forename type="first">H</forename><surname>Oinas-Kukkonen</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-642-13226-1_3</idno>
	</analytic>
	<monogr>
		<title level="m">Persuasive Technology</title>
				<editor>
			<persName><forename type="first">T</forename><surname>Ploug</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Hasle</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">H</forename><surname>Oinas-Kukkonen</surname></persName>
		</editor>
		<meeting><address><addrLine>Berlin Heidelberg; Berlin, Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="4" to="14" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Persuading to prepare for quitting smoking with a virtual coach: Using states and user characteristics to predict behavior</title>
		<author>
			<persName><forename type="first">N</forename><surname>Albers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Neerincx</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W.-P</forename><surname>Brinkman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems, AAMAS &apos;23, International Foundation for Autonomous Agents and Multiagent Systems</title>
				<meeting>the 2023 International Conference on Autonomous Agents and Multiagent Systems, AAMAS &apos;23, International Foundation for Autonomous Agents and Multiagent Systems</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="717" to="726" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Creating socially adaptive electronic partners: Interaction, reasoning and ethical challenges</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">B</forename><surname>Van Riemsdijk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">M</forename><surname>Jonker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Lesser</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, AA-MAS &apos;15, International Foundation for Autonomous Agents and Multiagent Systems</title>
				<meeting>the 2015 International Conference on Autonomous Agents and Multiagent Systems, AA-MAS &apos;15, International Foundation for Autonomous Agents and Multiagent Systems<address><addrLine>Richland, SC</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="1201" to="1206" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">A research agenda for hybrid intelligence: Augmenting human intellect with collaborative, adaptive, responsible, and explainable artificial intelligence</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Akata</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Balliet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>De Rijke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Dignum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Dignum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Eiben</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Fokkens</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Grossi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Hindriks</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Hoos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Hung</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Jonker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Monz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Neerincx</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Oliehoek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Prakken</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Schlobach</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Van Der Gaag</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Van Harmelen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Van Hoof</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Van Riemsdijk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Van Wynsberghe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Verbrugge</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Verheij</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Vossen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Welling</surname></persName>
		</author>
		<idno type="DOI">10.1109/MC.2020.2996587</idno>
	</analytic>
	<monogr>
		<title level="j">Computer</title>
		<imprint>
			<biblScope unit="volume">53</biblScope>
			<biblScope unit="page" from="18" to="28" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Rise of Machine Agency: A Framework for Studying the Psychology of Human-AI Interaction (HAII)</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">S</forename><surname>Sundar</surname></persName>
		</author>
		<idno type="DOI">10.1093/jcmc/zmz026</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Computer-Mediated Communication</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="page" from="74" to="88" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Personalization in practice: Methods and applications</title>
		<author>
			<persName><forename type="first">D</forename><surname>Goldenberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kofman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Albert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Mizrachi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Horowitz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Teinemaa</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 14th ACM International Conference on Web Search and Data Mining, WSDM &apos;21</title>
				<meeting>the 14th ACM International Conference on Web Search and Data Mining, WSDM &apos;21<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="1123" to="1126" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Knowledge-driven profile dynamics</title>
		<author>
			<persName><forename type="first">E</forename><surname>Fermé</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Garapa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">D L</forename><surname>Reis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Almeida</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Paulino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Rodrigues</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.artint.2024.104117</idno>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">331</biblScope>
			<biblScope unit="page">104117</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Algorithmic decision-making based on machine learning from big data: can transparency restore accountability?</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">B</forename><surname>De Laat</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Philosophy &amp; Technology</title>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<biblScope unit="page" from="525" to="541" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">On the dynamics of default reasoning</title>
		<author>
			<persName><forename type="first">G</forename><surname>Antoniou</surname></persName>
		</author>
		<idno type="DOI">10.1002/int.10065</idno>
		<ptr target="https://doi.org/10.1002/int.10065" />
	</analytic>
	<monogr>
		<title level="j">International Journal of Intelligent Systems</title>
		<imprint>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="page" from="1143" to="1155" />
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Acquiring semantic knowledge for user model updates via human-agent alignment dialogues: An exploratory focus group study</title>
		<author>
			<persName><forename type="first">P.-Y</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Tielman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Heylen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Jonker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">B</forename><surname>Van Riemsdijk</surname></persName>
		</author>
		<idno type="DOI">10.3233/FAIA230077</idno>
	</analytic>
	<monogr>
		<title level="m">HHAI 2023: Augmenting Human Intellect - Proceedings of the 2nd International Conference on Hybrid Human-Artificial Intelligence</title>
		<imprint>
			<publisher>IOS Press</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="93" to="108" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Using default logic to create adaptable user models for behavior support agents</title>
		<author>
			<persName><forename type="first">J</forename><surname>Wolff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">De</forename><surname>Boer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Heylen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">B</forename><surname>Van Riemsdijk</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Hybrid Human AI Systems for the Social Good</title>
		<imprint>
			<biblScope unit="page">350</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
	<note>HHAI</note>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">A logic for default reasoning</title>
		<author>
			<persName><forename type="first">R</forename><surname>Reiter</surname></persName>
		</author>
		<idno type="DOI">10.1016/0004-3702(80)90014-4</idno>
		<ptr target="https://doi.org/10.1016/0004-3702(80)90014-4" />
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="page" from="81" to="132" />
			<date type="published" when="1980">1980</date>
		</imprint>
	</monogr>
	<note>Special Issue on Non-Monotonic Logic</note>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Expressing preferences in default logic</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">P</forename><surname>Delgrande</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Schaub</surname></persName>
		</author>
		<idno type="DOI">10.1016/S0004-3702(00)00049-7</idno>
		<ptr target="https://doi.org/10.1016/S0004-3702(00)00049-7" />
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">123</biblScope>
			<biblScope unit="page" from="41" to="87" />
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Default theories that always have extensions</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">H</forename><surname>Papadimitriou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Sideri</surname></persName>
		</author>
		<idno type="DOI">10.1016/0004-3702(94)90087-6</idno>
		<ptr target="https://doi.org/10.1016/0004-3702(94)90087-6" />
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">69</biblScope>
			<biblScope unit="page" from="347" to="357" />
			<date type="published" when="1994">1994</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Iterated theory base change: A computational model</title>
		<author>
			<persName><forename type="first">M.-A</forename><surname>Williams</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence</title>
				<meeting>the Fourteenth International Joint Conference on Artificial Intelligence</meeting>
		<imprint>
			<date type="published" when="1995">1995</date>
			<biblScope unit="page" from="1541" to="1549" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">On the logic of theory change: Partial meet contraction and revision functions</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">E</forename><surname>Alchourrón</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Gärdenfors</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Makinson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The Journal of Symbolic Logic</title>
		<imprint>
			<biblScope unit="volume">50</biblScope>
			<biblScope unit="page" from="510" to="530" />
			<date type="published" when="1985">1985</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Belief revision in answer set programming</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">I</forename><surname>Aravanis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Peppas</surname></persName>
		</author>
		<idno type="DOI">10.1145/3139367.3139387</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 21st Pan-Hellenic Conference on Informatics, PCI &apos;17</title>
				<meeting>the 21st Pan-Hellenic Conference on Informatics, PCI &apos;17<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Belief base change operations for answer set programming</title>
		<author>
			<persName><forename type="first">P</forename><surname>Krümpelmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Kern-Isberner</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Logics in Artificial Intelligence</title>
				<editor>
			<persName><forename type="first">L</forename><forename type="middle">F</forename><surname>Del Cerro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Herzig</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Mengin</surname></persName>
		</editor>
		<meeting><address><addrLine>Berlin Heidelberg; Berlin, Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="294" to="306" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">On the dynamics of structured argumentation: Modeling changes in default justification logic</title>
		<author>
			<persName><forename type="first">S</forename><surname>Pandžić</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-39951-1_14</idno>
	</analytic>
	<monogr>
		<title level="s">Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)</title>
		<imprint>
			<biblScope unit="volume">12012</biblScope>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="222" to="241" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
