<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Arguments Based on Domain Rules in Prediction Justifications</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Joeri</forename><surname>Peters</surname></persName>
							<email>peters@uu.nl</email>
							<affiliation key="aff0">
								<orgName type="institution">Utrecht University</orgName>
								<address>
									<settlement>Utrecht</settlement>
									<country key="NL">The Netherlands</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="institution">Netherlands National Police</orgName>
								<address>
									<settlement>Driebergen</settlement>
									<country key="NL">The Netherlands</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Floris</forename><surname>Bex</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Utrecht University</orgName>
								<address>
									<settlement>Utrecht</settlement>
									<country key="NL">The Netherlands</country>
								</address>
							</affiliation>
							<affiliation key="aff2">
								<orgName type="institution">Tilburg University</orgName>
								<address>
									<settlement>Tilburg</settlement>
									<country key="NL">The Netherlands</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Henry</forename><surname>Prakken</surname></persName>
							<email>h.prakken@uu.nl</email>
							<affiliation key="aff0">
								<orgName type="institution">Utrecht University</orgName>
								<address>
									<settlement>Utrecht</settlement>
									<country key="NL">The Netherlands</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Arguments Based on Domain Rules in Prediction Justifications</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">84421B11E7E5797CA8E3B7FEFA93802E</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T20:16+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Case-Based Argumentation</term>
					<term>Precedential Constraint</term>
					<term>Explainable AI</term>
					<term>Domain Knowledge</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Ensuring the interpretability of trained machine learning models is often paramount, particularly in high-stakes domains such as counter-terrorism and other forms of law enforcement. Post hoc techniques have emerged as a promising avenue for justifying the predictions of complex models. However, while these approaches provide valuable insights, they often lack the ability to directly reference familiar domain rules and make use of these rules to guide explanations. This paper introduces a method for incorporating arguments about the applicability of domain rules in justifying classifier predictions.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>This paper is concerned with explainability in machine learning (ML). Specifically, we focus on enhancing the explainable artificial intelligence (XAI <ref type="bibr" target="#b11">[12]</ref>) approach known as 'a fortiori case-based argumentation' (AF-CBA <ref type="bibr" target="#b16">[17]</ref>). AF-CBA justifies binary classification predictions using the theory of precedential constraint <ref type="bibr" target="#b9">[10]</ref>, that is, by referencing precedential cases from a case base constructed from training (or historical <ref type="bibr" target="#b14">[15]</ref>) data. Our goal is to extend this framework by incorporating domain rules, recognising that domain-specific knowledge plays a pivotal role in decision-making processes.</p><p>ML models are often regarded as 'black boxes' when their opacity is high, whether due to their complexity or to proprietary protection <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b7">8]</ref>. Neural networks are a typical example of intricate models that have revolutionised predictive accuracy at the cost of increased opacity. Transparency and explainability concerns become particularly critical in high-stakes domains, such as law enforcement, where decisions may carry significant consequences for individuals or court cases. Predictions have to be highly accurate, possibly necessitating opaque models, yet explainable. Post hoc approaches like AF-CBA aim to solve this problem by justifying ML predictions 'after the fact', meaning that the approach does not access the ML model itself and is therefore model-agnostic. In our experience, the need for such an approach arises relatively frequently in practice. An ML model can be inaccessible at the moment an explanation is required, or the type of explanation it offers can be too technical for the intended users, rendering it a black box in practice. Furthermore, there can be situations in which the performance of an interpretable alternative to a black-box model is deemed unsatisfactory, necessitating a post hoc solution. AF-CBA produces such justifications on the basis of earlier cases (precedents).</p><p>Applicable scenarios can be drawn from the domain of counter-terrorism, where ML classifiers can be used to quickly yet objectively distinguish between two outcomes. For example, there may be a need to decide whether a particular incident is the responsibility of a specific terrorist organisation, judging by the modus operandi and objectives of its members. Another binary categorisation could be between the incident forming part of a large-scale coordinated attack and it being a 'lone-wolf' incident, each of which warrants a different police response. As a running example, we adopt the scenario in which officials seek to determine whether a violent event should be classified as an act of terrorism. It is realistic that a classifier would be used in such a situation to facilitate quick yet valid judgement, so that responders need not act on gut feeling alone. However, the number of applicable precedents is relatively low and much contextual knowledge is involved in making this decision. Hence, the system should be transparently constrained by experts' knowledge of this domain. Our approach is in line with a tradition of viewing rule- and case-based reasoning as complementary. 
For instance, the two were combined in an overall architecture by Golding &amp; Rosenbloom <ref type="bibr" target="#b6">[7]</ref> to allow the latter to produce analogies in order to handle exceptions to the (incomplete) rule set. A similar integration of rules and cases was used by Rissland &amp; Skalak <ref type="bibr" target="#b17">[18]</ref> in their CABARET system, aimed at an area of income tax law. Our goal, however, is to let AF-CBA make use of and refer to such domain knowledge in its justifications of the predicted outcomes of a black-box model.</p><p>The rest of this paper is structured as follows. We introduce our XAI approach in Section 2 before considering how to incorporate domain rules in Section 3. Finally, we discuss conclusions and future work directions in Section 4.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Preliminaries</head><p>In justifying binary class labels, the predictions of a classifier trained on labelled data can be likened to court decisions based on judicial precedents. In this vein, Prakken &amp; Ratsma <ref type="bibr" target="#b16">[17]</ref> propose a top-level model, afterwards dubbed AF-CBA, drawing on AI &amp; Law research and utilising case-based argumentation inspired by Horty's model of a fortiori reasoning <ref type="bibr" target="#b8">[9]</ref>. AF-CBA is influenced by CATO <ref type="bibr" target="#b0">[1]</ref> and work by Čyras et al. <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b2">3,</ref><ref type="bibr" target="#b4">5]</ref>. Contrary to <ref type="bibr" target="#b2">[3]</ref>, AF-CBA is not its own explainable classification approach, but a post hoc approach used to justify the classification predictions of another ML model.</p><p>The context of AF-CBA is illustrated in Figure <ref type="figure">1</ref>. A labelled dataset constitutes a random sample from the overall population, to which annotators or decision-makers assign labels, and on which a classifier is trained (supervised ML). A focus case represents a single, random sample from the same population, and the classifier assigns a predicted outcome to it. Due to the black-box nature of the classifier, it lacks the capability to explain the rationale behind the prediction. AF-CBA addresses this limitation by utilising either the labelled set or an archive of previous case predictions <ref type="bibr" target="#b14">[15]</ref> as a case base, engaging in an argument game between a proponent and an opponent of the predicted outcome. In this argument game, cases that are similar to the focus case are cited in order to argue that the focus case ought to receive the same outcome. The decision is forced when no relevant differences exist between the focus case and the precedent. Moves in the argumentation game follow Dung's abstract argumentation framework <ref type="bibr" target="#b5">[6]</ref>, with the game modelled on grounded semantics <ref type="bibr" target="#b15">[16]</ref>. The notion of precedential constraint is that a focus case ought to receive the same outcome as a precedential case if any differences between those cases only serve to strengthen the focus case for that particular outcome. A winning strategy for the proponent is then presented as a justification for the predicted outcome in the form of an argument graph.</p><p>An abstract argument framework (AF), introduced by Dung <ref type="bibr" target="#b5">[6]</ref>, consists of a pair AF = ⟨A, attack⟩, where A represents a set of arguments, and attack is a binary relation on A. A subset B of A is termed conflict-free if no argument in B attacks an argument in B, and admissible if it is both conflict-free and capable of defending itself against attacks. In other words, if an argument A 1 is in B, and some argument A 2 in A attacks A 1 , then some argument in B must attack A 2 . There are different types of admissible sets, known as extensions. 
We focus on the grounded extension, which has the additional properties that it contains all arguments it defends and is subset-minimal for these conditions.</p><p>Formally, a case in the case base (CB) comprises an outcome and a fact situation. The case's outcome is a binary label, denoted as o or o′. Variables s and s̄ represent the two sides, such that s = o iff s̄ = o′ and vice versa. The fact situation includes dimensions (features), where each dimension is a tuple</p><formula xml:id="formula_0">d = (V, ≤ o , ≤ o ′ ).</formula><p>The tuple consists of a value set V and two partial orderings on V , ≤ o and</p><formula xml:id="formula_1">≤ o ′ , such that v ≤ o v ′ if and only if v ′ ≤ o ′ v for v, v ′ ∈ V .</formula><p>Each dimension has a tendency, with a positive tendency indicating that a higher value is associated with one outcome (e.g., 1 or true), and vice versa for the other. The tendency is sometimes given explicitly, that is:</p><formula xml:id="formula_2">d i + or d i − .</formula><p>A value assignment, represented as (d, v), signifies that dimension d takes value x in case c ∈ CB, written v(d, c) = x. The collective value assignments for all dimensions d in the non-empty set D form a fact situation denoted as F. We assume that any two fact situations pertain to the same set D. Defining a case as c = (F, outcome(c)) where outcome(c) ∈ {o, o ′ }, the fact situation of case c can be expressed as F(c).</p><p>When assessing two fact situations, one may find that one case is 'stronger' or 'better' for a specific outcome than the other. The outcome of a focus case is considered forced if there exists a precedent in the CB with the same outcome, and all differences between the focus case and that precedent serve to strengthen the focus case for that very outcome <ref type="bibr" target="#b9">[10]</ref>.</p></div>
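<div xmlns="http://www.tei-c.org/ns/1.0"><p>To make grounded semantics concrete, the following Python sketch (ours, purely illustrative and not part of the published AF-CBA implementation) computes the grounded extension of an abstract argumentation framework by iterating Dung's characteristic function from the empty set to its least fixpoint.</p><code lang="python">
def grounded_extension(arguments, attacks):
    """arguments: a set of hashable ids; attacks: a set of (attacker, target) pairs."""
    attackers_of = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended_by(s):
        # An argument is defended by s if each of its attackers is attacked by s.
        return {a for a in arguments
                if all(any((d, b) in attacks for d in s) for b in attackers_of[a])}

    extension = set()
    while True:
        nxt = defended_by(extension)
        if nxt == extension:
            return extension
        extension = nxt

# Toy chain: A1 attacks A2, A2 attacks A3; A1 reinstates A3.
print(grounded_extension({"A1", "A2", "A3"}, {("A1", "A2"), ("A2", "A3")}))
# -> {'A1', 'A3'}
</code></div>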
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 1 (Preference relation for fact situations).</head><p>Given two fact situations F and F′,</p><formula xml:id="formula_3">F ≤ s F′ iff v ≤ s v′ for all (d, v) ∈ F and (d, v′) ∈ F′.</formula><p>Definition 2 (Precedential constraint). Given case base CB and fact situation F, deciding F for s is forced iff CB contains a case c = (F′, s) such that F′ ≤ s F.</p><p>A fact situation could be forced for both outcomes o and o′ by different precedents, in which case we can speak of an inconsistent CB:</p><p>Definition 3 (Case base consistency). A case base CB is consistent iff it does not contain two cases c = (F, s) and c′ = (F′, s̄) such that F ≤ s F′. Otherwise it is inconsistent.</p><p>A best precedent has the same outcome as the focus case and as few relevant differences as possible. Multiple cases can meet these criteria.</p></div>
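<div xmlns="http://www.tei-c.org/ns/1.0"><p>Definitions 1-3 can be operationalised directly. The sketch below is a hedged illustration under the simplifying assumption of numeric dimensions, each with an explicit tendency (+1 if higher values favour outcome o, -1 if they favour o′); the dimension names are hypothetical.</p><code lang="python">
TENDENCY = {"casualties": +1, "weapon": -1}  # hypothetical example dimensions

def at_most_as_good(v, v2, dim, side):
    """v is at most as good as v2 for side ('o' or 'o_prime'), i.e. v ≤_s v2."""
    sign = TENDENCY[dim] if side == "o" else -TENDENCY[dim]
    return sign * v2 >= sign * v

def fact_leq(F, F2, side):
    """Definition 1: F ≤_s F2 iff every dimension of F is at most as good for s."""
    return all(at_most_as_good(F[d], F2[d], d, side) for d in F)

def forced(case_base, F, side):
    """Definition 2: deciding F for s is forced iff some precedent (F2, s) has F2 ≤_s F."""
    return any(out == side and fact_leq(F2, F, side) for (F2, out) in case_base)

def consistent(case_base):
    """Definition 3: no two cases (F1, s) and (F2, s̄) with F1 ≤_s F2."""
    flip = {"o": "o_prime", "o_prime": "o"}
    return not any(o2 == flip[o1] and fact_leq(F1, F2, o1)
                   for (F1, o1) in case_base for (F2, o2) in case_base)

cb = [({"casualties": 5, "weapon": 2}, "o")]
# The focus case differs only in ways that strengthen it for o, so o is forced.
print(forced(cb, {"casualties": 10, "weapon": 1}, "o"))  # -> True
</code></div>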
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 4 (Differences between cases).</head><p>Let c = (F(c), outcome(c)) and f = (F(f), outcome(f)) be two cases. The set D(c, f) of differences between c and f is</p><formula xml:id="formula_5">D(c, f) = {(d, v) ∈ F(c) | v(d, c) ≰ outcome(f) v(d, f)}.</formula><p>Definition 5 (Best precedent). Let c = (F(c), outcome(c)) and f = (F(f), outcome(f)) be two cases, where c ∈ CB and f ∉ CB. c is a best precedent for f iff:</p><formula xml:id="formula_6">• outcome(c) = outcome(f) and • there is no c′ ∈ CB such that outcome(c′) = outcome(c) and D(c′, f) ⊂ D(c, f).</formula><p>The two players argue about differences between the focus case and precedents from the CB. The proponent does so in favour of the focus case's predicted outcome and the opponent to the contrary. The proponent starts by citing a best precedent. The opponent aims to respond to the proponent's initial citation by either presenting a counterexample or making a distinguishing move Worse(c, x) (the focus case is inferior to precedent c in dimensions x). A distinguishing move can be countered with a compensation move Compensates(c, y, x) (dimensions y make up for the shortcomings in dimensions x compared to precedent c). Finally, there is the transformation move Transformed(c, c′) (the citation can be transformed into a case c′ with D(c′, f) = ∅). The proponent can respond using these moves, then the opponent can do the same in turn, and this back-and-forth continues until the opponent cannot make any more moves. Note that y in Compensates(c, y, x) can be the empty set. This is intended to guarantee the possibility of using a compensation move, ensuring the existence of a winning strategy for the proponent and thus that of a justification for the focus case's predicted outcome.</p><p>Definition 6 outlines the argumentation framework. The compensation move utilises the set sc, containing compensation definitions. The specifics and structure of sc were intentionally left open by Prakken &amp; Ratsma <ref type="bibr" target="#b16">[17]</ref>. In the most straightforward scenario, sc serves as a partial ordering on dimensions, indicating, for example, when a high value for d i compensates for a low value for d j . Essentially, sc imparts specific domain knowledge. In this paper, we employ the set sc to explicitly introduce domain rules into the framework for use by the compensation move.</p><p>Definition 6 (Case-based argumentation framework). Given a case base CB, a focus case f ∉ CB, and definitions of compensation sc, an abstract argumentation framework AF is a pair ⟨A, attack⟩, where:</p><formula xml:id="formula_7">• A = CB ∪ M, with M = {Worse(c, x) | c ∈ CB, x ≠ ∅ and x = {(d, v) ∈ F(f) | v(d, f) &lt; outcome(f) v(d, c)}} ∪ {Compensates(c, y, x) | c ∈ CB, y ⊆ {(d, v) ∈ F(f) | v(d, c) &lt; outcome(f) v(d, f)}, x = {(d, v) ∈ F(f) | v(d, f) &lt; outcome(f) v(d, c)} and y compensates x according to sc} ∪ {Transformed(c, c′) | c ∈ CB and c can be transformed into c′ and D(c′, f) = ∅}
• A attacks B iff:
– A, B ∈ CB and outcome(A) ≠ outcome(B) and D(B, f) ⊄ D(A, f);
– B ∈ CB with outcome(B) = outcome(f) and A is of the form Worse(B, x);
– B is of the form Worse(c, x) and A is of the form Compensates(c, y, x);
– B ∈ CB and outcome(B) ≠ outcome(f) and A is of the form Transformed(c, c′).</formula></div>
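<div xmlns="http://www.tei-c.org/ns/1.0"><p>Definitions 4 and 5 admit a similarly small sketch, continuing the illustrative numeric encoding above: the difference set collects the dimensions on which the precedent is not at most as good as the focus case for the focus case's outcome, and a best precedent is inclusion-minimal among same-outcome cases.</p><code lang="python">
def differences(c, f):
    """Definition 4: dimensions of precedent c not at most as good as f's for outcome(f)."""
    (Fc, _), (Ff, of) = c, f
    return {d for d in Fc if not at_most_as_good(Fc[d], Ff[d], d, of)}

def best_precedents(case_base, f):
    """Definition 5: same-outcome cases whose difference set is inclusion-minimal."""
    same = [c for c in case_base if c[1] == f[1]]
    return [c for c in same
            if not any(differences(c, f) > differences(c2, f) for c2 in same)]

# With the case base cb above, the single precedent has no differences with this
# focus case and is therefore a best precedent for it.
f = ({"casualties": 10, "weapon": 1}, "o")
print(best_precedents(cb, f))  # -> [({'casualties': 5, 'weapon': 2}, 'o')]
</code></div>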
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Argument-Based Compensation Moves</head><p>The set sc is used in Definition 6 as a placeholder for any construct that incorporates domain knowledge of some type, including CATO-like hierarchical relations <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b18">19]</ref> and perhaps complex ontologies <ref type="bibr" target="#b13">[14]</ref>, depending on how the phrase "...y compensates x according to sc..." is interpreted. In this paper, we restrict ourselves to domain rules about fact situations to underpin compensation moves. To this end, we want to construct arguments with conclusions of the form compensates(c, y, x) on the basis of such domain rules. We formalise sc as a reasoning system, which takes the form of an argumentation framework AF sc = ⟨A, attack⟩ containing arguments constructed by instantiating argumentation schemes based on domain rules <ref type="bibr" target="#b19">[20]</ref>. Using AF sc , we want to evaluate which compensation moves are available, that is, which conclusions are part of the grounded extension. We assume that a body of domain rules exists. It may be provided by one or more domain experts, or through some statistical mechanism such as a rule discovery technique. We allow for the possibility that not every situation relevant to the domain is explicitly covered by these domain rules. Instead, we assume there may be exceptions to these rules, which may themselves be informative to the user who interprets the justification of the compensation move.</p></div>
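<div xmlns="http://www.tei-c.org/ns/1.0"><p>On this reading, deciding which compensation moves are available amounts to computing the grounded extension of AF sc and collecting the compensates(...) conclusions it contains. A minimal sketch, reusing grounded_extension from the sketch in Section 2 and assuming (as an illustration) that each argument id is mapped to a conclusion string:</p><code lang="python">
def available_compensations(arguments, attacks, conclusion):
    """conclusion: dict mapping each argument id to its conclusion string."""
    ge = grounded_extension(arguments, attacks)
    return {conclusion[a] for a in ge if conclusion[a].startswith("compensates")}
</code></div>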
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Compensation as an Argument Scheme</head><p>The application of a domain rule for compensation can be described by Scheme 1, which states that the compensation is the conclusion from the premises that the fact situation is worse for f in dimensions D w (premise w) while it is better for f in dimensions D b (premise b) and that D b is preferred over D w (premise p) according to the preference relation D w ≺ D b . The fact that the focus case f has worse values for dimensions in D w than the precedent c can then be downplayed, because f has better values for dimensions in D b and those dimensions are deemed more relevant for the outcome of f . Note that 'worse' and 'better' are relative to the tendency of the dimension in question and therefore do not necessarily correspond to 'lower' and 'higher' values, respectively.</p><p>Argumentation Scheme 1 (Compensation). Let c ∈ CB be a precedent, f be a focus case, and D b , D w ⊆ D two sets of dimensions where D b ∩ D w = ∅, then the compensation scheme COMP( f , c, D b , D w ) is defined as the following reasoning pattern: </p><formula xml:id="formula_8">w: D w = {d ∈ D | d( f ) &lt; outcome( f ) d(c)} b: D b = {d ∈ D | d( f ) &gt; outcome( f ) d(c)} p: D w ≺ D b ----------- Conc: compensates(c, D b , D w )</formula></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In Table <ref type="table" target="#tab_0">1</ref>, f has a higher weapon sophistication (d weapon ) than c, which is associated with non-terrorist incidents, making f worse than c on this dimension. However, f also has a higher number of casualties (d casualties ), which is a strong indicator of a terrorist incident. Using the domain rule that a higher number of casualties compensates for a higher weapon sophistication, we instantiate the following argument:</p><formula xml:id="formula_9">COMP( f , c, D b , D w ): w: D w = {d weapon } b: D b = {d casualties } p: {d weapon } ≺ {d casualties } ----------- Conc: compensates(c, {d casualties }, {d weapon })</formula><p>In this example, the argument states that although f has a higher ('worse') weapon sophistication than c, the higher ('better') number of casualties of f compensates for this, justifying the predicted outcome of true for f on the basis of this precedent.</p><p>We assume that the fact situations of both the precedents from the CB and the focus case are known. Therefore, we cannot argue against the first two premises of this scheme. Furthermore, the scheme is strict in that its conclusion cannot be negated if all its premises are true. But first, we must consider how we know that premise p (the preference relation underpinning the compensation move) is true. In practice, conditions may apply to a preference relation and we will now consider the various forms these conditions may take.</p></div>
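<div xmlns="http://www.tei-c.org/ns/1.0"><p>The instantiation of Scheme 1 can be sketched as follows, again under the illustrative numeric encoding introduced above, with 'low' and 'high' weapon sophistication mapped to 0 and 1 as an assumption: premises w and b are computed from the two fact situations, and premise p is supplied as a callable so that the conditional preference relations of Section 3.2 can be plugged in.</p><code lang="python">
def strictly_below(v, v2, dim, side):
    """v is strictly below v2 in the ordering for side s (i.e. worse for s)."""
    sign = TENDENCY[dim] if side == "o" else -TENDENCY[dim]
    return sign * v2 > sign * v

def comp_scheme(f, c, prefers):
    """Scheme 1, COMP(f, c, D_b, D_w); prefers(D_w, D_b, F) encodes premise p."""
    (Ff, of), (Fc, _) = f, c
    D_w = frozenset(d for d in Ff if strictly_below(Ff[d], Fc[d], d, of))  # premise w
    D_b = frozenset(d for d in Ff if strictly_below(Fc[d], Ff[d], d, of))  # premise b
    if prefers(D_w, D_b, Ff):                                              # premise p
        return ("compensates", c, D_b, D_w)                                # Conc
    return None

c = ({"casualties": 5, "weapon": 0}, "o")   # Table 1: low weapon sophistication
f = ({"casualties": 10, "weapon": 1}, "o")  # predicted outcome true, high weapon
print(comp_scheme(f, c, lambda dw, db, F: db == {"casualties"}))
# -> ('compensates', ..., frozenset({'casualties'}), frozenset({'weapon'}))
</code></div>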
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Conditional Preference Relations</head><p>There may be situations where a specific threshold value must be met for a preference relation to be considered to hold when applying Scheme 1. For instance, the aforementioned relation {d weapon } ≺ {d casualties } may only hold for high numbers of casualties, say at least 4. Fewer casualties may not be considered a good enough reason to compensate for the fact that the highly sophisticated weapon used in this incident is so irregular. In other words, the premise p of this instance of Scheme 1 depends on the condition that d casualties ≥ 4. We consider additional examples of conditions below, but for now we summarise the sets of conditions for a preference relation D w ≺ D b with an abstract premise Ψ.</p><p>Argumentation Scheme 2 (Preference). Let f be a focus case, s ∈ {o, o′}, D b , D w ⊆ D be two sets of dimensions where D b ∩ D w = ∅, and Ψ be an abstract placeholder whose truth value represents whether the preference conditions are fulfilled. Then the preference scheme PREF( f , D b , D w , D) is defined as the following reasoning pattern: ψ: Ψ (preference conditions fulfilled) =================== Conc: D w ≺ D b Scheme 2 evaluates whether Ψ holds in a particular instance. If so, the relevant preference relation can be concluded and subsequently used as premise p in the instantiation of Scheme 1. In Table <ref type="table" target="#tab_0">1</ref>, the focus case f has a 'worse' level of sophistication in the weapon used (d weapon ) and a 'better' number of casualties (d casualties ), with respect to the outcome true. Instantiating Schemes 2 (PREF( f , D b , D w , D)) and 1 (COMP( f , c, D b , D w )) lets us construct the following argument:</p><formula xml:id="formula_10">PREF( f , D b , D w , D): ψ: d casualties ( f ) ≥ 4 =================== Conc: {d weapon } ≺ {d casualties } COMP( f , c, D b , D w ): w: D w = {d weapon } b: D b = {d casualties } p: {d weapon } ≺ {d casualties } ----------- Conc: compensates(c, {d casualties }, {d weapon })</formula><p>Furthermore, there can be more than one threshold as part of Ψ. We could have a preference relation that states that d casualties in combination with d fear (a numerical expression of public fear) is more relevant than (i.e. is preferred over) d weapon if both dimensions exceed their respective thresholds. We would then instantiate Scheme 2 as:</p><formula xml:id="formula_11">PREF( f , D b , D w , D): ψ: d casualties ( f ) ≥ 4 ∧ d fear ( f ) ≥ 10 =================== Conc: {d weapon } ≺ {d casualties , d fear }</formula><p>A condition under which certain dimensions must surpass certain thresholds presumes that each dimension independently meets a sub-condition. Alternatively, the condition for a preference relation might hinge on a combination of dimensions, in the form of some evaluation function surpassing a single threshold. For an example of such a rule, we can imagine a preference relation {d weapon } ≺ {d casualties , d wounded } and its condition that d casualties ( f ) + d wounded ( f ) ≥ 10. Here, the evaluation function is the sum of the numbers of fatal casualties and non-fatally wounded victims, which together compensate for a high level of weapon sophistication. 
In other words, the distinction between fatally and non-fatally harmed victims is of no consequence in this domain rule; what matters is the number of victims.</p><formula xml:id="formula_12">PREF( f , D b , D w , D): ψ: d casualties ( f ) + d wounded ( f ) ≥ 10 =================== Conc: {d weapon } ≺ {d casualties , d wounded }</formula><p>Alternatively, one can imagine rules in which the difference between the number of perpetrators and victims plays a role in distinguishing terrorist incidents from, say, assassinations. Or perhaps the ratio between wounded and deceased victims modulates the impact of weapon sophistication in some hypothetical rule. A weighted mean of several dimensions may have to surpass a certain value. Domain experts may have dozens of such rules for a domain that is particularly well understood and rich in descriptive dimensions, possibly assisted by some statistical analysis or rule discovery approach. More complex functions are also possible. One could argue that at least some of such evaluations ought to be captured in the feature engineering phase before training a model, rather than in post hoc justifications; we remind the reader that our approach is model- and data-agnostic, so we should generally support relevant evaluations.</p><p>Consider the following scenario: an attack involving a sophisticated bomb (d weapon ) that does not result in a high number of casualties (d casualties ). Under normal circumstances, the sophistication of the weapon might suggest a targeted assassination rather than a terrorist attack. However, if the event generates an exceptionally high level of public fear (d fear ), this could compensate for the lower casualty count, as the primary goal of terrorism is often to instil fear and disrupt societal normalcy. In this case, the evaluation function might give significant weight to d fear , such that a weighted sum of d fear and d casualties is compared to a threshold value.</p><formula xml:id="formula_13">PREF( f , D b , D w , D): ψ: 0.3 • d casualties ( f ) + 0.7 • d fear ( f ) ≥ 10 =================== Conc: {d weapon } ≺ {d casualties , d fear }</formula><p>The aforementioned thresholds form conditions on the very dimensions within the preference relation, D w and D b . However, there may be situations where contextual factors influence the applicability of the preference relation. For example, the additional dimension d measures (number of security measures in place) might in certain cases modulate the impact of the number of casualties in compensating for weapon sophistication. In this condition, the threshold value pertains to a dimension that is itself not involved in the preference relation. The conditions for a preference relation can also involve spatiotemporal factors. For instance, the same set of dimensions might have different thresholds or weights depending on whether the event occurs in a region currently experiencing political instability. This adaptability is crucial in counter-terrorism, where the nature of threats and societal impact can change rapidly. When trying to attribute historical incidents to terrorist organisations, one would have to take into account that an organisation was founded at a certain moment in time, or was only active within a particular part of the world. For example, IS (ISIS/ISIL) did not rise to prominence until 2014 in areas of Syria and Iraq. 
Any domain rule that is concerned with characteristics of IS incidents or public claims by this organisation is likely specific to the appropriate time and place. The same type of concern applies to the Taliban in Afghanistan before the American invasion in 2001 or after the American departure in 2021, or the Troubles in Ireland and Great Britain between 1966 and 1998. For a (simplified) example, consider the following, where the fact that an incident takes place during the Troubles in Belfast means that {d wounded } compensates for {d casualties , d weapon }:</p><formula xml:id="formula_14">PREF( f , D b , D w , D): ψ: d year ( f ) = 1969 ∧ d location ( f ) = Belfast =================== Conc: {d casualties , d weapon } ≺ {d wounded }</formula><p>Alternatively, this particular insight from the domain expert could be used to construct an empty compensation move on the basis of domain knowledge. Note that if D b = ∅, Scheme 1 describes the special case of empty compensation. We may want to overlook poor values for {d casualties , d claims } out of hand. Normally in AF-CBA, we allow for compensation moves with D b = ∅ in order to guarantee a winning strategy (Section 2), as somewhat of an unsatisfactory but necessary default substituting for a more informative justification. With an argument such as the following, we can actually provide expert-informed justifications why values in D w are not relevant to the outcome of the focus case despite there not being any compensating dimensions, making an empty compensation move more valuable than it would otherwise be:</p><formula xml:id="formula_16">PREF( f , D b , D w , D): ψ: d year ( f ) = 1969 ∧ d location ( f ) = Belfast =================== Conc: {d casualties , d weapon } ≺ ∅ COMP( f , c, D b , D w ): w: D w = {d casualties , d weapon } b: D b = ∅ p: {d casualties , d weapon } ≺ ∅ ----------- Conc: compensates(c, ∅, {d casualties , d weapon })</formula><p>Transitivity (where {d 1 } ≺ {d 2 } and {d 2 } ≺ {d 3 } implies {d 1 } ≺ {d 3 }) and antisymmetry (where {d 1 } ≺ {d 2 } implies {d 2 } ⊀ {d 1 }) are not generally assumed and depend on the domain. Symmetric preference relations, such as {d casualties } ≺ {d weapon } and {d weapon } ≺ {d casualties }, can coexist for the same focus case, indicating that a better value in one dimension can compensate for a worse value in another. For instance, a high number of casualties (d casualties ) may compensate for high weapon sophistication (d weapon ) and vice versa. This symmetry may imply that dimensions are equivalent in their influence on an outcome, acting as proxies for a more abstract notion. For example, d alert (security alerted) and d measures (number of security measures) could be subcategories of a dimension d security (overall security preparedness). This implies a certain kind of equivalence. Thus our approach implicitly allows for the drawing of abstract parallels similar to the factor hierarchies in CATO <ref type="bibr" target="#b0">[1]</ref>.</p></div>
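<div xmlns="http://www.tei-c.org/ns/1.0"><p>A hedged sketch of how the various forms of Ψ discussed above might be represented: each preference relation carries a predicate over the fact situation, so single thresholds, conjunctions of thresholds, evaluation functions, weighted sums and spatiotemporal context can all be expressed uniformly. All dimension names and threshold values below are illustrative assumptions, not data from the paper.</p><code lang="python">
PREF_RULES = [
    # (D_w, D_b, ψ): the relation D_w ≺ D_b holds when ψ(F) is true.
    ({"weapon"}, {"casualties"},
     lambda F: F["casualties"] >= 4),                           # single threshold
    ({"weapon"}, {"casualties", "fear"},
     lambda F: F["casualties"] >= 4 and F["fear"] >= 10),       # two thresholds
    ({"weapon"}, {"casualties", "wounded"},
     lambda F: F["casualties"] + F["wounded"] >= 10),           # evaluation function
    ({"weapon"}, {"casualties", "fear"},
     lambda F: 0.3 * F["casualties"] + 0.7 * F["fear"] >= 10),  # weighted sum
    ({"casualties", "weapon"}, set(),                           # empty compensation
     lambda F: F["year"] == 1969 and F["location"] == "Belfast"),
]

def applicable_preferences(F):
    """All relations D_w ≺ D_b whose conditions ψ are fulfilled in fact situation F."""
    return [(dw, db) for (dw, db, psi) in PREF_RULES if psi(F)]

F = {"casualties": 6, "wounded": 5, "fear": 12, "weapon": 1,
     "year": 2005, "location": "London"}
print(applicable_preferences(F))  # the first four rules fire; the Belfast rule does not
</code></div>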
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Arguing About Preference Relations</head><p>As mentioned, we do not assume the body of domain knowledge to be uncontested. While Schemes 1 and 2 provide an approach to assess whether the conditions of a compensation move have been met, exceptions may be possible and premises can be contested. Exactly what kinds of attacks are possible may depend on the domain, but in general we can state that attacks between arguments can be modelled in a structured argumentation framework like ASPIC+ <ref type="bibr" target="#b12">[13]</ref> or ABA <ref type="bibr" target="#b1">[2]</ref>.</p><p>For example, a domain expert may consider there to be another caveat for the preference relation {d weapon } ≺ {d casualties } besides d casualties ( f ) ≥ 4, namely that it does not hold if the weapon sophistication is extremely high. The abstract placeholder Ψ could then simply refer to two separate thresholds for this instance of PREF( f , D b , D w , D):</p><formula xml:id="formula_15">PREF( f , D b , D w , D): ψ: d casualties ( f ) ≥ 4 ∧ d weapon ( f ) &lt; 'Extremely high' =================== Conc: {d weapon } ≺ {d casualties }</formula><p>However, one might argue that it is more informative if exceptions are modelled explicitly as separate arguments. The preference relation {d weapon } ≺ {d casualties } would then be attacked by an exception argument detailing how it can be concluded from the fact that d weapon ( f ) ≮ 'Extremely high' that {d weapon } ≺ {d casualties } does not hold for f . This exception argument would have to be successfully attacked in order for {d weapon } ≺ {d casualties } to be usable in Scheme 1, perhaps by an exception to the exception. For instance, the exception d weapon ( f ) ≮ 'Extremely high' might be considered irrelevant if the number of casualties is sufficiently high, e.g. d casualties ( f ) ≥ 30. This second exception would attack the first exception, thereby defending the preference relation from Scheme 2 and thus reinstating the compensation move from Scheme 1. And so on for additional exceptions. This notion is illustrated in the argument graph of Figure <ref type="figure">2</ref>.</p><p>We allow for chains of arguments about preference relations. Whether long, complex arguments are always desirable is up to the domain experts themselves, based on what they deem appropriate for the intended user. Our approach allows them to decide how elaborately to justify the domain knowledge used to justify ML predictions as they see fit. The goal is always to justify compensation moves in the eyes of the user, who may or may not be a domain expert, in order to provide an appropriate level of justification for ML predictions. Other types of argument could be valuable too, such as expert opinions (after Walton et al. <ref type="bibr" target="#b19">[20]</ref>) on the basis of additional domain knowledge, based on professional experience, domain literature, or statistical/data analysis (e.g. rule discovery). Of course, there are domain-specific reasons why a preference relation is included in the first place, which suggests that conflicting opinions are possible. 
Similarly to how we could allow chains of exceptions to argue about preference relations, experts may decide that it is equally informative to explicitly model dissent between experts and perhaps to show how the latest analysis, the literature, or the most senior expert wins the debate.</p><p>For example, the preference relation {d weapon } ≺ {d casualties } could be attacked by an opinion stating that it does not hold, based on the experience of the expert. This could itself be attacked by an opinion on the basis of statistical analysis suggesting that even in scenarios where weapon sophistication was extremely high, the number of casualties had a more significant impact on outcomes. The argument from statistical analysis would then successfully defend the original preference relation. This example only describes the approach in general terms; more detailed decisions regarding possible arguments and attack types have to be made when it is implemented using a structured argumentation framework.</p></div>
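<div xmlns="http://www.tei-c.org/ns/1.0"><p>The exception structure of Figure 2 can be reproduced with the abstract machinery already sketched in Section 2, reusing grounded_extension. The encoding below treats the attack on the preference argument as also hitting the compensation move built on it, which is an assumption about how a structured framework such as ASPIC+ would propagate the attack to superarguments.</p><code lang="python">
args = {"PREF", "COMP", "EXC_weapon", "EXC_casualties"}
attacks = {
    ("EXC_weapon", "PREF"),            # exception: weapon sophistication extremely high
    ("EXC_weapon", "COMP"),            # the move built on PREF falls with it
    ("EXC_casualties", "EXC_weapon"),  # exception to the exception: at least 30 casualties
}
print(grounded_extension(args, attacks))
# -> {'PREF', 'COMP', 'EXC_casualties'}: the exception is defeated and the
#    compensation move is reinstated, as in Figure 2.
</code></div>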
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Conclusion</head><p>We have extended the XAI approach AF-CBA by adding a mechanism by which the justifications' compensation moves are determined using arguments based on domain knowledge provided by domain experts. This not only allows compensation moves to be informed by established domain rules, but also communicates reasons for those compensation moves in terms that are likely to be familiar to domain experts. We have implemented this mechanism as a secondary argument graph, which likewise can be shown to the user as a justification. This secondary argument graph provides an avenue for future extensions aimed at more in-depth justifications and possibly disputes about the domain knowledge itself.</p><p>Our extension of AF-CBA relies on argumentation schemes to capture defeasible reasoning patterns, providing a foundation for persuasive justifications. Further extending these patterns within a structured argumentation framework would enhance the sophistication of arguments, allowing arguments about premisses or about the implied entailment of preference relations. This could, for instance, take the form of arguments stemming from different sources of domain knowledge. The current paper is not on rule discovery or information extraction, but those techniques could be integrated in a larger framework in which any disagreements between sources of domain knowledge have to be resolved. Refining rules (e.g. thresholds) is another aspect that deserves attention in future work. Another possible future work direction is to take an experimental approach in the form of usability studies, which would allow us to evaluate various design choices for AF-CBA from a user's perspective.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head></head><label></label><figDesc>Figure 1: A schematic depiction of AF-CBA's workflow. The case base is constructed either instantly on the basis of the labelled data (dashed line) or stepwise on the basis of earlier predictions (dotted line, as in<ref type="bibr" target="#b14">[15]</ref>).</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 2:</head><label>2</label><figDesc>Figure 2: An illustration of how an exception to an exception can defend a preference relation and thus a compensation move. Shaded boxes are not in the grounded extension; attacks are indicated by arrows.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Precedent case c and focus case f .</figDesc><table><row><cell>Case</cell><cell>d casualties +</cell><cell>d weapon −</cell><cell>...</cell><cell>Outcome</cell></row><row><cell>c</cell><cell>5</cell><cell>low</cell><cell>...</cell><cell>True</cell></row><row><cell>f</cell><cell>10</cell><cell>high</cell><cell>...</cell><cell>True</cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Using background knowledge in case-based legal reasoning: A computational model and an intelligent learning environment</title>
		<author>
			<persName><forename type="first">Vincent</forename><surname>Aleven</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">150</biblScope>
			<biblScope unit="issue">1-2</biblScope>
			<biblScope unit="page" from="183" to="237" />
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">An abstract, argumentation-theoretic approach to default reasoning</title>
		<author>
			<persName><forename type="first">Andrei</forename><surname>Bondarenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">M</forename><surname>Dung</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">A</forename><surname>Kowalski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Toni</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">93</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="63" to="101" />
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Abstract Argumentation for Case-Based Reasoning</title>
		<author>
			<persName><forename type="first">Kristijonas</forename><surname>Čyras</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ken</forename><surname>Satoh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Francesca</forename><surname>Toni</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Fifteenth International Conference on Principles of Knowledge Representation</title>
				<meeting>the Fifteenth International Conference on Principles of Knowledge Representation</meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Explanation for Case-Based Reasoning via Abstract Argumentation</title>
		<author>
			<persName><forename type="first">Kristijonas</forename><surname>Čyras</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ken</forename><surname>Satoh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Francesca</forename><surname>Toni</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of COMMA 2016</title>
				<meeting>COMMA 2016</meeting>
		<imprint>
			<publisher>IOS Press</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="243" to="254" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Explanations by arbitrated argumentative dispute</title>
		<author>
			<persName><forename type="first">Kristijonas</forename><surname>Čyras</surname></persName>
		</author>
		<author>
			<persName><forename type="first">David</forename><surname>Birch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yike</forename><surname>Guo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Francesca</forename><surname>Toni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Rajvinder</forename><surname>Dulay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sally</forename><surname>Turvey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Daniel</forename><surname>Greenberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Tharindi</forename><surname>Hapuarachchi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Expert Systems with Applications</title>
		<imprint>
			<biblScope unit="volume">127</biblScope>
			<biblScope unit="page" from="141" to="156" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games</title>
		<author>
			<persName><forename type="first">Phan</forename><forename type="middle">Minh</forename><surname>Dung</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">77</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="321" to="357" />
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Improving accuracy by combining rule-based and case-based reasoning</title>
		<author>
			<persName><forename type="first">Andrew</forename><forename type="middle">R</forename><surname>Golding</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Paul</forename><forename type="middle">S</forename><surname>Rosenbloom</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">87</biblScope>
			<biblScope unit="issue">1-2</biblScope>
			<biblScope unit="page" from="215" to="254" />
			<date type="published" when="1996">1996</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">A survey of methods for explaining black box models</title>
		<author>
			<persName><forename type="first">Riccardo</forename><surname>Guidotti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Anna</forename><surname>Monreale</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Salvatore</forename><surname>Ruggieri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Franco</forename><surname>Turini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Fosca</forename><surname>Giannotti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dino</forename><surname>Pedreschi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Computing Surveys</title>
		<imprint>
			<biblScope unit="volume">51</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page">42</biblScope>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Reasoning with dimensions and magnitudes</title>
		<author>
			<persName><forename type="first">John</forename><surname>Horty</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence and Law</title>
		<imprint>
			<biblScope unit="volume">27</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="309" to="345" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Rules and reasons in the theory of precedent</title>
		<author>
			<persName><forename type="first">John</forename><forename type="middle">F</forename><surname>Horty</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Legal Theory</title>
		<imprint>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="1" to="33" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">The mythos of model interpretability</title>
		<author>
			<persName><forename type="first">Zachary</forename><surname>Lipton</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Communications of the ACM</title>
		<imprint>
			<biblScope unit="volume">61</biblScope>
			<biblScope unit="page" from="96" to="100" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Explanation in artificial intelligence: Insights from the social sciences</title>
		<author>
			<persName><forename type="first">Tim</forename><surname>Miller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">267</biblScope>
			<biblScope unit="page" from="1" to="38" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">The ASPIC+ framework for structured argumentation: A tutorial</title>
		<author>
			<persName><forename type="first">Sanjay</forename><surname>Modgil</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Henry</forename><surname>Prakken</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Argument &amp; Computation</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="31" to="62" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Towards a Story Scheme Ontology of Terrorist MOs</title>
		<author>
			<persName><forename type="first">Joeri</forename><forename type="middle">G T</forename><surname>Peters</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Floris</forename><forename type="middle">J</forename><surname>Bex</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Conference on Intelligence and Security Informatics (ISI)</title>
				<imprint>
			<date type="published" when="2020">2020. 2020</date>
			<biblScope unit="page" from="1" to="6" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Model-and data-agnostic justifications with A Fortiori Case-Based Argumentation</title>
		<author>
			<persName><forename type="first">Joeri</forename><forename type="middle">G T</forename><surname>Peters</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Floris</forename><forename type="middle">J</forename><surname>Bex</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Henry</forename><surname>Prakken</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 19th International Conference on Artificial Intelligence and Law</title>
				<meeting>the 19th International Conference on Artificial Intelligence and Law<address><addrLine>Braga, Portugal</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="207" to="216" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Dialectical proof theory for defeasible argumentation with defeasible priorities (preliminary report)</title>
		<author>
			<persName><forename type="first">Henry</forename><surname>Prakken</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Formal Models of Agents</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<editor>
			<persName><forename type="first">John-Jules</forename><forename type="middle">Ch</forename><surname>Meyer</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Pierre-Yves</forename><surname>Schobbens</surname></persName>
		</editor>
		<meeting><address><addrLine>Berlin, Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="1999">1999</date>
			<biblScope unit="page" from="202" to="215" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">A top-level model of case-based argumentation for explanation: Formalisation and experiments</title>
		<author>
			<persName><forename type="first">Henry</forename><surname>Prakken</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Rosa</forename><surname>Ratsma</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Argument &amp; Computation</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="159" to="194" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">CABARET: Rule interpretation in a hybrid architecture</title>
		<author>
			<persName><forename type="first">Edwina</forename><forename type="middle">L</forename><surname>Rissland</surname></persName>
		</author>
		<author>
			<persName><forename type="first">David</forename><forename type="middle">B</forename><surname>Skalak</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Man-Machine Studies</title>
		<imprint>
			<biblScope unit="volume">34</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="839" to="887" />
			<date type="published" when="1991">1991</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Hierarchical Precedential Constraint</title>
		<author>
			<persName><forename type="first">Wijnand</forename><surname>Van Woerkom</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Davide</forename><surname>Grossi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Henry</forename><surname>Prakken</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Bart</forename><surname>Verheij</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 19th International Conference on Artificial Intelligence and Law</title>
				<meeting>the 19th International Conference on Artificial Intelligence and Law<address><addrLine>Braga, Portugal</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="333" to="342" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<title level="m" type="main">Argumentation Schemes</title>
		<author>
			<persName><forename type="first">Douglas</forename><surname>Walton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Christopher</forename><surname>Reed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Fabrizio</forename><surname>Macagno</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2008">2008</date>
			<publisher>Cambridge University Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
