<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Stable versus Layered Logic Program Semantics</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Luís</forename><forename type="middle">Moniz</forename><surname>Pereira</surname></persName>
							<affiliation key="aff0">
								<orgName type="laboratory">Centro de Inteligência Artificial (CENTRIA)</orgName>
								<orgName type="institution">Universidade Nova de Lisboa</orgName>
								<address>
									<postCode>2829-516</postCode>
									<settlement>Caparica</settlement>
									<country key="PT">Portugal</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Alexandre</forename><forename type="middle">Miguel</forename><surname>Pinto</surname></persName>
							<affiliation key="aff0">
								<orgName type="laboratory">Centro de Inteligência Artificial (CENTRIA)</orgName>
								<orgName type="institution">Universidade Nova de Lisboa</orgName>
								<address>
									<postCode>2829-516</postCode>
									<settlement>Caparica</settlement>
									<country key="PT">Portugal</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Stable versus Layered Logic Program Semantics</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">7ECBC7A28F91B2B59DD672F9FFED9A5C</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T12:23+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Stable Models</term>
					<term>Layer Supported Models</term>
					<term>Relevance</term>
					<term>Layering</term>
					<term>Abduction</term>
					<term>Side-effects</term>
					<term>Program Transformation</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>For practical applications, the use of top-down query-driven proof-procedures is convenient for efficient use and computation of answers using Logic Programs as knowledge bases. Additionally, abductive reasoning on demand is intrinsically a top-down search method. A 2-valued semantics for Normal Logic Programs (NLPs) allowing for top-down query-solving is thus highly desirable, but the Stable Models semantics (SM) does not allow it, for lack of the relevance property. To overcome this limitation we introduced in [24], and review here, a new 2-valued semantics for NLPs -the Layer Supported Models semantics -which conservatively extends the SM semantics, enjoys relevance and cumulativity, guarantees model existence, and respects the Well-Founded Model.</p><p>In this paper we also exhibit a transformation, TR, from one propositional NLP into another, whose Layer Supported Models are precisely the Stable Models of the transform, which can then be computed by extant Stable Model implementations, providing a tool for the immediate generalized use of the new semantics and its applications. In the context of abduction in Logic Programs, when finding an abductive solution for a query, one may also want to check whether some other literals become true (or false) as a consequence, strictly within the abductive solution found, that is, without performing additional abductions, and without having to produce a complete model to do so. That is, such consequence literals may consume, but not produce, the abduced literals of the solution. To accomplish this mechanism, we present the concept of Inspection Point in Abductive Logic Programs, and show, by means of examples, how one can employ it to investigate side-effects of interest (the inspection points) in order to help choose among abductive solutions.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction and Motivation</head><p>The semantics of Stable Models (SM) <ref type="bibr" target="#b13">[14]</ref> is a cornerstone for the definition of some of the most important results in logic programming of the past two decades, providing an increase in logic programming declarativity and a new paradigm for program evaluation. When we need to know the 2-valued truth value of all the literals in a logic program for the problem we are modeling and solving, the only solution is to produce complete models. Depending on the intended semantics, in such cases, tools like SModels <ref type="bibr" target="#b18">[19]</ref> or DLV <ref type="bibr" target="#b6">[7]</ref> may be adequate because they can indeed compute whole models according to the SM semantics. However, the lack of some important properties of language semantics, like relevance, cumulativity and guarantee of model existence (enjoyed by, say, Well-Founded Semantics <ref type="bibr" target="#b12">[13]</ref> (WFS)), somewhat reduces its applicability in practice, namely regarding abduction, creating difficulties in required pre- and post-processing. But WFS in turn does not produce 2-valued models, though these are often desired, nor does it guarantee 2-valued model existence. SM semantics does not allow for top-down query-solving precisely because it does not enjoy the relevance property -and moreover, it does not guarantee the existence of a model. Furthermore, frequently there is no need to compute whole models, as its implementations do, but just the partial models that sustain the answer to a query. Relevance would ensure these could be extended to whole models. Moreover, with SM semantics, in an abductive reasoning situation, computing the whole model entails pronouncing on every abducible, whether or not it is relevant to the problem at hand, and subsequently filtering out the irrelevant ones. 
When we just want to find an existential answer to a query, we either compute a whole model and check if it entails the query (the way SM semantics does), or, if the underlying semantics we use enjoys the relevance property -which SM semantics does not -we can simply use a top-down proof-procedure (à la Prolog), and abduce by need. In this second case, the user does not pay the price of computing a whole model, nor the price of abducing all possible abducibles or their negations, and then filtering the irrelevant ones, because the only abducibles considered will be those needed to answer the query.</p><p>To overcome these limitations we developed in <ref type="bibr" target="#b23">[24]</ref> (reviewed here) a new 2-valued semantics for NLPs -the Layer Supported Models (LSM) semantics -which conservatively extends the SMs, enjoys relevance and cumulativity, guarantees model existence, and respects the Well-Founded Model (WFM) <ref type="bibr" target="#b12">[13]</ref>. We say a semantics is a conservative extension of another one if it gives the same semantics to programs as the latter, whenever the latter is defined, and also provides semantics to programs for which the latter does not.</p><p>The LSM semantics builds upon the foundational step of the Layered Models semantics presented in <ref type="bibr" target="#b26">[27]</ref> by imposing a layered support requirement resulting in WFM consonance. In <ref type="bibr" target="#b26">[27]</ref>, neither layered support nor WFM respect are required. Intuitively, a program is conceptually partitioned into "layers", which are subsets of its rules, possibly involved in mutual loops. An atom is considered true if there is some rule for it at some layer where all the literals in its body which are supported by rules of lower layers are also true. Otherwise, that conclusion is false. 
That is, a rule in a layer must, to be usable, have the support of rules in the layers below.</p><p>Although Stable Models is a great semantics with a simple definition, it nevertheless fails to provide semantics for all normal programs. LSM coincides with Stable Models whenever the latter is defined (no odd loops or infinite chains) <ref type="bibr" target="#b11">[12]</ref>, and thus can afford the same simple definition in that class of programs. The class of all programs necessarily requires a more complex definition, essentially resting on a required generalization of stratification -the layering -and then simply using minimal models on successive layers.</p><p>The core reason SM semantics fails to guarantee model existence for every NLP is that the stability condition it imposes on models cannot be complied with by Odd Loops Over Negation (OLONs). An OLON is a loop with an odd number of default negations in its circular call dependency path. In fact, the SM semantics community uses that inability as a means to impose Integrity Constraints (ICs), such as a ← not a, X, where the OLON over a prevents X from being true in any model.</p><p>The LSM semantics provides a semantics to all NLPs. In the a ← not a, X example above, whenever X is true, the only LSM is {a, X}. For LSM semantics OLONs are not ICs. ICs are implemented with rules for the reserved atom falsum, of the form falsum ← X, where X is the body of the IC we wish to prevent being true. This does not prevent falsum from being in some models. From a theoretical standpoint this means that the LSM semantics does not include an IC compliance enforcement mechanism. 
ICs must be dealt with in one of two possible ways: either by 1) a syntactic post-processing step, as a "test phase" after the model generation "generate phase"; or by 2) embedding the IC compliance in a query-driven (partial) model computation, where such a method can be a top-down query-solving one à la Prolog, since the LSM semantics enjoys relevance. In this second case, the user must conjoin query goals with not falsum. If inconsistency examination is desired, as in case 1) above, models including falsum can be discarded a posteriori. This is how LSM semantics separates OLON semantics from IC compliance.</p><p>After notation and background definitions, we summarize the formal definition of LSM semantics and its properties. Thereafter, in a section that may be skipped without loss of continuity, we present a program transformation, TR, from one propositional (or ground) NLP into another, whose Layer Supported Models are precisely the Stable Models of the transform, which can be computed by extant Stable Model implementations, which also require grounding of programs. TR's space and time complexities are then examined. TR can be used to answer queries but is also of theoretical interest, for it may be used, say, to prove properties of programs. Moreover, TR can be employed in combination with the top-down query procedure of XSB-Prolog, and be applied just to the residual program corresponding to a query (in compliance with the relevance property of Layer Supported Models). The XSB-XASP interface then allows the program transform to be sent to Smodels for 2-valued evaluation. 
Thus, for existential query answering there is no need to compute total models, but just the partial models that sustain the answer to the query, or one might simply know a model exists without producing it; relevance ensures these can be extended to total models.</p><p>In the implementation section we indicate how TR can be employed, in combination with the top-down query procedure of XSB-Prolog, it being sufficient to apply it solely to the residual program corresponding to a query (in compliance with the relevance property of Layer Supported Models). The XSB-XASP interface allows the program transform to be sent to Smodels for 2-valued evaluation. Implementation details and attending algorithms can be found in <ref type="bibr" target="#b24">[25]</ref>.</p><p>Next, issues of reasoning with logic programs are addressed in section 7 where, in particular, we look at abductive reasoning and the nature of backward and forward chaining in their relationship to the issues of query answering and of side-effects examination within an abductive framework. In section 8 we then introduce inspection points to address such issues, illustrate their need and use with examples, and provide for them a declarative semantics. The touchstone of enabling inspection points can be construed as meta-abduction, by (meta-)abducing an "abduction" to check (i.e. to passively verify) that a certain concrete abduction is indeed adopted in a purported abductive solution. In section 8.2 we outline their implementation, whose workings are fully given and exemplified in <ref type="bibr" target="#b22">[23]</ref>. 
We have implemented inspection points on top of already existing 3- and 2-valued abduction solving systems -ABDUAL and XSB-XASP -in a way that can be adopted by other systems too.</p><p>Our semantics can easily be extended to deal with Disjunctive Logic Programs (DisjLPs) and Extended Logic Programs (ELPs, including explicit negation), by means of the shifting rule and other well-known program transformations <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b9">10,</ref><ref type="bibr" target="#b14">15]</ref>, thus providing a practical, comprehensive and advantageous alternative to SMs-based Answer-Set Programming. Conclusions and future work close the paper.</p><p>Definition 1. Logic Rule. A Logic Rule r has the form H ← B 1 , . . . , B n , not C 1 , . . . , not C m , where H, the B i and the C j are atoms. We call H the head of the rule -also denoted by head(r). And body(r) denotes the set {B 1 , . . . , B n , not C 1 , . . . , not C m } of all the literals in the body of r. Throughout this paper we will use 'not ' to denote default negation. When the body of the rule is empty, we say the head of the rule is a fact and we write the rule just as H. A Logic Program (LP for short) P is a (possibly infinite) set of ground Logic Rules of the form in Definition 1. In this paper we focus mainly on NLPs, those whose rule heads are positive literals, i.e., simple atoms, and in which default negation appears only in the bodies of rules. Hence, when we write simply "program" or "logic program" we mean an NLP.</p></div>
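To make the notation above concrete, here is a minimal sketch (our own illustration, not from the paper) of how ground rules H ← B 1, ..., B n, not C 1, ..., not C m and the head(r)/body(r) notation can be encoded; the `Rule` type and function names are illustrative assumptions.

```python
from typing import NamedTuple, FrozenSet

class Rule(NamedTuple):
    """A ground logic rule H <- B1,...,Bn, not C1,...,not Cm."""
    head: str            # the atom H, i.e. head(r)
    pos: FrozenSet[str]  # the positive body atoms B1..Bn
    neg: FrozenSet[str]  # the default-negated atoms C1..Cm

def head(r: Rule) -> str:
    """head(r): the head atom of the rule."""
    return r.head

def body(r: Rule) -> set:
    """body(r): all body literals, with 'not ' marking default negation."""
    return set(r.pos) | {f"not {c}" for c in r.neg}

def is_fact(r: Rule) -> bool:
    """A rule with an empty body is a fact, written just as H."""
    return not r.pos and not r.neg

# Example: the rule  c <- not d, not y, not a  and the fact  b
r_c = Rule("c", frozenset(), frozenset({"d", "y", "a"}))
r_b = Rule("b", frozenset(), frozenset())
```

This triple encoding (head, positive body, negated body) is also what the later sketches in this paper's examples assume.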
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Layering of Logic Programs</head><p>The well-known notion of stratification of LPs has been studied and used for decades now. But the common notion of stratification does not cover all LPs, i.e., there are some LPs which are non-stratified. The usual syntactic notions of dependency are mainly focused on atoms. They are based on a dependency graph induced by the rules of the program. Useful as these notions might be, for our purposes they are insufficient, as they leave out important structural information about the call-graph of P . To cover that information we also define below the notion of a rule's dependency. Indeed, layering puts rules, not atoms, in layers. An atom B directly depends on atom A in P iff there is at least one rule with head B and with A or not A in the body. An atom's dependency is just the transitive closure of the atom's direct dependency. A rule directly depends on atom B iff either B or not B is in its body. A rule's dependency is just the transitive closure of the rule's direct dependency. The Relevant part of P for some atom A, represented by Rel P (A), is the subset of rules of P with head A plus the set of rules of P whose heads the atom A depends on, cf. <ref type="bibr" target="#b8">[9]</ref>. In the same way as the notion of relevant part for an atom A <ref type="bibr" target="#b8">[9]</ref>, we define the notion of relevant part for a rule r. The Relevant part of P for rule r, represented by Rel P (r), is the set containing the rule r itself plus the set of rules relevant for each atom r depends on. Definition 2. Parts of the body of a rule. Let r = H ← B 1 , . . . , B n , not C 1 , . . . , not C m be a rule of P . Then, r l = {B i , not C j : B i depends on H ∧ C j depends on H}. Also, r B = {B i : B i ∈ (body(r) \ r l )}, and r C = {C j : not C j ∈ (body(r) \ r l )}. Definition 3. HighLayer function. 
The HighLayer function is defined over a set of literals: its result is the highest layer number of all the rules for all the literals in the set, or zero if the set is empty. Definition 4. Layering of a Logic Program P . Given a logic program P , a layering function L/1 is just any function defined over the rules of P′, where P′ is obtained from P by adding a rule of the form H ← falsum for every atom H with no rules in P , assigning each rule r ∈ P′ a positive integer, such that:</p><p>-L(r) = 0 if falsum ∈ body(r), otherwise -L(r) ≥ max(HighLayer(r l ), HighLayer(r B ), (HighLayer(r C ) + 1))</p><p>A layering of program P is a partition P 1 , . . . , P n of P such that P i contains all rules r having L(r) = i, i.e., those which depend only on the rules in the same layer or layers below it.</p><p>This notion of layering does not correspond to any level-mapping <ref type="bibr" target="#b15">[16]</ref>, since the latter is defined over atoms whereas the former is defined over rules. Also, due to the definition of dependency, layering does not coincide with stratification <ref type="bibr" target="#b2">[3]</ref>, nor does it coincide with the layer definition of <ref type="bibr" target="#b27">[28]</ref>. However, when the program at hand is stratified (according to <ref type="bibr" target="#b2">[3]</ref>) it can easily be seen that its layering coincides with its stratification. In this sense, layering, which is always defined, is a generalization of stratification.</p><p>Amongst the several possible layerings of a program P we can always find the least one, i.e., the layering with the least number of layers and the smallest layer numbers. In the remainder of the paper, when referring to the program's layering we mean such a least layering (easily seen to be unique). Example 1. Layering example. 
Consider the following program P , depicted along with the layer numbers for its least layering:</p><formula xml:id="formula_0">c ← not d, not y, not a Layer 3 d ← not c y ← not x b ← not x Layer 2 x ← not x b Layer 1 a ← falsum Layer 0</formula><p>Atom a has no rules, so its newly created unique rule a ← falsum is placed in Layer 0. Atom b has a fact rule r b1 : its body is empty, and therefore all of HighLayer(r l b1 ), HighLayer(r B b1 ), and HighLayer(r C b1 ) are 0 (zero). Hence, L(r b1 ) = max(0, 0, (0 + 1)) = max(0, 0, 1) = 1, where r b1 is the fact rule for b, placed in Layer 1.</p><p>The unique rule for x, r x , is also placed in Layer 1 in the least layering of P because HighLayer(r l x ) = L(r x ), HighLayer(r B x ) = 0, and HighLayer(r C x ) = 0. So, L(r x ) = max(L(r x ), 0, (0 + 1)) = max(L(r x ), 0, 1) = 1, in the least layering.</p><p>The unique rule for c, r c , is placed in Layer 3 because HighLayer(r C c ) = 2, HighLayer(r B c ) = 0, and HighLayer(r l c ) = HighLayer(r d ) = 3. By the same token, r d is placed in the same Layer 3. Both r b2 and r y are placed in Layer 2.</p><p>This program has two LSMs: {b, c, x}, and {b, d, x}.</p><p>The complexity of computing the layering of a given ground NLP is conjectured to be of the same order as that of Tarjan's Strongly Connected Components detection algorithm, which is known to be linear in the number of nodes and edges of the dependency graph: O(|N | + |E|).</p></div>
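As a cross-check of Example 1, the least layering of Definition 4 can be computed by iterating the constraint on L(r) to a least fixed point. The following Python sketch is our own illustration (with hypothetical names, not the paper's implementation); it encodes each rule as a (head, positive-body, negated-body) triple.

```python
def least_layering(rules):
    """Least layering of a ground NLP per Def. 4 (naive fixpoint sketch).

    `rules`: list of (head, pos, neg) triples with pos/neg sets of atoms.
    Returns the extended rule list (with H <- falsum rules added for
    ruleless atoms) and a dict mapping each rule index to its layer L(r).
    """
    FALSUM = "falsum"
    heads = {h for h, _, _ in rules}
    atoms = heads | {a for _, p, n in rules for a in p | n}
    rules = list(rules) + [(a, {FALSUM}, set()) for a in sorted(atoms - heads)]

    # deps[a]: atoms a depends on -- the transitive closure of the
    # direct (positive or negative) dependency relation.
    deps = {}
    for h, p, n in rules:
        deps.setdefault(h, set()).update(p, n)
    changed = True
    while changed:
        changed = False
        for a in deps:
            new = set().union(*(deps.get(b, set()) for b in deps[a]))
            if not new <= deps[a]:
                deps[a] |= new
                changed = True

    L = {i: (0 if FALSUM in p else 1) for i, (_, p, _) in enumerate(rules)}

    def high(atom_set):
        # HighLayer (Def. 3): highest layer of any rule for these atoms.
        return max([L[i] for i, (h, _, _) in enumerate(rules) if h in atom_set] or [0])

    changed = True
    while changed:  # iterate L(r) = max(...) up to its least fixed point
        changed = False
        for i, (h, p, n) in enumerate(rules):
            if FALSUM in p:
                continue  # L(r) = 0 for the added falsum rules
            loop = {a for a in p | n if h in deps.get(a, set())}  # atoms of r_l
            v = max(high(loop), high(p - loop), high(n - loop) + 1)
            if v != L[i]:
                L[i] = v
                changed = True
    return rules, L
```

Run on the program of Example 1, this sketch places the fact b and the rule for x in Layer 1, the rules for y and the second rule for b in Layer 2, the mutually looping rules for c and d in Layer 3, and the added a ← falsum rule in Layer 0, matching the least layering depicted above.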
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Layer Supported Models Semantics</head><p>The Layer Supported Models semantics we now present is the result of the two new notions we introduced: the layering, formally introduced in section 3, which is a generalization of stratification; and the layered support, as a generalization of classical support. These two notions are the means to provide the desired 2-valued semantics which respects the WFM, as we will see below.</p><p>An interpretation M of P is classically supported iff every atom a of M is classically supported in M , i.e., all the literals in the body of some rule for a are true under M . Definition 5. Layer Supported interpretation. An interpretation M of P is layer supported iff every atom a of M is layer supported in M , and this holds iff a has a rule r where all literals in (body(r) \ r l ) are true in M . Otherwise, it follows that a is false.</p><p>Theorem 1. Classical Support implies Layered Support. Given an NLP P , an interpretation M , and an atom a such that a ∈ M , if a is classically supported in M then a is also layer supported in M .</p><p>Proof. Trivial from the definitions of classical support and layered support.</p><p>In programs without odd loops, layer supported models are classically supported too.</p><p>Example 2. Layered Unsupported Loop. Consider program P : c ← not a a ← c, not b b The only rule for b is in the first layer of P . Since it is a fact, it is always true in the WFM. Knowing this, the body of the rule for a is false because it is unsupported (both classically and layered). Since it is the only rule for a, its truth value is false in the WFM and, consequently, c is true in the WFM. This is the intuitively desirable semantics for P , which corresponds to its LSM semantics. The differences between the LM and the LSM semantics reside both in their layering notions and in the layered support requisite of Def. 5. 
In this example, if we used the LM semantics, which does not demand layered support, there would be two models: LM 1 = {b, a} and LM 2 = {b, c}. {b} is the only minimal model for the first layer, and there are two minimal model extensions for the second layer, as a is not necessarily false in the LM semantics because Def. 5 is not adopted. Lack of layered support lets the LM semantics fail to comply with the WFM. Note that adding a rule like b ← a would not affect the semantics of the program, according to LSM. This is so because such a rule would be placed in the same layer as the rules for a and c, while the fact rule b remains in a strictly lower layer.</p><p>Intuitively, the minimal layer supported models up to and including a given layer respect the minimal layer supported models of the layers preceding it. It follows trivially that layer supported models are minimal models, by definition. This ensures that the truth assignment to atoms in loops in higher layers is consistent with the truth assignments in loops in lower layers, and that the latter take precedence in their truth labeling. As a consequence of the layered support requirement, layer supported models of each layer comply with the WFM of the layers equal to or below it. The combination of the (merely syntactic) notion of layering and the (semantic) notion of layered support yields the LSM semantics. Definition 6. Layer Supported Model of P . Let P 1 , . . . , P n be the least layering of P . A layer supported interpretation M is a Layer Supported Model of P iff ∀ 1≤i≤n M | ≤i is a minimal layer supported model of ∪ 1≤j≤i P j where M | ≤i denotes the restriction of M to heads of rules in layers less than or equal to i:</p><formula xml:id="formula_1">M | ≤i ⊆ M ∩ {head(r) : L(r) ≤ i}</formula><p>The Layer Supported semantics of a program is just the intersection of all of its Layer Supported Models. 
Layered support is a more general notion than that of perfect models <ref type="bibr" target="#b29">[30]</ref>, with similar structure. Perfect model semantics talks about "least models" rather than "minimal models" because in strata there can be no loops, so there is always a unique least model, which is also the minimal one. Layers, as opposed to strata, may contain loops and thus there is not always a least model, so layers resort to minimal models, and these are guaranteed to exist (it is well known that every NLP has minimal models).</p><p>The simple transformation step adding a rule of the form a ← falsum for atoms a with no rules in P enforces compliance with the Closed World Assumption (CWA).</p><p>The principle used by LSMs to provide semantics to any NLP -whether with OLONs or not -is to accept all, and only, the minimal models that are layer supported, i.e., that respect the layers of the program. The principle used by SMs to provide semantics to only some NLPs is a "stability" (fixed-point) condition imposed on the SMs by the Gelfond-Lifschitz operator.</p><p>In <ref type="bibr" target="#b26">[27]</ref> the authors present an example (example 7) where three alternative joint vacation solutions are found by the LM semantics, for a vacation problem modeled by an OLON. The solutions found actually coincide with those found by the LSM semantics. We recall now a syntactically similar example (from <ref type="bibr" target="#b28">[29]</ref>) but with different intended semantics, and show how it can be attained in LSM by means of ICs. Example 4. Working example <ref type="bibr" target="#b28">[29]</ref>.</p><formula xml:id="formula_2">tired ← not sleep sleep ← not work work ← not tired</formula><p>As in example 7 of <ref type="bibr" target="#b26">[27]</ref>, the LSM semantics would provide three solutions for the above program: {work, tired}, {work, sleep}, {sleep, tired}. Although some (or even all!) 
of these solutions might actually be plausible in a real world case, they are not, in general, the intended semantics for this example. With the LSM semantics, the way to prune away some (or all) of these solutions is by means of ICs. For example, to eliminate the {work, sleep} solution we would just need to add the IC falsum ← work, sleep.</p><p>Again, an IC compliance mechanism, such as conjoining not falsum to the query, must be employed by the user in order to eliminate the undesired models.</p><p>Example 5. Jurisprudential reasoning. A murder suspect not preventively detained is likely to destroy evidence, and in that case the suspect shall be preventively detained:</p><formula xml:id="formula_3">likely_destroy_evidence(murder_suspect) ← not preventively_detain(murder_suspect) preventively_detain(murder_suspect) ← likely_destroy_evidence(murder_suspect)</formula><p>There is no SM, and a single LSM = {preventively_detain(murder_suspect)}. This jurisprudential reasoning is carried out without the need for a murder_suspect to exist now. Should we wish, LSM's cumulativity (cf. below) allows adding the model literal as a fact.</p><p>Example 6. Program with depth-unbound layering. A typical case of an infinitely ground program (actually the only one with real theoretical interest, to the best of our knowledge) was presented by François Fages in <ref type="bibr" target="#b11">[12]</ref>. We repeat it here for illustration and explanation.</p><formula xml:id="formula_4">p(X) ← p(s(X)) p(X) ← not p(s(X))</formula><p>The ground (layered) version of this program, assuming there is only one constant 0 (zero), is:</p><formula xml:id="formula_5">p(0) ← p(s(0)) p(0) ← not p(s(0)) p(s(0)) ← p(s(s(0))) p(s(0)) ← not p(s(s(0))) p(s(s(0))) ← p(s(s(s(0)))) p(s(s(0))) ← not p(s(s(s(0)))) . . . ← . . . . . . ← . . .</formula><p>The only layer supported model of this program is {p(0), p(s(0)), p(s(s(0))) . . .} or, in a non-ground form, {p(X)}. 
The theoretical interest of this program lies in that, although it has no OLONs, it still has no SMs, because no rule is supported (under the usual notion of support), thereby showing that there is a whole other class of NLPs to which the SM semantics provides no model, due to the notion of support used.</p></div>
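The layered support test of Definition 5 can likewise be sketched in code: an atom is layer supported in M iff some of its rules has every body literal outside r l (the part of the body in a loop through the head) true in M. The encoding below is our own illustrative sketch, reusing the (head, pos, neg) rule triples; the function names are assumptions.

```python
def transitive_deps(rules):
    """deps[a]: the atoms a depends on, directly or transitively."""
    deps = {}
    for h, p, n in rules:
        deps.setdefault(h, set()).update(p, n)
    changed = True
    while changed:
        changed = False
        for a in deps:
            new = set().union(*(deps.get(b, set()) for b in deps[a]))
            if not new <= deps[a]:
                deps[a] |= new
                changed = True
    return deps

def layer_supported(rules, M, a):
    """Def. 5 test: a has some rule whose body literals outside r_l
    are all true in M (positives in M, negated atoms not in M)."""
    deps = transitive_deps(rules)
    for h, pos, neg in rules:
        if h != a:
            continue
        # literals belong to r_l when their atom depends back on the head
        pos_out = {b for b in pos if h not in deps.get(b, set())}
        neg_out = {c for c in neg if h not in deps.get(c, set())}
        if pos_out <= M and not (neg_out & M):
            return True
    return False
```

On the program of Example 2 (c ← not a, a ← c, not b, and the fact b), this check accepts {b, c} but rejects a in {a, b}: the literal not b in a's rule lies outside r l and is false once the fact b holds, which is exactly why {b, c} is the LSM.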
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Properties of the Layer Supported Models semantics</head><p>Although the definition of the LSM semantics is different from the LMs of <ref type="bibr" target="#b26">[27]</ref>, this new refinement enjoys the desirable properties of the LM semantics, namely: guarantee of model existence, relevance and cumulativity (<ref type="bibr" target="#b8">[9]</ref>), and being a conservative extension of the SM semantics. Guarantee of existence ensures all normal programs have a semantics; relevance allows for simple top-down querying concerning truth in a model (as in Prolog) and is essential for abduction by need; and cumulativity means that atoms true in the semantics can be added as facts without changing it -none of these properties are enjoyed by Stable Models. A semantics is a conservative extension of another one if it gives the same semantics to programs as the latter, whenever the latter is defined, and also provides semantics to programs for which the latter does not; that is, the models of the semantics being extended are kept.</p><p>Moreover, and this is the main contribution of the LSM semantics, it respects the Well-Founded Model. It is not hard to recognize that the LM semantics <ref type="bibr" target="#b26">[27]</ref> is the conservative generalization of the LSM semantics that one obtains by removing the layered support condition on interpretations prescribed in Definition 6.</p><p>The complexity analysis of this semantics is left out of this paper. Nonetheless, a brief note is due. Model existence is guaranteed for every NLP, hence the complexity of finding whether one LSM exists is trivial, when compared to the SM semantics. Brave reasoning -finding whether there is any model of the program where some atom a is true -is an intrinsically NP-hard task from the computational complexity point of view. 
But since the LSM semantics enjoys relevance, the computational scope of this task can be reduced to consider only Rel P (a), instead of the whole of P . From a practical standpoint, this can have a significant impact on the performance of concrete applications. Cautious reasoning (finding out whether some atom a is in all models) in the LSM semantics should have the same computational complexity as in the SMs, i.e., coNP-complete.</p></div>
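The reduction enabled by relevance amounts to computing Rel P (a) first and solving only within it. A sketch of that computation (our own, with hypothetical naming, over the same (head, pos, neg) rule triples) follows.

```python
def rel_p(rules, a):
    """Rel_P(a): the rules with head a plus the rules for every atom
    that a depends on (transitively).

    `rules` is a list of (head, pos, neg) triples, pos/neg being sets
    of atoms; the returned sublist preserves the original rule order.
    """
    wanted, frontier = set(), {a}
    while frontier:
        atom = frontier.pop()
        if atom in wanted:
            continue
        wanted.add(atom)
        for h, p, n in rules:
            if h == atom:
                # every body atom of a rule for `atom` must be explored
                frontier |= (p | n) - wanted
    return [r for r in rules if r[0] in wanted]
```

For instance, on the program of Example 1, a brave query for y need only consider the rules for y and x, two rules instead of the whole program; this is the scope reduction the paragraph above describes.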
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1">Respect for the Well-Founded Model</head><p>Definition 7. Interpretation M of P respects the WFM of P . An interpretation M respects the WFM of P iff M contains the set of all the true atoms of the WFM of P , and it contains no false atoms of the WFM of P .</p><p>Theorem 2. Layer Supported Models respect the WFM. Let P be an NLP, and P ≤i denote ∪ 1≤j≤i P j , where P j is P 's j-th layer. Each LSM M | ≤i of P ≤i , where M ⊇ M | ≤i , respects the WFM of</p><formula xml:id="formula_6">P i ∪ M | &lt;i .</formula><p>Proof. By hypothesis, each M | ≤i is a full LSM of P ≤i . Consider P 1 . Any M | ≤1 contains the facts of P and their direct positive consequences, since the rules for all of these are necessarily placed in the first layer in the least layering of P . Hence, M | ≤1 contains all the true atoms of the WFM of P 1 . Layer 1 also contains whatever loops do not depend on any atoms other than those which are the heads of the rules forming the loop. Any such loops having no negative literals in the bodies are deterministic and, therefore, the heads of the rules forming the loop will be all true or all false in the WFM of P 1 , depending on whether the bodies are fully supported by facts in the same layer or not, and, in the latter case, provided the rules are not involved in other types of loop making their heads undefined. In any case, an LSM of this layer will by necessity contain all the true atoms of the WFM of P 1 . On the other hand, assume there is some atom b which is false in the WFM of P 1 . b being false in the WFM means that either b has no rules or every rule for b has an unsatisfiable body in P 1 . In the first case, by Definition 6 we know that b cannot be in any LSM. In the second case, every unsatisfiable body is necessarily unsupported, both classically and layered. Hence, b cannot be in any LSM of P 1 . 
This means that any LSM contains no atoms false in the WFM of P 1 and, therefore, that it must respect the WFM of P 1 .</p><p>By hypothesis, <formula xml:id="formula_7">M | ≤i+1 is an LSM of P ≤i+1 iff M | ≤i+1 ⊇ M | ≤i , for some LSM M | ≤i of P ≤i .</formula></p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2">Other Comparisons</head><p>We next argue informally that the LSMs and the Revised Stable Models (RSMs) defined in <ref type="bibr" target="#b20">[21]</ref> are the same. The equivalence is rather immediate. Take Definitions 2 and 3 of <ref type="bibr" target="#b20">[21]</ref>, characterizing the RSMs: Definition 8. Sustainable Set (Definition 2 in <ref type="bibr" target="#b20">[21]</ref>). Intuitively, we say a set S is sustainable in P iff any atom a ∈ S does not go against the well-founded consequences of the remaining atoms in S, whenever S \ {a} itself is a sustainable set. The empty set is sustainable by definition. Not going against means that atom a cannot be false in the WFM of P ∪ S \ {a}, i.e., a is either true or undefined; that is, it belongs to the set Γ P ∪S\{a} (W F M (P ∪ S \ {a})). Formally, we say S is sustainable iff</p><formula xml:id="formula_8">∀ a∈S S \ {a} is sustainable ⇒ a ∈ Γ P ∪S\{a} (W F M (P ∪ S \ {a}))</formula><p>Definition 9. Revised Stable Models (Definition 3 in <ref type="bibr" target="#b20">[21]</ref>). M is a Revised Stable Model of P iff: -M is a minimal classical model, with not interpreted as classical negation; -∃ α≥2 such that Γ α P (M ) ⊇ RAA P (M ); and</p><formula xml:id="formula_9">-RAA P (M ) is sustainable</formula><p>The first item goes without saying for both semantics. For the third item, it is apparent that sustainable models respect the WFS as required of LSM models; vice-versa, LSM models are sustainable. Finally, for the 2nd item, consider its justification provided after Def. 3 of <ref type="bibr" target="#b20">[21]</ref>. It should be clear, after due consideration, that the 2nd item holds for LSMs and, vice-versa, that the RSMs, because they comply with it, will comply with the minimal models at each LSM layer, which always resolve the odd loops that may be present in it.</p><p>Only recently did we become aware of the work of Osorio et al. <ref type="bibr" target="#b19">[20]</ref>. 
Their approach, couched in argumentation terms, addresses the same semantic concerns as our own, apparently with similar results for the case of finite propositional normal programs, though the paper does not provide a proof of respecting the WFS. In <ref type="bibr" target="#b21">[22]</ref> we also address the same concerns in argumentation semantics terms, providing the Approved Models semantics. The latter can be made to respect the WFS only if desired, by means of an appropriate condition, and so appears to be more general than that of <ref type="bibr" target="#b19">[20]</ref>. Moreover, the Approved Models semantics is conjectured to result, in that case, in a semantics equivalent to the RSMs <ref type="bibr" target="#b20">[21]</ref> and the LSMs.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">Program Transformation</head><p>The program transformation we now define (which can be skipped without loss of continuity) provides a syntactical means of generating a program P ′ from an original program P , such that the SMs of P ′ coincide with the LSMs of P . It affords an expedient means of computing LSMs using currently available tools like Smodels <ref type="bibr" target="#b18">[19]</ref> and DLV <ref type="bibr" target="#b6">[7]</ref>. The transformation can be query driven and performed on the fly, or preprocessed in advance.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.1">Top-down transformation</head><p>Performing the program transformation in top-down fashion amounts to applying the transformation to each atom in the call-graph of a query. The transformation involves traversing the call-graph for the atom, induced by its dependency rules, to detect and "solve" the OLONs, via the specific LSM-enforcing method described below. When traversing the call-graph for an atom, a given traversal branch may end by finding (1) a fact literal, or (2) a literal with no rules, or (3) a loop to a literal (or its default negation conjugate) already found earlier along that branch.</p><p>To produce P ′ from P we need a means to detect OLONs. The OLON detection mechanism we employ is a variant of Tarjan's Strongly Connected Component (SCC) detection algorithm <ref type="bibr" target="#b33">[34]</ref>, because OLONs are just SCCs which happen to have an odd number of default negations along their edges. Moreover, when an OLON is detected, we need another mechanism to change its rules, that is, to produce and add new rules to the program which make sure the atoms a in the OLON now have "stable" rules that do not depend on any OLON. We say such a mechanism is an "OLON-solving" one. Trivial OLONs, i.e., those of length 1 like that in the Introduction's example (a ← not a, X), are "solved" simply by removing the not a from the body of the rule. General OLONs, i.e., those of length ≥ 3, have more complex (non-deterministic) solutions, described below.</p></div>
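The detection idea can be sketched as follows (our own simplification, not the paper's TR code: a plain DFS over simple paths that counts default negations and reports a cycle crossing negation an odd number of times; the encoding and names are assumptions):

```python
# Sketch (assumed encoding): program maps atom -> list of bodies,
# each body a list of literals; 'not x' marks default negation.

def find_olon(program, start):
    """Return the atoms on the DFS path closing an odd loop over
    negation reachable from start, or None if no such loop is found."""
    def dfs(atom, negs, path):
        if atom in path:                      # closed a loop
            loop_negs = negs - path[atom]     # negations along the loop
            return list(path) if loop_negs % 2 == 1 else None
        path[atom] = negs                     # negation count at entry
        for body in program.get(atom, []):
            for lit in body:
                neg = lit.startswith("not ")
                nxt = lit[4:] if neg else lit
                found = dfs(nxt, negs + (1 if neg else 0), path)
                if found is not None:
                    return found
        del path[atom]
        return None
    return dfs(start, 0, {})

P = {"a": [["not a", "b"]], "b": [["c"]], "c": [["not b", "not a"]]}
print(find_olon(P, "a"))   # ['a'] -- the trivial OLON a <- not a, b
```

An even loop (e.g. a ← not b, b ← not a) yields None here, matching the fact that even loops are not OLONs. A full implementation would instead adapt Tarjan's SCC algorithm, as the text describes.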
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Minimal Models of OLONs</head><p>In general, an OLON has the form</p><formula xml:id="formula_10">R 1 = λ 1 ← not λ 2 , ∆ 1 R 2 = λ 2 ← not λ 3 , ∆ 2 . . . R n = λ n ← not λ 1 , ∆ n</formula><p>where all the λ i are atoms, and the ∆ j are arbitrary conjunctions of literals which we refer to as "contexts". Assuming any λ i true in some model suffices to satisfy two rules of the OLON: one by rendering the head true and the other by rendering the body false, namely</p><formula xml:id="formula_11">λ i−1 ←∼ λ i , ∆ i−1 ,<label>and</label></formula><formula xml:id="formula_12">λ i ←∼ λ i+1 , ∆ i</formula><p>A minimal set of such λ i is what is needed to have a minimal model for the OLON. Since the number of rules n in the OLON is odd, we know that n−1 2 atoms satisfy n − 1 rules of the OLON. So, n−1 2 + 1 = n+1 2 atoms satisfy all n rules of the OLON, and that is the minimal number of λ i atoms necessary to satisfy all the OLON's rules. This means that the remaining n − n+1 2 = n−1 2 atoms λ i must be false in the model in order for it to be minimal.</p><p>Taking a closer look at the OLON rules we see that λ 2 satisfies both the first and second rules; likewise, λ 4 satisfies the third and fourth rules, and so on. So the set {λ 2 , λ 4 , λ 6 , . . . , λ n−1 } satisfies all rules of the OLON except the last one. Adding λ 1 to this set, since λ 1 satisfies the last rule, we get one possible minimal model for the OLON: M OLON = {λ 1 , λ 2 , λ 4 , λ 6 , . . . , λ n−1 }. Every atom in M OLON alone satisfies 2 rules of the OLON, except λ 1 , the last atom added. λ 1 alone satisfies only the last rule of the OLON. The first rule of the OLON -λ 1 ← not λ 2 , ∆ 1 -despite being satisfied by λ 1 , was already satisfied by λ 2 . In this case, we call λ 1 the top literal of the OLON under M . The other minimal models of the OLON can be found in this manner simply by starting with λ 3 , or λ 4 , or any other λ i , as we did here with λ 2 as an example. 
Consider M OLON = {λ 1 , λ 2 , λ 4 , λ 6 , . . . , λ n−1 }. Since ∼ λ i+1 ∈ body(R i ) for every i &lt; n, and ∼ λ 1 ∈ body(R n ), under M OLON all of R 1 , R 3 , R 5 , . . . , R n will have their bodies false. Likewise, all of R 2 , R 4 , R 6 , . . . , R n−1 will have their bodies true under M OLON . This means that all of λ 2 , λ 4 , λ 6 , . . . , λ n−1 will have classically supported bodies (all body literals true), namely via rules R 2 , R 4 , R 6 , . . . , R n−1 , but not λ 1 , which has only layered support (all body literals of strictly lower layers true). "Solving an OLON" corresponds to adding a new rule which provides classical support for λ 1 . Since this new rule must preserve the semantics of the rest of P , its body will contain the conjunction of all the "contexts" ∆ j , plus the negation of the remaining λ 3 , λ 5 , λ 7 , . . . , λ n which were already considered false in the minimal model at hand. These mechanisms can be seen at work in lines 2.10, 2.15, and 2.16 of the Transform Literal algorithm below. The TR transformation consists in performing this literal transformation for each individual atom of P . The Transform Literal algorithm implements a top-down, rule-directed, call-graph traversal variation of Tarjan's SCC detection mechanism. Moreover, when it encounters an OLON (line 2.9 of the algorithm), it creates (lines 2.13-2.17) and adds (line 2.18) a new rule for each literal involved in the OLON (line 2.11). The newly created and added rule renders its head true only when the original OLON's context holds. In the worst case, the whole of P is a set of intricate OLONs. In such a case there are exponentially many combinations of literals of P forming all the possible contexts, and for each of them TR adds a new rule. Hence, the complexity of TR is conjectured to be exponential in the number of atoms of P . 
Since the main purpose of this transformation is to be used for top-down query-solving, only the atoms relevant for the query are explored and their respective new rules produced. This way, the transformation can be applied on a "by need" basis.</p><p>Example 7. Solving OLONs. Consider this program, coinciding with its residual: a ← not a, b b ← c c ← not b, not a Solving a query for a, we use its rule and immediately detect the OLON on a. The leaf not a is removed; the rest of the body {b} is kept as the Context under which the OLON on a is "active" -if b were to be false there would be no need to solve the OLON on a's rule. After all OLONs have been solved, we use the Contexts to create new rules that preserve the meaning of the original ones, except the new ones no longer depend on OLONs. The current Context for a is now just {b} instead of the original {not a, b}. Example 8. Solving OLONs (2). Consider this program, coinciding with its residual: a ← not b, x b ← not c, y c ← not a, z x y z Solving a query for a we push it onto the stack, and take its rule a ← not b, x. We go on for literal not b and consider the rest of the body {x} as the current Context under which the OLON on a is "active". We push not b onto the stack and take the rule for b. We go on to solve not c, and add y to the current Context, which now becomes {x, y}. Once more, we push not c onto the stack, take c's rule c ← not a, z, go on to solve not a and add z to the current Context, which is now {x, y, z}. When we then push not a onto the stack, the OLON is detected and its "solving" begins. Three rules are created and added to the program: a ← not c, x, y, z, b ← not a, x, y, z, and c ← not b, x, y, z. Together with the original program's rules they render "stable" the originally "non-stable" LSMs {a, b, x, y, z}, {b, c, x, y, z}, and {a, c, x, y, z}. 
The final transformed program is:</p><formula xml:id="formula_13">a ← not b, x b ← not c, y c ← not a, z x y z a ← not c, x, y, z b ← not a, x, y, z c ← not b, x, y, z</formula></div>
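The counting argument above (a minimal model of an OLON of odd length n has exactly n+1/2 true atoms) can be checked by brute force. The following Python sketch is our own verification aid, not part of the paper, and assumes empty contexts ∆ j :

```python
# Sketch: brute-force the minimal classical models of an OLON of odd
# length n (empty contexts), confirming the (n+1)/2 count from the text.
from itertools import combinations

def olon_rules(n):
    """Rule i: lam_i <- not lam_{i+1}, indices 1..n cyclically."""
    return [(i, i % n + 1) for i in range(1, n + 1)]

def is_model(true_atoms, rules):
    # A rule head <- not nxt is satisfied iff head is true or nxt is true.
    return all(h in true_atoms or nxt in true_atoms for h, nxt in rules)

def minimal_models(n):
    rules = olon_rules(n)
    for size in range(n + 1):                 # smallest size first
        models = [set(c) for c in combinations(range(1, n + 1), size)
                  if is_model(set(c), rules)]
        if models:
            return models                     # all minimum-size models
    return []

models = minimal_models(5)
print(len(models[0]))        # (n+1)/2 = 3 atoms per minimal model
print({1, 2, 4} in models)   # the M_OLON = {lam_1, lam_2, lam_4} of the text
```

For n = 5 this yields exactly five minimal models of three atoms each, one per choice of "top literal", in line with the construction in the text.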
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.2">Implementing the TR transformation</head><p>The XSB Prolog system<ref type="foot" target="#foot_0">1</ref> is one of the most sophisticated, powerful, efficient and versatile implementations, with a focus on execution efficiency and interaction with external systems, implementing program evaluation following the WFS for NLPs. The XASP interface <ref type="bibr" target="#b3">[4]</ref> (standing for XSB Answer Set Programming), is included in XSB Prolog as a practical programming interface to Smodels <ref type="bibr" target="#b18">[19]</ref>, one of the most successful and efficient implementations of the SMs over generalized LPs. The XASP system allows one not only to compute the models of a given NLP, but also to effectively combine 3-valued with 2-valued reasoning. The latter is achieved by using Smodels to compute the SMs of the so-called residual program, the one that results from a query evaluated in XSB using tabling <ref type="bibr" target="#b31">[32]</ref>. A residual program is represented by delay lists, that is, the set of undefined literals for which the program could not find a complete proof, due to mutual dependencies or loops over default negation for that set of literals, detected by the XSB tabling mechanism. This coupling allows one to obtain a two-valued semantics of the residual, by completing the three-valued semantics the XSB system produces. The integration also allows queries to make use of, and benefit from, the relevance property of the LSM semantics.</p><p>In our implementation, detailed below, we use XASP to compute the query relevant residual program on demand. 
When the TR transformation is applied to it, the resulting program is sent to Smodels for computation of the stable models of the relevant sub-program provided by the residue, which are then returned to the XSB-XASP side.</p><p>Residual Program After launching a query in a top-down fashion we must obtain the relevant residual part of the program for the query. This is achieved in XSB Prolog using the get_residual/2 predicate. According to the XSB Prolog manual, "the predicate get_residual/2 unifies its first argument with a tabled subgoal and its second argument with the (possibly empty) delay list of that subgoal. The truth of the subgoal is taken to be conditional on the truth of the elements in the delay list". The delay list is the list of literals whose truth value could not be determined to be either true or false, i.e., their truth value is undefined in the WFM of the program.</p><p>It is possible to obtain the residual clause of a solution for a query literal, and in turn the residual clauses for the literals in its body, and so on. This way we can reconstruct the complete relevant residual part of the KB for the literal -we call this a residual program or reduct for that solution to the query.</p><p>More than one such residual program can be obtained for the query, on backtracking. Each reduct consists only of partially evaluated rules, with respect to the WFM, whose heads are atoms relevant for the initial query literal, and whose bodies are just the residual part of the bodies of the original KB's rules. This way, not only do we get just the relevant part of the KB for the literal, we also get precisely the part of those rules' bodies still undefined, i.e., those that are involved in Loops Over Negation.</p><p>Dealing with the Query and Integrity Constraints ICs are written simply as falsum ← IC_Body. An Smodels IC preventing falsum from being true (:- falsum) is enforced whenever a transformed program is sent to Smodels. 
Another two rules are added to the Smodels clause store through XASP: one creates an auxiliary rule for the initially posed query, with the form lsmGoal :- Query, where Query is the query conjunct posed by the user; the second rule just prevents Smodels from having any model where lsmGoal does not hold, and has the form :- not lsmGoal.</p><p>The XSB Prolog source code for the meta-interpreter, based on this program transformation, is available at http://centria.di.fct.unl.pt/∼amp/software.html</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7">Abductive Reasoning with Logic Programs</head><p>Logic Programs have been used for a few decades now in knowledge representation and reasoning. Amongst the most common kinds of reasoning performed using them, we can find deduction, induction and abduction. When query answering, if we know that the underlying semantics is relevant, i.e., guarantees that it is enough to use only the rules relevant to the query (those in its call-graph) to assess its truthfulness, then we need not compute a whole model in order to find an answer to our query: it suffices to use just the call-graph relevant part of the program. This way of finding a solution to a query top-down, dubbed "backward chaining", is possible only when the underlying semantics is relevant in the above sense, because the existence of a full model is guaranteed.</p><p>Currently, the standard 2-valued semantics used by the logic programming community is Stable Models <ref type="bibr" target="#b13">[14]</ref>. Its properties are well known and there are efficient implementations (such as DLV and SModels <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b18">19]</ref>). However, Stable Models (SMs) lack some important properties, both from the theoretical and practical perspectives: guarantee of model existence for every NLP, relevancy and cumulativity. Since SMs do not enjoy relevancy they cannot use just backward chaining for query answering. This means that query answering may incur a waste of computational resources, when extra time and memory are used to compute parts of the model which may be irrelevant to the query.</p><p>When performing abductive reasoning, we want to find, by need only (via backward chaining), one possible set of conditions (abductive literals of the program to be assumed either true or false) sufficient to entail our query. 
However, sometimes we also want to know some of the consequences (side-effects, so to speak) of such conditions, i.e., we want to know the truth value of some other literals, not part of the query's call-graph, whose truth-value may be determined by the abductive conditions found. In some cases, we might be interested in knowing every possible side-effect: the truth-value of every literal in a complete model satisfying the query and the ICs. In other situations, though, our focus is only on some specific side-effects of the abductions performed.</p><p>In our approach, the side-effects of interest are explicitly indicated by the user by wrapping the corresponding goals within the reserved construct inspect/1. It is advantageous, from a computational point of view, to be able to compute only the truth-value of the important side-effects instead of the whole model, so as not to waste precious time and computational resources. This is possible whenever the underlying semantics guarantees model existence and enjoys relevance.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.1">Abduction</head><p>Abduction, or inference to the best explanation, is a reasoning method whereby one chooses the hypotheses that would, if true, best explain the observed evidence. In LPs, abductive hypotheses (or abducibles) are named literals of the program which have no rules. They can be considered true or false for the purpose of answering a query. Abduction in LPs ( <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b7">8,</ref><ref type="bibr" target="#b10">11,</ref><ref type="bibr" target="#b16">17,</ref><ref type="bibr" target="#b17">18]</ref>) can naturally be used in a top-down query-oriented proof-procedure to find an (abductive) answer to a query, where abducibles are leaves in the call dependency graph. The Well-Founded Semantics (WFS) <ref type="bibr" target="#b12">[13]</ref>, which enjoys relevancy, allows for abductive query answering. We used it in the specific implementation described in Section 8.2, based on ABDUAL <ref type="bibr" target="#b1">[2]</ref>. Though the WFS is 3-valued, the abduction mechanism it employs can be, and in our case is, 2-valued.</p><p>Because they do not depend on any other literal in the program, abducibles can be modeled in a Logic Program system without specific abduction mechanisms by including, for each abducible, an even loop over default negation, e.g., abducible ← not neg_abducible. neg_abducible ← not abducible. where neg_abducible is a new abducible atom, representing the (abducible) negation of the abducible. This way, under the SM semantics, a program may have models where some abducible is true and others where it is false, i.e., where neg_abducible is true. If there are n abducibles in the program, there will be 2 n models corresponding to all the possible combinations of true and false for each. Under the WFS without a specific abduction mechanism, e.g. 
the one available in ABDUAL, both abducible and neg_abducible remain undefined in the Well-Founded Model (WFM), but may hold (as alternatives) in some Partial Stable Models.</p><p>Using the SM semantics, abduction is done by guessing the truth-value of each abducible, computing the whole model, and testing it for stability; whereas using the WFS with abduction, it can be performed by need, induced by the top-down query solving procedure, solely for the relevant abducibles -i.e., irrelevant abducibles are left unconsidered. Thus, top-down abductive query answering is a means of finding those abducible values one might commit to in order to satisfy a query.</p><p>An additional situation, addressed in this paper, is when one wishes only to passively determine which abducibles would be sufficient to satisfy some goal, but without actually abducing them, just consuming other goals' needed and produced abductions. The difference is subtle but important, and it requires a new construct. Its mechanism, of inspecting without abducing, can be conceived and implemented through meta-abduction, and is discussed in detail in the sequel.</p></div>
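The even-loop encoding and its 2 n stable models can be checked mechanically. The following Python sketch is our own brute-force illustration (not from the paper); it applies the Gelfond-Lifschitz reduct, specialized to rules whose bodies contain only negative literals, which suffices for this encoding:

```python
# Sketch: model each abducible x by the pair of rules
#   x <- not neg_x    and    neg_x <- not x
# and verify that n abducibles yield 2^n stable models.
from itertools import chain, combinations

def even_loop_program(abducibles):
    """Rules are (head, [negated atoms]); bodies are purely negative."""
    rules = []
    for x in abducibles:
        rules.append((x, [f"neg_{x}"]))
        rules.append((f"neg_{x}", [x]))
    return rules

def is_stable(candidate, rules):
    # Gelfond-Lifschitz reduct for purely negative bodies: drop every
    # rule whose negated atom is in the candidate; the surviving rules
    # become facts, and their set must equal the candidate exactly.
    reduct_facts = {h for h, negs in rules
                    if all(a not in candidate for a in negs)}
    return reduct_facts == candidate

def stable_models(rules):
    atoms = {h for h, _ in rules}
    subsets = chain.from_iterable(
        combinations(sorted(atoms), r) for r in range(len(atoms) + 1))
    return [set(s) for s in subsets if is_stable(set(s), rules)]

rules = even_loop_program(["p", "q"])
print(len(stable_models(rules)))   # 2 abducibles -> 4 stable models
```

Each stable model picks exactly one of x or neg_x per abducible, matching the combinatorial argument in the text; under the WFS, by contrast, both atoms of each pair stay undefined.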
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.2">Backward and Forward Chaining</head><p>Abductive query-answering is intrinsically a backward-chaining process, a top-down dependency-graph oriented proof-procedure. Finding the side-effects of a set of abductive assumptions may be conceptually envisaged as forward-chaining, as it consists of progressively deriving conclusions from the assumptions until the truth value of the chosen side-effect literals is determined.</p><p>The problem with full-fledged forward-chaining is that too many (often irrelevant) conclusions of a model are derived. Wasting time and resources deriving them only to be discarded afterwards is a flagrant setback. Worse, there may be many alternative models satisfying an abductive query (and the ICs) whose differences just repose on irrelevant conclusions. So unnecessary computation of irrelevant conclusions can be compounded by the need to discard irrelevant alternative complete models too.</p><p>A more intelligent solution would be afforded by selective forward-chaining, where the user would be allowed to specify those conclusions she is focused on, and only those would be computed in forward-chaining fashion. Combining backward-chaining with selective forward-chaining would allow for a greater precision in specifying what we wish to know, and improve efficiency altogether. In the sequel we show how such a selective forward chaining from a set of abductive hypotheses can be replaced by backward chaining from the focused on conclusions -the inspection points -by virtue of a controlled form of abduction which, never performing extra abductions, just checks for abducibles assumed elsewhere.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="8">Inspection Points</head><p>Meta-abduction is used in abduction inhibited inspection. Intuitively, when an abducible is considered under mere inspection, meta-abduction abduces only the intention to a posteriori check for its abduction elsewhere, i.e., it abduces the intention of verifying that the abducible is indeed adopted, but elsewhere. In practice, when we want to meta-abduce some abducible 'x', we abduce a literal 'consume(x)' (or 'abduced(x)'), which represents the intention that 'x' is eventually abduced elsewhere in the process of finding an abductive solution. The check is performed after a complete abductive answer to the top query is found. Operationally, 'x' will already have been, or will later be, abduced as part of the ongoing solution to the top goal. Suppose we want to satisfy the Integrity Constraint, and also to check whether we get drunk or not. However, we do not care about the glass becoming wet -that being completely irrelevant to our current concern. In this case, full forward-chaining or computation of whole models is a waste of time, because we are interested only in a subset of the program's literals. What we need is a selective ersatz forward chaining mechanism, an inspection tool which permits checking the truth value of given literals as a consequence of the abductions made to satisfy a given query plus any Integrity Constraints. Moreover, in this example, we may simply want to know the side-effects of the possible actions in order to decide (to drive or not to drive) after we know which side-effects are true. In such a case, we do not want to impose the IC ← not unsafe_drive, because that would always impose not drink_beer. We want to allow all possible solutions for the single IC ← thirsty, not drink and then check the side-effects of each abductive solution.</p><p>Abducible literals are cleaning_day, temp_rise and faulty_alarm.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>dust</head><p>← cleaning_day, inspect(not sound_alarm) sound_alarm ← temp_rise sound_alarm ← faulty_alarm evacuate ← sound_alarm ← not cleaning_day</p><p>Satisfying the unique IC imposes cleaning_day true and gives us three minimal abductive solutions: S 1 = {dust, cleaning_day}, S 2 = {cleaning_day, sound_alarm, temp_rise, evacuate}, and S 3 = {cleaning_day, sound_alarm, faulty_alarm, evacuate}. If we pose the query ?−not dust we want to know what could justify the cleaners not dusting, given that it is a cleaning day (enforced by the IC). However, we do not want to abduce the rise in temperature of the reactor nor to abduce the alarm to be faulty in order to prove not dust. Either of these two justifying abductions must result as a side-effect of the need to explain something else, for instance the observation of the sounding of the alarm, expressible by adding the IC ← not sound_alarm, which would then abduce one or both of those two abducibles as plausible explanations. The inspect/1 in the body of the rule for dust prevents any abduction below sound_alarm from being made merely to render not dust true. One other possibility would be for two observations, coded by the ICs ← not temp_rise or ← not faulty_alarm, to be present in order for not dust to be true as a side-effect. A similar argument can be made about evacuating: one thing is to explain why evacuation takes place, quite another is to justify it as a necessary side-effect of root explanations for the alarm going off. These two pragmatic uses correspond to different queries: ? − evacuate and ? − inspect(evacuate), respectively.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="8.1">Declarative Semantics of Inspection Points</head><p>A simple transformation maps programs with inspection points into programs without them. Note that the Stable Models of the transformed program where each abduced(X) is matched by the abducible X (X being a literal a or its default negation not a) clearly correspond to the intended procedural meanings ascribed to the inspection points of the original program. The transformed program Π(P ) contains: 1. all the rules obtained from the rules in P by systematically replacing:</p><p>inspect(not L) with not inspect(L); inspect(a) or inspect(abduced(a)) with abduced(a) if a is an abducible, and keeping inspect(L) otherwise. 2. for every rule A ← L 1 , . . . , L t in P , the additional rule: inspect(A) ← L 1 , . . . , L t where for every 1 ≤ i ≤ t:</p><formula xml:id="formula_14">L i =    abduced(L i ) if L i is an abducible inspect(X) if L i is inspect(X) inspect(L i ) otherwise</formula><p>The semantics of the inspect predicate is given by the generated rules for inspect. Example 12. Transforming a Program P with Nested Inspection Levels.</p><p>x ← a, inspect(y), b, c, not d y ← inspect(not a) z ← d y ← b, inspect(not z), c</p><p>Then, Π(P ) is:</p><formula xml:id="formula_15">x ← a, inspect(y), b, c, not d inspect(x) ← abduced(a), inspect(y), abduced(b), abduced(c), not abduced(d) y ← not inspect(a) y ← b, not inspect(z), c inspect(y) ← not abduced(a) inspect(y) ← abduced(b), not inspect(z), abduced(c) z ← d inspect(z) ← abduced(d)</formula><p>The abductive stable model of Π(P ) respecting the inspection points is: {x, a, b, c, abduced(a), abduced(b), abduced(c), inspect(y)}.</p><p>Note that for each abduced(a) the corresponding a is in the model.</p></div>
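The Π(P ) transformation above can be sketched over a simple rule representation. The following Python code is our own encoding, not the paper's implementation; rules are pairs (head, body) with literals as plain strings, and the set of abducibles is given separately (for Example 12 we assume a, b, c, d are the abducibles):

```python
# Sketch of the Pi(P) transformation: literals are strings such as
# 'not d', 'inspect(y)', 'inspect(not a)'; rules are (head, [literals]).

def step1(lit, abducibles):
    """Step 1: systematic replacement inside the original rules."""
    if lit.startswith("inspect(not "):          # inspect(not L) -> not inspect(L)
        return "not inspect(" + lit[len("inspect(not "):]
    if lit.startswith("inspect("):
        inner = lit[len("inspect("):-1]
        if inner.startswith("abduced("):
            inner = inner[len("abduced("):-1]
        if inner in abducibles:                 # inspect(a) -> abduced(a)
            return f"abduced({inner})"
    return lit                                  # keep everything else

def step2(lit, abducibles):
    """Step 2: the L_i mapping for the generated inspect(A) rules."""
    if lit.startswith("not "):
        return "not " + step2(lit[4:], abducibles)
    if lit.startswith("inspect(not "):
        return "not " + step2(lit[len("inspect(not "):-1], abducibles)
    if lit.startswith("inspect("):
        return lit                              # inspect(X) is kept
    if lit in abducibles:
        return f"abduced({lit})"                # abducibles become abduced(_)
    return f"inspect({lit})"                    # all other literals inspected

def pi(program, abducibles):
    out = []
    for head, body in program:
        out.append((head, [step1(l, abducibles) for l in body]))
        out.append((f"inspect({head})", [step2(l, abducibles) for l in body]))
    return out

# Example 12 of the text:
P = [("x", ["a", "inspect(y)", "b", "c", "not d"]),
     ("y", ["inspect(not a)"]),
     ("y", ["b", "inspect(not z)", "c"]),
     ("z", ["d"])]
for rule in pi(P, {"a", "b", "c", "d"}):
    print(rule)
```

Running this on Example 12 reproduces the Π(P ) listing shown above, e.g. inspect(x) ← abduced(a), inspect(y), abduced(b), abduced(c), not abduced(d).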
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="8.2">Inspection Points Implementation</head><p>Once again, we based our practical implementation of inspection points on a formally defined, XSB-implemented, tried and true abduction system -ABDUAL <ref type="bibr" target="#b1">[2]</ref>. ABDUAL lays the foundations for efficiently computing queries over ground three-valued abductive frameworks for extended logic programs with integrity constraints, on the well-founded semantics and its partial stable models. The query processing technique in ABDUAL relies on a mixture of program transformation and tabled evaluation. A transformation removes default negative literals (by making them positive) from both the program and the integrity rules. Specifically, a dual transformation is used, which defines for each objective literal O and its set of rules R, a dual set of rules whose conclusions not (O) are true if and only if O is false in R. Tabled evaluation of the resulting program turns out to be much simpler than for the original program, whenever abduction over negation is needed. At the same time, termination and complexity properties of tabled evaluation of extended programs are preserved by the transformation, when abduction is not needed. Regarding tabled evaluation, ABDUAL is in line with SLG <ref type="bibr" target="#b32">[33]</ref> evaluation, which computes queries to normal programs according to the well-founded semantics. To it, ABDUAL's tabled evaluation adds mechanisms to handle abduction and deal with the dual programs.</p><p>ABDUAL is composed of two modules: the preprocessor, which transforms the original program by adding its dual rules, plus specific abduction-enabling rules; and a meta-interpreter allowing for top-down abductive query solving. When solving a query, abducibles are dealt with by means of extra rules the preprocessor added to that effect. 
These rules just add the name of the abducible to an ongoing list of current abductions, unless the negation of the abducible was previously added to the list, in which case they fail, in order to ensure abduction consistency. Meta-abduction is implemented adroitly by means of a reserved predicate, 'inspect/1', taking some literal L as argument, which engages the abduction mechanism to try and discharge any meta-abductions performed under L by matching them with the corresponding abducibles, adopted elsewhere outside any 'inspect/1' call. The approach taken can easily be at least partially adopted by other abductive systems, as we had the occasion to check with the authors <ref type="bibr" target="#b5">[6]</ref>. We have also enacted an alternative implementation, relying on XSB-XASP and the declarative semantics transformation above, which is reported below.</p><p>Procedurally, in the ABDUAL implementation, the checking of an inspection point corresponds to performing a top-down query-proof for the inspected literal, but with the specific proviso of disabling new abductions during that proof. The proof for the inspected literal will succeed only if the abducibles needed for it were already adopted, or will be adopted, in the present ongoing solution search for the top query. Consequently, this check is performed after a solution for the query has been found. In inspection-point top-down proof mode, whenever an abducible is encountered, instead of adopting it, we simply adopt the intention to a posteriori check whether the abducible is part of the answer to the query (unless, of course, the negation of the abducible has already been adopted by then, allowing for immediate failure at that search node). That is, one (meta-)abduces the checking of some abducible A, and the check consists in confirming that A is part of the abductive solution by matching it with the object of the check. 
According to our method, the side-effects of interest are explicitly indicated by the user by wrapping the corresponding goals subject to inspection mode, with the reserved construct 'inspect/1'.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="9">Conclusions and Future Work</head><p>We have recapped the LSM semantics for all NLPs, which complies with desirable requirements: it is 2-valued, conservatively extends SMs, guarantees model existence, enjoys relevance and cumulativity, and respects the WFM.</p><p>In the context of abductive logic programs, we presented a new mechanism for inspecting literals, which checks for side-effects of top-down driven 2-valued abductive solutions, enabled by a form of meta-abduction, and we provided for it a declarative semantics that relies on the relevance property of the Layer Supported Model semantics. We implemented this inspection mechanism within the Abdual <ref type="bibr" target="#b1">[2]</ref> meta-interpreter, as well as within the XSB-XASP system. Its solution relies on abduction itself, making it generally adoptable by other abductive frameworks.</p><p>We have also exhibited a transformation, TR, linear in both time and space, from one propositional NLP into another, such that the Layer Supported Models of the former are precisely the Stable Models of the transform, which can then be computed by extant Stable Model implementations. TR can be used to answer queries, but is also of theoretical interest, for it may be used to prove properties of programs. Moreover, it can be employed in combination with the top-down query procedure of XSB-Prolog, and be applied solely to the residual program corresponding to a query. The XSB-XASP interface subsequently allows the program transform to be sent to Smodels for 2-valued evaluation.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="9.1">Coda</head><p>Prolog has until recently been the most accepted means to codify and execute logic programs, and a useful tool for research and application development in logic programming. Several stable implementations were developed and refined over the years, with plenty of working solutions to pragmatic issues, ranging from efficiency and portability to explorations of language extensions. XSB-Prolog is one of the most sophisticated, powerful, efficient and versatile of these, focusing on execution efficiency and interaction with external systems, and implementing logic program evaluation according to the Well-Founded Semantics (WFS).</p><p>The XASP interface <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b4">5]</ref> (XSB Answer Set Programming), part of XSB-Prolog, is a practical programming interface to Smodels <ref type="bibr" target="#b18">[19]</ref>. XASP allows computing the models of a Normal LP, and also effectively combining 3- with 2-valued reasoning, to get the best of both worlds. This is achieved by using Smodels to compute the stable models of the residual program, the one that results from a query evaluation in XSB using tabling <ref type="bibr" target="#b31">[32]</ref>. The residual program is formed by delay lists, that is, sets of undefined literals for which XSB could not find a complete proof, due to mutual loops over default negation within the set, as detected by the tabling mechanism. This method allows obtaining 2-valued models by completion of the 3-valued ones of XSB. The integration maintains the relevance property for queries over our programs, something Stable Model semantics does not enjoy. In Stable Models, by its very definition, it is necessary to compute whole models of a given program. With the XSB implementation framework one may sidestep this issue, using XASP to compute the query-relevant residual program on demand. 
Only the resulting residual program is sent to Smodels for computation of its stable models, and the result is returned to XSB.</p><p>Having defined a more general 2-valued semantics for NLPs, much remains to be explored in the way of properties, complexity, comparisons, implementations, extensions and applications. The applications afforded by LSMs are all those of SMs, plus those where odd loops over default negation (OLONs) are actually employed for problem domain representation, as we have shown in examples 7 and 8. The guarantee of model existence is essential in applications where knowledge sources are diverse (as in the Semantic Web), and wherever the bringing together of such knowledge (automated or not) can give rise to OLONs that would otherwise prevent the resulting program from having a semantics, thereby brusquely terminating the application. A similar situation can be brought about by self- and mutually-updating programs, including in the learning setting, where unforeseen OLONs would stop short an ongoing process if the SM semantics were in use.</p><p>That the LSM semantics includes the SM semantics, always exists, and admits top-down querying is a novelty that makes us look anew at the use of 2-valued semantics in KRR. An LSM implementation, because of the relevance property, can avoid the need to compute whole models and all models, and hence SM's apodictic need for complete groundness and the difficulties it begets for problem representation. 
Hence, there appears to be only to gain in the adept move from SM semantics to the more general LSM one, given that the latter is readily implementable through the program transformation TR, introduced here for the first time.</p><p>Work under way concerns an efficient XSB engine-level implementation of the LSM semantics, and the exploration of its wider scope of applications with respect to ASP, namely in combination with abduction and constructive negation.</p><p>One topic of future work consists in defining partial model schemas that can provide answers to queries in terms of abstract non-ground model schemas encompassing several instances of ground partial models. Abstract partial models, instead of ground ones, may be produced directly from the residual program, a subject for further investigation.</p><p>Yet another topic of future exploration is the definition of a Well-Founded Layer Supported Model (WFLSM). Conceivably, the WFLSM would be defined as a partial model that, at each layer, is the intersection of all the LSMs. Floating conclusions are consequently disallowed. Incidental to this topic is the relationship of the WFLSM to the O-semantics <ref type="bibr" target="#b25">[26]</ref>, it being readily apparent that the former extends the latter.</p><p>Finally, the concepts and techniques introduced in this paper might be adoptable by other logic programming systems and implementations.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Example 3 .</head><label>3</label><figDesc>Layer Supported Models semantics. Consider again the program from example 1. Its LS models are {b, c, x}, and {b, d, x}. According to LSM semantics b and x are true because they are in the intersection of all models. c and d are undefined, and a and y are false.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>which means the LSMs M | ≤i+1 of P ≤i+1 are exactly the LSMs M | ≤i+1 of P i+1 ∪ M | ≤i . Adding the M | ≤i atoms as facts imposes them as true in the WFM of P i+1 ∪ M | ≤i . The then deterministically true consequences of layer i + 1 -the true atoms of the WFM of P i+1 ∪ M | ≤i -become necessarily present in every minimal model of P i+1 ∪ M | ≤i , and therefore in its every LSM M | ≤i+1 . On the other hand, every atom b false in the WFM of P i+1 ∪ M | ≤i has now unsatisfiable bodies in all its rules (up to this layer i+1). Hence, b cannot be in any LSM of P i+1 ∪M | ≤i . Therefore, every M | ≤i+1 respects the WFM of P i+1 ∪ M | ≤i . Hence, more generally, every M | ≤i respects the WFM of P i ∪ M | &lt;i</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head></head><label></label><figDesc>{a})) where Γ P (I) is just the Gelfond-Lifschitz program division of P by interpretation I. Definition 9. Revised Stable Models and Semantics (Definition 3 in [21]). Let RAA P (M ) ≡ M \ Γ P (M ). M is a Revised Stable Model of a NLP P , iff:</figDesc></figure>
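Definition 9 above is built from the Gelfond-Lifschitz operator: Γ_P(I) is the least model of the program division P/I, and RAA_P(M) = M \ Γ_P(M) collects the atoms of M not reproduced by Γ. The following Python sketch (an illustration only, with rules encoded as triples of a head, a set of positive body atoms, and a set of default-negated body atoms) computes both for a small propositional program.

```python
# Illustrative sketch of Definition 9's ingredients: Gamma_P(I) as the
# least model of the Gelfond-Lifschitz program division P/I, and
# RAA_P(M) = M \ Gamma_P(M). A rule is (head, positives, negatives).

def gamma(program, interp):
    # Division: drop rules whose negated body intersects interp, erase
    # the remaining negative literals, then take the least model.
    reduct = [(h, pos) for (h, pos, neg) in program if not (neg & interp)]
    least, changed = set(), True
    while changed:
        changed = False
        for head, pos in reduct:
            if pos <= least and head not in least:
                least.add(head)
                changed = True
    return least

def raa(program, model):
    """Atoms of the model not reproduced by Gamma: the RAA set."""
    return model - gamma(program, model)

# For P = {a <- not a} and M = {a}: the division deletes the single rule,
# so Gamma_P(M) = {} and RAA_P(M) = {a}.
P = [("a", set(), {"a"})]
print(gamma(P, {"a"}), raa(P, {"a"}))
```

Note that M = {a} is not a Stable Model of this P, since Γ_P(M) ≠ M; the nonempty RAA set is exactly what the Revised Stable Models definition quoted above sets out to account for.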
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Definition 10. Algorithm 1:</head><label>101</label><figDesc>Top-down program transformation. Input: a program P. Output: a transformed program P'. (TR Program Transformation)</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head></head><label></label><figDesc>Solving a query for b, we go on to solve c -{c} being b's current Context. Solving c we find leaf not b. We remove c from b's Context, and add c's body {not b, not a} to it. The OLON on b is detected and the not b is removed from b's Context, which finally is just {not a}. As can be seen so far, updating Contexts is similar to performing an unfolding plus OLON detection and resolution by removing the dependency on the OLON. The new rule for b has final Context {not a} for body. I.e., the new rule for b is b ← not a. Next, continuing a's final Context calculation, we remove b from a's Context and add {not a} to it. This additional OLON is detected and not a is removed from a's Context, now empty. Since we already exhausted a's dependency call-graph, the final body for the new rule for a is empty: a will be added as a fact. Moreover, a new rule for b will be added: b ← not a. The final transformed program is: a ← not a, b a b ← c b ← not a c ← not b, not a it has only one SM = {a} the only LSM of the program. Mark layering is respected when solving OLONs: a's final rule depends on the answer to b's final rule.</figDesc></figure>
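The claim in the example above, that the transformed program has the single Stable Model {a}, can be checked mechanically with the standard fixpoint definition of SMs. The brute-force enumerator below is a generic illustration in Python (not the TR implementation): for each candidate interpretation it builds the Gelfond-Lifschitz reduct and tests whether the candidate equals the reduct's least model.

```python
from itertools import chain, combinations

# Brute-force Stable Model enumeration (illustration only): a rule is
# (head, positive body atoms, default-negated body atoms).

def least_model(definite):
    m, changed = set(), True
    while changed:
        changed = False
        for head, pos in definite:
            if pos <= m and head not in m:
                m.add(head)
                changed = True
    return m

def stable_models(program, atoms):
    models = []
    cands = chain.from_iterable(
        combinations(atoms, r) for r in range(len(atoms) + 1))
    for cand in map(set, cands):
        # Gelfond-Lifschitz reduct of the program by the candidate:
        reduct = [(h, pos) for (h, pos, neg) in program if not (neg & cand)]
        if least_model(reduct) == cand:
            models.append(cand)
    return models

# The transformed program of the example:
# a <- not a, b.   a.   b <- c.   b <- not a.   c <- not b, not a.
P = [("a", {"b"}, {"a"}),
     ("a", set(), set()),
     ("b", {"c"}, set()),
     ("b", set(), {"a"}),
     ("c", set(), {"b", "a"})]
print(stable_models(P, ["a", "b", "c"]))   # [{'a'}]
```

Since the fact `a` defeats every rule that depends on `not a`, only {a} survives the fixpoint test, matching the unique LSM of the original program.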
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Example 9 .</head><label>9</label><figDesc>Relevant and irrelevant side-effects. Consider this logic program where drink_water and drink_beer are abducibles. ← thirsty, not drink. % This is an Integrity Constraint wet_glass ← use_glass. use_glass ← drink. drink ← drink_water. drink ← drink_beer. thirsty. drunk ← drink_beer. unsafe_drive ← inspect(drunk).</figDesc></figure>
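Example 9 can be simulated concretely. The Python sketch below (a hypothetical illustration, not the paper's Prolog implementation) enumerates abductive solutions as subsets of the two abducibles, derives their consequences by forward chaining over the positive rules, filters out candidates violating the integrity constraint, and evaluates the inspected literal `drunk` a posteriori against each surviving solution, abducing nothing new in the process.

```python
from itertools import combinations

# Hypothetical simulation of Example 9: abductive solutions are subsets of
# the abducibles; inspect(drunk) is checked against the finished solution.

RULES = [("wet_glass", {"use_glass"}), ("use_glass", {"drink"}),
         ("drink", {"drink_water"}), ("drink", {"drink_beer"}),
         ("thirsty", set()), ("drunk", {"drink_beer"})]
ABDUCIBLES = ["drink_water", "drink_beer"]

def consequences(abduced):
    """Forward chaining over the positive rules plus the abduced atoms."""
    model, changed = set(abduced), True
    while changed:
        changed = False
        for head, body in RULES:
            if body <= model and head not in model:
                model.add(head)
                changed = True
    return model

solutions = []
for r in range(len(ABDUCIBLES) + 1):
    for abduced in map(set, combinations(ABDUCIBLES, r)):
        m = consequences(abduced)
        if "thirsty" in m and "drink" not in m:   # IC: <- thirsty, not drink
            continue
        # inspect(drunk): only check the solution; no regular abduction here
        unsafe_drive = "drunk" in m
        solutions.append((abduced, unsafe_drive))

for abduced, unsafe in solutions:
    print(sorted(abduced), "unsafe_drive" if unsafe else "safe")
```

Three solutions survive the constraint; `unsafe_drive` holds exactly in those where `drink_beer` was abduced elsewhere, which is the intended side-effect reading of the inspection point.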
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Definition 11 .</head><label>11</label><figDesc>Transforming Inspection Points. Let P be a program containing rules whose body possibly contains inspection points. The program Π(P ) consists of:</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>2</head><label></label><figDesc>Background Notation and Definitions Definition 1. Logic Rule. A Logic Rule r has the general form H ← B1, . . . , Bn, not C1, . . . , not Cm, where H, the Bi and the Cj are atoms.</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head></head><label></label><figDesc>Transform Literal context is true, but also only when that head is not classically supported, though being layered supported under the minimal model of the OLON it is part of.</figDesc><table><row><cell></cell><cell cols="2">input : An literal l</cell></row><row><cell></cell><cell cols="2">output: A partial transformed program Pa'</cell></row><row><cell>2.1</cell><cell cols="2">previous context =context</cell></row><row><cell>2.2</cell><cell>Pa' =P</cell><cell></cell></row><row><cell>2.3</cell><cell cols="2">atom =atom a of literal l; //removing the eventual not</cell></row><row><cell>2.4</cell><cell cols="2">if a has been visited then</cell></row><row><cell>2.5</cell><cell cols="2">if a or not a is in the stack then</cell></row><row><cell>2.6</cell><cell></cell><cell>scc root indx =lowest stack index where a or not a can be found</cell></row><row><cell>2.7</cell><cell></cell><cell>nots seq = sequence of neg. 
lits from (scc root indx +1) to top indx</cell></row><row><cell>2.8</cell><cell></cell><cell>loop length = length of nots seq</cell></row><row><cell>2.9</cell><cell></cell><cell>if loop length is odd then</cell></row><row><cell>2.10</cell><cell></cell><cell># nots in body = (loop length−1) 2</cell></row><row><cell>2.11</cell><cell></cell><cell>foreach 'not x' in nots seq do</cell></row><row><cell>2.12</cell><cell></cell><cell>idx = index of not x in nots seq</cell></row><row><cell>2.13</cell><cell></cell><cell>newbody = context</cell></row><row><cell>2.14</cell><cell></cell><cell>for i=1 to # nots in body do</cell></row><row><cell>2.15</cell><cell></cell><cell>newbody = newbody ∪</cell></row><row><cell>2.16</cell><cell></cell><cell>{nots seq ((idx + 2  *  i) mod loop length )}</cell></row><row><cell>2.17</cell><cell></cell><cell>end</cell></row><row><cell>2.18</cell><cell></cell><cell>newrule = x ←newbody</cell></row><row><cell>2.19</cell><cell></cell><cell>Pa' =Pa' ∪{newrule }</cell></row><row><cell>2.20</cell><cell></cell><cell>end</cell></row><row><cell>2.21</cell><cell></cell><cell>end</cell></row><row><cell>2.22</cell><cell cols="2">end</cell></row><row><cell>2.23</cell><cell>else</cell><cell>// a has not been visited yet</cell></row><row><cell>2.24</cell><cell cols="2">mark a as visited</cell></row><row><cell>2.25</cell><cell></cell><cell></cell></row><row><cell>2.26</cell><cell></cell><cell>foreach (not )bi do</cell></row><row><cell>2.27</cell><cell></cell><cell>Push ((not )bi, stack)</cell></row><row><cell>2.28</cell><cell></cell><cell></cell></row><row><cell>2.29</cell><cell></cell><cell>Transform Literal ((not )bi)</cell></row><row><cell>2.30</cell><cell></cell><cell>Pop ((not )bi, stack)</cell></row><row><cell>2.31</cell><cell></cell><cell>context =previous context</cell></row><row><cell>2.32</cell><cell></cell><cell>end</cell></row><row><cell>2.33</cell><cell 
cols="2">end</cell></row><row><cell>2.34</cell><cell>end</cell><cell></cell></row><row><cell></cell><cell></cell><cell>Algorithm 2:</cell></row></table><note>foreach rule r = a ← b1, . . . , bn, not bn+1, . . . , not bm of P do context =context ∪{b1, . . . , (not )bi−1, (not )bi+1, . . . , not bm}</note></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">XSB-Prolog and Smodels are freely available, at: http://xsb.sourceforge.net and http://www.tcs.hut.fi/Software/smodels.</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="10">Acknowledgements</head><p>We thank Marco Alberti, José Júlio Alferes, Pierangelo Dell'Acqua, Robert Kowalski, Juan Carlos Nieves, Mauricio Osorio, and David Warren for their comments.</p></div>
			</div>

			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Example 10. Police and Tear Gas Issue. Consider this other NLP, where 'tear_gas', 'fire', and 'water_cannon' are the only abducibles. Notice that inspect is applied to calls.</p><p>← police, riot, not contain. % this is an Integrity Constraint contain ← tear_gas. contain ← water_cannon. smoke ← fire.</p><p>smoke ← inspect(tear_gas). police.</p><p>riot.</p><p>Notice the two rules for 'smoke'. The first states that one explanation for smoke is fire, when assuming the hypothesis 'fire'. The second states 'tear_gas' is also a possible explanation for smoke. However, the presence of tear gas is a much more unlikely situation than the presence of fire; after all, tear gas is only used by police to contain riots, and that is truly an exceptional situation. Fires are much more common and spontaneous than riots. For this reason, 'fire' is a much more plausible explanation for 'smoke' and, therefore, in order to let the explanation for 'smoke' be 'tear_gas', there must be a plausible reason -imposed by some other likely phenomenon. This is represented by inspect(tear_gas) instead of simply 'tear_gas'. The 'inspect' construct disallows regular abduction -only allowing meta-abduction -to be performed whilst trying to solve 'tear_gas'. I.e., if we take tear gas as an abductive solution for smoke, this rule imposes that the step where we abduce 'tear_gas' is performed elsewhere, not under the derivation tree for 'smoke'. Thus, 'tear_gas' is an inspection point. The IC, because there is 'police' and a 'riot', forces 'contain' to be true, and hence 'tear_gas' or 'water_cannon', or both, must be abduced. 'smoke' is only explained if, at the end of the day, 'tear_gas' is abduced to enact containment. Abductive solutions should be plausible, and 'smoke' is plausibly explained by 'tear_gas' if there is a reason, a best explanation, that makes the presence of tear gas plausible; in this case the riot and the police. 
Plausibility is an important concept in science, for lending credibility to hypotheses. Assigning plausibility measures to situations is an orthogonal issue. In this example, another way of viewing the need for the new mechanism embodied by the inspect predicate is to consider we have two agents: one is a police officer and has the possibility of abducing (corresponding to actually throwing) tear_gas; the other agent is a civilian who, obviously, does not have the possibility of abducing (throwing) tear_gas. For the police officer agent, having the inspect in the smoke ← inspect(tear_gas) rule is unnecessary: the agent knows that tear_gas is the explanation for smoke because it was he himself who abduced (threw) tear_gas; but for the civilian agent the inspect in the smoke ← inspect(tear_gas) rule is absolutely indispensable, since he cannot abduce tear_gas and therefore cannot know, without inspecting, whether that is the real explanation for smoke.</p><p>Example 11. Nuclear Power Plant Decision Problem. This example was extracted from <ref type="bibr" target="#b30">[31]</ref> and adapted to our current designs; its abducibles do not represent actions. In a nuclear power plant there is a decision problem: cleaning staff will dust the power plant on cleaning days, but only if there is no alarm sounding. The alarm sounds when the temperature in the main reactor rises above a certain threshold, or if the alarm itself is faulty. When the alarm sounds everybody must evacuate the power plant imme-</p></div>			</div>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Scenario semantics of extended logic programs</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">J</forename><surname>Alferes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">M</forename><surname>Dung</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">M</forename><surname>Pereira</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">LPNMR</title>
				<imprint>
			<publisher>MIT Press</publisher>
			<date type="published" when="1993">1993</date>
			<biblScope unit="page" from="334" to="348" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Abduction in well-founded semantics and generalized stable models via tabled dual programs</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">J</forename><surname>Alferes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">M</forename><surname>Pereira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Swift</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">TPLP</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="383" to="428" />
			<date type="published" when="2004-07">July 2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Arithmetic classification of perfect models of stratified programs</title>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">R</forename><surname>Apt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">A</forename><surname>Blair</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Fundam. Inform</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="339" to="343" />
			<date type="published" when="1991">1991</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m" type="main">XASP: Answer Set Programming with XSB and Smodels</title>
		<author>
			<persName><forename type="first">L</forename><surname>Castro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Swift</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">S</forename><surname>Warren</surname></persName>
		</author>
		<ptr target="http://xsb.sourceforge.net/packages/xasp.pdf" />
		<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">An environment for the exploration of non monotonic logic programs</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">F</forename><surname>Castro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">S</forename><surname>Warren</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Procs. WLPE&apos;01</title>
				<editor>
			<persName><forename type="first">A</forename><surname>Kusalik</surname></persName>
		</editor>
		<meeting>s. WLPE&apos;01</meeting>
		<imprint>
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Hyprolog: A new logic programming language with assumptions and abduction</title>
		<author>
			<persName><forename type="first">H</forename><surname>Christiansen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Dahl</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ICLP</title>
				<imprint>
			<date type="published" when="2005">2005</date>
			<biblScope unit="page" from="159" to="173" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">The dlv system: Model generator and advanced frontends (system description)</title>
		<author>
			<persName><forename type="first">S</forename><surname>Citrigno</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Eiter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Faber</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Gottlob</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Koch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Leone</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Mateis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Pfeifer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Scarcello</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Workshop in Logic Programming</title>
				<imprint>
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">SLDNFA: an abductive procedure for normal abductive programs</title>
		<author>
			<persName><forename type="first">M</forename><surname>Denecker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">De</forename><surname>Schreye</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Procs. ICLP&apos;92</title>
				<meeting>s. ICLP&apos;92<address><addrLine>Washington, USA</addrLine></address></meeting>
		<imprint>
			<publisher>The MIT Press</publisher>
			<date type="published" when="1992">1992</date>
			<biblScope unit="page" from="686" to="700" />
		</imprint>
	</monogr>
	<note>Apt, editor</note>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">A Classification-Theory of Semantics of Normal Logic Programs: I, II</title>
		<author>
			<persName><forename type="first">J</forename><surname>Dix</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Fundamenta Informaticae</title>
		<imprint>
			<biblScope unit="volume">XXII</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="257" to="288" />
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Reducing disjunctive to non-disjunctive semantics by shifting operations</title>
		<author>
			<persName><forename type="first">J</forename><surname>Dix</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Gottlob</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Marek</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Fundamenta Informaticae</title>
		<imprint>
			<biblScope unit="volume">28</biblScope>
			<biblScope unit="page" from="87" to="100" />
			<date type="published" when="1996">1996</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Abduction from logic programs: semantics and complexity</title>
		<author>
			<persName><forename type="first">T</forename><surname>Eiter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Gottlob</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Leone</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Theoretical Computer Science</title>
		<imprint>
			<biblScope unit="volume">189</biblScope>
			<biblScope unit="issue">1-2</biblScope>
			<biblScope unit="page" from="129" to="177" />
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Consistency of Clark&apos;s completion and existence of stable models</title>
		<author>
			<persName><forename type="first">F</forename><surname>Fages</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Methods of Logic in Computer Science</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="51" to="60" />
			<date type="published" when="1994">1994</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">The well-founded semantics for general logic programs</title>
		<author>
			<persName><forename type="first">A</forename><surname>Van Gelder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">A</forename><surname>Ross</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">S</forename><surname>Schlipf</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">J. of ACM</title>
		<imprint>
			<biblScope unit="volume">38</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="620" to="650" />
			<date type="published" when="1991">1991</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">The stable model semantics for logic programming</title>
		<author>
			<persName><forename type="first">M</forename><surname>Gelfond</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Lifschitz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ICLP/SLP</title>
				<imprint>
			<publisher>MIT Press</publisher>
			<date type="published" when="1988">1988</date>
			<biblScope unit="page" from="1070" to="1080" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Disjunctive defaults</title>
		<author>
			<persName><forename type="first">M</forename><surname>Gelfond</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Przymusinska</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Lifschitz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Truszczynski</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">KR-91</title>
				<imprint>
			<date type="published" when="1991">1991</date>
			<biblScope unit="page" from="230" to="237" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">A uniform approach to logic programming semantics</title>
		<author>
			<persName><forename type="first">P</forename><surname>Hitzler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Wendt</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">TPLP</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="issue">1-2</biblScope>
			<biblScope unit="page" from="93" to="121" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">A fixpoint characterization of abductive logic programs</title>
		<author>
			<persName><forename type="first">K</forename><surname>Inoue</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Sakama</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Logic Programming</title>
		<imprint>
			<biblScope unit="volume">27</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="107" to="136" />
			<date type="published" when="1996">1996</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">The role of abduction in logic programming</title>
		<author>
			<persName><forename type="first">A</forename><surname>Kakas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Kowalski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Toni</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Handbook of Logic in AI and LP</title>
				<imprint>
			<publisher>Oxford University Press</publisher>
			<date type="published" when="1998">1998</date>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page" from="235" to="324" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Smodels -an implementation of the stable model and well-founded semantics for normal logic programs</title>
		<author>
			<persName><forename type="first">I</forename><surname>Niemelä</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Simons</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Procs. LPNMR&apos;97</title>
				<meeting>s. LPNMR&apos;97</meeting>
		<imprint>
			<date type="published" when="1997">1997</date>
			<biblScope unit="volume">1265</biblScope>
			<biblScope unit="page" from="420" to="429" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Expressing extension-based semantics based on stratified minimal models</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">C</forename><surname>Nieves</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Osorio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Zepeda</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Logic, Language, Information and Computation</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2009">2009</date>
			<biblScope unit="volume">5514</biblScope>
			<biblScope unit="page" from="305" to="319" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Revised stable models - a semantics for logic programs</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">M</forename><surname>Pereira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Pinto</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Progress in AI</title>
				<editor>
			<persName><forename type="first">G</forename><surname>Dias</surname></persName>
		</editor>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2005">2005</date>
			<biblScope unit="volume">3808</biblScope>
			<biblScope unit="page" from="29" to="42" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Approved models for normal logic programs</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">M</forename><surname>Pereira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Pinto</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Procs. LPAR&apos;07, LNAI</title>
				<editor>
			<persName><forename type="first">N</forename><surname>Dershowitz</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Voronkov</surname></persName>
		</editor>
		<meeting>Procs. LPAR&apos;07, LNAI<address><addrLine>Yerevan, Armenia, October</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Inspection points and meta-abduction in logic programs</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">M</forename><surname>Pereira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Pinto</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">INAP&apos;09</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Layer supported models of logic programs</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">M</forename><surname>Pereira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Pinto</surname></persName>
		</author>
		<ptr target="http://centria.di.fct.unl.pt/~lmp/publications/online-papers/LSMs.pdf" />
	</analytic>
	<monogr>
		<title level="m">Procs. 10th LPNMR, LNCS</title>
				<meeting>Procs. 10th LPNMR, LNCS</meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2009-09">September 2009</date>
			<biblScope unit="page" from="450" to="456" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Stable model implementation of layer supported models by program transformation</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">M</forename><surname>Pereira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Pinto</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">INAP&apos;09</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Adding closed world assumptions to well-founded semantics</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">M</forename><surname>Pereira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">J</forename><surname>Alferes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">N</forename><surname>Aparício</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Theoretical Computer Science</title>
		<imprint>
			<biblScope unit="volume">122</biblScope>
			<biblScope unit="issue">1-2</biblScope>
			<biblScope unit="page" from="49" to="68" />
			<date type="published" when="1994">1994</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Layered models top-down querying of normal logic programs</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">M</forename><surname>Pereira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Pinto</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Procs. PADL&apos;09</title>
				<meeting>Procs. PADL&apos;09</meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2009-01">January 2009</date>
			<biblScope unit="volume">5418</biblScope>
			<biblScope unit="page" from="254" to="268" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Every logic program has a natural stratification and an iterated least fixed point model</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">C</forename><surname>Przymusinski</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">PODS</title>
				<imprint>
			<publisher>ACM Press</publisher>
			<date type="published" when="1989">1989</date>
			<biblScope unit="page" from="11" to="21" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Well-founded and stationary models of logic programs</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">C</forename><surname>Przymusinski</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Annals of Mathematics and Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page" from="141" to="187" />
			<date type="published" when="1994">1994</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Perfect model semantics</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">C</forename><surname>Przymusinski</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ICLP/SLP</title>
				<imprint>
			<date type="published" when="1988">1988</date>
			<biblScope unit="page" from="1081" to="1096" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">Abduction with negation as failure for active and reactive rules</title>
		<author>
			<persName><forename type="first">F</forename><surname>Sadri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Toni</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">AI*IA</title>
				<imprint>
			<date type="published" when="1999">1999</date>
			<biblScope unit="page" from="49" to="60" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">Tabling for non-monotonic programming</title>
		<author>
			<persName><forename type="first">T</forename><surname>Swift</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Annals of Mathematics and Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="issue">3-4</biblScope>
			<biblScope unit="page" from="201" to="240" />
			<date type="published" when="1999">1999</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">An abstract machine for SLG resolution: definite programs</title>
		<author>
			<persName><forename type="first">T</forename><surname>Swift</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">S</forename><surname>Warren</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Symp. on Logic Programming</title>
				<imprint>
			<date type="published" when="1994">1994</date>
			<biblScope unit="page" from="633" to="652" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">Depth-first search and linear graph algorithms</title>
		<author>
			<persName><forename type="first">R</forename><surname>Tarjan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">SIAM J. Computing</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="146" to="160" />
			<date type="published" when="1972">1972</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
