<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
<article-title>Formalizing Informal Logic and Natural Language Deductivism*</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Gopal Gupta</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sarat Varanasi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kinjal Basu</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Zhuo Chen</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Elmer Salazar</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Farhad Shakerin</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Serdar Erbatur</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Fang Li</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Huaduo Wang</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Zesheng Xu</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>The University of Texas at Dallas, Richardson, USA; Joaquín Arias, Universidad Rey Juan Carlos</institution>
          ,
          <addr-line>Madrid, Spain; Brendan Hall, Kevin Driscoll, Honeywell Corp, Minneapolis, MN</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2020</year>
      </pub-date>
      <abstract>
        <p>Formalizing the human thought process has been considered fiendishly difficult. The field of informal logic has been developed in recognition of this difficulty. Work in informal logic interprets an argument as an attempt to present evidence for a conclusion. Holloway and Wasson have developed a primer to establish the terms, concepts, principles, and uses of arguments. We argue that recent advances in formal logic, especially incorporation of negation as failure, facilitate the formalization of the human thought process. These advances help formalize concepts that were hitherto thought of as impossible to formalize. We show how the paradigm of answer set programming can be used to formalize all the concepts presented in Holloway and Wasson's primer.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Formalizing the human thought process has been considered very hard. The study of the human thought process
has been conducted over several millennia [
        <xref ref-type="bibr" rid="ref10 ref17">17, 10</xref>
        ]. In modern times this effort culminated in Boolean logic
[
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], first order logic [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], and various other advanced logics. These logics, however, are limited and could
not match the sophistication of human reasoning, in the sense that it is hard to use them to faithfully
model the human thought process in an elegant manner. First, we make a number of points [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]:
• The early problems of naive set theory found by Russell (Russell’s paradox [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]) led mathematicians
and logicians to only focus on well-founded or inductive reasoning which stipulates that to reason
soundly one has to start from the simplest object (e.g., an empty set) and then build larger objects
by embellishing this simplest object (i.e., obtain one element sets by adding an element to the empty
set, two element sets by adding an element to one element sets and so on). Thus, assumption-based
reasoning that humans frequently employ, and which requires circular or coinductive reasoning [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ],
was banished from mathematical discourse. Only recently has work on coinductive reasoning been
taken up [
        <xref ref-type="bibr" rid="ref1 ref20 ref32 ref6">1, 6, 33, 20</xref>
        ].
      </p>
      <p>
        *Work partially supported by NSF awards IIS 1718945, IIS 1910131, IIP 1916206, and by DoD and Amazon. Copyright ©2021
for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
• Systems of logic could not reason about themselves, i.e., be reflective. Part of the reason is the
exclusive focus on allowing only inductive structures. Meta-reasoning was disallowed due to fears of
unsoundness and circularity. As a result, for example, classical logics cannot
predicate a conclusion on the failure of a proof in the logic itself. In fact, Tarski stipulated that
given a logic L1, we need another logic L2 to reason about L1, and yet another logic L3 to reason
about L2, and so on, ad infinitum [
        <xref ref-type="bibr" rid="ref33">34</xref>
        ]. Thus, Tarski deemed it impossible for a language to have its
own truth predicate. Only in 1975 did Kripke show that a language can consistently contain its own
truth predicate [
        <xref ref-type="bibr" rid="ref26">27</xref>
        ].
• The concept of negation as failure [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] was added to logic along with the notion of stable model
semantics, which admitted multiple worlds [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. Negation-as-failure allows us to take an action if a proof
fails: a notion frequently employed by humans (if something does not work out, do something else).
Classical logic (based on inductive semantics) cannot reason about proof failure. If, for example, we
program reachability (of one node from another in a directed graph) in logic, then the axioms for
reachability cannot be used to prove unreachability. Axioms for unreachability have to be given
separately to reason about unreachability. With negation-as-failure, we can easily realize unreachability
by stating that if the proof of reachability from node N1 to N2 fails, then node N2 is unreachable from
node N1.
• While coinductive reasoning and negation as failure have been around for 30-40 years, they did not
lead to formalization of the human thought process. The advent of answer set programming based on
the idea of negation-as-failure [
        <xref ref-type="bibr" rid="ref15 ref27">15, 28</xref>
        ] made this possible, where complex human thought processes
such as default reasoning, counterfactual reasoning, nonmonotonic reasoning, abductive reasoning
and possible-worlds reasoning could now be realized in a formal framework in an elegant manner.
Progress was still limited by the type of implementations available for answer set programming that
precluded goal-directed thinking. The design of goal-directed or query-driven ASP execution engines
such as s(ASP) [
        <xref ref-type="bibr" rid="ref28">29</xref>
        ] and s(CASP) [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] solved this problem.
• Formalization of the human thought process has been further complicated by the fact that humans may
use the same linguistic pattern to express different logical forms. This is made evident in
Wason’s Selection Task [
        <xref ref-type="bibr" rid="ref24">25</xref>
        ] where "if A then B" may be used by humans to represent both A ⇒ B and
A ⇔ B.
      </p>
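<p>The reachability example above can be made concrete. The following Python sketch is our own illustration (the graph and predicate names are invented, not from any cited system): unreachability is obtained simply as the failure of the proof of reachability.</p>
<p>
```python
# Illustrative sketch (hypothetical names): negation as failure
# realized as unreachability in a directed graph.

def reachable(graph, a, b, seen=None):
    """True if node b is reachable from node a via directed edges."""
    if seen is None:
        seen = set()
    if a == b:
        return True
    seen.add(a)  # guard against cycles
    return any(reachable(graph, n, b, seen)
               for n in graph.get(a, []) if n not in seen)

def unreachable(graph, a, b):
    """Negation as failure: b is unreachable from a exactly when the
    proof of reachable(a, b) fails."""
    return not reachable(graph, a, b)

graph = {"n1": ["n2"], "n2": ["n3"]}
print(reachable(graph, "n1", "n3"))    # True
print(unreachable(graph, "n3", "n1"))  # True
```
</p>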
      <p>
        Given the lack of success of classical logic in formalizing human thought processes, work was started
on the study of "informal logic" [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. Similar to classical logic, work in informal logic is focused on inference
rules (called argument in the informal logic literature) and reasoning (proof). An argument is an attempt to
present evidence for a conclusion (or a claim) and it relies on premises that support the conclusion. Holloway
and Wasson have developed an excellent primer on this subject [
        <xref ref-type="bibr" rid="ref23">24</xref>
        ]. We show how all the terms in the primer
can be mapped to answer set programming constructs and how arguments, evidence, etc., can be represented
and executed on our s(CASP) ASP engine. The ASP code for various examples of the primer is shown later
and the output of the s(CASP) system for the respective query of each example is shown in the Appendix.
The output includes the computed answer, the model, and the proof trace. A more comprehensive example
from the argumentation literature has also been worked out.
      </p>
      <p>
        As an aside, it should be noted that ASP/s(CASP) technology can automate the overarching properties
framework [21]. The OAP framework envisions that to have confidence in a system, establishing three
overarching properties of the system suffices: intent, correctness, and innocuity [
        <xref ref-type="bibr" rid="ref22">23</xref>
        ].
      </p>
    </sec>
    <sec id="sec-2">
      <title>Mapping Informal Logic to ASP</title>
      <p>We first cast all the terms used in the primer in terms of those used in logic, logic programming (LP)
and answer set programming (ASP). Terms from the primer are italicized while those from LP/ASP are in
boldface font.</p>
      <p>• A conclusion is a theorem. In LP/ASP, it translates into a query. The word claim is also used
sometimes in assurance literature. A claim is also a theorem or a query in LP/ASP terminology.
• An argument is a clause (i.e., a rule).
• A premise ("what an audience believes") is a belief based on knowledge or assumption. A premise
can be of two types: evidence or assumption. Evidence is a fact. An assumption is an abducible.
• Reasoning corresponds to a proof. To perform reasoning, in LP/ASP we develop a proof tree.
• A binding is a substitution, i.e., a value given to a variable that is existentially or universally
quantified. It is no different from what is understood to be a binding in LP/ASP.
• A defeater is a negated goal (negation here is negation-as-failure, as it “provides support for not
believing”). Note that ASP has two types of negation. The second type, classical negation, represented
as -p, is a premise.</p>
      <p>
        Before we go on to mapping other concepts of the primer to ASP, we give a brief introduction to abductive
reasoning. The term abduction refers to a form of reasoning that is concerned with the generation and
evaluation of explanatory hypotheses. We could also think of abduction as assumption-based reasoning.
Abductive reasoning leads back from facts to a proposed explanation of those facts, or to assumptions that will
explain those facts. According to Harman [
        <xref ref-type="bibr" rid="ref21">22</xref>
        ], abductive reasoning takes the following form:
The fact B is observed.
      </p>
      <p>But if A were (assumed) true, B would be a matter of course.</p>
      <p>Hence, there is reason to suspect that A is true.</p>
      <p>In this form, B can be either a particular event or an empirical generalization. A serves as an explanatory
hypothesis, and B follows from A combined with relevant background knowledge. Note that A is not necessarily
true, but plausible and worthy of further validation. We can also think of A as an assumption that we must
make to explain the observation B. A simple example of abductive reasoning is that one might attribute the
symptoms of a common cold to a viral infection. Or, that if we assume viral infection, then no wonder the
person has symptoms of a cold. More formally, abduction is a form of reasoning where, given the premise
P ⇒ Q and the observation Q, one surmises (abduces) that P holds. More generally, given a theory T, an
observation O, and a set of abducibles A, E is an explanation of O (where E ⊆ A) if:
1. T ∪ E ⊨ O
2. T ∪ E is consistent</p>
      <p>We can think of abducibles A as a set of assumptions. Generally, A consists of a set of propositions such
that if p ∈ A, then there is no rule in theory T with p as its head (that is, there is no way to argue for p).</p>
      <p>We assume the theory T to be an answer set program. Under a goal-directed execution regime, an ASP
system can be extended with abduction by simply adding the following even loop through negation for an abducible p:
p :- not not_p.</p>
      <p>not_p :- not p.
This is automatically achieved for a predicate p that we want to declare as an abducible in the s(CASP)
system through the declaration:</p>
      <p>#abducible p.</p>
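<p>To make the definition above concrete, here is a small Python sketch (our own illustration, not the paper's code) that treats abduction as "guess a subset E of the abducibles A and check that T ∪ E entails the observation O", for a propositional Horn theory T evaluated by forward chaining. Consistency is trivially satisfied here since the sketch has no negation.</p>
<p>
```python
# Illustrative sketch: abduction over a propositional Horn theory.
from itertools import chain, combinations

def closure(rules, facts):
    """Forward-chain rules of the form (head, [body atoms...])."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

def explanations(rules, abducibles, observation):
    """All subsets E of the abducibles whose closure contains the observation."""
    subsets = chain.from_iterable(
        combinations(abducibles, r) for r in range(len(abducibles) + 1))
    return [set(E) for E in subsets if observation in closure(rules, E)]

# T: viral_infection implies cold_symptoms; O: cold_symptoms.
rules = [("cold_symptoms", ["viral_infection"])]
print(explanations(rules, ["viral_infection"], "cold_symptoms"))
# → [{'viral_infection'}]
```
</p>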
      <p>We now proceed to give the rest of the mapping from the primer to ASP.
Atomic and Compound Arguments: An atomic argument is a clause (rule) that only uses facts and
abducibles in its body. A compound argument is a clause (rule) that has facts, abducibles, and references
to other rules in its body.</p>
      <p>
        Cogent Arguments: An argument is cogent if it rationally justifies believing its conclusion to the required
standard of confidence [
        <xref ref-type="bibr" rid="ref23">24</xref>
        ]. How do we simulate cogency in ASP? ASP can model various shades of
confidence in a conclusion by a combination of negation as failure and classical negation. Given not p,
where not is interpreted as negation as failure, it will be interpreted as “no evidence of p”. Similarly, -p
denotes that p is unconditionally false, i.e., there is irrefutable evidence that p is false. Negation as failure
and classical negation can be combined to create nuanced reasoning. Given a proposition p (e.g., p =
"it is raining now"):
      </p>
      <sec id="sec-2-1">
        <title>1. p: denotes that p is unconditionally true.</title>
      </sec>
      <sec id="sec-2-2">
        <title>2. not -p: denotes that p may be true.</title>
        <p>3. not p ∧ not -p: denotes that p is unknown, i.e., there is no evidence of either p or -p.
4. not p: denotes that p may be false (no evidence that p is true).</p>
      </sec>
      <sec id="sec-2-3">
        <title>5. -p: denotes that p is unconditionally false.</title>
        <p>Cogency can be represented in ASP using these five shades of truth. Thus, if we consider a criminal
case vs a civil case, e.g., O.J. Simpson murder trial, then for O.J. Simpson to be acquitted, a proof of
not murdered(oj, nicole) will be needed for the civil case, while a proof of -murdered(oj,
nicole) will be needed for the criminal one.
</p>
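<p>The five shades can be mimicked outside ASP as well. The following Python sketch is our own encoding (the sets proved and refuted are invented stand-ins for "provably true" and "provably false via classical negation"); it reports which shades hold for a proposition:</p>
<p>
```python
# Illustrative sketch: the five shades of truth from combining
# negation as failure (not) with classical negation (-).
def shades(p, proved, refuted):
    """Return the shades that hold for p, given what is provable
    (proved) and what is provably false (refuted)."""
    s = []
    if p in proved:
        s.append("p: unconditionally true")
    if p not in refuted:
        s.append("not -p: p may be true")
    if p not in proved and p not in refuted:
        s.append("not p and not -p: p is unknown")
    if p not in proved:
        s.append("not p: p may be false")
    if p in refuted:
        s.append("-p: unconditionally false")
    return s

print(shades("raining", {"raining"}, set()))
# shades 1 and 2 hold: p is unconditionally true, hence also may be true
```
</p>
Note how an unknown proposition satisfies shades 2, 3, and 4 simultaneously, matching the reading that "unknown" means neither p nor -p is provable.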
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Precepts</title>
      <p>The Primer presents a number of guiding principles called precepts. We discuss them here in light of our
ASP rendering.</p>
      <p>Locality: The precept of locality states that the cogency of a compound argument never exceeds the cogency
of its weakest atomic argument. This obviously holds in our ASP rendering, assuming rules are written in a
reasonable manner following the precept of locality, and normal rules of logical inference are followed. For
example, if we assert that all crows are birds (bird(X) if crow(X)), then if we establish that Jimmy
may not be a crow (not crow(jimmy)), we can only conclude that Jimmy may not be a bird either and
no more.</p>
      <p>
        Depth: The precept of depth states that "argument decomposition must descend far enough to serve
stakeholder objectives, and not so far as to unnecessarily consume resources, create distraction, ...". In ASP
rendering this translates to the level of granularity to which the modeling is done. When we give arguments
to have our audience believe a certain conclusion, we furnish a proof. The proof tree serves as the
justification for the conclusion. The proof tree can be displayed to the depth desired to keep the explanation high
level, even though our modeling and reasoning may be very low level. The interactive proof viewing facility
of s(CASP), for example, allows one to explore the proof tree to any level of detail [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
Change: Change relates to the fact that the context in which an argument is made may change. Arguments
may be made to assure that a system is safe before deployment. However, post deployment, a new set of
situations may be encountered and the assurance argument may fall apart. Change, thus, pertains to
nonmonotonicity of our knowledge. That is, a conclusion drawn now may have to be withdrawn later as new
knowledge becomes available. We know that Tweety is a bird, so we conclude it can fly. However, later we
discover that Tweety is a penguin, so this conclusion has to be withdrawn. ASP is based on a non-monotonic
logic as it incorporates negation as failure. Thus, change is easily accommodated in our ASP rendering. As
our knowledge grows post deployment, we can refine our arguments and our proofs. In fact, ASP allows
for any accommodations that may have to be modeled in advance (i.e., known unknowns can be modeled).
However, if it turns out that the known unknowns do not arise, that is not a problem either. In fact, ASP's
major strength is being able to model a situation even when information is lacking.
      </p>
      <p>
        Induction: The induction precept states that not all reasoning may be deductive. This can also be modeled
using ASP, since we can model analogical, explanatory, defeasible, counterfactual, and various other types
of reasoning, thanks to the presence of negation as failure and possible world semantics. In fact, an ASP rule
captures (enumerative1) inductive reasoning [
        <xref ref-type="bibr" rid="ref34">35</xref>
        ] quite precisely by also stating the exceptions to an induced
default rule. For example, if we see a number of swans and all of them are white, then we may induce that
all swans are white. However, we can never be sure that all swans are white, so we could leave room for
exceptions and code it in ASP as:
color(X, white) :- swan(X), not abnormal_swan(X).
      </p>
      <p>abnormal_swan(X) :- black_swan(X).</p>
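<p>The behavior of this default rule with an exception can be sketched procedurally. This is a minimal Python rendering of ours, with invented data; the ASP rule itself is what the text uses:</p>
<p>
```python
# Illustrative sketch: a default with an exception, mirroring
# color(X, white) :- swan(X), not abnormal_swan(X).
swans = {"sue", "bob"}
black_swans = {"bob"}  # evidence that makes a swan abnormal

def abnormal_swan(x):
    return x in black_swans

def color(x):
    """Default: swans are white, unless proved abnormal
    (negation as failure on abnormal_swan)."""
    if x in swans and not abnormal_swan(x):
        return "white"
    return None  # no conclusion is drawn

print(color("sue"))  # white
print(color("bob"))  # None: the default is defeated for black swans
```
</p>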
      <p>
        This property of default rules has been exploited for making machine learning explainable [
        <xref ref-type="bibr" rid="ref30">31</xref>
        ].
Plausibility: The precept of plausibility states that people have biases, beliefs, etc., that will color their
perception of the world. This is also easily modeled in the ASP framework through the five shades of truth
mentioned above. Consider a physician who is about to prescribe a medicine to a patient. A physician with
1Enumerative induction is an inductive method in which a conclusion is constructed based upon the number of instances that
support it [
        <xref ref-type="bibr" rid="ref34">35</xref>
        ].
aggressive thinking may immediately prescribe the medicine and if any side-effects show up later, he/she
will worry about them then. In ASP, this will be modeled as:
prescribe(M, D, P) :- cures(M, D), not contraindicated(M, P).
contraindicated(M, P) :- positive_for_side_effects(M, P).
which states that medicine M can be prescribed to patient P for disease D if, normally, medicine M cures
disease D, and there is no evidence of contraindications for medicine M in person P. The rule here states that
the medicine should be given without testing for side-effects; however, if for some reason we know (or we
learn later) that the patient tests positive for medicine M's side-effects, then the medicine will be stopped
from being prescribed. That's how reasoning in ASP works. Note that this reasoning makes sense,
for example, when we know that only 1% of the patients are allergic to the medicine, so the chances of having
a side-effect are very low.
      </p>
      <p>In contrast, a conservatively thinking doctor may decide to first ensure in advance that medicine M does
not test positive for any side effects for person P. This doctor does not want to take even a 1% chance. This
conservative thinking will be modeled as:
prescribe(M, D, P) :- cures(M, D), not contraindicated(M, P).
contraindicated(M, P) :- not -positive_for_side_effects(M, P).
which states that the medicine can be prescribed only if the patient does not test positive for M’s side-effect.</p>
      <p>The aggressive reasoning rule is read as follows: prescribe medicine M to patient P for disease D if M
cures D and contraindication of M for P maybe false. Since contraindication is qualified with a maybe, the
prescribe goal can succeed without performing the test. The conservative reasoning rule, in contrast, is
read as follows: prescribe medicine M for patient P for disease D if M cures D and patient definitely does
not test positive to M’s side-effect.
</p>
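<p>The contrast between the two rules can be sketched as follows. This Python rendering is our own illustration with invented data: the aggressive rule only blocks prescription on positive evidence of side effects, while the conservative rule requires definite evidence of their absence.</p>
<p>
```python
# Illustrative sketch: aggressive vs. conservative prescription,
# contrasting "no evidence of side effects" (negation as failure)
# with "proved side-effect free" (classical negation).
tested_positive = set()  # (medicine, patient) pairs proved to have side effects
tested_negative = set()  # (medicine, patient) pairs proved side-effect free
cures = {("aspirin", "flu")}

def prescribe_aggressive(m, d, p):
    # contraindicated only if there is evidence of side effects
    return (m, d) in cures and (m, p) not in tested_positive

def prescribe_conservative(m, d, p):
    # contraindicated unless the patient definitely tested negative
    return (m, d) in cures and (m, p) in tested_negative

print(prescribe_aggressive("aspirin", "flu", "pat"))    # True: no evidence of harm
print(prescribe_conservative("aspirin", "flu", "pat"))  # False: no test done yet
```
</p>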
    </sec>
    <sec id="sec-4">
      <title>Discussion</title>
      <p>
        It should be noted that ASP is especially good at default reasoning. Defaults are used by humans all the
time to jump to conclusions. Defaults are statements that begin with the word normally (normally, birds
fly). Humans learn defaults and then gradually learn exceptions to them (e.g., exceptions to the default rule
about flying such as: penguins don’t fly, wounded birds don’t fly, ostriches don’t fly, newly born baby birds
don’t fly, etc.). There may be multiple defaults in some cases, and humans learn to prefer one over the other.
In fact, our biases, expertise, etc., are captured as default rules and exceptions (plus preferences over defaults)
that reside in our minds. Expert knowledge is nothing but a set of defaults, exceptions and preferences about
some very specialized knowledge that the expert has acquired through studying and practice over a number
of years. Given that ASP is very good at representing defaults, exceptions and preferences, modeling real
world situations in ASP is quite feasible. In fact, our group has built tools [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] that model a cardiologist’s
expertise for treating congestive heart failure using ASP. Our tool outperforms cardiologists [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. This system
is based on ASP and represents complex expert knowledge found in guidelines for treating heart failure [
        <xref ref-type="bibr" rid="ref35">36</xref>
        ]
as ASP rules. Similarly, ASP technology allows us to answer natural language questions against a textual
passage or a graphical image by invoking common sense knowledge [
        <xref ref-type="bibr" rid="ref7 ref8">8, 7</xref>
        ].
      </p>
      <p>
        A significant amount of research has been invested in formalizing argumentation using logic programming
and answer set programming [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. However, the logic-based modeling in all these approaches is based on
propositions. In contrast, we can model claims, arguments, evidence and assumptions at the predicate level
using our s(CASP) query-driven ASP engine. We next illustrate our method with a detailed example.
      </p>
    </sec>
    <sec id="sec-5">
      <title>An Illustrative Example</title>
      <p>
        We take an example from the work of Modgil and Prakken [
        <xref ref-type="bibr" rid="ref29">30</xref>
        ] that narrates a scenario where a person,
John, is seen in Holland Park by Mary, an observer. Modgil and Prakken use this scenario to illustrate the
complexity of argumentation research. We use it to demonstrate how elegantly this complex scenario can be
modeled in ASP and executed in our s(CASP) system to automatically verify claims. Note that the original
text presents the scenario as observed by “us”. We have changed the observer to Mary.
      </p>
      <p>Suppose Mary believes that John was in Holland Park some morning and that Holland Park is
in London. Then Mary can deductively reason from these beliefs, to conclude that John was
in London that morning. So the reasoning cannot be attacked. However, perfection remains
unattainable since the argument is still fallible: its grounds may turn out to be wrong. For
instance, Jan may tell us that he met John in Amsterdam that morning around the same time.
We now have a reason against Mary’s belief that John was in Holland Park that morning, since
witnesses usually speak the truth. Can we retain our belief or must we give it up? The answer
to this question determines whether we can accept that John was in London that morning.
Maybe Mary originally believed that John was in Holland Park for a reason. Maybe Mary went
jogging in Holland Park and she saw John. We then have a reason supporting Mary’s belief
that John was in Holland Park that morning, since we know that a person’s senses are usually
accurate. But we cannot be sure, since Jan told us that he met John in Amsterdam that morning
around the same time. Perhaps Mary’s senses betrayed her that morning? But then we hear that
Jan has a reason to lie, since John is a suspect in a robbery in Holland Park that morning and
Jan and John are friends. We then conclude that the basis for questioning Mary’s belief that
John was in Holland Park that morning (namely, that witnesses usually speak the truth and Jan
witnessed John in Amsterdam) does not apply to witnesses who have a reason to lie. So our
reason in support of Mary’s belief is undefeated and we accept it.</p>
      <p>
        The narrative above is an excellent example of claims being made (e.g., John is in London) and then
arguments and evidence being used to establish that claim. The arguments made may possibly encounter
exceptions (defeaters) along the way. We will model the various scenarios using answer set programming.
We will use the event calculus [
        <xref ref-type="bibr" rid="ref31 ref5">32, 5</xref>
        ] to model the situation as it evolves. We also have to make assumptions
about Mary’s eyesight being good, Jan and John being friends and John being a robbery suspect. We will
treat these as abducibles, i.e., we will attempt to prove the claim with the assumption being true or the
assumption being false. Some of the knowledge used in this example is, in fact, generated automatically from
the English text above using techniques we have developed that make use of English text parsers, VerbNet
and our s(CASP) system [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Some commonsense knowledge about various concepts that is needed is also
added.
      </p>
      <p>We start with Mary's claim that John was in London in the morning. This amounts to proving the query:
holds(in_london, morning)
where in_london represents the fluent that John is in London. We next represent the knowledge
encapsulated in the story. Mary, John, and Jan are people. Holland Park is in London. This is represented as facts in
ASP:
person(jan).
person(john).
person(mary).</p>
      <p>is_in(holland_park, london).</p>
      <p>Next we translate the rest of the information. Consider the sentence: Mary saw John in Holland Park. This
sentence is automatically translated into the following facts using our SQuARE system and VerbNet primitives
(for the verbs 'discover' and 'occur'):
discover(morning, during(see_1), agent(mary), theme(john),
source(unknown)).</p>
      <p>occur(morning, event(see_1), theme(john), location(holland_park)).
We automatically extract commonsense knowledge about an observer (Mary and John are both observers of
events). Note that in ASP '_' denotes an anonymous variable (i.e., a variable whose value we don't care about).</p>
      <p>observer(E,X) :- discover(_, during(E), agent(X), _, _).</p>
      <p>Next we define the commonsense knowledge of an event happening. An event E happens at time T in some
location with some theme.</p>
      <p>happens(E,T) :- occur(T, event(E), theme(_), location(_)).</p>
      <p>
        John's presence in London is modeled using the fluent in_london. A fluent is a variable whose value
changes with time as events happen. We model the knowledge following the event calculus [
        <xref ref-type="bibr" rid="ref31">32</xref>
        ]. The fluent
in_london is initiated to become true if Y is seen by A in London.
      </p>
      <p>initiates(E, in_london, T) :-
    discover(T, during(E), agent(A), theme(Y), source(_)),
    occur(T, event(E), theme(Y), location(Loc)),
    is_in(Loc, london), not ab_initiates(E, in_london, T, A).
Note that the initiation process can be defeated due to an abnormal situation (e.g., A has poor eyesight). This
is reflected in the ab_initiates predicate in the rule above.</p>
      <p>ab_initiates(E, in_london, T, A) :- person(A), observer(E, A),
    not accurate(A, sense).
We next complete the definition of the fluent in_london: if Jan sees John in Amsterdam, John cannot be
in London. The definition is completed by defining the terminates primitive of the event calculus for the
fluent in_london. We state that event E terminates the fluent in_london at time T, if Y is seen by X in a
place other than London.</p>
      <p>terminates(E, in_london, T) :-
    perceive(T, during(E), experiencer(X), stimulus(Y)),
    occur(T, event(E), theme(Y), location(Loc)),
    not is_in(Loc, london),
    not ab_terminates(E, in_london, T).</p>
      <p>However, the claim that John is not in London may be defeated due to an abnormal situation (e.g., the supposed
observer is a liar).</p>
      <p>ab_terminates(E, in_london, T) :- person(X), person(Y), observer(E, X),
theme(E, Y), not speaks(X, truth, Y).</p>
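<p>The event-calculus reasoning used here can be summarized by a small sketch. This Python rendering is our own simplification with invented data (real event-calculus axiomatizations are richer): a fluent holds at time T if some earlier event initiated it and no intervening event terminated it.</p>
<p>
```python
# Illustrative sketch: a fluent holds at time t if an earlier event
# initiated it and no event up to t terminated (clipped) it.
def holds(fluent, t, events, initiates, terminates):
    """events: list of (event, time); initiates/terminates: predicates."""
    for e, te in events:
        if te <= t and initiates(e, fluent, te):
            clipped = any(te < tx <= t and terminates(e2, fluent, tx)
                          for e2, tx in events)
            if not clipped:
                return True
    return False

events = [("see_1", 1)]  # Mary sees John (in Holland Park) at time 1
initiates = lambda e, f, t: e == "see_1" and f == "in_london"
terminates = lambda e, f, t: False
print(holds("in_london", 2, events, initiates, terminates))  # True
```
</p>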
      <p>
        Next we define the observer w.r.t. the VerbNet verb perceive, as part of our commonsense knowledge.
We also define what a theme is. Note that some of these technicalities are introduced due to our attempt to
automate translation of English text into ASP using VerbNet [
        <xref ref-type="bibr" rid="ref25 ref8">8, 26</xref>
        ].
      </p>
      <p>observer(E,X) :- person(X), perceive(_, during(E), experiencer(X), _).
theme(E,X) :- person(X), perceive(_, during(E), _, stimulus(X)).
The concept of speaking truth is also modeled as a rule (“witnesses usually speak the truth”). The rule below
states that X will normally speak the truth about observing Y, unless X is a liar (defeater).
speaks(X,truth,Y) :- person(X), person(Y), observer(E,X),</p>
      <p>theme(E,Y), not ab_speaks(X,truth,Y).</p>
      <p>ab_speaks(X,truth,Y) :- may_lie(X,Y).</p>
      <p>We have to model the situations in which a person may lie. We assume that a person lies if we fail to
prove that he/she is a truth-teller (not -lie). We also assume that a person may lie if there is evidence of
conflict of interest. Note that arguments with defeaters can be thought of as default rules with exceptions.
As is obvious, we make extensive use of default rules in this example.</p>
      <p>may_lie(X,Y) :- person(X), person(Y), not -lie(X,Y).
-lie(X,Y) :- person(X), person(Y), not conflict_interest(X,Y).
conflict_interest(X,Y) :- person(X), person(Y), friends(X,Y),
crime_suspect(Y), not ab_conflict_interest(X).
crime_suspect(X) :- person(X), robbery_suspect(X),</p>
      <p>not ab_crime_suspect(X).</p>
      <p>Finally, we represent our assumptions as abducibles. These abducibles are simply declared using the
#abducible declaration in our s(CASP) system. The assumptions we may make are the following: (i)
Jan and John are friends, (ii) John is a robbery suspect, and (iii) Mary is old and infirm.</p>
      <p>#abducible friends(jan, john).
#abducible robbery_suspect(john).
#abducible age(mary, old).</p>
      <p>With the above arguments, evidence, assumptions, and defeaters expressed in ASP, we are ready to verify
our claims. We can make a number of claims: John is in London, John is not in London, and John’s location
is unknown. For simplicity, we express these claims as rules.</p>
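      <p>Each abducible may independently be assumed or not, so the three declarations above induce 2^3 = 8 candidate sets of assumptions (candidate worlds). s(CASP) does not enumerate them blindly; being query-driven, it abduces only the assumptions relevant to the goal. A brute-force Python sketch of the candidate space (purely illustrative; the strings name the abducibles above):</p>

```python
from itertools import combinations

ABDUCIBLES = ["friends(jan,john)", "robbery_suspect(john)", "age(mary,old)"]

def candidate_worlds(abducibles):
    """Every subset of the abducibles is one candidate set of assumptions
    under which a claim may be checked."""
    return [frozenset(c)
            for r in range(len(abducibles) + 1)
            for c in combinations(abducibles, r)]

worlds = candidate_worlds(ABDUCIBLES)  # 2**3 == 8 candidate worlds
```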
      <sec id="sec-5-1">
        <title>Claim #1: John’s location is unknown.</title>
        <p>claim(john_location_unknown) :- not holds(in_london, morning),
not -holds(in_london, morning).</p>
      </sec>
      <sec id="sec-5-2">
        <title>Claim #2: John is not in London.</title>
        <p>claim(john_not_in_london) :- -holds(in_london, morning).</p>
      </sec>
      <sec id="sec-5-3">
        <title>Claim #3: John is in London.</title>
        <p>claim(john_in_london) :- holds(in_london, morning).</p>
      </sec>
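      <p>The three claim rules are mutually exclusive: in a consistent program holds and -holds cannot both be derived, and the "unknown" claim fires exactly when neither is provable. A hypothetical Python rendering of this three-way dispatch (the two booleans stand for provability of holds(in_london, morning) and -holds(in_london, morning)):</p>

```python
def classify(holds, neg_holds):
    """Map provability of holds / -holds to the matching claim.
    Both cannot be provable at once in a consistent program."""
    assert not (holds and neg_holds), "inconsistent: both holds and -holds"
    if holds:
        return "john_in_london"
    if neg_holds:
        return "john_not_in_london"
    return "john_location_unknown"  # neither provable
```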
      <sec id="sec-5-4">
        <title>Now we can execute the query:</title>
        <p>?- claim(X).
to find the answers under various assumptions. The five answers computed by our s(CASP)
system, along with the assumptions under which each claim holds, are shown next.</p>
        <p>There is one scenario (one world, in ASP parlance) in which John’s location is unknown:</p>
        <p>X = john_location_unknown
Assumptions: age(mary,old), friends(jan,john), robbery_suspect(john)</p>
        <p>The answer above states that if Mary is old and infirm, Jan and John are friends, and John is a robbery suspect, then
we cannot really say with certainty whether John is in London or not.</p>
        <p>There are two scenarios in which we can support the claim that John is not in London.</p>
        <p>X = john_not_in_london
Assumptions: age(mary,old), not friends(jan,john)</p>
        <p>X = john_not_in_london
Assumptions: age(mary,old), friends(jan,john), not robbery_suspect(john)</p>
        <p>The claim that John is not in London can only be true if we cannot trust Mary, so the assumption that Mary
is old must hold. In the first case, if Jan and John are not friends, then regardless of whether John is a
robbery suspect, we can trust Jan to tell the truth, so John must be in Amsterdam. In the second case,
Jan and John are friends, but under the assumption that John is not a robbery suspect we can again trust Jan to tell
the truth. So John must be in Amsterdam in this case as well.</p>
        <p>Finally, there are two scenarios in which the claim that John is in London will hold.</p>
        <p>X = john_in_london
Assumptions: friends(jan,john), robbery_suspect(john)</p>
        <p>X = john_in_london
Assumptions: age(mary, B | B \= old), friends(jan,john), robbery_suspect(john)</p>
        <p>In the first case, the claim is true if Jan and John are friends and John is a robbery suspect. Whether Mary is
young or old does not matter, as Jan has a strong motivation to lie. In the second case, the claim that John is
in London is obviously true if Mary is not old (and so is not infirm and her senses can be trusted), Jan and
John are friends, and John is a robbery suspect.</p>
        <p>The example above illustrates the power of ASP and of our s(CASP) system in modeling commonsense
reasoning and how they can be used to automatically verify claims in the OAP framework. The mapping
that we have developed between OAP and ASP facilitates this task.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Conclusion</title>
      <p>
        We showed that ASP can elegantly model human-style arguments as laid out in the Primer of Holloway
and Wasson. It is believed that human discourse can be reasoned about only through informal reasoning.
We advance the argument that human discourse can be reasoned about through formal reasoning as well.
Answer set programming has a well-defined declarative and operational semantics [
        <xref ref-type="bibr" rid="ref16 ref28 ref4">16, 29, 4</xref>
        ] that can model
the human thought process very effectively, as demonstrated in this paper. Additionally, query-driven answer
set programming can be extended with constraints over reals, which allows for reasoning over time to be
performed faithfully (i.e., without discretizing time [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ]) as well.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Peter</given-names>
            <surname>Aczel</surname>
          </string-name>
          (
          <year>1988</year>
          ):
          <article-title>Non-well-founded sets</article-title>
          .
          <source>CSLI lecture notes series 14</source>
          , CSLI.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Krzysztof R.</given-names>
            <surname>Apt</surname>
          </string-name>
          &amp;
          <string-name>
            <given-names>Roland N.</given-names>
            <surname>Bol</surname>
          </string-name>
          (
          <year>1994</year>
          )
          <article-title>: Logic Programming and Negation: A Survey</article-title>
          .
          <source>J. Log. Program. 19/20</source>
          , pp.
          <fpage>9</fpage>
          -
          <lpage>71</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3] Joaquín Arias, Manuel Carro, Zhuo Chen &amp; Gopal
          <string-name>
            <surname>Gupta</surname>
          </string-name>
          (
          <year>2020</year>
          )
          <article-title>: Justifications for Goal-Directed Constraint Answer Set Programming</article-title>
          .
          <source>In: Proceedings 36th ICLP (Technical Communications)</source>
          ,
          <source>EPTCS 325</source>
          , pp.
          <fpage>59</fpage>
          -
          <lpage>72</lpage>
          . ArXiv:
          <year>2009</year>
          .09158.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4] Joaquín Arias, Manuel Carro, Elmer Salazar, Kyle Marple &amp; Gopal
          <string-name>
            <surname>Gupta</surname>
          </string-name>
          (
          <year>2018</year>
          ):
          <article-title>Constraint answer set programming without grounding</article-title>
          .
          <source>TPLP</source>
          <volume>18</volume>
          (
          <issue>3-4</issue>
          ), pp.
          <fpage>337</fpage>
          -
          <lpage>354</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5] Joaquín Arias, Zhuo Chen,
          <string-name>
            <given-names>Manuel</given-names>
            <surname>Carro</surname>
          </string-name>
          &amp; Gopal
          <string-name>
            <surname>Gupta</surname>
          </string-name>
          (
          <year>2019</year>
          )
          <article-title>: Modeling and Reasoning in Event Calculus Using Goal-Directed Constraint Answer Set Programming</article-title>
          .
          <source>In: Proc. LOPSTR</source>
          <year>2019</year>
          , Porto, Portugal, LNCS 12042, Springer, pp.
          <fpage>139</fpage>
          -
          <lpage>155</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Jon</given-names>
            <surname>Barwise</surname>
          </string-name>
          &amp;
          <string-name>
            <given-names>Lawrence S.</given-names>
            <surname>Moss</surname>
          </string-name>
          (
          <year>1996</year>
          )
          <article-title>: Vicious Circles</article-title>
          . Cambridge University Press.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Kinjal</given-names>
            <surname>Basu</surname>
          </string-name>
          , Farhad Shakerin &amp; Gopal
          <string-name>
            <surname>Gupta</surname>
          </string-name>
          (
          <year>2020</year>
          )
          <article-title>: AQuA: ASP-Based Visual Question Answering</article-title>
          .
          <source>In: Practical Aspects of Declarative Languages</source>
          , Springer International Publishing, Cham, pp.
          <fpage>57</fpage>
          -
          <lpage>72</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Kinjal</given-names>
            <surname>Basu</surname>
          </string-name>
          , Sarat Chandra Varanasi, Farhad Shakerin &amp; Gopal
          <string-name>
            <surname>Gupta</surname>
          </string-name>
          (
          <year>2020</year>
          )
          <article-title>: SQuARE: Semanticsbased Question Answering and Reasoning Engine</article-title>
          .
          <source>In: Proceedings 36th International Conference on Logic Programming (Technical Communications)</source>
          , Rende, Italy, EPTCS
          <volume>325</volume>
          , pp.
          <fpage>73</fpage>
          -
          <lpage>86</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Philippe</given-names>
            <surname>Besnard</surname>
          </string-name>
          , Claudette Cayrol &amp;
          <string-name>
            <given-names>Marie-Christine</given-names>
            <surname>Lagasquie-Schiex</surname>
          </string-name>
          :
          <article-title>Logical Theories and Abstract Argumentation: A Survey of Existing Works</article-title>
          .
          <source>Argumentation and Computation</source>
          vol.
          <volume>11</volume>
          , no.
          <issue>1-2</issue>
          , pp.
          <fpage>41</fpage>
          -
          <lpage>102</lpage>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Susanne</given-names>
            <surname>Bobzien</surname>
          </string-name>
          (
          <year>2020</year>
          )
          <article-title>: Ancient Logic</article-title>
          . In Edward N. Zalta, editor:
          <source>The Stanford Encyclopedia of Philosophy</source>
          , summer
          <year>2020</year>
          edition, Metaphysics Research Lab, Stanford University.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>George</given-names>
            <surname>Boole</surname>
          </string-name>
          (
          <year>1854</year>
          )
          <article-title>: An Investigation of the Laws of Thought</article-title>
          . Walton &amp; Maberly.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Zhuo</given-names>
            <surname>Chen</surname>
          </string-name>
          , Kyle Marple, Elmer Salazar, Gopal Gupta &amp; Lakshman
          <string-name>
            <surname>Tamil</surname>
          </string-name>
          (
          <year>2016</year>
          )
          <article-title>: A Physician Advisory System for Chronic Heart Failure management based on knowledge patterns</article-title>
          .
          <source>Theory Pract</source>
          . Log. Program.
          <volume>16</volume>
          (
          <issue>5-6</issue>
          ), pp.
          <fpage>604</fpage>
          -
          <lpage>618</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Zhuo</given-names>
            <surname>Chen</surname>
          </string-name>
          , Elmer Salazar, Kyle Marple,
          <string-name>
            <surname>Sandeep R. Das</surname>
          </string-name>
          ,
          <string-name>
            <surname>Alpesh Amin</surname>
          </string-name>
          , Daniel Cheeran, Lakshman Tamil &amp; Gopal
          <string-name>
            <surname>Gupta</surname>
          </string-name>
          (
          <year>2018</year>
          )
          <article-title>: An AI-Based Heart Failure Treatment Adviser System</article-title>
          .
          <source>IEEE journal of translational engineering in health and medicine 6</source>
          ,
          <fpage>2800810</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Gottlob</given-names>
            <surname>Frege</surname>
          </string-name>
          (
          <year>1884</year>
          ):
          <article-title>Grundlagen der Arithmetik</article-title>
          . Wilhelm Koebner.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Michael</given-names>
            <surname>Gelfond</surname>
          </string-name>
          &amp; Yulia
          <string-name>
            <surname>Kahl</surname>
          </string-name>
          (
          <year>2014</year>
          )
          <article-title>: Knowledge representation, reasoning, and the design of intelligent agents: The answer-set programming approach</article-title>
          . Cambridge University Press.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>Michael</given-names>
            <surname>Gelfond</surname>
          </string-name>
          &amp; Vladimir
          <string-name>
            <surname>Lifschitz</surname>
          </string-name>
          (
          <year>1988</year>
          ):
          <article-title>The stable model semantics for logic programming</article-title>
          .
          <source>In: ICLP/SLP</source>
          , 88, pp.
          <fpage>1070</fpage>
          -
          <lpage>1080</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>Brendan</given-names>
            <surname>Gillon</surname>
          </string-name>
          (
          <year>2016</year>
          )
          <article-title>: Logic in Classical Indian Philosophy</article-title>
          . In Edward N. Zalta, editor:
          <source>The Stanford Encyclopedia of Philosophy</source>
          , fall
          <year>2016</year>
          edition, Metaphysics Research Lab, Stanford University.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>Leo</given-names>
            <surname>Groarke</surname>
          </string-name>
          (
          <year>2020</year>
          )
          <article-title>: Informal Logic</article-title>
          . In Edward N. Zalta, editor: The Stanford Encyclopedia of Philosophy, spring
          <year>2020</year>
          edition, Metaphysics Research Lab, Stanford University.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>Gopal</given-names>
            <surname>Gupta</surname>
          </string-name>
          (July 7,
          <year>2020</year>
          ):
          <article-title>Automating Common Sense Reasoning</article-title>
          . Tutorial talk. Available at utdallas.edu/~gupta/csg.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <surname>Gopal</surname>
            <given-names>Gupta</given-names>
          </string-name>
          , Ajay Bansal, Richard Min,
          <string-name>
            <given-names>Luke</given-names>
            <surname>Simon</surname>
          </string-name>
          &amp; Ajay
          <string-name>
            <surname>Mallya</surname>
          </string-name>
          (
          <year>2007</year>
          )
          <article-title>: Coinductive Logic Programming and Its Applications</article-title>
          .
          <source>In: Proc. 23rd ICLP 2007, Lecture Notes in Computer Science 4670</source>
          , Springer, pp.
          <fpage>27</fpage>
          -
          <lpage>44</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>G. H.</given-names>
            <surname>Harman</surname>
          </string-name>
          (
          <year>1965</year>
          ):
          <article-title>The Inference to the Best Explanation</article-title>
          .
          <source>The Philosophical Review</source>
          <volume>74</volume>
          (
          <issue>1</issue>
          ), pp.
          <fpage>88</fpage>
          -
          <lpage>95</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>C. Michael</given-names>
            <surname>Holloway</surname>
          </string-name>
          (
          <year>2019</year>
          )
          <article-title>: Understanding the Overarching Properties</article-title>
          . Available at https:// ntrs.nasa.gov/citations/20190029284.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>C. Michael</given-names>
            <surname>Holloway</surname>
          </string-name>
          &amp;
          <string-name>
            <given-names>Kimberly S.</given-names>
            <surname>Wasson</surname>
          </string-name>
          (
          <year>2020</year>
          )
          <article-title>: A Primer on Argument</article-title>
          . Available at https: //shemesh.larc.nasa.gov/people/cmh/cmhpubs.html.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>Philip</given-names>
            <surname>Johnson-Laird</surname>
          </string-name>
          (
          <year>2006</year>
          )
          <article-title>: How We Reason</article-title>
          . Oxford University Press.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>Karin</given-names>
            <surname>Kipper</surname>
          </string-name>
          , Anna Korhonen, Neville Ryant &amp; Martha
          <string-name>
            <surname>Palmer</surname>
          </string-name>
          (
          <year>2008</year>
          )
          <article-title>: A large-scale classification of English verbs</article-title>
          .
          <source>Language Resources and Evaluation</source>
          <volume>42</volume>
          (
          <issue>1</issue>
          ), pp.
          <fpage>21</fpage>
          -
          <lpage>40</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>Saul</given-names>
            <surname>Kripke</surname>
          </string-name>
          :
          <article-title>An Outline of a Theory of Truth</article-title>
          .
          <source>The Journal of Philosophy Vol. LXXII, No. 19, Nov. 6</source>
          ,
          <year>1975</year>
          , pp.
          <fpage>690</fpage>
          -
          <lpage>716</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>Vladimir</given-names>
            <surname>Lifschitz</surname>
          </string-name>
          (
          <year>2019</year>
          )
          <article-title>: Answer Set Programming</article-title>
          . Springer.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>Kyle</given-names>
            <surname>Marple</surname>
          </string-name>
          , Elmer Salazar &amp; Gopal
          <string-name>
            <surname>Gupta</surname>
          </string-name>
          (
          <year>2017</year>
          ):
          <article-title>Computing stable models of normal logic programs without grounding</article-title>
          .
          <source>arXiv preprint arXiv:1709</source>
          .
          <fpage>00501</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>Sanjay</given-names>
            <surname>Modgil</surname>
          </string-name>
          &amp; Henry
          <string-name>
            <surname>Prakken</surname>
          </string-name>
          (
          <year>2014</year>
          ):
          <article-title>The ASPIC+ framework for structured argumentation: a tutorial</article-title>
          .
          <source>Argument Comput</source>
          .
          <volume>5</volume>
          (
          <issue>1</issue>
          ), pp.
          <fpage>31</fpage>
          -
          <lpage>62</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>Farhad</given-names>
            <surname>Shakerin</surname>
          </string-name>
          , Elmer Salazar &amp; Gopal
          <string-name>
            <surname>Gupta</surname>
          </string-name>
          (
          <year>2017</year>
          )
          <article-title>: A new algorithm to automate inductive learning of default theories</article-title>
          .
          <source>Theory Pract</source>
          . Log. Program.
          <volume>17</volume>
          (
          <issue>5-6</issue>
          ), pp.
          <fpage>1010</fpage>
          -
          <lpage>1026</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>Murray</given-names>
            <surname>Shanahan</surname>
          </string-name>
          (
          <year>1999</year>
          ):
          <article-title>The Event Calculus Explained</article-title>
          . In Michael J. Wooldridge &amp; Manuela M. Veloso, editors:
          <source>Artificial Intelligence Today: Recent Trends and Developments, Lecture Notes in Computer Science 1600</source>
          , Springer, pp.
          <fpage>409</fpage>
          -
          <lpage>430</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>Luke</given-names>
            <surname>Simon</surname>
          </string-name>
          , Ajay Mallya, Ajay Bansal &amp; Gopal
          <string-name>
            <surname>Gupta</surname>
          </string-name>
          (
          <year>2006</year>
          )
          <article-title>: Coinductive Logic Programming</article-title>
          .
          <source>In: Proc. ICLP'06, Lecture Notes in Computer Science 4079</source>
          , Springer, pp.
          <fpage>330</fpage>
          -
          <lpage>345</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>Alfred</given-names>
            <surname>Tarski</surname>
          </string-name>
          (
          <year>1939</year>
          ):
          <article-title>On Undecidable Statements in Enlarged Systems of Logic and the Concept of Truth</article-title>
          .
          <source>J. Symb. Log</source>
          .
          <volume>4</volume>
          (
          <issue>3</issue>
          ), pp.
          <fpage>105</fpage>
          -
          <lpage>112</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [35]
          Wikipedia (retrieved February
          <year>2021</year>
          ):
          <article-title>Inductive Reasoning</article-title>
          . https://en.wikipedia.org/ wiki/Inductive_reasoning.
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [36]
          <string-name>
            <given-names>Clyde W.</given-names>
            <surname>Yancey</surname>
          </string-name>
          , Mariell Jessup et al. (
          <year>2013</year>
          )
          <article-title>: ACCF/AHA Guideline for the Management of Heart Failure</article-title>
          .
          <source>Circulation</source>
          <volume>128</volume>
          (
          <issue>16</issue>
          ), pp.
          <fpage>e240</fpage>
          -
          <lpage>e327</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>