<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Extended Nomological Deductive Reasoning (eNDR) for Transparent AI Outputs</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Gedeon Hakizimana</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Universidad Carlos III de Madrid, Department of Computer Science and Engineering</institution>
          ,
          <addr-line>Av. Universidad 30, 28911 Leganés (Madrid)</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Extended Nomological Deductive Reasoning (eNDR) is a novel Explainable AI framework that integrates causal domain knowledge with deductive logic to deliver transparent, trustworthy predictions in dynamic, high-stakes environments. Building on the original NDR model, eNDR handles continuous data, uncertainty, and real-time inference by expressing domain laws as continuous functions, modeling conditions as constraints, and generating explanations through differentiable reasoning and probabilistic integration. Early results show that eNDR produces human-readable, domain-aligned explanations without sacrificing predictive performance, offering a pathway toward AI systems that are both accurate and interpretable across domains such as healthcare, finance, and criminal justice.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Context and Motivation for the Research</title>
    </sec>
    <sec id="sec-2">
      <title>2. Key Related Work</title>
      <p>The field of Explainable AI (XAI) has produced over 200 techniques aimed at improving model transparency. Popular model-agnostic methods like LIME and SHAP offer post hoc explanations by approximating black-box decision boundaries but often lack domain-specific reasoning, limiting their effectiveness in expert-driven fields.</p>
      <p>Inherently interpretable models—such as decision trees and rule-based systems—offer more understandable outputs but typically sacrifice predictive performance, highlighting the persistent trade-off between accuracy and interpretability in XAI.</p>
      <p>A promising alternative is the Nomological Deductive Reasoning (NDR) framework, which balances
performance with explanatory depth by integrating deductive logic and causal domain knowledge
through the Nomological Deductive Knowledge Representation (NDKR). Inspired by Hempel’s covering
law model, NDR aims to provide logically structured, cognitively satisfying explanations.</p>
      <p>Originally proposed by Hakizimana and Ledezma Espino, NDR was limited to static, deterministic
settings. To address real-world complexity, the extended version—eNDR—introduces probabilistic
reasoning, continuous data handling, and real-time inference, making it more applicable to dynamic,
high-stakes domains like healthcare, finance, and criminal justice.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Research Questions, Hypothesis, and Objectives</title>
      <sec id="sec-3-1">
        <title>3.1. Research Questions</title>
        <p>This research will explore the following key questions:
1. How can the Nomological Deductive Reasoning (NDR) framework be extended to handle
continuous data and domain-specific uncertainty while preserving the interpretability and trustworthiness
of the AI model’s explanations?
2. How can the integration of structured knowledge and causal reasoning improve the transparency
and explainability of AI systems in high-stakes applications?</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Hypothesis</title>
        <p>By modeling the Nomological Deductive Reasoning approach through the expression of laws as continuous functions; conditions as constraints on data instances; predictions as a function of data instances, laws, and conditions; and explanations as an integral summing over all possible laws and conditions, capturing the cumulative effect of how they interact with the data, the NDR framework can handle continuous data and complex real-world scenarios. In addition, adopting a probabilistic setting and treating the prediction task as an optimization process can enhance the interpretability and transparency of NDR-based predictions in domains with complex data and uncertainty.</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Objectives</title>
        <p>1. To extend the NDR framework to accommodate continuous data and uncertainty through advanced mathematical methods such as calculus and optimization techniques.
2. To test the effectiveness of the extended NDR framework in generating human-comprehensible explanations that are aligned with domain-specific knowledge.
3. To evaluate the performance of AI models using the NDR framework in real-world applications such as healthcare, finance, or criminal justice.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Research Approach, Methods, and Rationale for Testing the</title>
    </sec>
    <sec id="sec-5">
      <title>Hypothesis</title>
      <p>The research approach is structured around the extension of the Nomological Deductive Reasoning
(NDR) framework.The inclusion of probabilistic models and uncertainty quantification within NDR
allows the system to account for variations in the input data and make robust, well-grounded predictions
even in the face of noise or uncertainty, enhancing both the model’s reliability and its interpretability
when making decisions on complex data.</p>
      <sec id="sec-5-1">
        <title>4.1. Expressing Laws (L) as Continuous Functions</title>
        <p>In our initial development of the NDR framework, we assumed the following:</p>
        <sec id="sec-5-1-1">
          <title>Laws (L)</title>
          <p>Let L represent the set of laws or rules governing a certain real-world domain (e.g., healthcare diagnosis, bank credit scoring, traffic codes for mobility applications, criminal justice, etc.). These laws are formalized as logical statements or principles that provide the foundation for reasoning in the system. Each law li ∈ L corresponds to a specific rule or law within the system.</p>
          <p>Example (in medical settings):
• l1: “If a patient has high blood pressure and is over 60 years old, then they are at a high risk of cardiovascular disease.”
• l2: “If a treatment is an ACE inhibitor, it lowers blood pressure.”</p>
        </sec>
        <sec id="sec-5-1-2">
          <title>Conditions (C)</title>
          <p>Let C denote the set of antecedents or conditions that must hold true in order for a law to be applicable to a particular data instance. Each condition ci ∈ C is a prerequisite that must be satisfied for the corresponding law to be activated or relevant.</p>
          <p>Example (in medical settings):
• c1: “Patient has high blood pressure.”
• c2: “Patient is over 60 years old.”</p>
        </sec>
        <sec id="sec-5-1-3">
          <title>Data Instances (D)</title>
          <p>Let D = {d1, d2, . . . , dn} represent the set of input data fed into the AI system. Each di ∈ D represents a specific data sample.</p>
          <p>Example (in medical settings):
• d1: A data sample where the patient has high blood pressure and is 65 years old.
• d2: A data sample where the patient has normal blood pressure and is 45 years old.</p>
        </sec>
        <sec id="sec-5-1-4">
          <title>Hypothesis or Prediction (H)</title>
          <p>Let H = {h1, h2, . . . , hn} represent the set of predictions or outcomes generated by the AI model. Each hi ∈ H corresponds to a specific prediction for the instance di.</p>
          <p>Example (in medical settings):
• h1: “The patient is at high risk for cardiovascular disease.”
• h2: “The patient is not at high risk for cardiovascular disease.”</p>
        </sec>
        <sec id="sec-5-1-5">
          <title>Formalized Deductive Inference</title>
          <p>The key goal of the NDR framework is to use deductive reasoning to formalize how the AI model generates a prediction h based on the combination of conditions C and laws L applied to the input d:</p>
          <p>∀d ∈ D, ∃h such that (c1 ∧ c2 ∧ · · · ∧ cn ∧ l1 ∧ l2 ∧ · · · ∧ lm) ⊢ h</p>
          <p>Where:
• d ∈ D is an input data instance.
• c1, c2, . . . , cn are the conditions (e.g., patient characteristics like age, blood pressure).
• l1, l2, . . . , lm are the domain laws (e.g., risk relationships).
• ⊢ denotes deductive reasoning leading to the prediction h.</p>
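          <p>To make the deductive step concrete, the following minimal Python sketch (our illustration, not the published NDR implementation; all function and field names are assumptions) derives a prediction h only when every relevant condition holds:</p>
          <preformat preformat-type="code">
# Illustrative sketch of the NDR deductive step (hypothetical names,
# not the published implementation): a prediction h is derived only
# when the conditions of a law hold for a data instance d.

def c1(d):  # condition c1: patient has high blood pressure
    return d["blood_pressure"] > 140

def c2(d):  # condition c2: patient is over 60 years old
    return d["age"] > 60

def l1(d):  # law l1: if c1 and c2 hold, high cardiovascular risk follows
    return "high cardiovascular risk" if c1(d) and c2(d) else None

def deduce(d, laws):
    """Apply every law whose conditions hold for d; collect predictions h."""
    return [h for law in laws if (h := law(d)) is not None]

d1 = {"blood_pressure": 150, "age": 65}
print(deduce(d1, [l1]))  # ['high cardiovascular risk']
          </preformat>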
        </sec>
        <sec id="sec-5-1-6">
          <title>Formalized Explanation Generation</title>
          <p>Once we have the laws, conditions, and input data, the explanation E for the prediction h can be expressed as:</p>
          <p>E = f(L, C, D) ⇒ h</p>
          <p>• E is the explanation for prediction h.
• f is the function describing how L, C, and D combine to produce h.
• ⇒ indicates the logical flow from input to prediction.</p>
          <p>In this research, the first step is to model the domain laws as continuous functions. These laws describe relationships between input variables and outcomes or predictions, and can be formalized as continuous functions that capture gradual changes in dynamic systems or continuous data. For example, in the medical domain:</p>
          <p>l : Rⁿ → R, l(d) = some relationship between conditions</p>
          <p>l1(d) = (BloodPressure · Age) / (1 + Age), where d = [BloodPressure, Age]</p>
          <p>This equation models how the interaction between a patient’s blood pressure and age influences cardiovascular risk.</p>
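          <p>A minimal Python sketch of this example law as a continuous function (the layout d = [BloodPressure, Age] follows the formula above; the concrete numbers are illustrative):</p>
          <preformat preformat-type="code">
# Minimal sketch of the example law l1 as a continuous function:
# l1(d) = (BloodPressure * Age) / (1 + Age), with d = [BloodPressure, Age].

def l1(d):
    blood_pressure, age = d
    return (blood_pressure * age) / (1.0 + age)

print(l1([150.0, 65.0]))  # ≈ 147.7: the risk score grows with both inputs
          </preformat>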
        </sec>
      </sec>
      <sec id="sec-5-2">
        <title>4.2. Conditions (C) as Constraints on Data Instances</title>
        <p>Conditions are modeled as constraints that must hold true for the corresponding laws to be applicable. These are represented as indicator functions:</p>
        <p>ci(d) = 1 if condition ci holds true for instance d, 0 otherwise</p>
        <p>Example (in medical settings):</p>
        <p>c1(d) = 1 if blood pressure is high, 0 otherwise</p>
        <p>This ensures that the system only applies relevant laws when the conditions are satisfied.</p>
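        <p>A short sketch of a condition as an indicator function gating a law (the 140 mmHg threshold is an assumed example value, not part of the framework):</p>
        <preformat preformat-type="code">
# Sketch of a condition as an indicator function: c1(d) = 1 if blood
# pressure is high (assumed threshold: 140 mmHg), 0 otherwise.

def c1(d):
    blood_pressure, _age = d
    return 1.0 if blood_pressure > 140.0 else 0.0

def l1(d):  # example law from Section 4.1
    blood_pressure, age = d
    return (blood_pressure * age) / (1.0 + age)

d = [150.0, 65.0]
print(c1(d) * l1(d))  # the law l1 contributes only when c1(d) = 1
        </preformat>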
      </sec>
      <sec id="sec-5-3">
        <title>4.3. Data Instances (D) and Their Continuous Representation</title>
        <p>Data instances d are treated as vectors in an n-dimensional space, representing different entities in the real world. For example, in healthcare, the vector d = (BloodPressure, Age, . . .) could represent a particular patient’s characteristics. The data set is formalized as d ∈ D ⊂ Rⁿ. Each d represents an instance with n features that are used in the model.</p>
      </sec>
      <sec id="sec-5-4">
        <title>4.4. Hypothesis or Prediction (H) as a Function of Data</title>
        <p>The AI model generates a prediction h as a function of the data instances, laws, and conditions. This can be represented as:</p>
        <p>h(d) = f(l1, l2, . . . , lm, c1, c2, . . . , cn, d)</p>
        <p>Where f is a function that maps input data to a prediction. This function can take various forms, such as linear, non-linear, or other suitable formulations depending on the model’s complexity.</p>
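        <p>As one possible concrete form of f (an assumption for illustration; the framework leaves f open), a condition-gated weighted sum of law outputs can be sketched as:</p>
        <preformat preformat-type="code">
# Sketch: one possible form of h(d) = f(l1..lm, c1..cn, d), assuming f
# is a condition-gated weighted sum of law outputs (the paper leaves f open).

def h(d, laws, conditions, weights):
    score = 0.0
    for law, cond, w in zip(laws, conditions, weights):
        score += w * cond(d) * law(d)  # a law fires only if its condition holds
    return score

def c1(d): return 1.0 if d[0] > 140.0 else 0.0
def l1(d): return (d[0] * d[1]) / (1.0 + d[1])

print(h([150.0, 65.0], [l1], [c1], [0.01]))  # small illustrative risk score
        </preformat>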
      </sec>
      <sec id="sec-5-5">
        <title>4.5. Formalized Deductive Inference Using Differentiable Functions</title>
        <p>To formalize the inference process, we use calculus to express how the prediction h(d) changes with respect to the input data d. The derivative of h(d) with respect to d is computed as:</p>
        <p>∂h(d)/∂d = Σi=1..m (∂li(d)/∂d) · ci(d)</p>
        <p>This derivative represents how sensitive the prediction is to changes in the input features. The term ∂li(d)/∂d represents how the law li changes with respect to the input, and ci(d) ensures that the law applies only when the condition holds.</p>
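        <p>This sensitivity can be approximated numerically; the sketch below uses central finite differences on the gated form h(d) = c1(d) · l1(d) (an illustrative choice; an automatic-differentiation library would serve equally well):</p>
        <preformat preformat-type="code">
# Sketch: sensitivity dh/dd via central finite differences, approximating
# sum_i dl_i(d)/dd * c_i(d). The gated-sum form and names are assumptions.

def c1(d): return 1.0 if d[0] > 140.0 else 0.0
def l1(d): return (d[0] * d[1]) / (1.0 + d[1])

def h(d):
    return c1(d) * l1(d)

def gradient(f, d, eps=1e-5):
    grad = []
    for j in range(len(d)):
        up = list(d); up[j] += eps
        down = list(d); down[j] -= eps
        grad.append((f(up) - f(down)) / (2 * eps))
    return grad

print(gradient(h, [150.0, 65.0]))  # sensitivity to blood pressure and age
        </preformat>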
      </sec>
      <sec id="sec-5-6">
        <title>4.6. Formalized Explanation Generation As an Integration Task</title>
        <p>The explanation E for a prediction h can be generated by integrating over the laws and conditions that contributed to the prediction. This can be expressed as:</p>
        <p>E = ∫∫ f(L, C, D) dL dC</p>
        <p>This integral sums over all possible laws and conditions, capturing the cumulative effect of how they interact with the data. In a probabilistic setting, we can also use a Bayesian approach:</p>
        <p>E = ∫ f(L, C, D) P(L | D) dL</p>
        <p>Where P(L | D) represents the posterior probability of law L given the data instance D.</p>
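        <p>In a discrete approximation, the Bayesian form reduces to a posterior-weighted sum over candidate laws. The sketch below assumes two illustrative laws and hand-picked posteriors:</p>
        <preformat preformat-type="code">
# Sketch: discrete approximation of E = ∫ f(L, C, D) P(L | D) dL as a
# posterior-weighted sum over candidate laws (posteriors here are assumed).

def l1(d): return (d[0] * d[1]) / (1.0 + d[1])  # blood-pressure/age law
def l2(d): return 0.5 * d[0]                    # simpler illustrative law

d = [150.0, 65.0]
laws = [("l1", l1, 0.7), ("l2", l2, 0.3)]  # (name, law, assumed P(law | d))

explanation = sum(p * law(d) for _, law, p in laws)
for name, law, p in laws:
    print(f"{name}: contribution {p * law(d):.2f} (posterior {p})")
print(f"E = {explanation:.2f}")
        </preformat>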
      </sec>
      <sec id="sec-5-7">
        <title>4.7. Formalized Deductive Inference as Optimization</title>
        <p>In AI systems, particularly those utilizing machine learning, the inference process is often optimized through a loss function. Hence, the optimization problem can be formalized as:</p>
        <p>ĥ = arg minθ Σi=1..n ℒ(ĥ(di; θ), yi)</p>
        <p>Where ℒ is the loss function (e.g., mean squared error), ĥ is the predicted outcome based on model parameters θ, and yi is the true label. The parameters θ represent the weights associated with the laws, conditions, and other model components. We propose the architecture of the extended NDR framework as per the illustration in Figure 1.</p>
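        <p>A sketch of this optimization with plain gradient descent on a squared loss (the dataset, learning rate, and the linear gated-sum form of ĥ are illustrative assumptions):</p>
        <preformat preformat-type="code">
# Sketch: fitting a law weight theta by minimizing mean squared error with
# plain gradient descent. Dataset, learning rate, and the linear gated-sum
# form of h_hat are illustrative assumptions, not the published method.

def c1(d): return 1.0 if d[0] > 140.0 else 0.0
def l1(d): return (d[0] * d[1]) / (1.0 + d[1])

def h_hat(d, theta):
    return theta * c1(d) * l1(d)

data = [([150.0, 65.0], 1.0), ([120.0, 45.0], 0.0)]  # (d_i, y_i) pairs
theta, lr = 0.0, 1e-5

for _ in range(200):
    grad = sum(2 * (h_hat(d, theta) - y) * c1(d) * l1(d) for d, y in data)
    theta -= lr * grad / len(data)

print(theta)  # converges toward y / (c1(d) * l1(d)) for the active instance
        </preformat>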
      </sec>
      <sec id="sec-5-8">
        <title>4.8. eNDR Application Scenario</title>
        <p>Let’s consider the task of predicting the likelihood of a heart attack or stroke based on various risk factors. In this scenario, the eNDR framework can be used to model causal relationships from cardiovascular theories, linking the disease to various risks. eNDR then applies deductive reasoning to complex factors, handles uncertainty, and explains predictions based on the laws governing the medical domain (e.g., age, cholesterol, smoking habits). Any black-box learning model can be used to extract features from the dataset, and through eNDR the output is human-readable, knowledge-based explanations, as captured in Figure 2.</p>
      </sec>
      <sec id="sec-5-9">
        <title>4.9. Metrics for Model Validation</title>
        <p>To validate the extended NDR framework with a focus on complex data and uncertainty, a complex health dataset (e.g., the Framingham Heart Study dataset) will be used. This dataset involves multiple features with both causal relationships and uncertainty (due to missing data, variable progression, and noise), which makes it an excellent candidate for demonstrating the effectiveness of eNDR in providing transparent and interpretable explanations grounded in causal knowledge.</p>
        <p>The key metrics will include prediction accuracy to measure algorithmic performance, uncertainty handling and rule coverage to measure fidelity to the knowledge base, as well as reasoning transparency and user trust measurements to evaluate eNDR’s trustworthiness.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>5. Preliminary Results and Contributions to Date</title>
      <p>Preliminary results show that integrating domain-specific laws into machine learning models enhances interpretability without compromising performance. Early tests in finance reveal that the NDR framework produces explanations aligned with human reasoning, offering clear insights into decision-making. Unlike methods such as Causal Inference, Neuro-Symbolic Reasoning, Knowledge Graphs, LIME, and SHAP, which often cater to technical users, NDR focuses on intuitive, domain-grounded explanations, which is key to trust in AI-powered solutions.</p>
    </sec>
    <sec id="sec-8">
      <title>6. Expected Next Research Steps and Final Contribution to Knowledge</title>
      <p>The next steps involve refining the NDR framework to handle larger, more complex datasets and incorporating probabilistic reasoning to account for uncertainty in real-world data. We will also evaluate the framework in additional domains, such as finance and criminal justice, to assess its generalizability.</p>
      <p>The final contribution of this research will be a novel framework for Explainable AI that combines deductive reasoning with domain-specific knowledge, offering a pathway for building more transparent and trustworthy AI systems. This work will also provide insights into how structured knowledge and causal reasoning can be embedded into machine learning models without compromising performance.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>The author used ChatGPT for grammar and spelling checks, after which he reviewed and edited the content as needed. The author takes full responsibility for the publication’s content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Caruana</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gehrke</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Koch</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Stiksma</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2000</year>
          ).
          <article-title>Rules for Interpretable Classification Models</article-title>
          .
          <source>In Proceedings of the 7th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining</source>
          (pp.
          <fpage>278</fpage>
          -
          <lpage>287</lpage>
          ). ACM.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Cai</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Wei</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          (
          <year>2020</year>
          ).
          <article-title>Rule-based Machine Learning Models for Knowledge Representation and Inference</article-title>
          .
          <source>Computational Intelligence and Neuroscience</source>
          ,
          <year>2020</year>
          , Article 1583720.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Gottfried</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>O'Reilly</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>Knowledge-Based Systems: A Practical Introduction</article-title>
          . Springer.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Guidotti</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Monreale</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Pedreschi</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>A Survey of Methods for Explaining Black-box Models</article-title>
          .
          <source>ACM Computing Surveys (CSUR)</source>
          ,
          <volume>51</volume>
          (
          <issue>5</issue>
          ),
          <fpage>93</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Hakizimana</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ;
          <article-title>Ledezma Espino, A. Nomological Deductive Reasoning for Trustworthy, Human-Readable, and Actionable AI Outputs</article-title>
          .
          <source>Algorithms</source>
          <year>2025</year>
          ,
          <volume>18</volume>
          , 306. https://doi.org/10.3390/a18060306
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Hendrix</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Peeters</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          (
          <year>2019</year>
          ).
          <article-title>Interpretable Machine Learning Models: A Survey of the State of the Art</article-title>
          .
          <source>In Proceedings of the International Conference on Machine Learning (ICML)</source>
          (pp.
          <fpage>232</fpage>
          -
          <lpage>241</lpage>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Liao</surname>
            ,
            <given-names>Q. V.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Christodoulou</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          (
          <year>2019</year>
          ).
          <article-title>Model-agnostic Interpretability for Rule-based Systems</article-title>
          .
          <source>In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems</source>
          (pp.
          <fpage>1</fpage>
          -
          <lpage>10</lpage>
          ). ACM.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] Miller, T. (2019). Explanation in Artificial Intelligence: Insights from the Social Sciences. Artificial Intelligence, 267, 1-38.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] Preece, S., &amp; Gunning, D. (2019). Explainable AI: A Survey of the State of the Art. IEEE Transactions on Neural Networks and Learning Systems, 30(10), 2911-2924.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] Vilone, G.; Longo, L. Classification of Explainable Artificial Intelligence Methods through Their Output Formats. Machine Learning and Knowledge Extraction 2021, 3(3), 615-661. https://doi.org/10.3390/make3030032</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] Ribeiro, M. T., Singh, S., &amp; Guestrin, C. (2016). Why Should I Trust You? Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135-1144). ACM.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] Vasilenko, A., &amp; Gamanenko, O. (2016). Knowledge-Based Systems and Knowledge Engineering for Explainable AI Applications. Artificial Intelligence Review, 45(3), 329-356.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] Zhou, S., &amp; Xie, L. (2020). Explainable Artificial Intelligence (XAI) Methods for Healthcare: A Review. Journal of Healthcare Engineering, 2020, Article 8706534.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] Hempel, C.G.; Oppenheim, P. Studies in the Logic of Explanation. Philosophy of Science 1948, 15(2), 135-175.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] Feng, S., &amp; Zhang, W. (2020). A Survey on Rule-based Machine Learning Methods for Explainable Artificial Intelligence. Artificial Intelligence Review, 53(3), 1621-1644.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] Lundberg, S. M., &amp; Lee, S. I. (2017). A Unified Approach to Interpreting Model Predictions. In Proceedings of the 31st International Conference on Neural Information Processing Systems (pp. 4765-4774).</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>