<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.1145/3173574.3174156</article-id>
      <title-group>
        <article-title>PHAX: A Structured Argumentation Framework for User-Centered Explainable AI in Public Health and Biomedical Sciences</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Bahar İlgen</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Akshat Dubey</string-name>
          <email>dubeya@rki.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Georges Hattab</string-name>
          <email>hattabg@rki.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Center for Artificial Intelligence in Public Health Research (ZKI-PH), Robert Koch Institute</institution>
          ,
          <addr-line>Nordufer 20, Berlin, 13353</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Mathematics and Computer Science, Freie Universität Berlin</institution>
          ,
          <addr-line>Arnimallee 14, Berlin, 14195</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2018</year>
      </pub-date>
      <volume>142</volume>
      <fpage>1</fpage>
      <lpage>18</lpage>
      <abstract>
<p>Ensuring transparency and trust in AI-driven systems for public health and the biomedical sciences requires more than accurate predictions—it demands explanations that are clear, contextual, and socially accountable. While explainable AI (XAI) has advanced in areas like feature attribution and model interpretability, most methods still lack the structure and adaptability needed for diverse health stakeholders, including clinicians, policymakers, and the general public. We introduce PHAX—a Public Health Argumentation and eXplainability framework—that leverages structured argumentation to generate human-centered explanations for AI outputs. PHAX is a multilayer architecture combining defeasible reasoning, adaptive natural language techniques, and user modeling to produce context-aware, audience-specific justifications. More specifically, we show how argumentation enhances explainability by supporting AI-driven decision-making, justifying recommendations, and enabling interactive dialogues across user types. We demonstrate the applicability of PHAX through use cases such as medical term simplification, patient-clinician communication, and policy justification. In particular, we show how simplification decisions can be modeled as argument chains and personalized based on user expertise—enhancing both interpretability and trust. By aligning formal reasoning methods with communicative demands, PHAX contributes to a broader vision of transparent, human-centered AI in public health.</p>
      </abstract>
      <kwd-group>
        <kwd>Explainable AI</kwd>
        <kwd>Argumentation-based Explainability</kwd>
        <kwd>Structured Argumentation</kwd>
        <kwd>User-Adaptive Explanation</kwd>
        <kwd>Public Health Informatics</kwd>
        <kwd>Natural Language Processing</kwd>
        <kwd>Trustworthy AI</kwd>
        <kwd>Health Communication</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        As artificial intelligence (AI) becomes increasingly embedded in public health systems, ensuring that
AI outputs are understandable, trustworthy, and tailored to diverse stakeholders has become a critical
challenge [
        <xref ref-type="bibr" rid="ref1 ref2 ref3 ref4">1, 2, 3, 4</xref>
        ]. Moreover, recent calls in public health literature highlight the necessity of
Explainable AI (XAI) to foster transparency and professional trust in healthcare applications. From
clinical diagnostics to vaccination policy, AI now plays a role in high-stakes decisions that affect patients,
practitioners, and entire populations. Applications in areas like pandemic preparedness have made
clear that epidemiological decision-making increasingly depends on the integration of XAI [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Yet, the
logic underlying many AI-driven decisions often remains obscure, fueling concerns over accountability,
fairness, and interpretability.
      </p>
      <p>
        The goal of XAI is to address such issues by shedding light on model behavior. However, most
existing XAI approaches—such as feature attribution or counterfactual analysis—struggle to provide
user-adaptive and communicatively effective explanations, especially in language-based applications [
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ].
These limitations are especially concerning in public health and biomedical fields, where information
must be not only technically sound but also effectively communicated to society. Recent work in
human-computer interaction (HCI) has also emphasized the need for explainable, accountable, and
intelligible systems [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Taken together, these challenges call for a new paradigm in explainability that
mirrors how humans reason and justify decisions. In this context, explanation ought to be understood
as a reasoning process rather than merely a visualization or annotation.
      </p>
      <p>Defining what constitutes an explanation is itself a complex issue. As reviewed in [9], explanations
have been conceptualized in various ways: as assignments of causal responsibility [10], as both the
process and product of addressing a "Why?" question [11], and as a means of constructing shared meaning.
These perspectives highlight that explanation is not merely a factual output but a communicative and
cognitive process that engages reasoning and interpretation. To this end, we propose PHAX: a Public
Health Argumentation and eXplainability framework. PHAX is a multi-layer architecture integrating
structured argumentation, adaptive natural language processing (NLP), and user modeling to generate
clear, audience-specific justifications for AI outputs. It treats explanation not as a post-hoc add-on, but
as a first-class component of decision-making pipelines. Structured argumentation functions as a core
mechanism, allowing AI systems to explain their decision processes step by step, handle uncertainty,
and reconcile conflicting evidence through formal reasoning [12]. Such capabilities are essential for
building trust in AI-driven public health and biomedical systems. More specifically, within
the domains of public health and biomedical sciences, we demonstrate how argumentation enhances
explainability, with applications spanning areas such as decision-making (e.g., vaccination prioritization
or clinical risk stratification), justification of system outputs (e.g., medical term simplification or the
selection of diagnostic biomarkers), and interactive dialogue (e.g., clinician-AI interaction in diagnosis
or treatment planning). These capabilities allow AI systems to deliver context-sensitive explanations
aligned with stakeholder needs across both population-level and individual-level biomedical
applications. Through structured reasoning and audience-aware communication, argumentation enables AI
systems to provide transparent, tailored explanations across a range of high-stakes scenarios in public
health and biomedical sciences.</p>
      <p>PHAX builds on the formal tools of argumentation theory—including Dung’s Abstract Framework
[13] and ASPIC+ [14]—to model outputs as defeasible claims supported by reasoning chains. It also
incorporates adaptive NLP techniques such as text simplification (TS), semantic role labeling (SRL),
discourse parsing, and audience-aware surface realization to tailor explanations to different users.
Whether the audience is a patient, clinician, or policymaker, PHAX generates logically grounded and
socially appropriate explanations.</p>
      <p>To demonstrate the utility of PHAX, we present medical text simplification (MTS) as a core use case.
Simplification decisions—such as replacing "myocardial infarction" with "heart attack" — are modeled
as arguments, based on corpus frequency, semantic equivalence, and contextual appropriateness.
Explanations are then adjusted in tone and depth based on user profiles. This showcases how PHAX
enhances interpretability, transparency, and trust in a critical public health application. This paper
makes the following contributions: (i) it introduces PHAX, a novel framework that integrates structured
argumentation and adaptive NLP for explainable AI in public health and biomedical sciences; (ii)
it demonstrates how simplification and other AI outputs can be modeled as defeasible reasoning chains;
(iii) it proposes user-adaptive explanation strategies tailored to different stakeholders; and (iv) it provides
illustrative use cases highlighting PHAX’s applicability in diverse public health contexts.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>XAI encompasses a range of approaches designed to make model behavior more interpretable. Common
techniques include feature attribution methods (e.g., LIME, SHAP), saliency mapping, and
counterfactual reasoning. These methods aim to provide insight into how AI models arrive at their predictions,
but they often lack the ability to produce explanations that are user-adaptive and socially
contextualized—particularly in domains like public health and biomedical sciences [15]. Given the shortcomings
of purely statistical or post-hoc approaches, researchers have begun to investigate structured
argumentation as a foundation for AI explanations. Frameworks based on Dung’s Abstract Argumentation
Framework (AF) and ASPIC+ have been explored as mechanisms to model reasoning processes and
support step-by-step justifications for AI outputs. For instance, Vassiliades et al. [12] and Čyras et al. [16]
survey a range of argumentation-based XAI approaches, showing how argument structures can provide
more transparent and logically grounded explanations, particularly in settings involving uncertainty or
conflicting information.</p>
      <p>Although structured argumentation provides a strong basis for XAI, prior work has often emphasized
symbolic and formal rigor over communicative usefulness, overlooking how explanations are interpreted
by diverse users. PHAX builds on argumentation theory while extending it through user modeling
and adaptive natural language generation to move beyond structural clarity. Unlike prior approaches,
PHAX aims to deliver context-sensitive, stakeholder-specific justifications that are not only logically
coherent but also socially meaningful.</p>
      <p>Biomedical and healthcare research has provided concrete cases where argumentation-based
explainability is applied. These studies indicate that argumentation theory helps clinicians reason under
uncertainty and incomplete information. Longo et al. [17], for instance, applied defeasible reasoning
and formal argumentation to model expert judgments in cancer recurrence prediction. This aligns with
broader research in hybrid intelligence, which emphasizes collaborative, explainable AI systems that
support human reasoning rather than replace it [18]. Seen in this light, argumentation and explanation
are key elements in the design of transparent and reliable systems, a need that is especially pressing
in healthcare contexts. Such approaches stress the importance of aligning machine reasoning with
human cognitive and ethical expectations—an objective well supported by structured argumentation
frameworks such as PHAX. By combining user-adaptive justifications with formal inference, PHAX
supports this vision and helps ensure that explanations are not only logically sound but also socially
meaningful in varied health contexts.</p>
      <p>One concrete example of such a system is the CONSULT project [19], which applies computational
argumentation to clinical settings. The CONSULT system brings together data from EHRs, wearable
sensors, and treatment guidelines to aid collaborative decision-making. Using ASPIC+ to reason under
uncertainty, it produces argumentation-based dialogues that explain treatment options to both patients
and clinicians. By drawing on argument schemes, attack relations, and user-facing explanations,
CONSULT mirrors the aims of PHAX in structuring and presenting personalized justifications for
different stakeholders. Beyond such systems, recent work on user-adaptive explanation and NLG
(e.g., [20]) highlights the importance of tailoring explanations to diverse audiences through
role-sensitive or dialogue-based generation. However, these approaches are rarely integrated with structured
argumentation, leaving a gap that PHAX directly addresses.</p>
    </sec>
    <sec id="sec-3">
      <title>3. PHAX: Public Health Argumentation and eXplainability Framework</title>
      <sec id="sec-3-1">
        <title>3.1. Architecture and Layers</title>
        <p>PHAX (Public Health Argumentation and eXplainability) is a structured argumentation framework
designed to enhance the transparency, accountability, and user alignment of AI systems in public
health and biomedical domains. Embedding explainability into reasoning structures helps overcome
the limits of post-hoc or model-agnostic XAI, especially in high-stakes domains. PHAX combines
formal reasoning, NLP, and audience-aware methods to produce explanations that are both
context-sensitive and socially meaningful. Unlike post-hoc methods, PHAX embeds explanation in the decision
pipeline, allowing outputs to be justified and adapted to counterarguments and user needs. Table 1
shows how PHAX applies key XAI goals [12] through NLP tasks in public health. Its architecture has
four layers, moving from raw data to user-adaptive explanation. As Figure 1 illustrates, each layer
passes information forward, with user feedback enabling refinement. Argumentation serves as the core
reasoning mechanism that translates NLP-derived insights into structured justifications. It connects the
output of the NLP Layer to both the internal logic of decision-making and the external communicative
needs of the user interface.</p>
        <p>• Data Layer: The Data Layer gathers and preprocesses heterogeneous sources—such as clinical
texts, patient records, epidemiological databases, and social media content—so that structured
and unstructured inputs are harmonized before moving to subsequent layers.
• NLP Processing Layer: The NLP layer performs domain-specific analysis (NER, SRL, discourse
parsing, text simplification), producing structured input for argument construction and validation.
• Explanation and Argumentation Layer: This layer models outputs as defeasible arguments
with structures such as Dung’s AF and ASPIC+. Arguments consist of claims (e.g., a simplification),
supports (e.g., corpus frequency or semantic equivalence), and counterarguments (e.g., ambiguity
or domain-specific issues). It formalizes reasoning and helps manage uncertainty and conflict.
• User Interface Layer: The user interface layer delivers explanations via dashboards, agents, or
summaries, adapting tone and depth to the user (patient, clinician, policymaker). It links formal
reasoning with human interpretability.</p>
        <p>Traditional frameworks separate development and deployment, but PHAX uses a layered design
that integrates explanation processes across the AI lifecycle. Each layer supports model building and
real-time explanation, ensuring traceability and stakeholder alignment.</p>
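<p>As a minimal sketch of this layered design (all function and field names here are our own illustrative assumptions, not part of the published framework), the four layers can be chained into a single pipeline:</p>

```python
# Illustrative sketch of the four PHAX layers as a pipeline; names are assumptions.
from dataclasses import dataclass, field

@dataclass
class Argument:
    claim: str
    supports: list = field(default_factory=list)
    attacks: list = field(default_factory=list)

def data_layer(raw_sources):
    """Harmonize heterogeneous inputs into plain text records."""
    return [s.strip().lower() for s in raw_sources]

def nlp_layer(records):
    """Stand-in for NER/SRL/discourse parsing: extract candidate claims."""
    return [{"claim": r, "evidence": ["corpus frequency"]} for r in records]

def argumentation_layer(analyses):
    """Wrap NLP outputs as defeasible arguments with explicit supports."""
    return [Argument(claim=a["claim"], supports=a["evidence"]) for a in analyses]

def ui_layer(arguments, audience="patient"):
    """Render an audience-specific textual justification."""
    detail = "full" if audience == "clinician" else "brief"
    return [f"[{detail}] {arg.claim} (because: {', '.join(arg.supports)})"
            for arg in arguments]

explanations = ui_layer(argumentation_layer(nlp_layer(data_layer(
    ["Myocardial infarction detected  "]))), audience="clinician")
print(explanations[0])
```

<p>Each layer consumes the previous layer’s output, so user feedback can refine any stage without breaking traceability from raw input to rendered explanation.</p>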
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Formal Specification and Data Flow in PHAX</title>
        <p>PHAX follows a layered architecture that integrates formal argumentation, NLP, and user modeling
for user-adaptive explainability in public health. Each component is defined by its data types and
transformation functions, enabling traceability from raw input to stakeholder-tailored explanations.
Formal Framework. We define an abstract argumentation framework as AF = (A, R), where A
is the set of arguments and R ⊆ A × A the attack relation. In PHAX/ASPIC+, the knowledge base
is K = (S, D), where S contains strict rules and facts, and D consists of defeasible rules and
empirical observations.</p>
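<p>These two definitions can be encoded directly (a minimal sketch; the class and field names are illustrative assumptions):</p>

```python
# Direct encoding of AF = (A, R) and the knowledge base K = (S, D); names assumed.
from dataclasses import dataclass

@dataclass(frozen=True)
class AF:
    arguments: frozenset   # A: the set of arguments
    attacks: frozenset     # R ⊆ A × A: the attack relation

@dataclass(frozen=True)
class KnowledgeBase:
    strict: tuple          # S: strict rules and facts
    defeasible: tuple      # D: defeasible rules and empirical observations

af = AF(frozenset({"simplify", "ambiguity"}),
        frozenset({("ambiguity", "simplify")}))
kb = KnowledgeBase(strict=("clinical_guideline => recommendation",),
                   defeasible=("freq(s) > freq(t) => prefer(s)",))
assert ("ambiguity", "simplify") in af.attacks
```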
        <p>Rule Types. Strict rules (S) encode deterministic knowledge (e.g., clinical guideline ⇒
recommendation). Defeasible rules (D) capture uncertain, corpus- or context-driven inferences (e.g., if
frequency(symptom_i) &gt; frequency(symptom_j), prefer(symptom_i)).</p>
        <p>NLP Mapping. NLP modules (such as NER, SRL, and discourse parsing) map their outputs into
premises and rules via:</p>
        <p>Φ : NLP outputs → (P, S ∪ D),
where P denotes extracted premises.</p>
        <p>Argument Construction. Arguments are constructed by chaining premises and rules:
A = ⟨P, R⟩,
with attacks (rebut, undercut) following ASPIC+. For clarity, we use this compact form, though ASPIC+
models arguments in full detail.</p>
        <p>Graph Evaluation. Given semantics σ ∈ {grounded, preferred}, the accepted extension is E ⊆
A. Grounded semantics yield conservative acceptance, while preferred semantics allow richer extensions; other
variants may also apply.</p>
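<p>The grounded extension can be computed by iterating Dung’s characteristic function from the empty set to its least fixed point. The following sketch implements that standard procedure (argument names in the example are arbitrary):</p>

```python
# Grounded extension via the characteristic function F(S) = {a | S defends a},
# iterated from the empty set to its least fixed point (standard Dung semantics).
def grounded_extension(arguments, attacks):
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended(candidate, s):
        # every attacker of `candidate` must itself be attacked by some member of s
        return all(any((d, att) in attacks for d in s) for att in attackers[candidate])

    extension = set()
    while True:
        nxt = {a for a in arguments if defended(a, extension)}
        if nxt == extension:
            return extension
        extension = nxt

# Unattacked arguments enter first; arguments they defend follow in later rounds.
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
```

<p>Here "a" is unattacked, so it is accepted immediately; "c" is then accepted because its only attacker "b" is defeated by "a".</p>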
        <p>Explanation Object. A PHAX explanation is:</p>
        <p>E = (T*, U),
where T* is the argument subtree supporting the output under σ, and U is the user profile. E is valid if
T* supports the claim and satisfies utility criteria such as readability, detail, or audience alignment. In
other words, explanations are extracted from the evaluated argument graph and then tailored
to the needs of the user profile U.</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Common Argumentation Schemes in Public Health and Biomedical Reasoning</title>
        <p>Decisions in public health and biomedicine often face uncertainty, multiple stakeholders, ethical issues,
and context dependence. To handle these factors, argumentation may take different forms—causal,
analogical, practical, or expert-based—depending on the task and audience. PHAX applies well-established
argumentation schemes to generate explanations that are systematically organized and accessible to
diverse users. These schemes capture typical reasoning patterns employed to justify claims in various
domains [21]. Each scheme defines a type of inference (e.g., expert authority, practical goals) and is
accompanied by critical questions that guide its evaluation. In public health and biomedical decisions,
they provide a solid basis for user-facing justifications. Table 2 shows several schemes adapted to
real-world scenarios. Beyond structuring logical support, they also act as templates for natural language
explanations aligned with stakeholder needs.</p>
        <sec id="sec-3-3-1">
          <title>Formal Representation of Schemes</title>
          <p>PHAX uses formal representations of argumentation schemes to support logic-based justification and
reasoning. Below are selected examples:
Formalization: The scheme for Argument from Expert Opinion, where P is the proposition under
consideration and D is the relevant domain, can be encoded as:</p>
          <p>is_expert(E, D), asserts(E, P), relevant(P, D) ⇒ believe(P)</p>
        </sec>
        <sec id="sec-3-3-2">
          <title>Cause to Effect:</title>
          <p>action(A), causes(A, E) ⇒ expect(E)</p>
        </sec>
        <sec id="sec-3-3-3">
          <title>Practical Reasoning:</title>
          <p>goal(G), action(A), promotes(A, G) ⇒ do(A)</p>
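<p>Both scheme rules can be executed as simple forward-chaining inferences over predicate tuples. The sketch below is illustrative (the fact encoding is our own assumption, not the PHAX implementation):</p>

```python
# The two scheme formalizations as forward-chaining rules over predicate tuples.
def cause_to_effect(facts):
    """action(A), causes(A, E) => expect(E)"""
    actions = {f[1] for f in facts if f[0] == "action"}
    return {("expect", f[2]) for f in facts
            if f[0] == "causes" and f[1] in actions}

def practical_reasoning(facts):
    """goal(G), action(A), promotes(A, G) => do(A)"""
    goals = {f[1] for f in facts if f[0] == "goal"}
    actions = {f[1] for f in facts if f[0] == "action"}
    return {("do", f[1]) for f in facts
            if f[0] == "promotes" and f[1] in actions and f[2] in goals}

facts = {("action", "vaccinate"), ("causes", "vaccinate", "immunity"),
         ("goal", "immunity"), ("promotes", "vaccinate", "immunity")}
print(cause_to_effect(facts))      # expected consequence of the action
print(practical_reasoning(facts))  # recommended action toward the goal
```

<p>Critical questions attached to each scheme (e.g., "does the action have side effects?") would be modeled as attacks on these derived conclusions.</p>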
        </sec>
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Structured Reasoning and Argumentative Explanation in PHAX</title>
        <p>To support structured and adaptable explanations, PHAX relies on a hybrid formal foundation that
combines elements from deductive, structured, and label-based argumentation models. PHAX applies
structured argumentation to ensure that conclusions remain traceable and valid in biomedical and
public health contexts. The framework incorporates ASPIC+, which supports both strict (deductive)
and defeasible rules. Strict rules model clear-cut logic (e.g., eligibility based on clinical criteria), while
defeasible rules capture reasoning under uncertainty and exceptions—crucial in high-stakes public
health decision-making.</p>
        <p>Additionally, PHAX adopts principles from label-based argumentation to handle preference,
uncertainty, and credibility. Arguments may carry labels such as confidence, stakeholder relevance, or
ethical weight; these propagate through the argument graph to guide resolution. This makes the
system sensitive to contextual and user-specific needs, supporting more personalized and socially
attuned justifications. Finally, PHAX incorporates argumentation schemes, such as Expert Opinion,
Practical Reasoning, and Cause to Effect, which reflect common patterns of human reasoning. These
schemes serve as templates for generating natural language explanations that align with how different
stakeholders interpret justification—enhancing both transparency and persuasive power.</p>
        <p>At the core of PHAX is the use of structured argumentation to represent and explain AI-generated
outputs. Each decision is modeled as a claim supported by explicit premises and, when appropriate,
challenged by potential objections—mirroring human reasoning and enabling transparent justifications.
For instance, in medical text simplification, PHAX treats the decision to simplify a term t to s not
merely as an output, but as an argument that can be analyzed and, if needed, contested.</p>
        <sec id="sec-3-4-1">
          <title>3.4.1. Illustrative Example: Medical Simplification as Structured Argument</title>
          <p>The following structures are formalized using ASPIC+, enabling both graphical visualization and
logic-based evaluation. This approach goes beyond surface-level explainability by exposing the reasoning
process itself. In particular, the decision to simplify a term t to s is not presented as a final output
alone, but accompanied by explicit justifications and possible objections. This aligns with the principles
of structured argumentation used in explainable AI.</p>
          <p>Claim: Term t is simplified to s.
Support 1: s is more frequent in lay corpora.
Support 2: No semantic loss detected via NLI model.
Attack: s may be ambiguous in clinical contexts.</p>
        </sec>
        <sec id="sec-3-4-2">
          <title>Dung’s Abstract Argumentation Framework (AF)</title>
          <p>In Dung’s AF [13], arguments are modeled as atomic elements with defined attack relations. Let:
A: the argument supporting the simplification of t → s
B: support based on frequency: "s is more frequent than t"
C: support based on semantic similarity: "no meaning lost"
D: counterargument: "s is ambiguous in clinical contexts"
We define the argument set and the attack relation as follows:</p>
          <p>Args = {A, B, C, D},</p>
          <p>Attacks = {(D, A)}</p>
          <p>Here, D challenges the simplification decision, which can be evaluated using grounded or preferred
semantics depending on the context.</p>
        </sec>
      </sec>
      <sec id="sec-3-5">
        <title>ASPIC+ Representation</title>
        <p>ASPIC+ [14] enriches this view by including internal structure, rules, and types of reasoning. The same
example can be modeled as:</p>
        <p>Premises
p1: frequency(s) &gt; frequency(t)
p2: semantic_match(t, s) = true
p3: ambiguity(s) = high_risk
Rules
r1: (p1, p2) ⇒ simplify(t, s)
r2: p3 ⇒ ¬simplify(t, s)</p>
        <p>Arguments
A1 = ⟨p1, p2, r1⟩ ⇒ simplify(t, s)
A2 = ⟨p3, r2⟩ ⇒ ¬simplify(t, s)</p>
        <p>Here, A2 attacks A1, resulting in a defeasible justification structure. The system can select or
reject the simplification based on external preferences, such as the user’s role (e.g., patient or clinician).
This structured approach allows PHAX to generate explainable outputs that go beyond readability
scores, instead providing reasoned justifications that can be tailored and interrogated across use cases.</p>
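<p>A minimal sketch of how this conflict between A1 and A2 might be resolved by an external, role-based preference (the premise names and the clinician-weighs-ambiguity-higher policy are illustrative assumptions):</p>

```python
# Role-based resolution of the rebuttal between A1 (simplify) and A2 (do not).
def decide_simplification(premises, user_role):
    pro = premises["freq_s_gt_t"] and premises["semantic_match"]  # r1 => simplify
    con = premises["ambiguous_in_clinic"]                         # r2 => not simplify
    if pro and con:
        # resolve by preference: clinicians weigh clinical ambiguity higher,
        # so A2 defeats A1 for them; for lay readers A1 prevails
        return user_role != "clinician"
    return pro and not con

p = {"freq_s_gt_t": True, "semantic_match": True, "ambiguous_in_clinic": True}
print(decide_simplification(p, "patient"))    # True: simplify for a lay reader
print(decide_simplification(p, "clinician"))  # False: keep the precise term
```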
        <sec id="sec-3-5-1">
          <title>3.4.2. Formalization: Evidence-Based Reasoning via PICO</title>
          <p>Moving beyond basic linguistic tasks such as term simplification, PHAX’s formal reasoning capabilities
extend to evidence-based clinical logic. The following illustrates how structured argumentation can be
applied to biomedical literature analysis, using the widely adopted PICO paradigm. Building on the
earlier simplification use case, structured argumentation
offers a compelling foundation for modeling evidence-based claims derived from biomedical literature
using the PICO (Population, Intervention, Comparison, Outcome) paradigm. PICO elements can be
expressed as formal predicates, enabling the construction of defeasible rules (see below).
Predicates
• P(x): entity x belongs to target population
• I(x): intervention applied
• C(x): control/comparison condition
• O(x): observed or expected outcome
Defeasible Rule
P(x) ∧ I(x) ⇒ O(x)</p>
          <p>This implies that for individuals in population P, the application of intervention I leads to outcome
O—under typical conditions. However, due to co-morbidities, alternative studies, or contextual
constraints, such a rule remains defeasible. Counterarguments may cite exceptions (e.g., “I is contraindicated
for subgroups in P”). Using ASPIC+, such clinical evidence can be formalized as follows:
Premises
p1: study population matches P
p2: intervention I applied
p3: outcome O observed
p4: source study credible
Defeasible Rule
(p1, p2, p3, p4) ⇒ recommend(I, P)</p>
          <p>Counterargument
(p1′, p2′, ¬p3, p4′) ⇒ ¬recommend(I, P)</p>
          <p>These opposing arguments can then be compared via preference criteria (e.g., study quality,
sample size) and evaluated within an argumentation framework using grounded or preferred semantics.
Grounded semantics selects the most cautious acceptable set of arguments, while preferred semantics
favors maximal admissible sets. This formalism not only enhances the interpretability of AI
recommendations in public health contexts, but also allows systematic traceability of how and why a certain
intervention is proposed—bridging evidence-based medicine and explainable AI.</p>
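<p>The comparison of pro and con evidence under a preference criterion can be sketched as follows (the study fields and the sample-size preference are illustrative assumptions, not a full ASPIC+ engine):</p>

```python
# PICO-style defeasible recommendation with a sample-size preference criterion.
def recommend(studies, intervention, population):
    def matches(s):
        return (s["population"] == population
                and s["intervention"] == intervention and s["credible"])
    pro = [s for s in studies if matches(s) and s["outcome_positive"]]
    con = [s for s in studies if matches(s) and not s["outcome_positive"]]
    if not pro:
        return False          # no credible supporting evidence at all
    if not con:
        return True           # unopposed defeasible rule fires
    # preference criterion: the larger credible trial wins the conflict
    return max(s["n"] for s in pro) > max(s["n"] for s in con)

studies = [
    {"population": "adults", "intervention": "drug_x",
     "outcome_positive": True, "credible": True, "n": 1200},
    {"population": "adults", "intervention": "drug_x",
     "outcome_positive": False, "credible": True, "n": 150},
]
print(recommend(studies, "drug_x", "adults"))  # larger trial supports the claim
```

<p>Swapping the preference criterion (e.g., study quality scores instead of sample size) changes which argument is defeated without altering the argument structure itself.</p>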
        </sec>
      </sec>
      <sec id="sec-3-6">
        <title>3.5. User-Adaptive Explanation Generation</title>
        <p>Public health communication involves a range of stakeholders—clinicians, policymakers, patients—each
with different cognitive needs and expectations. To support effective communication, PHAX dynamically
adapts both the structure and the presentation of its explanations based on user profiles. These user
modeling attributes—such as expertise, lexical tolerance, and cognitive expectations—govern how
explanations are tailored across multiple adaptation layers, as illustrated below.</p>
        <p>Theoretical Foundation. Drawing from Relevance Theory [22] and Grice’s Cooperative
Principles [23], PHAX ensures that explanations are not only accurate but also cognitively appropriate for
the intended audience. This is operationalized through user modeling and selective generation of
explanation content. Furthermore, PHAX incorporates principles from Labelled Argumentation Frameworks
[24, 25] to propagate metadata such as confidence, role-based preference, or ethical weight across the
explanation graph.</p>
        <p>
          Definition 1. (User Profile) A user profile U is a tuple (e, ℓ, c), where:
• e ∈ ℝ: domain expertise level (e.g., clinician vs. layperson)
• ℓ ∈ ℝ: lexical tolerance (e.g., jargon sensitivity)
• c ∈ ℝ: cognitive depth (e.g., expected explanation complexity)
Definition 2. (Semantic Sufficiency) Given an explanation tree T and argument a, semantic
sufficiency SS_T(a) ∈ [0, 1] quantifies the extent to which T supports a, possibly via aggregation over
leaf node support and edge weights.
        </p>
        <p>Definition 3. (Utility Function) Utility is a linear combination of weighted factors:
Utility(T, U) = Σ_{i=1}^{n} w_i · f_i(T, U),
where f_i is a feature function (e.g., clarity, lexical fit), and w_i ∈ ℝ is a tunable weight.
Formal Mechanism. Each user is modeled as a profile U with the attributes defined above.
Given a full Quantitative Dispute Tree T(a) [26] for an argument a, the framework selects a
user-appropriate subgraph T* as follows:
T* = arg max_T Utility(T, U) subject to SS_T(a) ≥ θ(task)</p>
        <p>Utility(T, U) = w_1 · Clarity(T, U) + w_2 · Relevance(T, U) + w_3 · LexicalFit(T, U)
Where:
• SS_T(a): semantic sufficiency — does T* still justify argument a?
• θ(task): task-defined threshold for completeness
Adaptation Dimensions. Adaptation operates along lexical complexity (simplified phrasing for
lay users), information depth (detailed chains for experts, summaries for general audiences), and
presentation format (visuals for policymakers, text for patients, dialogue for clinicians).
Illustrative Example. A vaccine prioritization decision may be explained as clinical evidence for a
clinician (“Phase III trial data show 92% efficacy”), personal reassurance for a patient (“This vaccine
has helped many people like you stay safe”), or system-level impact for a policymaker (“Prioritizing
this group prevents ICU overload by 45%”).</p>
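<p>The constrained selection of T* can be sketched numerically. In this toy version the feature functions, weights, and candidate fields are all illustrative assumptions; only the overall shape (weighted sum subject to a sufficiency threshold) follows the definitions above:</p>

```python
# Toy instantiation of Utility(T, U) and the constrained arg-max over candidates.
def utility(candidate, profile, weights=(0.4, 0.3, 0.3)):
    w_clarity, w_relevance, w_lexfit = weights
    clarity = 1.0 / (1 + candidate["depth"])        # shallower trees read clearer
    relevance = candidate["semantic_sufficiency"]   # reuse SS_T(a) as relevance proxy
    lexfit = 1.0 - abs(candidate["jargon"] - profile["lexical_tolerance"])
    return w_clarity * clarity + w_relevance * relevance + w_lexfit * lexfit

def select_subgraph(candidates, profile, threshold=0.6):
    # constraint: SS_T(a) >= theta(task); then maximize utility
    admissible = [c for c in candidates
                  if c["semantic_sufficiency"] >= threshold]
    return max(admissible, key=lambda c: utility(c, profile))

candidates = [
    {"name": "full_chain", "depth": 4, "semantic_sufficiency": 0.95, "jargon": 0.9},
    {"name": "summary",    "depth": 1, "semantic_sufficiency": 0.70, "jargon": 0.2},
]
layperson = {"lexical_tolerance": 0.1}
print(select_subgraph(candidates, layperson)["name"])  # the lay profile gets "summary"
```

<p>Raising the sufficiency threshold or shifting the weights toward relevance pushes the selection back toward the full argument chain, which is how the same machinery serves expert users.</p>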
        <p>Relation to Argumentation Schemes. User-tailored explanations map to different argumentation
schemes depending on the audience: Cause to Effect for lay users (“Vaccination reduces risk of severe
disease”), Statistical Generalization for experts (“70% of patients showed improvement”), Practical
Reasoning for decision-makers (“To prevent ICU overload, prioritize group A”), and Ethical Reasoning
for public discourse (“We must protect the most vulnerable first”).</p>
        <p>Connection to User Interface Layer. These adaptive explanations are operationalized through
the User Interface Layer of PHAX, which selects and renders the appropriate format and depth of
explanation based on the computed utility for each user profile. The UI layer delivers argument
structures in different formats—textual justifications, interactive dialogues, or visual dashboards—acting
as the channel through which they reach the user. In this way, the formal reasoning developed in earlier
layers is preserved while being presented in a form that is both understandable and convincing for its
audience.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Application Scenarios Across Public Health and Biomedical Sciences</title>
      <p>PHAX addresses a broad spectrum of reasoning and communication challenges in public health and
biomedical domains, where decisions often involve uncertainty, competing values, and diverse
stakeholders. Beyond its core architecture, the framework provides structured and audience-sensitive explanations
tailored to real-world needs—from clinical decision support to public communication. Below, we present
representative scenarios illustrating how PHAX integrates argumentation and explanation to promote
transparency, trust, and actionable insight across practical settings.</p>
      <sec id="sec-5-1">
        <title>4.1. Decision Support and Stakeholder Alignment</title>
        <p>Public health decisions frequently demand balancing competing priorities, working with limited
resources, and dealing with uncertainty. For instance, setting vaccination priorities during a pandemic
requires weighing exposure risks, equity concerns, and the capacity of the healthcare system. PHAX
models such dilemmas using defeasible argumentation, enabling transparent, traceable justifications
for complex decisions. Its layered architecture delivers explanations tailored to different stakeholders:
clinicians may explore structured evidence trails via interactive dashboards, while policymakers access
high-level summaries that emphasize societal trade-offs and ethical considerations.</p>
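The defeasible evaluation mentioned above can be sketched over an attack graph in the spirit of grounded semantics: an argument is accepted once every attacker is rejected, and rejected once any accepted attacker exists. The argument names below are illustrative, not outputs of PHAX.

```python
# Minimal sketch of defeasible evaluation, grounded-semantics style.
# `attacks` is a set of (attacker, target) pairs; argument names are
# illustrative placeholders for a vaccine-prioritization debate.
def grounded_accepted(args, attacks):
    """Return the set of arguments accepted under grounded-style labelling."""
    accepted, rejected = set(), set()
    changed = True
    while changed:
        changed = False
        for a in args:
            if a in accepted or a in rejected:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            if attackers <= rejected:        # every attacker is rejected
                accepted.add(a)
                changed = True
            elif attackers & accepted:       # some accepted attacker survives
                rejected.add(a)
                changed = True
    return accepted
```

The loop is order-independent: it re-scans until no label changes, so unattacked arguments propagate acceptance through the graph.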
      </sec>
      <sec id="sec-5-2">
        <title>4.2. Evidence Synthesis and Biomedical Summarization</title>
        <p>Systematic reviews play a central role in biomedical research by combining results from multiple
studies, but their length and variability can make them hard to access and interpret. PHAX supports
structured summarization by using argumentation mining on PICO-extracted data to capture key
claims, counterclaims, and the strength of supporting evidence. These elements are organized into
argument structures, producing contrastive summaries that highlight where studies agree, disagree, or
remain uncertain. Such summaries help clinicians and researchers quickly navigate complex and often
conflicting literature.</p>
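The contrastive grouping described above can be sketched by bucketing extracted claims per outcome and labelling each outcome as agreement, disagreement, or uncertainty. The field names (`outcome`, `direction`) are assumptions for illustration; actual PICO extraction output is richer.

```python
# Minimal sketch of a contrastive summary over mined study claims.
# Each claim is a dict with an 'outcome' and a 'direction':
# '+' (benefit), '-' (harm/no benefit), '?' (inconclusive).
from collections import defaultdict

def contrastive_summary(claims):
    """Label each outcome as 'agree', 'disagree', or 'uncertain'."""
    by_outcome = defaultdict(set)
    for c in claims:
        by_outcome[c["outcome"]].add(c["direction"])
    summary = {}
    for outcome, directions in by_outcome.items():
        if directions in ({"+"}, {"-"}):
            summary[outcome] = "agree"
        elif {"+", "-"} <= directions:
            summary[outcome] = "disagree"
        else:
            summary[outcome] = "uncertain"
    return summary
```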
      </sec>
      <sec id="sec-5-3">
        <title>4.3. Public Communication and Policy Justification</title>
        <p>Effective communication of health interventions—such as lockdowns or vaccine mandates—requires
balancing scientific accuracy with accessibility for diverse audiences. PHAX supports this by using
established argumentation schemes (causal, ethical, practical) and adapting their wording and framing
to different user profiles. For instance, a lockdown policy may be framed in terms of “transmission
control” when addressing clinicians, but emphasized as “protecting the vulnerable” in public-facing
messages. This audience-sensitive adaptation enhances clarity and trust without compromising factual
integrity.</p>
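The audience-sensitive framing in the lockdown example can be sketched as template selection over a shared fact. The framing strings and audience keys are illustrative assumptions, not actual PHAX outputs.

```python
# Minimal sketch of audience-sensitive framing: the same underlying
# fact is rendered with different emphasis per audience. Templates
# and audience keys are illustrative assumptions.
FRAMES = {
    "clinician": "Lockdown supports transmission control ({fact}).",
    "public": "Staying home protects the most vulnerable ({fact}).",
}

def frame_message(audience: str, fact: str) -> str:
    """Render `fact` in the frame for `audience`; fall back to the bare fact."""
    return FRAMES.get(audience, "{fact}").format(fact=fact)
```

Because every frame interpolates the same `fact`, the factual content is invariant across audiences; only the emphasis changes.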
      </sec>
      <sec id="sec-5-4">
        <title>4.4. Risk Communication and Misinformation Rebuttals</title>
        <p>Health misinformation often spreads through arguments that are emotionally compelling but logically
weak. PHAX tackles this by producing structured rebuttals: it breaks claims into premises, tests their
validity, and formulates counterarguments supported by scientific evidence and adapted to the audience.
For instance, the false claim that “vaccines cause infertility” can be refuted through mechanistic evidence
and trial data for clinicians, while lay audiences may receive simpler, empathetically framed responses
that emphasize safety and social consensus. This audience-aware rebuttal strategy enhances persuasive
effectiveness without compromising scientific rigor.</p>
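The rebuttal pipeline described above—split a claim into premises, test each against evidence, and collect the failures—can be sketched as follows. The evidence table and premise strings are illustrative placeholders, not real PHAX data.

```python
# Minimal sketch of a structured rebuttal: check each premise of a
# claim against an evidence table; unsupported premises ground the
# counterargument. Evidence entries are illustrative placeholders.
def rebut(premises, evidence):
    """Return (is_rebutted, failing_premises).

    `evidence` maps a premise to True (supported) or False (refuted);
    premises absent from the table count as unsupported.
    """
    failing = [p for p in premises if not evidence.get(p, False)]
    return (len(failing) > 0, failing)
```

Downstream, each failing premise would be paired with audience-adapted counter-evidence (mechanistic data for clinicians, empathetic framing for lay audiences), per the text above.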
      </sec>
      <sec id="sec-5-5">
        <title>4.5. Interface-Driven Personalization and Delivery</title>
        <p>An explanation is shaped as much by how it is delivered as by what it contains. PHAX addresses this by
offering different modes of presentation, tailored to user preferences, literacy level, and context. These
modes include narrative text, visual overviews, and conversational dialogue. For example, patients
may receive conversational explanations via chatbot interfaces, while policymakers might explore
comparative scenario graphs that highlight trade-ofs. These modalities are selected dynamically based
on user modeling, ensuring that the explanation aligns with the user’s cognitive and informational
needs as captured by PHAX’s adaptive layer.</p>
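The dynamic modality selection described above can be sketched as a dispatch over a user model. The profile fields (`role`, `literacy`) and modality names are assumptions used only to illustrate the selection logic, not PHAX's actual user-model schema.

```python
# Minimal sketch of modality selection from a user model. Profile
# fields and modality names are illustrative assumptions.
def select_modality(user: dict) -> str:
    """Pick a delivery modality based on the user's role and literacy."""
    if user.get("role") == "policymaker":
        return "scenario_graph"      # comparative trade-off views
    if user.get("literacy", "low") == "low":
        return "chatbot_dialogue"    # conversational, low-jargon
    return "narrative_text"
```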
      </sec>
    </sec>
    <sec id="sec-6">
      <title>5. Conclusion and Future Work</title>
      <p>This study presents PHAX—a Public Health Argumentation and eXplainability framework—designed
to support transparent, context-aware, and user-adaptive explanations in high-stakes domains such
as healthcare and biomedical sciences. Building upon structured argumentation theory, PHAX
incorporates formal reasoning, adaptive NLP pipelines, and user modeling to generate stakeholder-specific
justifications for AI outputs. Our contributions include a modular architecture for integrating
explainability into the AI lifecycle, a formalization of user-adaptive explanation generation, and illustrative
applications in medical term simplification, policy justification, and systematic review summarization.
By combining defeasible reasoning, argumentation schemes, and multimodal delivery interfaces, PHAX
enables interpretable decision support tailored to diverse user needs. Future work extends PHAX
with uncertainty-aware and value-sensitive argumentation to better reflect complex, conflicting public
health priorities. A key direction involves activating PHAX’s adaptive layer through live user feedback,
enabling continuous refinement of explanations aligned with user profiles. We also plan to evaluate
PHAX in real-world settings through user studies with clinicians, policymakers, and patients, to assess
explanation effectiveness, trust calibration, and usability in practice.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>This work has been financially supported by the German Federal Ministry of Health (BMG) under grant
No. ZMI5-2523GHP027 (project “Strengthening National Immunization Technical Advisory Groups
and their Evidence-based Decision-making in the WHO European Region and Globally”, SENSE), part of
the Global Health Protection Programme (GHPP).</p>
    </sec>
    <sec id="sec-8">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>E. J.</given-names>
            <surname>Topol</surname>
          </string-name>
          ,
          <article-title>High-performance medicine: The convergence of human and artificial intelligence</article-title>
          ,
          <source>Nature Medicine</source>
          <volume>25</volume>
          (
          <year>2019</year>
          )
          <fpage>44</fpage>
          -
          <lpage>56</lpage>
          . doi:10.1038/s41591-018-0300-7.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Amann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Blasimme</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Vayena</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Frey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. I.</given-names>
            <surname>Madai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Consortium</surname>
          </string-name>
          ,
          <article-title>Explainability for artificial intelligence in healthcare: A multidisciplinary perspective</article-title>
          ,
          <source>BMC Medical Informatics and Decision Making</source>
          <volume>20</volume>
          (
          <year>2020</year>
          )
          <fpage>310</fpage>
          . doi:10.1186/s12911-020-01332-6.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>G.</given-names>
            <surname>Hattab</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Irrgang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Körber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Kühnert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Ladewig</surname>
          </string-name>
          ,
          <article-title>The way forward to embrace artificial intelligence in public health</article-title>
          ,
          <source>American Journal of Public Health</source>
          <volume>115</volume>
          (
          <year>2025</year>
          )
          <fpage>123</fpage>
          -
          <lpage>128</lpage>
          . doi:10.2105/AJPH.2024.307888.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Dubey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Hattab</surname>
          </string-name>
          ,
          <article-title>A nested model for ai design and validation</article-title>
          ,
          <source>iScience</source>
          <volume>27</volume>
          (
          <year>2024</year>
          )
          <fpage>110603</fpage>
          . doi:10.1016/j.isci.2024.110603.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>H.</given-names>
            <surname>Khalili</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Wimmer</surname>
          </string-name>
          ,
          <article-title>Towards improved xai-based epidemiological research into the next potential pandemic</article-title>
          ,
          <source>Life</source>
          <volume>14</volume>
          (
          <year>2024</year>
          )
          <fpage>783</fpage>
          . doi:10.3390/life14070783.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>D.</given-names>
            <surname>Mindlin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Robrecht</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Morasch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Cimiano</surname>
          </string-name>
          ,
          <article-title>Measuring user understanding in dialogue-based XAI systems</article-title>
          ,
          in:
          <source>ECAI 2024 - 27th European Conference on Artificial Intelligence, 19-24 October 2024, Santiago de Compostela, Spain - Including 13th Conference on Prestigious Applications of Intelligent Systems (PAIS 2024)</source>
          ,
          <year>2024</year>
          , pp.
          <fpage>1148</fpage>
          -
          <lpage>1155</lpage>
          . doi:10.3233/FAIA240608.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>G.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Aishwarya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Gadiraju</surname>
          </string-name>
          ,
          <article-title>Is conversational XAI all you need? Human-AI decision making with a conversational XAI assistant</article-title>
          ,
          in:
          <source>Proceedings of the 30th International Conference on Intelligent User Interfaces (IUI 2025)</source>
          , ACM,
          <year>2025</year>
          , pp.
          <fpage>907</fpage>
          -
          <lpage>924</lpage>
          . doi:10.1145/3708359.3712133.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Abdul</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Vermeulen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. Y.</given-names>
            <surname>Lim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kankanhalli</surname>
          </string-name>
          ,
          <article-title>Trends and trajectories for explainable, accountable and intelligible systems: An hci research agenda</article-title>
          , in: Proceedings of
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>