<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>IPEXCO: A Platform for Iterative Planning with Interactive Goal-Conflict Explanations</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Rebecca Eifler</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Guilhem Fouilhé</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Institut de Recherche en Informatique de Toulouse (IRIT)</institution>
          ,
          <addr-line>118 Route de Narbonne, 31062 Toulouse</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Laboratory for Analysis and Architecture of Systems (LAAS-CNRS)</institution>
          ,
          <addr-line>7 Avenue du Colonel Roche, 31400 Toulouse</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>When automating plan generation for a real-world sequential decision problem, the objective is often not to replace the human planner, but rather to facilitate an iterative reasoning and elicitation process. In this process, the human's role is to guide the planner according to their preferences and expertise. In this context, explanations that address users' questions are crucial to improve their understanding of potential solutions and increase their trust in the system. We present a platform that implements this iterative planning approach and provides explanations to user questions based on conflicting goals and preferences. The platform supports both a classical template-based interface and a multi-agent Large Language Model (LLM) architecture that enables interactive explanations tailored to the user and context. The integration of online user studies allows for the evaluation of the effectiveness of the explanations and the impact of the communication interface. The platform code is available at https://github.com/r-eifler/IPEXCO-frontend.</p>
      </abstract>
      <kwd-group>
        <kwd>Planning</kwd>
        <kwd>Explanations</kwd>
        <kwd>minimal unsolvable sets (MUS)</kwd>
        <kwd>LLM</kwd>
        <kwd>conversational</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Real-world problems are often oversubscribed, meaning that due to constraints such as limited resources
or time, not all goals and user preferences can be satisfied. An iterative planning process, as outlined
in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], enables users to explore the plan space and to refine their preferences to identify a satisfactory
trade-off. In such an interactive planning setup, explanations that help the user to understand the
dependencies between the goals and their preferences are crucial. We present IPEXCO, an online
platform that facilitates iterative planning with interactive explanations. The platform has two main
objectives. First, it provides a user-friendly interface for iterative planning, supported by goal conflict
explanations as introduced in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] and more recently extended in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Explanations based on minimal
unsolvable sets (MUS) or minimal conflicts are also relevant in constraint programming [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ]. They
provide contrastive explanations [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] to user questions such as “Why does the plan not satisfy property P?” by offering insights into
the properties that can no longer be satisfied in the alternative proposed in
the question. These explanations are accessible either via a template-based interface or via an interactive
natural language interface leveraging the recent advances in large language models, while still relying
on specialized algorithms to compute the explanations. To facilitate the evaluation of the explanations,
our platform supports online user studies in a controlled environment, allowing participants to apply
their own strategies during the planning process and to use the explanations as they see fit.
      </p>
      <p>In the following, we provide an overview of the main platform features, the iterative planning process,
goal-conflict explanations, and the user study support. Then we present the interactive explanation
interface based on a multi-agent LLM architecture. Finally, we give an overview of the modular platform
architecture.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Iterative Planning Process</title>
      <p>The platform implements an iterative planning process. In each iteration step, the user is presented
with a sample plan π<sub>i</sub> (if one exists). Based on π<sub>i</sub>, the user must then decide which goals and preferences the
sample plan π<sub>i+1</sub> in the next iteration should satisfy. If the goals and preferences enforced by the user
are unsolvable, no sample plan can be computed. To restore solvability, the user must then decide which
goals and preferences to forego. During the exploration of the plan space, new preferences may emerge,
and goals which are too restrictive might be dropped. Our platform facilitates this iterative process by
offering interfaces to access the individual iteration steps, along with the preferences and goals satisfied
in each step. Additionally, it allows users to define temporal goals, reflecting their preferences. The
planning task is given by a PDDL domain and problem file, and the goals are defined during the planning
process via different interface options, which are outlined next.</p>
      <p>
        <bold>Goal Creation</bold> At the beginning of the planning process, the user defines all known goals and
potential preferences. As the planning process progresses, the initial goals and preferences are refined
based on the analysis of sample plans. Goals are defined by LTLf [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] formulas with literals based on the
predicates and objects of the planning task. To facilitate the goal creation process, the requirement to
write LTLf is bypassed, as it is not suitable for laypeople. Two options are supported.
      </p>
      <p>First, it is possible to define templates for each domain to cover commonly used goals and temporal
preferences. Such a template maps a natural language description, e. g. “Load package $Pi before
package $Pj into truck $T”, to an LTLf formula: ¬in($Pj, $T) U in($Pi, $T). To restrict the objects offered
for selection, the allowed types (e.g. $Pi must be of type package) and facts that must/must not be
satisfied in the initial state (e.g. ¬in($Pi, $T)) can be specified. This helps to ensure that only well-formed
goals can be instantiated. In Figure 1, on the left, a list of possible templates for a logistics domain is
shown. The first step in creating a goal is choosing one of those templates. In the second step, the
objects are selected.</p>
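      <p>The template mechanism described above can be sketched as a small data structure. This is an illustrative assumption, not the IPEXCO implementation: the class and field names are invented, and the LTLf formula is written in an ASCII rendering (<monospace>!</monospace> for ¬).</p>
      <preformat>
```python
# Hypothetical sketch of a goal template; names and ASCII LTLf syntax are
# assumptions for illustration, not taken from the IPEXCO code base.
from dataclasses import dataclass

@dataclass
class GoalTemplate:
    description: str        # natural language text with $-prefixed variables
    ltlf_pattern: str       # LTLf formula using the same variables
    variable_types: dict    # allowed object type per variable
    init_constraints: list  # facts that must (not) hold in the initial state

LOAD_BEFORE = GoalTemplate(
    description="Load package $Pi before package $Pj into truck $T",
    ltlf_pattern="!in($Pj, $T) U in($Pi, $T)",
    variable_types={"$Pi": "package", "$Pj": "package", "$T": "truck"},
    init_constraints=["!in($Pi, $T)"],
)

def instantiate(template: GoalTemplate, binding: dict) -> str:
    """Substitute concrete objects for the template variables."""
    formula = template.ltlf_pattern
    for var, obj in binding.items():
        assert var in template.variable_types, f"unknown variable {var}"
        formula = formula.replace(var, obj)
    return formula

goal = instantiate(LOAD_BEFORE, {"$Pi": "p1", "$Pj": "p2", "$T": "t1"})
# goal == "!in(p2, t1) U in(p1, t1)"
```
      </preformat>
      <p>A real implementation would additionally check the declared types and initial-state constraints before offering objects for selection.</p>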
      <p>
        The second option is to use an LLM-based goal translator that translates a natural language description
of the goal into an LTLf formula. An example is given on the right in Figure 1. The LLM infers the
delivery location from the planning task. The translated LTLf formulas shown in this figure are intended
to be shown exclusively to expert users. An alternative approach to verify the translation involves a
reverse translation, in which the generated formula is converted back into natural language
without ambiguities. Several tools such as NL2LTL [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], Lang2LTL [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], and nl2spec [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] use LLMs and
prompting to translate temporal goals into LTL or LTLf. We opted for a simple base implementation,
similar to the End-to-End Approach described by [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
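      <p>An end-to-end translation of this kind can be reduced to a single prompt. The following sketch only builds the chat messages; the prompt wording and the grounded predicate list are illustrative assumptions, not the prompts shipped with IPEXCO.</p>
      <preformat>
```python
# Sketch of an end-to-end NL-to-LTLf translation prompt, loosely in the
# spirit of Lang2LTL; wording and predicates are illustrative assumptions.
def build_translation_prompt(task_predicates: list, nl_goal: str) -> list:
    """Build chat messages asking an LLM to emit a single LTLf formula."""
    system = (
        "Translate the user's goal into one LTLf formula. "
        "Use only these grounded predicates: "
        + ", ".join(task_predicates)
        + ". Answer with the formula only."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": nl_goal},
    ]

messages = build_translation_prompt(
    ["at(p1, depot)", "at(p1, market)", "in(p1, t1)"],
    "Eventually deliver package p1 to the market",
)
```
      </preformat>
      <p>Restricting the model to the grounded predicates of the planning task is one way to keep the output well-formed; verifying the result (e.g. by back-translation) remains necessary.</p>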
      <p><bold>Iterative Planning Process</bold> During an iterative planning process, the user explores the plan space
by iteratively enforcing goals and inspecting the resulting sample plans. The interface is designed to
provide users with the information they need to make informed decisions about which goals to enforce.
To make this decision, users need an overview of the existing iteration steps and access to more detailed
information for each step. At the same time, users should be able to select the goals to be enforced in
the next iteration step. Our tool implements this process as follows.</p>
      <p>The overview of the iteration steps is shown on the left side of Figure 2. This view allows users to quickly
identify which steps were solvable and which were unsolvable, along with the utility they achieved. The details view
of an iteration step offers, in addition to the information provided in the overview, the following details.
For a solvable iteration step (a plan satisfying the enforced goals exists), the additionally satisfied reference
goals, the unsatisfied reference goals, and the enforced goals are listed. The sample plan satisfying the
enforced goals is accessible and displayed in a separate view. By default, the plan is displayed as a
list of actions; alternatively, it is possible to implement a domain-dependent plan visualization. For an
unsolvable iteration step (no plan satisfying the enforced goals exists), the enforced goals are listed.</p>
      <p>The interface to create the next iteration step is located in a side panel, as shown on the right in
Figure 2. To create a new step, a user must select at least one enforced goal. These goals can be either
existing or newly created. Additionally, users can also select or create new reference goals. Reference
goals represent objectives and preferences the user is interested in, but that need not be satisfied by the
next sample plan. The reference goals are considered in the explanations, offering insights on how to
satisfy them in subsequent iteration steps.</p>
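      <p>The distinction between enforced goals, reference goals, and the resulting sample plan can be captured in a small record. The field names below are assumptions for illustration, not taken from the platform code.</p>
      <preformat>
```python
# Minimal sketch of an iteration-step record; field names are assumptions.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class IterationStep:
    enforced_goals: set                                # must hold in the next sample plan
    reference_goals: set = field(default_factory=set)  # of interest, not enforced
    plan: Optional[list] = None                        # action list, None if unsolvable

    @property
    def solvable(self) -> bool:
        return self.plan is not None

step = IterationStep(
    enforced_goals={"deliver_p1", "deliver_p2"},
    reference_goals={"deliver_p3"},
    plan=["load(p1, t1)", "drive(t1, depot, market)"],
)
```
      </preformat>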
    </sec>
    <sec id="sec-3">
      <title>3. Goal Conflict Explanations</title>
      <p>
        Goal-conflict explanations in planning were introduced in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] and more recently extended in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. They
are based on minimal unsolvable subsets (MUS) and minimal correction sets (MCS), a concept also used
in constraint programming [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ]. A MUS M is a set of goals that is not solvable, i. e. there exists no plan
that satisfies all goals in M, but all proper subsets of M are solvable. This means a MUS is a minimal
conflict. For example, in a logistics problem the set {P1, P2, P3}, where Pi indicates that package i is
delivered, is a MUS if it is not possible to deliver all three packages due to, for example, limited fuel, but
any combination of two packages can be delivered. Given a set of goals G that is unsolvable, an
MCS is a subset C ⊆ G such that G ∖ C is solvable, but for each proper subset of C this is not
the case. This means an MCS is a minimal set of goals one has to forego to restore solvability. If, for
example, the packages {P1, P2, P3, P4} cannot all be delivered, then the set {P1, P2} is an MCS if {P3, P4}
can be delivered but additionally delivering either P1 or P2 is not possible. Note that the conflicts we
analyze, such as those related to the delivery of packages or more generally to any temporal properties
of a set of plans, are not caused by inconsistencies in the planning model [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Conflicts may arise due
to various factors, including resource or time constraints, or conflicting objectives, such as minimizing
the overall driving distance while ensuring even utilization of the individual trucks.
      </p>
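      <p>The MUS and MCS definitions can be made concrete with a brute-force check against a solvability oracle. This is purely didactic; IPEXCO uses dedicated algorithms rather than enumeration, and the toy oracle (at most two packages deliverable, e.g. due to limited fuel) is an assumption matching the running example.</p>
      <preformat>
```python
# Brute-force illustration of the MUS/MCS definitions over a toy oracle.
def is_mus(goals: frozenset, solvable) -> bool:
    """goals is a MUS iff it is unsolvable but every proper subset is solvable."""
    if solvable(goals):
        return False
    return all(solvable(goals - {g}) for g in goals)

def is_mcs(correction: frozenset, goals: frozenset, solvable) -> bool:
    """correction is an MCS iff removing it restores solvability minimally."""
    if not correction <= goals or solvable(goals):
        return False
    if not solvable(goals - correction):
        return False
    # No proper subset of the correction may already restore solvability.
    return all(not solvable(goals - (correction - {g})) for g in correction)

# Toy oracle: at most two packages can be delivered.
solvable = lambda gs: len(gs) <= 2

assert is_mus(frozenset({"p1", "p2", "p3"}), solvable)
assert is_mcs(frozenset({"p1", "p2"}),
              frozenset({"p1", "p2", "p3", "p4"}), solvable)
```
      </preformat>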
      <p>
        Based on MUS and MCS, questions about unsolvable as well as solvable iteration steps can be addressed.
In an unsolvable step, the MUS of the enforced goals provide insights into why the step is unsolvable,
while the MCS offer guidance on how to restore solvability. In a solvable step, questions such as “Why is
goal G not satisfied?”, “What happens if I enforce goal G?” or “How can I satisfy goal G?” can be addressed.
These questions refer to goals that are not satisfied by the sample plan, but the user is considering
enforcing them in the next iteration step and wants to identify the implications of doing so. This
approach offers contrastive explanations [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] that provide insights into the goals that can no longer be
satisfied in the alternative proposed in the question. For a more detailed description we refer to [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>Explanations are provided in the details view of the iteration steps. The access to the explanations
and their presentation differ depending on the chosen interface. Currently, two options are supported.
In the template-based explanation interface, the user is restricted to a predefined list of questions and the
answer lists all explanations. The questions and explanations are displayed in a chat interface, as shown
on the left in Figure 3. For an unsolvable step, the questions refer to the entire selection of enforced
goals. Therefore, the explanation interface is displayed above the list of enforced goals. In a solvable
step, the questions refer to individual unsatisfied reference goals. Therefore, for each unsatisfied goal
there is a dedicated explanation interface accessible by clicking on the goal (see left Figure 3).</p>
      <p>The LLM-chat-based explanation interface is displayed at the top of the iteration step details view for
both solvable and unsolvable steps (see right Figure 3). Users can submit questions and receive answers
via a chat interface. The interface employs a hybrid approach. The LLM-agents function as a bridge,
translating the user’s natural language into the formal language of the planner, which serves as the
reasoner and computes the MUS and MCS. More examples and details can be found in the next section.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Interactive Explanations with LLMs</title>
      <p>
        As described in Section 3, IPEXCO supports an LLM-powered explanation interface introduced in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
This interface offers several key advantages for users interacting with the planning system. First, it
allows users to formulate their questions in natural language, with an LLM translating these questions
into formal queries for the explanation service. Secondly, the LLM converts the technical explanations
generated by the explanation service into natural language responses tailored to the user’s original
question. This not only creates a more natural conversation flow and adapts to the user’s vocabulary,
but also enables more complex interactions, including requests for summarized or partial explanations
as needed. Finally, using LLMs can facilitate the overall interaction by asking users to clarify their
requests when ambiguities arise.
      </p>
      <p>
        Similar approaches involving LLMs to enrich explainer-system outputs have been proposed in
explainable machine learning [
        <xref ref-type="bibr" rid="ref12 ref13 ref14 ref15">12, 13, 14, 15</xref>
        ] and in scheduling [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. The LLM interface implemented in
IPEXCO shares similarities with these works while being the first such application to planning systems.
The technical implementation of this LLM-powered explanation system is based on a multi-agent
approach where specialized LLM agents essentially perform translation tasks between natural language
and formal explanation queries.
      </p>
      <p>[Figure 4: Multi-agent LLM architecture. A Question Type Classifier assigns each user question a type (DIRECT, FOLLOW-UP, EFQUERY-GT, EFQUERY-noGT); a dispatcher routes the question accordingly, with a Goal Translator and an Explanation Translator bridging between natural language and the formal explanation framework.]</p>
      <p><bold>Multi-Agent LLM Architecture</bold> Our system (Figure 4) consists of three main components: Question
Type Classifier, which analyzes the user’s question to determine its type and how it should be processed
(it can directly answer simple questions like “What can I ask”, identify follow-up questions that reference
previous explanations, or route supported questions to the explanation service while translating the
question into a formal request); Goal Translator, which converts natural language goals into LTLf
formulas that the planning system understands and creates clear natural language descriptions of these
goals; and Explanation Translator, which converts technical explanations from the formal system
into natural language responses tailored to the user’s original question. The system maintains context
across interactions, allowing for natural conversation flow with follow-up questions and references
to previous explanations. For example, a user might ask “Why can’t I achieve goal X?” followed by
“How can I fix that?” without needing to restate the original goal. Beyond allowing a natural language
interaction, this system also allows more complex interactions with explanations including selection
and summarization, which can be actively requested by users.</p>
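      <p>The routing step can be sketched as follows. The question-type names follow Figure 4; the routing outcomes and function names are simplified assumptions, not the actual agent implementation.</p>
      <preformat>
```python
# Illustrative dispatcher for the question types of Figure 4; the outcomes
# are simplified assumptions about how each type is processed.
def route(question_type: str, question: str, history: list) -> str:
    """Decide how a classified user question is processed."""
    if question_type == "DIRECT":
        return "answer_directly"            # e.g. "What can I ask?"
    if question_type == "FOLLOW-UP":
        return "reuse_prior_explanations"   # answered from conversation history
    if question_type == "EFQUERY-GT":
        return "translate_goal_then_query"  # new goal must first become LTLf
    if question_type == "EFQUERY-noGT":
        return "query_explanation_service"  # formal query over existing goals
    raise ValueError(f"unsupported question type: {question_type}")

outcome = route("FOLLOW-UP", "How can I fix that?", ["prior MUS explanation"])
```
      </preformat>
      <p>Keeping the conversation history available to the router is what allows a follow-up such as “How can I fix that?” to be resolved without restating the original goal.</p>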
      <p><bold>Example interaction</bold> We demonstrate the capabilities of the LLM-chat-based explanation interface
through an example interaction in an unsolvable iteration step. In the UI, these examples would be
formatted in Markdown for enhanced readability.</p>
      <p>User: What are the conflicts?
LLM: Here are the current conflicts: [Bring friend to sports center, parent
does sports, bring grandma’s groceries to her house, parent buys groceries,
grandma and friend are not together in car], [Bring friend to sports center,
parent does sports, bring groceries back home, bring grandma’s groceries to
her house], [Bring grandma’s groceries to her house, do shopping before parent
sports] These conflicts indicate issues with achieving your goals based on
the currently enforced conditions.</p>
      <p>User: Thanks, is there one goal that is involved in all of these?
LLM: Yes, the goal "bring grandma’s groceries to her house" appears in all of
the listed conflicts. This indicates that it is a common factor contributing
to the conflicts preventing successful planning.</p>
      <p>User: So removing it is enough to make the task solvable, right?
LLM: Yes, removing the goal "bring grandma’s groceries to her house" would
be sufficient to resolve the conflicts and make the planning task solvable.
By forgoing this goal, you would eliminate all conflicts currently preventing
you from achieving your objectives.</p>
    </sec>
    <sec id="sec-5">
      <title>5. User Study Support</title>
      <p>In order to evaluate the usefulness of explanations and the impact of different interfaces, user studies
are required. To ensure a high-quality evaluation, participants should be given comparable tasks and
enough freedom to develop their own strategies and use the explanations according to their preferences.</p>
      <p>The platform provides a controlled environment through the use of planning processes with a fixed
set of goals. Each goal is assigned a utility that serves as a proxy for an intrinsic preference of the
participants. This approach ensures that all users are presented with the same optimization task,
which is to maximize the utility of a solvable iteration step. Note that the computation of explanations
can require a significant amount of time. The fixed set of goals allows for the pre-computation of
explanations, ensuring a responsive experience. Certain features of the iterative planning process may
be disabled. Participants who do not have access to sample plans, but only to the information and
explanations provided in the iteration step details view, must rely on the explanations. To encourage
high performance, the achievable utility for the task and the utility of the best iteration step are displayed.</p>
      <p>The platform facilitates online user studies by leveraging the recruitment platform Prolific. User
studies are composed of different parts that the user study coordinator can use to set up a user study to
their needs. The options are: description steps (general information and instructions), external links
(for example for questionnaires), a tool manual (general usage instructions for the tool and the selected
interfaces for explanations), planning task information (description of the domain, the specific instance,
and list of available goals), and planning tasks (iterative planning process with the objective to maximize
the utility). Each part is associated with a minimum or maximum processing time, which restricts the
progress of participants. This incentivizes users to allocate a reasonable amount of time to instructions
or questionnaires and to limit the processing time of the optimization task. Given the complexity of
the tool, it is essential that users receive training to understand and fully use all its features. For
this purpose, it is possible to select a task as an introductory task that contains additional instructions
for the user. All actions performed by the user are tracked with timestamps, allowing for the analysis
of user strategies and decisions over time. The platform also supports the monitoring of participants
during the study, by providing access to metrics such as completion time and achieved utility. For a
comprehensive statistical analysis, the user data can be exported as a JSON file.</p>
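      <p>The time-stamped action log and JSON export described above might look roughly like the following. The event fields are assumptions chosen to illustrate the idea, not the platform's actual export schema.</p>
      <preformat>
```python
# Sketch of a time-stamped action log with JSON export; field names are
# illustrative assumptions, not the IPEXCO export schema.
import json
import time

log = []

def track(user_id: str, action: str, **payload):
    """Record one user action with a timestamp for later analysis."""
    log.append({"user": user_id, "action": action,
                "timestamp": time.time(), **payload})

track("participant-7", "create_step", enforced_goals=["deliver_p1"])
track("participant-7", "ask_question", text="Why is deliver_p2 not satisfied?")

export = json.dumps(log, indent=2)  # exported for statistical analysis
```
      </preformat>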
    </sec>
    <sec id="sec-6">
      <title>6. Architecture</title>
      <p>The tool is a web platform, i. e. it runs directly in the browser and does not require installation on
individual users’ machines. This feature is particularly advantageous for conducting user studies.</p>
      <p>The system features a modular architecture, in which explanations and plans are provided by
individual services. The back-end and the explanation and planning services implement REST APIs,
and communication between the back-end and a service is asynchronous. The services themselves
maintain a queue for incoming jobs and schedule them based on resource availability. This enables
the integration of additional planner and explanation services, and facilitates the expansion of the
platform. The supported questions are also modular, so that they can be extended depending on the
explanation service selected. The presentation of explanations is not easily generalizable. However, the
template-based approach can be easily extended, and the prompts of the LLM-based translators are
configurable to accommodate different formal explanation languages.</p>
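      <p>The asynchronous service pattern described above can be sketched with a worker queue: the back-end submits a job, immediately gets an id back, and collects the result once the service has scheduled and finished it. This is a generic sketch of the pattern, not the IPEXCO service code.</p>
      <preformat>
```python
# Minimal sketch of a service-side job queue for planner/explanation jobs;
# a generic pattern, assumed rather than taken from the IPEXCO services.
import queue
import threading
import uuid

jobs, results = queue.Queue(), {}

def submit(payload) -> str:
    """REST-handler body: enqueue the job and return its id without blocking."""
    job_id = str(uuid.uuid4())
    jobs.put((job_id, payload))
    return job_id

def worker(compute):
    """Schedule queued jobs one at a time as resources allow."""
    while True:
        job_id, payload = jobs.get()
        results[job_id] = compute(payload)  # e.g. run planner / MUS search
        jobs.task_done()

threading.Thread(target=worker, args=(lambda p: f"plan for {p}",),
                 daemon=True).start()
job_id = submit("logistics-instance-3")
jobs.join()  # wait for completion, only so this demo can read the result
```
      </preformat>
      <p>Because submission returns immediately, additional planner or explanation services can be plugged in behind the same interface without blocking the back-end.</p>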
      <p>
        The included planner service is based on Fast Downward (FD) [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] extended with a compilation
approach to support temporal goals [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. The explanations service computes MUS and MCS using
the algorithms introduced in [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] implemented in FD. The LLM implementation, based on prompting,
supports the use of OpenAI models through the Chat Completions API with Structured
Outputs. This restriction enforces a strict format for all LLM-generated text, ensuring that the user
interaction follows the specification in Figure 4. The UI provides interfaces to set the model name (e. g.
gpt-4o-mini), temperature, maximum completion tokens, prompts, and output formats.
      </p>
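      <p>A Structured Outputs request of the kind described above might be configured as follows. Only the request payload is built here (no API call is made); the schema fields for the classifier output are illustrative assumptions.</p>
      <preformat>
```python
# Sketch of a Chat Completions request with Structured Outputs for the
# question-type classifier; the schema fields are illustrative assumptions.
schema = {
    "type": "object",
    "properties": {
        "question_type": {
            "type": "string",
            "enum": ["DIRECT", "FOLLOW-UP", "EFQUERY-GT", "EFQUERY-noGT"],
        },
        "formal_query": {"type": "string"},
    },
    "required": ["question_type", "formal_query"],
    "additionalProperties": False,
}

request = {
    "model": "gpt-4o-mini",
    "temperature": 0,
    "max_completion_tokens": 300,
    "messages": [{"role": "user", "content": "Why can't I achieve goal X?"}],
    # Structured Outputs: the model must reply with JSON matching the schema.
    "response_format": {
        "type": "json_schema",
        "json_schema": {"name": "classification", "strict": True,
                        "schema": schema},
    },
}
```
      </preformat>
      <p>Constraining the reply to this schema is what guarantees that every classifier answer carries a valid question type and a formal query the explanation service can consume.</p>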
    </sec>
    <sec id="sec-7">
      <title>7. Conclusion &amp; Future Work</title>
      <p>We have introduced IPEXCO, an online platform that facilitates iterative planning with explanations.
The platform offers interfaces to goal-conflict explanations, leveraging LLMs to enhance interactivity. The
support for user studies enables the evaluation of the explanations and of how they are communicated.</p>
      <p>
        Moving forward, we aim to incorporate additional explanation methods from planning [
        <xref ref-type="bibr" rid="ref19 ref20">19, 20</xref>
        ],
enhancing their accessibility and facilitating a comparative and evaluative analysis of their applicability,
advantages, and disadvantages.
      </p>
    </sec>
    <sec id="sec-8">
      <title>Acknowledgments</title>
      <p>This work was funded by the European Union’s Horizon Europe Research and Innovation program
under the grant agreement TUPLES No 101070149, and was supported by the Artificial and Natural
Intelligence Toulouse Institute (ANITI), funded by the French Investing for the Future PIA3 program
under the Grant agreement ANR-19-PI3A-000.</p>
    </sec>
    <sec id="sec-9">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the author(s) used DeepL and ChatGPT in order to: Grammar and
spelling check and improve writing style. After using these tool(s)/service(s), the author(s) reviewed
and edited the content as needed and take(s) full responsibility for the publication’s content.</p>
      <p>The presented platform uses ChatGPT to implement the interactive natural language explanations.
Examples generated in this work are clearly labelled as such.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>D.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <article-title>Planning as an iterative process</article-title>
          ,
          <source>in: Proceedings of AAAI Conference on Artificial Intelligence</source>
          , volume
          <volume>26</volume>
          ,
          <year>2012</year>
          , pp.
          <fpage>2180</fpage>
          -
          <lpage>2185</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R.</given-names>
            <surname>Eifler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Cashmore</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hofmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Magazzeni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Steinmetz</surname>
          </string-name>
          ,
          <article-title>A new approach to plan-space explanation: Analyzing plan-property dependencies in oversubscription planning</article-title>
          ,
          <source>in: Proceedings of the AAAI Conference on Artificial Intelligence</source>
          , volume
          <volume>34</volume>
          ,
          <year>2020</year>
          , pp.
          <fpage>9818</fpage>
          -
          <lpage>9826</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>G.</given-names>
            <surname>Fouilhé</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Eifler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Thiebaux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Asher</surname>
          </string-name>
          ,
          <article-title>Conversational goal-conflict explanations in planning via multi-agent LLMs</article-title>
          ,
          <source>in: AAAI 2025 Workshop LM4Plan</source>
          ,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Belov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Lynce</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Marques-Silva</surname>
          </string-name>
          ,
          <article-title>Towards efficient MUS extraction</article-title>
          ,
          <source>AI Commun</source>
          .
          <volume>25</volume>
          (
          <year>2012</year>
          )
          <fpage>97</fpage>
          -
          <lpage>116</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>E.</given-names>
            <surname>Gamba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Bogaerts</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Guns</surname>
          </string-name>
          ,
          <article-title>Efficiently explaining CSPs with unsatisfiable subset optimization</article-title>
          ,
          <source>Journal of Artificial Intelligence Research</source>
          <volume>78</volume>
          (
          <year>2023</year>
          )
          <fpage>709</fpage>
          -
          <lpage>746</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>T.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <article-title>Explanation in artificial intelligence: Insights from the social sciences</article-title>
          ,
          <source>Artificial Intelligence</source>
          <volume>267</volume>
          (
          <year>2019</year>
          )
          <fpage>1</fpage>
          -
          <lpage>38</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>G.</given-names>
            <surname>De Giacomo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>De Masellis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Montali</surname>
          </string-name>
          ,
          <article-title>Reasoning on LTL on finite traces: Insensitivity to infiniteness</article-title>
          ,
          <source>in: Proceedings of the AAAI Conference on Artificial Intelligence</source>
          , volume
          <volume>28</volume>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>F.</given-names>
            <surname>Fuggitti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Chakraborti</surname>
          </string-name>
          ,
          <article-title>NL2LTL - a Python package for converting natural language (NL) instructions to linear temporal logic (LTL) formulas</article-title>
          ,
          <source>in: Proceedings of the AAAI Conference on Artificial Intelligence</source>
          , volume
          <volume>37</volume>
          ,
          <year>2023</year>
          , pp.
          <fpage>16428</fpage>
          -
          <lpage>16430</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>J. X.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Schornstein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Liang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Idrees</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Tellex</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Shah</surname>
          </string-name>
          ,
          <article-title>Lang2LTL: Translating natural language commands to temporal specification with large language models</article-title>
          ,
          <source>in: Workshop on Language and Robotics at CoRL</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M.</given-names>
            <surname>Cosler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Hahn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Mendoza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Schmitt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Trippel</surname>
          </string-name>
          ,
          <article-title>nl2spec: Interactively translating unstructured natural language to temporal logics with large language models</article-title>
          ,
          <source>in: International Conference on Computer Aided Verification</source>
          , Springer,
          <year>2023</year>
          , pp.
          <fpage>383</fpage>
          -
          <lpage>396</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>G.</given-names>
            <surname>Vilone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Longo</surname>
          </string-name>
          ,
          <article-title>A novel human-centred evaluation approach and an argument-based method for explainable artificial intelligence</article-title>
          ,
          <source>in: International Conference on Artificial Intelligence Applications and Innovations</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>447</fpage>
          -
          <lpage>460</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>D.</given-names>
            <surname>Slack</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Krishna</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Lakkaraju</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <article-title>Explaining machine learning models with interactive natural language conversations using TalkToModel</article-title>
          ,
          <source>Nature Machine Intelligence</source>
          <volume>5</volume>
          (
          <year>2023</year>
          )
          <fpage>873</fpage>
          -
          <lpage>883</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>V. B.</given-names>
            <surname>Nguyen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Schlötterer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Seifert</surname>
          </string-name>
          ,
          <article-title>From black boxes to conversations: Incorporating XAI in a conversational agent</article-title>
          ,
          <source>in: World Conference on Explainable Artificial Intelligence</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>71</fpage>
          -
          <lpage>96</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>A.</given-names>
            <surname>Castelnovo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Depalmas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Mercorio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Mombelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Potertì</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Serino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Seveso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Sorrentino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Viola</surname>
          </string-name>
          ,
          <article-title>Augmenting XAI with LLMs: A case study in banking marketing recommendation</article-title>
          ,
          <source>in: World Conference on Explainable Artificial Intelligence</source>
          ,
          <year>2024</year>
          , pp.
          <fpage>211</fpage>
          -
          <lpage>229</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>N.</given-names>
            <surname>Feldhus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Anikina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chopra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Oguz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Möller</surname>
          </string-name>
          ,
          <article-title>InterroLang: Exploring NLP models and datasets through dialogue-based explanations</article-title>
          ,
          <source>in: Findings of the Association for Computational Linguistics</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>5399</fpage>
          -
          <lpage>5421</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>S. L.</given-names>
            <surname>Vasileiou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Yeoh</surname>
          </string-name>
          ,
          <article-title>TRACE-cs: A synergistic approach to explainable course scheduling using LLMs and logic</article-title>
          ,
          <source>in: Proceedings of the AAAI Conference on Artificial Intelligence</source>
          , volume
          <volume>39</volume>
          ,
          <year>2025</year>
          , pp.
          <fpage>29706</fpage>
          -
          <lpage>29708</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>M.</given-names>
            <surname>Helmert</surname>
          </string-name>
          ,
          <article-title>The Fast Downward planning system</article-title>
          ,
          <source>Journal of Artificial Intelligence Research</source>
          <volume>26</volume>
          (
          <year>2006</year>
          )
          <fpage>191</fpage>
          -
          <lpage>246</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>R.</given-names>
            <surname>Eifler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Steinmetz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Torralba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hofmann</surname>
          </string-name>
          ,
          <article-title>Plan-space explanation via plan-property dependencies: Faster algorithms &amp; more powerful properties</article-title>
          ,
          <source>in: Proceedings of the 29th International Joint Conference on Artificial Intelligence</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>4091</fpage>
          -
          <lpage>4097</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>B.</given-names>
            <surname>Krarup</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Krivic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Magazzeni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Long</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Cashmore</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. E.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <article-title>Contrastive explanations of plans through model restrictions</article-title>
          ,
          <source>Journal of Artificial Intelligence Research</source>
          <volume>72</volume>
          (
          <year>2021</year>
          )
          <fpage>533</fpage>
          -
          <lpage>612</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>S.</given-names>
            <surname>Sreedharan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kulkarni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kambhampati</surname>
          </string-name>
          ,
          <article-title>Explanation as model reconciliation</article-title>
          ,
          <source>in: Explainable Human-AI Interaction: A Planning Perspective</source>
          , Springer,
          <year>2022</year>
          , pp.
          <fpage>59</fpage>
          -
          <lpage>80</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>