<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Answer Set Programming and Large Language Models interaction with YAML: Preliminary Report</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mario Alviano</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lorenzo Grillo</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>DeMaCS, University of Calabria</institution>
          ,
          <addr-line>87036 Rende (CS)</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Answer Set Programming (ASP) and Large Language Models (LLMs) have emerged as powerful tools in Artificial Intelligence, each offering unique capabilities in knowledge representation and natural language understanding, respectively. In this paper, we combine the strengths of the two paradigms to couple the reasoning capabilities of ASP with the natural language processing capabilities of LLMs. We introduce a YAML-based format for specifying prompts, allowing users to encode domain-specific background knowledge. Input prompts are processed by LLMs to generate relational facts, which are then processed by ASP rules for knowledge reasoning, and finally the ASP output is mapped back to natural language by LLMs, so as to provide a captivating user experience.</p>
      </abstract>
      <kwd-group>
        <kwd>Answer Set Programming</kwd>
        <kwd>Large Language Models</kwd>
        <kwd>Knowledge Representation</kwd>
        <kwd>Natural Language Generation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Large Language Models (LLMs) and Answer Set Programming (ASP) are two distinct yet complementary paradigms that have emerged in Artificial Intelligence (AI). LLMs such as GPT [1], PaLM [2] and LLaMa [3] have revolutionized natural language processing (NLP) by achieving unprecedented levels of fluency and comprehension in textual data. Conversely, ASP [4, 5], a declarative programming paradigm rooted in logic programming under answer set semantics [6], excels in knowledge representation and logical reasoning, making it a cornerstone in AI systems requiring robust inference capabilities. Individually, LLMs and ASP offer distinct advantages in their respective domains. LLMs effortlessly handle several NLP tasks [7, 8], including language generation, summarization, and sentiment analysis, leveraging the power of deep learning and vast pre-trained language representations. Meanwhile, ASP empowers AI systems with the ability to reason over complex knowledge bases, derive logical conclusions, and solve intricate combinatorial problems, thereby facilitating decision-making in domains ranging from planning and scheduling [9, 10] to diagnosis and configuration [11, 12]. Recognizing the complementary nature of LLMs’ linguistic prowess and ASP’s reasoning capabilities, this paper proposes an approach that harnesses the synergies between these two paradigms, in the spirit of other recent works in the literature [13, 14]. Our goal is to develop a cohesive system that seamlessly integrates natural language understanding and logical inference, thereby enabling AI applications to navigate the intricate interplay between textual data and logical structures.</p>
      <p>In this paper, we present a comprehensive framework for combining LLMs and ASP, leveraging
the strengths of each paradigm to address the limitations of the other. We introduce a methodology
for encoding domain-specific knowledge into input prompts using a YAML-based format, enabling
LLMs to generate relational facts that serve as input to ASP for reasoning. Subsequently, the reasoned
output from ASP is mapped back to natural language by LLMs, thereby facilitating a captivating user
experience and enhancing the interpretability of the results. The YAML format serves as a flexible
and intuitive mechanism for specifying the various components essential to our integrated LLM-ASP
system. Preprocessing prompts within the YAML specification provide instructions to the LLM on how
to interpret and map input text into ASP facts. These prompts enable users to encode domain-specific
knowledge and contextual information, guiding the generation of relational facts that serve as input to
the ASP reasoning process. Moreover, the YAML format allows users to define a separate knowledge
base, represented as an ASP program, which encapsulates logical rules, constraints, and background
knowledge relevant to the task at hand. By decoupling the input prompts from the knowledge base, our
approach facilitates modularity and reusability, empowering users to seamlessly adapt the system to
diverse domains and problem instances. Additionally, postprocessing instructions specified in the YAML
format provide guidance to the LLM on how to map atoms in the computed answer set generated by
the ASP reasoning process into human-readable text, thereby enhancing the interpretability and usability
of the system’s results. Overall, the YAML format serves as a comprehensive and versatile specification
framework, enabling users to orchestrate the entire workflow of our LLM-ASP integration seamlessly.</p>
      <p>Our prototype system orchestrates a seamless interaction between an LLM and an ASP solver. Upon
receiving an input text, the system interfaces with the LLM, providing predefined prompts, and enriching
the prompts specified in the YAML format with additional predefined sentences. Subsequently, the
responses generated by the LLM are processed to extract factual information, which serves as input
to the ASP solver. The ASP solver then executes logical inference over the combined input facts and
the specified knowledge base, computing an answer set that encapsulates reasoned conclusions and
insights. To bridge the gap between logical output and natural language comprehension, the system
once again engages with the LLM, providing predefined prompts and enriched specifications to map the
computed answer set to coherent natural language sentences. Finally, the collected response sentences
are summarized by the LLM, providing users with a concise and informative overview of the computed
insights and conclusions.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Background</title>
      <sec id="sec-2-1">
        <title>2.1. Large Language Models</title>
        <p>Large Language Models (LLMs) are sophisticated artificial intelligence systems designed to understand
and generate human-like text. These models are typically based on deep learning architectures, such as
Transformers, and are trained on vast amounts of text data to learn complex patterns and structures of
language. In this article, LLMs are used as black box operators on text (functions that take text as input and produce text as output). At each interaction with an LLM, the generated text is influenced by all previously processed text, and randomness is involved in the process. The input text is called the prompt, and the output text is called the generated text or response.</p>
        <p>Example 1. Let us consider the following prompt:</p>
        <p>Encode as Datalog facts the following sentences: Tonight I want to go to eat some pizza
with Marco and Alessio. Marco really like the pizza with onions as toppings.</p>
        <p>A response produced by Google Gemini is reported below. It is a very good starting point, but the LLM
must be instructed on a specific format to use in encoding facts.</p>
        <p>Here’s the encoding of the sentence as Datalog facts:
wants_pizza(you). pizza_topping_preference(marco, onions).
dinner_companions(you, marco). dinner_companions(you, alessio).
This translates the information into facts: [bullet list omitted]
■</p>
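        <p>The extraction of facts from such a response can be automated with a simple pattern matcher. The sketch below is a hypothetical helper (not part of the paper's prototype) that collects every string of the form predicate(terms). from free-form LLM output:</p>

```python
import re

# Datalog-style fact: a lowercase predicate (letters, digits, underscores)
# applied to a comma-separated argument list, terminated by a period.
FACT = re.compile(r"\b([a-z_][a-z0-9_]*)\(([^()]*)\)\.")

def extract_facts(text):
    """Return the facts found in `text`, normalized to `name(args).`"""
    facts = []
    for name, args in FACT.findall(text):
        args = ",".join(a.strip() for a in args.split(","))
        facts.append(name + "(" + args + ").")
    return facts

response = """Here's the encoding of the sentence as Datalog facts:
wants_pizza(you). pizza_topping_preference(marco, onions).
dinner_companions(you, marco). dinner_companions(you, alessio).
"""
print(extract_facts(response))
```

        <p>Everything outside the predicate(terms). pattern, such as the surrounding explanation, is simply ignored.</p>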
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Answer Set Programming</title>
        <p>
          All sets and sequences considered in this paper are finite. Let P, C, V be fixed nonempty sets of predicate names, constants and variables. Predicates are associated with an arity, a non-negative integer. A term is any element in C ∪ V. An atom is of the form p(t̄), where p ∈ P, and t̄ is a possibly empty sequence of terms. A literal is an atom possibly preceded by the default negation symbol not; they are referred to as positive and negative literals. An aggregate is of the form
#sum{t, t̄′ : p(t̄)} ⊙ b (1)
where ⊙ ∈ { &lt;, ≤, ≥, &gt;, =, ≠ } is a binary comparison operator, p ∈ P, t̄ and t̄′ are possibly empty sequences of terms, and t and b are terms. Let #count{t̄′ : p(t̄)} ⊙ b be syntactic sugar for #sum{1, t̄′ : p(t̄)} ⊙ b. A choice is of the form
b1 ≤ {atoms} ≤ b2 (2)
where atoms is a possibly empty sequence of atoms, and b1, b2 are terms. Let ⊥ be syntactic sugar for 1 ≤ {} ≤ 1. A rule is of the form
head :– body. (3)
where head is an atom or a choice, and body is a possibly empty sequence of literals and aggregates. (Symbol :– is omitted if body is empty. The head is usually omitted if it is ⊥, and the rule is called a constraint.) For a rule r, let H(r) denote the atom or choice in the head of r; let Σ(r), B+(r) and B−(r) denote the sets of aggregates, positive and negative literals in the body of r; let B(r) denote the set Σ(r) ∪ B+(r) ∪ B−(r).
        </p>
        <p>
          Example 2. Let us consider the following rules:
index(1). index(2). index(3). succ(1,2). succ(2,3). empty(2,2).
grid(X,Y) :- index(X), index(Y).
0 &lt;= {assign(X,Y)} &lt;= 1 :- grid(X,Y), not empty(X,Y).
:- #count{A,B : assign(A,B)} != 5.
:- assign(X,Y), succ(X,X'), assign(X',Y).
:- assign(X,Y), succ(Y,Y'), assign(X,Y').
        </p>
        <p>The first line above comprises rules with atomic heads and empty bodies (also called facts). After that there is a rule with atomic head and nonempty body (also called a definition), followed by a choice rule, and three constraints. If r is the choice rule, then H(r) = 0 &lt;= {assign(X,Y)} &lt;= 1, B+(r) = {grid(X,Y)}, B−(r) = {not empty(X,Y)}, and Σ(r) = ∅. ■</p>
        <p>
          A variable X occurring in B+(r) is a global variable. Other variables occurring among the terms t̄′ of some aggregate in Σ(r) of the form (1) are local variables. Any other variable occurring in r is an unsafe variable. A safe rule is a rule with no unsafe variables. A program Π is a set of safe rules. A substitution σ is a partial function from variables to constants; the application of σ to an expression E is denoted by Eσ. Let instantiate(Π) be the program obtained from rules of Π by substituting global variables with constants in C, in all possible ways; note that local variables are still present in instantiate(Π). The Herbrand base of Π, denoted base(Π), is the set of ground atoms (i.e., atoms with no variables) occurring in instantiate(Π).
        </p>
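        <p>The substitution of global variables by constants can be pictured with a small sketch, assuming a rule is represented as a string template with {X}-style placeholders (illustrative names only, not the notation of an actual grounder):</p>

```python
from itertools import product

# Naive instantiation: substitute the global variables of a rule with
# constants from C in all possible ways ({X}-style placeholders stand
# for global variables; names here are illustrative only).
C = ["1", "2", "3"]

def instantiate(rule_template, variables):
    for values in product(C, repeat=len(variables)):
        yield rule_template.format(**dict(zip(variables, values)))

rules = list(instantiate("grid({X},{Y}) :- index({X}), index({Y}).", ["X", "Y"]))
print(len(rules))
print(rules[0])
```

        <p>With |C| = 3 and two global variables this yields 9 ground rules, the first being grid(1,1) :- index(1), index(1), as in Example 3 below.</p>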
        <p>
          Example 3. Variables A, B are local, and all other variables are global. Let Π be the program comprising all rules in Example 2 (which are safe). If C = {1, 2, 3}, then instantiate(Π) comprises, among others, the following rules:
grid(1,1) :- index(1), index(1).
0 &lt;= {assign(1,1)} &lt;= 1 :- grid(1,1), not empty(1,1).
        </p>
        <p>:- #count{A,B : assign(A,B)} != 5.</p>
        <p>Note that the local variables A, B are still present in the last rule above. ■</p>
        <p>
          An interpretation is a set of ground atoms. For an interpretation I, relation I |= · is defined as follows: for a ground atom p(t̄), I |= p(t̄) if p(t̄) ∈ I, and I |= not p(t̄) if p(t̄) ∉ I; for an aggregate A of the form (1), the aggregate set of A w.r.t. I, denoted aggset(A, I), is {⟨tσ, t̄′σ⟩ | p(t̄)σ ∈ I, for some substitution σ}, and I |= A if (∑⟨t,t̄′⟩ ∈ aggset(A,I) t) ⊙ b is a true expression over integers; for a choice C of the form (2), I |= C if b1 ≤ |I ∩ atoms| ≤ b2 is a true expression over integers; for a rule r with no global variables, I |= B(r) if I |= l for all l ∈ B(r), and I |= r if I |= H(r) whenever I |= B(r); for a program Π, I |= Π if I |= r for all r ∈ instantiate(Π).
        </p>
        <p>For a rule r of the form (3) and an interpretation I, let expand(r, I) be the set {p(t̄) :– body. | p(t̄) ∈ I occurs in H(r)}. The reduct of Π w.r.t. I is the program comprising the expanded rules of instantiate(Π) whose body is true w.r.t. I, that is, reduct(Π, I) := ⋃ r ∈ instantiate(Π), I |= B(r) expand(r, I). An answer set of Π is an interpretation I such that I |= Π and no J ⊂ I satisfies J |= reduct(Π, I).</p>
        <p>Example 4 (Continuing Example 3). Program Π has two answer sets, which comprise the following instances of predicate assign/2: ■</p>
        <p>The language of ASP supports several other constructs, among them binary built-in relations (i.e., &lt;, &lt;=, &gt;=, &gt;, ==, !=), which are interpreted naturally.</p>
        <sec id="sec-2-2-1">
          <title>2.3. YAML</title>
          <p>YAML (YAML Ain’t Markup Language; https://yaml.org/spec/1.2.2/) is a human-readable data serialization format commonly used in software development for configuration files, data storage, and data interchange between different programming languages. YAML uses indentation to denote nesting and relies on simple syntax rules, such as key-value pairs and lists, to represent structured data, and it is often preferred for its simplicity, readability, and flexibility compared to other serialization formats like JSON and XML. In this article, we focus on the following restricted fragment: A scalar is any number or string (possibly quoted). A block sequence is a sequence of entries, each one starting with a dash followed by a space. A mapping is a sequence of key-value pairs, each pair using a colon and space as separator, where keys and values are scalars. A scalar can be written in block notation using the prefix | (if not a key). Lines starting with # are comments.</p>
          <p>Example 5. Below is a YAML document:
name: Lorenzo
degrees:
- Bachelor
short bio: |
  I'm Lorenzo...
  I'm a student at UNICAL...
It encodes a mapping with keys name, degrees and short bio. Key name is associated with the scalar Lorenzo. Key degrees is associated with the list [Bachelor]. Key short bio is associated with a scalar in block notation. ■</p>
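          <p>A parser for exactly this restricted fragment fits in a few lines of Python. The sketch below is for illustration only, assuming top-level keys and the constructs listed above; it is not a replacement for a full YAML library:</p>

```python
# Minimal parser for the restricted YAML fragment: a top-level mapping whose
# values are scalars, block sequences, or | block scalars (a sketch, not a
# full YAML implementation).
def parse(text):
    lines = [l for l in text.splitlines()
             if l.strip() and not l.lstrip().startswith("#")]
    doc, i = {}, 0
    while i != len(lines):
        key, _, rest = lines[i].partition(":")
        rest = rest.strip()
        if rest == "|":  # block scalar: consume the following indented lines
            i += 1
            block = []
            while i != len(lines) and lines[i].startswith(" "):
                block.append(lines[i].strip())
                i += 1
            doc[key.strip()] = "\n".join(block)
        elif rest == "":  # block sequence: consume "- item" entries
            i += 1
            items = []
            while i != len(lines) and lines[i].lstrip().startswith("- "):
                items.append(lines[i].lstrip()[2:].strip())
                i += 1
            doc[key.strip()] = items
        else:             # plain scalar
            doc[key.strip()] = rest
            i += 1
    return doc

doc = parse("""name: Lorenzo
degrees:
- Bachelor
short bio: |
  I'm Lorenzo...
  I'm a student at UNICAL...
""")
print(doc["name"], doc["degrees"])
```

          <p>Applied to the document of Example 5, it yields the scalar Lorenzo for name, the list [Bachelor] for degrees, and a two-line block scalar for short bio.</p>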
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. YAML format</title>
      <p>The interaction between LLMs and ASP is achieved by means of a YAML specification defined in this section. The specification expects a mapping with keys preprocessing, background knowledge, and postprocessing. The values associated with preprocessing and postprocessing are mappings where keys are either atoms or the special value _ (used for providing a context), and values are scalars. The value associated with background knowledge is an ASP program. Given a YAML specification Y, let pre(α) be the value associated with α in the preprocessing mapping, where α is either an atom or _. Similarly, let post(α) be the value associated with α in the postprocessing mapping, where α is either an atom or _. Finally, let kb be the ASP program in background knowledge.
Example 6. Below is an example about catering suggestions.
preprocessing:
- _: The user provides a request to obtain catering suggestions. The user
  can mention a day, other persons, and their cuisine preferences.
- person("who"): List all the persons mentioned including me if indirectly
  included.
- cuisine_preferences("who", "country"): For each person, list any
  restaurant preferences.
- want_food("who", "what"): For each person, list what they want to eat.
background knowledge: |
  can_go_together(X,Y,Z) :- person(X), person(Y), X &lt; Y,
      want_food(X,Z), want_food(Y,Z).
  can_go_together(X,Y,Z) :- person(X), person(Y), X &lt; Y,
      cuisine_preferences(X,Z), cuisine_preferences(Y,Z).
  #show can_go_together/3.
postprocessing:
- _: Explain the facts in a human readable way, as a paragraph.
- can_go_together("person 1", "person 2", "cuisine preference"): Say that
  "person 1" can go with "person 2" to eat "cuisine preference".
The preprocessing aims at extracting data about cuisine preferences of a group of persons. These data are combined with the knowledge base to identify pairs of persons with compatible preferences. Finally, the postprocessing prompts aim at producing a paragraph reporting the identified pairs of persons with compatible preferences. ■</p>
      <p>A YAML specification Y and an input text T are processed as follows:</p>
      <p>P1. The set F of facts and the set R of responses are initially empty, and the LLM is invoked with the prompt
You are a Natural Language to Datalog translator. To translate your input to Datalog, you will be asked a sequence of questions. The answers are inside the user input provided with [USER_INPUT]input[/USER_INPUT] and the format is provided with [ANSWER_FORMAT]predicate(terms).[/ANSWER_FORMAT]. Predicate is a lowercase string (possibly including underscores). Terms is a comma-separated list of either double quoted strings or integers. Be sure to control the number of terms in each answer! An answer MUST NOT be answered if it is not present in the user input. Remember these instructions and don't say anything!</p>
      <p>P2. If pre(_) is defined, the LLM is invoked with the prompt
Here is some context that you MUST analyze and remember.
pre(_)
Remember this context and don't say anything!</p>
      <p>P3. For each atom α such that pre(α) is defined, the LLM is invoked with the prompt
[USER_INPUT]T[/USER_INPUT]
pre(α)
[ANSWER_FORMAT]α.[/ANSWER_FORMAT]
Facts in the response are collected in F. Everything else is ignored.</p>
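      <p>For instance, the P3 prompt can be assembled by plain string interpolation. The function and variable names below are illustrative, not the prototype's actual API:</p>

```python
# Assemble the P3 prompt for one preprocessing entry (illustrative names).
def p3_prompt(user_input, atom, instruction):
    return ("[USER_INPUT]" + user_input + "[/USER_INPUT]\n"
            + instruction + "\n"
            + "[ANSWER_FORMAT]" + atom + ".[/ANSWER_FORMAT]")

prompt = p3_prompt(
    "Tonight I want to go to eat some pizza with Marco and Alessio.",
    'person("who")',
    "List all the persons mentioned including me if indirectly included.")
print(prompt)
```

      <p>One such prompt is produced for every atom with a preprocessing entry, so the LLM is queried once per entry.</p>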
      <sec id="sec-3-1">
        <p>P4. An answer set of kb ∪ {f. | f ∈ F} is searched, say A. If an answer set does not exist, the process terminates with a failure.</p>
        <p>P5. The LLM is invoked with the prompt
You are now a Datalog to Natural Language translator.
You will be given relational facts and mapping instructions.
Relational facts are given in the form [FACTS]atoms[/FACTS].
Remember these instructions and don't say anything!</p>
        <p>P6. If post(_) is defined, the LLM is invoked with the prompt
Here is some context that you MUST analyze and remember.
post(_)
Remember this context and don't say anything!</p>
        <p>P7. For each atom p(t̄) such that post(p(t̄)) is defined, the LLM is invoked with the prompt
[FACTS]{p(t̄′) | p(t̄′) ∈ A}[/FACTS]
Each fact matching p(t̄) must be interpreted as follows: post(p(t̄))
Responses are collected in R.</p>
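        <p>Assembling the P7 prompt from an answer set can be sketched similarly, again with illustrative names rather than the prototype's actual code:</p>

```python
# Serialize the atoms of the answer set that share a predicate with the
# postprocessing key into a [FACTS] block (illustrative names only).
def p7_prompt(answer_set, key, instruction):
    predicate = key.split("(")[0]
    facts = " ".join(a + "." for a in answer_set if a.startswith(predicate + "("))
    return ("[FACTS]" + facts + "[/FACTS]\n"
            + "Each fact matching " + key + " must be interpreted as follows: "
            + instruction)

answer_set = ['can_go_together("me", "marco", "pizza")',
              'can_go_together("me", "alessio", "pizza")']
prompt = p7_prompt(
    answer_set,
    'can_go_together("person 1", "person 2", "cuisine preference")',
    'Say that "person 1" can go with "person 2" to eat "cuisine preference".')
print(prompt)
```
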
        <p>P8. The LLM is invoked with the prompt
Summarize the following responses:
R
The response is provided in output.</p>
        <p>Example 7. Let Y be the specification given in Example 6. Let T be the following text:
Tonight I want to go to eat some pizza with Marco and Alessio. Marco really like the pizza with onions as toppings.</p>
        <p>The LLM is invoked with the initial fixed prompt of P1, and then with the prompt of P2:
Here is some context that you MUST analyze and remember.</p>
        <p>The user provides a request to obtain catering suggestions. The user can mention a day,
other persons, and their cuisine preferences.</p>
        <p>Remember this context and don’t say anything!
After that, the LLM is invoked seven times with the prompt of P3 to populate the set F. For example,
[USER_INPUT]Tonight I want to go to eat some pizza with Marco and Alessio. Marco
really like the pizza with onions as toppings.[/USER_INPUT]
List all the persons mentioned including me if indirectly included.</p>
        <p>[ANSWER_FORMAT]person("who").[/ANSWER_FORMAT]
The LLM may provide the response
[ANSWER_FORMAT]person("me").[/ANSWER_FORMAT]
[ANSWER_FORMAT]person("marco").[/ANSWER_FORMAT]
[ANSWER_FORMAT]person("alessio").[/ANSWER_FORMAT]
from which the following facts are extracted and added to F:</p>
        <p>person("me"). person("marco"). person("alessio").</p>
        <p>Once all facts are collected, the knowledge base is used to search an answer set (P4), say one containing
the following atoms:
can_go_together("me", "marco", "pizza").
can_go_together("me", "alessio", "pizza").</p>
        <p>can_go_together("marco", "alessio", "pizza").</p>
        <p>The LLM is now invoked with the prompts of P5–P7. In particular, the prompt of P7 is the following:
[FACTS]can_go_together("me", "marco", "pizza"). can_go_together("me", "alessio", "pizza").
can_go_together("marco", "alessio", "pizza"). [/FACTS]
Each fact matching can_go_together("person 1", "person 2", "cuisine preference") must be
interpreted as follows: Say that "person 1" can go with "person 2" to eat "cuisine preference".
The response, say</p>
        <p>All three of you, including yourself, Marco and Alessio, enjoy pizza. This means everyone
would be happy to go on a pizza outing together. It’s a perfect situation for a group pizza party!
is added to R. Finally, the LLM is invoked with the prompt of P8:</p>
is added to . Finally, the LLM is invoked with the prompt of P8:</p>
        <p>Summarize the following responses: All three of you, including yourself, Marco and Alessio,
enjoy pizza. This means everyone would be happy to go on a pizza outing together. It’s a
perfect situation for a group pizza party!
In this paper, we have introduced an approach for combining Large Language Models (LLMs) and Answer
Set Programming (ASP) to harness their complementary strengths in natural language understanding
and logical reasoning. Our prototype system (https://github.com/Xiro28/LLMASP) is written in Python,
and it is powered by the LLM TheBloke/Llama-2-13B-chat-GGUF from HuggingSpace and the clingo
Python API [15]. By providing predefined prompts and enriching specifications with domain-specific
knowledge, our approach enables users to tailor the system to diverse problem domains and applications,
enhancing its adaptability and versatility. Future research directions encompass the evaluation of the
quality of the answers provided by our integrated LLM-ASP system, and exploring with diferent
prompts to improve the overall quality of the system.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Acknowledgments</title>
      <p>This work was supported by Italian Ministry of University and Research (MUR) under PRIN project PRODE
“Probabilistic declarative process mining”, CUP H53D23003420006 under PNRR project FAIR “Future AI Research”,
CUP H23C22000860006, under PNRR project Tech4You “Technologies for climate change adaptation and quality
of life improvement”, CUP H23C22000370006, and under PNRR project SERICS “SEcurity and RIghts in the
CyberSpace”, CUP H73C22000880001; by Italian Ministry of Health (MSAL) under POS projects CAL.HUB.RIA
(CUP H53C22000800006) and RADIOAMICA (CUP H53C22000650006); by Italian Ministry of Enterprises and Made
in Italy under project STROKE 5.0 (CUP B29J23000430005); and by the LAIA lab (part of the SILA labs). Alviano
is member of Gruppo Nazionale Calcolo Scientifico-Istituto Nazionale di Alta Matematica (GNCS-INdAM).
The response, say
is provided in output.</p>
    </sec>
    <sec id="sec-5">
      <title>4. Conclusion</title>
      <p>Based on the analysis of the facts, it appears you, Marco, and Alessio all enjoy pizza, making
a group pizza party a perfect option for your outing tonight.
■</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] T. B. Brown, et al., Language models are few-shot learners, CoRR abs/2005.14165 (2020). URL: https://arxiv.org/abs/2005.14165. arXiv:2005.14165.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] A. Chowdhery, et al., Palm: Scaling language modeling with pathways, J. Mach. Learn. Res. 24 (2023) 240:1-240:113. URL: http://jmlr.org/papers/v24/22-1144.html.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] H. Touvron, et al., Llama: Open and efficient foundation language models, CoRR abs/2302.13971 (2023). doi:10.48550/ARXIV.2302.13971. arXiv:2302.13971.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] V. Marek, M. Truszczyński, Stable models and an alternative logic programming paradigm, in: The Logic Programming Paradigm: a 25-year Perspective, 1999, pp. 375-398. doi:10.1007/978-3-642-60085-2_17.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] I. Niemelä, Logic programming with stable model semantics as a constraint programming paradigm, Annals of Mathematics and Artificial Intelligence 25 (1999) 241-273. doi:10.1023/A:1018930122475.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] M. Gelfond, V. Lifschitz, Logic programs with classical negation, in: D. Warren, P. Szeredi (Eds.), Logic Programming: Proc. of the Seventh International Conference, 1990, pp. 579-597.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] H. Jin, Y. Zhang, D. Meng, J. Wang, J. Tan, A comprehensive survey on process-oriented automatic text summarization with exploration of llm-based methods, CoRR abs/2403.02901 (2024). doi:10.48550/ARXIV.2403.02901. arXiv:2403.02901.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] W. Zhang, X. Li, Y. Deng, L. Bing, W. Lam, A survey on aspect-based sentiment analysis: Tasks, methods, and challenges, IEEE Trans. Knowl. Data Eng. 35 (2023) 11019-11038. doi:10.1109/TKDE.2022.3230975.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] P. Cappanera, M. Gavanelli, M. Nonato, M. Roma, Logic-based benders decomposition in answer set programming for chronic outpatients scheduling, Theory Pract. Log. Program. 23 (2023) 848-864. doi:10.1017/S147106842300025X.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name><given-names>M.</given-names> <surname>Cardellini</surname></string-name>,
          <string-name><given-names>P.</given-names> <surname>De Nardi</surname></string-name>,
          <string-name><given-names>C.</given-names> <surname>Dodaro</surname></string-name>,
          <string-name><given-names>G.</given-names> <surname>Galatà</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Giardini</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Maratea</surname></string-name>,
          <string-name><given-names>I.</given-names> <surname>Porro</surname></string-name>,
          <article-title>Solving rehabilitation scheduling problems via a two-phase ASP approach</article-title>,
          <source>Theory Pract. Log. Program.</source> <volume>24</volume> (<year>2024</year>) <fpage>344</fpage>-<lpage>367</lpage>. doi:10.1017/S1471068423000030.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name><given-names>F.</given-names> <surname>Wotawa</surname></string-name>,
          <article-title>On the use of answer set programming for model-based diagnosis</article-title>,
          in: <string-name><given-names>H.</given-names> <surname>Fujita</surname></string-name>,
          <string-name><given-names>P.</given-names> <surname>Fournier-Viger</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Ali</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Sasaki</surname></string-name> (Eds.),
          <source>Trends in Artificial Intelligence Theory and Applications. Artificial Intelligence Practices - 33rd International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, IEA/AIE 2020</source>, Kitakyushu, Japan, September 22-25, <year>2020</year>, Proceedings, volume <volume>12144</volume> of Lecture Notes in Computer Science, Springer, <year>2020</year>, pp. <fpage>518</fpage>-<lpage>529</lpage>. doi:10.1007/978-3-030-55789-8_45.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name><given-names>R.</given-names> <surname>Taupe</surname></string-name>,
          <string-name><given-names>G.</given-names> <surname>Friedrich</surname></string-name>,
          <string-name><given-names>K.</given-names> <surname>Schekotihin</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Weinzierl</surname></string-name>,
          <article-title>Solving configuration problems with ASP and declarative domain specific heuristics</article-title>,
          in: <string-name><given-names>M.</given-names> <surname>Aldanondo</surname></string-name>,
          <string-name><given-names>A. A.</given-names> <surname>Falkner</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Felfernig</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Stettinger</surname></string-name> (Eds.),
          <source>Proceedings of the 23rd International Configuration Workshop (ConfWS 2021)</source>, Vienna, Austria, 16-17 September, <year>2021</year>, volume <volume>2945</volume> of CEUR Workshop Proceedings, CEUR-WS.org, <year>2021</year>, pp. <fpage>13</fpage>-<lpage>20</lpage>. URL: https://ceur-ws.org/Vol-2945/21-RT-ConfWS21_paper_4.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name><given-names>K.</given-names> <surname>Basu</surname></string-name>,
          <string-name><given-names>S. C.</given-names> <surname>Varanasi</surname></string-name>,
          <string-name><given-names>F.</given-names> <surname>Shakerin</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Arias</surname></string-name>,
          <string-name><given-names>G.</given-names> <surname>Gupta</surname></string-name>,
          <article-title>Knowledge-driven natural language understanding of English text and its applications</article-title>,
          in: <source>Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021</source>, Virtual Event, February 2-9, <year>2021</year>, AAAI Press, <year>2021</year>, pp. <fpage>12554</fpage>-<lpage>12563</lpage>. doi:10.1609/AAAI.V35I14.17488.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name><given-names>Y.</given-names> <surname>Zeng</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Rajasekharan</surname></string-name>,
          <string-name><given-names>P.</given-names> <surname>Padalkar</surname></string-name>,
          <string-name><given-names>K.</given-names> <surname>Basu</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Arias</surname></string-name>,
          <string-name><given-names>G.</given-names> <surname>Gupta</surname></string-name>,
          <article-title>Automated interactive domain-specific conversational agents that understand human dialogs</article-title>,
          in: <string-name><given-names>M.</given-names> <surname>Gebser</surname></string-name>,
          <string-name><given-names>I.</given-names> <surname>Sergey</surname></string-name> (Eds.),
          <source>Practical Aspects of Declarative Languages - 26th International Symposium, PADL 2024</source>, London, UK, January 15-16, <year>2024</year>, Proceedings, volume <volume>14512</volume> of Lecture Notes in Computer Science, Springer, <year>2024</year>, pp. <fpage>204</fpage>-<lpage>222</lpage>. doi:10.1007/978-3-031-52038-9_13.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name><given-names>R.</given-names> <surname>Kaminski</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Romero</surname></string-name>,
          <string-name><given-names>T.</given-names> <surname>Schaub</surname></string-name>,
          <string-name><given-names>P.</given-names> <surname>Wanko</surname></string-name>,
          <article-title>How to build your own ASP-based system?!</article-title>,
          <source>Theory and Practice of Logic Programming</source> <volume>23</volume> (<year>2023</year>) <fpage>299</fpage>-<lpage>361</lpage>. doi:10.1017/S1471068421000508.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>