<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Querying Cultural Heritage Knowledge Bases in Natural Language: Discussion Paper</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Bernardo Cuteri</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kristian Reale</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Francesco Ricca</string-name>
          <email>riccag@mat.unical.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Mathematics and Computer Science University of Calabria</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Knowledge Based Question Answering (KBQA) is concerned with the possibility of querying data by posing questions in Natural Language (NL), which is by far the most common human form of expressing an information need. This work reviews an approach to Question Answering whose goal is to transform Natural Language questions into SPARQL queries over cultural heritage knowledge bases. The key idea is to apply a rule-based classification process that we call template matching and that we have implemented in a prototype using logic programming. In the paper we discuss the application of the prototype in a real use case, and indicate ongoing and future directions of work.</p>
      </abstract>
      <kwd-group>
        <kwd>Knowledge-based Question Answering</kwd>
        <kwd>Cultural Heritage</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Reference Model [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. CIDOC-crm provides a common semantic framework for the
mapping of cultural heritage information and has already been adopted as a base
interchange format by museums, libraries, online data collections and archives
all over the world [
        <xref ref-type="bibr" rid="ref10 ref9">9,10</xref>
        ]. For this reason, CIDOC-crm has been identified as the
knowledge reference model for PIUCULTURA. Thus, our Question Answering
prototype is applicable to query both general (e.g., online data collections) and
specific (e.g., museum databases) CIDOC-compliant knowledge sources.
      </p>
      <p>
        Our QA approach can be described as a waterfall-like process in which a user
question is first processed from a syntactic point of view and then from a
semantic point of view. Syntactic processing is based on the concept of template, where
a template represents a category of syntactically homogeneous sentence patterns
and is expressed in terms of Answer Set Programming (ASP) [
        <xref ref-type="bibr" rid="ref12 ref6 ref7">12,7,6</xref>
        ] rules. ASP
is a well-established formalism for logic programming, and combines a
comparatively high knowledge-modeling power with a robust solving technology [
        <xref ref-type="bibr" rid="ref11 ref17">11,17</xref>
        ].
ASP has roots in Datalog, a language for deductive databases.
      </p>
      <p>
        On the other hand, the semantic processing is based on the concept of intent.
By intent we mean the purpose (i.e., the intent) of the question: two questions
can belong to two disjoint syntactic categories but have the same intent and
vice versa. To give an example: who created Guernica? and who is the author of
Guernica? have quite different syntactic structures, but have the same intent,
i.e., to know who made the work Guernica. On the other hand, if we consider who
created Guernica? and who restored Guernica? we can say that they are
syntactically similar (or homogeneous), but semantically different: the purpose of the
two questions is different. Semantic disambiguation, in which intents are mapped
to a set of predefined queries on the knowledge base, is done by resorting to the
multilingual BabelNet [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] dictionary.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Question Answering for Cultural Heritage</title>
      <sec id="sec-2-1">
        <title>The problem</title>
        <p>
          The goal of our approach is to answer questions on cultural heritage facts stored
in a repository modeled according to a standard model of this domain. In
particular, the target knowledge base model of the project is the CIDOC conceptual
reference model [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]. The CIDOC-crm is a standardized, actively maintained model that
has been designed as a common language for the exchange and sharing of data
on cultural heritage without loss of meaning, supporting the implementation
of local data transformation algorithms towards this model. This standard has
been adopted worldwide by a growing number of institutions and provides a
more trustworthy, structured and complete source w.r.t. freely available (often
unstructured and non-certified) web resources. Indeed, museums and institutions
typically have structured sources in which they store information about their
artifacts that can be mapped to CIDOC-crm rather easily (actually, this was
one of the main goals of the promoters of CIDOC-crm). On the other hand, the
availability of documentary sources is limited. If we take into consideration freely
available documentary sources such as Wikipedia, we realize that the percentage
coverage of works and authors is low. For example, a museum like the British
Museum has about 8 million artifacts (made available in CIDOC-compliant
format) while on Wikipedia there are in total about 500 thousand articles about
works of art from all over the world. The CIDOC-crm is periodically released in
RDFS format [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], thus our Question Answering system has to answer questions
by finding the information required on an RDF knowledge base that follows
the CIDOC-crm model. The reference query language of RDF is SPARQL [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ].
So, in basic terms, the QA system has to map natural language questions into
SPARQL queries and produce answers in natural language from query results.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>Overview of the Approach</title>
        <p>In this section we present step-by-step our question answering system for cultural
heritage. In particular, the question answering process is split into the
following phases: (1) Question NL Processing: the input question is transformed
into a three-level syntactic representation; (2) Template Matching: the
representation is categorized by a template system that is implemented by means
of logical rules; (3) Intent Determination: the identified template is mapped
to an intent, which precisely identifies the purpose of the
question; (4) Query Generation: an intent becomes a query on the knowledge
base; (5) Query Execution: the Query is executed on the knowledge base;
(6) Answer Generation: the result produced by the query is transformed into
a natural language answer. Splitting the QA process into distinct phases allowed
us to implement a system by connecting loosely-coupled modules dedicated to
each phase. In this way we also achieved better maintainability and extensibility.
In the following sections, we analyze in detail the 6 phases just listed.
Question NL Processing The NL processing phase builds a formal and
therefore tractable representation of the input question. The question is decomposed
and analyzed by highlighting both the atomic components that compose it, and
the morphological properties of the components and the relationships that bind
them. In this phase we perform the following NLP steps: Named Entity
Recognition, Tokenization, Part-of-Speech Tagging, and Dependency Parsing.</p>
        <p>
          Named entities recognition (NER) is an NLP task that deals with the
identification and categorization of the entities that appear in a text. The named
entities are portions of text that identify the names of people, organizations,
places or other elements that are referenced by a proper name. For example, in
the phrase Michelangelo has painted the Last Judgment, we can recognize two
entities: Michelangelo, which could belong to a Person category, and the Last
Judgment, which could belong to an Artwork category. When the entities of the
text have been recognized, they can be replaced with placeholders that are
easier to manage in the subsequent natural language processing stages. In our
implementation we use CRF++ [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ] that implements a supervised model based
on conditional random fields, which are probabilistic models for segmenting and
labeling sequence data [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ].
        </p>
        <p>Tokenization consists of splitting text into words (called tokens). A token is
an indivisible unit of text. Tokens are separated by spaces or punctuation marks.
In western languages, the tokenization phase turns out to be rather simple, as
these languages mark word boundaries quite clearly. Indeed, the approaches
used for natural language tokenization are based on simple regular expressions.
Tokenization is the first phase of lexical analysis and creates the input for the
next Part-of-Speech Tagging phase.</p>
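        <p>To illustrate the point above, a regex-based tokenizer can be sketched in a few lines of Python (a simplification for illustration only; the actual system relies on Apache OpenNLP):</p>
        <p>
```python
import re

def tokenize(text):
    """Split text into word and punctuation tokens using a simple regex,
    exploiting the explicit word boundaries of western languages."""
    # \w+ captures words (including digits); [^\w\s] captures punctuation marks
    return re.findall(r"\w+|[^\w\s]", text)

print(tokenize("who painted Guernica?"))  # ['who', 'painted', 'Guernica', '?']
```
        </p>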
        <p>The Part-of-Speech tagging phase assigns to each word the corresponding
part of the speech. Common examples of parts of speech are adjectives, nouns,
pronouns, verbs, or articles. The Part-of-Speech assignment is typically
implemented with supervised statistical methods. There are, for several languages,
large manually annotated corpora that can be used as training sets to train
a statistical system. Among the best performing approaches are those based
on Maximum Entropy. For tokenization and POS-tagging we used the Apache
OpenNLP library1 with pretrained models.</p>
        <p>
          Finally, Dependency Parsing is the identification of lexical dependencies of
the words of a text according to a grammar of dependencies. The dependency
grammar (DG) is a class of syntactic theories that are all based on the
dependency relationship (as opposed to the constituency relation). Dependency
is the notion that linguistic units, e.g., words, are connected to one another
by directed connections (or dependencies). A dependency is determined by the
relationship between a word (a head) and its dependents. The methods for
extracting grammar dependencies are typically supervised and use a reference
tag-set and a standard input representation format known as the CoNLL
standard, developed and updated within the CoNLL scientific conference (Conference
on Computational Natural Language Learning). In our implementation we used
MaltParser2, a system for data-driven dependency parsing [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ].
Template Matching Template matching is in charge of classifying questions
from the syntactic point of view and extracting the question terms that are needed
to instantiate the query for retrieving the answer. Basically, a template represents
a category of syntactically homogeneous questions. In our system, templates are
encoded in terms of ASP rules. By using ASP we can work in a declarative
fashion and avoid implementing the template matching procedure from scratch.
        </p>
        <p>To this end, the output of the NLP phase is modeled by using ASP facts
that constitute the input of the template matching module. In particular, words
are indexed by their position in the sentence and they are associated with their
morphological features by using facts of the following forms:
word(pst,wrd).</p>
        <p>pos(pst,pos tag).</p>
        <p>gr(pst1,pst2,rel tag).
the first kind of fact associates the position of a word (pst) in a sentence with the word
itself (wrd); the second associates words (pst) with their POS tags (pos tag),
1 https://opennlp.apache.org
2 http://www.maltparser.org/
and the third models grammatical relations (a.k.a. typed dependencies),
specifying the type of grammatical relation (rel tag) holding among pairs of words
(pst1,pst2). The possible tags and relation terms are constants representing
the labels produced by the NLP tools mentioned in the previous subsection.</p>
        <p>Consider, for example, the question who painted Guernica?; the output of
the NLP phase would consist of the following facts:
word(1, "who"). word(2, "painted"). word(3, "Guernica").
word(4, "?"). pos(1, pr). pos(2, vb). pos(3, np). pos(4, f).
gr(2, 1, nsubj). gr(2, 3, dobj). gr(2, 4, punct).</p>
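        <p>The fact encoding of this example can be reproduced with a small helper; the following is a sketch in Python (the function name and the input format are ours, for illustration — the actual system generates the facts from the NLP tools' output):</p>
        <p>
```python
def to_asp_facts(words, pos_tags, deps):
    """Encode NLP output as ASP facts word/2, pos/2 and gr/3,
    indexing words by their 1-based position in the sentence."""
    facts = []
    for i, w in enumerate(words, start=1):
        facts.append(f'word({i}, "{w}").')
    for i, tag in enumerate(pos_tags, start=1):
        facts.append(f"pos({i}, {tag}).")
    for head, dep, rel in deps:
        facts.append(f"gr({head}, {dep}, {rel}).")
    return facts

# The running example: who painted Guernica?
facts = to_asp_facts(
    ["who", "painted", "Guernica", "?"],
    ["pr", "vb", "np", "f"],
    [(2, 1, "nsubj"), (2, 3, "dobj"), (2, 4, "punct")],
)
print("\n".join(facts))
```
        </p>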
        <p>In the template matching phase, questions are matched against question
templates. Templates identify categories of questions that are uniform from the
syntactic point of view and we express them in the form of ASP rules.</p>
        <p>Basically, each template rule models a condition under which we identified a
possible syntactic question pattern for a template. An example of a template rule
that matches questions of the form who action object? is the following:
template(who action object, terms 2(V, O), 8) :-
    word(1, "who"), word(2, V), word(3, O), word(4, "?"),
    pos(1, pr), pos(2, vb), pos(3, np), pos(4, f),
    gr(2, 1, nsubj), gr(2, 3, dobj), gr(2, 4, punct).</p>
        <p>In the example, who action object is a constant that identifies the template,
while terms 2(V, O) is a function symbol that allows extracting the terms of the
input question, respectively the verb V and the object O. The intuitive meaning
of a template rule is that whenever all atoms in the body (the part of the rule
after :-) are true, then the template matches. The number 8 is the weight
of the template rule. The weight is a number expressing the importance of a
pattern. By using weights one can freely express preferences; for instance, in our
implementation we set this number to the number of elements in the body of the
ASP template rule to favor more specific templates over more generic ones.</p>
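        <p>The weighted selection among matching templates can also be mimicked outside ASP; the following Python sketch (with hypothetical names and a deliberately simplified question representation) illustrates the favor-the-most-specific policy:</p>
        <p>
```python
def match_templates(question_repr, templates):
    """Return the matching template with the highest weight, mimicking
    how the ASP engine selects the best (most specific) match."""
    matches = [(t["weight"], t["name"], t["extract"](question_repr))
               for t in templates if t["condition"](question_repr)]
    if not matches:
        return None
    weight, name, terms = max(matches)  # heavier rule = more specific rule
    return name, terms

# Hypothetical representation of "who painted Guernica?"
q = {"words": ["who", "painted", "Guernica", "?"],
     "pos": ["pr", "vb", "np", "f"]}

templates = [
    {"name": "who_action_object",
     "weight": 8,
     "condition": lambda q: (q["words"][0] == "who"
                             and q["pos"][1] == "vb"
                             and q["pos"][2] == "np"),
     "extract": lambda q: (q["words"][1], q["words"][2])},
]

print(match_templates(q, templates))
```
        </p>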
        <p>Question templates are intended to be defined by the application designer,
which is a reasonable choice in applications like the one we considered, where the
number of templates to produce is limited. Nonetheless, to assist the definition
of templates we developed a graphical user interface. Such an interface helps the
user build template rules by working on and generalizing examples, and does
not require any previous knowledge of ASP or specific knowledge of NLP tools.
The template editing interface is not described in this paper for space reasons.</p>
        <p>
          In our prototype, we used DLV [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ] as the engine that computes the answer sets
(thus the best matches) of the template matching phase, and the DLV Wrapper
library [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ] to manage DLV invocations from Java code. DLV has a Datalog
subsystem supporting efficient ontological reasoning [
          <xref ref-type="bibr" rid="ref1 ref2 ref3 ref4 ref5">4,2,5,3,1</xref>
          ].
        </p>
        <p>
          Intent Determination The identification of a question by templates might
not be sufficient to identify its intent or purpose. For example, who painted
Guernica? and who killed Caesar? have similar syntactic structures and may fall into
the same template, but they have two different purposes. The intent
determination process is based on the lexical expansion of question terms extracted in the
template matching phase and has the role of identifying what the question asks
(i.e., its intent), starting from the result of the template matching phase. In other
words, it disambiguates the intent of questions that fall into the same syntactic
category. In the previous example, painting is a hyponym (i.e., a more specific instance)
of creating and thus we can understand that the intent is to determine the
creator of a work, while killing does not have such a relationship and we should,
therefore, instantiate a different intent. In the same way, who painted Guernica? and
who made Guernica? are questions that can be correctly mapped with a single
template and can be correctly recognized as the same intent thanks to the fact
that both verbs are hyponyms or synonyms of the verb create. Semantic
relations between words can be obtained by using dedicated dictionaries, like WordNet [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ]
or BabelNet [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ]. In our system we implemented the intent determination
module in Java and used the BabelNet API library for accessing word relations. In
particular, intent determination is implemented as a series of Java conditional
checks (possibly nested) on word relations. Such conditional checks are expressed
as a question term Q, a word relation R and a target word T. The BabelNet
service receives such a triple and returns true/false depending on whether Q is in
relation R with T, where R is either synonymy, hyponymy or hyperonymy.
        </p>
        <p>The implementation of intent determination is done by the designer, as is
template definition. Our system implements a set of intents that were identified
during the analysis by a partner of the project.</p>
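        <p>The triple-based checks described above can be sketched as follows; here a stub dictionary stands in for the BabelNet service, and the intent and relation names are illustrative, not taken from the actual system (which implements the checks in Java):</p>
        <p>
```python
# Stub word-relation service standing in for BabelNet: maps a
# (term, relation, target) triple to True/False.
RELATIONS = {
    ("paint", "hyponym_of", "create"): True,
    ("make", "synonym_of", "create"): True,
}

def in_relation(term, relation, target):
    """Answer whether `term` is in `relation` with `target`
    (stubbed lookup; the real system queries the BabelNet API)."""
    return RELATIONS.get((term, relation, target), False)

def determine_intent(verb):
    """Conditional checks on word relations mapping an extracted
    verb to an intent, in the style described in the text."""
    if in_relation(verb, "hyponym_of", "create") or \
       in_relation(verb, "synonym_of", "create"):
        return "creator_of_work"
    return "unknown_intent"

print(determine_intent("paint"))  # creator_of_work
print(determine_intent("kill"))   # unknown_intent
```
        </p>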
        <p>
          Query Execution The intents identified in the previous phase are mapped one
to one with query templates, called prepared statements in programming jargon.
In the Query Execution phase, the query template corresponding to the identified
intent has its slots filled with terms extracted from the template matching
phase and is executed over the knowledge base. The CIDOC-crm specification is,
by definition, an RDF knowledge base [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], thus we implemented the queries
corresponding to intents in the SPARQL language [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. In our prototype, we
store our data and run our queries using Apache Jena as the programmatic query
API, and Virtuoso Open-Source Edition as the knowledge base service.
Answer Generation Finally, the last phase of a typical user interaction with
the QA system is the so-called Answer Generation. In this phase, the results
produced by the execution of the query on the knowledge base are transformed
into a natural language answer that is finally presented to the user. To implement
this phase, we have designed answer templates, that is, natural language patterns
with parameterized slots that are filled according to the question intent and the
terms extracted from the database.
        </p>
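        <p>The slot filling of both the prepared statement and the answer template can be sketched as follows. This is a Python illustration only: the query text and property path are indicative of CIDOC-crm style but are not taken from the actual system, which runs its SPARQL queries over Virtuoso through Apache Jena.</p>
        <p>
```python
# Hypothetical SPARQL prepared statement for a creator-of-work intent.
CREATOR_QUERY = """
SELECT ?creator WHERE {
  ?work rdfs:label "%(work)s" .
  ?work ecrm:P94i_was_created_by ?creation .
  ?creation ecrm:P14_carried_out_by ?creator .
}
"""

# Hypothetical answer template with parameterized slots.
ANSWER_TEMPLATE = "%(work)s was created by %(creator)s."

def fill_query(work):
    """Fill the slot of the prepared statement with a question term."""
    return CREATOR_QUERY % {"work": work}

def generate_answer(work, creator):
    """Fill the answer template with the intent's term and a query result."""
    return ANSWER_TEMPLATE % {"work": work, "creator": creator}

print(generate_answer("Guernica", "Pablo Picasso"))
# Guernica was created by Pablo Picasso.
```
        </p>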
      </sec>
      <sec id="sec-2-3">
        <title>Preliminary experiments</title>
        <p>We conducted an experimental analysis to assess the performance of the system
on a set of 60 template rules, which are able to handle basic question forms
and types for the cultural heritage domain, distributed over 20 different intents. We
report, on a sample of 167 questions, an average template matching time of 34
milliseconds (ms) on an Intel i7-7700HQ CPU. As for
the other phases of the QA system, the NL phase has an average execution time of
30 ms (at most 50 ms), the intent determination phase has an average execution
time of 50 ms (at most 580 ms), and the average query execution time is
8 ms (at most 32 ms). Overall the system exhibits good execution times,
which are acceptable for a real-time QA system.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Discussion and future works</title>
      <p>In this work, we presented an approach to the problem of Knowledge-based
Question Answering in the Cultural Heritage domain. The presented solution
transforms input questions into SPARQL queries that are executed on a
knowledge base. The concrete application of the approach to the Cultural Heritage
use-case shows that one of the main advantages of the approach is that it is
suited for fast prototyping, as it allows the fast integration of new question
forms and new question intents. This feature suits the needs of closed domains
that are characterized by a limited number of intents and question forms. On the
other hand, a drawback of the approach is that it relies on the manual work of a
designer who provides question templates and the strategy for intent matching.
This problem is partly addressed by other QA approaches, such as those based
on machine learning, which require less engineering effort in the implementation
of question understanding at the cost of providing a corpus of examples.</p>
      <p>
        The prototype can be seen as a good starting point for many future works.
One of the most natural observations that comes to mind is that the intent
determination phase could also be encoded using logic programs, thus making the
specification of this step flexible and independent from a static decision made at
compile time. Indeed, having the whole question understanding process in ASP
makes it more homogeneous and easier to develop and maintain. A prerequisite
for implementing intent determination in ASP is to make ASP code interact with
the dictionary that provides word semantic relations. We think that this can be
achieved either by porting dictionary data into ASP or by using ASP extensions
with external computation. Another interesting direction to investigate is how the
ASP-based implementation compares with other techniques for question
answering, such as those based purely on machine learning (e.g., [
        <xref ref-type="bibr" rid="ref18 ref23">23,18</xref>
          ]), and to develop an
architecture where the two can be combined in a single implementation.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Adrian</surname>
          </string-name>
          , W.T.,
          <string-name>
            <surname>Manna</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leone</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Amendola</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Adrian</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Entity set expansion from the web via ASP</article-title>
          .
          <source>In: ICLP (Technical Communications)</source>
          .
          <source>OASICS</source>
          , vol.
          <volume>58</volume>
          , pp.
          <volume>1</volume>
          :
          <issue>1</issue>
          –
          <issue>1</issue>
          :
          <fpage>5</fpage>
          .
          <string-name>
            <given-names>Schloss</given-names>
            <surname>Dagstuhl - Leibniz-Zentrum fuer Informatik</surname>
          </string-name>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Amendola</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leone</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Manna</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Finite model reasoning over existential rules</article-title>
          .
          <source>TPLP</source>
          <volume>17</volume>
          (
          <issue>5-6</issue>
          ),
          <volume>726</volume>
          –
          <fpage>743</fpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Amendola</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leone</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Manna</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Querying nite or arbitrary models? no matter! existential rules may rely on both once again (discussion paper)</article-title>
          .
          <source>In: SEBD. CEUR Workshop Proceedings</source>
          , vol.
          <year>2037</year>
          , p.
          <fpage>218</fpage>
          .
          <string-name>
            <surname>CEUR-WS.org</surname>
          </string-name>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Amendola</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leone</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Manna</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Finite controllability of conjunctive query answering with existential rules: Two steps forward</article-title>
          .
          <source>In: IJCAI</source>
          . pp.
          <volume>5189</volume>
          –
          <fpage>5193</fpage>
          . ijcai.
          <source>org</source>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Amendola</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leone</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Manna</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Veltri</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Enhancing existential rules by closed-world variables</article-title>
          .
          <source>In: IJCAI</source>
          . pp.
          <volume>1676</volume>
          –
          <fpage>1682</fpage>
          . ijcai.
          <source>org</source>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Bonatti</surname>
            ,
            <given-names>P.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Calimeri</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leone</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ricca</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Answer set programming</article-title>
          .
          <source>In: 25 Years GULP. Lecture Notes in Computer Science</source>
          , vol.
          <volume>6125</volume>
          , pp.
          <volume>159</volume>
          –
          <fpage>182</fpage>
          . Springer (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Brewka</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Eiter</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Truszczynski</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Answer set programming at a glance</article-title>
          .
          <source>Communications of the ACM</source>
          <volume>54</volume>
          (
          <issue>12</issue>
          ),
          <volume>92</volume>
          –
          <fpage>103</fpage>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Consortium</surname>
            ,
            <given-names>W.W.W.</given-names>
          </string-name>
          , et al.:
          <article-title>Rdf 1.1 concepts and abstract syntax (</article-title>
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Doerr</surname>
            ,
            <given-names>M.:</given-names>
          </string-name>
          <article-title>The cidoc conceptual reference module: an ontological approach to semantic interoperability of metadata</article-title>
          .
          <source>AI</source>
          magazine
          <volume>24</volume>
          (
          <issue>3</issue>
          ),
          <volume>75</volume>
          (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Doerr</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gradmann</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hennicke</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Isaac</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Meghini</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          , Van de Sompel, H.:
          <article-title>The europeana data model (edm)</article-title>
          .
          <source>In: World Library and Information Congress: 76th IFLA general conference and assembly</source>
          . pp.
          <volume>10</volume>
          –
          <issue>15</issue>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Gebser</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leone</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Maratea</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Perri</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ricca</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schaub</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Evaluation techniques and systems for answer set programming: a survey</article-title>
          .
          <source>In: IJCAI</source>
          . pp.
<fpage>5450</fpage>
–
<lpage>5456</lpage>
. ijcai.org
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Gelfond</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lifschitz</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
<article-title>Classical negation in logic programs and disjunctive databases</article-title>
.
<source>New Generation Computing 9(3-4)</source>
,
<fpage>365</fpage>
–
<lpage>385</lpage>
          (
          <year>1991</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Harris</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Seaborne</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
<string-name>
  <surname>Prud'hommeaux</surname>
  ,
  <given-names>E.</given-names>
</string-name>
:
<article-title>SPARQL 1.1 query language</article-title>
          .
          <source>W3C recommendation 21(10)</source>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Kudo</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
: CRF++. http://crfpp.sourceforge.net/ (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
<string-name>
  <surname>Lafferty</surname>
  ,
  <given-names>J.</given-names>
</string-name>
,
          <string-name>
            <surname>McCallum</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pereira</surname>
            ,
            <given-names>F.C.</given-names>
          </string-name>
          :
<article-title>Conditional random fields: Probabilistic models for segmenting and labeling sequence data</article-title>
(
          <year>2001</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Leone</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pfeifer</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Faber</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Eiter</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gottlob</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Perri</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Scarcello</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
<article-title>The DLV system for knowledge representation and reasoning</article-title>
          .
<source>ACM Transactions on Computational Logic (TOCL)</source>
<volume>7</volume>
(
<issue>3</issue>
),
<fpage>499</fpage>
–
<lpage>562</lpage>
          (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Lierler</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Maratea</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ricca</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Systems, engineering environments, and competitions</article-title>
          .
<source>AI Magazine</source>
<volume>37</volume>
(
<issue>3</issue>
),
<fpage>45</fpage>
–
<lpage>52</lpage>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Lukovnikov</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fischer</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lehmann</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Auer</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Neural network-based question answering over knowledge graphs on word and character level</article-title>
          .
          <source>In: Proceedings of the 26th international conference on World Wide Web</source>
          . pp.
<fpage>1211</fpage>
–
<lpage>1220</lpage>
. International World Wide Web Conferences Steering Committee
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Miller</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>WordNet: An electronic lexical database</article-title>
. MIT Press (
          <year>1998</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Navigli</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ponzetto</surname>
            ,
            <given-names>S.P.</given-names>
          </string-name>
:
<article-title>BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network</article-title>
.
<source>Artificial Intelligence</source>
<volume>193</volume>
,
<fpage>217</fpage>
–
<lpage>250</lpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Nivre</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hall</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nilsson</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chanev</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Eryigit</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , Kubler,
          <string-name>
            <given-names>S.</given-names>
            ,
            <surname>Marinov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            ,
            <surname>Marsi</surname>
          </string-name>
          , E.:
<article-title>MaltParser: A language-independent system for data-driven dependency parsing</article-title>
          .
          <source>Natural Language Engineering</source>
          <volume>13</volume>
          (
          <issue>2</issue>
          ),
<fpage>95</fpage>
–
<lpage>135</lpage>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Ricca</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
<article-title>A Java wrapper for DLV</article-title>
.
<source>In: Answer Set Programming, CEUR Workshop Proceedings</source>
          , vol.
          <volume>78</volume>
          .
CEUR-WS.org
          (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Zhong</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xiong</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
<string-name>
  <surname>Socher</surname>
  ,
  <given-names>R.</given-names>
</string-name>
:
<article-title>Seq2SQL: Generating structured queries from natural language using reinforcement learning</article-title>
          .
          <source>arXiv preprint arXiv:1709.00103</source>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>