<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Leveraging small language models for Text2SPARQL tasks to improve the resilience of AI assistance</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Felix Brei</string-name>
          <email>brei@infai.org</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Johannes Frey</string-name>
          <email>frey@informatik.uni-leipzig.de</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lars-Peter Meyer</string-name>
          <email>lpmeyer@infai.org</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>ETi Competence Center @ Institute for Applied Informatics</institution>
          ,
          <addr-line>Germany, https://cc-eti.org</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Institute of Computer Science, Leipzig University</institution>
          ,
          <addr-line>Germany, https://cs.uni-leipzig.de</addr-line>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>KMI Competence Center @ Institute for Applied Informatics</institution>
          ,
          <addr-line>Germany, https://kmi-leipzig.de</addr-line>
        </aff>
      </contrib-group>
      <fpage>11</fpage>
      <lpage>23</lpage>
      <abstract>
        <p>In this work we show that language models with fewer than one billion parameters can be used to translate natural language to SPARQL queries after fine-tuning. Using three different datasets ranging from academic to real world, we identify prerequisites that the training data must fulfill in order for the training to be successful. The goal is to empower users of semantic web technology to use AI assistance on affordable commodity hardware, making them more resilient against external factors.</p>
      </abstract>
      <kwd-group>
        <kwd>Language models</kwd>
        <kwd>SPARQL generation</kwd>
        <kwd>Question Answering</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
      <p>CEUR Workshop Proceedings (ceur-ws.org)</p>
    </sec>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>The usage of Large Language Models (LLMs) has increased exponentially since the advent of
ChatGPT. According to Similarweb, the website of OpenAI alone was visited more than 1.6
billion times by February 20241. In addition, Microsoft has launched several LLM-based AI assistants
called ’Copilots’2,3, and Google has released its AI called Bard,
which is now known as Gemini4,5. This suggests that the big tech companies believe in the
potential of LLMs to become part of our daily lives, just like smartphones or computers in
general. But do they live up to the expectations?</p>
      <p>
        Several test suites were derived to assess the generative capabilities of LLMs, for example
TruthfulQA[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], HellaSwag [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] or the Abstraction and Reasoning Corpus (ARC) [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. These test
suites, among others, are run regularly on the latest entries to the LLM circus and the results
for open LLMs are presented publicly on the Huggingface OpenLLM leaderboard [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. We can
see that performance increases drastically over time, with Bloom [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] scoring an average of
46.06 in August 2023 and Smaug-72B6 holding the record in February 2024 with a score of 80.48,
only half a year later.
      </p>
      <p>
        These test suites, however, mostly cover natural language tasks, such as continuing
a sentence, answering questions or extracting information from a paragraph of text. Based on the
experience from early experiments [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], a test suite [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] was developed that evaluates the capabilities
of LLMs to interface with knowledge graphs and assist in knowledge engineering tasks. While the
smaller open-source GPT4All models severely struggled, the state-of-the-art commercial LLMs
GPT4 and Claude showed promising results [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and a trend of performance improvements over
the course of 2023 [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] in dealing with KGs in Turtle format.
      </p>
      <p>
        Alas, these results come with several caveats:
1. The commercial LLMs that were tested are all hosted externally. This can be problematic
regarding data protection, because a user has to send information to a third party.
2. Because of their sheer size (GPT4 has one trillion parameters7), running these models
locally is prohibitively costly and therefore not an option for a lot of research institutes and
other parties. On top of that, training a model of such size is also extremely expensive8.
3. Even these commercial models were, at the time of writing, still significantly challenged
by SPARQL query generation or RML mapping generation [
        <xref ref-type="bibr" rid="ref10 ref8">8, 10</xref>
        ], indicating a need for
specific training or fine-tuning of all models w.r.t. handling those tasks in a reliable and
efficient way.
4. Since all these larger models are hosted on third party platforms, users are at the mercy of
the vendors to keep the services running and affordable. However, vendors have suddenly
changed their licensing and cost models in the past9, and deep sea cables have been damaged10,
separating certain areas of the world from the internet
and leaving local companies only with the computational resources they have on site.
      </p>
      <p>So we ask ourselves the following question: given a single task that we want to solve using
LLMs, is it possible to achieve similar performance to these large models with a much smaller
one? This would enable small businesses to use AI assistance with affordable hardware they
can host on site, increasing their resilience against outages, vendors changing their pricing
models, disruption due to trade embargoes and other external factors.</p>
      <p>As a first step into this direction, we study the task of translating a natural language question
into a SPARQL query because we think that this task enables people who are not familiar with
SPARQL to extract knowledge and insights from a knowledge graph which would otherwise not
be possible for them. The paper is organized as follows: first, we look at related research in this
field and explain where we fit into the big picture. Then we explain the setup of our experiments,
namely which model families were chosen and why, and which datasets we trained them on.
After that, we present and explain the results of our work, and finally we draw conclusions and
give an outlook on the directions our research will head next.
6https://huggingface.co/abacusai/Smaug-72B-v0.1
7https://www.semafor.com/article/03/24/2023/the-secret-history-of-elon-musk-sam-altman-and-openai
8https://www.wired.com/story/openai-ceo-sam-altman-the-age-of-giant-ai-models-is-already-over/
9https://www.theverge.com/2023/9/12/23870547/unit-price-change-game-development
10https://edition.cnn.com/2024/03/04/business/red-sea-cables-cut-internet/index.html</p>
    </sec>
    <sec id="sec-3">
      <title>2. Related Work</title>
      <p>
        Current approaches focus on fine-tuning large language models. For example the authors of
[
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] propose a methodology for fine-tuning OpenLLaMA to generate SPARQL queries over life
science knowledge graphs using data augmentation techniques, such as providing meaningful
variable names and inline comments, improving the performance of the model in generating
accurate SPARQL queries. The authors of [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] use Llama as their basis for fine-tuning to
generate SPARQL queries over Wikidata.
      </p>
      <p>These two papers have shown that translating natural language to SPARQL queries is possible,
but they use models with at least three (OpenLLaMA) or seven (LLaMA) billion parameters.
The hardware required to train these models can be expensive, which is why we want to explore
models that are even smaller.</p>
      <p>Smaller models fine-tuned for one specific task are also able to beat the performance of
LLMs; e.g. SQLCoder-7B 11 performs better on SQL than the state-of-the-art GPT4. Our research is
comparable to that, but with far fewer parameters and SPARQL instead of SQL.</p>
      <p>
        [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] manages to fine-tune T5 on SPARQL queries for Wikidata, but to achieve these results,
the data had to be preprocessed in a way that is specific to T5. Furthermore, while this paper
explores other ways to tackle this task in general, it only looks at T5 instead of other model
families as we do.
      </p>
      <p>
        [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] gives a comprehensive overview and performs a comparison of pre-trained LMs (PLMs),
non-pre-trained LMs (NPLMs), and LLMs, testing various fine-tuning methods using LLMs. [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]
fine-tunes a lightweight model for SPARQL generation using synthetic training data generated
by the FlexKBQA framework on a target knowledge graph (sampling structured query templates
that are converted into SPARQL query instances and translated into natural language questions
using LLMs). The light-weight model can perform further self-guided training on real queries
to address a distribution shift between synthetic and real queries. [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] uses a GPT model to
investigate what parts of the Text2SPARQL task are the hardest for the model to solve so
appropriate countermeasures can be taken.
      </p>
      <p>
        [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] proposes a whole new architecture, based on GPT, specifically for SPARQL generation. We
consider this direction promising for the future, but here we focus on more foundational
research first, to understand which model families work best on a given dataset and why.
      </p>
    </sec>
    <sec id="sec-4">
      <title>3. Experimental Setup</title>
      <sec id="sec-4-1">
        <title>3.1. Model families</title>
        <p>As mentioned in the introduction, the focus of our work is to fine-tune language models
that can be considered small by modern standards. We chose one billion parameters as an
arbitrary limit on the number of parameters; as a general guideline we consulted the Steam
Hardware and Software Survey 12 and found that 57.22% of its users use a GPU with 8GB of VRAM
or more (January 2024). A model with less than one billion parameters should fit into this
amount of VRAM comfortably, showing that such models can be trained and run locally.
11https://huggingface.co/defog/sqlcoder-7b-2
12https://store.steampowered.com/hwsurvey/</p>
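        <p>As a rough sanity check on this claim, a back-of-envelope memory estimate can be sketched as follows. The byte counts are common rules of thumb (fp16 weights, Adam optimizer states), not measurements from this work:</p>
        <preformat>
```python
# Back-of-envelope VRAM estimate for sub-billion-parameter models.
# The byte counts are rough rules of thumb, not measured values.

def inference_vram_gb(n_params, bytes_per_param=2):
    """Approximate weight memory in GB (fp16 = 2 bytes per parameter)."""
    return n_params * bytes_per_param / 1e9

def training_vram_gb(n_params):
    """Very rough Adam fine-tuning estimate: fp16 weights + fp16 gradients
    + fp32 optimizer moments (2 x 4 bytes per parameter); activations and
    framework overhead are excluded."""
    return n_params * (2 + 2 + 8) / 1e9

print(inference_vram_gb(1_000_000_000))  # 2.0 (GB of weights)
print(training_vram_gb(220_000_000))     # 2.64 (GB for a T5-base-sized model)
```
        </preformat>
        <p>Even with optimizer states included, a 220-million-parameter model stays well below the 8GB budget, which is consistent with the survey figure cited above.</p>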
        <p>Another consideration is the public availability of the models. We believe that research
should be available to anyone who is interested and this should be reflected in the choice of
models. Therefore, we only select models that are openly available on Huggingface.</p>
        <p>
          Following these criteria, we observe quickly that there are only three large model families
that fit the bill, which we introduce here briefly. A full list of models evaluated is given in table
1
3.1.1. T5 and Flan-T5
In June 2020 Google released an LLM called Text-To-Text Transfer Transformer, or T5 in short
[
          <xref ref-type="bibr" rid="ref18">18</xref>
          ]. The base version consists of roughly 220 million parameters, with smaller and larger
versions available. With T5, Google wanted to provide a single LLM that can solve any NLP
task like text classification, sentiment analysis and so on. A user must provide a prefix like
’Translate the following sentence to french:’ and the LLM then infers how to process the rest of
the prompt. In 2022, researchers at Google released new versions of T5 called FLAN-T5 [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ]
(FLAN stands for fine-tuning language models [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ]) which, according to the authors, should
outperform T5 on any given task.
3.1.2. BART
BART was developed by Facebook and released in October 2019 [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ]. It consists of 139 million
parameters and is a combination of a BERT-like encoder [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ] with a GPT-like autoregressive
decoder [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ]. In August 2020, a multilingual version called mBART was released [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ]. The
authors put special emphasis on the fact that BART is just a pretrained model and needs to be
ifne-tuned for a given specific task. We also included mREBEL models as a specialized version
of BART for multilingual relation extraction [
          <xref ref-type="bibr" rid="ref25">25</xref>
          ] since it was finetuned with knowledge graphs
in mind.
3.1.3. M2M100 and NLLB-200
The M2M100 model was introduced in 2020 [
          <xref ref-type="bibr" rid="ref26">26</xref>
          ] as a many-to-many translation tool for 100
languages. The original version consists of 1.3 billion parameters which exceeds the upper
bound we imposed. But there is a distilled version available directly from the Facebook research
team at Huggingface called M2M100-418M13 which we use in our experiments.
        </p>
        <p>
          Its successor, the NLLB-200 model, was introduced in 2022 [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ] and stands for ’no language
left behind’ . Again we use the distilled version NLLB-200-Distilled-600M14 instead of the 3.3
billion full version of the model. As the authors state, the model is ’primarily intended for
research in machine translation’ which fits our bill perfectly.
        </p>
        <p>This leaves us with a selection of models to be assessed in our experiment that can be seen in
table 1.
13https://huggingface.co/facebook/m2m100_418M
14https://huggingface.co/facebook/nllb-200-distilled-600M</p>
      </sec>
      <sec id="sec-4-2">
        <title>3.2. Datasets used for Fine Tuning and Evaluation</title>
        <p>In order to study how well the models can be fine-tuned towards a target KG, we use three
evaluation datasets from different domains and with varying complexity. These datasets
comprise a number of natural language questions, each mapped to a SPARQL query
w.r.t. the target KG. For the first two datasets (organizational graph and CoyPu graph) we
generate questions and queries by sending the graph via the OpenAI API to GPT4 and prompting
it to generate tuples of natural language question, matching SPARQL query, and expected
result of the query. These tuples are filtered by checking whether the results that the SPARQL query
returns match the expected results. Both datasets are then augmented by sending each
remaining question again to GPT and asking it to paraphrase the question, giving us a total of
two natural language questions per SPARQL query.</p>
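        <p>The filtering step described above can be sketched as follows. This is a minimal illustration under our own naming, not the paper’s actual code; run_query stands in for executing a SPARQL query against the graph, stubbed here with a toy function:</p>
        <preformat>
```python
# Sketch of the dataset filtering: keep only those generated
# (question, query, expected_result) tuples whose query, when actually
# executed, returns the expected result set.

def normalize(rows):
    """Order-insensitive view of a result set (list of binding dicts)."""
    return frozenset(frozenset(row.items()) for row in rows)

def filter_tuples(candidates, run_query):
    kept = []
    for question, query, expected in candidates:
        try:
            actual = run_query(query)
        except Exception:
            continue  # unparsable or failing query: drop the tuple
        if normalize(actual) == normalize(expected):
            kept.append((question, query, expected))
    return kept

# Toy stub standing in for a real SPARQL endpoint:
def fake_run_query(query):
    if "employee" in query:
        return [{"name": "Anne Miller"}, {"name": "Bob Tanner"}]
    raise ValueError("cannot parse query")

candidates = [
    ("Who works here?", "SELECT ?name WHERE { ... employee ... }",
     [{"name": "Bob Tanner"}, {"name": "Anne Miller"}]),
    ("Bad one", "NOT SPARQL", [{"x": "1"}]),
]
print(len(filter_tuples(candidates, fake_run_query)))  # 1
```
        </preformat>
        <p>Note that the comparison is order-insensitive, since SPARQL result sets carry no guaranteed ordering without an ORDER BY clause.</p>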
        <sec id="sec-4-2-1">
          <title>3.2.1. Organizational Graph</title>
          <p>
            Introduced in [
            <xref ref-type="bibr" rid="ref6">6</xref>
            ], this small knowledge graph uses established vocabularies to describe an
organization with departments and employees. There is a clear schema that maps person and
department names to their corresponding RDF resource, for example ”Anne Miller” maps to
:anne while ”Bob Tanner” maps to :bob. In this dataset and the next we also let the language
model omit the prefix definitions for the queries and assume they are already present in the
preamble of the executed SPARQL query. Using GPT4 we generated a dataset consisting of 69
datapoints, which were split into 53 tuples for training and 16 for testing.
          </p>
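          <p>The naming schema mentioned above can be illustrated with a small sketch; the rule below (lowercased first name under the default prefix) is our reading of the examples given, not an extract of the actual graph:</p>
          <preformat>
```python
# Illustrative sketch of the schema that maps person names to RDF
# resources, e.g. "Anne Miller" to :anne. The real graph defines the
# mapping; this rule is inferred from the examples in the text.

def resource_for(person_name):
    first = person_name.split()[0]
    return ":" + first.lower()

print(resource_for("Anne Miller"))  # :anne
print(resource_for("Bob Tanner"))   # :bob
```
          </preformat>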
        </sec>
        <sec id="sec-4-2-2">
          <title>3.2.2. A subset of the CoyPu graph</title>
          <p>
            The CoyPu project15 aims to improve supply chain resiliency for corporations by combining
different data sources about public infrastructure, trades and trade agreements, events like
disasters and conflicts, and many more into a large knowledge graph. Querying this knowledge
graph has the potential to help businesses identify risks like single points of failure and mitigate
them. This usefulness, combined with the fact that the other two datasets have more of an
academic background, made us decide to use a subset of the CoyPu knowledge graph as
another dataset for training. Creating a viable subset led to its own difficulties and hurdles,
however, which we consider as future work. This dataset contains 131 tuples in total, which
were split into 105 for training and 26 for testing.
          </p>
        </sec>
        <sec id="sec-4-2-3">
          <title>3.2.3. QALD10</title>
          <p>
            The Question Answering over Linked Data (QALD) dataset is a standard benchmark16 with
QALD10 being the latest incarnation [
            <xref ref-type="bibr" rid="ref28">28</xref>
            ]. It consists of SPARQL queries along with matching
questions in different natural languages, w.r.t. Wikidata. In this work, we focus on English and
filter the dataset accordingly. This dataset is especially difficult for a language model to handle
because there is no clear indication of how to link entities from a given question, like ”Barack
Obama”, to their Wikidata entity ID (:Q76), giving rise to a whole field of research called Entity
Linking [
            <xref ref-type="bibr" rid="ref29">29</xref>
            ].
          </p>
        </sec>
      </sec>
      <sec id="sec-4-3">
        <title>3.3. Fine-tuning</title>
        <p>For each evaluation dataset individually, we fine-tune the selected models using
PyTorch (100 epochs). Since a single fine-tuning run does not hold much statistical significance
and involves random parameters, we perform isolated runs of the training for a total of ten
times. For each run we shuffle the training data with a predetermined random seed to make
the results reproducible. Specifically, each run is given an ID from 01 to 10 and the seeds
are generated by calculating the SHA512 sum of the ID and taking the first eight digits, so 01
results in the seed 99975818, 02 in 56899599 and so on.</p>
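        <p>The seeding scheme can be sketched as follows. This is our reading of the description; the exact digit-extraction rule (here: the first eight decimal digits of the hex digest) may differ from the authors’ implementation:</p>
        <preformat>
```python
# Sketch of the seeding scheme: derive a reproducible seed from each
# run ID by hashing it with SHA-512 and keeping the first eight decimal
# digits of the hex digest, then shuffle the training data with it.
import hashlib
import random

def seed_for_run(run_id):
    digest = hashlib.sha512(run_id.encode()).hexdigest()
    digits = [c for c in digest if c.isdigit()]
    return int("".join(digits[:8]))

def shuffled_training_data(data, run_id):
    rng = random.Random(seed_for_run(run_id))
    data = list(data)
    rng.shuffle(data)
    return data

print(seed_for_run("01"))  # same value on every invocation
```
        </preformat>
        <p>Because the seed is a pure function of the run ID, any run can be reproduced exactly without storing the shuffled data itself.</p>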
      </sec>
    </sec>
    <sec id="sec-5">
      <title>4. Results</title>
      <p>In the following subsections we only include those language models in the plots that generated
at least one correct query. The T5 family consistently failed to generate a single correct query on
the organizational graph, which is why it is absent from the result tables and figures; in fact, no T5
model produced a single correct result across all runs.</p>
      <p>To generate the datapoints for each plot, we interrupted the training every five epochs and
made the language models translate the questions from the evaluation split into SPARQL queries.
We then executed the queries and compared the result sets to determine whether the answers are
correct.</p>
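      <p>This evaluation step can be sketched as follows. translate and run_query are stand-ins (our naming, not the paper’s code) for the model under training and the SPARQL endpoint; results are compared order-insensitively:</p>
      <preformat>
```python
# Sketch of the periodic evaluation: translate each evaluation question,
# execute the predicted query, and count it as correct when the result
# set matches that of the gold query.

def evaluate(eval_split, translate, run_query):
    """Fraction of questions whose predicted query returns the same
    bindings (order-insensitive) as the gold query."""
    correct = 0
    for question, gold_query in eval_split:
        try:
            got = run_query(translate(question))
        except Exception:
            continue  # unparsable SPARQL counts as wrong
        if sorted(got) == sorted(run_query(gold_query)):
            correct += 1
    return correct / max(len(eval_split), 1)

# Toy stand-ins: queries are just keys into a lookup table of rows.
table = {"q1": [("anne",), ("bob",)], "q2": [(2,)]}
run_query = lambda q: table[q]
translate = {"who works here?": "q1", "how many depts?": "q3"}.get
split = [("who works here?", "q1"), ("how many depts?", "q2")]
print(evaluate(split, translate, run_query))  # 0.5
```
      </preformat>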
      <sec id="sec-5-1">
        <title>4.1. Organizational Graph</title>
        <p>The data is plotted on the left side of figure 3. We can see that for this dataset, BART-L performs best
(as do the other sizes of BART), with M2M100 close behind. Another thing we see
from the left plot in figure 3 is that, except for one outlier from mREBEL-L, the success
of fine-tuning is reliable and reproducible.</p>
        <p>Looking at common errors made during translation, we found that the best models rarely
generated SPARQL that could not be parsed; rather, they mixed up terms and injected parts
of the training data into the queries. An example is shown in table 2.</p>
      </sec>
      <sec id="sec-5-2">
        <title>4.2. CoyPu</title>
        <p>In figure 2 we can see that the performance during the first run of the experiment varies less
drastically than for the organizational graph. The standard deviations seen in table 5 are similar
though, so we think this is just a coincidence. Again, the (FLAN-)T5 models never generated even
a single correct query, so they are excluded from subsequent runs.</p>
        <p>We can also see that for this dataset, M2M100 outperformed the other models while BART-L is
in fact one of the worst, a complete shift from the results before. This again shows
that no single model family dominates across datasets.</p>
      </sec>
      <sec id="sec-5-3">
        <title>4.3. QALD10</title>
        <p>The language models have a really hard time with the QALD10 dataset. While the structure of
the generated queries comes close to the correct ones, the models cannot handle the translation
from entity names like Kobe Bryant to their corresponding Wikidata IDs like Q25369. We
expected this, since the whole field of entity linking is ongoing research and far from
trivial.</p>
        <p>Another problem here is that the QALD10 dataset requires the inclusion of all necessary
prefix definitions as part of the query, which was not a requirement for the organizational
graph or the CoyPu graph datasets.</p>
        <p>To provide some numbers for clarification: the best performing model was M2M100-418M.
The validation dataset contains 394 questions, of which only 104 were turned into SPARQL queries
that could be parsed. Of these 104 parsable queries, 51 returned an empty result set. All
remaining queries except for three used COUNT and returned 0 because the result set of the
underlying query was empty. The final three did return wrong bindings.</p>
        <p>BART-L only managed to translate one single question into a valid SPARQL query, and its
result set was not correct. Interestingly, mBART-L generated 101 parsable SPARQL queries,
which makes it a close second to M2M100-418M. The error distribution is about the same as for
M2M100-418M though, so no question was answered correctly.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>5. Conclusion and Future Work</title>
      <p>In this paper we have shown that fine-tuning language models for the translation of natural
language to SPARQL queries is indeed possible, although there are some limitations like the
requirement of a clear and concise mapping from entities in a question to entities in the
knowledge graph, like Anne Miller to :anne instead of :person1234. If this requirement is met,
both the BART family as well as the M2M100 family are able to fulfill this task.</p>
      <p>There are many avenues that can be explored from here. First, we should find
a better way to define the limit on the number of parameters that a model is allowed to have.
Here we have focused on a maximum of one billion, as this fits most consumer GPUs, but
there is probably a connection between model size and SPARQL generation capabilities.</p>
      <p>Secondly, we want to explore how these results can be used to deploy a fine-tuned language
model next to a RAG agent to improve its question answering capabilities. So far, LLMs used
by RAG agents often lack the ability to correctly apply aggregate functions, which could be
remedied by offering the RAG agent a SPARQL query as another source of information.</p>
      <p>Since all these models are open source, we can also modify them by manipulating existing
layers as well as removing some or inserting new ones. This might be a way to reduce inference
time and improve the performance even further. One could also derive completely new models
from scratch, since most pre-training datasets are openly available and pre-training is fast due
to the small size of the models.</p>
      <p>And on top of that, we still have the problem that both the organizational graph dataset
and the CoyPu dataset were generated using GPT, which defeats the purpose of being
independent from third parties. We will therefore also investigate how the training data
can be generated with open source LLMs like Falcon, Bloom and others, so that even this step of
the pipeline can be executed locally. Here, limited GPU memory matters less, since
the creation of the training and testing datasets is only done once, so it is not an
issue if this step takes a bit longer.</p>
      <p>The goal of this paper was to do a small survey of the out-of-the-box capabilities of readily
available language models. What we have seen so far looks promising and there is a lot of
intriguing research to be done in the near future.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>This work was partially supported by grants from the German Federal Ministry for Economic
Affairs and Climate Action (BMWK) to the CoyPu project (01MK21007A) and KISS project
(01MK22001A) as well as from the German Federal Ministry of Education and Research (BMBF)
to the project StahlDigital (13XP5116B).
Source code for the training, organizational graph dataset and CoyPu dataset can be found at
https://github.com/AKSW/LMs4Text2SPARQL and at DOI:10.5281/zenodo.10996425.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hilton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Evans</surname>
          </string-name>
          ,
          <article-title>Truthfulqa: Measuring how models mimic human falsehoods</article-title>
          ,
          <year>2022</year>
          . arXiv:2109.07958.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R.</given-names>
            <surname>Zellers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Holtzman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bisk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Farhadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
          <source>Hellaswag: Can a machine really finish your sentence?</source>
          ,
          <year>2019</year>
          . arXiv:1905.07830.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>F.</given-names>
            <surname>Chollet</surname>
          </string-name>
          ,
          <source>On the measure of intelligence</source>
          ,
          <year>2019</year>
          . arXiv:1911.01547.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>E.</given-names>
            <surname>Beeching</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Fourrier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Habib</surname>
          </string-name>
          , S. Han,
          <string-name>
            <given-names>N.</given-names>
            <surname>Lambert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Rajani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Sanseviero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Tunstall</surname>
          </string-name>
          , T. Wolf, Open llm leaderboard, https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>B.</given-names>
            <surname>Workshop</surname>
          </string-name>
          ,
          <article-title>Bloom: A 176b-parameter open-access multilingual language model</article-title>
          ,
          <year>2023</year>
          . arXiv:2211.05100.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>L.-P.</given-names>
            <surname>Meyer</surname>
          </string-name>
          , C. Stadler,
          <string-name>
            <given-names>J.</given-names>
            <surname>Frey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Radtke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Junghanns</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Meissner</surname>
          </string-name>
          , G. Dziwis,
          <string-name>
            <given-names>K.</given-names>
            <surname>Bulert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Martin</surname>
          </string-name>
          ,
          <article-title>Llm-assisted knowledge graph engineering: Experiments with chatgpt</article-title>
          , in: C.
          <string-name>
            <surname>Zinke-Wehlmann</surname>
          </string-name>
          , J. Friedrich (Eds.),
          <source>First Working Conference on Artificial Intelligence Development for a Resilient and Sustainable Tomorrow (AITomorrow)</source>
          <year>2023</year>
          , Informatik aktuell,
          <year>2023</year>
          . doi:10.1007/978-3-658-43705-3_8. arXiv:2307.06917.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
[7] L.-P. Meyer, J. Frey, K. Junghanns, F. Brei, K. Bulert, S. Gründer-Fahrer, M. Martin,
<article-title>Developing a scalable benchmark for assessing large language models in knowledge graph engineering</article-title>
, in: N. Keshan, S. Neumaier, A. L. Gentile, S. Vahdati (Eds.),
<source>Proceedings of the Posters and Demo Track of the 19th International Conference on Semantic Systems (SEMANTICS 2023)</source>
,
<year>2023</year>
. URL: https://ceur-ws.org/Vol-3526/paper-04.pdf. arXiv:2308.16622.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
[8] J. Frey, L.-P. Meyer, N. Arndt, F. Brei, K. Bulert,
<article-title>Benchmarking the abilities of large language models for RDF knowledge graph creation and comprehension: How well do LLMs speak Turtle?</article-title>
, in: M. Alam, M. Cochez (Eds.),
<source>Proceedings of the Workshop on Deep Learning for Knowledge Graphs (DL4KG 2023) co-located with the 21st International Semantic Web Conference (ISWC 2023), Athens, November 6-10, 2023</source>
, volume
<volume>3559</volume>
of CEUR Workshop Proceedings, CEUR-WS.org,
<year>2023</year>
. URL: https://ceur-ws.org/Vol-3559/paper-3.pdf. arXiv:2309.17122.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
[9] J. Frey, L.-P. Meyer, F. Brei, S. Gründer-Fahrer, M. Martin,
<article-title>Assessing the evolution of LLM capabilities for knowledge graph engineering in 2023</article-title>
, in: A. Meroño-Peñuela, O. Corcho, P. Groth, E. Simperl, V. Tamma, A. Nuzzolese, M. Poveda-Villalón, M. Sabou, V. Presutti, I. Celino, A. Revenko, J. Raad, B. Sartini, P. Lisena (Eds.),
<source>ESWC 2024 Satellite Events</source>
, Hersonissos, Crete, Greece, May 26-30, 2024, Proceedings,
<year>2024</year>
. URL: https://www.researchgate.net/publication/378804553_Assessing_the_Evolution_of_LLM_capabilities_for_Knowledge_Graph_Engineering_in_2023.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
[10] M. Hofer, J. Frey, E. Rahm,
<article-title>Towards self-configuring knowledge graph construction pipelines using LLMs - a case study with RML</article-title>
, in:
<source>Fifth International Workshop on Knowledge Graph Construction @ ESWC2024</source>
,
<year>2024</year>
.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
[11] J. C. Rangel, T. M. de Farias, A. C. Sima, N. Kobayashi,
<article-title>SPARQL generation: an analysis on fine-tuning OpenLLaMA for question answering over a life science knowledge graph</article-title>
,
<year>2024</year>
. arXiv:2402.04627.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
[12] S. Xu, S. Liu, T. Culhane, E. Pertseva, M.-H. Wu, S. Semnani, M. Lam,
<article-title>Fine-tuned LLMs know more, hallucinate less with few-shot sequence-to-sequence semantic parsing over Wikidata</article-title>
, in: H. Bouamor, J. Pino, K. Bali (Eds.),
<source>Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing</source>
, Association for Computational Linguistics, Singapore,
<year>2023</year>
, pp.
<fpage>5778</fpage>
-
<lpage>5791</lpage>
. URL: https://aclanthology.org/2023.emnlp-main.353. doi:10.18653/v1/2023.emnlp-main.353.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
[13] D. Banerjee, P. A. Nair, J. N. Kaur, R. Usbeck, C. Biemann,
<article-title>Modern baselines for SPARQL semantic parsing</article-title>
, in:
<source>Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '22</source>
, ACM,
<year>2022</year>
. doi:10.1145/3477495.3531841.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
[14] P. A. K. K. Diallo, S. Reyd, A. Zouaq,
<article-title>A comprehensive evaluation of neural SPARQL query generation from natural language questions</article-title>
,
<year>2024</year>
. arXiv:2304.07772.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
[15] Z. Li, S. Fan, Y. Gu, X. Li, Z. Duan, B. Dong, N. Liu, J. Wang,
<article-title>FlexKBQA: A flexible LLM-powered framework for few-shot knowledge base question answering</article-title>
,
<source>ArXiv abs/2308.12060</source>
(
<year>2023</year>
). URL: https://api.semanticscholar.org/CorpusID:261076103.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
[16] D. Bustamante, H. Takeda,
<article-title>SPARQL generation with entity pre-trained GPT for KG question answering</article-title>
,
<source>ArXiv abs/2402.00969</source>
(
<year>2024</year>
). URL: https://api.semanticscholar.org/CorpusID:267406567.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
[17] M. R. A. H. Rony, U. Kumar, R. Teucher, L. Kovriguina, J. Lehmann,
<article-title>SGPT: A generative approach for SPARQL query generation from natural language questions</article-title>
,
<source>IEEE Access</source>
<volume>10</volume>
(
<year>2022</year>
)
<fpage>70712</fpage>
-
<lpage>70723</lpage>
. doi:10.1109/ACCESS.2022.3188714.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
[18] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, P. J. Liu,
<article-title>Exploring the limits of transfer learning with a unified text-to-text transformer</article-title>
,
<source>Journal of Machine Learning Research</source>
<volume>21</volume>
(
<year>2020</year>
)
<fpage>1</fpage>
-
<lpage>67</lpage>
. URL: http://jmlr.org/papers/v21/20-074.html.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
[19] H. W. Chung, L. Hou, S. Longpre, B. Zoph, Y. Tay, W. Fedus, E. Li, X. Wang, M. Dehghani, S. Brahma, A. Webson, S. S. Gu, Z. Dai, M. Suzgun, X. Chen, A. Chowdhery, S. Narang, G. Mishra, A. Yu, V. Zhao, Y. Huang, A. Dai, H. Yu, S. Petrov, E. H. Chi, J. Dean, J. Devlin, A. Roberts, D. Zhou, Q. V. Le, J. Wei,
<article-title>Scaling instruction-finetuned language models</article-title>
,
<year>2022</year>
. doi:10.48550/ARXIV.2210.11416.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
[20] J. Wei, M. Bosma, V. Zhao, K. Guu, A. W. Yu, B. Lester, N. Du, A. M. Dai, Q. V. Le,
<article-title>Finetuned language models are zero-shot learners</article-title>
, in:
<source>International Conference on Learning Representations</source>
,
<year>2022</year>
. URL: https://openreview.net/forum?id=gEZrGCozdqR.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
[21] M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, L. Zettlemoyer,
<article-title>BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension</article-title>
,
<source>CoRR abs/1910.13461</source>
(
<year>2019</year>
). arXiv:1910.13461.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
[22] J. Devlin, M.-W. Chang, K. Lee, K. Toutanova,
<article-title>BERT: Pre-training of deep bidirectional transformers for language understanding</article-title>
,
<year>2019</year>
. arXiv:1810.04805.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
[23] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever,
<article-title>Language models are unsupervised multitask learners</article-title>
(
<year>2019</year>
).
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
[24] Y. Tang, C. Tran, X. Li, P.-J. Chen, N. Goyal, V. Chaudhary, J. Gu, A. Fan,
<article-title>Multilingual translation with extensible multilingual pretraining and finetuning</article-title>
(
<year>2020</year>
). arXiv:2008.00401.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
[25] P.-L. Huguet Cabot, S. Tedeschi, A.-C. Ngonga Ngomo, R. Navigli,
<article-title>REDFM: a filtered and multilingual relation extraction dataset</article-title>
, in:
<source>Proc. of the 61st Annual Meeting of the Association for Computational Linguistics: ACL 2023</source>
, Association for Computational Linguistics, Toronto, Canada,
<year>2023</year>
. URL: https://arxiv.org/abs/2306.09802.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
[26] A. Fan, S. Bhosale, H. Schwenk, Z. Ma, A. El-Kishky, S. Goyal, M. Baines, O. Celebi, G. Wenzek, V. Chaudhary, N. Goyal, T. Birch, V. Liptchinsky, S. Edunov, E. Grave, M. Auli, A. Joulin,
<article-title>Beyond English-centric multilingual machine translation</article-title>
,
<year>2020</year>
. arXiv:2010.11125.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <collab>NLLB Team</collab>
          ,
          <string-name>
            <given-names>M. R.</given-names>
            <surname>Costa-jussà</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Cross</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Çelebi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Elbayad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Heafield</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Heffernan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Kalbassi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Licht</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Maillard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Wenzek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Youngblood</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Akula</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Barrault</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. M.</given-names>
            <surname>Gonzalez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Hansanti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hoffman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Jarrett</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. R.</given-names>
            <surname>Sadagopan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Rowe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Spruit</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Tran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Andrews</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. F.</given-names>
            <surname>Ayan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bhosale</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Edunov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Fan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Goswami</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Guzmán</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Koehn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mourachko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Ropers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Saleem</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Schwenk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>No language left behind: Scaling human-centered machine translation</article-title>
          ,
          <year>2022</year>
          . arXiv:2207.04672.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>A.</given-names>
            <surname>Perevalov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Diefenbach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Usbeck</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Both</surname>
          </string-name>
          ,
          <article-title>QALD-9-plus: A multilingual dataset for question answering over DBpedia and Wikidata translated by native speakers</article-title>
          ,
          <source>in: 2022 IEEE 16th International Conference on Semantic Computing (ICSC)</source>
          ,
          <source>IEEE Computer Society</source>
          , Los Alamitos, CA, USA,
          <year>2022</year>
          , pp.
          <fpage>229</fpage>
          -
          <lpage>234</lpage>
          . doi:10.1109/ICSC52841.2022.00045.
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>D.</given-names>
            <surname>Diomedi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hogan</surname>
          </string-name>
          ,
          <article-title>Entity linking and filling for question answering over knowledge graphs</article-title>
          ,
          <source>in: Proceedings of NLIWoD2022</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>