<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Commonsense Knowledge and Controllable Techniques for an Effective and Efficient Approach to Text Generation</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Iván Martínez-Murillo</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Dept. of Software and Computing Systems, University of Alicante, Apdo. de Correos 99</institution>
          ,
          <addr-line>E-03080, Alicante</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Natural Language Generation</institution>
          ,
          <addr-line>Controllable techniques, Hallucination, Efficient architectures, Task-agnostic architectures</addr-line>
        </aff>
      </contrib-group>
      <abstract>
        <p>The Natural Language Generation (NLG) field has advanced at breakneck speed, favoured by the development of Large Language Models (LLMs). Nevertheless, these models also have drawbacks. On the one hand, they can introduce risks such as hallucination or bias, which can be exploited unethically to generate dis- and misinformation. On the other hand, the time and cost of training these models are extremely high. On account of this, the purpose of this paper is to propose a new research line for my PhD thesis. During the research, I will propose an efficient architecture that can generate quality text in a controllable way while integrating external commonsense knowledge. The objective is for this architecture to achieve performance similar to state-of-the-art models while being more efficient.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
      <p>CEUR
ceur-ws.org</p>
    </sec>
    <sec id="sec-2">
      <title>1. Justification of the research</title>
      <p>
        The rapid development of generative Artificial Intelligence (AI) has caused a surge of
interest in AI tools across society. These tools can have a positive impact in many areas, saving
the time and effort of solving some tasks [
        <xref ref-type="bibr" rid="ref1 ref2 ref3">1, 2, 3</xref>
        ].
      </p>
      <p>
        In particular, state-of-the-art Natural Language Generation (NLG) tools can produce text
that, in some cases, can be indistinguishable from human-generated texts. This could have
lots of benefits in some sectors such as academia, tourism or marketing [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Nonetheless,
these tools also have some drawbacks. First, text generated by these tools may contain
hallucinations, the phenomenon that occurs when a text is nonsensical or unfaithful to
the provided source [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Secondly, AI-generated text can be biased, i.e. it can contain
misrepresentations or attribution errors that result in favouring certain groups or ideas [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
Finally, these tools also lack logical reasoning, which is essential to human intelligence
[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. In the wake of these limitations, these tools can be misused in unethical ways to
potentially generate dis- and misinformation.
      </p>
      <p>Moreover, the core of these tools is Large Language Models (LLMs). The time and cost
needed to train these models are extremely high, putting them only within the reach of large
companies.</p>
      <p>Therefore, the motivation for the present research arises from the need in academia to
find efficient architectures that can produce text in a controlled manner, achieving performance
similar to state-of-the-art models while solving the hallucination issue.</p>
      <p>The remainder of this article is organised as follows: Section 2 presents an overview of
the relevant literature concerning NLG; Section 3 states the main hypotheses and objectives
planned for this research; finally, Section 4 and Section 5 detail the methodology this PhD will
follow and some relevant research topics for discussion.</p>
    </sec>
    <sec id="sec-3">
      <title>2. Background and Related Work</title>
      <p>Before introducing my proposal, this section contextualises this study within the state of
the art of NLG.</p>
      <p>
        NLG is the subfield in the Natural Language Processing (NLP) area that aims to produce
meaningful sentences to meet a communicative goal [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Depending on several aspects of the
generation, NLG can be classified according to two criteria:
• Type of input: NLG can be catalogued as (1) text-to-text
generation (T2T) and (2) data-to-text generation (D2T) [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. In D2T, the input can take
different forms such as binary data, images, voice, databases, ontologies, etc. Recently,
another concept of NLG has emerged, (3) none-to-text generation (N2T) [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], which
corresponds to generation in which no input is received.
• Task typology: Based on the communicative goal, NLG can be grouped into (1) text
abbreviation; (2) text expansion; and (3) text rewriting and reasoning. Text abbreviation
tasks consist in detecting the most important information in a text and condensing that
information into a short text, e.g. text summarisation. Text expansion tasks aim to
generate complete sentences from some meaningful words, e.g. topic-to-essay. Finally,
text rewriting and reasoning tasks try to rewrite a text in another style or apply reasoning
methods, e.g. text simplification.
      </p>
      <p>
        To achieve the communicative goal of these tasks, the NLG area has been studied for a long
time. The first research dates back to the end of the 1970s [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Notwithstanding, it has not been until recent
years that the NLG field has achieved exponential improvement, producing text in a way very
similar to humans. But how did we get here?
      </p>
      <p>
        In a first stage, the NLG task was seen as a sequential scheme of four different stages
(preprocessing, macroplanning, microplanning and realisation). Modular architectures followed
this scheme, making a clear distinction between the distinct sub-tasks of each stage. The most
famous modular architecture was proposed by Reiter [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Figure 1 shows the sub-task division
in this architecture.
      </p>
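      <p>As a toy illustration of the modular scheme just described, the sketch below wires content determination (macroplanning), microplanning and surface realisation as separate stages through which data flows sequentially. All function and field names are hypothetical and chosen only for the example; Reiter's actual architecture is far richer.</p>
```python
# Minimal sketch of a modular NLG pipeline: each stage is a separate
# function, and data flows through them in sequence. Illustrative only.

def macroplan(facts):
    # Content determination: select which facts to convey.
    return [f for f in facts if f.get("relevant", True)]

def microplan(plan):
    # Lexicalisation and aggregation: map each fact to a clause spec.
    return [{"subject": f["entity"], "verb": f["verb"], "object": f["value"]}
            for f in plan]

def realise(specs):
    # Surface realisation: turn clause specs into sentences.
    return " ".join(
        "{} {} {}.".format(s["subject"].capitalize(), s["verb"], s["object"])
        for s in specs
    )

def generate(facts):
    return realise(microplan(macroplan(facts)))

facts = [
    {"entity": "the wind", "verb": "reaches", "value": "40 km/h"},
    {"entity": "the sky", "verb": "stays", "value": "clear", "relevant": False},
]
print(generate(facts))  # The wind reaches 40 km/h.
```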
      <p>
        Other works within this architecture can be found in [
        <xref ref-type="bibr" rid="ref13 ref14 ref15">13, 14, 15, 16</xref>
        ].
      </p>
      <p>Later, that clear distinction between the sub-tasks became more flexible, originating
what is known as planning perspectives. This scheme was similar to the one employed in modular
architectures, but it allows two or more different sub-tasks to be combined and implemented as one
sub-task, e.g. combining the text structuring and sentence aggregation sub-tasks. Some examples
of this approach are presented in [17, 18, 19, 20, 21, 22, 23, 24].</p>
      <p>Finally, the sub-task division started to disappear, originating global approaches. This
type of architecture does not make a distinction among sub-tasks, performing the task as a
whole and relying on statistical learning and neural networks. Some proposed architectures
within global approaches are: Graph Neural Networks [25], Generative Adversarial Nets [26],
Recurrent Neural Networks [27], Pre-trained Models [28], Memory Networks [29], Transformers
[30] and Copy and Pointing Mechanisms [31]. This group of approaches has driven the major
developments in the NLG area. The most important proposal in this group was the Transformer
architecture and its concept of attention. Models based on this architecture achieve high
performance at NLG tasks. The best-performing models based on Transformers are LLMs
such as GPT-4 [32] or LLaMA [33], which have neural networks with billions of parameters.
Nowadays, most of the research in industry is focused on developing bigger LLMs, as it is
thought that a bigger LLM will achieve better performance. The cost and time of training
these models are unaffordable for academia. On account of that issue, there is a need in
academia to find more efficient architectures that can perform similarly to LLMs.</p>
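      <p>Since attention is named above as the key concept of the Transformer, a plain-Python sketch of single-head scaled dot-product attention, softmax(QK^T / sqrt(d_k))V, may help to fix ideas. It is a toy illustration of the mechanism from [30], not an implementation of any particular model.</p>
```python
# Toy single-head scaled dot-product attention, shown with plain lists
# instead of tensors purely to make the mechanism visible.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    d_k = len(keys[0])
    out = []
    for q in queries:
        # Similarity of this query with every key, scaled by sqrt(d_k).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in keys]
        weights = softmax(scores)
        # Output is the attention-weighted average of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
print(attention(Q, K, V))  # the query attends mostly to the first key/value
```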
      <p>Consequently, my line of work will focus on exploring efficient architectures that can
generate text with results similar to state-of-the-art models. Moreover, controllable
generation methods, techniques to integrate external commonsense knowledge and task-agnostic
architectures will be studied in order to reduce the phenomenon known as hallucination.</p>
    </sec>
    <sec id="sec-4">
      <title>3. Main Hypothesis and Objectives</title>
      <p>This PhD thesis is based on the hypothesis that integrating external commonsense knowledge
along with controllable text generation techniques in an efficient architecture will help to reduce
the hallucination issue while performing similarly to state-of-the-art models. Thus, the
main objective of this research is to propose an efficient architecture that can achieve
good performance in different NLG tasks, e.g. text summarisation and text simplification,
and can reduce hallucination as much as possible. In order to complete this main objective,
several sub-objectives have been proposed:
• A1. To explore optimal controllable text generation techniques.
• A2. To examine hallucination mitigation techniques.
• A3. To study how to integrate external commonsense knowledge.
• A4. To analyse and test different task-agnostic architectures incorporating the previously
studied techniques.
• B1. To compare the performance of open-source state-of-the-art architectures using a
common benchmark.
• B2. To propose a cost-effective architecture that can generate text in a controllable way
and to evaluate it.
• C1. To adapt the proposed architecture to perform some NLG tasks, e.g. summarisation
or text simplification.</p>
      <p>The planned schedule of these sub-objectives can be seen in Figure 2, starting from February
2023. Group A corresponds to the study and testing of state-of-the-art techniques. After this
initial study, during Group B, an efficient architecture will be proposed, tested and compared
with other open-source architectures using a common benchmark. Finally, in Group C the
proposed architecture will be adapted to perform different NLG tasks.</p>
    </sec>
    <sec id="sec-5">
      <title>4. Methodology and proposed experiment</title>
      <p>The proposed methodology for carrying out this research is based on complete and comprehensive
training in all areas of NLG, including general training in NLP. After acquiring the basic notions
of NLG, the research focuses on an exhaustive analysis of the state of the art of NLG, especially
on deep learning techniques that allow controlled language generation and integrate
commonsense knowledge. Subsequently, experimentation starts, testing different open-source
architectures along with the most relevant studied techniques. After having tested several
architectures, an efficient base model will be proposed, integrating commonsense knowledge and
controllable generation techniques into it. Then, it will be evaluated against other architectures
using a common benchmark. Finally, the proposed architecture will be adapted to perform
different tasks.</p>
      <p>At present, I am experimenting with the CommonGen dataset [34]. CommonGen
consists of sets of common concepts and some reference sentences using those concepts, and
its main idea is to test machines for the ability of generative commonsense reasoning. I am
testing different types of approaches over this dataset, such as SimpleNLG, Factorised
Language Models, and Neural Models. With the proposed experiment, the main idea is to combine
the best-obtained architecture with controllable generation techniques in order to obtain a base
model.</p>
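      <p>The input/output shape of a CommonGen-style experiment can be sketched as follows. The template baseline and the coverage metric below are deliberately naive and hypothetical; a real system would rely on SimpleNLG or a neural model, and CommonGen itself is evaluated with richer metrics.</p>
```python
# Toy baseline for a CommonGen-style task: given a set of concepts,
# produce a sentence covering all of them, then check concept coverage.

def generate_from_concepts(concepts):
    # Naive template: string the concepts together into one sentence.
    body = ", ".join(concepts[:-1])
    return "Someone uses {} and {}.".format(body, concepts[-1])

def concept_coverage(sentence, concepts):
    # Fraction of required concepts that actually appear in the output.
    text = sentence.lower()
    hits = sum(1 for c in concepts if c.lower() in text)
    return hits / len(concepts)

concepts = ["dog", "ball", "field"]
sentence = generate_from_concepts(concepts)
print(sentence)                              # Someone uses dog, ball and field.
print(concept_coverage(sentence, concepts))  # 1.0
```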
    </sec>
    <sec id="sec-6">
      <title>5. Research issues to discuss</title>
      <p>In order to advance towards an effective and efficient approach for controllable text generation,
several research issues are suggested and briefly discussed.</p>
      <p>What does controllable text generation mean, and what are the most efficient methods
to incorporate it? Controllable text generation is the task of producing text in a way that its
attributes can be controlled [35]. These attributes can take a wide variety of forms: stylistic
attributes, specific information to include in the content, the demographic attributes
of the interlocutor, etc. As seen in [36], there are three ways to approach controllable text
generation.</p>
      <p>1. Via hyperparameters: Training data in LLMs can be unbalanced, since it is
difficult to balance such a huge amount of data. Modifying hyperparameters may help generalise the
knowledge better and consequently improve the obtained results.</p>
      <p>2. Via additional input: Fine-tuning a pre-trained model with more information than just
the text could enhance its performance.</p>
      <p>3. Via conditional training: Using internal control variables could enrich the generation
with specific capabilities.</p>
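      <p>The "additional input" and "conditional training" routes both amount to conditioning generation on an explicit control signal. The sketch below illustrates the idea with a hard-coded dispatcher; the control-token format and templates are hypothetical, whereas a trained model (e.g. CTRL-style control codes) would learn the mapping from the conditioned input.</p>
```python
# Illustrative sketch of controllable generation via a control token
# prepended to the input. The toy "model" just dispatches on the token.

TEMPLATES = {
    "formal":   "Dear user, the system has completed the task: {}.",
    "informal": "Hey, all done with: {}!",
}

def generate(control, content):
    # A conditioned input a trained model would actually consume.
    prompt = "[{}] {}".format(control.upper(), content)
    # The toy stands in for the model: branch on the requested attribute.
    return TEMPLATES[control].format(content)

print(generate("formal", "report summarisation"))
print(generate("informal", "report summarisation"))
```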
      <p>
        What is hallucination and what are the ways to reduce its occurrence? Hallucination
in NLG occurs when a text generated by an AI lacks coherence or deviates from the intended
sense of the source input [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. It can be classified into two categories: intrinsic hallucinations,
which appear when the generated text contradicts the source input, and extrinsic hallucinations,
which arise when the source input cannot substantiate the generated text.
      </p>
      <p>There exist different types of approaches to minimise the occurrence of hallucinations.
Firstly, constructing a reliable dataset that does not contain any type of contradiction in the
data. Secondly, modifying the encoder/decoder architecture can enhance the ability to better
understand and represent the knowledge. Thirdly, proposing an optimal training strategy, such
as controllable text generation, could benefit the model. Finally, one important approach is to
integrate external commonsense knowledge into the models.</p>
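      <p>As a concrete, if crude, example of source grounding, an extrinsic hallucination can be flagged when too many content words of the output are unsupported by the source. The heuristic below is entirely illustrative (word lists and threshold are made up); real detectors rely on entailment models rather than lexical overlap.</p>
```python
# Rough lexical heuristic: flag generated text whose content words are
# not supported by the source as potentially hallucinated. Toy example.

STOPWORDS = {"the", "a", "an", "is", "are", "was", "in", "on", "of", "and"}

def content_words(text):
    return {w.strip(".,").lower() for w in text.split()} - STOPWORDS

def unsupported_ratio(source, generated):
    gen = content_words(generated)
    if not gen:
        return 0.0
    unsupported = gen - content_words(source)
    return len(unsupported) / len(gen)

source = "The meeting was held in Alicante on Monday."
faithful = "The meeting was in Alicante."
hallucinated = "The meeting was in Madrid on Friday."
print(unsupported_ratio(source, faithful))      # 0.0
print(unsupported_ratio(source, hallucinated))  # two of three words unsupported
```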
      <p>How can external commonsense knowledge be integrated? Commonsense knowledge is an
important factor in human communication, as it facilitates inference without the explicit mention
of context [37]. Although current state-of-the-art models exhibit some commonsense abilities, they
are not complete yet. Traditionally, commonsense has been injected into NLG systems in the form
of rules and ontologies. Nowadays, approaches have focused on injecting commonsense
into neural NLG models through pre-trained models and commonsense graphs. But there
is still much work to do in this field in order to reach complete commonsense knowledge.</p>
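      <p>A minimal picture of the commonsense-graph route is a triple store queried at generation time. The triples and function names below are invented for the example; real systems draw on graphs such as ConceptNet and inject the retrieved facts as extra conditioning input to the model.</p>
```python
# Tiny ConceptNet-style triple store to illustrate how commonsense
# facts can be retrieved and attached to the generator's input.

TRIPLES = [
    ("dog", "CapableOf", "bark"),
    ("dog", "AtLocation", "park"),
    ("ball", "UsedFor", "playing"),
]

def related(concept):
    # All (relation, object) pairs known for a concept.
    return [(rel, obj) for subj, rel, obj in TRIPLES if subj == concept]

def enrich(concepts):
    # Attach retrieved commonsense facts to each input concept,
    # e.g. as extra conditioning input for a generator.
    return {c: related(c) for c in concepts}

print(enrich(["dog", "ball"]))
```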
      <p>Can a smaller architecture obtain performance similar to LLMs? There are some
structures, such as Plug and Play models or Variational Autoencoders, that are more efficient
than LLMs. Integrating commonsense knowledge and controllable generation techniques into
these models could help them perform like LLMs while being smaller and more efficient.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgements</title>
      <p>This research work is part of the R&amp;D project “CORTEX: Conscious Text Generation”
(PID2021123956OB-I00), funded by MCIN/AEI/10.13039/501100011033/ and by “ERDF A way of making
Europe”.</p>
      <p>[16] S. Nirenburg, V. R. Lesser, E. Nyberg, Controlling a language generation planner, in:
IJCAI, 1989, pp. 1524–1530.
[17] R. E. Fikes, N. J. Nilsson, STRIPS: A new approach to the application of theorem proving to
problem solving, Artificial Intelligence 2 (1971) 189–208.
[18] D. Appelt, Planning English Sentences, Cambridge University Press, 1985.
[19] E. H. Hovy, Approaches to the planning of coherent text, Springer, 1991.
[20] J. A. Bateman, Enabling technology for multilingual natural language generation: the
KPML development environment, Natural Language Engineering 3 (1997) 15–55.
[21] A. Koller, M. Stone, Sentence generation as a planning problem, in: Proceedings of
the 45th Annual Meeting of the Association of Computational Linguistics, Association
for Computational Linguistics, Prague, Czech Republic, 2007, pp. 336–343. URL: https://aclanthology.org/P07-1043.
[22] V. Rieser, O. Lemon, Natural language generation as planning under uncertainty for spoken
dialogue systems, Empirical Methods in Natural Language Generation: Data-oriented
Methods and Empirical Evaluation (2009) 105–120.
[23] C. Nakatsu, M. White, Generating with discourse combinatory categorial grammar,
Linguistic Issues in Language Technology 4 (2010).
[24] O. Lemon, Learning what to say and how to say it: Joint optimisation of spoken dialogue
management and natural language generation, Computer Speech &amp; Language 25 (2011)
210–221.
[25] F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, G. Monfardini, The graph neural
network model, IEEE Transactions on Neural Networks 20 (2008) 61–80.
[26] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair,
A. Courville, Y. Bengio, Generative adversarial nets, Advances in Neural Information
Processing Systems 27 (2014) 2672–2680.
[27] I. Sutskever, O. Vinyals, Q. V. Le, Sequence to sequence learning with neural networks,
Advances in Neural Information Processing Systems 27 (2014).
[28] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, J. Dean, Distributed representations of
words and phrases and their compositionality, Advances in Neural Information Processing
Systems 26 (2013).
[29] S. Sukhbaatar, J. Weston, R. Fergus, et al., End-to-end memory networks, Advances in
Neural Information Processing Systems 28 (2015).
[30] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser,
I. Polosukhin, Attention is all you need, 2017. arXiv:1706.03762.
[31] A. See, P. J. Liu, C. D. Manning, Get to the point: Summarization with pointer-generator
networks, arXiv preprint arXiv:1704.04368 (2017).
[32] OpenAI, GPT-4 technical report, 2023. arXiv:2303.08774.
[33] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière,
N. Goyal, E. Hambro, F. Azhar, A. Rodriguez, A. Joulin, E. Grave, G. Lample, LLaMA: Open
and efficient foundation language models, 2023. arXiv:2302.13971.
[34] B. Y. Lin, W. Zhou, M. Shen, P. Zhou, C. Bhagavatula, Y. Choi, X. Ren, CommonGen:
A constrained text generation challenge for generative commonsense reasoning, in:
Findings of the Association for Computational Linguistics: EMNLP 2020, Association for
Computational Linguistics, Online, 2020, pp. 1823–1840. URL: https://www.aclweb.org/anthology/2020.findings-emnlp.165.
[35] S. Prabhumoye, A. W. Black, R. Salakhutdinov, Exploring controllable text generation
techniques, in: Proceedings of the 28th International Conference on Computational
Linguistics, International Committee on Computational Linguistics, Barcelona, Spain
(Online), 2020, pp. 1–14. URL: https://aclanthology.org/2020.coling-main.1. doi:10.18653/v1/2020.coling-main.1.
[36] E. Erdem, M. Kuyu, S. Yagcioglu, A. Frank, L. Parcalabescu, B. Plank, A. Babii, O. Turuta,
A. Erdem, I. Calixto, et al., Neural natural language generation: A survey on multilinguality,
multimodality, controllability and learning, Journal of Artificial Intelligence Research 73
(2022) 1131–1207.
[37] S. Mahamood, M. Clinciu, D. Gkatzia, It’s common sense, isn’t it? Demystifying human
evaluations in commonsense-enhanced NLG systems (2021).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>W. P.</given-names>
            <surname>Walters</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Murcko</surname>
          </string-name>
          ,
          <article-title>Assessing the impact of generative ai on medicinal chemistry</article-title>
          ,
          <source>Nature biotechnology 38</source>
          (
          <year>2020</year>
          )
          <fpage>143</fpage>
          -
          <lpage>145</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S.</given-names>
            <surname>Mayahi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Vidrih</surname>
          </string-name>
          ,
          <article-title>The impact of generative ai on the future of visual content marketing</article-title>
          ,
          <source>arXiv preprint arXiv:2211.12660</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>G.</given-names>
            <surname>Cooper</surname>
          </string-name>
          ,
          <article-title>Examining science education in chatgpt: An exploratory study of generative artificial intelligence</article-title>
          ,
          <source>Journal of Science Education and Technology</source>
          (
          <year>2023</year>
          )
          <fpage>1</fpage>
          -
          <lpage>9</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Y. K.</given-names>
            <surname>Dwivedi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Kshetri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Hughes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. L.</given-names>
            <surname>Slade</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Jeyaraj</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Kar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Baabdullah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Koohang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Raghavan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ahuja</surname>
          </string-name>
          , et al.,
          <article-title>“so what if chatgpt wrote it?” multidisciplinary perspectives on opportunities, challenges and implications of generative conversational ai for research, practice and policy</article-title>
          ,
          <source>International Journal of Information Management</source>
          <volume>71</volume>
          (
          <year>2023</year>
          )
          <fpage>102642</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Ji</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Frieske</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Su</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Ishii</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y. J.</given-names>
            <surname>Bang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Madotto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Fung</surname>
          </string-name>
          ,
          <article-title>Survey of hallucination in natural language generation</article-title>
          ,
          <source>ACM Comput. Surv</source>
          .
          <volume>55</volume>
          (
          <year>2023</year>
          ). URL: https://doi.org/10.1145/3571730. doi:10.1145/3571730.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>E.</given-names>
            <surname>Ferrara</surname>
          </string-name>
          ,
          <article-title>Should chatgpt be biased? challenges and risks of bias in large language models</article-title>
          ,
          <source>arXiv preprint arXiv:2304.03738</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>H.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ning</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Teng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <article-title>Evaluating the logical reasoning ability of chatgpt and gpt-4</article-title>
          , arXiv preprint arXiv:2304.03439 (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>E.</given-names>
            <surname>Reiter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Dale</surname>
          </string-name>
          ,
          <article-title>Building applied natural language generation systems</article-title>
          ,
          <source>Natural Language Engineering</source>
          <volume>3</volume>
          (
          <year>1997</year>
          )
          <fpage>57</fpage>
          -
          <lpage>87</lpage>
          . doi:10.1017/S1351324997001502.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.</given-names>
            <surname>Vicente</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Barros</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. S.</given-names>
            <surname>Peregrino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Agulló</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Lloret</surname>
          </string-name>
          ,
          <article-title>La generación de lenguaje natural: análisis del estado actual</article-title>
          ,
          <source>Computación y Sistemas</source>
          <volume>19</volume>
          (
          <year>2015</year>
          )
          <fpage>721</fpage>
          -
          <lpage>756</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>K. R.</given-names>
            <surname>Chandu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. W.</given-names>
            <surname>Black</surname>
          </string-name>
          ,
          <article-title>Positioning yourself in the maze of neural text generation: A task-agnostic survey</article-title>
          ,
          <year>2020</year>
          . URL: https://arxiv.org/abs/2010.07279. doi:10.48550/ARXIV.2010.07279.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>D. D.</given-names>
            <surname>McDonald</surname>
          </string-name>
          ,
          <article-title>Natural language generation</article-title>
          .,
          <source>Handbook of natural language processing 2</source>
          (
          <year>2010</year>
          )
          <fpage>121</fpage>
          -
          <lpage>144</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>E.</given-names>
            <surname>Reiter</surname>
          </string-name>
          ,
          <article-title>Has a consensus nl generation architecture appeared, and is it psycholinguistically plausible?</article-title>
          ,
          <year>1994</year>
          . arXiv:cmp-lg/9411032.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>W. C.</given-names>
            <surname>Mann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Moore</surname>
          </string-name>
          ,
          <article-title>Computer generation of multiparagraph english text</article-title>
          ,
          <source>American Journal of Computational Linguistics</source>
          <volume>7</volume>
          (
          <year>1981</year>
          )
          <fpage>17</fpage>
          -
          <lpage>29</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>E.</given-names>
            <surname>Hovy</surname>
          </string-name>
          ,
          <article-title>Generating natural language under pragmatic constraints</article-title>
          ,
          <source>Journal of Pragmatics</source>
          <volume>11</volume>
          (
          <year>1987</year>
          )
          <fpage>689</fpage>
          -
          <lpage>719</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>W.</given-names>
            <surname>Levelt</surname>
          </string-name>
          , Speaking: From Intention to Articulation, MIT Press, Cambridge, MA (
          <year>1989</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>