<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Formal Dialogue and Large Language Models</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mark Snaith</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Simon Wells</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Edinburgh Napier University</institution>
          ,
          <addr-line>10 Colinton Road, Edinburgh, EH10 5DT, Scotland</addr-line>
          ,
          <country country="UK">UK</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>School of Computing, Engineering and Technology, Robert Gordon University</institution>
          ,
          <addr-line>Aberdeen, AB10 7GJ, Scotland</addr-line>
          ,
          <country country="UK">UK</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this paper, we present preliminary work into combining formal models of dialogue and large language models, before going on to discuss how this provides a foundation for similar approaches involving computational models of argument. First, we address the twin issues of how a formal dialogue game can usefully regulate dialogical utterances generated by an LLM during an extended, goal-oriented conversation, and conversely, how LLMs can close the human-level language generation gap associated with formal dialogue games. We then proceed to identify how our solution to these issues can underpin future work towards using computational argumentation to provide reasoning-like capabilities to LLMs, and using LLMs for tasks such as searching and summarisation of analysed argument data.</p>
      </abstract>
      <kwd-group>
        <kwd>Formal dialogue</kwd>
        <kwd>argumentation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
    </sec>
    <sec id="sec-2">
      <title>2. Background</title>
      <p>
        Research into dialogue games originated in a variety of goal-oriented studies: whether to explain proof
procedures [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], to model the Aristotelian conception of Dialectic [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], to understand how people interact,
for example during deliberation [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], or to manage the interactions between intelligent agents in
multiagent systems [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. At various times, researchers have attempted to organise these
various approaches. For example, McBurney and Parsons [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] proposed a set of desiderata associated
with dialogue games in agent communication; similarly, Wells [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] put forward a number of criteria for
specifying dialectical games, which were subsequently developed into the Dialogue Game Description
Language [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. These approaches were all broadly in the context of intelligent agent interaction via
dialogue games, and mainly sought to give a structure within which benchmarks, standards, expectations,
and points of comparison could be situated, whilst also seeking to depict the space of possible dialogue
games so that a principled exploration could be made.
      </p>
      <p>
        Various tools and platforms have been developed to support computational implementations of
dialogue games, and their subsequent execution for use in inter-agent communication. The Dialogue
Game Description Language (DGDL) is a domain-specific language for describing dialogue games [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ],
while the Dialogue Game Execution Platform (DGEP) provides an environment for interpreting and
running games specified in DGDL. Together, DGDL and DGEP have been shown to support systems
for public deliberation [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] and health coaching [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], as well as underpinning generalised platforms for
structured conversational systems [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ] and influencing related work in this area [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
      <p>
        By contrast, Large Language Models (LLMs) provide a less structured approach to conversational
interaction. Given a prompt, they will predict the best response to make. Prompt engineering is the
process of designing and structuring prompts towards more effective results. This leverages an LLM's
ability for in-context learning [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], where, in addition to its underlying model, the LLM temporarily
learns either from the prompt itself, or via information provided specifically as context from which it
should derive its response. One approach to the latter is Retrieval Augmented Generation (RAG) [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ].
A RAG-based system first retrieves a set of documents (using semantic search), then feeds those
documents into the LLM as context for producing the final output.
      </p>
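      <p>The retrieve-then-generate pattern can be sketched as follows. This is a minimal illustration, not a description of any particular RAG system: the word-overlap scoring is a toy stand-in for semantic search, and all function names are our own.</p>

```python
# Illustrative sketch of the RAG pattern: retrieve the best-matching
# documents, then supply them as context in the prompt to the LLM.
# Word overlap stands in for semantic (vector) search here.

def score(query: str, doc: str) -> int:
    """Count query words that also appear in the document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents that best match the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Feed the retrieved documents to the LLM as context for the answer."""
    ctx = "\n".join(f"- {d}" for d in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {query}"

docs = [
    "Annual leave policy: employees receive 28 days of paid leave.",
    "Expenses policy: claims must be submitted within 30 days.",
    "Remote work policy: staff may work remotely two days per week.",
]
query = "What is the annual leave policy?"
prompt = build_prompt(query, retrieve(query, docs, k=1))
```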
      <p>The differences between dialogue games and LLMs are therefore quite clear: the former provide
a structured account of how a multi-party, goal-oriented conversation should proceed, without any
consideration as to the precise content of each move. LLMs, on the other hand, are adept at generating
natural language text in response to arbitrary prompts, without any consideration as to “legal” dialogical
(conversational) flow. These differences, however, present opportunities for each to enhance and support
the capabilities of the other. Combining the rigid dialectical structures provided by dialogue games
with the rich generative capabilities of LLMs will simultaneously address the human-level language gap
associated with the former, and the lack of useful regulation of utterances associated with the latter.</p>
    </sec>
    <sec id="sec-3">
      <title>3. The PrEFACE Library</title>
      <p>The PrEFACE (Prompt Engineering for Argumentative Conversational Exchanges) library is a tool for
closing the loop between formal dialogue games and Large Language Models. Figure 1 illustrates
PrEFACE schematically from an Input/Output perspective. PrEFACE is designed to sit between an LLM
and an agent such that the agent decides what kind of utterances to make, based upon the current
dialogical context, and makes a request to PrEFACE, supplying the requisite dialogue context request
structure. PrEFACE uses this structure to generate a prompt which is then provided to the LLM. The
LLM generates a response, which is returned to PrEFACE. The response is then returned to the agent
alongside additional metadata. The following sub-sections describe each stage of the process in more
detail.</p>
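      <p>The request flow described above can be sketched in outline as follows. The function and field names are hypothetical illustrations for this sketch, not the PrEFACE API.</p>

```python
# Minimal sketch of the PrEFACE request flow: the agent supplies a
# dialogical context request, PrEFACE builds a prompt, the LLM responds,
# and the response plus metadata is returned to the agent.

def generate_prompt(dcrs: dict) -> str:
    """Stage 2: turn the dialogical context request into an LLM prompt."""
    return f"Complete the move '{dcrs['response']['move_name']}' on: {dcrs['topic']}"

def call_llm(prompt: str) -> str:
    """Stage 3: stand-in for the LLM API call."""
    return "it affects everyone"  # canned response for the sketch

def preface_request(dcrs: dict) -> dict:
    """Stages 1-5: agent supplies a DCRS, gets back a response and metadata."""
    prompt = generate_prompt(dcrs)   # stage 2: prompt to LLM
    text = call_llm(prompt)          # stage 3: LLM textual output
    return {"utterance": text, "metadata": {"prompt": prompt}}  # stages 4-5

dcrs = {"topic": "climate change", "response": {"move_name": "agreeWithReason"}}
reply = preface_request(dcrs)
```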
      <sec id="sec-3-1">
        <title>3.1. The Dialogical Context Request Structure (DCRS)</title>
        <p>The Dialogical Context Request Structure (DCRS) is the primary means by which PrEFACE is directed
to generate prompts. A completed DCRS is a JSON document that describes the current dialogical
context along with the utterance that the LLM is requested to generate. The DCRS is
shown in Figure 2 and corresponds to stage 1 of Figure 1.</p>
        <sec id="sec-3-1-1">
          <title>PrEFACE</title>
        </sec>
        <sec id="sec-3-1-2">
          <title>Library</title>
        </sec>
        <sec id="sec-3-1-3">
          <title>Stage 2</title>
        </sec>
        <sec id="sec-3-1-4">
          <title>Prompt to LLM</title>
        </sec>
        <sec id="sec-3-1-5">
          <title>Stage 3</title>
          <p>LLM API</p>
        </sec>
        <sec id="sec-3-1-6">
          <title>LLM Textual</title>
        </sec>
        <sec id="sec-3-1-7">
          <title>Output</title>
        </sec>
        <sec id="sec-3-1-8">
          <title>Stage 4</title>
        </sec>
        <sec id="sec-3-1-9">
          <title>PrEFACE</title>
        </sec>
        <sec id="sec-3-1-10">
          <title>Library</title>
        </sec>
        <sec id="sec-3-1-11">
          <title>List of</title>
        </sec>
        <sec id="sec-3-1-12">
          <title>Utterances + Meta Data</title>
        </sec>
        <sec id="sec-3-1-13">
          <title>Stage 5</title>
          <p>
            PrEFACE builds on the Dialogue Game Description Language (DGDL) [
            <xref ref-type="bibr" rid="ref8">8</xref>
            ] as a pragmatic initial tool
for consistently describing the moves within a dialogue. Because DGDL has a finite range of components
that are used to describe each locution, there are a finite number of prompt templates that need to be
constructed within PrEFACE.
          </p>
          <p>The DCRS is an object comprising five blocks: metadata, history, topic, response, and
knowledge. A completed DCRS object, in JSON format, is supplied to PrEFACE to initiate the prompt
generation process.</p>
          <preformat>
{
  "metadata": {
    "name": "dcrs",
    "version": "1.0"
  },
  "history": {
    "utterances": [
      {
        "index": "0",
        "speaker": "Alice",
        "move_name": "assert",
        "content": "We should take climate change seriously"
      }
    ]
  },
  "topic": {
    "description": "Climate change is something to be taken seriously",
    "stance": "PRO"
  },
  "response": {
    "move_name": "agreeWithReason",
    "scaffold": "We should take climate change seriously because $p",
    "opener": "I agree because $p",
    "requirements": [],
    "effects": []
  },
  "knowledge": {
    "type": "",
    "content": []
  }
}
          </preformat>
          <p>The first block, metadata, is necessary to positively identify an instance of the DCRS so that it can
be verified against the correct version of the DCRS JSON schema. This block is mandatory but, after
schema verification, plays no further part in prompt generation.</p>
          <p>The second block, history, provides a list of utterance objects that correspond to prior interactions
during the dialogue that are relevant to the utterance to be generated. Each utterance object comprises
an index, speaker, move_name, and content. The index key is negatively indexed from the current
utterance (the one being generated): e.g. the immediately previous utterance is -1, the one before that
is -2, etc. Under most circumstances, an utterance will be generated to respond to the last utterance of
another participant in the dialogue. In a small number of cases, two or more previous utterances might
be required in order to provide additional context for generation of the target utterance, for example, to
give the context of a micro-exchange within the dialogue, e.g. a Question-Answer-Challenge complex.
A third scenario arises when backtracking occurs within the dialogue, returning to address an utterance
that had previously been made within the dialogue and was perhaps insufficiently resolved at that
earlier time-point. Speaker refers to the dialogue participant that uttered this content; "self" is used to
indicate that an utterance was communicated by the current speaker, the one generating the PrEFACE
utterance. Speaker identification is required to differentiate between the current speaker, the participant
to whom they are responding directly (e.g. in answer to a question directed from another participant
towards the current speaker), and any other participants in the dialogue who might have
previously participated in the current micro-dialogue. For example, consider speakers A, B, and C, where speaker
A poses a question, speaker B (self) answers it, and speaker C then challenges speaker B's answer,
leading to self making a defence oriented towards speaker C but directly related to the originating
question from speaker A. The move_name key refers to the DGDL move_name label, used to distinguish
between available moves. Finally, content refers to the propositional content of the move, expressed as
a string.</p>
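          <p>Under the negative-indexing convention described above, a history block could be assembled as in the following sketch; the helper name is illustrative and not part of PrEFACE.</p>

```python
# Sketch of assembling a DCRS history block: prior utterances are
# indexed backwards from the utterance being generated, so the most
# recent one is -1, the one before it -2, and so on. "self" marks the
# generating agent's own earlier utterances.

def as_history(utterances: list[dict]) -> dict:
    """Index prior utterances backwards from the utterance being generated."""
    n = len(utterances)
    return {
        "utterances": [
            {**u, "index": str(i - n)} for i, u in enumerate(utterances)
        ]
    }

# A Question-Answer-Challenge micro-exchange, as in the text.
history = as_history([
    {"speaker": "A", "move_name": "question", "content": "Why act now?"},
    {"speaker": "self", "move_name": "answer", "content": "Delay raises costs."},
    {"speaker": "C", "move_name": "challenge", "content": "What evidence?"},
])
```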
          <p>
            The response block is used to specify the kind of utterance the agent requires the LLM to generate.
For any given move within a DGDL game description there may be a range of either prescribed or
permitted responses which together constitute the set of legal moves. PrEFACE is designed to handle
one move at a time, so an agent must process the legal moves list one element at a time, passing in the
specific move type that must be generated. In this way, PrEFACE is focused upon engineering a single
prompt per dialogue context, and resources are not wasted on legal moves that could be generated but
fall outside of the agent's strategy. Responses comprise a move_name, scaffold,
opener, requirements, and effects. The move_name is a string describing the required move, which
would usually constitute a DGDL move_name or a speech act label. The scaffold is a string template
that defines the form of the required move, for example, “It is not the case that X because Y”, where
X and Y are parameters that must be filled when generating an utterance. Not all dialogue games
specify a scaffold for their moves, however. More common is the opener, as found in Ravenscroft's
Critical Reasoning Game (CRG) [
            <xref ref-type="bibr" rid="ref16">16</xref>
            ], where a move such as 'Suggest1' specifies the opener “My idea is”.
Completing the response block are the requirements and effects, which directly map onto the equivalent
elements of the DGDL specification for the associated move.
          </p>
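          <p>Since PrEFACE engineers one prompt per requested move, an agent wanting content for several legal moves issues one request per move, skipping moves outside its strategy. A minimal sketch, with all names our own illustrations:</p>

```python
# One PrEFACE request per move: the agent filters the legal moves by
# its strategy first, so no prompts are engineered for moves it would
# never play.

def preface_request(move: dict) -> str:
    """Stand-in for a single PrEFACE call returning generated content."""
    return f"content for {move['move_name']}"

legal_moves = [
    {"move_name": "agreeWithReason"},
    {"move_name": "challenge"},
    {"move_name": "withdraw"},
]
strategy = {"agreeWithReason", "challenge"}  # moves the agent cares about

candidates = {
    m["move_name"]: preface_request(m)
    for m in legal_moves
    if m["move_name"] in strategy            # no wasted requests
}
```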
          <p>The topic block is used to communicate to the LLM an overall topic for the dialogue, and a stance
regarding that topic. The topic is a string description and the stance is a single string from the set
{“support”, “neutral”, “oppose”}.</p>
          <p>Finally, the knowledge block is used to specify a subset of the agent's domain knowledge that can
assist the LLM in generating higher quality utterances. The DCRS uses information, e.g. descriptions
of moves and knowledge base (KB) contents, to provide context that is both relevant (only the move
sequence that needs to be responded to is supplied) and efficient (only the subset of knowledge that
the agent deems relevant to this exchange is used in the subsequent prompt generation). This means
that the calling agent, the one providing the DCRS, must decide which knowledge is relevant to supply,
and which locution to request in response, from the space of legal moves as defined by the dialogue
game. Where there are multiple possible responses that the agent might make, the agent must
make multiple requests via PrEFACE.</p>
          <p>Whilst an LLM encodes a huge amount of information, this is generic, and often conflicting,
information, due to the wide-ranging nature of the training sets used to create the model. In contrast,
an intelligent agent likely has a KB that is both coherent and specific to its use case. The knowledge
block is used, optionally, to supply sufficient specific knowledge to the LLM to enable the prompt
to tune the LLM's response. Conversely, there are two cases in which an agent might opt not to
supply a knowledge block to PrEFACE. The first is when considering the minimal case for generating
an utterance using a DGDL fragment containing an opener and a stance; the second is when the
relevant knowledge within an agent's KB has been fully explored, but the agent is not yet ready to
capitulate in the dialogue. In this case the LLM might return a serendipitously useful utterance that
enables the dialogue to continue based upon the generalised knowledge encoded in the LLM.</p>
          <preformat>
You are assisting a user in a dialogue on the topic of: {topic.description}.
Their stance is: {topic.stance}.

Your job is to find a value for $response consistent with $knowledgebase.
Use $context to help but don't repeat anything contained within it.
$knowledgebase = [{knowledge.content}]
$context = [{history.utterance}]

Return only a response without any extra text.
          </preformat>
        </sec>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Prompt generation</title>
        <p>Given a DCRS document, PrEFACE uses the attributes to generate a suitable prompt that will, at least
in principle, return an appropriate response to be used as content for the dialogue move. The primary
basis of this prompt is the response.scaffold field, with the LLM instructed to find a value for the missing
parameter(s); however, other attributes also play a role in ensuring a relevant response.</p>
        <p>In Section 3.1, we describe how the DCRS contains history and knowledge blocks. The content of these
is provided as context to the LLM, along with instructions that define the role the LLM should fulfil.
Our approach to crafting this context is based on OpenAI's Assistants API<sup>2</sup>. An Assistant is designed to
follow specific instructions and make use of provided contextual information in responding to user
queries.</p>
        <p>The structure of the context has been designed to encode 1) the topic and stance; 2) the specific task
and associated constraints; 3) the knowledge and history provided as part of the DCRS; and 4) a specific
instruction to provide only the response without extraneous text. An example of this structure is shown
in Figure 3.</p>
        <p>The prompt itself is then based on the value provided in response.scaffold. This scaffold, defined by
the game developer alongside the DGDL specification, provides a natural language template for what
form the content should take in fulfilling the move. As an example, the scaffolding for an argue move
may be “$p because $q”, where $p is a variable that will be instantiated with some previous claim, and
$q is a variable that will ultimately be sourced from the LLM. Section 3.3 provides a worked example
showing how the scaffold becomes a concrete prompt.</p>
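        <p>Instantiating a scaffold of this kind might look like the following sketch, in which known $variables are filled from prior claims while unbound ones are left for the LLM; the helper is an illustration, not PrEFACE's actual implementation.</p>

```python
# Sketch of scaffold instantiation: substitute bound $variables from
# the dialogue, leaving unbound parameters for the LLM to complete.

def instantiate_scaffold(scaffold: str, bindings: dict[str, str]) -> str:
    """Replace each known $variable, leaving unbound ones in place."""
    for name, value in bindings.items():
        scaffold = scaffold.replace(f"${name}", value)
    return scaffold

concrete = instantiate_scaffold(
    "$p because $q",
    {"p": "We should take climate change seriously"},
)
# concrete == "We should take climate change seriously because $q"
```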
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Worked Example</title>
        <p>A request to PrEFACE consists of six steps, from the point at which the dialogue machinery indicates
that the agent has available moves, through to those moves being instantiated with propositional
content. Here we provide a worked example of the full process.</p>
        <p>2: https://platform.openai.com/docs/assistants/overview</p>
        <p>
          We assume that a dialogue is taking place, based on a DGDL description that is being executed by the
Dialogue Game Execution Platform (DGEP), a platform for running and regulating implemented formal
dialogue games. The versions of DGDL and DGEP that we use are those incorporated into the Agents
United platform<sup>3</sup> [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ], which returns the set of available moves as a JSON object in the form shown
in Figure 4. Where DGEP identifies that the agent has available moves, the PrEFACE request process
begins. While all moves are subject to the process, for brevity we focus on a single move, argue.
        </p>
        <p>For each available move, the first stage is to create the concrete DCRS object. This involves adding
relevant utterances and knowledge to the history and knowledge fields respectively, and generating
a concrete scaffold that will be used as the basis of the LLM prompt. The DCRS for an argue move
contains a scaffold of the form $p because $q, where $p and $q are variables representing the two
components of the argument. The value for $p is obtained from the corresponding object in the move, while the
value for $q is the response to be sourced from the LLM. Our concrete scaffold therefore becomes We
should take climate change seriously because $response.</p>
        <p>The second stage is to initialise PrEFACE using the DCRS, which in turn sets up the necessary objects
and functions for interacting with the LLM, and generates the final prompt that will be submitted. This
includes providing the instructions and context described in Section 3.2. Once PrEFACE is initialised,
the next stage is to send the prompt and await the response. To ensure a tightly-bound response,
we leverage the function calling capabilities of GPT-based LLMs, with the result passed to a callback
function whose eventual purpose will be to verify the response<sup>4</sup>.</p>
        <p>When a final response is obtained, the final stage is to embed that response into the reply
that (assuming this move is chosen) will be returned to DGEP. This reply is also a JSON object, of
the form shown in Figure 5. Once this process is complete for all available move types, the agent will
choose (using some strategy) a completed move object to return to DGEP.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Future developments and directions</title>
      <p>The version of the PrEFACE library described in Section 3 provides a simple method of linking formal
models of dialogue with large language models. This, however, represents only an initial starting point;
PrEFACE and the DCRS remain under active development and will continue to have their capabilities
enhanced through further insights from both testing and the literature. Furthermore, we intend
to explore how PrEFACE can be used as a generalised tool for harnessing the capabilities of LLMs
in the broader field of computational argumentation. In this section, we briefly discuss such future
developments, from the immediate term to a longer-term vision.</p>
      <p>3: Available from https://github.com/AgentsUnited</p>
      <p>4: It is as-yet unclear what such a verification would involve, and so we leave this to future work.</p>
      <sec id="sec-4-1">
        <title>4.1. Retrieval Augmented Generation</title>
        <p>
          Retrieval Augmented Generation (RAG) allows LLMs to use external sources in generating responses
to prompts. A common use-case is allowing an LLM to access domain-specific documents as a way
of providing conversational interfaces for users to find relevant information (e.g. an employee asking
“What is the company annual leave policy?”, with the RAG-supported LLM retrieving the answer
from the relevant document). Within PrEFACE, RAG provides opportunities to further supplement an
agent's knowledge through retrieval of relevant arguments from a data store such as ArgDB<sup>5</sup> [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]. A
vector-based version of ArgDB is currently under development, and we intend to use this to integrate
RAG capabilities into PrEFACE in the near future.
        </p>
        <p>There are however further hurdles that will need to be overcome. ArgDB stores analysed argument
data as directed graphs, represented in JSON. While LLMs are capable of parsing and querying JSON,
they are not as yet capable of understanding the semantics; that is, they cannot understand the inference
and conflict relationships described by the JSON objects. An additional step is therefore required to
represent the JSON returned from ArgDB in a format that can more readily be interpreted by an LLM.</p>
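        <p>Such a representation step might, for instance, rewrite the graph's edges as plain statements before they are passed to the LLM. The following sketch is purely illustrative; the node and edge field names are our own assumptions, not ArgDB's schema.</p>

```python
# Sketch of verbalising an argument-graph JSON object: inference and
# conflict edges become natural-language sentences an LLM can interpret.

def verbalise(graph: dict) -> str:
    """Render each edge of the graph as a plain-language sentence."""
    nodes = {n["id"]: n["text"] for n in graph["nodes"]}
    templates = {
        "inference": "'{src}' supports '{dst}'.",
        "conflict": "'{src}' conflicts with '{dst}'.",
    }
    return "\n".join(
        templates[e["type"]].format(src=nodes[e["from"]], dst=nodes[e["to"]])
        for e in graph["edges"]
    )

graph = {
    "nodes": [
        {"id": 1, "text": "Emissions are rising"},
        {"id": 2, "text": "We should act now"},
        {"id": 3, "text": "Action is too costly"},
    ],
    "edges": [
        {"from": 1, "to": 2, "type": "inference"},
        {"from": 3, "to": 2, "type": "conflict"},
    ],
}
summary = verbalise(graph)
```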
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Argument Summarisation</title>
        <p>As noted in Section 4.1 above, analysed argument data is stored as directed graphs. These graphs
comprise individual premises, conclusions and the relationships between them, and can be easily
visualised. Generating textual summaries of these analyses can be useful, for example in helping
understand a complex topic with multiple conflicting viewpoints.</p>
        <p>It is our intention in future work to explore how PrEFACE can be extended to not only find suitable
content for a dialogue move, but also provide summaries of analysed arguments stored in ArgDB. We
envisage that this will leverage the (upcoming) RAG capabilities, but instead of finding a specific piece
of content to fulfil a dialogical function, the LLM will be used to summarise a collection of
arguments and the relationships between them.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions</title>
      <p>This paper has presented preliminary and in-progress work towards combining the strict dialectical
structures imposed by formal dialogue games with the human-level language generation capabilities of
large language models (LLMs). We presented the PrEFACE library, a tool that allows a software agent
to query an LLM for an appropriate response given its currently available dialogue move(s). The
Dialogical Context Request Structure (DCRS) allows the agent to provide the current dialogical context
along with other associated details relevant to the utterance that the LLM is being requested to generate.</p>
      <p>5: https://github.com/Open-Argumentation/ArgDB</p>
      <p>As the capabilities of LLMs continue to expand, so too will the demand for further application areas.
The work we present here lays a foundation to make such expansions into domains that require both
strong natural language generation, and strict conversational structures.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Wells</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Snaith</surname>
          </string-name>
          ,
          <article-title>On the role of dialogue models in the age of large language models</article-title>
          ,
          <source>in: Proceedings of the the 23rd International Workshop on Computational Models of Natural Argument (CMNA'23)</source>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>P.</given-names>
            <surname>Lorenzen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lorenz</surname>
          </string-name>
          , Dialogische Logik, Darmstadt: Wissenschaftliche Buchgesellschaft,
          <year>1978</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>N.</given-names>
            <surname>Rescher</surname>
          </string-name>
          , Dialectics, State University of New York Press, Albany,
          <year>1977</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>D.</given-names>
            <surname>Godden</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Wells</surname>
          </string-name>
          ,
          <article-title>Burdens of proposing: On the burden of proof in deliberation dialogues</article-title>
          ,
          <source>Informal Logic</source>
          <volume>42</volume>
          (
          <year>2022</year>
          )
          <fpage>291</fpage>
          -
          <lpage>342</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Wells</surname>
          </string-name>
          , Formal Dialectical Games in Multiagent Argumentation,
          <source>Ph.D. thesis</source>
          , University of Dundee,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>P.</given-names>
            <surname>McBurney</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Parsons</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wooldridge</surname>
          </string-name>
          ,
          <article-title>Desiderata for agent argumentation protocols</article-title>
          ,
          <source>Proceedings of the First AAMAS</source>
          (
          <year>2002</year>
          )
          <fpage>402</fpage>
          -
          <lpage>409</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Wells</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Reed</surname>
          </string-name>
          ,
          <article-title>Testing formal dialectic</article-title>
          ,
          <source>in: Proceedings of the Second International Workshop on Argumentation in Multi-Agent Systems (ArgMAS)</source>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Wells</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Reed</surname>
          </string-name>
          ,
          <article-title>A domain specific language for describing diverse systems of dialogue</article-title>
          ,
          <source>Journal of Applied Logic</source>
          <volume>10</volume>
          (
          <year>2012</year>
          )
          <fpage>309</fpage>
          -
          <lpage>329</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.</given-names>
            <surname>Snaith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lawrence</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Reed</surname>
          </string-name>
          ,
          <article-title>Mixed initiative argument in public deliberation</article-title>
          , in: F. De Cindio,
          <string-name>
            <given-names>A.</given-names>
            <surname>Macintosh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Peraboni</surname>
          </string-name>
          (Eds.), From e-Participation to Online Deliberation,
          <source>Proceedings of the Fourth International Conference on Online Deliberation, OD2010</source>
          , Leeds, UK,
          <year>2010</year>
          , pp.
          <fpage>2</fpage>
          -
          <lpage>13</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M.</given-names>
            <surname>Snaith</surname>
          </string-name>
          , H. op den Akker, T. Beinema,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bruijnes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Fides-Valero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Huizing</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Kantharaju</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Klaassen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Konsolakis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Reidsma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Weusthof</surname>
          </string-name>
          ,
          <article-title>A demonstration of multi-party dialogue using virtual coaches: the first council of coaches demonstrator</article-title>
          ,
          in:
          <source>Proceedings of the 7th International Conference on Computational Models of Argument (COMMA 2018)</source>
          , Warsaw, Poland,
          <year>2018</year>
          , pp.
          <fpage>473</fpage>
          -
          <lpage>474</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>R. B.</given-names>
            <surname>Kantharaju</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Pease</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Reidsma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Pelachaud</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Snaith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bruijnes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Klaassen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Beinema</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Huizing</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Simonetti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Heylen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>op den Akker</surname>
          </string-name>
          ,
          <article-title>Integrating argumentation with social conversation between multiple virtual coaches</article-title>
          ,
          in:
          <source>IVA 2019 - Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents</source>
          , Association for Computing Machinery
          , Paris, France,
          <year>2019</year>
          , pp.
          <fpage>203</fpage>
          -
          <lpage>205</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>T.</given-names>
            <surname>Beinema</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Davison</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Reidsma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Banos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bruijnes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Donval</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Á. F.</given-names>
            <surname>Valero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Heylen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Hofs</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Huizing</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. B.</given-names>
            <surname>Kantharaju</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Klaassen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kolkmeier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Konsolakis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Pease</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Pelachaud</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Simonetti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Snaith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Traver</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Van Loon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. H.</given-names>
            <surname>Visser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Weusthof</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Yunus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. J.</given-names>
            <surname>Hermens</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>op den Akker</surname>
          </string-name>
          ,
          <article-title>Agents united: An open platform for multi-agent conversational systems</article-title>
          ,
          in:
          <source>Proceedings of the 21st ACM International Conference on Intelligent Virtual Agents</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>17</fpage>
          -
          <lpage>24</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>T.</given-names>
            <surname>Krauthoff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Baurmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Betz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mauve</surname>
          </string-name>
          ,
          <article-title>Dialog-based online argumentation</article-title>
          ,
          in:
          <source>COMMA</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>33</fpage>
          -
          <lpage>40</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>J.</given-names>
            <surname>Wei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Tay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Bommasani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Raffel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Zoph</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Borgeaud</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Yogatama</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bosma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Metzler</surname>
          </string-name>
          , et al.,
          <article-title>Emergent abilities of large language models</article-title>
          ,
          <source>arXiv:2206.07682</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>P.</given-names>
            <surname>Lewis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Perez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Piktus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Petroni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Karpukhin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Goyal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Küttler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lewis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.-t.</given-names>
            <surname>Yih</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Rocktäschel</surname>
          </string-name>
          , et al.,
          <article-title>Retrieval-augmented generation for knowledge-intensive NLP tasks</article-title>
          ,
          <source>Advances in Neural Information Processing Systems</source>
          <volume>33</volume>
          (
          <year>2020</year>
          )
          <fpage>9459</fpage>
          -
          <lpage>9474</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>A.</given-names>
            <surname>Ravenscroft</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Wells</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sagar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Reed</surname>
          </string-name>
          ,
          <article-title>Mapping persuasive dialogue games onto argumentation structures</article-title>
          ,
          in:
          <source>AISB Symposium on Persuasive Technology &amp; Digital Behaviour Interventions</source>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>S.</given-names>
            <surname>Wells</surname>
          </string-name>
          ,
          <article-title>Datastores for argumentation data</article-title>
          ,
          in:
          <source>CMNA@COMMA</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>31</fpage>
          -
          <lpage>40</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>