<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Formal Dialogue and Large Language Models</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Mark</forename><surname>Snaith</surname></persName>
							<email>m.snaith@rgu.ac.uk</email>
							<affiliation key="aff0">
								<orgName type="department">School of Computing, Engineering and Technology</orgName>
								<orgName type="institution">Robert Gordon University</orgName>
								<address>
									<postCode>AB10 7GJ</postCode>
									<settlement>Aberdeen</settlement>
									<country key="GB">Scotland, UK</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Simon</forename><surname>Wells</surname></persName>
							<email>s.wells@napier.ac.uk</email>
							<affiliation key="aff1">
								<orgName type="institution">Edinburgh Napier University</orgName>
								<address>
									<addrLine>10 Colinton Road</addrLine>
									<postCode>EH10 5DT</postCode>
									<settlement>Edinburgh</settlement>
									<country key="GB">Scotland, UK</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Formal Dialogue and Large Language Models</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">F5C04701A97383C088E41037559FCA9A</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:21+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>Formal dialogue, argumentation</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In this paper, we present preliminary work into combining formal models of dialogue and large language models, before going on to discuss how this provides a foundation for similar approaches involving computational models of argument. First, we address the twin issues of how a formal dialogue game can usefully regulate dialogical utterances generated by an LLM during an extended, goal-oriented conversation, and conversely, how LLMs can close the human-level language generation gap associated with formal dialogue games. We then proceed to identify how our solution to these issues can underpin future work towards using computational argumentation to provide reasoning-like capabilities to LLMs, and using LLMs for tasks such as searching and summarisation of analysed argument data.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>In this paper, we first address the twin issues of how a formal dialogue game can usefully regulate dialogical utterances generated by an LLM during an extended, goal-oriented conversation, and conversely, how LLMs can close the human-level language generation gap associated with formal dialogue games.</p><p>To that end, we present the PrEFACE library, a tool that allows a software agent to query an LLM for an appropriate response given its currently available dialogue move(s). We then proceed to identify further potential uses for PrEFACE in underpinning future work towards using computational argumentation to provide reasoning-like capabilities to LLMs, and using LLMs for tasks such as argument mining, and searching and summarisation of analysed argument data.</p><p>In <ref type="bibr" target="#b0">[1]</ref> the authors addressed questions about the role of formal dialogue in the age of LLMs and established that (1) LLMs, at least in their current form, do not yet subsume all of the functionalities of formal approaches to dialogue, and (2) formal dialogue games can have an important regulatory role, ensuring that the dialogues that are generated conform to normative expectations of how dialogues should progress. Whilst it might be the case that future generations of LLM will effectively engage in focused and extended argumentative dialogues of various types, there are situations where it is unfeasible to continuously retrain an LLM, or to supply all of the domain and corpus knowledge to it, especially where the interacting agents are heterogeneous, are owned, operated or governed by differing individuals and organisations, and maintain proprietary knowledge outside of the LLM.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Background</head><p>Research into dialogue games originated in a variety of goal-oriented studies: whether to explain proof procedures <ref type="bibr" target="#b1">[2]</ref>, to model the Aristotelian conception of Dialectic <ref type="bibr" target="#b2">[3]</ref>, to understand how people interact, for example during deliberation <ref type="bibr" target="#b3">[4]</ref>, or to manage the interactions between intelligent agents in multiagent systems <ref type="bibr" target="#b4">[5]</ref>. At various times, researchers have attempted to determine ways to organise these various approaches. For example, McBurney and Parsons <ref type="bibr" target="#b5">[6]</ref> proposed a set of desiderata associated with dialogue games in agent communication. Similarly, Wells <ref type="bibr" target="#b6">[7]</ref> put forward a number of criteria for specifying dialectical games, which were subsequently developed into the Dialogue Game Description Language <ref type="bibr" target="#b7">[8]</ref>. These approaches were all broadly in the context of intelligent agent interaction via dialogue games, and mainly sought to give structure within which benchmarks, standards, expectations, and points of comparison could be situated, whilst also seeking to depict the space of possible dialogue games so that a principled exploration could be made.</p><p>Various tools and platforms have been developed to support computational implementations of dialogue games, and their subsequent execution in inter-agent communication. The Dialogue Game Description Language (DGDL) is a domain-specific language for describing dialogue games <ref type="bibr" target="#b7">[8]</ref>, while the Dialogue Game Execution Platform (DGEP) provides an environment for interpreting and running games specified in DGDL. 
Together, DGDL and DGEP have been shown to support systems for public deliberation <ref type="bibr" target="#b8">[9]</ref> and health coaching <ref type="bibr" target="#b9">[10]</ref>, as well as underpinning generalised platforms for structured conversational systems <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b11">12]</ref> and influencing related work in this area <ref type="bibr" target="#b12">[13]</ref>.</p><p>By contrast, Large Language Models (LLMs) provide a less structured approach to conversational interaction. Given a prompt, they will predict the best response to make. Prompt engineering is the process of designing and structuring prompts towards more effective results. This leverages an LLM's ability for in-context learning <ref type="bibr" target="#b13">[14]</ref>, where, in addition to its underlying model, the LLM temporarily learns either from the prompt itself, or via information provided specifically as context from which it should derive its response. One approach to the latter is Retrieval Augmented Generation (RAG) <ref type="bibr" target="#b14">[15]</ref>. A RAG-based system first retrieves a set of documents (using semantic search), then feeds those documents into the LLM as context for producing the final output.</p><p>The differences between dialogue games and LLMs are therefore quite clear: the former provide a structured account of how a multi-party, goal-oriented conversation should proceed, without any consideration as to the precise content of each move. LLMs, on the other hand, are adept at generating natural language text in response to arbitrary prompts, without any consideration as to "legal" dialogical (conversational) flow. These differences, however, present opportunities for each to enhance and support the capabilities of the other. 
Combining the rigid dialectical structures provided by dialogue games with the rich generative capabilities of LLMs will simultaneously address the human-level language gap associated with the former and the lack of useful regulation of utterances associated with the latter.</p></div>
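The retrieve-then-generate pattern described above can be sketched in a few lines. This is an illustrative toy, not a real RAG system: retrieval is reduced to naive keyword overlap (a production system would use embedding-based semantic search), the final LLM call is omitted so only prompt assembly is shown, and all function names and documents are our own inventions.

```python
# Toy retrieve-then-generate (RAG) sketch: rank documents against the query,
# then assemble them into a context-grounded prompt for an LLM.

def retrieve(query, documents, k=2):
    """Rank documents by naive word overlap with the query (a stand-in
    for semantic search over embeddings)."""
    q = set(query.lower().split())
    scored = sorted(documents, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def rag_prompt(query, documents):
    """Build the prompt that would be sent to the LLM."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer from the context only."

# Hypothetical document store for the annual-leave example above.
docs = [
    "Employees receive 28 days of annual leave per year.",
    "The office is closed on public holidays.",
    "Expense claims must be submitted within 30 days.",
]
prompt = rag_prompt("What is the annual leave policy?", docs)
print(prompt)
```

The key property is that the model's answer is constrained to the retrieved context rather than its generic training data.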
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">The PrEFACE Library</head><p>The PrEFACE (Prompt Engineering for Argumentative Conversational Exchanges) library is a tool for closing the loop between Formal Dialogue Games and Large Language Models. Figure <ref type="figure">1</ref> illustrates PrEFACE schematically from an Input/Output perspective. PrEFACE is designed to sit between an LLM and an agent, such that the agent decides what kind of utterance to make, based upon the current dialogical context, and makes a request to PrEFACE, supplying the requisite dialogical context request structure. PrEFACE uses this structure to generate a prompt, which is then provided to the LLM. The LLM generates a response, which is returned to PrEFACE. The response is then returned to the agent alongside additional metadata. The following sub-sections describe each stage of the process in more detail.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">The Dialogical Context Request Structure (DCRS)</head><p>The Dialogical Context Request Structure (DCRS) is the primary means by which PrEFACE is directed to generate prompts. A completed DCRS is a JSON<ref type="foot" target="#foot_0">1</ref> document that describes the current dialogical context along with the associated utterance that the LLM is requested to generate. The DCRS is shown in Figure <ref type="figure" target="#fig_0">2</ref> and corresponds to stage 1 of Figure <ref type="figure">1</ref>. PrEFACE builds on the Dialogue Game Description Language (DGDL) <ref type="bibr" target="#b7">[8]</ref> as a pragmatic initial tool for consistently describing the moves within a dialogue. Because DGDL has a finite range of components that are used to describe each locution, there are a finite number of prompt templates that need to be constructed within PrEFACE.</p><p>The DCRS is an object comprising five blocks: metadata, history, topic, response, and knowledge. 
A completed DCRS object, in JSON format, is supplied to PrEFACE to initiate the prompt generation process.</p><p>The first block, metadata, is necessary to positively identify an instance of the DCRS so that it can be verified against the correct version of the DCRS JSON schema. This block is mandatory but, after schema verification, plays no further part in prompt generation.</p><p>The second block, history, provides a list of utterance objects that correspond to prior interactions during the dialogue that are relevant to the utterance to be generated. Each utterance object comprises an index, speaker, move_name, and content. The index key is negatively indexed from the current utterance (the one being generated): the immediately previous utterance is -1, the one before that is -2, and so on. Under most circumstances, an utterance will be generated to respond to the last utterance of another participant in the dialogue. In a small number of cases, two or more previous utterances might be required in order to provide additional context for generation of the target utterance, for example to give the context of a micro-exchange within the dialogue, e.g. a Question-Answer-Challenge complex. A third scenario arises when backtracking occurs within the dialogue, returning to address an utterance that was made earlier and was perhaps insufficiently resolved at that time-point. The speaker key refers to the dialogue participant that uttered this content; "self" is used to indicate that an utterance was communicated by the current speaker, the one generating the PrEFACE utterance. Speaker identification is required to differentiate between the current speaker, the participant to whom they are responding directly (e.g. to a question directed from another participant towards the current speaker), and any other participants in the dialogue who might have previously participated in the current micro-dialogue. 
For example, consider speakers A, B, and C, where speaker A poses a question, speaker B (self) answers it, and speaker C then challenges speaker B's answer, leading to self making a defence oriented towards speaker C but directly related to the originating question from speaker A. The move_name key refers to the DGDL move_name label, used to distinguish between available moves. Finally, content refers to the propositional content of the move, expressed as a string.</p><p>The response block is used to specify the kind of utterance the agent requires the LLM to generate. For any given move within a DGDL game description there may be a range of either prescribed or permitted responses, which together constitute the set of legal moves. PrEFACE is designed to handle one move at a time, so an agent must process the legal moves list one element at a time, passing in the specific move type that must be generated. This way, PrEFACE is focused upon engineering a single prompt per dialogue context, and resources are not wasted on legal moves that could be generated but which fall outside of the agent's strategy. Responses comprise a move_name, scaffold, opener, requirements, and effects. The move_name is a string describing the required move, which would usually constitute a DGDL move_name or a speech act label. The scaffold is a string template that defines the form of the required move, for example "It is not the case that X because Y", where X and Y are parameters that must be filled when generating an utterance. Not all dialogue games specify a scaffold for their moves, however. More common is the opener, as found in Ravenscroft's Critical Reasoning Game (CRG) <ref type="bibr" target="#b15">[16]</ref>, where moves such as 'Suggest1' specify the opener "My idea is". 
Completing the response block are the requirements and effects, which directly map onto the equivalent elements of the DGDL specification for the associated move.</p><p>The topic block is used to communicate to the LLM an overall topic for the dialogue and a stance regarding that topic. The topic is a string description and the stance is a single string from the set {"support" | "neutral" | "oppose"}.</p><p>Finally, the knowledge block is used to specify a subset of the agent's domain knowledge that can assist the LLM in generating higher quality utterances. The DCRS uses information, e.g. descriptions of moves and knowledge base (KB) contents, to provide context that is both relevant (only the move sequence that needs to be responded to is supplied) and efficient (only the subset of knowledge that the agent deems relevant to this exchange is used in the subsequent prompt generation). This means that the calling agent, the one providing the DCRS, must decide which knowledge is relevant to supply, and which locution to request in response, from the space of legal moves as defined by the dialogue game. Where there are multiple possible responses that the agent might make, the agent must make multiple requests via PrEFACE.</p><p>Whilst an LLM encodes a huge amount of information, this is generic, and often conflicting, information, due to the wide-ranging nature of the training sets used to create the model. In contrast, an intelligent agent likely has a KB that is both coherent and specific to its use case. The knowledge block is used, optionally, to supply sufficient, specific knowledge to the LLM to enable the prompt to tune the LLM's response. 
Conversely, there are two cases in which an agent might opt not to supply a knowledge block to PrEFACE. The first is the minimal case of generating an utterance using a DGDL fragment containing only an opener and a stance. The second is when the relevant knowledge within an agent's KB has been fully explored, but the agent is not yet ready to capitulate in the dialogue; in this case, the LLM might return a serendipitously useful utterance that enables the dialogue to continue based upon the generalised knowledge encoded in the LLM.</p></div>
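The five-block DCRS described in this section can be assembled programmatically. The sketch below builds a DCRS as a plain Python dict whose field names follow the structure described above; the builder function itself is a hypothetical convenience of ours, not part of the PrEFACE API.

```python
import json

# Sketch: assemble the five DCRS blocks (metadata, history, topic,
# response, knowledge) into one JSON-serialisable object.

def build_dcrs(history, topic, stance, move_name, scaffold,
               opener="", requirements=None, effects=None,
               knowledge_type="", knowledge=None):
    """Hypothetical helper returning a DCRS-shaped dict."""
    return {
        "metadata": {"name": "dcrs", "version": "1.0"},
        "history": {"utterances": history},
        "topic": {"description": topic, "stance": stance},
        "response": {
            "move_name": move_name,
            "scaffold": scaffold,
            "opener": opener,
            "requirements": requirements or [],
            "effects": effects or [],
        },
        "knowledge": {"type": knowledge_type, "content": knowledge or []},
    }

# History indices count back from the utterance being generated,
# so the immediately previous utterance is -1.
dcrs = build_dcrs(
    history=[{"index": "-1", "speaker": "Alice", "move_name": "assert",
              "content": "We should take climate change seriously"}],
    topic="Climate change is something to be taken seriously",
    stance="support",
    move_name="agreeWithReason",
    scaffold="We should take climate change seriously because $p",
    opener="I agree because",
)
print(json.dumps(dcrs, indent=2))
```

Serialising via `json.dumps` gives the JSON document that would be supplied to PrEFACE to initiate prompt generation.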
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Prompt generation</head><p>Given a DCRS document, PrEFACE uses the attributes to generate a suitable prompt that will, at least in principle, return an appropriate response to be used as content for the dialogue move. The primary basis of this prompt is the response.scaffold field, with the LLM instructed to find a value for the missing parameter(s); however, other attributes also play a role in ensuring a relevant response.</p><p>In Section 3.1, we describe how the DCRS contains history and knowledge blocks. The content of these is provided as context to the LLM, along with instructions that define the role the LLM should fulfil. Our approach to crafting this context is based on OpenAI's Assistants API<ref type="foot" target="#foot_1">2</ref>. An Assistant is designed to follow specific instructions and make use of provided contextual information in responding to user queries.</p><p>The structure of the context has been designed to encode 1) the topic and stance; 2) the specific task and associated constraints; 3) the knowledge and history provided as part of the DCRS; and 4) a specific instruction to provide only the response without extraneous text. An example of this structure is shown in Figure <ref type="figure" target="#fig_2">3</ref>.</p><p>The prompt itself is then based on the value provided in response.scaffold. This scaffold, defined by the game developer alongside the DGDL specification, provides a natural language template for what form the content should take in fulfilling the move. As an example, the scaffolding for an argue move may be "$p because $q", where $p is a variable that will be instantiated with some previous claim, and $q is a variable that will ultimately be sourced from the LLM. Section 3.3 provides a worked example showing how the scaffold becomes a concrete prompt.</p></div>
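The context and prompt construction described in this section can be sketched as follows. The context wording follows the example shown in Figure 3; the two rendering functions are illustrative assumptions of ours, not PrEFACE's actual implementation.

```python
# Sketch: render the LLM context from DCRS blocks, and substitute known
# variables into the scaffold, leaving only $response for the LLM to fill.

def render_context(dcrs):
    """Build the instruction/context text (cf. Figure 3's structure)."""
    topic = dcrs["topic"]
    knowledge = dcrs["knowledge"]["content"]
    history = dcrs["history"]["utterances"]
    lines = [
        f"You are assisting a user in a dialogue on the topic of: {topic['description']}.",
        f"Their stance is: {topic['stance']}.",
        "Your job is to find a value for $response consistent with $knowledgebase.",
        "Use $context to help but don't repeat anything contained within it.",
        f"$knowledgebase = {knowledge}",
        f"$context = {[u['content'] for u in history]}",
        "Return only a response without any extra text.",
    ]
    return "\n".join(lines)

def render_prompt(dcrs, bindings):
    """Fill scaffold variables; unbound slots map to $response."""
    prompt = dcrs["response"]["scaffold"]
    for var, value in bindings.items():
        prompt = prompt.replace(f"${var}", value)
    return prompt

dcrs = {
    "topic": {"description": "Climate change is something to be taken seriously",
              "stance": "support"},
    "knowledge": {"type": "", "content": []},
    "history": {"utterances": [{"content": "We should take climate change seriously"}]},
    "response": {"scaffold": "$p because $q"},
}
prompt = render_prompt(dcrs, {"p": "We should take climate change seriously",
                              "q": "$response"})
# prompt == "We should take climate change seriously because $response"
```

The $p slot is bound from the prior claim, while $q is deferred to the LLM as $response, mirroring the argue-move example in the text.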
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Worked Example</head><p>A request to PrEFACE consists of six steps, from the point at which the dialogue machinery indicates that the agent has available moves, through to those moves being instantiated with propositional content. Here we provide a worked example of the full process.</p><p>{ "agent":[ { "moveID":"argue", "opener":"because $q", "target":"human", "reply":{ "p": "We should take climate change seriously", "q": "$q" } }, ... ] }</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 4: JSON excerpt of available moves from DGEP</head><p>We assume that a dialogue is taking place, based on a DGDL description that is being executed by the Dialogue Game Execution Platform (DGEP), a platform for running and regulating implemented formal dialogue games. The versions of DGDL and DGEP that we use are those incorporated into the Agents United platform<ref type="foot" target="#foot_2">3</ref> <ref type="bibr" target="#b11">[12]</ref>, which returns the set of available moves as a JSON object in the form shown in Figure <ref type="figure">4</ref>. Where DGEP identifies that the agent has available moves, the PrEFACE request process begins. While all moves are subject to the process, for brevity we focus on a single move, argue.</p><p>For each available move, the first stage is to create the concrete DCRS object. This involves adding relevant utterances and knowledge to the history and knowledge fields respectively, and generating a concrete scaffold that will be used as the basis of the LLM prompt. The DCRS for an argue move contains a scaffold of the form "$p because $q", where $p and $q are variables representing the two components of the argument. The value for $p is obtained from the reply object in the move, while the value for $q is the response to be sourced from the LLM. Our concrete scaffold therefore becomes "We should take climate change seriously because $response".</p><p>The second stage is to initialise PrEFACE using the DCRS, which in turn sets up the necessary objects and functions for interacting with the LLM, and generates the final prompt that will be submitted (the third stage). This includes providing the instructions and context described in Section 3.2. Once PrEFACE is initialised, the fourth stage is to send the prompt and await the response. 
To ensure a tightly-bound response, we leverage the function calling capabilities of GPT-based LLMs, with the result passed to a callback function whose eventual purpose will be to verify the response<ref type="foot" target="#foot_3">4</ref>.</p><p>When a final response is obtained, the fifth stage is to embed that response into the reply that, assuming this move is chosen, will be returned to DGEP. This reply is also a JSON object, of the form shown in Figure <ref type="figure">5</ref>. The sixth and final step, once this process is complete for all available move types, is for the agent to choose (using some strategy) a completed move object to return to DGEP.</p></div>
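The worked example's request flow can be condensed into a sketch with the LLM stubbed out. The JSON shapes follow Figures 4 and 5; the function names and the canned LLM response are illustrative assumptions, and the initialisation and function-calling machinery are elided.

```python
# Sketch of the worked example: from a DGEP move object to a completed reply.

def fake_llm(prompt):
    """Stand-in for the GPT function-calling round trip (stage 4)."""
    return "Ignoring it would have devastating consequences"

def handle_move(move, llm=fake_llm):
    # Stage 1: build the concrete scaffold from the move's reply object.
    scaffold = "$p because $q"
    concrete = scaffold.replace("$p", move["reply"]["p"]).replace("$q", "$response")
    # Stages 2-3: initialise PrEFACE and generate the final prompt (elided).
    # Stage 4: send the prompt and await the response.
    response = llm(concrete)
    # Stage 5: embed the response into the DGEP-shaped reply object.
    return {
        "speaker": "agent",
        "target": move["target"],
        "reply": {"p": move["reply"]["p"], "q": response},
    }

# Available move as returned by DGEP (cf. Figure 4).
move = {
    "moveID": "argue",
    "opener": "because $q",
    "target": "human",
    "reply": {"p": "We should take climate change seriously", "q": "$q"},
}
reply = handle_move(move)
# Stage 6: the agent chooses a completed move object (here there is only
# one) to return to DGEP (cf. Figure 5).
```

Running `handle_move` for each available move yields the pool of completed moves from which the agent's strategy selects.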
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Future developments and directions</head><p>The version of the PrEFACE library described in Section 3 provides a simple method of linking formal models of dialogue with large language models. This, however, represents only an initial starting point; PrEFACE and the DCRS remain under active development and will continue to have their capabilities enhanced through further insights from both testing and the literature. Furthermore, we intend to explore how PrEFACE can be used as a generalised tool for harnessing the capabilities of LLMs in the broader field of computational argumentation. In this section, we briefly discuss such future developments, from the immediate term to a longer-term vision.</p><p>{ "speaker":"agent", "target":"human", "reply":{ "p": "We should take climate change seriously", "q": "Ignoring it would have devastating consequences" } } Figure <ref type="figure">5</ref>: JSON format of a reply to be sent to DGEP</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Retrieval Augmented Generation</head><p>Retrieval Augmented Generation (RAG) allows LLMs to use external sources in generating responses to prompts. A common use case is allowing an LLM to access domain-specific documents as a way of providing conversational interfaces for users to find relevant information (e.g. an employee asks "What is the company annual leave policy?", and the RAG-supported LLM retrieves the answer from the relevant document). Within PrEFACE, RAG provides opportunities to further supplement an agent's knowledge through retrieval of relevant arguments from a data store such as ArgDB<ref type="foot" target="#foot_4">5</ref> <ref type="bibr" target="#b16">[17]</ref>. A vector-based version of ArgDB is currently under development, and we intend to use this to integrate RAG capabilities into PrEFACE in the near future.</p><p>There are, however, further hurdles that will need to be overcome. ArgDB stores analysed argument data as directed graphs, represented in JSON. While LLMs are capable of parsing and querying JSON, they are not yet capable of understanding the semantics; that is, they cannot understand the inference and conflict relationships described by the JSON objects. An additional step is therefore required to represent the JSON returned from ArgDB in a format that can more readily be interpreted by an LLM.</p></div>
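The "additional step" described above, representing an argument graph in a form an LLM can interpret, might be approached by verbalising nodes and typed edges as plain-English statements. The graph schema below is a simplified stand-in of our own and does not reflect ArgDB's actual JSON format.

```python
# Sketch: turn a directed argument graph into natural-language statements
# that make inference (support) and conflict (attack) relations explicit.

def verbalise_graph(graph):
    """One English sentence per typed edge in the graph."""
    text = {n["id"]: n["text"] for n in graph["nodes"]}
    templates = {
        "support": '"{src}" supports "{dst}"',
        "attack": '"{src}" conflicts with "{dst}"',
    }
    return [templates[e["type"]].format(src=text[e["from"]], dst=text[e["to"]])
            for e in graph["edges"]]

# Hypothetical mini-graph: one supporting and one attacking argument.
graph = {
    "nodes": [
        {"id": "n1", "text": "We should take climate change seriously"},
        {"id": "n2", "text": "Ignoring it would have devastating consequences"},
        {"id": "n3", "text": "The climate has always changed naturally"},
    ],
    "edges": [
        {"from": "n2", "to": "n1", "type": "support"},
        {"from": "n3", "to": "n1", "type": "attack"},
    ],
}
for line in verbalise_graph(graph):
    print(line)
```

The resulting statements could then be supplied to the LLM as retrieval context, in place of the raw JSON graph.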
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Argument Summarisation</head><p>As noted in Section 4.1 above, analysed argument data is stored as directed graphs. These graphs comprise individual premises, conclusions and the relationships between them, and can be easily visualised. Generating textual summaries of these analyses can be useful, for example in helping to understand a complex topic with multiple conflicting viewpoints.</p><p>It is our intention in future work to explore how PrEFACE can be extended not only to find suitable content for a dialogue move, but also to provide summaries of analysed arguments stored in ArgDB. We envisage that this will leverage the (upcoming) RAG capabilities but, instead of finding a specific piece of content to fulfil a dialogical function, the LLM will be used to summarise a collection of arguments and the relationships between them.</p></div>
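One way the envisaged summarisation mode might assemble its input is to build a single prompt over a retrieved collection of argument statements, rather than over a single scaffold. The prompt wording and function below are assumptions of ours; the paper describes the goal, not an implementation.

```python
# Sketch: assemble a summarisation prompt over several verbalised arguments.

def summarisation_prompt(topic, statements):
    """Hypothetical prompt builder for the summarisation use case."""
    body = "\n".join(f"- {s}" for s in statements)
    return (
        f"The following analysed arguments concern the topic: {topic}.\n"
        f"{body}\n"
        "Summarise the main viewpoints and how they conflict, in one paragraph."
    )

prompt = summarisation_prompt(
    "Climate change",
    ['"Ignoring it would have devastating consequences" supports '
     '"We should take climate change seriously"',
     '"The climate has always changed naturally" conflicts with '
     '"We should take climate change seriously"'],
)
print(prompt)
```

Unlike a move-level scaffold, the whole retrieved collection is presented at once, and the LLM's task shifts from filling a slot to condensing the structure.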
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusions</head><p>This paper has presented preliminary and in-progress work towards combining the strict dialectical structures imposed by formal dialogue games with the human-level language generation capabilities of large language models (LLMs). We presented the PrEFACE library, a tool that allows a software agent to query an LLM for an appropriate response given its currently available dialogue move(s). The Dialogical Context Request Structure (DCRS) allows the agent to provide the current dialogical context along with other associated details relevant to the utterance that the LLM is being requested to generate.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: The Dialogical Context Request Structure JSON format template (L) and instantiated (R)</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>Continuation of the example context prompt: $knowledgebase = [{knowledge.content}]; $context = [{history.utterance}]; Return only a response without any extra text.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Example context prompt</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>Figure 1: The stages of activity during PrEFACE-based prompt engineering. A Dialogical Context Request Structure (DCRS) (Stage 1) is passed to the PrEFACE Library, which engineers a prompt (Stage 2); the prompt is sent to the LLM via its API (Stage 3); the LLM's textual output (Stage 4) is processed by the PrEFACE Library into a list of utterances plus metadata (Stage 5).</figDesc></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">http://www.json.org/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">https://platform.openai.com/docs/assistants/overview</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">Available from https://github.com/AgentsUnited</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3">It is as-yet unclear what such a verification would involve, and so we leave this to future work.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_4">https://github.com/Open-Argumentation/ArgDB</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>As the capabilities of LLMs continue to expand, so too will the demand for further application areas. The work we present here lays a foundation to make such expansions into domains that require both strong natural language generation and strict conversational structures.</p></div>
			</div>


			<div type="funding">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>https://marksnaith.com (M. Snaith); https://www.simonwells.org/ (S. Wells). ORCID: 0000-0001-9979-9374 (M. Snaith); 0000-0003-4512-7868 (S. Wells)</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">On the role of dialogue models in the age of large language models</title>
		<author>
			<persName><forename type="first">S</forename><surname>Wells</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Snaith</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 23rd International Workshop on Computational Models of Natural Argument (CMNA&apos;23)</title>
				<meeting>the 23rd International Workshop on Computational Models of Natural Argument (CMNA&apos;23)</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<author>
			<persName><forename type="first">P</forename><surname>Lorenzen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Lorenz</surname></persName>
		</author>
		<title level="m">Dialogische Logik, Darmstadt: Wissenschaftliche Buchgesellschaft</title>
				<imprint>
			<date type="published" when="1978">1978</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">Dialectics</title>
		<author>
			<persName><forename type="first">N</forename><surname>Rescher</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1977">1977</date>
			<publisher>State University of New York Press</publisher>
			<pubPlace>Albany</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Burdens of proposing: On the burden of proof in deliberation dialogues</title>
		<author>
			<persName><forename type="first">D</forename><surname>Godden</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Wells</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Informal Logic</title>
		<imprint>
			<biblScope unit="volume">42</biblScope>
			<biblScope unit="page" from="291" to="342" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">Formal Dialectical Games in Multiagent Argumentation</title>
		<author>
			<persName><forename type="first">S</forename><surname>Wells</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2007">2007</date>
		</imprint>
		<respStmt>
			<orgName>University of Dundee</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Ph.D. thesis</note>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Desiderata for agent argumentation protocols</title>
		<author>
			<persName><forename type="first">P</forename><surname>McBurney</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Parsons</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Wooldridge</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS)</title>
				<meeting>the First International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS)</meeting>
		<imprint>
			<date type="published" when="2002">2002</date>
			<biblScope unit="page" from="402" to="409" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Testing formal dialectic</title>
		<author>
			<persName><forename type="first">S</forename><surname>Wells</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Reed</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Second International Workshop on Argumentation in Multi-Agent Systems (ArgMAS)</title>
				<meeting>the Second International Workshop on Argumentation in Multi-Agent Systems (ArgMAS)</meeting>
		<imprint>
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">A domain specific language for describing diverse systems of dialogue</title>
		<author>
			<persName><forename type="first">S</forename><surname>Wells</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Reed</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Applied Logic</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page" from="309" to="329" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Mixed initiative argument in public deliberation</title>
		<author>
			<persName><forename type="first">M</forename><surname>Snaith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Lawrence</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Reed</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Fourth International Conference on Online Deliberation</title>
				<editor>
			<persName><forename type="first">F</forename><forename type="middle">De</forename><surname>Cindio</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Macintosh</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">C</forename><surname>Peraboni</surname></persName>
		</editor>
		<meeting>the Fourth International Conference on Online Deliberation<address><addrLine>Leeds, UK</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="2" to="13" />
		</imprint>
	</monogr>
	<note>From e-Participation to Online Deliberation</note>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">A demonstration of multi-party dialogue using virtual coaches: the first council of coaches demonstrator</title>
		<author>
			<persName><forename type="first">M</forename><surname>Snaith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Op Den Akker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Beinema</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Bruijnes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Fides-Valero</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Huizing</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Kantharaju</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Klaassen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Konsolakis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Reidsma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Weusthof</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 7th International Conference on Computational Models of Argument (COMMA 2018)</title>
				<meeting>the 7th International Conference on Computational Models of Argument (COMMA 2018)<address><addrLine>Warsaw, Poland</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="473" to="474" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Integrating argumentation with social conversation between multiple virtual coaches</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">B</forename><surname>Kantharaju</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Pease</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Reidsma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Pelachaud</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Snaith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Bruijnes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Klaassen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Beinema</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Huizing</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Simonetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Heylen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Op Den Akker</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IVA 2019 - Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents</title>
				<meeting><address><addrLine>Paris, France</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="203" to="205" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Agents united: An open platform for multi-agent conversational systems</title>
		<author>
			<persName><forename type="first">T</forename><surname>Beinema</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Davison</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Reidsma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Banos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Bruijnes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Donval</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Á</forename><forename type="middle">F</forename><surname>Valero</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Heylen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Hofs</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Huizing</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">B</forename><surname>Kantharaju</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Klaassen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kolkmeier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Konsolakis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Pease</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Pelachaud</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Simonetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Snaith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Traver</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Van Loon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">H</forename><surname>Visser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jacky</forename><surname>Weusthof</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Yunus</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">J</forename><surname>Hermens</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Op Den Akker</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 21st ACM International Conference on Intelligent Virtual Agents</title>
				<meeting>the 21st ACM International Conference on Intelligent Virtual Agents</meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="17" to="24" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Dialog-based online argumentation</title>
		<author>
			<persName><forename type="first">T</forename><surname>Krauthoff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Baurmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Betz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Mauve</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of COMMA 2016</title>
		<imprint>
			<biblScope unit="page" from="33" to="40" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Wei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Tay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Bommasani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Raffel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Zoph</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Borgeaud</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Yogatama</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Bosma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Metzler</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2206.07682</idno>
		<title level="m">Emergent abilities of large language models</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Retrieval-augmented generation for knowledge-intensive nlp tasks</title>
		<author>
			<persName><forename type="first">P</forename><surname>Lewis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Perez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Piktus</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Petroni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Karpukhin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Goyal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Küttler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lewis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">T</forename><surname>Yih</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Rocktäschel</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Advances in Neural Information Processing Systems</title>
		<imprint>
			<biblScope unit="volume">33</biblScope>
			<biblScope unit="page" from="9459" to="9474" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Mapping persuasive dialogue games onto argumentation structures</title>
		<author>
			<persName><forename type="first">A</forename><surname>Ravenscroft</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Wells</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Sagar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Reed</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">AISB Symposium on Persuasive Technology &amp; Digital Behaviour Interventions</title>
				<imprint>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Datastores for argumentation data</title>
		<author>
			<persName><forename type="first">S</forename><surname>Wells</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CMNA@COMMA</title>
				<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="31" to="40" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
