<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Human-in-the-loop approach to digitisation of engineering drawings</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Andrew</forename><forename type="middle">M</forename><surname>Fagan</surname></persName>
							<email>andrew.fagan@strath.ac.uk</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Strathclyde</orgName>
								<address>
									<settlement>Glasgow</settlement>
									<country key="GB">United Kingdom</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Graeme</forename><forename type="middle">M</forename><surname>West</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">University of Strathclyde</orgName>
								<address>
									<settlement>Glasgow</settlement>
									<country key="GB">United Kingdom</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Stephen</forename><forename type="middle">D J</forename><surname>McArthur</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">University of Strathclyde</orgName>
								<address>
									<settlement>Glasgow</settlement>
									<country key="GB">United Kingdom</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Human-in-the-loop approach to digitisation of engineering drawings</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">184D6D9915572743441B7A2FB4777BFC</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T04:46+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Human-in-the-loop</term>
					<term>Digitisation</term>
					<term>Engineering Drawings</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In the nuclear power industry, the high-performance black-box systems prevalent in modern AI research are difficult to match to applications which take advantage of their strengths. These systems generally require large volumes of labelled or well-formatted data, and provide a high level of performance which cannot easily be understood, explained or audited. Indeed, most AI systems deployed in this industry operate under constant oversight by a skilled human operator, negating many of their advantages in cost, speed and reliability. In many cases, the time taken to format data for these systems is prohibitive even when automation might be desirable. This paper presents a framework for deploying a variety of AI techniques in industries where human oversight is required. Instead of treating the user as an external element while automating the task, the framework incorporates them as an active participant in the process, augmenting their performance while leveraging their strengths to make the AI systems more reliable and user-friendly. The framework is concerned primarily with the problem of digitising Elementary Wiring Diagrams, an important class of engineering drawing. These remain in regular use even as low-quality scans of paper documents, and while digitising them is desirable and has many potential benefits, the level of time investment required of skilled engineers is prohibitive. Mistakes in digitisation are also potentially costly, meaning that the skilled engineer must remain involved in the process.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Engineering drawings serve several key purposes in design engineering, maintenance and asset management. The ability to quickly determine which components are connected to and affected by others can assist in tasks such as fault diagnosis and upgrading equipment, especially when designs are digitised with intelligent metadata and cross referencing between multiple drawings.</p><p>In the nuclear power industry, and in others such as the oil and gas industry, many assets have been in use for a significant period. Their associated drawings, originally drafted on paper, now exist only as low-quality scans. As it would take considerable effort by skilled engineers to manually redraft these as modern CAD drawings, or to review them for useful metadata, a system for intelligently parsing these drawings would be of significant value.</p><p>Unfortunately, while this problem has been approached for many classes of drawings, such as piping and instrumentation diagrams (P&amp;ID), success has relied on the existence of at least moderately sized labelled datasets, which are used as a starting point for symbol classification. In attempting to apply this library of research to a class of drawing where no such dataset exists, such as elementary wiring diagrams (EWDs), this is the first and most manually intensive problem to overcome.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Background</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Digitisation of Engineering Drawings</head><p>Digitisation of engineering drawings has been of interest since as early as the 1980s <ref type="bibr" target="#b0">[1]</ref>, and continues to be actively researched. The most active research on the subject concerns P&amp;ID diagrams. These drawings are comparable in many ways to EWDs, in that they are composed of symbols, text and topological connections. They differ primarily in the family of symbols which they utilise, with P&amp;ID diagrams featuring a more complicated variety of symbols, including embedded text and subtle lines which can change the meaning of a symbol.</p><p>In the domain of P&amp;ID digitisation, Moreno-Garcia identified three key challenges: quality, skewing and topology <ref type="bibr" target="#b1">[2]</ref>. Briefly, the quality problem refers to the variance caused by hand-drawn symbols and the distortion caused by scanning paper documents. The skewing problem refers to class imbalance caused by some symbols being more common than others, while the topology problem covers both the in-drawing topology of recognising lines and connecting symbols and the meta-topology of drawings which connect to one another. It can be seen intuitively that all of these problems transfer to the EWD domain, and so must be considered.</p><p>A wide variety of techniques have been applied to address these common problems, for example the use of Generative Adversarial Networks <ref type="bibr" target="#b2">[3]</ref> or class decomposition <ref type="bibr" target="#b3">[4]</ref> to address the class imbalance problem. However, this research relies on a large dataset of labelled P&amp;ID symbols <ref type="bibr" target="#b4">[5]</ref>, so applying this rich library of techniques to EWDs requires either a manually intensive amount of splitting and labelling drawings, or some interim techniques to allow the dataset to be expanded.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Human-in-the-loop systems</head><p>In nuclear power applications, there is a consistent problem with utilising data-intensive AI solutions. In many nuclear applications, including the problem of digitising engineering drawings, there is a lack of quality data. Labelled data is usually minimal, hampering supervised techniques, and the data which is available is often low quality and poorly formatted, which creates difficulties in applying unsupervised or semi-supervised methods. In addition, most modern AI solutions are "black boxes", which may provide a high degree of performance but are not easily understood. This too is a poor fit for the nuclear domain, where most decisions must be justified and audited.</p><p>This leads to the main difficulty with nuclear applications of AI: a skilled human engineer must remain involved in any uncertain process, usually either checking the AI decision at the end or making their own decision based on AI suggestions. An AI must therefore provide enough of a benefit to justify the time a skilled engineer spends formatting or labelling data, as well as the development and verification time required to design and implement such a system, and must perform better with supervision than an engineer working alone.</p><p>However, these restrictions also present an opportunity to explore ways in which human expertise can be leveraged to improve performance, by taking advantage of the human in the loop rather than working around them.</p><p>Human-in-the-loop (HITL) refers generally to systems where AI and humans interact, though the term is more commonly used for systems in which the human is not simply a supervisor of the AI, but cooperates with the AI to accomplish a task <ref type="bibr" target="#b5">[6]</ref>. This human-AI cooperation is sometimes referred to as shared autonomy <ref type="bibr" target="#b6">[7]</ref>.</p><p>HITL has been applied to the process of designing AI systems in HELIX <ref type="bibr" target="#b7">[8]</ref>, a human-centric tool for rapid design of and iteration on data science workflows. The design of HELIX involves providing the user with a high-fidelity interface which suits their specialisation (in this case a programming language) and high-quality visualisation tools to allow them to do the same work more effectively. The idea of the human user being a skilled expert in the target domain, and the utilisation of their knowledge to improve the system, transfers well to the nuclear domain, where the users will be experienced engineers; the critical difference is that they will not necessarily have programming experience, but rather engineering and design experience.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Framework for human-in-the-loop digitisation</head><p>In order to address the problem of digitising EWDs using a human in the loop, there are several key considerations. To be of use to the end user, the system must be more efficient than an engineer simply digitising the drawing manually, even when the AI modules are at their weakest or have not yet been implemented. If the system requires the user to spend a significant amount of time labelling data, they would prefer to simply redraw the drawing instead, in line with their expertise.</p><p>This also means that even once AI modules are implemented and displaying a reasonable level of performance, the user should not have to spend time tweaking the system until the AI can digitise the drawing by itself. If the user makes corrections, those should be logged for future use, but the completely labelled and connected drawing they have produced should not be discarded. It is the desired output of the system, and spending more time on a drawing once it has been digitised is wasteful.</p><p>The output of the system is the original drawing marked up with a label and bounding box on each symbol, blocks of text transcribed and topological connections represented by connecting lines. Additional information, such as that originally contained in embedded tables, would instead be attached to the relevant component in a text field. This intermediate state could in principle be used to automatically generate a new digital drawing, but could also serve other potential uses, such as a subsequent database application cross-referencing components across an entire plant, allowing for many potential workflow improvements.</p></div>
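The marked-up output described above (labelled symbols with bounding boxes, transcribed text, and connecting lines) lends itself to a simple structured representation. The sketch below shows one possible shape for it in Python; the class and field names are illustrative assumptions, not part of the implementation described in this paper.

```python
from dataclasses import dataclass, field

@dataclass
class Symbol:
    label: str                       # e.g. "relay", "resistor"
    bbox: tuple[int, int, int, int]  # (x_min, y_min, x_max, y_max) in pixels
    metadata: str = ""               # e.g. information from an embedded table

@dataclass
class TextBlock:
    text: str
    bbox: tuple[int, int, int, int]

@dataclass
class DigitisedDrawing:
    name: str
    symbols: list[Symbol] = field(default_factory=list)
    texts: list[TextBlock] = field(default_factory=list)
    # Topological connections stored as pairs of indices into `symbols`
    connections: list[tuple[int, int]] = field(default_factory=list)

    def connected_to(self, i: int) -> set[int]:
        """Indices of symbols directly connected to symbol i."""
        out = set()
        for a, b in self.connections:
            if a == i:
                out.add(b)
            elif b == i:
                out.add(a)
        return out
```

A structure like this supports the downstream uses mentioned in the text: `connected_to` is the kind of query a plant-wide database application would run when tracing which components are affected by another.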
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Fig. 1. HITL framework for EWD digitisation</head><p>The proposed framework is shown in figure <ref type="figure">1</ref>. It consists firstly of an attempt by any number of AI modules to parse the drawing. These modules could be as simple as a classifier which identifies individual symbols or a rule-based system governing how components can connect together, but could equally be complex holistic solutions of the type which exist for P&amp;ID diagrams, as discussed in section 2.1. They return their findings through a common interface, represented on the drawing in the same format available to the user.</p><p>The modularity provided by a swappable bank of AIs moves the focus away from individual techniques and towards the performance of the system as a whole and the interplay between human and AI. It also allows modules to be added and removed as appropriate. Early on, for example, a Convolutional Neural Network (CNN) is likely to be of extremely limited value due to its reliance on large volumes of data, but once a large dataset is available its performance is likely to be very high in comparison with other methods. We might therefore remove a CNN-based method in the early stages, and only reintroduce it once it attains a reasonable level of accuracy.</p><p>In contrast, initial work utilising a Quality-Aware Template Matching <ref type="bibr" target="#b8">[9]</ref> based approach successfully identified 20% of symbols with no false positives, and required only one example per symbol class, but additional data and experimentation provided greatly diminished returns. This suggests it might be a useful module early on, but might fall out of favour as more data becomes available, or else be relegated to a reliable check on the output of other modules. In principle, this selection and comparison of modules could be driven automatically by the accuracy metrics of the modules, but in the simplest implementation of the framework it would instead be a choice on the part of the designer, which they might revisit regularly.</p><p>The annotated drawing is then opened up to the human expert, who has the opportunity to modify the AI attempt, either by adding labels not flagged by an AI, or by modifying or deleting AI outputs they disagree with. The interface at this stage should be extremely simple and intuitive. The user's actions at this stage, and therefore the areas where AI modules were correct or incorrect, will be logged in the learning data repository, along with areas where modules disagreed with one another.</p><p>The learning data will not be a single exhaustive dataset of the type that might be used to train a machine learning system, such as labelled pictures. It will instead be a log of user actions. If the user highlights a region and labels it as a relay, the data logged would be the name of the drawing in question, the coordinates highlighted, and the label "relay". Likewise, if the user deletes or modifies an area the AI flagged as a resistor, the coordinates would be logged either as a negative example or with the label chosen by the user. By saving this log instead of a single formatted dataset, we allow creativity when deciding what data would be valuable to a new AI module, through the use of the presenter modules.</p><p>The data presenters are another modular element of the system, which allow the learning data to be parsed in many ways. A presenter in this case is a simple module which transforms a subset of the learning data into something usable by another part of the system. A simple example would be a presenter which collates all of the learning data relating to symbols, in the form of drawing name, coordinates and label, and extracts the pixels from the drawing to create a dataset for training a classifier. The presenters might be simple programs, or might themselves be more complex systems. An example of a different type of presenter is provided by the implemented subset of the system which, given a single coordinate point clicked by the user, performs simple image manipulation to find a potential area of interest, which it flags on the drawing for the user's further input. Another, more complex, presenter could take that same single coordinate and run a classifier on a sliding window around it to identify the area of interest and its type more accurately.</p><p>The current implementation consists of the subset of the framework marked "Labelling System". The lack of implemented AI modules in this version allows verification of the speed at which the operator will initially be able to digitise a drawing. By utilising the aforementioned presenter, the user can click a single point on a symbol, resulting in a best-guess bounding box drawn around it based on the centre of mass within that region. The user can then pick from a list what the symbol is, as well as adjust the bounding box, and their decisions are logged. It is then easy to add text to the symbols and draw links between them. This is a smoother process than manually segmenting each image, and results in more consistently sized images being identified. Compared to manually redrafting a drawing in a CAD format, which industrial partners suggest takes several hours, this process takes only minutes.</p></div>
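The click-to-bounding-box presenter described above can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the actual implementation: the paper does not specify the window size, ink threshold or box-fitting rule, so this sketch uses a fixed search window, recentres it once on the centre of mass of the dark "ink" pixels, and returns their tight extent.

```python
import numpy as np

def bbox_from_click(img: np.ndarray, x: int, y: int,
                    window: int = 40, ink_threshold: int = 128):
    """Best-guess bounding box around a clicked symbol.

    img is a greyscale drawing (0 = ink, 255 = paper). The search window
    around the click is recentred once on the centre of mass of the ink,
    and the tight extent of the ink in the recentred window is returned
    as (x_min, y_min, x_max, y_max), or None if no ink is found nearby.
    """
    h, w = img.shape
    for _ in range(2):                       # second pass recentres on the ink
        x0, x1 = max(0, x - window), min(w, x + window)
        y0, y1 = max(0, y - window), min(h, y + window)
        ys, xs = np.nonzero(img[y0:y1, x0:x1] < ink_threshold)
        if len(xs) == 0:
            return None                      # blank region: nothing to box
        x, y = int(xs.mean()) + x0, int(ys.mean()) + y0   # centre of mass
    return (x0 + int(xs.min()), y0 + int(ys.min()),
            x0 + int(xs.max()) + 1, y0 + int(ys.max()) + 1)
```

Because the box is fitted to the ink rather than to the click, clicking different points on the same symbol yields the same box, which is consistent with the observation that this process produces more consistently sized images than manual segmentation.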
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Conclusions and future work</head><p>The framework is still at an early stage of implementation, and requires further testing to verify the extent to which it improves the digitisation workflow. It does, however, take steps towards all three of the challenges to digitisation of engineering drawings. The quality problem is addressed by minimising the impact of quality-based failures: the user can intercept mistakes made by the AI before they reach the output, and the AI can be allowed more leniency until enough data is available to potentially overcome the poor quality.</p><p>The skewing problem is also addressed by this, but is further supported by the modularity of the framework. The class imbalance caused by the relative rarity of some symbols can eventually be addressed by the generation of additional artificial data, once enough labelled data exists, but can also be addressed in the shorter term by building smaller, more specific modules which identify only one symbol at a time.</p><p>The topology problem is in some ways the hardest to overcome, and has the least available research. It requires taking a holistic view of the document rather than dividing it into small units such as the identification of symbols or text, while also requiring a high level of fidelity in the other systems before beginning. The framework addresses this problem both by allowing the user to make connections in the early stages, before a module is written for this purpose, and through the modularity of the system, which allows identification to be easily abstracted away into another module.</p><p>In the future, the framework will be ported to other domains, initially to similar drawings such as P&amp;ID diagrams, but eventually to entirely different classes of problems. The domain of AI planning <ref type="bibr" target="#b9">[10]</ref> in particular features scheduling problems which a skilled expert must undertake. Many techniques which produce partial solutions are available, and the human-centred framework might provide an excellent way to turn these techniques into a more complete solution.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="4,134.77,281.99,345.83,217.89" type="bitmap" /></figure>
		</body>
		<back>

			<div type="funding">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Supported by the Engineering and Physical Sciences Research Council (EPSRC), the Advanced Nuclear Research Centre (ANRC) and Bruce Power.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Automatic interpretation of lines and text in circuit diagrams</title>
		<author>
			<persName><forename type="first">H</forename><surname>Bunke</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. NATO ASI</title>
				<meeting>NATO ASI<address><addrLine>Oxford</addrLine></address></meeting>
		<imprint>
			<date type="published" when="1981">1981. 1982</date>
			<biblScope unit="page" from="297" to="310" />
		</imprint>
	</monogr>
	<note>Pattern recognition theory and applications</note>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Digitisation of Assets from the Oil &amp; Gas Industry: Challenges and Opportunities</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">F</forename><surname>Moreno-Garcia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Elyan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2019 International Conference on Document Analysis and Recognition Workshops (ICDARW)</title>
				<meeting><address><addrLine>Sydney</addrLine></address></meeting>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2019-11">Nov. 2019</date>
			<biblScope unit="page" from="2" to="5" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Deep learning for symbols detection and classification in engineering drawings</title>
		<author>
			<persName><forename type="first">E</forename><surname>Elyan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Jamieson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ali-Gombe</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neural Networks</title>
		<idno type="ISSN">18792782</idno>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Symbols Classification in Engineering Drawings</title>
		<author>
			<persName><forename type="first">E</forename><surname>Elyan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">M</forename><surname>Garcia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Jayne</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the International Joint Conference on Neural Networks</title>
				<meeting>the International Joint Conference on Neural Networks</meeting>
		<imprint>
			<publisher>Institute of Electrical and Electronics Engineers Inc</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="volume">2018</biblScope>
		</imprint>
		<idno type="ISBN">9781509060146</idno>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Symbols in Engineering Drawings (SiED): An Imbalanced Dataset Benchmarked by Convolutional Neural Networks</title>
		<author>
			<persName><forename type="first">E</forename><surname>Elyan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">F</forename><surname>Moreno-García</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Johnston</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 21st EANN (Engineering Applications of Neural Networks)</title>
				<meeting>the 21st EANN (Engineering Applications of Neural Networks)</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">A Survey on Human-in-the-Loop Applications Towards an Internet of All</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">Sousa</forename><surname>Nunes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">Sa</forename><surname>Silva</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Communications Surveys and Tutorials</title>
		<idno type="ISSN">1553877X</idno>
		<imprint>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="944" to="965" />
			<date type="published" when="2015-04">Apr. 2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Shared autonomy via hindsight optimization for teleoperation and teaming</title>
		<author>
			<persName><forename type="first">S</forename><surname>Javdani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Admoni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Pellegrinelli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">S</forename><surname>Srinivasa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Bagnell</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Robotics Research</title>
		<idno type="ISSN">17413176</idno>
		<imprint>
			<biblScope unit="volume">37</biblScope>
			<biblScope unit="issue">7</biblScope>
			<biblScope unit="page" from="717" to="742" />
			<date type="published" when="2018-06">Jun. 2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Accelerating human-in-the-loop machine learning: Challenges and opportunities</title>
		<author>
			<persName><forename type="first">D</forename><surname>Xin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Macke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Song</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Parameswaran</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">DEEM&apos;18: Proceedings of the Second Workshop on Data Management for End-To-End Machine Learning</title>
				<meeting><address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2018-04">Apr. 2018</date>
			<biblScope unit="page" from="1" to="4" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">QATM: Quality-aware template matching for deep learning</title>
		<author>
			<persName><forename type="first">J</forename><surname>Cheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Abdalmageed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Natarajan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition</title>
				<meeting>the IEEE Computer Society Conference on Computer Vision and Pattern Recognition</meeting>
		<imprint>
			<date type="published" when="2019-06">June 2019</date>
			<biblScope unit="page" from="11" to="545" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">The emerging landscape of explainable automated planning &amp; decision making</title>
		<author>
			<persName><forename type="first">T</forename><surname>Chakraborti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Sreedharan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kambhampati</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IJCAI International Joint Conference on Artificial Intelligence</title>
		<imprint>
			<date type="published" when="2020-07">Jul. 2020</date>
			<biblScope unit="volume">2021</biblScope>
			<biblScope unit="page" from="4803" to="4811" />
		</imprint>
	</monogr>
	<note>International Joint Conferences on Artificial Intelligence</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
