<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Analysis of Decision Trees for Coreference Resolution Task in Ukrainian Language</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Sergiy</forename><surname>Pogorilyy</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Pavlo</forename><surname>Biletskyi</surname></persName>
						</author>
						<author>
							<affiliation key="aff0">
								<orgName type="institution">Taras Shevchenko National University of Kyiv</orgName>
								<address>
									<addrLine>60 Volodymyrska Street</addrLine>
									<postCode>01033</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff1">
								<orgName type="department">Information Technology and Implementation (IT&amp;I-2023)</orgName>
								<address>
									<addrLine>November 20-21</addrLine>
									<postCode>2023</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Analysis of Decision Trees for Coreference Resolution Task in Ukrainian Language</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">C402421F0E398E583E2AC6A08F6542A0</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T20:00+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Coreference resolution</term>
					<term>natural language processing (NLP)</term>
					<term>decision trees</term>
					<term>artificial intelligence (AI)</term>
					<term>vector words representation</term>
					<term>neural networks</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>A method of coreference resolution for the Ukrainian language based on decision trees is described and examined. The application uses ELMo text representations together with additional features as input to a decision tree. Selected details of the decision tree's structure are studied and explained. The specifics of the Ukrainian language that are important for the task of coreference resolution are discussed. The analysis of the results shows that the decision tree allows the automated creation of a logical structure that can be used to assess the coreference of a pair of objects and to create clusters of such objects.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Natural language processing (NLP) is a large field that includes many tasks: translation between natural languages, human-computer interfaces, analysis and generation of natural speech, information extraction. One of the tasks of natural language processing is coreference resolution. Coreferentiality in texts means a relationship between syntactic units indicating the same object (referent) in a given context <ref type="bibr" target="#b0">[1]</ref>. Below are examples of coreference (the referent is highlighted in bold, the pronouns are underlined).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Simple anaphora example:</head><p>He crossed the mountain. It was high.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Simple cataphora example:</head><p>She took the road to the right. Mary was in a good mood today.</p><p>An example of a compound antecedent: John, David and Julia were tired. They all worked underground.</p><p>In comparison with many other European languages (English, German, French, Italian, Spanish), the Ukrainian language has an arbitrary word order, like other Slavic languages (Polish, Russian, Serbian, Croatian). For example, in Ukrainian:</p><p>Karpo prykynuv take slivtse, shcho batko perestav struhaty i pochav pryslukhatys. Vin hlianuv na syniv cherez khvorostianu stinu. Syny stoialy bez dila y balakaly, pospyravshys na zastupy (Ivan Nechui-Levytskyi, «Kaidasheva simia», original word order).</p><p>The words of the second sentence in the example can be rearranged into other grammatically correct sentences with very close meanings but different emphasis, depending on the author's intention: «Cherez khvorostianu stinu vin hlianuv na syniv», «Na syniv vin hlianuv cherez khvorostianu stinu», «Cherez stinu khvorostianu vin hlianuv na syniv», «Cherez khvorostianu stinu hlianuv vin na syniv», «Cherez khvorostianu stinu na syniv hlianuv vin».</p><p>At the same time, only one combination is possible in English: «He looked at his sons through the twig wall», because English uses the standard subject-verb-object (SVO) word order. In other languages, the subject-object-verb (SOV) order is also common. Therefore, an algorithm for the Ukrainian language should be able to work with different word orders.</p><p>Coreference resolution allows finding connections between and within sentences, extracting information from texts, and improving the results of text analysis in other tasks, such as translation from one language to another, dependency parsing, named entity recognition, and assessment of the coherence of texts. At the initial stages of research, algorithms based on rules manually formed by experienced linguists were used to find coreferent objects. Such algorithms were created for a specific language and had to take many features into account to achieve high results.</p><p>Over time, automated approaches such as neural networks and decision trees began to be used to solve the problem. They do not require manual rule creation, but they require large data sets for training. Nevertheless, automated algorithms often use simple rules to form initial clusters.</p><p>In previous works, such as <ref type="bibr" target="#b1">[2]</ref>, <ref type="bibr" target="#b2">[3]</ref>, the use of decision trees for coreference resolution was considered and applied to the MUC-6 dataset <ref type="bibr" target="#b1">[2]</ref> for English and to CoNLL-2012 <ref type="bibr" target="#b2">[3]</ref> for Arabic, Chinese and English. The application of decision trees to a Ukrainian-language dataset containing more than 360,000 words was first introduced in our previous article <ref type="bibr" target="#b3">[4]</ref>. This article examines the decision tree structure in more detail: the ELMo text representations and additional features used in the application are described, selected details of the decision tree's structure are studied and explained, and the specifics of the Ukrainian language that are important for coreference resolution are discussed.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Means used to implement the coreference resolution algorithm</head><p>The application uses vector representations of words obtained with the ELMo library <ref type="bibr" target="#b4">[5]</ref>. This library converts words into vectors corresponding to their semantic, lexical, and syntactic meaning. Unlike other libraries that form vector representations of words, such as Word2Vec <ref type="bibr" target="#b5">[6]</ref>, the representations formed by ELMo take into account not only the meaning of a single word but also the meaning of the surrounding words, which makes it easier to find connections between individual words and sentences. ELMo uses neural networks to obtain vector representations of words and requires training; the version used in the application is adapted for the Ukrainian language.</p><p>The Scikit-learn library <ref type="bibr" target="#b6">[7]</ref> is also used. It includes many tools for solving regression, clustering and classification problems; in particular, it contains an optimized implementation of decision trees with configurable tree-construction parameters.</p><p>One of the main advantages of decision trees over other algorithms is the ability to visualize the constructed tree. This allows analysis of the created tree and its internal decision logic. The decisions automatically made by the tree at each of its steps also help to improve the analysis of the data, find dependencies between parameters, determine the limit of overfitting, and adjust the parameters for forming the decision tree. Furthermore, after the analysis is completed, it is possible to make changes to the decision tree structure and see whether the changes improve the result. For this, the Graphviz library [8], which is developed specifically for graph visualization, was used.</p><p>A prepared corpus of Ukrainian-language texts containing more than 360,000 words (&gt; 2,500 texts) was used to create the decision tree. The marking of coreference objects in it was carried out manually; additional information (gender, number, the lemmatized (initial) form of the word) was obtained with the UDpipe library <ref type="bibr" target="#b7">[9]</ref>, which uses neural networks and is trained on a Ukrainian text corpus.</p></div>
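The visualization step described above can be sketched as follows. This is an illustrative stand-in, not the paper's code: the classifier, the toy data, and the feature-name subset are hypothetical; only the general scikit-learn-to-Graphviz workflow is shown.

```python
# Sketch: exporting a fitted scikit-learn decision tree as DOT source,
# which the Graphviz toolchain can render. Data and names are illustrative.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_graphviz

# Toy stand-in for the real pair-feature data (hypothetical).
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
clf = DecisionTreeClassifier(min_impurity_decrease=0.01, random_state=0).fit(X, y)

# export_graphviz with out_file=None returns the DOT source as a string.
dot = export_graphviz(
    clf,
    feature_names=["cosSim", "nWBtw", "lemS", "gendS"],  # illustrative subset
    class_names=["non-coref", "coref"],
    filled=True,
)
print(dot.splitlines()[0])  # first line of the DOT source
```

The DOT string can then be rendered to an image with the `graphviz` package or the `dot` command-line tool.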
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Data format for representing coreference objects and their analysis</head><p>To analyze texts using decision trees, they need to be presented in a suitable form. The following characteristics are used in this work to describe coreference objects (considered in the article <ref type="bibr" target="#b7">[9]</ref>):</p><p>• Cosine similarity of the vectors of the semantic representation of the objects under consideration. A pre-trained ELMo model was used for this; if an object contains more than one word, the arithmetic mean of the word vectors is used.</p><p>• The number of words between the selected objects.</p><p>• The number of objects between the selected potentially coreferent objects.</p><p>• Boolean, true if the first object is a pronoun.</p><p>• Boolean, true if the second object is a pronoun.</p><p>• Boolean, true if the lemmatized versions (initial word forms) of the objects match. In the algorithm, matching of lemmatized versions is defined as the matching of at least one word in both objects under consideration.</p><p>• Boolean, true if both objects have the same number (singular or plural).</p><p>• Boolean, true if both objects have the same gender.</p><p>• Boolean, true if both objects are proper names.</p><p>The input of the algorithm is a text consisting of individual words, punctuation, and additional information prepared with the UDpipe library: gender, number, the lemmatized form of the word, and the part of speech. For words included in coreference groups, a manually prepared identifier is also added, which assigns a specific word or word combination to a coreference group.</p><p>For the operation of the algorithm, a Python list is formed for each text under consideration, containing the indexes of the words included in the word combinations that are potentially coreferent objects, before combining them into coreference clusters, in a format that facilitates their subsequent union (1); an element of this list may be a separate word or a word combination.</p><p>(1)</p><p>Lists containing correctly formed clusters of coreference objects are also created. These lists are needed at the stage of comparing the clusters obtained from the predictions of the decision tree with the valid clusters (2); each element is a separate cluster.</p><formula xml:id="formula_0"><label>2</label></formula><p>Since the clustering task is reduced in the algorithm to a classification task, lists with the parameters of each pair of potentially coreferent objects (3) are needed to create decision trees and predictions with them. These include: cosSim, the cosine similarity of the objects; nWBtw, the number of words between the objects under consideration; nObjBetw, the number of objects between the objects under consideration; len1, the length (number of words) of the first object; len2, the length of the second object; 1pron, whether the first object is a pronoun; 2pron, whether the second object is a pronoun; 1prp, whether the first object is a proper name; 2prp, whether the second object is a proper name; lemS, whether the lemmatized versions of the objects match; gendS, whether the objects have the same gender; and numS, whether the objects have the same number.</p><p>(3)</p><p>Each pair of objects is denoted as x (4).</p><formula xml:id="formula_1"><label>4</label></formula><p>Labels indicating whether the pair of objects under consideration is coreferent are also required to check the algorithm (5).</p><p>(5)</p><p>After creating a decision tree, when it is used for predictions on list (1), a list (6) containing the predicted clusters is generated.</p></div>
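The per-pair feature vector described above can be sketched as follows. The object representation (a dictionary) and the helper names are hypothetical illustrations, not the paper's code; the twelve features follow the list in the text.

```python
# Sketch of assembling the per-pair feature vector (3); the object
# representation and helper names here are hypothetical.
import numpy as np

def cos_sim(v1, v2):
    """Cosine similarity of two (mean-pooled) word vectors."""
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))

def pair_features(obj1, obj2, n_words_between, n_objs_between):
    """obj = dict with keys: vecs (word vectors), lemmas, pron, prp, gender, number."""
    v1 = np.mean(obj1["vecs"], axis=0)  # arithmetic mean for multi-word objects
    v2 = np.mean(obj2["vecs"], axis=0)
    return [
        cos_sim(v1, v2),                       # cosSim
        n_words_between,                       # nWBtw
        n_objs_between,                        # nObjBetw
        len(obj1["lemmas"]),                   # len1
        len(obj2["lemmas"]),                   # len2
        int(obj1["pron"]),                     # 1pron
        int(obj2["pron"]),                     # 2pron
        int(obj1["prp"]),                      # 1prp
        int(obj2["prp"]),                      # 2prp
        int(bool(set(obj1["lemmas"]) & set(obj2["lemmas"]))),  # lemS: >=1 shared lemma
        int(obj1["gender"] == obj2["gender"]),                 # gendS
        int(obj1["number"] == obj2["number"]),                 # numS
    ]

o1 = {"vecs": [[1.0, 0.0]], "lemmas": ["karpo"], "pron": False, "prp": True,
      "gender": "m", "number": "sing"}
o2 = {"vecs": [[1.0, 0.0]], "lemmas": ["karpo"], "pron": False, "prp": True,
      "gender": "m", "number": "sing"}
x = pair_features(o1, o2, n_words_between=5, n_objs_between=1)
print(x[0], x[9])  # identical vectors and matching lemmas: 1.0 1
```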
<div xmlns="http://www.tei-c.org/ns/1.0"><head>(6)</head><p>The resulting lists with the characteristics of pairs of coreference objects (3), as well as their labeling (5), containing 2,400,000 samples of coreferent and non-coreferent object pairs, are divided into two parts: the first (1500 texts, ~60%) is used to form the decision tree, and the second (1015 texts, ~40%) is used to check the effectiveness of the algorithm (analysis of the obtained results).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Algorithm for coreference resolution using decision trees</head><head n="4.1.">Formation of the decision tree</head><p>The decision tree is implemented using the sklearn library, which provides the decision tree class sklearn.tree.DecisionTreeClassifier. When creating an instance of the class, the parameters of the decision tree are specified: the criterion for splitting into subtrees during formation, the maximum depth of the tree, the minimum number of elements required for a split, the minimum number of elements in one leaf of the tree, weights for the expected classes, and others. By default, the parameters of the tree are chosen to obtain an error-free configuration of the tree on the sample used for its formation. This approach leads to excessive adaptation of the tree to the data used in its formation (overfitting) and reduces the accuracy on sets that were not previously analyzed by the algorithm. Moreover, for large volumes of data, such a configuration creates an excessively large tree whose formation takes a long time.</p><p>Thus, it becomes necessary to limit the size of the tree in order to achieve better results on the data that were not used to form it. For this, the parameter min_impurity_decrease was used, which determines the minimum sufficient decrease in impurity that a split must produce. Unlike other ways of limiting the size of the tree, such as limiting the depth or the number of elements in the leaves, this parameter limits the size of the tree more evenly. For the final version of the decision tree, min_impurity_decrease was set to 0.000003. To form the tree, the prepared data of pairs of potentially coreferent objects (3) with labels of whether they are coreferent (5) are submitted to the fit function. The output is a tree capable of classifying pairs of candidate coreference objects.
Thus, the problem of clustering coreference objects is reduced to a decision tree classification problem. Part of the resulting tree is shown in Fig. <ref type="figure" target="#fig_0">1</ref>. For illustration purposes, the depth of the tree is artificially limited so that the tree fits on the screen. As can be seen from the figure, the division into coreferent and non-coreferent objects begins with the characteristic that best separates the current group of objects: the coincidence of the lemmatized versions of the objects. Further, thanks to the use of decision trees, the following logic can be traced in a subtree (Fig. <ref type="figure">2</ref>): if the lemmatized versions of the objects match, the lengths of the first and second objects are 1, and the first object is a proper name, then with high probability the pair of objects is coreferent. Further filtering finds more coreferent objects: in Fig. <ref type="figure">3</ref>, if the first object is not a proper noun but a pronoun, it is considered coreferent to the second object. In Fig. <ref type="figure">4</ref>, if the first object is a proper noun and its length is 2, it is also considered coreferent. Decision trees also allow filtering out negative coreference examples, as shown in Fig. <ref type="figure" target="#fig_2">5</ref>, which presents the subtree entered right after the initial split when the lemmatized versions of the objects do not match: if the second object is not a pronoun and the cosine similarity of the objects is lower than 0.426, then with high probability the objects are not coreferent. The entire decision tree is too large to show in a figure; to limit its size, the parameter min_impurity_decrease was set to 0.00005. The resulting tree is shown in Fig. <ref type="figure" target="#fig_3">6</ref>. 
This variant of the tree shows performance similar to the tree presented in the results on the B³ metric, but on the MUC metric its recall is lower (19.24), which also decreases the F1 score to 30.77 percent (in comparison with the results from Table 1). </p></div>
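The tree-formation step above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's code: the data is synthetic (the real pair features and labels are not reproduced here); only the min_impurity_decrease value and the approximate 60/40 split are taken from the text.

```python
# Minimal sketch of forming and applying the classifier, with synthetic data
# standing in for the real pair features (3) and coreference labels (5).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the 12 pair features and binary labels.
X, y = make_classification(n_samples=5000, n_features=12, random_state=0)
# ~60% of the data forms the tree, ~40% is held out, mirroring the paper's split.
X_form, X_check, y_form, y_check = train_test_split(
    X, y, test_size=0.4, random_state=0
)

clf = DecisionTreeClassifier(min_impurity_decrease=0.000003, random_state=0)
clf.fit(X_form, y_form)          # submit features with labels to fit
pred = clf.predict(X_check)      # 1 = pair classified as coreferent
print(pred.shape)
```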
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Using a decision tree for coreference resolution</head><p>After the decision tree is formed, it is used to obtain clusters of coreference objects: pairs of candidate coreference objects (1) are considered. For each pair of objects, the parameters (3) necessary for classification by the decision tree are determined. If the pair is classified as coreferent, the objects are merged. After the formation of clusters containing several objects, two clusters are merged if at least one pair of objects from the first and second clusters is recognized as coreferent. During the experimental studies, it was found that when several cycles of passes are used, repeated while cluster merging is still possible, the results on the metrics (next section) were 2-5% higher than without cyclic passes; therefore, the final version of the algorithm uses several passes, continuing while each cycle produces at least one merge. As a result of the algorithm, a list of clusters containing coreference objects within common clusters is formed (6).</p></div>
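The cyclic merging pass described above can be sketched as follows. The decision tree is replaced here by a hypothetical `is_coref(a, b)` predicate; in the real algorithm this call would compute the pair parameters (3) and query the tree.

```python
# Sketch of cyclic cluster merging: two clusters are merged if any pair of
# objects drawn from them is classified as coreferent; passes repeat while
# at least one merge happened. `is_coref` is a hypothetical stand-in for
# the decision tree's classification of a pair.
def merge_clusters(clusters, is_coref):
    merged = True
    while merged:                  # repeat passes while a merge occurred
        merged = False
        i = 0
        while i < len(clusters):
            j = i + 1
            while j < len(clusters):
                if any(is_coref(a, b) for a in clusters[i] for b in clusters[j]):
                    clusters[i] = clusters[i] + clusters[j]
                    del clusters[j]
                    merged = True
                else:
                    j += 1
            i += 1
    return clusters

# Toy predicate: objects are coreferent if they share the same lowercase form.
coref = lambda a, b: a.lower() == b.lower()
print(merge_clusters([["Karpo"], ["karpo"], ["syny"]], coref))
# → [['Karpo', 'karpo'], ['syny']]
```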
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Analysis of the obtained results</head><p>The B³ <ref type="bibr" target="#b9">[11]</ref> and MUC <ref type="bibr" target="#b10">[12]</ref> metrics are used to evaluate the results obtained with the coreference resolution algorithm. These metrics compare two groups of clusters, the correct clustering of coreference objects (2) and the predicted clustering (6), numerically reflecting the difference between them. For each metric, the precision (the ratio of correctly selected objects to all selected objects), recall (the ratio of correctly selected objects to all objects belonging to the cluster), and F1 measure (the harmonic mean of precision and recall) are shown.</p><p>The B³ metric is used in a wide range of clustering problems. It considers the individual elements in the list of predicted clusters, over which an integral value is calculated. Precision for the B³ metric is defined as the arithmetic mean of the precision for each element (7), where n is the number of the selected element, N is the number of elements in the list, the numerator is the number of elements that belong to the same coreference group as the selected element (including the selected element) and are included in the predicted cluster, and the denominator is the total number of elements in the predicted cluster.</p><p>(7)</p><p>Recall for the B³ metric is defined as the arithmetic mean of the recall for each element (8), where n is the number of the selected element, N is the number of elements in the list, the numerator is the number of elements that belong to the same coreference group as the selected element (including the selected element) and are included in the expected cluster, and the denominator is the total number of elements in the same group as the selected element.</p><p>(8)</p><p>The MUC metric is specially designed to evaluate the performance of algorithms that solve the coreference resolution task. It considers the entire list of clusters. For MUC, recall is determined by formula (9), using the number of elements in each true coreference cluster and the number of subgroups into which the true cluster is divided by the predicted clusters. Precision is defined by formula (10), using the number of elements in each predicted coreference cluster and the number of subgroups into which the predicted cluster is divided by the true clusters.</p><p>(9)</p><p>(10)</p><p>For each metric, the F1 measure is calculated as the harmonic mean of recall R and precision P.</p><p>The evaluation of the obtained decision tree is performed on the part of the corpus that was not used during the tree's formation (1015 texts). During the experimental research, the optimal value of the min_impurity_decrease parameter was determined, for which the highest results on the B³ and MUC metrics were achieved. The results of a comparison of the decision tree algorithm <ref type="bibr" target="#b3">[4]</ref> with other algorithms used for the analysis of Ukrainian-language texts <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b8">10]</ref> are shown in Table 1. On the B³ metric, the decision tree shows the highest results compared to the other approaches. On the MUC metric, the decision tree approach shows significantly higher results than those achieved with the convolutional neural network (CNN) <ref type="bibr" target="#b8">[10]</ref> and the RoBERTa language model <ref type="bibr" target="#b0">[1]</ref>, and results similar to a bidirectional long short-term memory network (BiLSTM) <ref type="bibr" target="#b8">[10]</ref>. It is worth noting that the precision of this decision tree on the MUC and B³ metrics is the highest compared to the other approaches, while its recall is similar to the single-pass BiLSTM variant. The parameters of decision tree formation make it possible to increase precision by reducing recall or vice versa, thus adapting to cases where the accuracy of the found coreference objects (precision) or finding the majority of coreference objects (recall) is more important.</p></div>
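The B³ and MUC scores described above can be sketched as follows. This is a standard textbook-style implementation of the two metrics, not the paper's evaluation code; it assumes every mention appears in both clusterings.

```python
# Sketch of the B3 and MUC scores over gold and predicted cluster lists.
def b_cubed(gold, pred):
    """B3 precision/recall; gold, pred: lists of clusters (lists of mention ids)."""
    gmap = {m: set(c) for c in gold for m in c}   # mention -> its gold cluster
    pmap = {m: set(c) for c in pred for m in c}   # mention -> its predicted cluster
    mentions = [m for c in gold for m in c]
    prec = sum(len(gmap[m] & pmap[m]) / len(pmap[m]) for m in mentions) / len(mentions)
    rec = sum(len(gmap[m] & pmap[m]) / len(gmap[m]) for m in mentions) / len(mentions)
    return prec, rec

def muc(gold, pred):
    """MUC precision/recall via the link-based partition counts."""
    def score(keys, resp):
        # sum over key clusters S of (|S| - |partitions of S by resp|) / (|S| - 1)
        num = den = 0
        for s in keys:
            parts = {frozenset(set(s) & set(r)) for r in resp if set(s) & set(r)}
            parts |= {frozenset([m]) for m in s if not any(m in r for r in resp)}
            num += len(s) - len(parts)
            den += len(s) - 1
        return num / den if den else 0.0
    return score(pred, gold), score(gold, pred)   # (precision, recall)

gold = [[1, 2, 3], [4, 5]]
pred = [[1, 2], [3], [4, 5]]
print(b_cubed(gold, pred))  # precision 1.0, recall 11/15
print(muc(gold, pred))      # precision 1.0, recall 2/3
```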
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusions</head><p>In this work, the usage of decision trees for coreference resolution in Ukrainian-language texts is considered. The specifics of the Ukrainian language and its differences from other languages relevant to coreference resolution were reviewed. Details of the structure of the decision tree were examined. The structure of the decision tree reveals simple rules that allow the separation and filtering of coreferent and non-coreferent objects. A formed decision tree allows deeper analysis of its logic in comparison with other automated algorithms, such as neural networks. Furthermore, the analysis of incorrectly classified objects may reveal additional features that could improve the quality of classification. After formation, decision trees require much less computational resources than algorithms based on neural networks, which increases clustering speed.</p><p>The obtained results and the analysis of the internal logic show that decision trees may be used for coreference resolution in the Ukrainian language.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Subtree of the decision tree (1)</figDesc><graphic coords="4,90.50,458.65,413.75,204.45" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :Figure 3 :Figure 4 :</head><label>234</label><figDesc>Figure 2: Subtree of the decision tree (2)</figDesc><graphic coords="5,125.30,143.05,344.15,188.15" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Subtree of the decision tree (5)</figDesc><graphic coords="6,119.50,115.90,355.95,168.75" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: Decision tree</figDesc><graphic coords="6,72.00,308.15,450.95,360.95" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Comparison of algorithm performance on the B³ and MUC metrics.</figDesc><table><row><cell>Model</cell><cell>Metric</cell><cell>MUC</cell><cell>B³</cell></row><row><cell>CNN</cell><cell>Precision</cell><cell>24.23</cell><cell>97.88</cell></row><row><cell></cell><cell>Recall</cell><cell>12.45</cell><cell>84.99</cell></row><row><cell></cell><cell>F1</cell><cell>16.44</cell><cell>92.11</cell></row><row><cell>BiLSTM</cell><cell>Precision</cell><cell>56.91</cell><cell>95.94</cell></row><row><cell>(Single-pass)</cell><cell>Recall</cell><cell>30.20</cell><cell>88.76</cell></row><row><cell></cell><cell>F1</cell><cell>29.46</cell><cell>92.21</cell></row><row><cell>BiLSTM</cell><cell>Precision</cell><cell>56.36</cell><cell>93.13</cell></row><row><cell>(multiple-passes)</cell><cell>Recall</cell><cell>39.68</cell><cell>90.43</cell></row><row><cell></cell><cell>F1</cell><cell>45.88</cell><cell>91.76</cell></row><row><cell>RoBERTa</cell><cell>Precision</cell><cell>27.39</cell><cell>91.22</cell></row><row><cell></cell><cell>Recall</cell><cell>13.10</cell><cell>89.65</cell></row><row><cell></cell><cell>F1</cell><cell>17.72</cell><cell>90.43</cell></row><row><cell>Decision tree</cell><cell>Precision</cell><cell>73.46</cell><cell>98.15</cell></row><row><cell></cell><cell>Recall</cell><cell>29.08</cell><cell>88.14</cell></row><row><cell></cell><cell>F1</cell><cell>41.67</cell><cell>92.87</cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Usage of a graphics processor to accelerate coreference resolution while using the RoBERTa model</title>
		<author>
			<persName><forename type="first">S</forename><surname>Pogorilyy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Biletskyi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Scientific works of DonNTU, Series: Informatics, cybernetics and computer technology</title>
				<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="4" to="9" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Coreference Resolution Using Decision Trees</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Dzunic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Momcilovic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Todorovic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Stankovic</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">8th Seminar on Neural Network Applications in Electrical Engineering</title>
				<imprint>
			<date type="published" when="2006">2006</date>
			<biblScope unit="page" from="109" to="114" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Latent Trees for Coreference Resolution</title>
		<author>
			<persName><forename type="first">E</forename><surname>Fernandes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Nogueira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Milidiú</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computational Linguistics</title>
		<imprint>
			<biblScope unit="volume">40</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="801" to="835" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Coreference resolution algorithm for Ukrainian-language texts using decision trees</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">D</forename><surname>Pogorilyy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">V</forename><surname>Biletskyi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Problems in programming</title>
				<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="85" to="91" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Deep contextualized word representations</title>
		<author>
			<persName><forename type="first">M</forename><surname>Peters</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Neumann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Iyyer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gardner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Clark</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zettlemoyer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</title>
				<meeting>the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="2227" to="2237" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Distributed Representations of Words and Phrases and their Compositionality</title>
		<author>
			<persName><forename type="first">T</forename><surname>Mikolov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Sutskever</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Corrado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Dean</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 26th International Conference on Neural Information Processing Systems</title>
				<meeting>the 26th International Conference on Neural Information Processing Systems</meeting>
		<imprint>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="3111" to="3119" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<ptr target="https://scikit-learn.org/" />
		<title level="m">Scikit-learn library</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<ptr target="https://lindat.mff.cuni.cz/services/udpipe/" />
		<title level="m">UDpipe library</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">The complex method of coreferent clusters detection based on a BiLSTM neural network</title>
		<author>
			<persName><forename type="first">S</forename><surname>Telenyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Pogorilyy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kramov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Knowledge-Based Systems</title>
		<imprint>
			<biblScope unit="page" from="205" to="210" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Algorithms for Scoring Coreference Chains</title>
		<author>
			<persName><forename type="first">A</forename><surname>Bagga</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Baldwin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The First International Conference on Language Resources and Evaluation Workshop on Linguistics Coreference</title>
				<imprint>
			<date type="published" when="1998">1998</date>
			<biblScope unit="page" from="563" to="566" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">A Model-Theoretic Coreference Scoring Scheme</title>
		<author>
			<persName><forename type="first">M</forename><surname>Vilain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Burger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Aberdeen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Connolly</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Hirschman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 6th Conference on Message Understanding (MUC)</title>
				<meeting>the 6th Conference on Message Understanding (MUC)</meeting>
		<imprint>
			<date type="published" when="1995">1995</date>
			<biblScope unit="page" from="45" to="52" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
