<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">An Intelligent Detection Method for Irony and Stereotype Based on Hybrid Neural Networks: Notebook for PAN at CLEF 2022</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Zexian</forename><surname>Yang</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Foshan University</orgName>
								<address>
									<settlement>Foshan</settlement>
									<country key="CN">China</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="institution">CLEF 2022 - Conference and Labs of the Evaluation Forum</orgName>
								<address>
									<addrLine>September 5-8, 2022</addrLine>
									<settlement>Bologna</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Ma</forename><surname>Li</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Foshan University</orgName>
								<address>
									<settlement>Foshan</settlement>
									<country key="CN">China</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="institution">CLEF 2022 - Conference and Labs of the Evaluation Forum</orgName>
								<address>
									<addrLine>September 5-8, 2022</addrLine>
									<settlement>Bologna</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author role="corresp">
							<persName><forename type="first">Wenyin</forename><surname>Yang</surname></persName>
							<email>cswyyang@163.com</email>
							<affiliation key="aff0">
								<orgName type="institution">Foshan University</orgName>
								<address>
									<settlement>Foshan</settlement>
									<country key="CN">China</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="institution">CLEF 2022 - Conference and Labs of the Evaluation Forum</orgName>
								<address>
									<addrLine>September 5-8, 2022</addrLine>
									<settlement>Bologna</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Qidi</forename><surname>Lao</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Foshan University</orgName>
								<address>
									<settlement>Foshan</settlement>
									<country key="CN">China</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="institution">CLEF 2022 - Conference and Labs of the Evaluation Forum</orgName>
								<address>
									<addrLine>September 5-8, 2022</addrLine>
									<settlement>Bologna</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Zhenlin</forename><surname>Tan</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Foshan University</orgName>
								<address>
									<settlement>Foshan</settlement>
									<country key="CN">China</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="institution">CLEF 2022 - Conference and Labs of the Evaluation Forum</orgName>
								<address>
									<addrLine>September 5-8, 2022</addrLine>
									<settlement>Bologna</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">An Intelligent Detection Method for Irony and Stereotype Based on Hybrid Neural Networks: Notebook for PAN at CLEF 2022</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">2614D269915B348E05B8AB80702B880C</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T03:24+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Author Profiling</term>
					<term>Irony and Stereotype Spreaders</term>
					<term>Bi-LSTM</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>For the task of Profiling Irony and Stereotype Spreaders on Twitter [1,2], this paper proposes a deep learning model that combines an RNN and a CNN. A special RNN variant is used to capture long-term dependencies in context, and a CNN is used to further extract relational features. Given a number of tweets per author, the task is to classify each author as ironic or non-ironic, i.e., to judge whether the author uses irony to spread stereotypes (ISS); it is therefore a binary classification task. After training and predicting on the task datasets provided by PAN 22, the accuracy of the model, as announced by the organizers, is about 0.9056.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>With the emergence of new technologies such as big data and cloud computing, online social platforms have matured, and people can express personal opinions on them more freely. However, some users freely publish inflammatory remarks that harm the stable development of the country, social stability, and the physical and mental health of others, and such remarks can cause serious harm to individuals or to society as a whole <ref type="bibr" target="#b2">[3]</ref>. Social platforms therefore design algorithms to identify whether a user's speech is excessive, inciting, hateful, or otherwise subject to restriction <ref type="bibr" target="#b3">[4]</ref>. Expression, however, evolves along with the technology: once platforms restrict overtly excessive speech, people turn to metaphorical and subtle language that conveys the opposite of its literal meaning, that is, irony. Offensive irony is used to ridicule and belittle its victims, causing real psychological harm. Given the huge volume of daily posts on social platforms, manually detecting such ironic remarks is time-consuming, expensive, and inefficient, so an algorithm that automatically identifies ironic speech is needed <ref type="bibr" target="#b4">[5]</ref>.</p><p>The Profiling Irony and Stereotype Spreaders on Twitter task at PAN 2022 asks whether an author is likely to spread ironic remarks. After preprocessing the datasets with a custom function, this paper proposes a model composed of a Bidirectional Long Short-Term Memory network (Bi-LSTM) and a Convolutional Neural Network (CNN) <ref type="bibr" target="#b5">[6]</ref>. A TextVectorization layer splits the text on spaces and assigns each segmented word an integer value, building a dictionary in which every word from the training set is a key mapped to its value. The positive-integer sequences produced by TextVectorization are then mapped into a 120-dimensional word embedding layer, and the resulting data is fed into the designed model to obtain the final result.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Datasets</head><p>The Profiling Irony and Stereotype Spreaders on Twitter task provides a training set and a test set, summarized in Table <ref type="table" target="#tab_0">1</ref>. Both sets consist of XML files; each XML file corresponds to one author and contains 200 of that author's tweets. The official training set also includes a ground-truth file that assigns each author's XML file a label of I or NI. </p></div>
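The dataset statistics follow directly from the 200-tweets-per-author format. A quick sanity check (the counts are taken from Table 1):

```python
# Sanity-check the dataset statistics from Table 1:
# each XML file holds one author's 200 tweets.
TWEETS_PER_AUTHOR = 200

datasets = {
    "training": {"authors": 420, "tweets": 84000},
    "test": {"authors": 180, "tweets": 36000},
}

for name, stats in datasets.items():
    expected = stats["authors"] * TWEETS_PER_AUTHOR
    assert expected == stats["tweets"], name
    print(f"{name}: {stats['authors']} authors x {TWEETS_PER_AUTHOR} = {expected} tweets")
```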
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Irony and Stereotype Evangelist Identification Model Structure</head><p>The neural network model proposed in this paper realizes the discrimination task. It consists of a TextVectorization layer, an embedding layer, a Bi-LSTM layer, a convolutional layer, and fully connected layers. The network structure is shown in Figure <ref type="figure" target="#fig_0">1</ref>. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">TextVectorization Layer</head><p>The preprocessed data is passed into the TextVectorization layer of the model. This layer takes the processed XML file data, splits the text on spaces, and maps each word to an integer, producing the required integer sequences. Besides the learned vocabulary, the dictionary built by TextVectorization contains an empty token used as padding (applied when a sentence is shorter than the fixed length) and an unknown token (UNK) used for any word that does not exist in the dictionary. This layer prepares the data for the word embedding layer's mapping.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Word Embedding Layer</head><p>The embedding layer <ref type="bibr" target="#b6">[7]</ref> acts as a lookup table that maps integer indices (representing specific words) to dense vectors: it receives integers as input, looks them up in an internal dictionary, and returns the associated vectors. The integer tensors produced by the previous layer are mapped to 120-dimensional vectors, overcoming the limitation that integer encodings cannot express the relationships between words.</p></div>
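The two layers above can be sketched outside Keras. The sketch below is illustrative only: the helper names (`build_vocab`, `vectorize`) and the tiny corpus are our own, and the embedding matrix is random rather than trained. It builds a dictionary with padding at index 0 and UNK at index 1, maps whitespace-split words to integer sequences, and looks the integers up in a 120-dimensional embedding table, as the paper describes:

```python
import numpy as np

PAD, UNK = 0, 1   # padding for short sentences; UNK for out-of-vocabulary words
EMBED_DIM = 120   # the paper maps each word to a 120-dimensional vector

def build_vocab(texts):
    """Assign one integer per distinct whitespace-separated word."""
    vocab = {}
    for text in texts:
        for word in text.split():
            vocab.setdefault(word, len(vocab) + 2)  # indices 0/1 are reserved
    return vocab

def vectorize(text, vocab, seq_len):
    """Map words to integers, truncating or padding to seq_len."""
    ids = [vocab.get(w, UNK) for w in text.split()][:seq_len]
    return ids + [PAD] * (seq_len - len(ids))

train_texts = ["what a lovely day", "oh great another monday"]  # toy corpus
vocab = build_vocab(train_texts)

rng = np.random.default_rng(0)
embedding = rng.normal(size=(len(vocab) + 2, EMBED_DIM))  # lookup table

ids = vectorize("what a great surprise", vocab, seq_len=6)
vectors = embedding[ids]   # one dense 120-d vector per token
print(ids)                 # "surprise" is unseen -> UNK; the tail is padded
print(vectors.shape)
```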
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Bi-LSTM Layer</head><p>LSTM <ref type="bibr" target="#b7">[8]</ref> is a special model within the family of recurrent neural networks (RNNs); it mitigates the long-range context dependence problem of plain RNNs and is well suited to sequential data. Its structure is shown in Figure <ref type="figure" target="#fig_1">2</ref>. Since an LSTM can only use historical data, it cannot exploit future information; a forward LSTM and a backward LSTM are therefore combined into a Bi-LSTM <ref type="bibr" target="#b8">[9]</ref>. The same input sequence is fed into the forward and backward LSTMs, and the hidden layers of the two networks are connected, so that the model can use both historical and future information. An LSTM cell <ref type="bibr" target="#b10">[10]</ref> comprises four parts: the input (memory) gate i, the forget gate f, the output gate o, and the cell state c. The computation proceeds as follows:</p><p>(A) Decide which information to forget: from the previous hidden state h_{t-1} and the current input word x_t, compute the forget gate f_t. The formula is as follows:</p><formula xml:id="formula_0">f_t = σ(W_f x_t + U_f h_{t-1} + b_f)<label>(1)</label></formula><p>(B) Decide which information to memorize: from h_{t-1} and x_t, compute the input gate i_t and the candidate cell state c̃_t. The formulas are as follows:</p><formula xml:id="formula_1">i_t = σ(W_i x_t + U_i h_{t-1} + b_i) (2) c̃_t = tanh(W_c x_t + U_c h_{t-1} + b_c)<label>(3)</label></formula><p>(C) From the input gate i_t, the forget gate f_t, and the candidate cell state c̃_t, update the cell state c_t at the current moment. The formula is as follows:</p><formula xml:id="formula_2">c_t = f_t × c_{t-1} + i_t × c̃_t<label>(4)</label></formula><p>(D) From the previous hidden state h_{t-1}, the current input word x_t, and the current cell state c_t, compute the output gate o_t and the current hidden state h_t. The formulas are as follows:</p><formula xml:id="formula_3">o_t = σ(W_o x_t + U_o h_{t-1} + b_o) (5) h_t = o_t × tanh(c_t)<label>(6)</label></formula><p>(E) Finally, the Bi-LSTM runs a forward and a backward LSTM whose hidden states h_t^fwd and h_t^bwd are concatenated to give the Bi-LSTM output <ref type="bibr" target="#b8">[9]</ref> at time t:</p><formula xml:id="formula_4">h_t = [h_t^fwd ; h_t^bwd]<label>(7)</label></formula><p>In the formulas, W and U denote weight matrices and b denotes a bias vector.</p></div>
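Equations (1)-(7) can be traced with a minimal NumPy cell. The sketch below is didactic only: the dimensions and weights are our own random toy values, not the paper's trained model, and for brevity the backward pass reuses the forward weights (a real Bi-LSTM learns a separate set). Running the cell over the reversed sequence and concatenating the realigned hidden states gives the Bi-LSTM output of equation (7):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM step implementing equations (1)-(6)."""
    f = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])        # (1) forget gate
    i = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])        # (2) input gate
    c_tilde = np.tanh(W["c"] @ x_t + U["c"] @ h_prev + b["c"])  # (3) candidate state
    c = f * c_prev + i * c_tilde                                # (4) cell state
    o = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])        # (5) output gate
    h = o * np.tanh(c)                                          # (6) hidden state
    return h, c

def run_lstm(xs, d_h, W, U, b):
    h, c = np.zeros(d_h), np.zeros(d_h)
    hs = []
    for x_t in xs:
        h, c = lstm_step(x_t, h, c, W, U, b)
        hs.append(h)
    return hs

rng = np.random.default_rng(0)
d_in, d_h, T = 4, 3, 5  # toy sizes
W = {g: rng.normal(size=(d_h, d_in)) for g in "fico"}
U = {g: rng.normal(size=(d_h, d_h)) for g in "fico"}
b = {g: np.zeros(d_h) for g in "fico"}
xs = [rng.normal(size=d_in) for _ in range(T)]

fwd = run_lstm(xs, d_h, W, U, b)               # forward LSTM
bwd = run_lstm(xs[::-1], d_h, W, U, b)[::-1]   # backward LSTM, realigned in time
bi = [np.concatenate([hf, hb]) for hf, hb in zip(fwd, bwd)]  # equation (7)
print(bi[0].shape)  # each Bi-LSTM state has 2 * d_h dimensions
```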
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">CNN Layer</head><p>While the Bi-LSTM extracts feature relationships along the bidirectional temporal dimension of the text, the CNN layer <ref type="bibr" target="#b11">[11]</ref> is used to further extract associated local features, improving the semantic analysis of the associations between neighboring features. It also reduces the complexity and number of parameters to be trained while preserving the essential characteristics, which effectively prevents overfitting and enhances the model's capacity for generalization.</p></div>
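The convolution-plus-pooling stage can be sketched with a few NumPy lines. The sizes and data below are toy values of our own (the paper uses 64 filters over Bi-LSTM features): a kernel of size 4 slides with stride 1 over the sequence of feature vectors, ReLU is applied, and global max pooling keeps only one value per filter, which is where the parameter reduction comes from:

```python
import numpy as np

rng = np.random.default_rng(1)
T, d = 10, 6          # toy: 10 time steps of 6-d Bi-LSTM features
n_filters, k = 4, 4   # the paper uses 64 filters of size 4; 4 here for brevity

x = rng.normal(size=(T, d))
kernels = rng.normal(size=(n_filters, k, d))

# 1-D convolution, stride 1, 'valid' padding, ReLU activation
conv = np.array([
    [np.maximum(np.sum(kern * x[t:t + k]), 0.0) for kern in kernels]
    for t in range(T - k + 1)
])                          # shape (T - k + 1, n_filters)

pooled = conv.max(axis=0)   # GlobalMaxPooling1D: one value per filter
print(conv.shape, pooled.shape)
```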
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.5.">DNN Layer</head><p>The last two layers are fully connected. The first uses the nonlinear activation function ReLU; the last uses a simple linear activation to produce the final binary classification result: a positive value is labeled NI and a negative value I.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Experiments and Results</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Experimental setting</head><p>This study uses the word embedding layer included with Keras to map words into 120-dimensional vectors, followed by SpatialDropout1D with a rate of 0.2 to regularize the model. The Bi-LSTM has 128 units. The Conv1D layer uses 64 convolution kernels of size 4 with a stride of 1 and ReLU activation, followed by GlobalMaxPooling1D for the pooling calculation and a dropout rate of 0.3 to avoid overfitting. The first fully connected layer has 128 output units with ReLU activation, and the classification weight matrix of the final fully connected layer is initialized with a custom kernel initializer. Training runs for 5 epochs with the Adam optimizer.</p></div>
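Under the hyperparameters listed above, the model can be reassembled as the Keras sketch below. The vocabulary size (20,000), the default kernel initializer on the output layer (the paper's custom initializer is unspecified), and the from-logits binary cross-entropy loss are all our assumptions; the layer sizes, rates, and optimizer are the ones the paper reports:

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = 20000  # assumption: the paper does not report the vocabulary size

model = tf.keras.Sequential([
    layers.Input(shape=(None,), dtype="int64"),            # integer token ids
    layers.Embedding(VOCAB_SIZE, 120),                     # 120-d word embeddings
    layers.SpatialDropout1D(0.2),
    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),
    layers.Conv1D(64, 4, strides=1, activation="relu"),    # 64 kernels, size 4, stride 1
    layers.GlobalMaxPooling1D(),
    layers.Dropout(0.3),
    layers.Dense(128, activation="relu"),
    layers.Dense(1),  # linear score: positive -> NI, negative -> I
])
model.compile(
    optimizer="adam",  # assumed loss; the paper only names the optimizer
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
)
print(model.output_shape)
```

Calling `model.fit(x, y, epochs=5, validation_split=0.2)` would then mirror the paper's 5-epoch, 80/20 training setup.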
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Results</head><p>The data provided by the organizers is split into 80% for training and 20% for validation, and the trained model is evaluated on the validation split. Training runs for 5 epochs (E1, E2, E3, E4, E5); the results are shown in Table <ref type="table" target="#tab_1">2</ref>. The task organizers invited participants to deploy their models on TIRA <ref type="bibr" target="#b12">[12]</ref>. Over the five epochs, the training accuracy steadily improves and the training loss decreases; by the fifth epoch, the validation accuracy stops improving and the validation loss starts to increase. Evaluated on the organizers' test set, the model attains an accuracy of 0.9056.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>In this paper, we describe the ironic speech task at PAN 22 and propose a deep-learning-based model to detect Twitter users who spread ironic speech. By fine-tuning the hyperparameters during training, the model achieves its best accuracy of 0.9056 on the final test set, as announced by the organizers. At the same time, the experiments show that the task remains challenging: tweets are more than plain text, containing numerous emojis that can be used sarcastically, and some users intentionally and persistently misspell words to evade machine detection. Such complex cases remain a substantial challenge, and better solutions to these problems still need to be devised.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Architecture diagram for model</figDesc><graphic coords="2,137.16,419.04,334.08,288.36" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: LSTM structure</figDesc><graphic coords="3,159.96,420.00,288.12,196.68" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Statistics of datasets</figDesc><table><row><cell>Datasets</cell><cell>Number of texts</cell><cell>Number of authors</cell><cell>Number of tweets</cell></row><row><cell>Training set</cell><cell>420</cell><cell>420</cell><cell>84000</cell></row><row><cell>Test set</cell><cell>180</cell><cell>180</cell><cell>36000</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2</head><label>2</label><figDesc>The result of training set</figDesc><table><row><cell>Epoch</cell><cell>Accuracy</cell><cell>Loss</cell><cell>val_accuracy</cell><cell>val_loss</cell></row><row><cell>E1</cell><cell>0.6310</cell><cell>0.6393</cell><cell>0.6220</cell><cell>0.6786</cell></row><row><cell>E2</cell><cell>0.6518</cell><cell>0.6038</cell><cell>0.8452</cell><cell>0.4877</cell></row><row><cell>E3</cell><cell>0.8363</cell><cell>0.4370</cell><cell>0.8571</cell><cell>0.3242</cell></row><row><cell>E4</cell><cell>0.9524</cell><cell>0.1464</cell><cell>0.8810</cell><cell>0.3010</cell></row><row><cell>E5</cell><cell>0.9851</cell><cell>0.0219</cell><cell>0.8810</cell><cell>0.3124</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Acknowledgments</head><p>This work was supported by the Basic and Applied Basic Research Fund of Guangdong Province, grant No. 2019A1515111080.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Overview of PAN 2022: Authorship Verification, Profiling Irony and Stereotype Spreaders, Style Change Detection, and Trigger Detection</title>
		<author>
			<persName><forename type="first">J</forename><surname>Bevendorff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Chulvi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Fersini</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">European Conference on Information Retrieval</title>
				<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="331" to="338" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Profiling Irony and Stereotype Spreaders on Twitter (IROSTEREO) at PAN 2022</title>
		<author>
			<persName><forename type="first">R</forename><surname>Ortega-Bueno</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Chulvi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Rangel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Rosso</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Fersini</surname></persName>
		</author>
		<ptr target="CEUR-WS.org" />
	</analytic>
	<monogr>
		<title level="m">CLEF 2022 Labs and Workshops</title>
		<title level="s">Notebook Papers</title>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">Overview of PAN 2021: Authorship Verification, Profiling Hate Speech Spreaders on Twitter, and Style Change Detection. International Conference of the Cross-Language Evaluation Forum for European Languages</title>
		<author>
			<persName><forename type="first">J</forename><surname>Bevendorff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Chulvi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Peña</forename><surname>Sarracén</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G L D L</forename></persName>
		</author>
		<imprint>
			<date type="published" when="2021">2021</date>
			<publisher>Springer</publisher>
			<biblScope unit="page" from="419" to="431" />
			<pubPlace>Cham</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Overview of the 8th author profiling task at pan 2020: Profiling fake news spreaders on twitter</title>
		<author>
			<persName><forename type="first">F</forename><surname>Rangel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Giachanou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B H H</forename><surname>Ghanem</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CEUR Workshop Proceedings. Sun SITE Central Europe</title>
				<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="volume">2696</biblScope>
			<biblScope unit="page" from="1" to="18" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Profiling hate speech spreaders on twitter task at PAN 2021</title>
		<author>
			<persName><forename type="first">F</forename><surname>Rangel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sarracén</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Chulvi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CLEF</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">Detection of hate speech spreaders using convolutional neural networks</title>
		<author>
			<persName><forename type="first">M</forename><surname>Siino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Di</forename><surname>Nuovo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Tinnirello</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename></persName>
		</author>
		<imprint>
			<date type="published" when="2021">2021</date>
			<publisher>CLEF</publisher>
		</imprint>
	</monogr>
	<note type="report_type">C</note>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Evaluating word embedding models: Methods and experimental results</title>
		<author>
			<persName><forename type="first">B</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Chen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">APSIPA transactions on signal and information processing</title>
				<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page">8</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">A review of recurrent neural networks: LSTM cells and network architectures</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Si</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Hu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neural Computation</title>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<biblScope unit="issue">7</biblScope>
			<biblScope unit="page" from="1235" to="1270" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Bidirectional LSTM with attention mechanism and convolutional layer for text classification</title>
		<author>
			<persName><forename type="first">G</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Guo</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neurocomputing</title>
		<imprint>
			<biblScope unit="volume">337</biblScope>
			<biblScope unit="page" from="325" to="338" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>


<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">n-BiLSTM: BiLSTM with n-gram Features for Text Classification</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Rao</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE 5th Information Technology and Mechatronics Engineering Conference (ITOEC). IEEE</title>
				<imprint>
			<date type="published" when="2020">2020. 2020</date>
			<biblScope unit="page" from="1056" to="1059" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Convolutional neural networks: an overview and application in radiology</title>
		<author>
			<persName><forename type="first">R</forename><surname>Yamashita</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Nishio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R K G</forename><surname>Do</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">J]. Insights into imaging</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="611" to="629" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">TIRA Integrated Research Architecture. Information Retrieval Evaluation in a Changing World</title>
		<author>
			<persName><forename type="first">M</forename><surname>Potthast</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Gollub</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Wiegmann</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2019">2019</date>
			<publisher>Springer</publisher>
			<biblScope unit="page" from="123" to="160" />
			<pubPlace>Cham</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
