<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">A Semantic Code Search Method Based on Program Conversion</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Yan</forename><surname>Tao</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">School of Software Engineering</orgName>
								<orgName type="institution">Chengdu University of Information Technology</orgName>
								<address>
									<postCode>610225</postCode>
									<settlement>Chengdu</settlement>
									<region>Sichuan</region>
									<country key="CN">China</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Le</forename><surname>Wei</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">School of Software Engineering</orgName>
								<orgName type="institution">Chengdu University of Information Technology</orgName>
								<address>
									<postCode>610225</postCode>
									<settlement>Chengdu</settlement>
									<region>Sichuan</region>
									<country key="CN">China</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="department">Automatic Software Generation &amp; Intelligence Service Key Laboratory of Sichuan Province</orgName>
								<address>
									<postCode>610225</postCode>
									<settlement>Chengdu</settlement>
									<country key="CN">China</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Hongping</forename><surname>Shu</surname></persName>
							<affiliation key="aff1">
								<orgName type="department">Automatic Software Generation &amp; Intelligence Service Key Laboratory of Sichuan Province</orgName>
								<address>
									<postCode>610225</postCode>
									<settlement>Chengdu</settlement>
									<country key="CN">China</country>
								</address>
							</affiliation>
						</author>
					<title level="a" type="main">A Semantic Code Search Method Based on Program Conversion</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">84778AF6EAA0CEC886A6C2DDA98AF01E</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T05:56+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Code Search</term>
					<term>CodeBERT</term>
					<term>Programming Transformation</term>
					<term>Semantic Code</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Code search tasks often fail to capture code fragments accurately and quickly because they ignore the semantic and structural information of the source code. To address this problem, a semantic code search method based on program conversion (Semantic Code Search based on Program Conversion, SCSPC) is proposed. SCSPC diversifies the acquired code fragments through data augmentation, transforming programs by renaming variables, exchanging two independent statements, exchanging loop forms, inserting exception handling, and equivalently replacing switch statements with if statements. It then trains the CodeBERT model with a mixed objective (masked language modeling and replaced token detection) to generate semantically rich sentence vectors for code fragments and natural language, and completes the code search by comparing the similarity of the vectors. Experimental results show that, compared with the SWIM, QECK and CODEnn models, the mean reciprocal rank and hit rate (S@10) of the SCSPC method increase by 2% and 0.041 respectively.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>In the modern software development process, software reuse is applied in many fields. The same functionality may appear in different domains or different software systems, and the code that implements it is often similar. Developers therefore search large code repositories for existing code, saving development time and improving efficiency.</p><p>At present, code search is mainly based on information retrieval methods and deep learning methods. Methods based on information retrieval focus on generating correct keywords and computing the degree of matching between them. For example, CodeHow, proposed by Lv et al. <ref type="bibr" target="#b0">[1]</ref>, builds on natural language similarity and the influence of APIs on code search, and understands a query by identifying the APIs the query may refer to. After determining the potential APIs of the query, information about those APIs is merged into the code search process. Lu et al. <ref type="bibr" target="#b1">[2]</ref> proposed a query expansion model that generates synonyms with WordNet <ref type="bibr" target="#b2">[3]</ref>. Keivanloo et al. <ref type="bibr" target="#b3">[4]</ref> proposed a pattern-based search technique that uses clone detection based on the vector space model to support the discovery of working code examples and to search all types of statements (such as control flow, data flow and API).</p><p>Current code search methods still have problems. First, they ignore the fact that code may come from different dimensions, which makes it difficult to cover all perspectives with a single code representation. Second, it is difficult for a single natural language query to express the intentions of different users, which makes search results inaccurate. To address these problems, this paper proposes a code search method based on program transformation. The method uses program transformation to augment the data and diversify code fragments. The CodeBERT model maps natural language and code snippets into the same vector space; the similarity between the vectors is computed and ranked to return the corresponding code snippet to the user.</p></div>
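The embed-compare-rank retrieval loop described above can be sketched as follows. Here `embed` is a hypothetical stand-in for the CodeBERT encoder, replaced by a trivial bag-of-words vectorizer so the sketch stays self-contained; only the ranking logic mirrors the method.

```python
import math
from collections import Counter

def embed(text):
    """Toy stand-in for a CodeBERT-style encoder: a bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(u[t] * v[t] for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def search(query, snippets, k=10):
    """Rank candidate snippets by cosine similarity to the query vector."""
    q = embed(query)
    scored = [(cosine(q, embed(s)), s) for s in snippets]
    return [s for _, s in sorted(scored, key=lambda p: p[0], reverse=True)[:k]]
```

With a real encoder, `embed` would return dense sentence vectors, but the sorting-by-similarity step is unchanged.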
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">The model of CodeBERT</head><p>Pre-trained models are essentially a form of transfer learning. The most typical pre-trained model is BERT <ref type="bibr" target="#b4">[5]</ref> (Bidirectional Encoder Representations from Transformers). BERT uses the bidirectional encoder from the Transformer <ref type="bibr" target="#b5">[6]</ref>; it is geared towards textual language but is not suitable for representing semantic relationships between bimodal languages.</p><p>The CodeBERT <ref type="bibr" target="#b6">[7]</ref> model captures the semantic association between natural language and code snippets, and can quickly complete tasks such as semantic similarity search. CodeBERT is based on a 12-layer Transformer network. Its pre-training completes two tasks: Masked Language Modeling (MLM) and Replaced Token Detection (RTD). In masked language modeling, a token in a sentence is randomly masked and the model predicts what the masked token was; this task is pre-trained on NL-PL pairs. As shown in Figure <ref type="figure" target="#fig_0">1</ref>, in the pre-training phase [CLS] is placed at the beginning of the natural language sentence for binary classification, and separator tokens divide the natural language from the code snippet. The input has the form {[CLS], NL, [SEP], PL, [EOS]}. The natural language is treated as a sequence of words and the code snippet as a sequence of tokens. The output is the embedding of each token together with the [CLS] embedding. The replaced token detection task uses two generators that, combined with the context, generate tokens at [MASK] positions, and trains a discriminator to predict whether each token in the sentence has been replaced. This task is pre-trained on unimodal data, and its training process is shown in Figure <ref type="figure" target="#fig_1">2</ref>.</p></div>
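The bimodal input layout described above can be illustrated with a small helper. The special-token names follow the paper's {[CLS], NL, [SEP], PL, [EOS]} form; the whitespace tokenizer and the `mask_token` helper are simplifications (CodeBERT itself uses a BPE tokenizer), included only to make the MLM setup concrete.

```python
def build_input(nl, pl):
    """Assemble a CodeBERT-style bimodal sequence: [CLS] NL [SEP] PL [EOS]."""
    return ["[CLS]"] + nl.split() + ["[SEP]"] + pl.split() + ["[EOS]"]

def mask_token(tokens, idx):
    """Replace one position with [MASK], as in masked language modeling."""
    masked = list(tokens)
    masked[idx] = "[MASK]"
    return masked

seq = build_input("sort a dictionary by key", "sorted ( d.items ( ) )")
mlm_example = mask_token(seq, 2)  # hide the word "a" for the model to predict
```

During pre-training the model sees `mlm_example` and is scored on recovering the original token at the masked position.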
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Code search method based on program transformation 3.1. The framework of the method</head><p>The ultimate goal of code search is to find code snippets that match the user's expectations. Semantic Code Search based on Program Conversion (SCSPC) captures the semantic relationships between natural language and programming languages and generates a common representation. The work of this paper mainly comprises semantics-preserving program transformation, model training, and matching of code search results. An overview is shown in Figure <ref type="figure" target="#fig_2">3</ref>. First, the SCSPC method uses data augmentation to transform the source code: renaming variables, swapping the positions of independent statements, exchanging loop forms, inserting exception-catching statements, and equivalently replacing switch statements with if statements. Each transformation affects the structure of a method differently, which improves the diversity of the training data. Then, natural language and code snippets are fed into the CodeBERT model, which maps them into sentence vectors and code block vectors respectively. Finally, the cosine similarity between each code block vector and the sentence vector is computed, and the code search is completed by ranking the similarities.</p></div>
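As an illustration of the first transformation, the sketch below renames an identifier in a Java snippet using a word-boundary regex. This is a simplified, hypothetical version of the renaming step; a production implementation would operate on an AST so that strings, comments, and shadowed names are handled correctly.

```python
import re

def rename_variable(code, old, new):
    """Rename identifier `old` to `new` using word boundaries, so that
    substrings of longer identifiers (e.g. `counter`) are left untouched."""
    return re.sub(rf"\b{re.escape(old)}\b", new, code)

java = "int count = 0; for (int i = 0; i < n; i++) count += counter[i];"
renamed = rename_variable(java, "count", "total")
```

Because renaming preserves program semantics, the transformed snippet can be paired with the original natural language description as an additional training example.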
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">The model training of the method</head><p>Model training focuses on capturing the semantic connections between queries and code snippets and generating a common representation, as shown in Figure <ref type="figure" target="#fig_3">4</ref>. The model consists mainly of an embedding layer and an encoding layer. In the embedding layer, input queries and code snippets are mapped into a high-dimensional vector space. The maximum length of an input query is set to 1024. A query is treated as a sequence of words, with &lt;CLS&gt; and &lt;EOS&gt; marking its beginning and end and &lt;SEP&gt; separating the words; a code fragment is treated as a sequence of tokens. The encoding layer learns code snippets c and natural language descriptions d with a bidirectional encoder. The main calculation is shown in (1), where PE is the position embedding and i denotes the dimension of the embedding, Linear stands for a linear layer, and Q, K, V are the query, key, and value vectors of the attention mechanism. LayerNorm denotes the normalization layer and RELU is the activation function; using RELU speeds up the convergence of the network.</p><formula xml:id="formula_0">Q = Linear(X) = XW_Q,  K = Linear(X) = XW_K,  V = Linear(X) = XW_V
X_attention = SelfAttention(Q, K, V)
X_attention = LayerNorm(X + X_attention)
X_hidden = Linear(RELU(Linear(X_attention)))
X_hidden = LayerNorm(X_attention + X_hidden)                (1)</formula></div>
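Formula (1) describes a standard Transformer encoder block. A minimal NumPy sketch of one such block (single attention head, no dropout, caller-supplied weights, so the shapes are illustrative rather than CodeBERT's actual configuration) might look like this:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def layer_norm(x, eps=1e-5):
    """Normalize each token vector to zero mean and unit variance."""
    mu = x.mean(axis=-1, keepdims=True)
    sd = x.std(axis=-1, keepdims=True)
    return (x - mu) / (sd + eps)

def encoder_block(X, Wq, Wk, Wv, W1, W2):
    """One encoder block as in formula (1): self-attention,
    residual + LayerNorm, ReLU feed-forward, residual + LayerNorm."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    attn = softmax(Q @ K.T / np.sqrt(K.shape[-1])) @ V
    X_attention = layer_norm(X + attn)
    hidden = np.maximum(X_attention @ W1, 0.0) @ W2  # Linear(RELU(Linear(.)))
    return layer_norm(X_attention + hidden)
```

Stacking twelve such blocks (with learned weights and multiple heads) yields the encoder the paper fine-tunes.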
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Experiment 4.1. Data of the experiment</head><p>To ensure that the collected source code is reliable, the source code in this paper comes from CodeSearchNet, a dataset for code representation learning jointly released by GitHub, the most popular open source hosting platform, and the Deep Program Understanding group at Microsoft Research Cambridge <ref type="bibr" target="#b7">[8]</ref>. The CodeSearchNet dataset consists of 2 million comment-code pairs from open source libraries; the comments are top-level function or method comments, and the code is the entire function or method. In this experiment, the Java portion of the CodeSearchNet Challenge was selected as the dataset. It contains 500,754 pairs of functional Java code snippets and their descriptions: 450,439 pairs were used for training, 33,658 pairs as the validation set, and 28,910 pairs as the test set.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Evaluation criteria of the experiment</head><p>In this paper, SuccessRate@k and MRR (Mean Reciprocal Rank) are used to verify the effectiveness of the SCSPC method. The formula for the mean reciprocal rank is given in (2), where Q denotes the set of automatically evaluated queries and rank_i is the rank of the first correct result for the i-th query. The higher the MRR value, the better the code search performance.</p><formula xml:id="formula_1">MRR = (1/|Q|) &#931;_{i=1}^{|Q|} 1/rank_i<label>(2)</label></formula><p>SuccessRate@k denotes the percentage of queries for which at least one correct fragment exists among the top k code fragments returned by the search model; it is calculated as shown in (3), where Q is the query set in the automatic evaluation, rank_q is the highest rank of a hit code snippet in the search results, and &#963; is an indicator function that returns 1 if the rank of query q is at most k, and 0 otherwise. This metric is important because a good search engine should let developers find the code snippets they need while inspecting fewer results. In this experiment, the results were evaluated for k equal to 1, 5, and 10; the higher the value, the better the code search performance of the model.</p><formula xml:id="formula_2">SuccessRate@k = (1/|Q|) &#931;_{q=1}^{|Q|} &#963;(rank_q &#8804; k)<label>(3)</label></formula></div>
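Both metrics can be computed directly from the rank of the first correct result for each query; a short sketch:

```python
def mrr(ranks):
    """Mean reciprocal rank, formula (2); `ranks` holds rank_i per query."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def success_rate_at_k(ranks, k):
    """SuccessRate@k, formula (3): fraction of queries with rank_q <= k."""
    return sum(1 for r in ranks if r <= k) / len(ranks)
```

For example, three queries whose first correct answers appear at ranks 1, 3, and 6 give MRR = (1 + 1/3 + 1/6) / 3 = 0.5 and SuccessRate@5 = 2/3.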
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.">Results of the experiment</head><p>To verify the effectiveness of the SCSPC method, SWIM <ref type="bibr" target="#b8">[9]</ref>, QECK <ref type="bibr" target="#b10">[10]</ref> and CODEnn are selected as three benchmark methods, and the code repository in Section 4.1 is used to construct the training, validation and test sets.</p><p>In this experiment, the number of returned results k is set to 1, 5, and 10. A value of 1 measures the probability that the correct result appears in the first position, while values of 5 and 10 measure search performance when the correct result does not appear first or when there are multiple correct results. These are also three parameters commonly used in search systems.</p><p>Figure <ref type="figure" target="#fig_6">5</ref> compares the SCSPC method with the three benchmark methods on the MRR metric. SCSPC performs better on MRR than SWIM, QECK and CODEnn: methods such as SWIM ignore the semantics of code fragments, whereas SCSPC diversifies the data and can therefore search code more accurately. Table <ref type="table" target="#tab_0">1</ref> compares the SCSPC method with the three benchmark methods on SuccessRate@k. SCSPC outperforms the other three methods for k of 1, 5 and 10; it fully captures the semantic relationship between code snippets and natural language, so correct answers appear in the search results with greater probability.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>In this paper, we propose SCSPC, a semantic code search method based on program transformation. The method performs data augmentation on the Java code snippets in CodeSearchNet, applying program transformations to five different aspects of the code while keeping its semantics unchanged, and thus obtains a large, high-quality code corpus. The CodeBERT model is then fine-tuned to generate sentence vectors for natural language and code snippets to complete the code search. Diversifying the data in this way effectively improves the accuracy of the search results. In subsequent research on code search, the accuracy of the results can be improved from further aspects of the code snippet.</p><p>The method in this paper searches within a specified programming language; future work can expand the range of programming languages to further mine other semantic information of the code and improve search accuracy.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 .</head><label>1</label><figDesc>Figure 1. The model of masked language.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 .</head><label>2</label><figDesc>Figure 2. The tasks of token detection.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 .</head><label>3</label><figDesc>Figure 3. Overview diagram of semantic code search methods based on program transformations.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 .</head><label>4</label><figDesc>Figure 4. The process of model training.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 5 .</head><label>5</label><figDesc>Figure 5. The comparison of the performance of the three methods.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 .</head><label>1</label><figDesc>Performance comparison of three methods based on SuccessRate@k</figDesc><table><row><cell>Method</cell><cell>S@1</cell><cell>S@5</cell><cell>S@10</cell></row><row><cell>SWIM</cell><cell>0.546</cell><cell>0.558</cell><cell>0.594</cell></row><row><cell>QECK</cell><cell>0.553</cell><cell>0.634</cell><cell>0.664</cell></row><row><cell>CODEnn</cell><cell>0.549</cell><cell>0.647</cell><cell>0.693</cell></row><row><cell>SCSPC</cell><cell>0.641</cell><cell>0.693</cell><cell>0.734</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Acknowledgments</head><p>This paper is supported by the project "Research on software code reuse and automatic generation" (2020YFG0299).</p></div>
			</div>



			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">CodeHow: Effective Code Search Based on API Understanding and Extended Boolean Model</title>
		<author>
			<persName><forename type="first">F</forename><surname>Lv</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">W</forename><surname>Wang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 30th IEEE/ACM International Conference on Automated Software Engineering (ASE)</title>
				<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="260" to="270" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Query expansion via WordNet for effective code search</title>
		<author>
			<persName><forename type="first">M L</forename><surname>Lu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Lo</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">22nd International Conference on Software Analysis, Evolution, and Reengineering (SANER)</title>
				<imprint>
			<date type="published" when="2015">2015</date>
			<biblScope unit="volume">2015</biblScope>
			<biblScope unit="page" from="545" to="549" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">A survey on software components search and retrieval</title>
		<author>
			<persName><forename type="first">D</forename><surname>Lucrédio</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings 30th Euromicro Conference</title>
				<meeting>30th Euromicro Conference</meeting>
		<imprint>
			<date type="published" when="2004">2004</date>
			<biblScope unit="page" from="152" to="159" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m" type="main">Spotting working code examples</title>
		<author>
			<persName><forename type="first">I</forename><surname>Keivanloo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Rilling</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zou</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding</title>
		<author>
			<persName><forename type="first">J</forename><surname>Devlin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">W</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Lee</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc of Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</title>
				<meeting>of Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies<address><addrLine>Stroudsburg, PA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="4171" to="4186" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Vaswani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Shazeer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Parmar</surname></persName>
		</author>
		<title level="m">Attention Is All You Need</title>
				<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="5998" to="6008" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">CodeBERT: A Pre-Trained Model for Programming and Natural Languages</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Feng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Guo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Tang</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="1536" to="1547" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">CodeSearchNet Challenge: Evaluating the State of Semantic Code Search</title>
		<author>
			<persName><forename type="first">H</forename><surname>Husain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Gazit</surname></persName>
		</author>
		<idno>2022-2-21</idno>
		<ptr target="http://arxiv.org/abs/1909.09436" />
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
	<note>EB/OL</note>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<author>
			<persName><forename type="first">M</forename><surname>Raghothaman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Hamadi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">SWIM: Synthesizing What I Mean - Code Search and Idiomatic Snippet Synthesis</title>
				<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>


<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Query Expansion Based on Crowd Knowledge for Code Search</title>
		<author>
			<persName><forename type="first">L</forename><surname>Nie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Ren</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Services Computing</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="771" to="783" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
