<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Identifying Diagnostic Test Accuracy Publications using a Deep Model</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Gaurav</forename><surname>Singh</surname></persName>
							<email>gaurav.singh.15@ucl.ac.uk</email>
							<affiliation key="aff0">
								<orgName type="institution">UCL</orgName>
								<address>
									<country key="GB">UK</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Iain</forename><surname>Marshall</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">Kings College London</orgName>
								<address>
									<country key="GB">UK</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">James</forename><surname>Thomas</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">UCL</orgName>
								<address>
									<country key="GB">UK</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Byron</forename><surname>Wallace</surname></persName>
							<affiliation key="aff2">
								<orgName type="institution">Northeastern University</orgName>
								<address>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Identifying Diagnostic Test Accuracy Publications using a Deep Model</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">BD3CECD8516D60EED1E49DD4A23C3B46</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T20:31+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In this work, we used a deep model architecture to identify diagnostic test accuracy (DTA) studies pertaining to a given review topic. We were provided with lists of relevant documents, selected on the basis of abstracts and full text, for a number of review topics. We extracted the abstract and title of each document as features, and trained a deep neural network that takes as input the abstract and title of a study together with the topic of the review, and outputs a binary classification of whether the study is a DTA study relevant to the review in question.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Model</head><p>The proposed model takes as input the title and abstract of the paper as sequences of words. These are fed into an embedding layer that outputs a matrix of word vectors corresponding to the given words, which is then passed through a 1-dimensional convolution layer with filter length 3. Similarly, the topic of the review in question is passed through the embedding layer and into a convolution layer with filter length 3. The features produced by the three convolution layers (title, abstract, and topic) are then merged and passed through a dense fully connected layer with dropout and a sigmoid activation function at the output. The loss function used at the output layer is binary cross-entropy.</p></div>
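The forward pass described above can be sketched in plain NumPy. This is a minimal illustration, not the implementation used in the paper: the vocabulary size, embedding dimension, and number of filters are hypothetical, the weights are random, and dropout (a training-time operation) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the paper does not report exact dimensions.
VOCAB, EMB_DIM, N_FILTERS, FILTER_LEN = 1000, 32, 16, 3

# Shared embedding table (the embedding layer).
embeddings = rng.normal(scale=0.1, size=(VOCAB, EMB_DIM))

def conv1d(x, w, b):
    """Valid 1-D convolution over the sequence axis, ReLU, then global max pooling.
    x: (seq_len, EMB_DIM), w: (FILTER_LEN, EMB_DIM, N_FILTERS), b: (N_FILTERS,)."""
    windows = np.stack([x[i:i + FILTER_LEN] for i in range(len(x) - FILTER_LEN + 1)])
    feat = np.tensordot(windows, w, axes=([1, 2], [0, 1])) + b  # (n_windows, N_FILTERS)
    return np.maximum(feat, 0).max(axis=0)                      # (N_FILTERS,)

# One convolution branch per input: title, abstract, and review topic.
branches = {name: (rng.normal(scale=0.1, size=(FILTER_LEN, EMB_DIM, N_FILTERS)),
                   np.zeros(N_FILTERS))
            for name in ("title", "abstract", "topic")}

# Dense output layer with sigmoid activation (merged features -> one logit).
w_out = rng.normal(scale=0.1, size=3 * N_FILTERS)
b_out = 0.0

def predict(title_ids, abstract_ids, topic_ids):
    feats = [conv1d(embeddings[ids], *branches[name])
             for name, ids in zip(("title", "abstract", "topic"),
                                  (title_ids, abstract_ids, topic_ids))]
    z = np.concatenate(feats) @ w_out + b_out
    return 1.0 / (1.0 + np.exp(-z))  # probability that the study is relevant

p = predict(rng.integers(0, VOCAB, 8),    # title tokens
            rng.integers(0, VOCAB, 120),  # abstract tokens
            rng.integers(0, VOCAB, 5))    # topic tokens
```

With random weights the output is an uninformative probability in (0, 1); training with binary cross-entropy would fit the weights to the screening labels.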
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Tuning</head><p>All parameters were tuned on a held-out validation dataset. The dropout probability was tuned over a range of 10 equidistant values in the interval [0, 1]; the optimal value obtained was 0.6. The structure of the network was also selected on the held-out validation dataset: we experimented with different filter lengths and different numbers of convolution layers.</p></div>
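The tuning procedure amounts to a one-dimensional grid search over the dropout probability. A sketch, with a stand-in scoring function where the paper would evaluate the trained model on the held-out validation set:

```python
import numpy as np

# Stand-in for the real objective: in the paper this would train the model
# with the given dropout probability and return its validation performance.
# The quadratic peak at 0.6 is purely illustrative.
def validation_score(dropout_p):
    return -(dropout_p - 0.6) ** 2

# Ten equidistant candidate values in [0, 1], as described in the text.
grid = np.linspace(0.0, 1.0, 10)
best_p = max(grid, key=validation_score)
```

Note that with 10 equidistant points in [0, 1] the grid step is 1/9, so the selected value is the grid point nearest the true optimum rather than the optimum itself.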
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Results</head><p>Figure <ref type="figure" target="#fig_1">2</ref> shows the performance of the model on the held-out dataset; the model performs considerably better than a random classifier would. Table <ref type="table">1</ref> reports the macro-averaged performance of the model in identifying relevant abstracts and relevant full-text documents, and Table <ref type="table">2</ref> reports the corresponding micro-averaged performance, obtained using the evaluation script provided.</p><p>Table <ref type="table">1</ref>: Results on the test set for identifying relevant abstracts (left) and relevant studies based on full text (right). Abstracts: Accuracy 0.79537, AUC 0.56379, WSS@95 0.08171, WSS@100 0.00083. Full text: Accuracy 0.99482, AUC 0.57593, WSS@95 0.14705, WSS@100 0.00335. In both cases, we use the abstract and title of the paper, in addition to the review topic, to identify relevant studies; the two settings differ only in whether the ground-truth labels are derived from the abstract or from the full text of the study. Note that these results are macro averages, not micro averages, across reviews.</p><p>Table <ref type="table">2</ref>: Results on the test set for identifying relevant abstracts (left) and relevant studies based on full text (right), as micro averages over reviews obtained using the evaluation script provided. Abstracts: WSS@100 0.072, WSS@95 0.064, NCG@10 0.117, NCG@20 0.229, NCG@30 0.347, NCG@40 0.440, NCG@50 0.536, NCG@60 0.627, NCG@70 0.729, NCG@80 0.826, NCG@90 0.906, NCG@100 0.998, T. Cost 3918.733, Norm Area 0.507. Full text: WSS@100 0.077, WSS@95 0.076, NCG@10 0.059, NCG@20 0.152, NCG@30 0.247, NCG@40 0.359, NCG@50 0.467, NCG@60 0.584, NCG@70 0.688, NCG@80 0.788, NCG@90 0.891, NCG@100 0.992, T. Cost 3918.733, Norm Area 0.522. As with Table <ref type="table">1</ref>, the two settings differ only in whether the ground-truth labels are derived from the abstract or from the full text of the study.</p></div>
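For readers unfamiliar with the screening metrics above, work saved over sampling at recall level r (WSS@r) is commonly computed from a ranked list as the fraction of documents below the cutoff at which recall r is reached, minus (1 - r). A small self-contained sketch of this standard definition (not the provided evaluation script):

```python
import math

def wss_at(scores, labels, r=0.95):
    """Work saved over sampling at recall r for a ranked screening list.
    scores: classifier scores (higher = more likely relevant); labels: 0/1."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    needed = max(1, math.ceil(r * sum(labels)))  # positives required for recall r
    found = 0
    for rank, i in enumerate(order, start=1):
        found += labels[i]
        if found >= needed:
            # Documents ranked below the cutoff are work saved, relative to
            # the (1 - r) saving that random sampling would already achieve.
            return (len(labels) - rank) / len(labels) - (1.0 - r)
    return 0.0

# A perfect ranking over 10 documents with 2 relevant ones:
# cutoff at rank 2, so WSS@95 = 8/10 - 0.05 = 0.75.
w = wss_at(scores=[10, 9, 8, 7, 6, 5, 4, 3, 2, 1],
           labels=[1, 1, 0, 0, 0, 0, 0, 0, 0, 0])
```

The low WSS@95 values in Tables 1 and 2 thus indicate that the ranking saves only a modest amount of screening work over random sampling at 95% recall.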
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Discussion</head><p>In previous work, we built a classifier which, when presented with an unseen citation (i.e. title/abstract), predicts whether or not it describes a Randomized Controlled Trial (RCT). Performance and technical details can be found in Wallace et al. <ref type="bibr" target="#b0">[1]</ref>. The performance of this classifier on studies retrieved in searches for systematic reviews is good, and it can reduce the manual screening burden by up to 80% while maintaining 100% recall. This is potentially very useful, but it is possible because: 1) the classifier was built on a large, unbiased training dataset of 280,000 manually labelled citations; and 2) searches for systematic reviews of RCTs retrieve a large number of references which are not RCTs. The situation with DTA studies is different. We do not have the luxury of a large dataset on which to build a DTA classifier. The data presented for this exercise, for example, are the result of searches and screening decisions for DTA systematic reviews, rather than searches and screening decisions for DTA studies in general. This means that the negative class in the DTA dataset contains large numbers of DTA studies that were merely irrelevant to the specific DTA review in question, which makes it impossible to use this dataset to build a generic DTA classifier. We also built a DTA classifier from approximately 1,500 records obtained outside this dataset, each manually labelled as to whether it described a DTA study. The results obtained when running this classifier against the DTA training dataset for this task are shown in Figures <ref type="figure" target="#fig_3">3 and 4</ref>. Other than a small boost at the bottom left of the graph in Figure <ref type="figure" target="#fig_3">4</ref>, this classifier does not perform well, especially in comparison to the results of the deep model presented in the previous section.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1 :</head><label>1</label><figDesc>Fig. 1: Deep Model Architecture used for the Task.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 2 :</head><label>2</label><figDesc>Fig. 2: Number of relevant documents identified based on abstracts versus the number of documents manually annotated (left), and number of relevant documents identified based on full text versus the number of documents manually annotated (right), on the data held out during training.</figDesc><graphic coords="2,135.65,456.25,170.50,127.87" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Fig. 3 :</head><label>3</label><figDesc>Fig. 3: Number of relevant documents identified (as per full text) versus the number of abstracts manually annotated, using the review-independent DTA classifier.</figDesc><graphic coords="4,187.43,115.84,240.50,144.50" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Fig. 4 :</head><label>4</label><figDesc>Fig. 4: Number of relevant documents identified (as per abstract) versus the number of abstracts manually annotated, using the review-independent DTA classifier.</figDesc><graphic coords="4,183.37,320.22,248.61,130.62" type="bitmap" /></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Acknowledgements</head><p>JT and GS acknowledge support from Cochrane via the Transform project. BCW's contribution to the work was supported by the Agency for Healthcare Research and Quality, grant R03-HS025024, and by the National Institutes of Health/National Cancer Institute, grant UH2-CA203711. IJM acknowledges support from the UK Medical Research Council, through its Skills Development Fellowship program, grant MR/N015185/1.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Identifying reports of randomized controlled trials (RCTs) via a hybrid machine learning and crowdsourcing approach</title>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">C</forename><surname>Wallace</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Noel-Storr</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">J</forename><surname>Marshall</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Cohen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">R</forename><surname>Smalheiser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Thomas</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of the American Medical Informatics Association</title>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
