<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">A Progressive Visual Analytics Tool for Incremental Experimental Evaluation</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Fabio</forename><surname>Giachelle</surname></persName>
							<email>fabio.giachelle@unipd.it</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Information Engineering</orgName>
								<orgName type="institution">University of Padua</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Gianmaria</forename><surname>Silvello</surname></persName>
							<email>gianmaria.silvello@unipd.it</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Information Engineering</orgName>
								<orgName type="institution">University of Padua</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">A Progressive Visual Analytics Tool for Incremental Experimental Evaluation</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">876DBEAEB18956CA9FA232B903FB253A</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-23T19:40+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>visual analytics</term>
					<term>experimental evaluation</term>
					<term>incremental indexing</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This paper presents a visual tool, AVIATOR, that integrates the progressive visual analytics paradigm into the IR evaluation process. The tool speeds up and facilitates the performance assessment of retrieval models, enabling result analysis through visual facilities. AVIATOR goes one step beyond the common "compute-wait-visualize" analytics paradigm, introducing a continuous evaluation mechanism that minimizes human and computational resource consumption.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">MOTIVATIONS</head><p>The development of a new retrieval model is a demanding activity that goes beyond the definition and the implementation of the model itself. A retrieval model can be conceived as part of an ecosystem where each component interacts with the others to produce the final document ranking for the user. As shown in <ref type="bibr" target="#b3">[4]</ref>, the effectiveness of a model highly depends on the pipeline components it interacts with (e.g., stoplist and stemmer). Determining the configuration that gets the most out of a model is itself a demanding activity: it requires inspecting several component pipelines and comparing them to baselines across multiple test collections and evaluation measures.</p><p>The typical evaluation process comprises the following phases: corpus preprocessing (e.g., tokenization, stopword removal, stemming), indexing, retrieval and, finally, evaluation. If anything changes in the preprocessing phase, the whole collection has to be re-indexed before the retrieval model can be tested and the evaluation conducted again. Unfortunately, indexing a collection may require hours, if not days, depending on the hardware and on the collection size. Assessing the best configuration of components over multiple collections via a grid search therefore requires great human effort and computational resources.</p><p>We propose an "all-in-one visual analytics tool for the evaluation of IR systems" (AVIATOR) to speed up this evaluation process. The idea behind the tool is to test retrieval models, calculate approximate measures, explore the results and make baseline comparisons during the indexing phase. AVIATOR allows the user to issue queries to a system while the indexing phase is still running and to explore partial evaluation results in an intuitive way thanks to visual analytics advances. 
In particular, AVIATOR leverages the progressive visual analytics paradigm, which "enable(s) an analyst to inspect partial results of an algorithm as they become available and interact with the algorithm to prioritize subspaces of interest" <ref type="bibr">[8]</ref>.</p><p>Visual analytics and IR experimental evaluation have interacted before, producing visual tools to design and ease failure analysis <ref type="bibr" target="#b1">[2]</ref> and what-if analysis <ref type="bibr" target="#b2">[3]</ref>, to explore pooling strategies <ref type="bibr" target="#b6">[7]</ref> and to enable interactive grid exploration over a large combinatorial space of systems <ref type="bibr" target="#b0">[1]</ref>. Nevertheless, these tools follow the "compute-wait-visualize" paradigm of visual analytics. AVIATOR moves a step beyond by (partially) removing the "wait" phase. To the best of our knowledge, our paper is the first to employ progressive visual analytics in IR to enable the dynamic and incremental evaluation of IR systems.</p><p>A video showing the main functionalities of the system is available at the URL: https://www.gigasolution.it/v/Aviator.mp4.</p><p>Outline. A general overview of AVIATOR is presented in Section 2. AVIATOR comprises a back-end component that deals with incremental indexing and retrieval (Section 3) and a front-end component that enables the interactive exploration of the partial experimental results (Section 4).</p></div>
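The progressive paradigm means that indexing and querying overlap in time: the analyst inspects partial results while the computation is still running. A minimal sketch of this loop, assuming a toy in-memory index and hypothetical names (`ProgressiveIndex`, `index_bundles`) that are not taken from AVIATOR's code base:

```python
import threading

class ProgressiveIndex:
    """Toy partial index shared between a 'dynamic' indexing thread and a
    'stable' querying side (terminology from the paper; illustrative only)."""
    def __init__(self):
        self._docs = {}
        self._lock = threading.Lock()

    def add_bundle(self, bundle):
        # Dynamic core releases one more bundle into the shared index.
        with self._lock:
            self._docs.update(bundle)

    def search(self, term):
        # Query whatever fraction of the corpus is indexed so far.
        with self._lock:
            return sorted(doc_id for doc_id, text in self._docs.items()
                          if term in text.split())

def index_bundles(index, bundles, on_bundle_done):
    # Index one bundle at a time, notifying the analyst side after each
    # bundle so partial results can be inspected immediately.
    for bundle in bundles:
        index.add_bundle(bundle)
        on_bundle_done(index)

index = ProgressiveIndex()
partial_results = []
bundles = [{1: "progressive visual analytics"}, {2: "visual evaluation"}]
t = threading.Thread(
    target=index_bundles,
    args=(index, bundles,
          lambda idx: partial_results.append(idx.search("visual"))))
t.start()
t.join()
# partial_results now holds one (growing) result list per indexed bundle
```

The point of the sketch is the callback after each bundle: results are available after every increment rather than only once the whole corpus is indexed.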
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">SYSTEM OVERVIEW</head><p>AVIATOR comprises five phases: preprocessing, incremental indexing, retrieval, evaluation and visual analysis.</p><p>In the preprocessing phase, the document corpus D is partitioned into</p><formula xml:id="formula_0">n bundles B = [B 1 , B 2 , . . . , B n ], where B i , with i ∈ [1, n − 1], has size k = ⌊|D|/n⌋ and B n has size |D| − k(n − 1)</formula><p>. The bundles are populated by uniformly sampling D such that</p><formula xml:id="formula_1">B i ∩ B j = ∅, ∀i, j ∈ [1, n], i ≠ j</formula><p>. This sampling strategy is described in <ref type="bibr" target="#b4">[5]</ref>, where it is also shown that biased sub-collections exhibit behavior similar to uniform samples in terms of precision. As shown in Figure <ref type="figure" target="#fig_0">1</ref>, in the incremental indexing phase we adopt two parallel system threads, each implementing an independent instance of the same Information Retrieval System (IRS). These threads are referred to as the dynamic and the stable core, respectively. The dynamic core indexes the first corpus bundle and then releases the partial index to the stable core. The stable core enables the user to run the retrieval phase on the partial index, while the dynamic core proceeds to index the second bundle. When the second bundle has been indexed, an interrupt is issued to the stable core and the user decides whether to update the index and run a new retrieval phase or to continue with the index already at hand.</p><p>In the retrieval phase the partial index is queried by the user. Currently, AVIATOR is based on batch retrieval on shared test collections; hence, in each retrieval phase at least 50 queries are issued and a TREC-like run is returned for evaluation. The user can select among several standard retrieval models or use a custom one loaded into the system. 
This phase can be considered dynamic, since the user can keep querying the partial index while changing the retrieval model or its parameters.</p><p>The runs produced in the retrieval phase undergo continuous evaluation as they are produced. Once the evaluation phase is performed, the results are visualized by the visual analytics component, which enables the user to conduct an in-depth and intuitive analysis.</p></div>
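The bundle construction described above can be sketched in a few lines; `make_bundles` is a hypothetical helper written for illustration, not part of AVIATOR:

```python
import random

def make_bundles(corpus, n, seed=0):
    """Partition document ids into n disjoint bundles by uniform sampling:
    bundles 1..n-1 have size k = floor(|D|/n); the last bundle takes the
    remaining |D| - k(n-1) documents."""
    doc_ids = list(corpus)
    # Shuffling then slicing is equivalent to uniform sampling
    # without replacement, so the bundles are pairwise disjoint.
    random.Random(seed).shuffle(doc_ids)
    k = len(doc_ids) // n
    bundles = [doc_ids[i * k:(i + 1) * k] for i in range(n - 1)]
    bundles.append(doc_ids[(n - 1) * k:])
    return bundles

# e.g. 103 documents split into 10 bundles: nine of size 10, one of size 13
bundles = make_bundles(range(103), n=10)
```

Each bundle then becomes one indexing increment for the dynamic core.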
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">BACK-END COMPONENT</head><p>The back-end component implements the first four phases described above. AVIATOR is a client-server application built on top of an IR system of choice. In the current implementation, AVIATOR is based on Apache Solr<ref type="foot" target="#foot_0">1</ref>, which in turn exploits the widely-used Apache Lucene search engine. In the back-end, AVIATOR acts as a wrapper around the IR system, controlling every stage of the IR process (indexing, retrieval and evaluation) via HTTP through a REpresentational State Transfer (REST)ful Web service.</p><p>AVIATOR's demo version is based on Disk 4&amp;5 of the TREC TIPSTER collection<ref type="foot" target="#foot_1">2</ref> and on the 50 topics (no. 351−400) of the TREC7 ad-hoc track <ref type="bibr">[10]</ref>. For testing purposes, AVIATOR was designed to work with 64 different IR system pipelines, combining four different stoplists (indri, lucene, terrier, nostop), four stemmers (Hunspell, Krovetz, Porter, nostem), and four IR models (BM25, boolean<ref type="foot" target="#foot_2">3</ref>, Dirichlet LM, TF-IDF).</p><p>The incremental index is designed to work on 10 corpus bundles (10%, 20%, . . ., 100% of the corpus). This implies that, at the time of writing, the AVIATOR demo version works on 160 (4 × 4 × 10) different indexes which, if statically stored, would occupy up to 230 GB.</p><p>The system run obtained over a partial index is an approximation of the "true" run obtained on the complete index. In Figure <ref type="figure">2</ref> we show the average relative nDCG difference between partial indexes and the full index. As expected, the accuracy of the measure grows with the index size, and the approximate effectiveness is consistent across all the 64 tested systems. 
For instance, with a 60% index, the nDCG estimate for most systems is about 40% lower than the true value obtained with the full index. Figure <ref type="figure">3</ref> shows that, on TREC7, the system rankings obtained on partial indexes are highly correlated with the ranking obtained on the full one. The correlation is based on Kendall's τ <ref type="bibr" target="#b5">[6]</ref>; following a common rule of thumb [9], two rankings are considered highly correlated when τ &gt; 0.8. Thus, when comparing all 64 IR systems on 20% of the full index (B 2 ), our system ranking is already quite close to the one obtained with the full index. The correlation increases rather rapidly and, once half of the collection is indexed, AVIATOR generates a reliable estimate of average system performance.</p><p>Figure <ref type="figure" target="#fig_1">4</ref> illustrates a topic-based analysis of the nDCG, reporting the relative differences between the 64 runs calculated on partial indexes and the corresponding final runs calculated on the full index. We can see that with a 10% index (bundle 1), half of the topics have an nDCG with an 80% difference from the true nDCG value. Nevertheless, the nDCG approximation improves steadily as the index grows: with half of the collection indexed (bundle 5), the nDCG approximation for half of the topics shows less than a 40% difference from the final value.</p></div>
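The ranking comparison above can be reproduced with a minimal Kendall's tau-a computation over per-system scores (ties are ignored; `kendall_tau` and the score values below are illustrative, not AVIATOR's actual numbers):

```python
from itertools import combinations

def kendall_tau(scores_a, scores_b):
    """Kendall's tau-a between two system rankings, given each system's
    effectiveness score under two conditions (e.g. partial vs full index)."""
    n = len(scores_a)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        # A pair is concordant if both conditions order systems i, j the same way.
        s = (scores_a[i] - scores_a[j]) * (scores_b[i] - scores_b[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# nDCG of five hypothetical systems on a partial index vs the full index
partial = [0.41, 0.38, 0.52, 0.30, 0.47]
full    = [0.48, 0.40, 0.55, 0.33, 0.46]
tau = kendall_tau(partial, full)
# rule of thumb: rankings with tau > 0.8 are considered highly correlated
```

Here one of the ten system pairs is ordered differently by the two indexes, giving τ = 0.8.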
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">FRONT-END COMPONENT</head><p>The front-end component is a Web application designed on the basis of the Model-View-Controller design pattern. Its development leverages HTML5 and the D3 <ref type="foot" target="#foot_3">4</ref> and jQuery<ref type="foot" target="#foot_4">5</ref> JavaScript libraries.</p><p>Figure <ref type="figure" target="#fig_2">5</ref> shows the configuration page of AVIATOR. The user can select among different corpora, topic sets and pool files. The current version of AVIATOR is based on the TIPSTER collection and the TREC7 ad-hoc topics and pool file. For demo purposes the partial indexes have been precomputed; the interaction with the system can therefore be artificially sped up to avoid the actual waiting time between one index version and the next. Moreover, the user can select the stoplist and the stemmer to be used for building the index, as well as a retrieval model. The retrieval model can be changed afterwards, and other models can be added to the evaluation and analytics phase.</p><p>Figure <ref type="figure" target="#fig_3">6</ref> shows the main analytics interface for the topic-based analysis. At the top of the screen, the main settings related to the collection and index are reported as a reference for the user. Below, two tabs can be used to conduct a topic-based or an overall analysis. In the top-right corner, the user can see the percentage of the corpus and the number of documents currently indexed. The main interaction interface shows a scatter plot with the Average Precision values of the retrieval model selected in the configuration phase. Just above the scatter plot, two tabs can be used to add new retrieval models and to change the evaluation measure, as shown in Figure <ref type="figure" target="#fig_4">7</ref> (all measures returned by trec_eval are available).</p><p>Figure <ref type="figure" target="#fig_5">8</ref> illustrates the scatter plot. 
Four different retrieval models can be compared through a pop-up window triggered by hovering over the points of the plot. The pop-up reports the retrieval model, the measure value and the topic being inspected. The user can zoom into a specific part of the scatter plot to better inspect the results.</p><p>Figure <ref type="figure" target="#fig_6">9</ref> illustrates how the user is notified when a new version of the index is ready. The user can decide whether or not to work on the new version of the index. When a new version of the index is loaded, all the visualizations are updated accordingly and the user settings are maintained from one version to the next.</p><p>Figure <ref type="figure" target="#fig_7">10</ref> shows the interface enabling the inspection of the overall results (averaged over all topics) of the tested retrieval models. In this case too, hovering over the plot bars triggers a pop-up window providing detailed information on the inspected system.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Incremental indexing: the interaction between the stable and dynamic cores.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Boxplot distribution of topic-based nDCG relative differences between partial and full indexes.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: The AVIATOR configuration interface.</figDesc><graphic coords="4,53.80,83.69,240.25,207.89" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: The AVIATOR inspection interface: topic per topic visualization with a single model.</figDesc><graphic coords="4,53.80,326.44,240.25,219.29" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 7 :</head><label>7</label><figDesc>Figure 7: The AVIATOR inspection interface: evaluation measure selection.</figDesc><graphic coords="4,317.96,241.66,240.25,118.64" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 8 :</head><label>8</label><figDesc>Figure 8: The AVIATOR inspection interface: in-depth result analysis.</figDesc><graphic coords="4,317.96,402.15,240.25,110.75" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 9 :</head><label>9</label><figDesc>Figure 9: The AVIATOR inspection interface: index update.</figDesc><graphic coords="4,317.96,543.78,240.25,115.97" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Figure 10 :</head><label>10</label><figDesc>Figure 10: The AVIATOR inspection interface: overall analysis.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Figure 2: Average nDCG relative difference between partial indexes at different levels of cut-off and the full index. Each line shows one of the 64 tested IR systems.</head><label></label><figDesc></figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Figure 3: Kendall's τ correlation between the system rankings (based on nDCG) obtained over increasingly more complete index bundles</head><label></label><figDesc></figDesc><table /><note>(B i , i ∈ [1, 10]) and the complete index bundle (B 10 ).</note></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">http://lucene.apache.org/solr/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">https://trec.nist.gov/data/qa/T8_QAdata/disks4_5.html</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">The boolean model, implemented in Apache Solr, uses a simple matching coefficient to rank documents</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3">https://d3js.org</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_4">https://jquery.com</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">CLAIRE: A combinatorial visual analytics system for information retrieval evaluation</title>
		<author>
			<persName><forename type="first">M</forename><surname>Angelini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Fazzini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Santucci</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Silvello</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.jvlc.2013.12.003</idno>
		<ptr target="https://doi.org/10.1016/j.jvlc.2013.12.003" />
	</analytic>
	<monogr>
		<title level="m">Information Processing &amp; Management, in print</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">VIRTUE: A Visual Tool for Information Retrieval Performance Evaluation and Failure Analysis</title>
		<author>
			<persName><forename type="first">M</forename><surname>Angelini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Santucci</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Silvello</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.jvlc.2013.12.003</idno>
		<ptr target="https://doi.org/10.1016/j.jvlc.2013.12.003" />
	</analytic>
	<monogr>
		<title level="j">J. Vis. Lang. Comput</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="394" to="413" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">A Visual Analytics Approach for What-If Analysis of Information Retrieval Systems</title>
		<author>
			<persName><forename type="first">M</forename><surname>Angelini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Santucci</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Silvello</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. 39th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2016)</title>
				<meeting>39th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2016)<address><addrLine>New York, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM Press</publisher>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Toward an anatomy of IR system component performances</title>
		<author>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Silvello</surname></persName>
		</author>
		<idno type="DOI">10.1002/asi.23910</idno>
		<ptr target="https://doi.org/10.1002/asi.23910" />
	</analytic>
	<monogr>
		<title level="j">Journal of the Association for Information Science and Technology</title>
		<imprint>
			<biblScope unit="volume">69</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="187" to="200" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">On Collection Size and Retrieval Effectiveness</title>
		<author>
			<persName><forename type="first">D</forename><surname>Hawking</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">E</forename><surname>Robertson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Information Retrieval</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="99" to="105" />
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">G</forename><surname>Kendall</surname></persName>
		</author>
		<title level="m">Rank correlation methods</title>
				<meeting><address><addrLine>Oxford, England</addrLine></address></meeting>
		<imprint>
			<publisher>Griffin</publisher>
			<date type="published" when="1948">1948</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Visual Pool: A Tool to Visualize and Interact with the Pooling Method</title>
		<author>
			<persName><forename type="first">A</forename><surname>Lipani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lupu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Hanbury</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. 40th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2017)</title>
				<meeting>40th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2017)</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
