<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Searching Unseen Sources for Historical Information: Evaluation Design for the NTCIR-18 SUSHI Pilot Task</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Douglas</forename><forename type="middle">W</forename><surname>Oard</surname></persName>
							<email>oard@umd.edu</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Maryland</orgName>
								<address>
									<settlement>College Park</settlement>
									<region>MD</region>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Tokinori</forename><surname>Suzuki</surname></persName>
							<email>tokinori@inf.kyushu-u.ac.jp</email>
							<affiliation key="aff1">
								<orgName type="institution">Kyushu University</orgName>
								<address>
									<settlement>Fukuoka</settlement>
									<country key="JP">Japan</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Emi</forename><surname>Ishita</surname></persName>
							<email>ishita.emi.982@m.kyushu-u.ac.jp</email>
							<affiliation key="aff1">
								<orgName type="institution">Kyushu University</orgName>
								<address>
									<settlement>Fukuoka</settlement>
									<country key="JP">Japan</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Noriko</forename><surname>Kando</surname></persName>
							<email>kando@nii.ac.jp</email>
							<affiliation key="aff2">
								<orgName type="institution">National Institute of Informatics</orgName>
								<address>
									<settlement>Tokyo</settlement>
									<country key="JP">Japan</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff3">
								<orgName type="department">Testbeds and Community for Information Access Research</orgName>
								<orgName type="laboratory">The First Workshop on Evaluation Methodologies</orgName>
								<address>
									<addrLine>December 12</addrLine>
									<postCode>2024</postCode>
									<settlement>Tokyo</settlement>
									<country key="JP">Japan</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Searching Unseen Sources for Historical Information: Evaluation Design for the NTCIR-18 SUSHI Pilot Task</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">C5DB42F40726780D7BD2257D64CCD435</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T16:46+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Information retrieval</term>
					<term>Archival access</term>
					<term>Evaluation</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In evaluation of ranked retrieval, the usual assumption is that the documents to be searched can be indexed before the query is received and the search is performed. The NTCIR-18 SUSHI Pilot Task, by contrast, models the case in which only a small sample of the documents to be searched can be indexed before the query is received. This task model arises in the context of searching within large archives of paper documents, for example. The stark difference in what can be indexed before the query is received has consequences for both task design and evaluation design, both of which are discussed in this paper.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Information retrieval has generally been modeled on the idea of the library catalog. We have some collection of materials, we can index those materials in some way, and then in response to a query we can suggest the materials that the searcher might want to see. Archives, 1 however, are different. Archives collect unique historical materials, and because those materials are unique, and thus irreplaceable, archives typically must place much greater emphasis on acquisition and preservation than on access. Access is not ignored, but it must be done within stringent resource constraints. It is thus not practical to describe (or digitize) many of the individual documents in an archival collection. Instead, archivists typically describe collections at higher levels of aggregation, such as folders, boxes, or segments of the collection that go by names such as record groups, series, or fonds.</p><p>This situation creates challenges for searchers, in part because different parts of an archive often are arranged differently. This happens because archivists can economize on the effort needed to arrange the materials in a collection by taking advantage of the original order in which the materials were organized when they were in active use <ref type="bibr" target="#b0">[1]</ref>. For example, materials on space exploration might be originally arranged by program (Mercury, Gemini, Apollo, Shuttle, …), while materials on diplomacy might have been organized by country (Sweden, Uganda, Japan, …). Further down in the organization, we might find the diplomacy materials organized by topic (agriculture, education, economy, …), while the space exploration materials might be organized by function (design, testing, contract management, …). 
The Swedish agriculture records might then be further organized by author (Kissinger, Smith, Kennan, …), whereas the Apollo design materials might be organized by component (space suit, thruster, radio, …). And so on all the way down. Well, actually not all the way down, since the description process simply must stop before getting to the level of individual documents. After all, the U.K. National Archives has about 14 billion printed pages, which if put in a single stack would stretch halfway around the world. Nobody, and indeed no group of 100 people, could ever hope to look at all of that, much less to describe all of it at the level of individual documents. But even that is just a tiny part of the problem; there are about 26,000 archives in the United States alone <ref type="bibr" target="#b1">[2]</ref>, many with nowhere near the level of funding (relative to the size of their collections) as the U.K. or U.S. National Archives. Simply put, nobody will ever see all of this stuff.</p><p>The net result of this situation is to shift a greater burden onto those who want to find things in an archive. Searchers must learn where collections that might contain what they want are stored, they must know how those collections are organized, they must (given the paucity of digitization) then travel to an archive, request access to containers (e.g., boxes) that might contain what they are looking for, and then look through those materials. This is a process in which time is measured in weeks, and costs might be measured in hundreds or even thousands of U.S. dollars. Per search.</p><p>Our goal in the NTCIR-18 SUSHI 2 Pilot Task 3 is to begin to reduce the time and expense of finding materials in an archive. Our focus is on materials that are on physical media (e.g., paper or microfilm), and on materials that, like the vast majority of most archives (e.g., like 97.6% of the U.S. 
National Archives) have not yet been digitized, or even described at the level of individual documents. We seek to do that by supporting the development of automated systems that can learn from a very limited number of examples that have been digitized, and from whatever metadata at higher levels of aggregation might exist, to suggest where in a collection a searcher might most productively look. SUSHI is designed to support that research by developing test collections that model the real problem, and that do so in a way that supports insightful and affordable evaluation.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">The Folder Ranking Task</head><p>The principal task in the NTCIR-18 SUSHI Pilot Task is Folder Ranking. Figure <ref type="figure" target="#fig_0">1</ref> displays the input (a query) and the output (a ranked list of folders) for this task. Specifically, given a query and an unsorted list of all folders in the collection, the task is to use the metadata describing each of those folders, together with a sparse sample of digitized documents and document-level metadata from some documents in some of those folders, to rank the folders a searcher might most want to see. Table <ref type="table">1</ref> illustrates some of the metadata that is available in our Folder Ranking Task test collection. The sparse digitized sample in our dry run test collection includes five documents per box, with one document sampled from each of the five largest folders in each box (although the 21 boxes with fewer than five folders have more than one sample drawn from some folders). For the actual final Folder Ranking Task, we plan to explore a mixture of even sampling (the same number of samples per box) and uneven sampling (with more samples from some boxes than from others). Both approaches can be useful. For early experiments, drawing the same number of samples from each box can help achieve better control over experimental conditions. In real archives, however, digitization and description are both unevenly applied, and uneven sampling can help to characterize the additional challenges that such a situation produces.</p></div>
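The even-sampling scheme just described (five documents per box, one from each of the five largest folders, with extra draws from some folders when a box has fewer than five) could be sketched as follows. This is an illustrative sketch only; the data layout and function name are our assumptions, not part of the task infrastructure.

```python
import random

def sample_box(folders, per_box=5, rng=None):
    """Draw `per_box` documents from one box: one from each of the largest
    folders, cycling back for extra draws when the box has fewer folders
    than `per_box`. `folders` maps folder_id -> list of doc_ids."""
    rng = rng or random.Random(0)
    # Rank this box's folders from largest to smallest.
    ranked = sorted(folders.items(), key=lambda kv: len(kv[1]), reverse=True)
    total = sum(len(docs) for _, docs in ranked)
    sample, i = [], 0
    while len(sample) < min(per_box, total):
        _, docs = ranked[i % len(ranked)]
        remaining = [d for d in docs if d not in sample]
        if remaining:
            sample.append(rng.choice(remaining))
        i += 1
    return sample
```

A box with three folders, for example, would contribute one document from each folder on the first pass and then two more from its largest folders on the second pass.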
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Evaluation for the NTCIR-18 Pilot Task</head><p>In order to simulate the real task, we have assembled a collection that we can easily sample and that we can easily judge for relevance. This is a collection of 31,682 U.S. State Department documents from 1,337 folders in 124 boxes. What makes the collection easily judged is that it is fully digitized, and that we have topical metadata for every document. Table <ref type="table">1</ref> shows examples of that metadata, which (like the documents) is from the U.S. National Archives.</p><p>Since a system under test will know which folders exist, we operationalize the idea of "searching well" as ranking those folders well. We measure how well a system constructs that ranking using nDCG@5 as the principal evaluation measure. A cutoff at 5 corresponds to about a half hour's work by someone who is actually looking at physical documents in an archive. We estimate this from the fact that a folder contains an average of 31682/1337 ≈ 24 documents, together with our expectation that a skilled searcher could recognize a relevant document in 15 seconds (skilled users of archives flip through documents very quickly). Obtaining the boxes that contain those folders might take another hour or two, but requesting new boxes could be interleaved with examining results from prior requests.</p><p>In the NTCIR-18 SUSHI Pilot Task we have two ways of getting the topics on which queries are based. The first approach, used for the dry run, was to randomly select a "query document" that participating systems did not see at training time, and then to use the title metadata for that document as the query. <ref type="foot" target="#foot_0">4</ref> We then treat any document with the same title metadata (from anywhere in the collection, not just from the known query document) as being relevant. 
This is an extended variant of known-item retrieval. Note that participating systems can't see those titles (because systems can only see document-level metadata for training documents, and we don't treat training documents as relevant). So systems must perform some sort of inference in order to rank folders without ever having seen a single one of the relevant documents anywhere in the collection <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b3">4]</ref>.</p><p>The approach we used to create the dry run test collection can be useful as a basis for initial system development, but exact matching on document titles is at best a weak proxy for true human relevance judgments. We therefore need a second approach in which actual people make those judgments. For our final evaluation, we plan to rely instead on human assessors, preferably graduate students with a background in history or library science. The assessors will initially create search topics in the traditional title/description/narrative format, <ref type="foot" target="#foot_1">5</ref> based on their understanding of the collection's content. They will then check to see if at least a few relevant documents actually exist, using a full-collection document-level search system that we have built using PyTerrier. With this system, assessors can issue queries (either free form, or copied from a topic field), rank documents using that query based on one of several ways of indexing the collection (e.g., OCR-only, Title-only, or both), <ref type="foot" target="#foot_2">6</ref> skim the folder label and title for every document in the resulting ranked list, selectively view PDF scans of individual documents, search within a document for any term, and record their tentative relevance judgments for any documents that they encounter during the topic development process. 
Once they have finalized a topic, we will save their tentative relevance judgments so that they can later finalize those judgments when performing relevance assessment.</p><p>The relevance assessment process will then be performed in the same way, using the same system, but with enough time allocated for more careful searching, a process known as interactive search and judgment <ref type="bibr" target="#b4">[5]</ref>. During relevance assessment we will also ask the assessors to review the tentative relevance judgments that they had created during topic development. We separate the topic development and relevance judgment processes both for convenience (we need the topics sooner) and to encourage assessors to treat the topics as well defined and immutable during the relevance assessment process.</p><p>In this first year of the task we don't plan to use pooling to build assessment pools because participating systems will have ranked lists of folders, but relevance judgments are made not on folders but on individual documents (documents which the participating systems never saw, and could not have ranked). In future evaluations we may consider assessment processes that could benefit from folder pooling (e.g., allocating some assessor time to searching pooled folders more thoroughly). For the NTCIR-18 SUSHI Pilot Task we won't use folder pooling as a part of our assessment process, but we will look at what folder pooling would have produced in order to see if the density of relevant folders is markedly higher than random selection, and if it is we might employ folder pooling in the future.</p><p>Because the systems to be evaluated produce ranked lists of folders, we must map our document-level relevance judgments to folder-level relevance judgments in some way. 
As Figure <ref type="figure" target="#fig_0">1</ref> illustrates, for the NTCIR-18 SUSHI Pilot Task we will aggregate document-level judgments to folder-level judgments by simply using as the folder's judgment the highest judgment for any judged document in that folder. The resulting relevance judgments can then be used directly to compute folder-level nDCG@5, or binarized to compute, for example, Mean Average Precision (MAP).</p></div>
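The scoring pipeline just described (max-aggregation of document judgments to folders, then graded folder-level nDCG@5) can be sketched in a few lines. The grade encoding (0 = not relevant, 1 = Relevant, 2 = Highly Relevant) and the function names are our assumptions for illustration, not the official evaluation code.

```python
import math

def folder_judgments(doc_judgments):
    """Aggregate (folder_id, grade) document-level judgments to a
    folder-level judgment by taking the max grade seen in each folder."""
    folders = {}
    for folder_id, grade in doc_judgments:
        folders[folder_id] = max(folders.get(folder_id, 0), grade)
    return folders

def ndcg_at_k(ranking, judgments, k=5):
    """nDCG@k for a ranked list of folder_ids against graded judgments."""
    def dcg(gains):
        return sum(g / math.log2(i + 2) for i, g in enumerate(gains[:k]))
    gains = [judgments.get(f, 0) for f in ranking]
    ideal = dcg(sorted(judgments.values(), reverse=True))
    return dcg(gains) / ideal if ideal > 0 else 0.0
```

Binarizing the same folder-level judgments (grade > 0) would support set-based measures such as MAP, as noted above.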
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Future Evaluation Design Issues</head><p>That's as far as we expect to be able to get for the NTCIR-18 SUSHI Pilot Task, but the task design raises several other important evaluation design issues. Here we highlight three of those questions.</p><p>First, our use of nDCG@5 simplifies the goal perhaps more than we might like. All evaluation involves model building, and all models are simplifications of reality <ref type="bibr" target="#b5">[6]</ref>. But we might productively complexify our evaluation measure in two ways. First, we might switch from a one-and-done measurement approach to one based on the density of relevant documents in a folder. In our present approach, systems get no more credit for finding a folder with five relevant documents than for finding a folder with just one. It is probably more realistic to have some extra credit for highly ranking a folder with a larger number of relevant documents, and perhaps to get somewhat more credit for finding folders with fewer documents that have to be looked through (for any given number of relevant documents in the folder). A cost model based on the discovery rate of relevant documents would be one formulation that could address both factors. For this, we might also look for inspiration to the evaluation measures that were designed for the INEX Retrieval In Context task, where the time required to examine ranked elements from hierarchically structured content was the focus (in that case, the time required to examine ranked passages that had been extracted from documents) <ref type="bibr" target="#b6">[7]</ref>. <ref type="foot" target="#foot_3">7</ref>The situation is, however, not really even that simple. In the U.S. National Archives, for example, searchers request access not to folders, but to the boxes that contain the folders they want to see. 
All else equal, we would therefore prefer to find highly ranked folders that happen to be in the same box (or in nearby boxes, since for practical reasons archivists are often equally happy to fetch a short sequence of boxes that are stored together). There's probably no end to how much we could complexify this (e.g., how about constructing the shortest path through an archive to pick up some set of boxes that contain folders that together contain some given number of relevant documents?). We are not yet ready to commit to a new measure, but we are able to see that we likely will ultimately want one.</p><p>Second, confidence intervals and testing for significant differences are a bit more complex in this environment than in a typical ranked retrieval evaluation. The reason for this is that we need to account not just for random variations from the choice of query, but also random variations from our choice of the training set. In our dry run we can see the effect of the training set because we run half our queries with one randomly sampled training set and half with another. But when we compute confidence intervals, we ignore the training set variation and (for convenience) compute the confidence intervals only over the queries. In the future we will likely want to use something along the lines of an ANOVA in an effort to tease apart topic and training set effects <ref type="bibr" target="#b7">[8]</ref>.</p><p>Third, our present approach is vulnerable to the common criticism of classic information retrieval test collections that they are not typically designed to characterize cross-collection differences. This may be a minor sin when we might expect BM25 or BERT to work about as well on one English news collection as another, but in SUSHI we are seeking to model a real situation in which different collections can have vastly different metadata structures. 
Because just searching the folder metadata is an obvious baseline against which to compare, we need to pay attention to these differences in what metadata is available. And, of course, real archival collections are not all equally amenable to OCR: there are handwritten collections, photograph collections, collections written entirely in hieroglyphics or cuneiform, and (in one memorable case) a collection that consisted of nothing but x-ray film containing images of fish skeletons. We're not going to be able to explore that entire space of possible collections in finite time, but that's not the key concern here. Rather, the question is how best to explore any of it.</p><p>To see why that can be challenging, it may help to articulate what a collection must have. First, it must have content for which we can create topics and for which we can perform relevance judgments. So the fish skeletons are off the table. Second, we must know at least which box contained each item (e.g., each document), and we would love to also know which folder in that box contained each item. Third, we really want to have scanned images of all the documents. This third one seems non-negotiable; we tried going over to the U.S. National Archives and doing relevance judgments for the top-ranked box for one query. It was a 10-minute drive from our office, but once we got there it took three hours just to get the box. So doing large numbers of relevance judgments by requesting and then examining paper doesn't seem like a scalable solution.</p><p>It is not hard to find reasonably large collections of digitized materials, and it is not hard to find large collections that have good box and folder metadata. But it is harder than you might expect to find both together (and even harder if you initially prefer to omit handwritten materials and photographs). 
Because most archives do not yet make both content and metadata available through an API, building relationships with institutions that have something close to what we need will be key to gaining access to those collections, and to getting approval to share them broadly with other researchers.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">The Archival Reference Detection Task</head><p>The sizes of our training sets in the Folder Ranking Task are designed to model the sparsity of existing document-level metadata in real collections. For example, sampling an average of 5 documents per box closely approximates the actual fraction of the U.S. National Archives collection that has document-level description. One way of improving the potential for inferring where to look for materials that might match a query is to increase the number of documents for which document-level metadata is available. That is the goal of the Archival Reference Detection Task. Given the text of a footnote or endnote, the task is to determine whether that footnote or endnote contains any references to archival materials.</p><p>The key insight that motivates our Archival Reference Detection Task is that when scholars cite materials from archives in their published work, that creates an additional source of document-level description that we could use in search tasks as well. We can use these descriptions in two ways. Most directly, we can parse the archival reference to extract document-level metadata such as a document title and location information (e.g., which archive, which series, which box, and perhaps even which folder). For example:</p><p>• Roosevelt to Secretary of War, June 3, 1939, Roosevelt Papers, O.F. 268, <ref type="bibr">Box 10</ref>; unsigned memorandum, Jan. 6, 1940, ibid., <ref type="bibr">Box 11</ref>.</p><p>• Wheeler, D., and R. García-Herrera, 2008: Ships' logbooks in climatological research: Reflections and prospects. Ann. New York Acad. Sci., 1146, 1-15, doi:10.1196/annals.1446.006. Several archive sources have been used in the preparation of this paper, including the following: Logbook of HMS Richmond. The U.K. National Archives. 
ADM/51/3949</p><p>In the first example, we can see the collection name "Roosevelt Papers," document descriptions, and some box numbers. As the second example illustrates, scholars sometimes also package descriptive text together with an archival reference in the same footnote or endnote. When present, that could potentially serve as a useful free-form document-level description of some identifiable document in an archive. We could also potentially use the content at and near the point where the corresponding citation was made in the main body of a paper as a free-form description not only of what the cited document contains, but also of how that document's content might be useful (in at least one context).</p><p>In our early work on detecting archival references in papers on History <ref type="bibr" target="#b8">[9]</ref>, we found that 45 of 3,500 automatically extracted footnotes or endnotes were references to archival materials, a prevalence of about 1.3%. From this we can estimate that if we wish to collect 10,000 archival references, we would need to run Archival Reference Detection on about a million documents. We thus chose one million documents as our target collection size for the Archival Reference Detection Task. This is a classification task in which systems are asked to return a binary decision indicating whether each footnote or endnote includes an archival reference. For footnotes or endnotes that are classified by a system as archival references, a system can optionally also elect to extract the archival reference (i.e., to segment the archival reference from any descriptive text that the author had packaged with it).</p></div>
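To make the classification task concrete, a naive cue-phrase heuristic might flag footnotes that name repositories or containers common in archival citations, as in the examples above. This is purely an illustrative sketch, not the task's baseline system, and the cue list is our assumption; a real system would need to learn such patterns from annotated data.

```python
import re

# Hypothetical cue phrases drawn from common archival citation elements
# (repository names, container numbers); illustrative only.
ARCHIVAL_CUES = re.compile(
    r"\b(National Archives|Papers|Box \d+|Record Group|RG \d+|"
    r"[Ff]older \d+|ADM/\d+)\b")

def looks_archival(note: str) -> bool:
    """Return True if the footnote/endnote text matches an archival cue."""
    return bool(ARCHIVAL_CUES.search(note))
```

Both example footnotes above would be flagged (via "Papers"/"Box 10" and "National Archives"/"ADM/51"), while an ordinary journal citation would not; on a real collection such a heuristic would of course produce both false positives and false negatives.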
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Evaluation for the NTCIR-18 Pilot Task</head><p>We provided a dry run test collection for the Archival Reference Detection Task that we had manually annotated for the presence or absence of archival references in our previous research. That collection contains 1,836 footnotes or endnotes from open-access papers in the field of History that we obtained using the Semantic Scholar API. <ref type="foot" target="#foot_4">8</ref> Annotations were performed by two annotators, one of whom was a Ph.D. student with expertise in the use of cultural heritage materials. Cohen's Kappa for agreement on a subset of that collection, with one of us as the second annotator, was 0.8 <ref type="bibr" target="#b8">[9]</ref>, which Landis and Koch would characterize as substantial agreement <ref type="bibr" target="#b9">[10]</ref>. From this we conclude that the manual annotation task is tractable, at least at small scale.</p><p>We therefore began a larger crawl of Semantic Scholar for use in the Archival Reference Detection Task. Because this is a highly unbalanced binary classification task, we will evaluate participating systems on that larger collection using the F1 measure (the harmonic mean of precision and recall). To control evaluation costs, we will compute F1 using a stratified sample. Our stratification will be based on the number of participating systems that classified each footnote or endnote as an archival reference. For example, we will sample footnotes or endnotes that are classified as archival references by every participating system most densely, and we will very sparsely sample footnotes or endnotes that are not classified as archival references by any system. We plan to evaluate the optional task of segregating an archival reference from any text that is packaged with it in the same footnote or endnote using the Jaccard coefficient. 
We will compute this Jaccard coefficient on characters, dividing the number of characters that both the system and the annotator believe are in the archival reference by the number of characters that either the system or the annotator believe are in the archival reference (i.e., the intersection over the union).</p><p>To create these annotations, we plan to hire annotators who are graduate students in history or in some related discipline. Because the scholarly papers from which we extract footnotes and endnotes will be in English, we will further require reading fluency in English. Based on our earlier experience with human annotation, we expect that (after training) annotators will be able to classify about two footnotes or endnotes per minute. We thus expect each annotator to produce about 1,000 annotations per week. We will therefore design our sampling to select about 5,000 footnotes or endnotes for annotation. We will then subsample from the annotations marked as archival references by an annotator and ask that annotators further segment the archival reference from any text that is packaged with it in the same footnote or endnote. In our experience, such packaging is relatively infrequent, occurring perhaps 10% of the time, so we expect this second annotation process to go fairly quickly. Overall, we expect that annotation will require about one month, although to guard against unexpected delays we will perform assessment in batches that are all sampled in a way that would allow estimation (with broader confidence intervals) even if annotation of some of the later batches cannot be completed in the available time.</p></div>
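The character-level Jaccard coefficient just defined can be computed directly from the character offset spans that the system and the annotator mark within a footnote. Representing spans as half-open (start, end) offset pairs is our convention here for illustration:

```python
def char_jaccard(system_span, annotator_span):
    """Jaccard coefficient over character positions: the number of character
    offsets covered by both spans, divided by the number covered by either
    (intersection over union). Spans are half-open (start, end) pairs."""
    sys_chars = set(range(*system_span))
    ann_chars = set(range(*annotator_span))
    union = sys_chars | ann_chars
    # Two empty spans agree perfectly by convention.
    return len(sys_chars & ann_chars) / len(union) if union else 1.0
```

For example, a system span of characters 0-10 against an annotator span of 5-15 overlaps on 5 of 15 distinct characters, giving a Jaccard score of 1/3.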
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Future Evaluation Design Issues</head><p>Our goal in the Archival Reference Detection Task is to begin the process of finding and using archival references by first finding them. In future editions of the task, we can then extend the goal to include not just classification and segmentation, but also extraction of specific fields (such as title, archive and box) and extraction of associated descriptive text from the main body of the paper that cited this archival reference. Once that has been done, we could progress to extrinsic evaluation, measuring the benefits to an actual search task of having a broader set of document-level metadata (and other forms of document-level description) on which to base its inference. This will ultimately require new test collections for the search task, since our present Folder Ranking Task test collection draws content from too narrow a subset of the full archival universe to be useful for evaluating the impact of footnotes or endnotes that could reference anything in any archive anywhere.</p><p>Our initial experience with archival reference detection points to two other potential challenges. One is that our present approach to stratified sampling is better suited to characterizing what has been found than it is to characterizing what has been missed. This is a natural consequence of the class skew in the classification task. In future shared task evaluations, we might want to consider the use of active learning as a way of better exploring the range of cases that all participating systems are missing <ref type="bibr" target="#b10">[11]</ref>.</p><p>A second challenge is that materials in some archives are quite clearly receiving more attention in the scholarly literature than materials in other archives. For example, we see anecdotally that materials in the U.K. 
National Archives have been cited much more often (in the small sets that we have examined to date) than are materials in the U.S. National Archives, despite the holdings of those two institutions being of similar sizes. More careful study of this skewed distribution seems to be called for, and of course we can expect that the large-scale results from the Archival Reference Detection Task could serve as one useful basis for such a study.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Building a Research Community</head><p>Several years ago, one of us wrote a thought piece identifying factors that might lead to a decline in the demand for shared task information retrieval evaluations <ref type="bibr" target="#b11">[12]</ref>. Without rehashing the complete argument, the basic idea was that many of the important affordances of shared task evaluations can now also be achieved in other ways, and that some of those ways have advantages in cost, friction, lead time, or scalability. There was, however, one exception to that prognostication: the value of shared tasks for building research communities. With that in mind, we are pleased to now have six registered teams (as of late October), although we note that two of those teams include task organizers. Looking toward the future, we also know people who are working on other aspects of archival access, and we are familiar with an earlier metadata-focused cultural heritage task run at CLEF (http://ims.dei.unipd.it/data/chic/); we have tried to spread the word in both of those communities. We have also tried to lower barriers to starting on the SUSHI task by making baseline systems available that potential participants can easily modify, without having to develop new code for the rather complex data handling needed to fully specify the training and test conditions in the Folder Ranking Task.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>In this paper we have described the design of the NTCIR-18 SUSHI pilot task, and we have identified some new evaluation questions that emerge from that work. Although SUSHI is already an NTCIR-18 Pilot Task, there are still many issues around evaluation design, community building, and scaling up the size of our test collection that we feel could benefit from further discussion at this workshop.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Creation of relevance judgments for folders based on the max judgments for judged documents in that folder. R: Relevant, HR: Highly Relevant.</figDesc></figure>
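The folder-level judgment rule illustrated in Figure 1 (a folder inherits the maximum relevance judgment over its judged documents) can be sketched as follows. The numeric grade encoding (0 = Not Relevant, 1 = Relevant, 2 = Highly Relevant) and the folder identifiers are illustrative assumptions, not taken from the task data:

```python
def folder_judgment(doc_grades):
    """Return the maximum grade over judged documents in a folder,
    or None if no document in the folder was judged."""
    judged = [g for g in doc_grades if g is not None]
    return max(judged) if judged else None

# None marks a document that was never judged.
folders = {
    "box3/folder1": [0, 2, 1],     # one Highly Relevant document
    "box3/folder2": [0, 0],        # only Not Relevant documents
    "box4/folder1": [None, None],  # nothing judged
}
grades = {f: folder_judgment(g) for f, g in folders.items()}
```

The max rule means a single Highly Relevant document suffices to make its folder Highly Relevant, which matches the intuition that a searcher retrieving the folder would find that document.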
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_0">This is actually a bit oversimplified. From initial experiments, we learned that we also need to limit the length of the title, because some titles are so long as to be unrealistic surrogates for a human-issued query. We therefore control the query length by first assembling all titles in the collection, making separate randomly-ordered lists for all unique 2-, 3-, 4-, and 5-word titles, and then having two annotators each select 25 of each that look to them like realistic queries.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_1">In our dry run collection we use the same topic format, but for the dry run all three topic fields are identical.</note>
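The query-length control described in footnote 4 (assemble all titles, keep the unique 2- to 5-word ones, and randomize each length bucket before annotators pick realistic queries) can be sketched as below. The function name `candidate_queries` and the sample titles are illustrative, and the final selection of realistic queries remains a manual annotation step:

```python
import random

def candidate_queries(titles, seed=0):
    """Bucket unique titles by word count (2 to 5 words) and
    shuffle each bucket for presentation to annotators."""
    rng = random.Random(seed)
    buckets = {n: [] for n in (2, 3, 4, 5)}
    seen = set()
    for t in titles:
        n = len(t.split())
        if n in buckets and t not in seen:
            seen.add(t)
            buckets[n].append(t)
    for n in buckets:
        rng.shuffle(buckets[n])
    return buckets

# Illustrative titles; duplicates and out-of-range lengths are dropped.
titles = ["Cold War", "Cold War", "Berlin Airlift Records",
          "Top Secret Memo May 1945", "Report on the Suez Crisis", "Memo"]
buckets = candidate_queries(titles)
```

A fixed random seed keeps the presentation order reproducible across annotators working from the same lists.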
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_2">Other ways include using folder labels to expand document text, or using GPT summaries of OCR text.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="7" xml:id="foot_3">Thanks to an anonymous reviewer for this suggestion!</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="8" xml:id="foot_4">https://www.semanticscholar.org/product/api</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgements</head><p>This work has been supported in part by the Japan Society for the Promotion of Science KAKENHI Grant Number 23KK0005 and the National Institute of Informatics Open Collaborative Research 2024 (24S0505).</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">The historical hazards of finding aids</title>
		<author>
			<persName><forename type="first">G</forename><surname>Wiedeman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The American Archivist</title>
		<imprint>
			<biblScope unit="volume">82</biblScope>
			<biblScope unit="page" from="381" to="420" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">US archival repository location data</title>
		<author>
			<persName><forename type="first">B</forename><surname>Goldman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">M</forename><surname>Tansey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Ray</surname></persName>
		</author>
		<ptr target="https://osf.io/cft8r/" />
		<imprint>
			<date type="published" when="2023">2023 (accessed October 3, 2024)</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Known by the company it keeps: Proximity-based indexing for physical content in archival repositories</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">W</forename><surname>Oard</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Theory and Practice of Digital Libraries</title>
				<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="17" to="30" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Searching for physical documents in archival repositories</title>
		<author>
			<persName><forename type="first">T</forename><surname>Suzuki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">W</forename><surname>Oard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Ishita</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Tomiura</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval</title>
				<meeting>the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval</meeting>
		<imprint>
			<date type="published" when="2024">2024</date>
			<biblScope unit="page" from="2614" to="2618" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Forming test collections with no system pooling</title>
		<author>
			<persName><forename type="first">M</forename><surname>Sanderson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Joho</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 27th annual International ACM SIGIR Conference on Research and Development in Information Retrieval</title>
				<meeting>the 27th annual International ACM SIGIR Conference on Research and Development in Information Retrieval</meeting>
		<imprint>
			<date type="published" when="2004">2004</date>
			<biblScope unit="page" from="33" to="40" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Science and statistics</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">E</forename><surname>Box</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of the American Statistical Association</title>
		<imprint>
			<biblScope unit="volume">71</biblScope>
			<biblScope unit="page" from="791" to="799" />
			<date type="published" when="1976">1976</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Expected reading effort in focused retrieval evaluation</title>
		<author>
			<persName><forename type="first">P</forename><surname>Arvola</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kekäläinen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Junkkari</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Information Retrieval</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="page" from="460" to="484" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">How do you test a test? a multifaceted examination of significance tests</title>
		<author>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Sanderson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining</title>
				<meeting>the Fifteenth ACM International Conference on Web Search and Data Mining</meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="280" to="288" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Automatically detecting references from the scholarly literature to records in archives</title>
		<author>
			<persName><forename type="first">T</forename><surname>Suzuki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">W</forename><surname>Oard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Ishita</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Tomiura</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 25th International Conference on Asia-Pacific Digital Libraries</title>
				<meeting>the 25th International Conference on Asia-Pacific Digital Libraries</meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="100" to="107" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">The measurement of observer agreement for categorical data</title>
		<author>
			<persName><forename type="first">J</forename><surname>Landis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Koch</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Biometrics</title>
		<imprint>
			<biblScope unit="volume">33</biblScope>
			<biblScope unit="page" from="159" to="174" />
			<date type="published" when="1977">1977</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Evaluation of machine-learning protocols for technology-assisted review in electronic discovery</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">V</forename><surname>Cormack</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">R</forename><surname>Grossman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 37th International ACM SIGIR Conference on Research &amp; Development in Information Retrieval</title>
				<meeting>the 37th International ACM SIGIR Conference on Research &amp; Development in Information Retrieval</meeting>
		<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="153" to="162" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">The future of information retrieval evaluation: NTCIR&apos;s legacy of research impact</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">W</forename><surname>Oard</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Evaluating Information Retrieval and Access Tasks</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
