<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Data Requirements for Evaluation of Personalization of Information Retrieval - A Position Paper</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Nicholas J. Belkin</string-name>
          <email>belkin@rutgers.edu</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Daniel Hienert</string-name>
          <email>Daniel.Hienert@gesis.org</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Philipp Mayr</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Chirag Shah</string-name>
          <email>chirags@rutgers.edu</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>GESIS</institution>
          ,
          <addr-line>Cologne</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>School of Communication &amp; Information, Rutgers University</institution>
          ,
          <addr-line>New Brunswick, NJ</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Two key, but usually ignored, issues for the evaluation of methods of personalization for information retrieval are: that such evaluation must be of a search session as a whole; and, that people, during the course of an information search session, engage in a variety of activities, intended to accomplish different goals or intentions. Taking serious account of these factors has major implications for not only evaluation methods and metrics, but also for the nature of the data that is necessary both for understanding and modeling information search, and for evaluation of personalized support for information retrieval (IR). In this position paper, we: present a model of IR demonstrating why these factors are important; identify some implications of accepting their validity; and, on the basis of a series of studies in interactive IR, identify some types of data concerning searcher and system behavior that we claim are, at least, necessary, if not necessarily sufficient, for meaningful evaluation of personalization of IR.</p>
      </abstract>
      <kwd-group>
        <kwd>Interactive IR</kwd>
        <kwd>Information Seeking</kwd>
        <kwd>Evaluation</kwd>
        <kwd>Task</kwd>
        <kwd>Session</kwd>
        <kwd>User log data</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        When people, seeking information in order to accomplish some task, or achieve some
goal, engage with information retrieval (IR) systems, they often, perhaps most
typically, conduct what one can term an Information Seeking Session (ISS). Although the
minimal form of such a session can be a single query put to the system, a response by
the system, a choice by the person of an item from the response, and the end of the
session, both research in information behavior, and observation of behavior in
operational systems, demonstrate that such behavior is not typical. Rather, an ISS often,
perhaps most often, consists of a number of such iterations [
        <xref ref-type="bibr" rid="ref15 ref19">1, 15, 19</xref>
        ]. What, then,
happens during the course of such an ISS? Although the typical IR system affords the
person little more than the ability to formulate and reformulate queries, and to observe
and select items from the system’s response, in the form of links to information
objects, there is substantial evidence that people intend to accomplish many tasks or
goals, in each such iteration, other than finding an information object that is relevant
to the query that is submitted [
        <xref ref-type="bibr" rid="ref13 ref21">2, 8, 13, 21</xref>
        ]. Such goals may include, inter alia,
learning about a domain, learning about the contents of a database, comparing information
objects, or identifying useful information objects through recognition, rather than
specification. We term such goals information seeking intentions (not to be confused
with the term intent, normally used to refer either to the general goal of the search as a
whole [e.g. 3, 11] or the topic of the search [e.g. 18] ([
        <xref ref-type="bibr" rid="ref22">22</xref>
        ] uses the term interactive
intentions).
      </p>
      <p>Under this understanding of people’s behaviors in interaction with IR systems, we,
as have others [e.g. 7, 18], propose that IR is best construed as a sequence of
interactions of the person with the IR system, motivated overall by some external task or
goal, with each interaction being itself motivated by some information seeking
intention. These interactions can be considered as sub-tasks that arise in the person’s
attempt to eventually achieve the overall goal of the ISS, which is to obtain that which
is deemed useful in accomplishing the motivating task/goal. This view of IR has
important implications for what it means to personalize support for IR interaction, and
how to accomplish such personalization, for how to evaluate such support, and,
importantly, for what data are required both to accomplish such support and to
evaluate it. In this position paper, we address, in particular, the issue of the data required to
accomplish and evaluate personalization of IR interaction.
</p>
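      <p>
        To make this construal of an ISS concrete, the following sketch (in Python; all
class and field names are our own illustrative choices, not part of any existing
system) shows one way a session could be represented as a task-motivated sequence of
intention-labelled interactions:
      </p>
      <preformat>
from dataclasses import dataclass, field

@dataclass
class Interaction:
    """One iteration within the session: a query plus the searcher's
    activity on the system's response, labelled with the information
    seeking intention that motivated it."""
    query: str
    intention: str                       # e.g. "learn about the domain"
    selected_items: list = field(default_factory=list)

@dataclass
class InformationSeekingSession:
    """A whole ISS: a sequence of interactions motivated overall by
    some external task or goal."""
    motivating_task: str
    interactions: list = field(default_factory=list)

# Even the minimal single-query case fits, but the typical ISS has
# several intention-driven iterations:
iss = InformationSeekingSession(
    motivating_task="write a report on open data policies",
    interactions=[
        Interaction("open data policy", "learn about the domain"),
        Interaction("open data policy EU impact", "compare information objects"),
    ],
)
      </preformat>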
    </sec>
    <sec id="sec-4">
      <title>Implications of the ISS Model of IR for Personalization and its Evaluation</title>
      <p>
It has been suggested that the ISS model of IR implies that usefulness, rather than
relevance, is the appropriate criterion of evaluation of interactive IR [
        <xref ref-type="bibr" rid="ref11">3, 6, 11</xref>
        ]. This is
based on the idea that support for the ISS should be evaluated with respect to the
extent to which the entire ISS has been useful in helping the person to achieve the
motivating task/goal. But, since the ISS consists of a sequence of information seeking
intentions, evaluation must also be with respect to how useful the IR system has been
in supporting these various intentions themselves, and with the usefulness of the
support of those intentions to accomplishment of the motivating task/goal. Taking this
stance suggests further that adaptation of the IR system to both motivating task/goal,
and to the person’s various information seeking intentions, are necessary to
accomplish effective personalization of support in the ISS. This in turn suggests that, in
order to accomplish, and evaluate the effectiveness of, personalization, it is minimally
necessary for the IR system to obtain data which will provide knowledge of the
person’s motivating task/goal, the goal of the ISS as a whole, and the goals of the various
intentions. This leads to the question of just what data these might be, and how they
might be collected.
      </p>
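      <p>
        As an illustration of what such an evaluation could compute, the sketch below
(our own hedged formulation, not an established metric) aggregates usefulness at the
two levels the model requires: usefulness of the system's support for each intention,
and usefulness of that support toward the motivating task/goal:
      </p>
      <preformat>
def session_usefulness(segments):
    """Aggregate usefulness over an ISS.

    segments: list of dicts, one per intention-labelled segment, with
      'intention'          - the information seeking intention
      'support_usefulness' - judged usefulness of the system's support
                             for that intention, scaled to [0, 1]
      'task_contribution'  - judged usefulness of that support toward
                             the motivating task/goal, scaled to [0, 1]
    The simple means used here are an assumption; any aggregation
    could be substituted once suitable data exist."""
    per_intention = {}
    for seg in segments:
        per_intention.setdefault(seg["intention"], []).append(
            seg["support_usefulness"])
    per_intention = {i: sum(v) / len(v) for i, v in per_intention.items()}
    whole_session = sum(s["support_usefulness"] * s["task_contribution"]
                        for s in segments) / len(segments)
    return per_intention, whole_session
      </preformat>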
      <p>In order to address the question just posed, we first examine just what data have
already been collected in the course of the rather substantial record of research in the
evaluation of interactive IR, and how they have been collected. We analyze and compare
the data types collected in such studies in order to see if they can lead to the
understandings which we propose are necessary to evaluate support for personalization in
the manner we require. This is the subject of the next section of this paper. On the
basis of these results, we then propose a research agenda, which could lead to
methods for identification and collection of the data necessary, beyond those which have
already been identified, for proper evaluation of at least the aspect of personalization
which we have described.
</p>
    </sec>
    <sec id="sec-5">
      <title>Existing Data Sets for IR and Interactive IR Evaluation</title>
      <p>
        A number of data sets exist for IR and Interactive IR (IIR)
evaluation purposes. Table 1 gives an overview of a selection of existing data sets
with their properties. We divided the data sets roughly into three groups: (1)
evaluation campaign data sets, (2) real-world data sets, and (3) lab study data sets.
Evaluation campaign data sets normally involve a fixed corpus, given topics, queries
and relevance judgments for all or a subset of the result documents. The goal here is to
optimize the ranking function of the system based on given topics and the expected
relevant results. The focus of investigation has thereby broadened from the query level
(TREC Web Track [<xref ref-type="bibr" rid="ref7">7</xref>]), through the session (TREC Dynamic Domain Track [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ]) to the task
(TREC Tasks Track [
        <xref ref-type="bibr" rid="ref20">20</xref>
]). The TREC Session Track [<xref ref-type="bibr" rid="ref5">5</xref>], in contrast, combines a given
corpus, topics, queries and relevance judgments with retrieved results, click data and
dwell times from crowd workers conducting searches within a session. The
INEX Interactive Track [
        <xref ref-type="bibr" rid="ref14">14</xref>
] is similar; there, real users carry out different tasks. The
current PIR-CLEF initiative (<ext-link ext-link-type="uri" xlink:href="http://www.ir.disco.unimib.it/pirclef2017/description-of-the-laboratory/">http://www.ir.disco.unimib.it/pirclef2017/description-of-the-laboratory/</ext-link>) then moves the scope again to tasks, and adds some
personal information about the user, together with a number of weighted terms describing the user's
interests, based on documents of interest.
      </p>
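      <p>
        Schematically, a record in such a campaign data set carries the elements just
listed; the sketch below is illustrative only (the field names are ours, not any
track's official format):
      </p>
      <preformat>
# One session-track-style record: fixed corpus and topic, plus the
# interaction data (queries, clicks, dwell times) of a crowd worker.
# Field names are illustrative, not the official TREC format.
session_record = {
    "topic": "effects of open data policies",
    "queries": ["open data policy", "open data policy effects"],
    "retrieved": {"open data policy": ["doc12", "doc7", "doc31"]},
    "clicks": [
        {"query": "open data policy", "doc": "doc12", "dwell_sec": 48},
        {"query": "open data policy effects", "doc": "doc7", "dwell_sec": 9},
    ],
    "relevance": {"doc12": 2, "doc7": 0, "doc31": 1},   # qrel-style grades
}
      </preformat>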
      <p>
        A second group of evaluation data sets comes from real-world search engines. For
web search there exist, e.g., the Yandex Web Search click data [
        <xref ref-type="bibr" rid="ref17">17</xref>
], which include
queries, retrieved results, click data, and document dwell times extracted from
transaction logs. Document relevance is computed from dwell times on documents. For
discipline-specific search there is, e.g., the SUSS data set of retrieval sessions
performed in a social sciences academic search engine [
        <xref ref-type="bibr" rid="ref12">12</xref>
]. These log data sets normally
contain a high number of search sessions and a variety of users, which makes them
well suited to large-scale analysis.
      </p>
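      <p>
        A hedged sketch of how relevance can be derived from such logs follows; the
30-second threshold is a commonly used satisfied-click heuristic, not a value
prescribed by either data set:
      </p>
      <preformat>
def dwell_based_relevance(clicks, sat_threshold_sec=30):
    """Label each clicked document from its dwell time, as log-based
    data sets permit: a long dwell is taken as evidence of relevance.
    The threshold is an assumption and should be tuned per collection."""
    labels = {}
    for click in clicks:
        labels[click["doc"]] = 1 if click["dwell_sec"] >= sat_threshold_sec else 0
    return labels

clicks = [{"doc": "doc12", "dwell_sec": 48}, {"doc": "doc7", "dwell_sec": 9}]
print(dwell_based_relevance(clicks))   # {'doc12': 1, 'doc7': 0}
      </preformat>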
      <p>Evaluation data sets from lab studies examine how real users conduct a certain task
type with a given topic. On the data side, the aim is to log as much information as
possible in order to analyze user behavior from these data. This includes logging all
interaction with the system, including keyboard and mouse activity and, in addition,
eye movements. For subjective measures, subjects are interviewed before and after
the search session, e.g., to learn about the task's difficulty and success.
Additionally, to identify learning steps and decisions within the search session and
within certain query segments, a number of user interviews have to be conducted after the
search session. Given a task, the documents' usefulness needs to be assessed in
relation to the overall task or sub-task (not to the query). Another important issue is to
understand the role of each query segment for the overall task. A current line of investigation
is to ask the user for the intention of each individual query segment. The question can
be: was it to find new information, or to evaluate already-found information objects?</p>
      <p>In reviewing Table 1, we note several important differences among the data
sets, which are especially significant for evaluation of personalization. The most
obvious is that only one data set includes detailed information about the search session
at the query segment level (last row of Table 1). Since data at this level would be
crucial for evaluation of the aspect of personalization we discuss, it is clear that the
methods used in this, and similar studies, need to be considered for evaluation of
personalization. However, this type of study suffers from at least two other problems: a
small number of cases (users, tasks, topics); and, controlled, rather than real, tasks and
task types. A conclusion that one can draw from this comparison is that what is
required is some means for incorporating, in one general type of study, methods which
allow: the collection of (relatively) large numbers of cases, of real tasks, addressed
over whole search sessions, segmented and identified by information search
intentions. Developing a means for doing this, effectively defines a research agenda for the
design of studies which aim to evaluate personalization of support for IR. Below, we
provide some examples of the types of data that would need to be collected in such
studies, as contemplated for, e.g., PIR-CLEF 2018.</p>
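      <p>
        A prerequisite for all such examples is recovering the query segments
themselves from the raw interaction log; a minimal sketch follows (the event format
is an assumption for illustration, not that of any particular logger):
      </p>
      <preformat>
def split_into_query_segments(events):
    """Group a time-ordered interaction log into query segments: each
    'query' event opens a segment that collects all subsequent events
    (clicks, scrolls, bookmarks, eye-tracking fixations, ...) up to
    the next query. Intentions are left to be elicited post hoc."""
    segments = []
    for event in events:
        if event["type"] == "query":
            segments.append({"query": event["text"],
                             "events": [],
                             "intention": None})
        elif segments:
            segments[-1]["events"].append(event)
    return segments
      </preformat>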
      <p>As an example, we could extract various aspects of learning that take place
throughout the search. Specifically, we should try to understand how the searcher is
learning about the task and the domain as he/she retrieves and assesses information,
and how that learning affects his/her ongoing search activities. Some of the questions
to ask the searcher or an assessor for eliciting such information are:
      </p>
      <list list-type="bullet">
        <list-item><p>What has been learned about the domain?</p></list-item>
        <list-item><p>What has been learned from the content obtained?</p></list-item>
        <list-item><p>How useful is what has been learned for the sub-task?</p></list-item>
        <list-item><p>How well did the system support learning?</p></list-item>
      </list>
      <p>Another important aspect that we find useful to elicit from an IIR study is that of
evaluation. Specifically, we believe it is important to discover how searchers (not
external judges) evaluate an item for correctness and usefulness, and not just
relevance. Searchers are also often comparing several relevant/useful items and picking
the best and it would be interesting to know how they make such decisions. Some of
the questions that could be asked of the searcher or an assessor to gather such
information are:
      </p>
      <list list-type="bullet">
        <list-item><p>Which items were evaluated?</p></list-item>
        <list-item><p>What was evaluated?</p></list-item>
        <list-item><p>What were the criteria?</p></list-item>
        <list-item><p>How useful were the items for the sub-task?</p></list-item>
        <list-item><p>How well did the system support evaluating the items?</p></list-item>
      </list>
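      <p>
        Both sets of questions could be administered per query segment with a small
schema such as the following (entirely illustrative; the wording mirrors the
questions above, but the structure and names are ours):
      </p>
      <preformat>
# Per-segment elicitation schema for the two example intention types.
ELICITATION_QUESTIONS = {
    "learning": [
        "What has been learned about the domain?",
        "What has been learned from the content obtained?",
        "How useful is what has been learned for the sub-task?",
        "How well did the system support learning?",
    ],
    "evaluating": [
        "Which items were evaluated?",
        "What was evaluated?",
        "What were the criteria?",
        "How useful were the items for the sub-task?",
        "How well did the system support evaluating the items?",
    ],
}

def elicit(segment_id, intention, answers):
    """Attach a searcher's (or assessor's) answers to one query segment."""
    questions = ELICITATION_QUESTIONS[intention]
    return {"segment": segment_id,
            "intention": intention,
            "responses": dict(zip(questions, answers))}
      </preformat>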
      <p>
        These two types of intention of course do not cover all information seeking
intentions that could occur during an ISS (see, e.g. [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] or [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ] for more inclusive lists),
but, as examples, they indicate the nature of the data that would be required to
evaluate personalization with respect to any such intention. We hope that these examples are sufficient
to indicate at least some aspects of what would need to be covered in a research
agenda for specification of data types for evaluation of personalization of information
retrieval.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Conclusion</title>
      <p>We have proposed a view of IR that implies that personalization should be with
respect not only to context, but to the various information search intentions that people
have during the course of an information seeking session. We have identified some
types of data which we claim would be necessary in order to evaluate the
effectiveness of such personalization. We suggest that learning just what data are necessary,
and developing methods to gather such data, constitute the basis for a research agenda
central to the general task of evaluation of personalization of support for IR. This
could also be a starting point for considering the nature of the task for PIR-CLEF
2018.</p>
    </sec>
    <sec id="sec-3">
      <title>Acknowledgments</title>
      <p>This work was partly funded by Deutsche Forschungsgemeinschaft (DFG), grant no.
MA 3964/5-1; the AMUR project at GESIS; and by the National Science Foundation,
grant no. IIS-1423239.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <title>References</title>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <surname>Bates</surname>
            ,
            <given-names>M.J.:</given-names>
          </string-name>
          <article-title>The design of browsing and berrypicking techniques for the online search interface</article-title>
          .
          <source>Online Rev</source>
          .
          <volume>13</volume>
          ,
          <issue>5</issue>
          ,
          <fpage>407</fpage>
          -
          <lpage>424</lpage>
          (
          <year>1989</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <surname>Belkin</surname>
            ,
            <given-names>N.J.</given-names>
          </string-name>
          :
          <article-title>Intelligent information retrieval: Whose intelligence?</article-title>
          <source>In: ISI '96: Proceedings of the Fifth International Symposium for Information Science</source>
          . pp.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <surname>Belkin</surname>
            ,
            <given-names>N.J.</given-names>
          </string-name>
          :
          <article-title>On the evaluation of interactive information retrieval systems</article-title>
          . In: Larsen,
          <string-name>
            <surname>B.</surname>
          </string-name>
          et al. (eds.)
          <source>The Janus Faced Scholar. A Festschrift in Honour of Peter Ingwersen</source>
          . pp.
          <fpage>13</fpage>
          -
          <lpage>21</lpage>
          , Copenhagen: Royal School of Library and Information Science
          (
          <year>2010</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <surname>Broder</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>A Taxonomy of Web Search</article-title>
          .
          <source>SIGIR Forum</source>
          .
          <volume>36</volume>
          ,
          <issue>2</issue>
          ,
          <fpage>3</fpage>
          -
          <lpage>10</lpage>
          (
          <year>2002</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name>
            <surname>Carterette</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          et al.:
          <article-title>Overview of the TREC 2014 session track</article-title>
          .
          <source>Proceedings of TREC</source>
          <year>2014</year>
          , (
          <year>2014</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <string-name>
            <surname>Cole</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          et al.:
          <article-title>Usefulness as the criterion for evaluation of interactive information retrieval</article-title>
          .
          <source>In: Proceedings of the Workshop on Human-Computer Interaction and Information Retrieval</source>
          . pp.
          <fpage>1</fpage>
          -
          <lpage>4</lpage>
          (
          <year>2009</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <surname>Collins-Thompson</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          et al.:
          <article-title>TREC 2014 web track overview</article-title>
          .
          <source>Proceedings of TREC</source>
          <year>2014</year>
          , (
          <year>2014</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name>
            <surname>Cool</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Belkin</surname>
            ,
            <given-names>N.J.:</given-names>
          </string-name>
          <article-title>A Classification of Interactions with Information</article-title>
          . In: Bruce,
          <string-name>
            <surname>H.</surname>
          </string-name>
          et al. (eds.)
          <article-title>Emerging frameworks and methods</article-title>
          .
          <source>Proceedings of the Fourth International Confer-ence on Conceptions of Library and Information Science (CoLIS4)</source>
          . pp.
          <fpage>1</fpage>
          -
          <lpage>15</lpage>
          Libraries Unlimited, Greenwood Village, CO (
          <year>2004</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name>
            <surname>Fuhr</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          :
          <article-title>A Probability Ranking Principle for Interactive Information Retrieval</article-title>
          .
          <source>Inf Retr</source>
          .
          <volume>11</volume>
          ,
          <issue>3</issue>
          ,
          <fpage>251</fpage>
          -
          <lpage>265</lpage>
          (
          <year>2008</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name>
            <surname>Hagen</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          et al.:
          <article-title>How Writers Search: Analyzing the Search and Writing Logs of Non-fictional Essays</article-title>
          . In: Kelly,
          <string-name>
            <surname>D.</surname>
          </string-name>
          et al. (eds.)
          <source>Proceedings of the 1st ACM SIGIR Conference on Human Information Interaction and Retrieval (CHIIR 16)</source>
          . pp.
          <fpage>193</fpage>
          -
          <lpage>202</lpage>
          ACM (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <string-name>
            <surname>Hienert</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mutschke</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>A Usefulness-based Approach for Measuring the Local and Global Effect of IIR Services</article-title>
          .
          <source>In: Proceedings of the 2016 ACM on Conference on Human Information Interaction and Retrieval</source>
          . pp.
          <fpage>153</fpage>
          -
          <lpage>162</lpage>
          ACM, New York, NY, USA (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <string-name>
            <surname>Mayr</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kacem</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>A Complete Year of User Retrieval Sessions in a Social Sciences Academic Search Engine</article-title>
          .
          <source>In: 21st International Conference on Theory and Practice of Digital Libraries (TPDL 2017)</source>
          . (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <string-name>
            <surname>Mitsui</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          et al.:
          <article-title>Extracting Information Seeking Intentions for Web Search Sessions</article-title>
          .
          <source>In: Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval</source>
          . pp.
          <fpage>841</fpage>
          -
          <lpage>844</lpage>
          ACM, New York, NY, USA (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          <string-name>
            <surname>Pharo</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          et al.:
          <article-title>Overview of the INEX 2010 Interactive Track</article-title>
          . In:
          <source>Proceedings of the 9th International Conference on Initiative for the Evaluation of XML Retrieval: Comparative Evaluation of Focused Retrieval</source>
          . pp.
          <fpage>227</fpage>
          -
          <lpage>235</lpage>
          Springer-Verlag, Berlin, Heidelberg (
          <year>2011</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          <string-name>
            <surname>Rieh</surname>
            ,
            <given-names>S.Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xie</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>Analysis of Multiple Query Reformulations on the Web: The Interactive Information Retrieval Context</article-title>
          .
          <source>Inf Process Manage</source>
          .
          <volume>42</volume>
          ,
          <issue>3</issue>
          ,
          <fpage>751</fpage>
          -
          <lpage>768</lpage>
          (
          <year>2006</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          <string-name>
            <surname>Rose</surname>
            ,
            <given-names>D.E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Levinson</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Understanding User Goals in Web Search</article-title>
          .
          <source>In: Proceedings of the 13th International Conference on World Wide Web</source>
          . pp.
          <fpage>13</fpage>
          -
          <lpage>19</lpage>
          ACM, New York, NY, USA (
          <year>2004</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          <string-name>
            <surname>Serdyukov</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          et al.:
          <article-title>WSCD2013: Workshop on Web Search Click Data 2013</article-title>
          .
          <source>In: Proceedings of the Sixth ACM International Conference on Web Search and Data Mining</source>
          . pp.
          <fpage>787</fpage>
          -
          <lpage>788</lpage>
          ACM, New York, NY, USA (
          <year>2013</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          <string-name>
            <surname>Teevan</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          et al.:
          <article-title>Personalizing Search via Automated Analysis of Interests and Activities</article-title>
          .
          <source>In: Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval</source>
          . pp.
          <fpage>449</fpage>
          -
          <lpage>456</lpage>
          ACM, New York, NY, USA (
          <year>2005</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          <string-name>
            <surname>Teevan</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          et al.:
          <article-title>Potential for Personalization</article-title>
          .
          <source>ACM Trans Comput-Hum Interact</source>
          .
          <volume>17</volume>
          ,
          <issue>1</issue>
          ,
          <fpage>4:1</fpage>
          -
          <lpage>4:31</lpage>
          (
          <year>2010</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          <string-name>
            <surname>Verma</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          et al.:
          <article-title>Overview of the TREC Tasks Track 2016</article-title>
          .
          <source>In: Proceedings of TREC</source>
          <year>2016</year>
          .
          (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          <string-name>
            <surname>Xie</surname>
            ,
            <given-names>H.I.</given-names>
          </string-name>
          :
          <article-title>Shifts of Interactive Intentions and Information-seeking Strategies in Interactive Information Retrieval</article-title>
          .
          <source>J Am Soc Inf Sci</source>
          .
          <volume>51</volume>
          ,
          <issue>9</issue>
          ,
          <fpage>841</fpage>
          -
          <lpage>857</lpage>
          (
          <year>2000</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          <string-name>
            <surname>Xie</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          :
          <source>Interactive Information Retrieval in Digital Environments</source>
          . IGI Global, Hershey, PA, USA (
          <year>2008</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>G.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Soboroff</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          :
          <article-title>TREC 2016 Dynamic Domain Track Overview</article-title>
          .
          <source>In: Proceedings of TREC</source>
          <year>2016</year>
          .
          (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>