<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Overview of the SBS 2015 Interactive Track</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Maria Gade</string-name>
          <email>maria.gaede@ibi.hu-berlin.de</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mark Hall</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Hugo Huurdeman</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jaap Kamps</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marijn Koolen</string-name>
<email>marijn.koolen@uva.nl</email>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mette Skov</string-name>
          <email>skov@hum.aau.dk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Elaine Toms</string-name>
          <email>e.toms@sheffield.ac.uk</email>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>David Walsh</string-name>
<email>david.walsh@edgehill.ac.uk</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Aalborg University</institution>
          ,
          <country country="DK">Denmark</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Edge Hill University</institution>
          ,
          <country country="UK">United Kingdom</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Humboldt University Berlin</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>University of Amsterdam</institution>
          ,
          <country country="NL">Netherlands</country>
        </aff>
        <aff id="aff4">
          <label>4</label>
<institution>University of Sheffield</institution>
          ,
          <country country="UK">United Kingdom</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Users looking for books online are confronted with both professional metadata and user-generated content. The goal of the Interactive Social Book Search Track was to investigate how users used these two sources of information when looking for books in a leisure context. To this end, participants recruited by four teams performed two different tasks using one of two book-search interfaces. Additionally, one of the two interfaces was used to investigate whether user performance can be improved by a user interface that supports multiple search stages.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
<p>The goal of the Interactive Social Book Search (ISBS) task is to investigate how book searchers use professional metadata and user-generated content at different stages of the search process. The purpose of this task is to gauge user interaction and user experience in social book search by observing user activity with a large collection of rich book descriptions under controlled and simulated conditions, while letting as much "real-life" experience as possible intrude into the experimentation. The output is a rich data set that includes user profiles, selected individual differences (such as motivation to explore), a log of user interactivity, and a structured set of questions about the experience.</p>
<p>The Interactive Social Book Search (ISBS) Task started in 2014 as a merger of the INEX Social Book Search (SBS, [7-9]) track and the Interactive task of CHiC [12, 15]. The SBS Track started in 2011 and has focused on system-oriented evaluation of book search systems that use both professional metadata and user-generated content. Out of three years of SBS evaluation arose a need to understand how users interact with these different types of book descriptions and how systems could support users in expressing and adapting their information needs during the search process. The CHiC Interactive task focused on the interaction of users browsing and searching in the Europeana collection. One of its questions was what types of metadata searchers use to determine relevance and interest. However, the collection, use case and task were deemed not interesting and useful enough to users. The SBS track contributes a new document collection, use case and search tasks. The first year of ISBS therefore focused on switching to the SBS collection and use case, with as few other changes as possible.</p>
<p>The goal of the interactive book search task is to investigate how searchers interact with book search systems that offer different types of book metadata. The addition of opinionated descriptions and user-supplied tags allows users to search and select books with new criteria. User reviews may reveal information about plot, themes, characters, writing style, text density, comprehensiveness and other aspects that are not described by professional metadata. In particular, the focus is on complex goal-oriented tasks as well as non-goal-oriented tasks. For traditional tasks such as known-item search, there are effective search systems based on access points via formal metadata (i.e. book title, author name, publisher, year, etc.). But even here user reviews and tags may prove to have an important role. The long-term goal of the task is to investigate user behaviour through a range of user tasks and interfaces and to identify the role of different types of metadata for different stages in the book search process.</p>
      <p>For the Interactive task, the main research question is:
RQ How do searchers use professional metadata and user-generated content in
book search?</p>
<p>This can be broken down into a few more specific questions:
RQ1 How should the UI combine professional and user-generated information?
RQ2 Should the UI adapt itself as the user progresses through their search task,
and if so, how?</p>
<p>In this paper, we report on the setup and the results of the ISBS track 2015. Section 2 lists the participating teams. The experimental setup of the task is discussed in detail in Section 3 and the results in Section 4. We close in Section 5 with a summary and plans for 2016.</p>
    </sec>
    <sec id="sec-2">
      <title>Participating Teams</title>
      <p>In this section we provide information on the participating teams. In Table 1 we
show which institutes participated in this Track and the number of users that
took part in their experiments. Three users in our experiment selected Other
for the institute via which they were recruited, but did not fill in an institution name.</p>
    </sec>
    <sec id="sec-3">
      <title>Experimental Setup</title>
<p>In this section we first describe the background of the Social Book Search Lab and the role of the Interactive track, then describe the tasks, questionnaires, the system and user interfaces, participants, and the procedure for recruiting and informing participants.</p>
      <sec id="sec-3-1">
        <title>Social Book Search</title>
        <p>
The goal of the interactive Social Book Search (ISBS) track is to investigate how searchers make use of and appreciate professional metadata and user-generated content for book search on the Web, and to develop interfaces that support searchers through the various stages of their search task. The user has a specific information need against a background of personal tastes, interests and previously seen books. Through social media, book descriptions are extended far beyond what is traditionally stored in professional catalogues. Not only are books described in the users' own vocabulary, but they are also reviewed and discussed online, and added to online personal catalogues of individual readers. This additional information is subjective and personal, and opens up opportunities to aid users in searching for books in different ways that go beyond the traditional editorial-metadata-based search scenarios, such as known-item and subject search. For example, readers use many more aspects of books to help them decide which book to read next [13], such as how engaging, fun, educational or well-written a book is. In addition, readers leave a trail of rich information about themselves in the form of online profiles, which contain personal catalogues of the books they have read or want to read, personally assigned tags and ratings for those books, and social network connections to other readers. This results in a search task that may require a different model than pure search [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] or pure
recommendation.
        </p>
        <p>The ISBS track investigates book requests and suggestions from the
LibraryThing (LT) discussion forums as a way to model book search in a social
environment. The discussions in these forums show that readers frequently turn to
others to get recommendations and tap into the collective knowledge of a group
of readers interested in the same topic.</p>
        <p>
          The track builds on the INEX Amazon/LibraryThing (A/LT) collection [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ],
which contains 1.5 million book descriptions from Amazon, enriched with content from LT. This collection contains both professional metadata and user-generated content. (The collection is a subset of a larger collection of 2.8 million descriptions; the subset contains all book descriptions which have a cover image.)
        </p>
<p>The records contain title information as well as a Dewey Decimal Classification (DDC) code (for 61% of the books) and category and subject information supplied by Amazon. We note that for a sample of Amazon records the subject descriptors are noisy, with a number of inappropriately assigned descriptors that seem unrelated to the books. Each book is identified by an ISBN. Since different editions of the same work have different ISBNs, there can be multiple records for a single intellectual work. Each book record is an XML file with fields like isbn, title, author, publisher, dimensions, numberofpages and publicationdate. Curated metadata comes in the form of a Dewey Decimal Classification in the dewey field, Amazon subject headings in the subject field, and Amazon category labels in the browseNode fields. The social metadata from Amazon and LT is stored in the tag, rating, and review fields.</p>
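Given the field names above, a record can be read with standard XML tooling. The following minimal sketch parses a toy record; the element layout is an assumption based on the field names listed, not the collection's exact schema:

```python
import xml.etree.ElementTree as ET

# A toy record using the field names listed above; the real A/LT records
# are richer and their exact nesting is not reproduced here.
record_xml = """
<book>
  <isbn>0140449132</isbn>
  <title>The Odyssey</title>
  <author>Homer</author>
  <dewey>883</dewey>
  <tag>classics</tag>
  <tag>greek literature</tag>
  <rating>5</rating>
</book>
"""

root = ET.fromstring(record_xml)
book = {
    "isbn": root.findtext("isbn"),
    "title": root.findtext("title"),
    "dewey": root.findtext("dewey"),
    "tags": [t.text for t in root.findall("tag")],  # a book can have many tags
}
```

A record like this yields one index document per ISBN, which is why several index entries can describe the same intellectual work.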
      </sec>
      <sec id="sec-3-2">
        <title>User Tasks</title>
<p>This year, in addition to the two main user tasks, a training task was developed to ensure that participants were familiar with all the functions offered by the two interfaces. The queries and topics used in the training task were chosen so as not to overlap with the goal-oriented task. However, a potential influence on the non-goal task cannot be ruled out.</p>
<p>Similar to last year, two tasks were created to investigate the impact of different task types on the participants' interactions with the interfaces and also with the professional and user-generated book metadata. For both tasks, participants were asked to describe their motivation for particular book selections in the book-bag.</p>
<p>The goal-oriented task contains five sub-tasks, ensuring that participants spend enough time on finding relevant books. While the first sub-task defines a clear goal, the other sub-tasks are more open, giving the user enough room to interact with the available content and metadata options. The following instruction text was provided to participants:</p>
<p>Imagine you participate in an experiment at a desert island for one month. There will be no people, no TV, radio or other distraction. The only things you are allowed to take with you are 5 books. Please search for and add 5 books to your book-bag that you would want to read during your stay at the desert island:
- Select one book about surviving on a desert island
- Select one book that will teach you something new
- Select one book about one of your personal hobbies or interests
- Select one book that is highly recommended by other users (based on user ratings and reviews)
- Select one book for fun
Please add a note (in the book-bag) explaining why you selected each of the five books.</p>
        <p>
          The non-goal task was developed based on the open-ended task used in the
iCHiC task at CLEF 2013 [14] and the ISBS task at CLEF 2014 [
          <xref ref-type="bibr" rid="ref3">3</xref>
]. The aim of this task is to investigate how users interact with the system when they have no pre-defined goal, in a more exploratory search context. It also allows the participants to bring their own goals or sub-tasks to the experiment, in line with the "simulated work task" idea [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. The following instruction text was provided
to participants:
        </p>
<p>Imagine you are waiting to meet a friend in a coffee shop or pub or the airport or your office. While waiting, you come across this website and explore it looking for any book that you find interesting, or engaging or relevant. Explore anything you wish until you are completely and utterly bored. When you find something interesting, add it to the book-bag. Please add a note (in the book-bag) explaining why you selected each of the books.</p>
      </sec>
      <sec id="sec-3-3">
        <title>Experiment Structure</title>
        <p>
The experiment was conducted using the SPIRE system (based on the Experiment Support System, https://bitbucket.org/mhall/experiment-support-system) [
          <xref ref-type="bibr" rid="ref4">4</xref>
], using the flow shown in Figure 1. Each participant ran through the Pre-Task, Task, and Post-Task steps once for each of the two tasks. When a new participant started the experiment, the SPIRE system automatically allocated them to one of the two tested interfaces and to a given task order. Interface allocation and task order were automatically balanced to minimise bias in the resulting data.
        </p>
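The balanced allocation described above can be pictured as a round-robin assignment over the cross-product of interfaces and task orders. The sketch below is a hypothetical illustration; the SPIRE system's actual balancing algorithm is not specified in this paper:

```python
from itertools import cycle, product

# Experimental conditions: 2 interfaces x 2 task orders = 4 cells.
INTERFACES = ["baseline", "multi-stage"]
TASK_ORDERS = [("goal-oriented", "non-goal"), ("non-goal", "goal-oriented")]

# Cycling through all four conditions keeps the design balanced as
# participants arrive (a hypothetical sketch, not SPIRE's real code).
_conditions = cycle(product(INTERFACES, TASK_ORDERS))

def allocate_participant():
    """Return (interface, task_order) for the next participant."""
    return next(_conditions)

# After any multiple of 4 participants, every condition has been used equally.
counts = {}
for _ in range(8):
    cond = allocate_participant()
    counts[cond] = counts.get(cond, 0) + 1
```

With participants arriving one at a time, this guarantees the condition counts never differ by more than one.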
<p>Participant responses were collected in the following five steps using a selection of questionnaires:
- Consent: all participants had to confirm that they understood the tasks they would be asked to undertake and the types of data collected in the experiment. Participants also specified which institute had recruited them;
- Demographics: the following factors were acquired in order to characterise the participants: gender, age, achieved education level, current education level, and employment status;
- Culture: to quantify language and cultural influences, the following factors were collected: country of birth, country of residence, mother tongue, primary language spoken at home, and languages used to search the web;</p>
<p>- Post-Task: in the post-task questions, participants were asked to judge how useful each of the interface components and metadata parts that they had used in the task were, using 5-point Likert-like scales;
- Engagement: after participants had completed both tasks, they were asked to complete O'Brien et al.'s [11] engagement scale.</p>
      </sec>
      <sec id="sec-3-4">
        <title>System and Interfaces</title>
<p>The two tested interfaces (baseline and multi-stage) were both built using the PyIRE workbench (Python interactive Information Retrieval Evaluation workbench, https://bitbucket.org/mhall/pyire), which provides the required functionality for creating interactive IR interfaces and logging all interactions between the participants and the system. This includes any queries they enter, the books shown for the queries, pagination, facets selected, books viewed in detail, metadata facets viewed, books added to the book-bag, and books removed from the book-bag. All log-data is automatically timestamped and linked to the participant and task.</p>
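The interaction logging described above can be thought of as one timestamped record per event, linked to a participant and task. The following sketch shows a hypothetical record layout; PyIRE's actual log schema is not specified here:

```python
import json
import time

def log_event(participant_id, task, action, payload):
    """Build one timestamped log record linking an interaction event to the
    participant and task (hypothetical format; not PyIRE's real schema)."""
    return {
        "timestamp": time.time(),        # automatic timestamp
        "participant": participant_id,   # links the record to the participant
        "task": task,                    # "goal-oriented" or "non-goal"
        "action": action,                # e.g. "query", "add-to-bookbag"
        "payload": payload,              # e.g. the query string or a book ISBN
    }

record = log_event(42, "goal-oriented", "query",
                   {"keywords": "desert island survival"})
line = json.dumps(record)  # one JSON line per event in the log
```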
<p>Both interfaces used a shared IR backend implemented using ElasticSearch (http://www.elasticsearch.org/), which provided free-text search, faceted search, and access to the individual books' complete metadata. The 1.5 million book descriptions are indexed with all professional metadata and user-generated content. For indexing and retrieval the default parameters are used, which means stopwords are removed, but no stemming is performed. The Dewey Decimal Classification numbers are replaced by their natural language description. For instance, the DDC number 573 is replaced by the descriptor Physical anthropology. User tags from LibraryThing are indexed both as text strings, such that complex terms are broken down into individual terms (e.g. physical anthropology is indexed as physical and anthropology), and as non-analyzed terms, which leaves complex terms intact and is used for faceted search.</p>
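The dual indexing of tags corresponds to a multi-field mapping in ElasticSearch: an analyzed field for free-text search plus a non-analyzed sub-field for faceting. A minimal sketch in the ElasticSearch 1.x-era mapping syntax follows; the field names are illustrative, not taken from the track's actual configuration:

```python
# Sketch of a multi-field mapping for LibraryThing tags: the "tag" field is
# analyzed (split into individual terms) for free-text search, while
# "tag.raw" keeps the complex term intact for facet counts.
book_mapping = {
    "book": {
        "properties": {
            "title":  {"type": "string"},
            "review": {"type": "string"},
            "tag": {
                "type": "string",  # analyzed: "physical anthropology"
                                   # -> terms "physical", "anthropology"
                "fields": {
                    "raw": {
                        "type": "string",
                        "index": "not_analyzed",  # intact term for facets
                    }
                },
            },
        }
    }
}
```

Facet (terms-aggregation) queries would then run against `tag.raw`, while keyword queries match the analyzed `tag` field.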
<p>The baseline interface, shown in Figure 2, represents a standard faceted web-search interface, the only additions being the task information (top-left) and the list of past searches (top-right). The main interface consists of a search box at the top, two facets on the left, and the search results list (center). On the right-hand side is the book-bag, which shows the participants which books they have collected for their task and also provides the notes field, which the participants were instructed to use to explain why they had chosen that book.</p>
<p>The two facets provided on the left use the Amazon subject classification and the user tags to generate the two lists, together with numeric indicators for how many books each facet contained. Selecting a facet restricted the search results to books with that facet. Participants could select multiple facets from both lists.</p>
<p>In the search results list each book consisted of a thumbnail image, title, authors, aggregate user rating, a description, publication information (type, publisher, pages, year, ISBN, ...), user reviews, and user tags (where available). The aggregate user rating was displayed using 1 to 5 stars in half-star steps, calculated by aggregating the 1-5 star ratings for each user review. If the book had no user reviews, then no stars were shown. Additionally, each book had an "Add to Bookbag" button that participants used to add that book to their book-bag.</p>
<p>The multi-stage interface aims to support users by taking the different stages of the search process into account. The idea behind the multi-stage interface design is supported by two theoretical components.</p>
<p>Firstly, several information search process models look at stages in the search process. A well-known example is Kuhlthau [10], who discovered "common patterns in users' experience" during task performance. She developed a model consisting of six stages, which describe users' evolving thoughts, feelings and actions in the context of complex tasks. Vakkari [16] later summarized Kuhlthau's stages into three categories (pre-focus, focus formulation, and post-focus), and points to the types of information searched for in the different stages.</p>
<p>The multi-stage search interface constructed for iSBS was inspired by [16]. It includes three distinct panels, potentially supporting different stages: browse, in which users can explore categories of books; search, supporting in-depth searching; and book-bag, in which users can review and refine their book-bag selections.</p>
<p>Secondly, when designing a new search interface for social book search it has also been relevant to look more specifically at the process of choosing a book to read. A model of decision stages in book selection [13] identifies the following decision stages: browse category, selecting, judging, sampling, and sustained reading. This work supports the need for a user interface that takes the different search and decision stages into account. However, the different stages in [13] closely relate to a specific full-text digital library, and therefore the model was not applicable to the present collection.</p>
        <p>
When the multi-stage interface first loads, participants are shown the browse stage (Fig. 3), which is aimed at supporting the initial exploration of the dataset. The main feature to support the free exploration is the hierarchy browsing component on the left, which shows a hierarchical tree of Amazon subject classifications. This was generated using the algorithm described in [
          <xref ref-type="bibr" rid="ref5">5</xref>
], which uses the relative frequencies of the subjects to arrange them into a tree structure with the most frequent subjects at the top of the tree. The search result list is designed to be more compact to allow the user to browse books quickly, and shows only the book's title and aggregate rating (if available). Clicking on the book title showed a popup window with the book's full metadata, using the same layout and content as used in the baseline interface's search result list.
        </p>
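The frequency-based construction of the subject tree can be illustrated with a much-simplified sketch: each subject is attached beneath the most frequent subject it co-occurs with, so the most frequent subjects end up at the top. This is a hypothetical simplification for illustration, not a reproduction of the cited algorithm [5]:

```python
from collections import Counter, defaultdict

def build_subject_tree(book_subjects):
    """Arrange subject labels into a hierarchy by relative frequency.

    Each subject becomes a child of the most frequent subject it co-occurs
    with; subjects with no more-frequent co-occurring subject become roots.
    (A hypothetical simplification of the algorithm cited in the text.)
    """
    freq = Counter(s for subjects in book_subjects for s in subjects)
    cooc = defaultdict(Counter)
    for subjects in book_subjects:
        for s in subjects:
            for t in subjects:
                if s != t:
                    cooc[s][t] += 1
    tree = defaultdict(list)  # parent -> children
    roots = []
    for s in freq:
        # Candidate parents: co-occurring subjects that are more frequent.
        parents = [t for t in cooc[s] if freq[t] > freq[s]]
        if parents:
            parent = max(parents, key=lambda t: (cooc[s][t], freq[t]))
            tree[parent].append(s)
        else:
            roots.append(s)
    return roots, dict(tree)

books = [
    {"Fiction", "Mystery"},
    {"Fiction", "Mystery", "Noir"},
    {"Fiction", "Romance"},
]
roots, tree = build_subject_tree(books)  # "Fiction" is the single root here
```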
<p>Participants switched to the search stage by clicking on the "Search" section in the gray bar at the top. The search stage (Fig. 4) uses the same interface as the baseline, with only two differences. The first is that, as the book-bag is a separate stage, it is not shown on the search stage interface itself. The second is that if the participants select a topic in the browse stage, this topic is pre-selected as a filter for any queries, in the box to the left of the search box. Participants can click on that box to see a drop-down menu of the selected topic and its parent topics. Participants can select a higher-level topic to widen their search.</p>
<p>The final stage is the book-bag, shown in Figure 5, where participants review the books they have collected and can provide the notes for each book. For each book, four buttons were provided that allowed the user to search for similar books by title, author, topic, and user tags. The similar books are shown on the right using the same compact layout as in the browse stage. As in the browse stage, clicking on a book in that list shows a popup window with the book's details.</p>
      </sec>
      <sec id="sec-3-5">
        <title>Participants</title>
<p>Demographic information on the 192 participants is given in Table 2. Of these participants, 120 were female and 72 male. In terms of age, 72 were between 18 and 25, 80 between 26 and 35, 25 between 36 and 45, 8 between 46 and 55, 6 between 56 and 65, and 1 over 65. 60 were in employment, 3 unemployed, 128 were students, and 1 selected other. Participants came from 36 different countries (country of birth), including Germany (63 participants), the UK (33), Denmark (21), Norway (20), and the Netherlands (11), and were resident in 13 different countries, again mainly Germany, the UK, Denmark, Norway and the Netherlands. Participants' mother tongues included German, Dutch, English, Danish, Romanian, Farsi and Portuguese, plus 23 others. The majority of participants executed the tasks remotely (136); only 56 users conducted the experiment in a lab. 95 participants used the novel multi-stage interface, while 97 used the baseline interface.</p>
      </sec>
      <sec id="sec-3-6">
        <title>Procedure</title>
<p>Participants were invited by the individual teams, either using e-mail (Aalborg, Amsterdam, Edge Hill) or by recruiting students from a lecture or lab (Edge Hill, Humboldt). Where participants were invited by e-mail, the e-mail contained a link to the online experiment, which would open in the participant's browser. Where participants were recruited in a lecture or lab, the experiment URL was distributed using e-learning platforms. The experiment had been tested on Windows, OS X, and Linux using Internet Explorer, Chrome, Mozilla Firefox, and Safari. The only difference between browsers was that some of the graphical refinements, such as shadows, are not supported on Internet Explorer and fall back to a simpler line-based display.</p>
<p>After participants had completed the experiment as outlined above (Section 3.3), they were provided with additional information on the tasks they had completed and with contact information, should they wish to learn more about the experiment. Where participants completed the experiment in a lab, teams were able to conduct their own post-experiment process, which mostly focused on gathering additional feedback on the system from the participants.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Results</title>
      <p>Based on the participant responses and log data we have aggregated summary
statistics for a number of basic performance metrics.</p>
      <p>
Session length was measured automatically using JavaScript and stored with the participants' responses. Table 3 shows medians and inter-quartile ranges for all interface and task combinations. Session lengths are significantly lower for the baseline interface (Wilcoxon signed-rank, p &lt; 0.05). Also, all session lengths are significantly longer than in the iSBS 2014 experiment [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
Number of queries was extracted from the log-data. In both interfaces it was possible to issue queries by typing keywords into the search box or by clicking on a metadata field to search for other books with that metadata field value. Both types of query have been aggregated, and Table 4 shows the number of queries for each interface and task. The number of queries per session is significantly higher for the baseline interface than for the multi-stage interface for both tasks (Wilcoxon, p &lt; 0.05), and also for the goal-oriented over the non-goal task in both interfaces (Wilcoxon, p &lt; 0.01).
      </p>
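The paired Wilcoxon signed-rank comparisons reported above can be reproduced in a few lines. The sketch below implements the exact test in pure Python for a small paired sample; the numbers are invented for illustration (not the track's data), and a real analysis would use a statistics library such as scipy.stats.wilcoxon:

```python
from itertools import product

def wilcoxon_signed_rank(x, y):
    """Exact two-sided Wilcoxon signed-rank test for small paired samples.

    Assumes no zero differences and no tied absolute differences.
    A pure-Python sketch; real analyses would use scipy.stats.wilcoxon.
    """
    diffs = [a - b for a, b in zip(x, y)]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0] * len(diffs)
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank                      # rank 1 = smallest |difference|
    w_pos = sum(r for d, r in zip(diffs, ranks) if d > 0)
    n = len(diffs)
    total = n * (n + 1) // 2
    w_obs = min(w_pos, total - w_pos)
    # Exact null distribution: every subset of ranks is equally likely to be
    # the set of positive-difference ranks (2^n equally likely outcomes).
    sums = [sum(signs) for signs in product(*[(0, r) for r in range(1, n + 1)])]
    p_value = sum(1 for t in sums if min(t, total - t) <= w_obs) / len(sums)
    return w_pos, p_value

# Hypothetical per-participant session lengths (minutes) for the two tasks.
goal_oriented = [14.2, 11.8, 16.5, 13.0, 15.1, 12.4, 17.3, 14.8]
non_goal = [10.1, 9.5, 12.2, 11.4, 10.7, 9.9, 13.4, 11.2]
statistic, p_value = wilcoxon_signed_rank(goal_oriented, non_goal)
significant = p_value < 0.05
```

A paired, non-parametric test is appropriate here because session lengths are typically skewed and each participant completed both tasks.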
      <p>Number of books collected was extracted from the log-data. Participants
collected those books that they felt were of use to them. The numbers reported
in Table 5 are based on the number of books participants had in their book-bag
when they completed the session, not the total number of books collected over
the course of their session, as participants could always remove books from their
book-bag in the course of the session.</p>
<p>Unlike the other metrics, there is no significant difference between the two interfaces. On the goal-oriented task this was expected, as participants were asked to collect five books. On the non-goal task this indicates that the interface had no impact on what participants felt was enough to complete the task.</p>
    </sec>
    <sec id="sec-5">
      <title>Conclusions and Plans</title>
<p>This was the second year of the Interactive Social Book Search Track. Our goal was to investigate how users deal with professional metadata and user-generated content when searching for books, and how different types of interfaces support users in goal-oriented and non-goal-oriented tasks. The track makes use of a large collection of book descriptions from Amazon and LibraryThing, with a mixture of professional metadata in the form of subject descriptors and classification codes and user-generated content in the form of user reviews, tags and ratings. Because search processes often consist of multiple stages, we developed two interfaces to identify and analyse these different stages. One interface resembles traditional search interfaces familiar from Amazon and LibraryThing; the other is a multi-stage interface where the first part provides a broad overview of the collection, the second part allows the user to look at search results in a more detailed view, and the final part allows the user to directly compare selected books in great detail.</p>
<p>The first edition had a short data-gathering period and served as a pilot to develop and test the interfaces, especially the multi-stage interface. This year the track ran over the full annual cycle, with a data-gathering period of almost five months, resulting in a shared data pool of 192 participants from many different backgrounds. The tasks were re-designed to make users interact more and for longer periods of time during the experiments, with very satisfying results. This year, users spent significantly more time on both tasks, even though for the open task they could stop as soon as they wanted.</p>
<p>Users issued fewer queries with the multi-stage interface than with the baseline interface, probably because the multi-stage interface allows browsing as an extra mode of exploring the collection. The papers contributed by the participating teams contain more detailed analyses of the interactions and will focus on specific research questions and findings.</p>
<p>For next year, we plan to have multiple short experiments, more focused on specific research questions, with fewer users per experiment. Another option is to let individual teams plan their own experiments. Either way, the challenge will be to have enough commonality between the experiments so that the total data set provides a coherent insight into book search interactions in the different stages that users move through.</p>
<p>Acknowledgments. We would like to thank Preben Hansen, Monica Landoni, Birger Larsen, Vivien Petras and Robert Villa for their feedback and advice in setting up the track.</p>
<p>7. M. Koolen, G. Kazai, J. Kamps, A. Doucet, and M. Landoni. Overview of the INEX 2011 books and social search track. In S. Geva, J. Kamps, and R. Schenkel, editors, Focused Retrieval of Content and Structure: 10th International Workshop of the Initiative for the Evaluation of XML Retrieval (INEX 2011), volume 7424 of LNCS. Springer, 2012.
8. M. Koolen, G. Kazai, J. Kamps, M. Preminger, A. Doucet, and M. Landoni.</p>
      <p>Overview of the INEX 2012 social book search track. In S. Geva, J. Kamps,
and R. Schenkel, editors, Focused Access to Content, Structure and Context:
11th International Workshop of the Initiative for the Evaluation of XML
Retrieval (INEX'12), LNCS. Springer, 2013.
9. M. Koolen, G. Kazai, M. Preminger, and A. Doucet. Overview of the INEX
2013 social book search track. In CLEF 2013 Evaluation Labs and Workshop,
Online Working Notes, 2013.
10. C. C. Kuhlthau. Inside the search process: Information
seeking from the user's perspective. Journal of the American
Society for Information Science, 42(5):361{371, 1991. ISSN
10974571. doi: 10.1002/(SICI)1097-4571(199106)42:5h361::AID-ASI6i3.0.CO;
2-#. URL http://dx.doi.org/10.1002/(SICI)1097-4571(199106)42:
5&lt;361::AID-ASI6&gt;3.0.CO;2-#.
11. H. L. O'Brien and E. G. Toms. The development and evaluation of a survey
to measure user engagement. Journal of the American Society for
Information Science and Technology, 61(1):50{69, 2009.
12. V. Petras, T. Bogers, E. Toms, M. Hall, J. Savoy, P. Malak, A. Pawowski,
N. Ferro, and I. Masiero. Cultural heritage in clef (chic) 2013. In P. Forner,
H. Mller, R. Paredes, P. Rosso, and B. Stein, editors, Information Access
Evaluation. Multilinguality, Multimodality, and Visualization, volume 8138
of Lecture Notes in Computer Science, pages 192{211. Springer Berlin
Heidelberg, 2013. ISBN 978-3-642-40801-4. doi: 10.1007/978-3-642-40802-1 23.</p>
      <p>URL http://dx.doi.org/10.1007/978-3-642-40802-1_23.
13. K. Reuter. Assessing aesthetic relevance: Children's book selection in a
digital library. JASIST, 58(12):1745–1763, 2007.
14. E. Toms and M. M. Hall. The CHiC interactive task (CHiCi) at
CLEF2013.
http://www.clef-initiative.eu/documents/71612/1713e643-27c34d76-9a6f-926cdb1db0f4, 2013.
15. E. G. Toms and M. M. Hall. The CHiC Interactive Task (CHiCi) at
CLEF2013. In CLEF 2013 Evaluation Labs and Workshop, Online Working
Notes, 2013.
16. P. Vakkari. A theory of the task-based information retrieval process: a
summary and generalisation of a longitudinal study. Journal of documentation,
57(1):44–60, 2001.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>T.</given-names>
            <surname>Beckers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Fuhr</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Pharo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Nordlie</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K. N.</given-names>
            <surname>Fachry</surname>
          </string-name>
          .
          <article-title>Overview and results of the INEX 2009 interactive track</article-title>
          . In M. Lalmas,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Jose</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rauber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Sebastiani</surname>
          </string-name>
          , and I. Frommholz, editors,
          <source>ECDL</source>
          , volume
          <volume>6273</volume>
          of Lecture Notes in Computer Science, pages
          <fpage>409</fpage>
          –
          <lpage>412</lpage>
          . Springer,
          <year>2010</year>
          . ISBN 978-3-642-15463-8.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>P.</given-names>
            <surname>Borlund</surname>
          </string-name>
          and
          <string-name>
            <given-names>P.</given-names>
            <surname>Ingwersen</surname>
          </string-name>
          .
          <article-title>The development of a method for the evaluation of interactive information retrieval systems</article-title>
          .
          <source>Journal of documentation</source>
          ,
          <volume>53</volume>
          (
          <issue>3</issue>
          ):
          <fpage>225</fpage>
          –
          <lpage>250</lpage>
          ,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>M.</given-names>
            <surname>Hall</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Huurdeman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Koolen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Skov</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Walsh</surname>
          </string-name>
          .
          <article-title>Overview of the INEX 2014 interactive social book search track</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Hall</surname>
          </string-name>
          and
          <string-name>
            <given-names>E.</given-names>
            <surname>Toms</surname>
          </string-name>
          .
          <article-title>Building a common framework for IIR evaluation</article-title>
          . In
          <source>CLEF 2013 - Information Access Evaluation. Multilinguality, Multimodality, and Visualization</source>
          , pages
          <fpage>17</fpage>
          –
          <lpage>28</lpage>
          ,
          <year>2013</year>
          . doi: 10.1007/978-3-642-40802-1_3
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Hall</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Fernando</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Clough</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Soroa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Agirre</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Stevenson</surname>
          </string-name>
          .
          <article-title>Evaluating hierarchical organisation structures for exploring digital libraries</article-title>
          .
          <source>Information Retrieval</source>
          ,
          <volume>17</volume>
          (
          <issue>4</issue>
          ):
          <fpage>351</fpage>
          –
          <lpage>379</lpage>
          ,
          <year>2014</year>
          . ISSN 1386-4564. doi: 10.1007/s10791-014-9242-y. URL http://dx.doi.org/10.1007/s10791-014-9242-y.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>M.</given-names>
            <surname>Koolen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kamps</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.</given-names>
            <surname>Kazai</surname>
          </string-name>
          .
          <article-title>Social Book Search: The Impact of Professional and User-Generated Content on Book Suggestions</article-title>
          . In
          <source>Proceedings of the International Conference on Information and Knowledge Management (CIKM 2012)</source>
          . ACM,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>