    Overview of the SBS 2015 Interactive Track

Maria Gäde1 , Mark Hall2 , Hugo Huurdeman3 , Jaap Kamps3 , Marijn Koolen3 ,
                Mette Skov4 , Elaine Toms5 , and David Walsh2

        1 Humboldt University Berlin, Germany – maria.gaede@ibi.hu-berlin.de
        2 Edge Hill University, United Kingdom – {mark.hall,david.walsh}@edgehill.ac.uk
        3 University of Amsterdam, Netherlands – {h.c.huurdeman,kamps,marijn.koolen}@uva.nl
        4 Aalborg University, Denmark – skov@hum.aau.dk
        5 University of Sheffield, United Kingdom – e.toms@sheffield.ac.uk



       Abstract. Users looking for books online are confronted with both pro-
       fessional meta-data and user-generated content. The goal of the Interac-
       tive Social Book Search Track was to investigate how users use these
       two sources of information when looking for books in a leisure context.
       To this end, participants recruited by four teams performed two different
       tasks using one of two book-search interfaces. Additionally, one of the two
       interfaces was designed to investigate whether user performance can be
       improved by a user interface that supports multiple search stages.


1    Introduction

The goal of the Interactive Social Book Search (ISBS) task is to investigate how
book searchers use professional metadata and user-generated content at different
stages of the search process. The purpose of this task is to gauge user interaction
and user experience in social book search by observing user activity with a large
collection of rich book descriptions under controlled and simulated conditions,
aiming to let as much “real-life” experience as possible intrude into the experimentation.
The output is a rich data set that includes user profiles, selected
individual differences (such as motivation to explore), a log of user interactions,
and a structured set of questions about the experience.
    The Interactive Social Book Search (ISBS) task started in 2014 as a merger
of the INEX Social Book Search (SBS) track [7–9] and the Interactive task
of CHiC [12, 15]. The SBS track started in 2011 and has focused on system-
oriented evaluation of book search systems that use both professional metadata
and user-generated content. Out of three years of SBS evaluation arose a need to
understand how users interact with these different types of book descriptions and
how systems could support users in expressing and adapting their information needs
during the search process. The CHiC Interactive task focused on the interaction of
users browsing and searching the Europeana collection; one of its questions was
what types of metadata searchers use to determine relevance and interest.
However, that collection, use case and task were deemed not interesting and useful
enough to users. SBS contributes a new document collection, use case and search
tasks, so the first year of ISBS focused on switching to the SBS collection and
use case, with as few other changes as possible.
    The goal of the interactive book search task is to investigate how searchers
interact with book search systems that offer different types of book metadata.
The addition of opinionated descriptions and user-supplied tags allows users to
search and select books with new criteria. User reviews may reveal information
about plot, themes, characters, writing style, text density, comprehensiveness
and other aspects that are not described by professional metadata. In particu-
lar, the focus is on complex goal-oriented tasks as well as non-goal oriented tasks.
For traditional tasks such as known-item search, there are effective search sys-
tems based on access points via formal metadata (e.g. book title, author name,
publisher, year). But even here user reviews and tags may prove to have
an important role. The long-term goal of the task is to investigate user behaviour
through a range of user tasks and interfaces and to identify the role of different
types of metadata for different stages in the book search process.
    For the Interactive task, the main research question is:
RQ How do searchers use professional metadata and user-generated content in
  book search?
    This can be broken down into a few more specific questions:
RQ1 How should the UI combine professional and user-generated information?
RQ2 Should the UI adapt itself as the user progresses through their search task,
  and if so, how?
    In this paper, we report on the setup and the results of the ISBS track 2015.
Section 2 lists the participating teams. The experimental setup of the task is
discussed in detail in Section 3 and the results in Section 4. We close in Section 5
with a summary and plans for 2016.


2    Participating Teams
In this section we provide information on the participating teams. In Table 1 we
show which institutes participated in this Track and the number of users that
took part in their experiments. Three users in our experiment selected Other
for the institute via which they were recruited, but did not fill in an institution
name.


 Table 1. Overview of the participating teams and number of users per team

               Institute                              # users
               Aalborg University                     36
               University of Amsterdam                22
               Edge Hill University                   20
               Humboldt University                    67
               Manchester Metropolitan University     23
               Oslo & Akershus University College     20
               Stockholm University                   1
               Other                                  3
               Total                                  192


3    Experimental Setup

In this section we first describe the background of the Social Book Search Lab
and the role of the Interactive track, then describe the tasks, the questionnaires,
the system and user interfaces, the participants, and the procedure for recruiting
and informing participants.


3.1   Social Book Search

The goal of the interactive Social Book Search (ISBS) track is to investigate how
searchers make use of and appreciate professional metadata and user-generated
content for book search on the Web and to develop interfaces that support
searchers through the various stages of their search task. The user has a spe-
cific information need against a background of personal tastes, interests and
previously seen books. Through social media, book descriptions are extended
far beyond what is traditionally stored in professional catalogues. Not only are
books described in the users’ own vocabulary, but they are also reviewed and
discussed online, and added to online personal catalogues of individual readers.
This additional information is subjective and personal, and opens up opportu-
nities to aid users in searching for books in different ways that go beyond the
traditional editorial metadata based search scenarios, such as known-item and
subject search. For example, readers use many more aspects of books to help
them decide which book to read next [13], such as how engaging, fun, educational
or well-written a book is. In addition, readers leave a trail of rich information
about themselves in the form of online profiles, which contain personal catalogues
of the books they have read or want to read, personally assigned tags and ratings
for those books, and social network connections to other readers. This results in
a search task that may require a different model than pure search [6] or pure
recommendation.
     The ISBS track investigates book requests and suggestions from the Library-
Thing (LT) discussion forums as a way to model book search in a social envi-
ronment. The discussions in these forums show that readers frequently turn to
others to get recommendations and tap into the collective knowledge of a group
of readers interested in the same topic.
     The track builds on the INEX Amazon/LibraryThing (A/LT) collection [1],
which contains 1.5 million book descriptions from Amazon, enriched with content
from LT. This collection contains both professional metadata and user-generated
content.6
     The records contain title information as well as a Dewey Decimal Classifica-
tion (DDC) code (for 61% of the books) and category and subject information
supplied by Amazon. We note that for a sample of Amazon records the subject
descriptors are noisy, with a number of inappropriately assigned descriptors that
seem unrelated to the books. Each book is identified by an ISBN. Since different
editions of the same work have different ISBNs, there can be multiple records for
a single intellectual work. Each book record is an XML file with fields like isbn,
title, author, publisher, dimensions, numberofpages and publicationdate. Curated
metadata comes in the form of a Dewey Decimal Classification in the dewey
field, Amazon subject headings in the subject field, and Amazon category labels
in the browseNode fields. The social metadata from Amazon and LT is stored in
the tag, rating, and review fields.
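    As an illustration only, a record with these fields could be read along the
following lines; the element names follow the field list above, but the exact XML
nesting is an assumption rather than the collection's documented schema:

    import xml.etree.ElementTree as ET

    def parse_book_record(path):
        """Extract a few professional and social metadata fields from one record."""
        root = ET.parse(path).getroot()

        def text(tag):
            return (root.findtext(tag) or "").strip()

        return {
            "isbn": text("isbn"),
            "title": text("title"),
            "dewey": text("dewey"),                                   # curated DDC code
            "subjects": [e.text for e in root.findall(".//subject")],  # Amazon subjects
            "browse_nodes": [e.text for e in root.findall(".//browseNode")],
            "tags": [e.text for e in root.findall(".//tag")],          # user-generated
            "reviews": [e.text for e in root.findall(".//review")],    # user-generated
        }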

3.2     User Tasks
This year, in addition to the two main user tasks, a training task was developed
to ensure that participants are familiar with all the functions offered by the two
interfaces. The queries and topics used in the training task were chosen so as
not to overlap with the goal-oriented task. However, a potential influence on the
non-goal task cannot be ruled out.
    Similar to last year, two tasks were created to investigate the impact of
different task types on the participants’ interactions with the interfaces and with
the professional and user-generated book meta-data. For both tasks, participants
were asked to describe their motivation for particular book selections in the
book-bag.

The goal-oriented task contains five sub-tasks, ensuring that participants
spend enough time on finding relevant books. While the first sub-task defines
a clear goal, the other sub-tasks are more open, giving the user enough room
to interact with the available content and meta-data options. The following
instruction text was provided to participants:

      Imagine you participate in an experiment at a desert-island for one
      month. There will be no people, no TV, radio or other distraction. The
      only things you are allowed to take with you are 5 books. Please search
      for and add 5 books to your book-bag that you would want to read
      during your stay at the desert-island:
        – Select one book about surviving on a desert island
        – Select one book that will teach you something new
       – Select one book about one of your personal hobbies or interests
       – Select one book that is highly recommended by other users (based
          on user ratings and reviews)
       – Select one book for fun
      Please add a note (in the book-bag) explaining why you selected each of
      the five books.
6
    This collection is a subset of a larger collection of 2.8 million descriptions. The
    subset contains all book descriptions which have a cover image.


The non-goal task was developed based on the open-ended task used in the
iCHiC task at CLEF 2013 [14] and the ISBS task at CLEF 2014 [3]. The aim
of this task is to investigate how users interact with the system when they have
no pre-defined goal in a more exploratory search context. It also allows the
participants to bring their own goals or sub-tasks to the experiment in line with
the “simulated work task” idea [2]. The following instruction text was provided
to participants:

      Imagine you are waiting to meet a friend in a coffee shop or pub or the
      airport or your office. While waiting, you come across this website and
      explore it looking for any book that you find interesting, or engaging
      or relevant. Explore anything you wish until you are completely and
      utterly bored. When you find something interesting, add it to the book-
      bag. Please add a note (in the book-bag) explaining why you selected
      each of the books.


3.3     Experiment Structure

The experiment was conducted using the SPIRE system7 [4], following the flow
shown in Figure 1. Each participant ran through the Pre-Task, Task, and Post-Task
steps once for each of the two tasks. When a new participant started the exper-
iment, the SPIRE system automatically allocated them to one of the two tested
interfaces and to a given task order. Interface allocation and task order were
automatically balanced to minimise bias in the resulting data.
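    The allocation logic is internal to SPIRE; as a hedged sketch, a simple rotation
over the four interface/task-order conditions would produce the kind of balance
described (the condition labels and counter below are illustrative assumptions):

    from itertools import product

    # Four conditions: 2 interfaces x 2 task orders (illustrative labels).
    CONDITIONS = list(product(
        ["baseline", "multi-stage"],
        [("goal-oriented", "non-goal"), ("non-goal", "goal-oriented")],
    ))

    def allocate(participant_index):
        """Rotate through the conditions so interfaces and task orders stay balanced."""
        interface, task_order = CONDITIONS[participant_index % len(CONDITIONS)]
        return {"interface": interface, "task_order": task_order}

    # Example: the first four participants cover all four conditions once.
    for i in range(4):
        print(i, allocate(i))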
    Participant responses were collected in the following five steps using a selec-
tion of questionnaires:

 – Consent – all participants had to confirm that they understood the tasks
   they would be asked to undertake and the types of data collected in the
   experiment. Participants also specified which institute had recruited them;
 – Demographics – the following factors were acquired in order to characterise
   the participants: gender, age, achieved education level, current education
   level, and employment status;
 – Culture – to quantify language and cultural influences, the following fac-
   tors were collected: country of birth, country of residence, mother tongue,
   primary language spoken at home, languages used to search the web;
 – Post-Task – in the post-task questions, participants were asked to judge how
   useful each of the interface components and meta-data parts that they had
   used in the task were, using 5-point Likert-like scales;
 – Engagement – after participants had completed both tasks, they were asked
   to complete O’Brien et al.’s [11] engagement scale.
7
    Based on the Experiment Support System – https://bitbucket.org/mhall/
    experiment-support-system

Fig. 1. The path participants took through the experiment. Each participant
completed the Pre-Task, Task, and Post-Task steps twice (once for each of the tasks).
The SPIRE system automatically balanced the task order. No data was acquired in
the Introduction, Pre-Task, and Thank you steps.


3.4   System and Interfaces

The two tested interfaces (baseline and multi-stage) were both built using the
PyIRE8 workbench, which provides the required functionality for creating inter-
active IR interfaces and logging all interactions between the participants and the
system. This includes any queries they enter, the books shown for the queries,
pagination, facets selected, books viewed in detail, metadata facets viewed, books
added to the book-bag, and books removed from the book-bag. All log-data is
automatically timestamped and linked to the participant and task.
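    A hedged sketch of what one such timestamped log entry might look like (the
field names are illustrative and not PyIRE’s actual log schema):

    import json
    from datetime import datetime, timezone

    def log_event(participant_id, task, action, payload):
        """Write one timestamped interaction event, linked to participant and task."""
        event = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "participant": participant_id,
            "task": task,              # e.g. "goal-oriented" or "non-goal"
            "action": action,          # e.g. "query", "view_book", "add_to_bookbag"
            "payload": payload,        # e.g. the query text or the book identifier
        }
        print(json.dumps(event))

    log_event(42, "goal-oriented", "query", {"query": "desert island survival"})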
    Both interfaces used a shared IR backend implemented using ElasticSearch9 ,
which provided free-text search, faceted search, and access to the individual
books’ complete metadata. The 1.5 million book descriptions are indexed with
all professional metadata and user-generated content. For indexing and retrieval
the default parameters are used, which means stopwords are removed, but no
stemming is performed. The Dewey Decimal Classification numbers are replaced
by their natural language description. For instance, the DDC number 573 is
replaced by the descriptor Physical anthropology. User tags from LibraryThing
are indexed both as text strings, such that complex terms are broken down into
individual terms (e.g. physical anthropology is indexed as physical and anthropol-
ogy) and as non-analyzed terms, which leaves complex terms intact and is used
for faceted search.
8
  Python interactive Information Retrieval Evaluation workbench – https://
  bitbucket.org/mhall/pyire
9
  ElasticSearch – http://www.elasticsearch.org/
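    To make the indexing scheme above concrete, the following minimal sketch shows
how tags could be indexed both as analysed text and as an untouched keyword
sub-field, and how a DDC number could be replaced by its description. It assumes
a recent elasticsearch-py client and uses illustrative index and field names; it is
not the track’s actual configuration, which relied on ElasticSearch’s defaults.

    from elasticsearch import Elasticsearch

    # Illustrative mapping: tags are searchable as analysed text and usable as an
    # untouched keyword sub-field for faceting; field names are assumptions.
    MAPPINGS = {
        "properties": {
            "title":  {"type": "text"},
            "review": {"type": "text"},
            "dewey":  {"type": "text"},   # stores the DDC description, not the number
            "tag":    {"type": "text",
                       "fields": {"raw": {"type": "keyword"}}},
        }
    }

    DDC_LABELS = {"573": "Physical anthropology"}   # tiny excerpt for illustration

    def index_book(es, book):
        """Replace the DDC number by its description and index the record."""
        book = dict(book)
        book["dewey"] = DDC_LABELS.get(book.get("dewey", ""), book.get("dewey", ""))
        es.index(index="isbs-books", document=book)

    es = Elasticsearch("http://localhost:9200")
    es.indices.create(index="isbs-books", mappings=MAPPINGS)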




                    Fig. 2. Baseline interface – results view.



The baseline interface shown in figure 2 represents a standard faceted web-
search interface, the only additions being the task information (top-left) and the
list of past searches (top-right). The main interface consists of a search box at
the top, two facets on the left, and the search results list (center). On the right-
hand side is the book-bag, which shows the participants which books they have
collected for their task and also provides the notes field, which the participants
were instructed to use to explain why they had chosen that book.
   The two facets provided on the left use the Amazon subject classification
and the user tags to generate the two lists together with numeric indicators for
how many books each facet contained. Selecting a facet restricted the search
results to books with that facet. Participants could select multiple facets from
both lists.
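    A hedged sketch of how such a facet restriction, combined with a free-text query,
could be expressed against the mapping sketched earlier (the field names remain
assumptions):

    # Free-text query restricted to books carrying all selected facet values; facets
    # use the non-analysed tag sub-field, and the aggregation yields facet counts.
    def faceted_query(query_text, selected_tags):
        return {
            "query": {
                "bool": {
                    "must": {"multi_match": {"query": query_text,
                                             "fields": ["title", "review", "tag"]}},
                    "filter": [{"term": {"tag.raw": tag}} for tag in selected_tags],
                }
            },
            "aggs": {"tag_counts": {"terms": {"field": "tag.raw", "size": 20}}},
        }

    print(faceted_query("desert island survival", ["adventure", "classic"]))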
    In the search results list each book consisted of a thumbnail image, title, au-
thors, aggregate user rating, a description, publication information (type, pub-
lisher, pages, year, ISBN ...), user reviews, and user tags (where available). The
aggregate user rating was displayed using 1 to 5 stars in half-star steps, calcu-
lated by aggregating the 1-5 star ratings for each user review. If the book had
no user reviews, then no stars were shown. Additionally, each book had an “Add
to Bookbag” button that participants used to add that book to their bookbag.
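    As a small illustration, the half-star aggregation could be computed as follows;
the exact rounding rule used by the interface is not documented here, so this is
an assumption:

    def aggregate_rating(review_stars):
        """Average the 1-5 star review ratings and round to the nearest half star."""
        if not review_stars:          # no reviews: no stars are shown
            return None
        mean = sum(review_stars) / len(review_stars)
        return round(mean * 2) / 2

    print(aggregate_rating([5, 4, 4, 3]))   # -> 4.0
    print(aggregate_rating([]))             # -> None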
The multi-stage interface aims to support users by taking the different stages
of the search process into account. The idea behind the multi-stage interface
design is supported by two theoretical components.
    Firstly, several information search process models look at stages in the search
process. A well-known example is Kuhlthau [10], who discovered “common pat-
terns in users’ experience” during task performance. She developed a model
consisting of six stages, which describe users’ evolving thoughts, feelings and ac-
tions in the context of complex tasks. Vakkari [16] later summarized Kuhlthau’s
stages into three categories (pre-focus, focus formulation, and post-focus), and
points to the types of information searched for in the different stages.
    The multi-stage search interface constructed for iSBS was inspired by [16]. It
includes three distinct panels, potentially supporting different stages: browse, in
which users can explore categories of books, search, supporting in-depth search-
ing, and book-bag, in which users can review and refine their book-bag selections.
    Secondly, when designing a new search interface for social book search it has
also been relevant to look more specifically at the process of choosing a book
to read. A model of decision stages in book selection [13] identifies the follow-
ing decision stages: browse category, selecting, judging, sampling, and sustained
reading. This work supports the need for a user interface that takes the different
search and decision stages into account. However, the different stages in [13]
closely relate to a specific full text digital library, and therefore the model was
not applicable to the present collection.




                  Fig. 3. Multistage interface – Browse view.


     When the multi-stage interface first loads, participants are shown the browse
stage (fig. 3), which is aimed at supporting the initial exploration of the data-
set. The main feature to support the free exploration is the hierarchy browsing
component on the left, which shows a hierarchical tree of Amazon subject clas-
sifications. This was generated using the algorithm described in [5], which uses
the relative frequencies of the subjects to arrange them into the tree-structure
with the most-frequent subjects at the top of the tree. The search result list
is designed to be more compact to allow the user to browse books quickly and
shows only the book’s title and aggregate ratings (if available). Clicking on the
book title showed a popup window with the book’s full meta-data using the
same layout and content as used in the baseline interface’s search result list.
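    The hierarchy itself was generated with the algorithm described in [5]; the
following toy sketch only illustrates the underlying idea of ordering subject labels
by frequency and is not a reimplementation of that algorithm:

    from collections import Counter

    def frequency_ordered_subjects(books):
        """Count Amazon subject labels and order them most-frequent first."""
        counts = Counter(subject for book in books
                         for subject in book.get("subjects", []))
        return counts.most_common()

    books = [
        {"subjects": ["Fiction", "Adventure"]},
        {"subjects": ["Fiction", "Classics"]},
        {"subjects": ["Fiction"]},
    ]
    print(frequency_ordered_subjects(books))
    # [('Fiction', 3), ('Adventure', 1), ('Classics', 1)]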




                   Fig. 4. Multistage interface – Search view.



     Participants switched to the search stage by clicking on the “Search” section
in the gray bar at the top. The search stage (fig. 4) uses the same interface as the
baseline with only two differences. The first is that, as the book-bag is a separate
stage, it is not shown on the search stage interface itself. The second is that if
the participants select a topic in the browse stage, this topic is pre-selected as a
filter for any queries and shown in the box to the left of the search box. Participants
can click on that box to see a drop-down menu of the selected topic and its
parent topics. Participants can select a higher-level topic to widen their search.




                  Fig. 5. Multistage interface – Book-bag view.
                  Table 2. Demographics of the participants

                 Gender               #        Location       #
                 Female             120        Lab            56
                 Male                72        Remote        136
                 Country of Birth #            Age            #
                 Germany             63        18–25          72
                 UK                  33        26–35          80
                 Denmark             21        36–45          25
                 Norway              20        46–55           8
                 Netherlands         11        56–65           6
                 Other               44        66–             1



    The final stage is the book-bag shown in Figure 5, where participants review
the books they have collected and can provide the notes for each book. For each
book, four buttons were provided that allowed the user to search for similar
books by title, author, topic, and user tags. The similar books are shown on the
right using the same compact layout as in the browse stage. As in the browse
stage, clicking on a book in that list shows a popup window with the book’s
details.


3.5   Participants

Demographic information on the 192 participants is given in Table 2. Of these
participants, 120 were female and 72 male. In terms of age, 72 were between 18
and 25, 80 between 26 and 35, 25 between 36 and 45, 8 between 46 and 55, 6
between 56 and 65 and 1 over 65. 60 were in employment, 3 unemployed, 128
were students and 1 selected other. Participants came from 36 different countries
(country of birth), including Germany (63 participants), the UK (33), Denmark
(21), Norway (20) and the Netherlands (11), and were resident in 13 different
countries, again mainly Germany, the UK, Denmark, Norway and the Netherlands.
Participants’ mother tongues included German, Dutch, English, Danish, Romanian,
Farsi and Portuguese, along with 23 others. The majority of participants executed
the tasks remotely (136); only 56 conducted the experiment in a lab. 95 participants
used the novel multi-stage interface, while 97 used the baseline interface.


3.6   Procedure

Participants were invited by the individual teams, either using e-mail (Aalborg,
Amsterdam, Edge Hill) or by recruiting students from a lecture or lab (Edge
Hill, Humboldt). Where participants were invited by e-mail, the e-mail con-
tained a link to the online experiment, which would open in the participant’s
browser. Where participants were recruited in a lecture or lab, the experiment
URL was distributed using e-learning platforms. The experiment had been tested
on Windows, OS X, and Linux using Internet Explorer, Chrome, Mozilla Firefox,
and Safari. The only difference between browsers was
that some of the graphical refinements such as shadows are not supported on
Internet Explorer and fall back to a simpler line-based display.
    After participants had completed the experiment as outlined above (3.3),
they were provided with additional information on the tasks they had com-
pleted and with contact information, should they wish to learn more about the
experiment. Where participants completed the experiment in a lab, teams
were able to conduct their own post-experiment process, which mostly focused
on gathering additional feedback on the system from the participants.


4   Results
Based on the participant responses and log data we have aggregated summary
statistics for a number of basic performance metrics.

Session length was measured automatically using JavaScript and stored with
the participants’ responses. Table 3 shows median and inter-quartile ranges for
all interface and task combinations. Session lengths are significantly shorter for
the baseline interface (Wilcoxon signed-rank test, p < 0.05). All session lengths
are also significantly longer than in the iSBS 2014 experiment [3].


Table 3. Session lengths for the two interfaces and tasks. Times are in min-
utes:seconds and are reported as median (inter-quartile range).

               Interface     Goal-oriented          Non-goal
               Baseline    10:30min (10:25min) 5:33min (7:37min)
               Multi-Stage 12:52min (9:20min) 7:18min (10:52min)
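    The reported statistics can be reproduced from raw session times along the
following lines; the numbers below are illustrative only, not the study data, and
the paired Wilcoxon test shown compares the two tasks for the same participants:

    import numpy as np
    from scipy import stats

    def median_iqr(seconds):
        """Median and inter-quartile range of session lengths, in seconds."""
        q1, med, q3 = np.percentile(seconds, [25, 50, 75])
        return med, q3 - q1

    # Illustrative session lengths (seconds) for one group of participants.
    goal     = [630, 615, 700, 540, 820]   # goal-oriented task
    non_goal = [333, 290, 410, 260, 505]   # non-goal task, same participants

    print(median_iqr(goal), median_iqr(non_goal))
    print(stats.wilcoxon(goal, non_goal))   # paired comparison of the two tasks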




Number of queries was extracted from the log-data. In both interfaces it was
possible to issue queries by typing keywords into the search box or by clicking on
a meta-data field to search for other books with that meta-data field value. Both
types of query have been aggregated and Table 4 shows the number of queries for
each interface and task. The number of queries per session is significantly higher
for the baseline interface than for the multi-stage interface for both tasks (Wilcoxon,
p < 0.05) and also for the goal-oriented task than for the non-goal task in both
interfaces (Wilcoxon, p < 0.01).

Number of books collected was extracted from the log-data. Participants
collected those books that they felt were of use to them. The numbers reported
Table 4. Number of queries executed. Numbers are reported as median (inter-
quartile range).

                      Interface Goal-oriented Non-goal
                      Baseline        8 (5)         2 (3)
                      Multi-Stage     6 (6.5)       1 (3)



in Table 5 are based on the number of books participants had in their book-bag
when they completed the session, not the total number of books collected over
the course of the session, as participants could remove books from their book-bag
at any time.


Table 5. Number of books collected. Numbers are reported as median (inter-
quartile range).

                      Interface Goal-oriented Non-goal
                      Baseline        5 (0)         3 (3)
                      Multi-Stage     5 (0)         3 (3)



    Unlike the other metrics, there is no significant difference between the two
interfaces. On the goal-oriented task this was expected as participants were asked
to collect five books. On the non-goal task this indicates that the interface had
no impact on what participants felt was enough to complete the task.


5   Conclusions and Plans
This was the second year of the Interactive Social Book Search Track. Our goal
was to investigate how users deal with professional metadata and user-generated
content when searching for books and how different types of interfaces support
users in goal-oriented and non-goal-oriented tasks. The track makes use of a large
collection of book descriptions from Amazon and LibraryThing with a mixture of
professional metadata in the form of subject descriptors and classification codes
and user-generated content in the form of user reviews, tags and ratings. Because
search processes often consist of multiple stages, we developed two interfaces to
identify and analyse these different stages. One interface resembles traditional
search interfaces familiar from Amazon and LibraryThing, the other is a multi-
stage interface where the first part provides a broad overview of the collection,
the second part allows the user to look at search results in a more detailed view
and the final part allows the user to directly compare selected books in great
detail.
    The first edition had a short data gathering period and served as a pilot to
develop and test the interfaces, especially the multistage interface. This year the
track ran over the full annual cycle, with a data gathering period of almost five
months, resulting in a shared data pool of 192 participants from many different
backgrounds. The tasks have been re-designed to make users interact more and
for longer periods of time during the experiments, with very satisfying results.
This year, users spent significantly more time on both tasks, even though for the
open task they could stop whenever they wanted.
    Users issued fewer queries with the multistage interface than with the baseline
interface, probably because the multistage interface allows browsing as an extra
mode of exploring the collection. The papers contributed by the participating
teams contain more detailed analyses of the interactions and will focus on specific
research questions and findings.
    For the next year, we plan to have multiple, short experiments, more focused
on specific research questions, with fewer users per experiment. Another option
is to let individual teams plan their own experiments. Either way, the challenge
will be to have enough commonality between the experiments so that the total
data set provides a coherent insight into book search interactions in the different
stages that users move through.

Acknowledgments We would like to thank Preben Hansen, Monica Landoni,
Birger Larsen, Vivien Petras and Robert Villa for their feedback and advice in
setting up the track.

Bibliography
 1. T. Beckers, N. Fuhr, N. Pharo, R. Nordlie, and K. N. Fachry. Overview
    and results of the INEX 2009 interactive track. In M. Lalmas, J. M. Jose,
    A. Rauber, F. Sebastiani, and I. Frommholz, editors, ECDL, volume 6273
    of Lecture Notes in Computer Science, pages 409–412. Springer, 2010. ISBN
    978-3-642-15463-8.
 2. P. Borlund and P. Ingwersen. The development of a method for the evalua-
    tion of interactive information retrieval systems. Journal of documentation,
    53(3):225–250, 1997.
 3. M. Hall, H. Huurdeman, M. Koolen, M. Skov, and D. Walsh. Overview of
    the INEX 2014 interactive social book search track. In CLEF 2014 Evaluation
    Labs and Workshop, Online Working Notes, 2014.
 4. M. M. Hall and E. Toms. Building a common framework for IIR evaluation. In
    CLEF 2013 - Information Access Evaluation. Multilinguality, Multimodality,
    and Visualization, pages 17–28, 2013. doi: 10.1007/978-3-642-40802-1_3.
 5. M. M. Hall, S. Fernando, P. Clough, A. Soroa, E. Agirre, and M. Steven-
    son. Evaluating hierarchical organisation structures for exploring digi-
    tal libraries. Information Retrieval, 17(4):351–379, 2014. ISSN 1386-
    4564. doi: 10.1007/s10791-014-9242-y. URL http://dx.doi.org/10.1007/
    s10791-014-9242-y.
 6. M. Koolen, J. Kamps, and G. Kazai. Social Book Search: The Impact of Pro-
    fessional and User-Generated Content on Book Suggestions. In Proceedings
    of the International Conference on Information and Knowledge Management
    (CIKM 2012). ACM, 2012.
 7. M. Koolen, G. Kazai, J. Kamps, A. Doucet, and M. Landoni. Overview
    of the INEX 2011 books and social search track. In S. Geva, J. Kamps,
    and R. Schenkel, editors, Focused Retrieval of Content and Structure: 10th
    International Workshop of the Initiative for the Evaluation of XML Retrieval
    (INEX 2011), volume 7424 of LNCS. Springer, 2012.
 8. M. Koolen, G. Kazai, J. Kamps, M. Preminger, A. Doucet, and M. Landoni.
    Overview of the INEX 2012 social book search track. In S. Geva, J. Kamps,
    and R. Schenkel, editors, Focused Access to Content, Structure and Context:
    11th International Workshop of the Initiative for the Evaluation of XML
    Retrieval (INEX’12), LNCS. Springer, 2013.
 9. M. Koolen, G. Kazai, M. Preminger, and A. Doucet. Overview of the INEX
    2013 social book search track. In CLEF 2013 Evaluation Labs and Workshop,
    Online Working Notes, 2013.
10. C. C. Kuhlthau. Inside the search process: Information seeking from the
    user’s perspective. Journal of the American Society for Information Science,
    42(5):361–371, 1991. ISSN 1097-4571. doi: 10.1002/(SICI)1097-4571(199106)
    42:5<361::AID-ASI6>3.0.CO;2-#. URL http://dx.doi.org/10.1002/(SICI)
    1097-4571(199106)42:5<361::AID-ASI6>3.0.CO;2-#.
11. H. L. O’Brien and E. G. Toms. The development and evaluation of a survey
    to measure user engagement. Journal of the American Society for Informa-
    tion Science and Technology, 61(1):50–69, 2009.
12. V. Petras, T. Bogers, E. Toms, M. Hall, J. Savoy, P. Malak, A. Pawłowski,
    N. Ferro, and I. Masiero. Cultural heritage in CLEF (CHiC) 2013. In P. Forner,
    H. Müller, R. Paredes, P. Rosso, and B. Stein, editors, Information Access
    Evaluation. Multilinguality, Multimodality, and Visualization, volume 8138
    of Lecture Notes in Computer Science, pages 192–211. Springer Berlin Hei-
    delberg, 2013. ISBN 978-3-642-40801-4. doi: 10.1007/978-3-642-40802-1_23.
    URL http://dx.doi.org/10.1007/978-3-642-40802-1_23.
13. K. Reuter. Assessing aesthetic relevance: Children’s book selection in a
    digital library. JASIST, 58(12):1745–1763, 2007.
14. E. Toms and M. M. Hall. The CHiC interactive task (CHiCi) at CLEF 2013.
    http://www.clef-initiative.eu/documents/71612/1713e643-27c3-
    4d76-9a6f-926cdb1db0f4, 2013.
15. E. G. Toms and M. M. Hall. The CHiC Interactive Task (CHiCi) at
    CLEF2013. In CLEF 2013 Evaluation Labs and Workshop, Online Working
    Notes, 2013.
16. P. Vakkari. A theory of the task-based information retrieval process: a sum-
    mary and generalisation of a longitudinal study. Journal of documentation,
    57(1):44–60, 2001.