<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
<journal-title/>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>A Triangulation Perspective for Search as Learning</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Nilavra Bhattacharya</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jacek Gwizdka</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>School of Information, The University of Texas at Austin</institution>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2021</year>
      </pub-date>
      <volume>000</volume>
      <fpage>0</fpage>
      <lpage>0001</lpage>
      <abstract>
        <p>Search engines and information retrieval (IR) systems are becoming increasingly important as educational platforms to foster learning. Modern search systems still have room to improve in this regard. We posit that learning-during-search is a good candidate for a human-centred metric of IR evaluation. This involves measuring two phenomena: learning, and searching. We discuss ways to measure learning, and propose a conceptual framework for describing searchers' knowledge-change during search. We stress the need for developing better measures for the search process, and discuss why we need to rethink the existing models of information seeking.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <sec id="sec-1-1">
        <p>
          As early as 1980, Bertram Brookes, in his ‘fundamental
equation’ of information and knowledge, K[S] + ΔI = K[S + ΔS],
stated that a searcher’s current state of knowledge, K[S],
is changed into the new knowledge structure, K[S + ΔS],
by exposure to information ΔI, with ΔS indicating the
effect of the change [1, p. 131]. This indicates that
searchers acquire new knowledge in the search process,
and that the same information ΔI may have different
effects on different searchers’ knowledge states. Fifteen
years later, Marchionini described information seeking
as “a process, in which humans purposefully engage in
order to change their state of knowledge” [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ].
Thus, we have known for quite a while that search is
driven by higher-level human needs, and that Information
Retrieval (IR) is a means to an end, not the end in itself.
        </p>
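<p>In standard notation, Brookes’ fundamental equation reads:</p>

```latex
% Brookes' fundamental equation of information science [1]:
% an existing knowledge structure K[S], modified by an increment
% of information \Delta I, becomes a new structure K[S + \Delta S],
% where \Delta S indicates the effect of the change.
K[S] + \Delta I = K[S + \Delta S]
```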
        <p>
          When we consider information seeking as a process
that changes the searcher’s knowledge-state, the question
arises whether the assessment of
knowledge-acquisitionduring-search, or learning, should subsume the standard
IR evaluation metrics and the search interface usability
metrics. It seems that to diagnose a problem or to
understand a success of a search system, we would still
need to control the standard aspects of a search system
(e.g., results ranking, search user interface design
features). However, a direct assessment of these
“lowerlevel” aspects would lose on importance. On the other
hand, support for more rapid learning across a number
of searchers, and over a range of diferent search tasks
can be indicative of an IR system that is more efective
at supporting intelligence amplification and knowledge
building [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. In the last decade, this recognition that IR
systems of tomorrow can become “rich learning spaces”
and foster knowledge gain, has led to the emergence of
the Search as Learning (SAL) research community [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ],
and the need to consider learning-during-search as a
metric for evaluation of Interactive IR (IIR) systems.
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>2. Metrics for Learning &amp; Knowledge</title>
      <sec id="sec-3-1">
        <title>2.1. Experts vs. Novices</title>
        <sec id="sec-3-1-1">
          <p>
            If we consider learning-during-search to be a good
candidate criterion for IR evaluation, the next challenge
is how to measure learning, or knowledge acquisition,
possibly in an automated fashion. Here we can turn to the
educational psychology literature. A research report by the
US National Research Council [
            <xref ref-type="bibr" rid="ref5">5</xref>
            ] identified the following
key principles about experts’ knowledge, illustrating the
results of successful knowledge acquisition:
1. “Experts notice features and meaningful patterns
of information that are not noticed by novices.”
2. “Experts have acquired a great deal of content
knowledge that is organized in ways that reflect
a deep understanding of their subject matter.”
3. “Experts’ knowledge cannot be reduced to sets of
isolated facts or propositions but, instead, reflects
contexts of applicability: that is, the knowledge
is ‘conditionalized’ on a set of circumstances.”
4. “Experts are able to flexibly retrieve important
aspects of their knowledge with little attentional
effort.”
          </p>
        </sec>
        <sec id="sec-3-1-2">
          <p>
            Some of the above findings have been used by our
community in the past. E.g., user learning has been measured
by users’ familiarity with concepts and the relationships
between concepts [
            <xref ref-type="bibr" rid="ref6">6</xref>
            ], gains in users’ understanding of the
topic structure [
            <xref ref-type="bibr" rid="ref7">7</xref>
            ], and users’ ability to formulate more
effective queries [
            <xref ref-type="bibr" rid="ref6 ref8">8, 6</xref>
            ]. Building on these findings, we can
think about ways to treat Expert Knowledge of the
search topic as the ‘gold standard’ or ‘ground truth’ (in
algorithmic parlance) for developing learning-based IIR
evaluation metrics.
          </p>
        </sec>
        <sec id="sec-3-1-3">
          <title>2.2. Measuring Knowledge-Change</title>
          <p>
            Recent literature on Search-as-Learning adopts three
broad approaches to measure learning, or knowledge-change,
each with its own strengths and limitations. The
first approach asks searchers to rate their self-perceived
pre-search and post-search knowledge levels [
            <xref ref-type="bibr" rid="ref10 ref9">9, 10</xref>
            ]. This
approach is the easiest to construct, and can be generalised
over any search topic. However, self-perceptions may
not objectively represent true learning. The second
approach tests searchers’ knowledge using factual
multiple-choice questions (MCQs). The answer options can
be a mixture of fact-based responses (TRUE, FALSE, or I
DON’T KNOW) [
            <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
            ] or recall-based responses (I remember /
don’t remember seeing this information) [13, 14].
Constructing topic-dependent MCQs may take time and
effort, which may be aided by automated question
generation techniques [15]. For evaluation, this approach
is the easiest, and often automated. However, MCQs
allow respondents to answer correctly by guesswork. The
third approach lets searchers write natural-language
summaries or short answers, before and after the search
[
            <xref ref-type="bibr" rid="ref10">16, 10</xref>
            ]. Depending on the experimental design, prompts for
writing such responses can be generic (least effort) [17]
or topic-specific (some effort) [15]. While this approach
provides rich information about a searcher’s knowledge
state, evaluating such responses is the most challenging.
          </p>
          <p>
            We can conceptualize a triangle-based framework for
searchers’ knowledge-change during search (Fig. 1).
Searchers initiate a search session with a Pre-Search
Knowledge state. During search, they undergo a change
in knowledge. On conclusion of search, searchers attain
the Post-Search Knowledge state. We can attempt to measure
this dynamic knowledge-change from a stationary
reference point: Expert Knowledge on the search topic
(the ground truth). If we imagine these three knowledge
states to be the three vertices of a triangle (Fig. 1, left),
and if, by some hypothetical metric, we can compute
the distance between any two of these knowledge-state
points, then we have found a way to quantify
learning-during-search.
          </p>
          <p>
            Moving further, if we dichotomize the learning-during-search
as ‘HIGH’ vs ‘LOW’ by establishing a threshold
value for the distances, then we obtain eight possible
knowledge-change situations (Fig. 1, right table). Three
of these eight situations violate the triangle inequality
(the sum of the lengths of any two sides of a triangle
is greater than the third side); they are denoted by ‘X’
in the table, and are therefore discarded. The remaining
five valid situations are discussed below. When the
Pre-Search and Post-Search Knowledge States are both
very ‘close’ to Expert Knowledge (row 1 in the table), we
can assume the searcher is an expert. On the other hand,
if the Pre-Search and Post-Search Knowledge States are
close to each other, but far away from Expert Knowledge
(row 4), the searcher is probably a novice, and also a slow
learner, because on conclusion of search, their knowledge
still remained far from the Expert level.
          </p>
        </sec>
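<p>The dichotomization above can be sketched in code. The following is a hypothetical illustration, not part of the paper: it assumes some metric has already produced the three pairwise distances, and the threshold value is arbitrary.</p>

```python
def classify_knowledge_change(d_pre_expert, d_post_expert, d_pre_post, threshold):
    """Label the dichotomized knowledge-change situation for one searcher.

    Each pairwise distance between the Pre-Search, Post-Search and Expert
    knowledge states is dichotomized as far (HIGH) or close (LOW). The three
    combinations with exactly one far side violate the triangle inequality
    and are discarded; the five remaining rows are labelled as in Fig. 1.
    """
    far = tuple(d > threshold for d in (d_pre_expert, d_post_expert, d_pre_post))
    if sum(far) == 1:
        return "discarded: violates the triangle inequality"
    return {
        (False, False, False): "row 1: searcher is an expert",
        (True, True, False): "row 4: novice and slow learner",
        (True, False, True): "row 6: learned, moved towards expert",
        (False, True, True): "row 7: knowledge loss",
        (True, True, True): "row 8: misdirected search and learning",
    }[far]

# A searcher who starts far from the expert and ends close to it:
print(classify_knowledge_change(0.9, 0.2, 0.8, threshold=0.5))
# prints "row 6: learned, moved towards expert"
```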
        <sec id="sec-3-1-4">
          <p>When the Post-Search Knowledge is closer to Expert
Knowledge than the Pre-Search Knowledge (row 6), it implies
that the searcher gained a ‘good amount’ of new knowledge;
this is thus the most desirable situation for Search as
Learning.</p>
          <p>The last two rows of the table in Fig. 1 present two
interesting, albeit undesirable, possibilities. If the Pre-Search
Knowledge is closer to Expert, but the Post-Search
Knowledge is further away (row 7), it can signify knowledge
loss (which is also a form of knowledge change). On
the other hand, if both the Pre-Search and the Post-Search
Knowledge are far away from Expert, and also far away
from each other (row 8), then it is a case of misdirected
search, and therefore misdirected learning. A classic
illustration of these two situations is health information
seeking. Suppose a user is searching for the cause and
treatment of a small brownish spot on the wrist. If a
physician examined the spot, they would immediately
identify it as an oil-splatter burn from cooking (Expert
Knowledge State). The searcher may, however, based on
search results, come to the incorrect conclusion that they
have skin cancer [18, 19]. Before the search, if the searcher
correctly guessed that the spot was due to an oil-splatter
burn, then the situation would be described by row 7
(knowledge loss, or increase in confusion), whereas if the
searcher had no intuition about the cause of the spot before
the search, the situation would be described by row 8. Both
situations should be avoided by modern IIR systems.</p>
        </sec>
        <sec id="sec-3-1-5">
          <title>2.4. Graph-based Operationalization</title>
          <p>While the framework discussed in Section 2.3 is purely
conceptual, we can think of a possible operationalization
using graph-based representations, such as concept maps
[20] or personalized knowledge graphs [21] (the terms
are used interchangeably in this section).</p>
          <p>
            “Learning does not happen all at once . . . it builds
on and is shaped by what people already know” [
            <xref ref-type="bibr" rid="ref3">3</xref>
            ].
          </p>
          <p>The Learning and Cognitive Sciences have generally
found that meaningful “deep learning” (of the human
kind) requires learners to: (i) relate new ideas and
concepts to previous knowledge and experiences; (ii)
integrate knowledge into interrelated conceptual systems;
and (iii) look for patterns and underlying principles
[22, 23]. Concept maps are therefore arguably well
suited to representing such knowledge structures,
connections, and patterns. A concept map is a two-dimensional,
hierarchical node-link diagram (graph) that depicts the
structure of knowledge within a discipline, as viewed by
a student, an instructor, or an expert in a field or sub-field.
The map is composed of concept labels, each enclosed in
a box (graph nodes); a series of labelled linking lines
(labelled edges); and an inclusive, general-to-specific
organization [24]. Concept maps assess how well students
see the “big picture”, and where there are knowledge
gaps and misconceptions. They have been used for over
50 years to provide a useful and visually appealing way
of illustrating and assessing learners’ conceptual
knowledge [25, 20, 24, 26, 27, 28, 29].</p>
          <p>Expert knowledge, or the “ground truth”, can be
represented as topical knowledge-graphs of the information
contained in online encyclopedias and knowledge bases.
A searcher’s pre- and post-search knowledge states can
be represented as concept maps or personal knowledge
graphs. The searcher’s graphs will evolve cumulatively
over time, as they encounter more information online.
Construction of the personal knowledge graph can be
manual (most effort), fully automated (least effort, but
prone to prediction errors), or a human-in-the-loop
solution (an auto-generated map is shown, but the user is
free to modify it as necessary).</p>
          <p>Having represented knowledge states as graph-based
structures, measuring the similarity or distance between
them becomes equivalent to the graph matching problem.
Various algorithms and metrics have been proposed for
exact and inexact graph matching [30]. Many of the
solutions take an optimization-problem approach [31]. Some
examples include structural similarity matching (comparing
diameters, edges, degree distributions, etc.), iterative
matching (comparing node neighbours), subgraph
comparison, and graph isomorphism [32].</p>
          <p>Besides comparing two graphs, other kinds of analyses
can reveal interesting patterns of learning and thinking,
which can be correlated with search process measures.
Some of the measures used by Halttunen and Jarvelin [24]
are additions, deletions, and differences in top-level
concept-nodes, depths of hierarchy, and the number of
concepts that were ignored or changed fundamentally.
In this regard, Novak and Gowin [25] have presented a
well-established scoring scheme for evaluating concept
maps: 1 point is awarded for each correct relationship
(i.e., concept–concept linkage); 5 points for each valid
level of hierarchy; 10 points for each valid and significant
cross-link; and 1 point for each example. Such analysis
methods can further inform the development of future
operationalizations.</p>
          <p>As our anonymous reviewers mentioned, knowing the
goal of the learner is important in this scenario, as it
will guide the formation of the learner’s personal map.</p>
          <p>Furthermore, search systems (or internet browsers) may
provide a special ‘learning mode’ dedicated to measuring
learning. This would help to exclude transactional or
navigational search sessions that are not necessarily aimed
at learning or knowledge acquisition.</p>
        </sec>
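<p>Novak and Gowin’s scoring scheme is simple enough to state directly in code. A minimal sketch; the four counts are assumed to come from a manual or automated analysis of the learner’s map:</p>

```python
def score_concept_map(relationships, hierarchy_levels, cross_links, examples):
    """Novak and Gowin's concept-map score: 1 point per correct
    concept-concept relationship, 5 per valid level of hierarchy,
    10 per valid and significant cross-link, and 1 per example."""
    return relationships + 5 * hierarchy_levels + 10 * cross_links + examples

# e.g. 12 relationships, 3 hierarchy levels, 2 cross-links, 4 examples:
print(score_concept_map(12, 3, 2, 4))  # prints 51
```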
      </sec>
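<p>As one deliberately simple instance of inexact graph matching for this setting, two concept maps can be compared by the overlap of their labelled edges. A hypothetical sketch, assuming each map is given as a set of (concept, link-label, concept) triples; one minus this similarity could serve as a distance in the triangle framework:</p>

```python
def edge_jaccard(map_a, map_b):
    """Crude structural similarity between two concept maps, each given
    as a set of (concept, link_label, concept) triples: the Jaccard
    overlap of their labelled edges (1.0 means identical edge sets)."""
    if not map_a and not map_b:
        return 1.0
    shared = map_a.intersection(map_b)
    return len(shared) / len(map_a.union(map_b))

novice = {("sun", "causes", "sunburn")}
expert = {("sun", "emits", "UV radiation"), ("UV radiation", "causes", "sunburn")}
print(edge_jaccard(novice, expert))  # prints 0.0 (no shared labelled edges)
```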
    </sec>
    <sec id="sec-4">
      <title>3. Measuring the Search Process</title>
      <p>
        Learning-during-search involves two intertwined
activities: learning and searching. In Sec. 2, we discussed
approaches to measure learning. The other part of the
picture involves measuring the search process itself. Past
research efforts have largely been devoted to measuring
search outcomes: e.g., whether a target document was
reached, or whether relevant results were shown. We
argue that a more human-centred approach to measuring
search is to try to quantify the search process.
      </p>
      <sec id="sec-4-1">
        <title>3.1. Need for Longitudinal Studies</title>
        <p>
          A major limitation of most IIR research efforts is that
the user is examined in the short term, typically over the
course of a single lab session. The trend is similar in other
HCI research venues. [33] stressed the need for longitudinal
designs over a decade ago, yet a meta-analysis of
1014 user studies reported at the ACM CHI 2020 conference
revealed that more than 85% of the studies observed
participants for a day or less. To this day, “longitudinal
studies are the exception rather than the norm” [34].
On the other hand, it is quite evident that knowledge
acquisition is a longitudinal process, occurring gradually
over time [
          <xref ref-type="bibr" rid="ref3 ref5">3, 23, 5, 22</xref>
          ]. This is why most educational
curricula in schools and universities are spread across
several months and years. “An over-reliance on short
studies risks inaccurate findings, potentially resulting in
prematurely embracing or disregarding new concepts” [34].
        </p>
      </sec>
      <sec id="sec-4-2">
        <title>3.2. Need for Updated Theoretical Models</title>
        <p>The Information Seeking literature is dominated by a
large number of “multiple arrow-and-box” theoretical
models. These models divide the information seeking
process for complex search tasks into different stages.
Some argue that these models are not “real models”
but more of “short-hand common-sense task flows”
[35, 36]. The mantra of these models has always been
the same: they have “implications for systems design
and practice”. Unfortunately, these models, along with a
significant body of IIR research, have not been able to go
beyond suggestions to providing concrete design solutions
[37]. Moreover, there is great overlap in basic search
strategies across many of these models [38], calling into
question whether so many models are still relevant.
Consequently, current search systems still predominantly use
a “one-size-fits-all” approach: one interface is used for all
stages of a search, even for complex search endeavours
[39].</p>
        <p>
          Again reiterating [33], we posit that these models,
theorised decades ago for bulky desktop computers, are in
need of improvement. Information seeking models have
to incorporate the continuous or lifelong nature of online
information searching, enabled by the proliferation of
internet access on various handheld and portable digital
devices. For instance, Marchionini’s well-known
information seeking process (ISP) [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ] models information
seeking behaviour in eight stages, with connecting
feed-forward and feed-back loops between the stages.
However, some researchers argue that users never really
go “back” to an earlier state; e.g., “when reformulating
the query, users do not really go back to the initial
situation, they submit an improved query” [40]. With the
progress of time, there is a continuous update of users’
information need [41] and search context [42]. Thus, the
intricate relationships between users’ knowledge state,
cognitive state, and other factors influencing search (the
search context) are ever-changing. Perhaps, then, Spink’s
model of the IR interaction process [43], which models
interactive search as an infinite continuous process of
sequential steps, or cycles (where each cycle consists of
the user’s query input, the IR system output, and the
user’s interpretation and judgement of the output), is
better suited to explain information searching behaviour.
Like time, there may not be an absolute beginning or end
of a user’s information searching process, but only search
sessions. The user’s cognitive state is ever-changing and
advancing, both during and between these search sessions.
So a more realistic model will probably mean a fusion of
Marchionini’s and Spink’s models, where Marchionini’s
entire ISP process becomes a cycle inside Spink’s model,
with forward-directed arrows only. These types of realistic
models, improved and validated by empirical data, will
help to explain the phenomena behind next-generation
search interactions, such as searching while multi-tasking,
multi-tabbed browsing [3, p. 36], multi-device searching,
and multi-session searching [3, p. 61].
        </p>
      </sec>
      <sec id="sec-4-3">
        <title>3.3. Neuro-physiological Methods</title>
        <p>Neuro-physiological (NP) methods [44] provide an
interesting avenue to observe users while they interact
with information systems. Two popular NP methods
are eye-tracking [45, 46] and EEG [47]. Eye-tracking
can capture the eye movements of users while they examine
information on a screen. EEG captures (changes in)
activation in different brain regions as users consume
information. NP methods provide opportunities to understand
and investigate how users gain knowledge during
search. E.g., searchers use words or phrases they read
in previous search results in their future query
reformulations [48]. Eye-tracking can detect and model this
phenomenon. As a result, a number of recent efforts have
tried to investigate learning (during search) using one or
more NP methods [16, 17, 49, 50, 51, 15]. However, a
major limitation of NP methods is that they (still) require
lab environments for data collection. Taking lessons from
the COVID-19 pandemic, as well as for scalability reasons,
the IIR community needs search process metrics that can
measure remote user interaction, preferably over the long
term. Consumer wearable devices (e.g., smartwatches)
are a promising direction, since they can record
physiological data such as heart rate, skin temperature, and
galvanic skin response. White et al. [52] collected such
data at a population scale, and correlated them with the
population’s search activities, to obtain improvements in
the relevance of result rankings.</p>
      </sec>
    </sec>
    <sec id="sec-ack">
      <title>Acknowledgments</title>
      <p>We thank the anonymous reviewers for their very helpful
and thought-provoking suggestions and feedback.</p>
    </sec>
    <sec id="sec-5">
      <title>4. Conclusion</title>
      <sec id="sec-5-1">
        <p>The perspectives and propositions in this paper have
been shaped by our experience in IIR research. The
Information Processing Model from Educational Psychology
states that information is most likely to be retained by
a learner if it makes sense, and has meaning [53, p. 55].</p>
        <p>When a piece of information fits into the world-view of
the learner, it is said to make sense; when information
is relevant to the learner, it has meaning. Our past
research has primarily focused on the second aspect of
information retention: relevance judgement. After several
user studies analysing multimodal sources of data,
we generally conclude that relevant information attracts
more visual attention, longer eye-dwell time, and stronger
brain activation [45, 54, 17], compared to irrelevant
information. Metrics which capture the entire duration
of an experimental trial, or the real-time flow of
interactions, usually perform better as predictors than metrics
which aggregate the entire trial into a set of single
numbers [46, 55, 54]. Hence we call for new and improved
measures of the search process.</p>
        <p>In the domain of Search as Learning, we employed
word [56] and sentence [17] embeddings to semantically
compare searchers’ responses with expert knowledge.</p>
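<p>Such semantic comparison reduces to vector similarity. A minimal sketch, with toy vectors standing in for real word or sentence embeddings:</p>

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Toy vectors standing in for embedded responses (not real embeddings).
pre_search = [0.9, 0.1, 0.0]
post_search = [0.5, 0.6, 0.3]
expert = [0.4, 0.7, 0.4]

# The post-search response ends up semantically closer to the expert text.
assert cosine_similarity(post_search, expert) > cosine_similarity(pre_search, expert)
```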
        <p>Word embeddings provided better visualization of
results, showing a clear separation of Pre-Search Knowledge
from Post-Search and Expert Knowledge [56]. We also
correlated Knowledge-Change measures with interaction
and eye-tracking measures. We saw that people who
learnt ‘less’ spent more reading effort on SERPs [17].</p>
        <p>Conversely, people who learnt ‘more’ did less
reading overall, but most of their reading was on content
pages. These high learners used more specialized terms
in their queries, and reported higher mental workload
(NASA-TLX).</p>
        <p>In conclusion, we reiterate that learning-during-search
is a good candidate for evaluating IR systems. We need
more research to uncover the relationships between
users’ search processes and their learning outcomes.
Process measures can shed light on the various subtle aspects
of human behaviour. If we understand them well, we can
teach people to be more successful in their information
seeking efforts, and maximize their learning outcomes.</p>
        <p>We envision that in the future, searchers will be able
to ‘track’ and measure their knowledge progress over
time, in a manner similar to tracking weight, fitness, and
physical exercises.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>B. C.</given-names>
            <surname>Brookes</surname>
          </string-name>
          ,
          <article-title>The foundations of information science. Part I. Philosophical aspects</article-title>
          ,
          <source>Journal of Information Science 2</source>
          (
          <year>1980</year>
          )
          <fpage>125</fpage>
          -
          <lpage>133</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>G.</given-names>
            <surname>Marchionini</surname>
          </string-name>
          , Information Seeking in Electronic Environments, Cambridge University Press,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>R. W.</given-names>
            <surname>White</surname>
          </string-name>
          ,
          <source>Interactions with Search Systems</source>
          , Cambridge University Press,
          <year>2016</year>
          . doi:10.1017/CBO9781139525305.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S. Y.</given-names>
            <surname>Rieh</surname>
          </string-name>
          , Research area 1:
          <article-title>Searching as learning</article-title>
          , https://rieh.ischool.utexas.edu/research,
          <year>2020</year>
          . [Online; accessed 2020-04-19].
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          National Research Council
          , How People Learn: Brain, Mind, Experience, and School: Expanded Edition, The National Academies Press, Washington, DC,
          <year>2000</year>
          . doi:10.17226/9853.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>P.</given-names>
            <surname>Pirolli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Schank</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hearst</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Diehl</surname>
          </string-name>
          ,
          <article-title>Scatter/gather browsing communicates the topic structure of a very large text collection</article-title>
          ,
          <source>in: Conference on Human Factors in Computing Systems (CHI'96)</source>
          ,
          <year>1996</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>P.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Soergel</surname>
          </string-name>
          ,
          <article-title>Process patterns and conceptual changes in knowledge representations during information seeking and sensemaking: A qualitative user study</article-title>
          ,
          <source>Journal of Information Science</source>
          <volume>42</volume>
          (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Understanding online health information consumers' search as a learning process</article-title>
          ,
          <source>Library Hi Tech</source>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S.</given-names>
            <surname>Ghosh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Rath</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Shah</surname>
          </string-name>
          ,
          <article-title>Searching as learning: Exploring search behavior and learning outcomes in learning-related tasks</article-title>
          ,
          <source>in: Conference on Human Information Interaction &amp; Retrieval (CHIIR)</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>H. L.</given-names>
            <surname>O'Brien</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kampen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. W.</given-names>
            <surname>Cole</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Brennan</surname>
          </string-name>
          ,
          <article-title>The role of domain knowledge in search as learning</article-title>
          ,
          <source>in: Conference on Human Information Interaction and Retrieval (CHIIR)</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>L.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Gadiraju</surname>
          </string-name>
          ,
          <article-title>How does team composition affect knowledge gain of users in collaborative web search?</article-title>
          ,
          <source>in: Conference on Hypertext and Social Media (HT'20)</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>U.</given-names>
            <surname>Gadiraju</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Dietze</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Holtz</surname>
          </string-name>
          ,
          <article-title>Analyzing knowledge gain of users in informational search</article-title>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>