<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Visualizing and Quantifying Vocabulary Learning During Search</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Nilavra Bhattacharya</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jacek Gwizdka</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>School of Information, The University of Texas at Austin</institution>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>We report work in progress for visualizing and quantifying learning during search. Users initiate a search session with a Pre-Search Knowledge state. During search, they undergo a change in knowledge. Upon conclusion, users attain a Post-Search Knowledge state. We attempt to measure this dynamic knowledge-change from a stationary reference point: Expert Knowledge on the search topic. Using word-embeddings of searchers' written summaries, we show that, w.r.t. Expert Knowledge, there is an observable and quantifiable difference between the Pre-Search knowledge (Pre-Exp distance) and the Post-Search knowledge (Post-Exp distance).</p>
      </abstract>
      <kwd-group>
        <kwd>search as learning</kwd>
        <kwd>quantifying learning</kwd>
        <kwd>expert knowledge</kwd>
        <kwd>word embedding</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>An important aspect of understanding learning during web search is to measure and quantify learning, possibly in an automated fashion. Recent literature adopts three broad approaches for this purpose. The first approach asks searchers to rate their self-perceived pre-search and post-search knowledge levels [<xref ref-type="bibr" rid="ref24">1, 2</xref>]. This approach is the easiest to construct, and can be generalized over any search topic. However, self-perceptions may not objectively represent true learning. The second approach tests searchers' knowledge using factual multiple choice questions (MCQs). The answer options can be a mixture of fact-based responses (TRUE, FALSE, or I DON'T KNOW) [3, 4] or recall-based responses (I remember / don't remember seeing this information) [<xref ref-type="bibr" rid="ref3">5, 6</xref>]. Constructing topic-dependent MCQs may take time and effort, which may be aided by automated question-generation techniques [7]. For evaluation, this approach is the easiest, and often automated. However, MCQs allow respondents to answer correctly by guesswork. The third approach lets searchers write natural-language summaries or short answers, before and after the search [<xref ref-type="bibr" rid="ref12 ref24">8, 2</xref>]. Depending on experimental design, prompts for writing such responses can be generic (least effort) [9] or topic-specific (some effort) [7]. While this approach can provide the richest information about the searcher's knowledge state, evaluating such responses is the most challenging, and requires extensive human intervention.</p>
      <p>We report progress on extending work by [9], and take the third approach mentioned above. We attempt to visualize and quantify vocabulary learning during search, using natural-language Pre-Search and Post-Search responses. The previous authors used sentence-embedding models, and reported not finding strong associations between search interactions and knowledge-change measures. A possible reason is that sentence-embedding approaches are yet to attain maturity, and typically employ an average-pooling operation to generate sentence vectors from individual word vectors. Devising effective strategies to obtain vectors for compound units (phrases / sentences) from individual word vectors is always a challenge [10]. Differently from [9], we use word-embedding vectors and max-pooling operations (taking the element-wise maximum of individual word vectors to form sentence vectors), which experimentally showed better results than average-pooling.</p>
      <p>Figure 1: Conceptual framework of Search-as-Learning. A searcher moves from a Pre-Search Knowledge state, through searching, to a Post-Search Knowledge state; both states are compared against Expert Knowledge via the Pre-Exp and Post-Exp distances.</p>
      <p>Proceedings of the CIKM 2020 Workshops, October 19-20, 2020, Galway, Ireland
email: nilavra@ieee.org (N. Bhattacharya); iwilds2020@gwizdka.com (J. Gwizdka)
url: https://nilavra.in (N. Bhattacharya); http://gwizdka.com (J. Gwizdka)</p>
      <p>orcid: 0000-0001-7864-7726 (N. Bhattacharya); 0000-0003-2273-3996 (J. Gwizdka)</p>
      <p>© 2020 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org, ISSN 1613-0073)</p>
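      <p>The max-pooling operation described in this section can be illustrated with a minimal Python/numpy sketch. The toy 3-dimensional vocabulary below is a stand-in for the 300-dimensional word2vec/GloVe embeddings used in this work, and the function names are illustrative only:</p>

```python
import numpy as np

# Toy word-embedding lookup (a stand-in for word2vec / GloVe vectors).
EMBEDDINGS = {
    "vitamin":   np.array([0.2, -0.1, 0.7]),
    "night":     np.array([0.5,  0.3, -0.2]),
    "blindness": np.array([-0.4, 0.6, 0.1]),
}

def sentence_vector(text, embeddings=EMBEDDINGS):
    """Max-pool word vectors: element-wise maximum over all known words."""
    vectors = [embeddings[w] for w in text.lower().split() if w in embeddings]
    if not vectors:               # response with no signs of knowledge, e.g. "none"
        return np.zeros(3)        # zero vector (handled specially when computing distances)
    return np.max(vectors, axis=0)  # element-wise max, not average

v = sentence_vector("vitamin night blindness")
# each component of v is the maximum of that component across the word vectors
# → [0.5, 0.6, 0.7]
```

      <p>Average-pooling would replace the element-wise maximum with the element-wise mean; the rest of the pipeline is unchanged.</p>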
    </sec>
    <sec id="sec-2">
      <title>Pre-Search Prompt:</title>
      <p>Think of what you already know on the topic of this search and list as
many phrases or words as you can that come to your mind. For
example, if you know about side effects, please do not just type the
phrase “side effects”, but rather type “side effects” and then list the
specific side effects you know about. Please list only one word or phrase
per line and end each line with a comma.</p>
    </sec>
    <sec id="sec-3">
      <title>Post-Search Prompt:</title>
      <p>Now that you have completed this search task, think of the information
that you found and list as many words or phrases as you can on the topic
of the search task. This will be short ANSWERS to the search questions.</p>
      <p>For example, if you were searching for side effects, please do not just
type the phrase “side effects”, but rather type “side effects” and then list
the specific side effects you found. Please list only one word (or phrase)
per line and end each line with a comma.
per line and end each line with a comma.</p>
      <p>angular distance(u, v) = arccos( (u ⋅ v) / (‖u‖ ‖v‖) ) / π&#160;&#160;&#160;&#160;(1)</p>
      <p>2. Experimental Design</p>
      <p>We analyze data from the user-study reported in [<xref ref-type="bibr" rid="ref12">8, 9</xref>]. Participants (N = 30, 16 females, mean age 24.5 years) searched for health-related information on the web, over two search tasks: T3 (topic: Vitamin A) and T4 (topic: Hypotension). Each search task began (Pre-Search) and ended (Post-Search) with a knowledge assessment, to gauge the participants' initial and final knowledge states. Participants entered natural-language responses from free recall, as answers. A vocabulary of Expert Knowledge was also created for each topic, in consultation with a medical doctor. Example participant responses, and an excerpt from the Expert Knowledge, are shown in Fig. 2. After data cleaning, we obtained data from 49 participant-task pairs (T3: 26; T4: 23). Due to space limitations, please see [9] for more details about the study.</p>
      <p>word2vec contains 300-dimensional vectors for about 100 billion words (tokens) from the Google News dataset, and is claimed to be the most stable word embedding [13]. GloVe offers multiple pre-trained word embeddings; we ran experiments with the 50-, 100-, and 300-dimensional versions.</p>
      <p>Word-embedding algorithms produce vectors for individual words. To obtain vectors for phrases and sentences, the individual word vectors are usually pooled or aggregated. As discussed in Sec. 1, we performed max pooling, to produce a single high-dimensional vector for a participant response (or the expert knowledge). We employed two distance metrics, euclidean and angular (cosine), to compute distances between vectors of Pre-Search responses, Post-Search responses, and Expert Knowledge (Fig. 1). The euclidean distance is unbounded, while the angular distance (Eqn. 1) ranges from 0 (no distance) to 1 (maximum distance).</p>
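      <p>The angular distance of Eqn. 1 can be computed directly from its definition. A minimal Python/numpy sketch, including the zero-vector convention described in Sec. 3:</p>

```python
import numpy as np

def angular_distance(u, v):
    """Angular distance: arccos of cosine similarity, normalized by pi to [0, 1].

    By convention, the distance is 1 (maximum) if either vector is a
    zero vector, i.e. a response showing no signs of knowledge.
    """
    nu, nv = np.linalg.norm(u), np.linalg.norm(v)
    if nu == 0 or nv == 0:
        return 1.0
    # clip guards against floating-point values slightly outside [-1, 1]
    cos = np.clip(np.dot(u, v) / (nu * nv), -1.0, 1.0)
    return float(np.arccos(cos) / np.pi)

angular_distance(np.array([1.0, 0.0]), np.array([0.0, 1.0]))  # orthogonal → 0.5
```

      <p>Unlike the euclidean distance, this quantity is bounded, which makes the Pre-Exp and Post-Exp distances directly comparable across participants.</p>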
      <sec id="sec-3-1">
        <title>3. Data Analysis &amp; Preliminary Results</title>
        <p>We manually set the angular distance to be 1 (i.e., maximum) if one of the input vectors was a zero vector.</p>
        <p>This makes sense because zero vectors are obtained only if participants' responses do not contain any signs of knowledge (e.g., “none” or “i don't know”).</p>
        <p>We hypothesize that participants' learning during search can be assessed from the ‘difference’ in their Pre-Search and Post-Search responses. Since different participants may have different initial and final knowledge states, we measured it from a stationary reference point: the expert knowledge. Calculating such differences between pieces of natural-language text is challenging, and is an active research topic. Word embedding is a popular method of computing semantic similarity (or distance) between two pieces of natural-language text. A word-embedding algorithm produces a numeric, high-dimensional vector for each word, which is assumed to encapsulate the ‘meaning’ of the word. In this work, we leverage two popular pre-trained word-embedding models, word2vec [11] and GloVe [12], to compute ‘differences’ or ‘distances’ between Pre-Search, Post-Search, and Expert Knowledge (Fig. 1).</p>
        <p>To visualize the high-dimensional vectors of the various knowledge states, we employed the t-SNE algorithm. This algorithm projects a set of high-dimensional objects on a 2D plane in such a way that similar objects are modelled by nearby points, and dissimilar objects by distant points. Using this algorithm, we obtained 2D representations of the Pre-Search, Post-Search, and Expert Knowledge states (Fig. 3, left column). The visualization shows an almost clear separation between the Pre-Search (red circle) and Post-Search (green square) knowledge states, with Expert Knowledge (blue star) residing near the Post-Search knowledge states. This is a visual confirmation and support to the hypothesis that participants gain knowledge during search, and move ‘closer’ to the Expert Knowledge state at the end of a search.</p>
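        <p>The 2D projection step can be sketched with scikit-learn's t-SNE implementation. The random vectors below are invented stand-ins for the max-pooled 300-dimensional response vectors, not the study's data:</p>

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Random stand-ins for max-pooled 300-dimensional response vectors
# (invented for this sketch, NOT the study's data):
pre_vecs  = rng.normal(0.0, 1.0, size=(26, 300))   # Pre-Search responses
post_vecs = rng.normal(2.0, 1.0, size=(23, 300))   # Post-Search responses
expert    = rng.normal(2.0, 1.0, size=(1, 300))    # Expert Knowledge

X = np.vstack([pre_vecs, post_vecs, expert])

# t-SNE projects the high-dimensional vectors onto a 2D plane so that
# similar vectors land near each other and dissimilar ones far apart.
xy = TSNE(n_components=2, perplexity=10.0, init="random",
          random_state=0).fit_transform(X)
# xy holds one 2D point per knowledge-state vector, ready for a scatter plot
```

        <p>Perplexity and initialization are tuning choices; t-SNE layouts vary between runs unless the random state is fixed.</p>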
        <p>Figure 3: Per participant-task pair: 2D t-SNE visualizations of the Pre-Search, Post-Search, and Expert Knowledge embeddings (left column), and the Pre-Exp vs. Post-Exp distances under the Euclidean distance metric (middle column) and the Angular distance metric (right column; 0 = min distance, 1 = max distance).</p>
        <sec id="sec-3-1-3">
          <title>Results</title>
          <p>The Euclidean and Angular distances between Pre-Search and Expert (Pre-Exp distance), and between Post-Search and Expert (Post-Exp distance), are shown in the middle and right columns, respectively, of Fig. 3. For both distance metrics, the majority of the participants have lower Post-Exp distances than Pre-Exp distances (i.e. their Post-Search response is less distant from, or more similar to, Expert Knowledge). These metrics were calculated between the high-dimensional embedding vectors, which supports the fact that the 2D visualizations (left column), showing the clear separation between Pre- and Post-Search knowledge levels, are not merely due to random chance. Interestingly, for a few participants, the Post-Exp distance was higher than the Pre-Exp distance. This possibly demonstrates a ‘loss’ in knowledge level: these users were closer to Expert Knowledge before the search, and moved away from Expert Knowledge after the search.</p>
          <p>We further tested whether these visual differences between Pre-Exp and Post-Exp distances were statistically significant. Since the distance values were not normally distributed, we employed the non-parametric Wilcoxon Signed-Rank test, which is used for comparing paired or related samples. The results are presented in Table 1. We can see that across different choices of word embeddings, there were significant differences between the Pre-Exp and Post-Exp distances. Thus, the results are not due to the choice of a particular word-embedding model. The directionalities of the differences in the Wilcoxon Signed-Rank test are expressed using the sum of the positive difference ranks (ΣR+) and the sum of the negative difference ranks (ΣR−). Since ΣR− was greater than ΣR+ in all the tests, the difference between Pre-Exp and Post-Exp distances is negative. This means that the majority of participants had a lower Post-Exp distance than Pre-Exp distance (i.e. they moved closer to expert knowledge by the end of the task). The magnitude of a phenomenon is measured by effect size, which ranges from 0 (no effect) to 1 (maximum effect). All the tests had effect sizes greater than 0.8, signifying that searching online had a strong effect on minimizing the distance between participants' knowledge level and expert knowledge.</p>
          <p>4. Conclusion and Future Work</p>
          <p>We showed that word embeddings have promise for visualizing and quantifying vocabulary-based learning during search. Clear separation between users' Pre-Search and Post-Search knowledge states was seen, and measured using simple distance metrics. Possible future directions include predicting these learning metrics from search-interaction measures. Another direction is to experiment with contextual embeddings (e.g., BERT). We also plan to investigate individual differences in learning during search.</p>
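          <p>The statistical test can be reproduced in outline with scipy. The paired distances below are invented numbers for the sketch, not the study's data, and the rank-biserial correlation used here as effect size is one common choice; the exact effect-size formula is not specified above:</p>

```python
import numpy as np
from scipy.stats import wilcoxon, rankdata

# Illustrative paired distances for eight participant-task pairs
# (invented numbers for this sketch, NOT the study's data):
pre_exp  = np.array([0.42, 0.55, 0.38, 0.61, 0.47, 0.52, 0.44, 0.58])
post_exp = np.array([0.21, 0.30, 0.35, 0.33, 0.25, 0.61, 0.28, 0.31])

# Non-parametric test for paired samples (distances are not normal):
res = wilcoxon(pre_exp, post_exp)

# Signed-rank sums: rank the absolute differences, then sum the ranks of
# positive (Pre-Exp greater) and negative (Post-Exp greater) differences.
diff = pre_exp - post_exp
ranks = rankdata(np.abs(diff))
r_plus = ranks[diff > 0].sum()    # pairs that moved closer to the expert
r_minus = ranks[diff < 0].sum()   # pairs that moved away

# Rank-biserial correlation as an effect size (an assumption of this sketch):
effect_size = abs(r_plus - r_minus) / (r_plus + r_minus)
```

          <p>With this toy data, most differences are positive, so ΣR+ dominates and the test rejects the null hypothesis of no change.</p>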
        </sec>
      </sec>
      <sec id="sec-3-2">
        <title>Acknowledgements</title>
        <p>We thank Sudipto Mukherjee, for technical and conceptual mentoring; Dr. Andrzej Kahl, our medical doctor consultant, for expert-vocabulary creation; and Yinglong Zhang, for contributing to experiment data collection. The research was partially funded by IMLS Award #RE-04-11-0062-11 to Jacek Gwizdka.</p>
        <p>Table 1: Pre-Exp and Post-Exp distances (mean ±SD), per word-embedding model and distance metric, with Wilcoxon Signed-Rank test results. All tests were significant at p &lt; .05, with effect sizes above 0.8 (e.g., word2vec: ΣR+ = 20.0, ΣR− = 1205.0, 95% CI: −2.76 to −1.82, effect size 0.84).</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref24">
        <mixed-citation>[1] S. Ghosh, M. Rath, C. Shah, Searching as learning: Exploring search behavior and learning outcomes in learning-related tasks, in: Conference on Human Information Interaction &amp; Retrieval (CHIIR), 2018.</mixed-citation>
      </ref>
      <ref id="ref1">
        <mixed-citation>[2] H. L. O'Brien, A. Kampen, A. W. Cole, K. Brennan, The role of domain knowledge in search as learning, in: Conference on Human Information Interaction and Retrieval (CHIIR), 2020.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[3] L. Xu, X. Zhou, U. Gadiraju, How does team composition affect knowledge gain of users in collaborative web search?, in: Conference on Hypertext and Social Media (HT), 2020.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] U. Gadiraju, R. Yu, S. Dietze, P. Holtz, Analyzing knowledge gain of users in informational search sessions on the web, in: Conference on Human Information Interaction &amp; Retrieval (CHIIR), 2018.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[5] S. Kruikemeier, S. Lecheler, M. M. Boyer, Learning from news on different media platforms: An eye-tracking experiment, Political Communication 35 (2018) 75-96.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[6] N. Roy, F. Moraes, C. Hauff, Exploring users' learning gains within search sessions, in: Conference on Human Information Interaction and Retrieval (CHIIR), 2020.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[7] R. Syed, K. Collins-Thompson, P. N. Bennett, M. Teng, S. Williams, W. Tay, S. Iqbal, Improving learning outcomes with gaze tracking and automatic question generation, in: The Web Conference (WWW), 2020.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[8] N. Bhattacharya, J. Gwizdka, Relating eye-tracking measures with changes in knowledge on search tasks, in: Symposium on Eye Tracking Research &amp; Applications (ETRA), 2018.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[9] N. Bhattacharya, J. Gwizdka, Measuring learning during search, in: Conference on Human Information Interaction and Retrieval (CHIIR), 2019.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[10] D. Roy, D. Ganguly, M. Mitra, G. J. F. Jones, Representing documents and queries as sets of word embedded vectors for information retrieval, in: ACM SIGIR Workshop on Neural Information Retrieval (Neu-IR), 2016.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[11] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, J. Dean, Distributed representations of words and phrases and their compositionality, in: Advances in Neural Information Processing Systems, 2013, pp. 3111-3119.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[12] J. Pennington, R. Socher, C. D. Manning, GloVe: Global vectors for word representation, in: Conference on Empirical Methods in Natural Language Processing (EMNLP), 2014, pp. 1532-1543.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[13] L. Burdick, J. K. Kummerfeld, R. Mihalcea, Factors influencing the surprising instability of word embeddings, in: Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), 2018, pp. 2092-2102.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>