<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Relations Between Relevance Assessments, Bibliometrics and Altmetrics</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
<institution>Forschungszentrum Jülich</institution>
          ,
<addr-line>52425 Jülich</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>TH Koln (University of Applied Sciences)</institution>
          ,
          <addr-line>50678 Cologne</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2020</year>
      </pub-date>
      <fpage>101</fpage>
      <lpage>112</lpage>
      <abstract>
<p>Relevance assessment in retrieval test collections and citations/mentions of scientific documents are two different forms of relevance decisions: direct and indirect. To investigate these relations, we combine arXiv data with Web of Science and Altmetrics data. In this new collection, we assess the effect of relevance ratings on measured perception in the form of citations or mentions, likes, tweets, et cetera. The impact of our work is that we could show a relation between direct relevance assessments and indirect relevance signals.</p>
      </abstract>
      <kwd-group>
<kwd>Relevance assessments, bibliometrics, citations, information retrieval, test collections</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        One of the long-running open questions in Information Science in general and
especially in Information Retrieval (IR) is on what constitutes relevance and
relevance decisions. In this paper, we would like to borrow from the idea of
using IR test collections and their relevance assessments to intersect these
explicit relevance decisions with some implicit or hidden relevance decisions in the
form of citations. We see this in the light of Borlund's discussion of relevance
and its multidimensionality [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. On the one hand, we have the test collection's
relevance assessments that are direct relevance decisions and are always based
on a concrete topic and the corresponding information need of an assessor [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ].
On the other hand, the citation data gives us a hint on a distant or indirect
relevance decision from external users. These external users are not part of the
design process of the test collections, and we do not know anything about their
information need or retrieval context. We only know that they cited a specific
paper - therefore, this paper was somehow relevant to them. Otherwise, they
would not have cited it.
* Listed in alphabetical order. Data and sources are available at Zenodo [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
<p>Copyright © 2020 for this paper by its authors. Use permitted under Creative
Commons License Attribution 4.0 International (CC BY 4.0). BIR 2020, 14 April 2020,
Lisbon, Portugal.</p>
      <p>
        A test collection that incorporates both direct and indirect relevance
decisions is the iSearch collection introduced by Lykke et al. [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. One of the main
advantages of iSearch is the combination of a classic document collection derived
from the arXiv, a set of topics that describe a specific information need plus the
related context, relevance assessments, and a complementing set of references
and citation information.
      </p>
      <p>
        Carevic and Schaer [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] previously analyzed the iSearch collection to learn
about the connection between topical relevance and citations. Their experiments
showed that internal references within the iSearch collection did not retrieve
enough relevant documents when using a co-citation-based approach. Only very
few topics retrieved a high number of potentially relevant documents. This might
be due to the preprint characteristics of the arXiv, where typically, a citation
would target a journal publication and not the preprint. This information on
external citations is not available within iSearch.
      </p>
      <p>To improve on the known limitations of having a small overlap of citations
and relevance judgments in iSearch, we expand the iSearch document collection
and its internal citation data. We complement iSearch with external citation data
from the Web of Science. Additionally, we add different Altmetric scores, as they
might introduce some other promising insights on relevance indicators. These
different data sources will be used to generate a dataset to investigate whether
there is a correlation between intellectually generated direct relevance decisions
and indirect relevance decisions incorporated through citations or mentions in
Altmetrics.</p>
      <p>Our expanded iSearch collection allows us to compare and analyze direct
and indirect relevance assessments. The following research questions are to be
addressed with the help of this collection and a first data evaluation:
RQ1 Are arXiv documents with relevance ratings published in journals with a
higher impact?
RQ2 Are arXiv documents with a relevance rating cited more often, or do they
receive more mentions in Altmetrics?
RQ3 In the literature, a connection between Mendeley readerships and citations
is described. Is there evidence of a link between Mendeley readerships and
citations in the documents with relevance ratings?</p>
      <p>The paper is structured as follows: In Section 2, we describe the related work.
Section 3 is about the data set generation and on the intersections between
arXiv, Web of Science, and the Altmetrics Explorer. In Section 4, we use this
new combined data set to answer the previous research questions. We discuss
our first empirical results in Section 5 and draw some first conclusions.</p>
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
      <p>
        Borlund [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] proposed a theory of relevance in IR for the multidimensionality of
relevance, its many facets, and the various relevance criteria users may apply
in the process of judging the relevance of retrieved information objects. Later,
Cole [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] expanded on this work and asked about the underlying concept of
information needs, which is the foundation for every relevance decision. While these
works discuss the question of relevance and information need in great detail,
they lack a formal evaluation of their theories and thoughts.
      </p>
      <p>
        White [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] combined relevance theory and citation practices to investigate
the links between these two concepts further. He described that based on the
relevance theory, authors intend their citations to be optimally relevant in given
contexts. In his empirical work, he showed a link between the concept of relevance
and citations. From a more general perspective, Heck and Schaer [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] described a
model to bridge bibliometric and retrieval research by using retrieval test
collections3. They showed that these two disciplines share a common basis regarding
data collections and research entities like persons, journals, et cetera - especially
with regards to the desire to rank these entities. These mutual bene ts of IR
test collections and informetric analysis methods could advance both disciplines
if suitable test collections were available.
      </p>
<p>To the best of our knowledge, Altmetrics has not been a mainstream topic
within the BIR workshop series. Based on a literature analysis of the
Bibliometric-enhanced-IR Bibliography
(https://github.com/PhilippMayr/Bibliometric-enhanced-IR_Bibliography), only two papers explicitly used Altmetrics-related
measures to design a study: Bessagnet in 2014 and Jack et al. in 2018. The
reason for this low coverage of Altmetrics-related papers in BIR is unclear, as the
inherent advantages in comparison to classic bibliometric indicators are
apparent. One of the reasons Altmetrics is appealing is the time lag caused by
the peer review and publication process of journal publications: it takes two years
or more until citation data is available for a publication and thus, something can
be said about its perception. The advantage of Altmetrics can, therefore, be a
faster availability of data in contrast to bibliometrics.</p>
      <p>
On the other hand, there is no uniform definition of Altmetrics, and therefore
no consensus on what exactly is measured by Altmetrics. A semantic analysis of
contributions in social media is lacking for the most part, which is a major
issue making the evaluation of Altmetrics counts so difficult. Mentions are mostly
counted based on identifiers such as the DOI. However, it is not possible to mass
evaluate which mentions should be deemed positive and which should be
deemed negative, which means that a "performance paradox" develops. This
problem exists in a similar form in classical bibliometrics and must be considered
as an inherent problem of the use of quantitative metrics [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Haustein et al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]
found that 21.5 % of all scientific publications from 2012 available in the Web of
Science were mentioned in at least one Tweet, while the proportion of publications
mentioned in other social media was less than 5 %. In Tunger et al. [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], the
share of WoS publications with at least one mention on Altmetric.com is already
42 %. It becomes visible that the share of WoS publications referenced in social
media is continuously increasing. Among the scientific disciplines, there are also
substantial variations concerning the coverage at Altmetric.com: publications
from the field of medicine are represented considerably more often than, for
example, publications from the engineering sciences. Thus, the question arises to
what extent the statements of bibliometrics and Altmetrics overlap or correlate.
      </p>
    </sec>
    <sec id="sec-4">
      <title>Data Set Generation: Intersections between arXiv, Web of Science, and Altmetrics Explorer</title>
<p>Since some documents are rated twice or more, we filtered out duplicates before investigating
the DOI coverage of the documents with different relevance levels. As can
be seen, the percentage of DOI coverage increases with higher relevance for
documents rated as relevant. Moreover, the DOI coverage of
marginally (71.0 %) and highly (71.8 %) relevant documents is slightly higher
than that of documents rated non-relevant (70.0 %).</p>
<p>In sum, there are 1228 (out of 8670) documents with two or more ratings
across different topics. In the following, we consider the relevance rating on a
binary scale by treating ratings of 0 as non-relevant and ratings of 1, 2,
and 3 as relevant. 136 (11.1 %) documents are exclusively rated as relevant, 698
(56.8 %) documents are exclusively rated as non-relevant, and the remaining 394
(32.1 %) documents are rated both relevant and non-relevant across different
topics. We therefore have only a small intersection of contradicting relevance
assessments for the same document, under 5 % of the total count of
judged documents (394 out of 8670).
[Table 1 categories: Nuclear &amp; Particle Physics, Fluids &amp; Plasmas, General
Physics, Astronomy &amp; Astrophysics, Applied Physics]</p>
<p>In the next step, we combine the arXiv data with WoS and the Altmetrics
Explorer to obtain statements on the impact of these publications both in
the scientific world and beyond in social media. The matching between arXiv
and WoS is carried out via DOI. For 4061 out of 10,263 ratings, WoS data
is available. Of the documents with DOI that were matched with the WoS,
the publications in arXiv can essentially be assigned to four major categories,
as shown in Table 1. The distribution of relevance ratings by category shows
a slightly different picture, which indicates a small shift and shows that the
category with the largest number of articles is not automatically the category
with the most relevance assessments.</p>
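The DOI-based matching described above amounts to an inner join between the two sources. A minimal sketch of that join, with field names and toy records assumed purely for illustration:

```python
# Minimal sketch of the DOI-based matching between arXiv and WoS records.
# Field names and the toy records are assumed for illustration only.
arxiv = [
    {"arxiv_id": "a1", "doi": "10.1/x", "relevance": 3},
    {"arxiv_id": "a2", "doi": "10.1/y", "relevance": 0},
    {"arxiv_id": "a3", "doi": None, "relevance": 1},  # no DOI -> cannot be matched
]
wos_citations = {"10.1/x": 76, "10.1/z": 12}  # DOI -> citation count

# Inner join on DOI: only documents present in both sources survive,
# mirroring how only a subset of the 10,263 ratings could be matched with WoS.
matched = [
    {**rec, "citations": wos_citations[rec["doi"]]}
    for rec in arxiv
    if rec["doi"] in wos_citations
]
print(len(matched), matched[0]["citations"])  # 1 76
```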
    </sec>
    <sec id="sec-5">
      <title>Results</title>
      <sec id="sec-5-1">
        <title>Relevance and Journal Impact Factor</title>
        <p>RQ1 focuses on the relationship between positive relevance assessments and
the perception of documents. Or in other words: whether a publication with a
positive relevance rating achieves a higher scientific perception or a higher
perception in Altmetrics. The question is, therefore, whether there is a
connection between a high relevance rating and high perception.</p>
<p>If we look at Table 2, we see that the unrated documents, which form the
vast majority, have a lower citation rate than the relevant-rated documents: on
average, about 41 citations per document. The documents with relevance
assessments reach higher citation rates than the group of documents without relevance
assessment. The highest citation rate is achieved by the documents that have
been grouped into the highest relevance level 3: with a citation rate of 76.2,
they achieve a citation rate almost twice as high as documents without a
relevance rating. The group of documents with relevance rating is small compared
to the group without relevance ratings. Nevertheless, from the authors' point of
view, the group size is sufficient to read rough trends from it
using bibliometric methods and to find answers to the research questions. It is,
therefore, not so much the small deviations that are important here, but rather
the more specific trends. (Altmetrics is also the topic of the project UseAltMe,
funding code 16lFl107: On the way from article-level to aggregated indicators:
understanding the mode of action and mechanisms of Altmetrics, https://www.th-koeln.de/en/information-science-and-communication-studies/usealtme_68578.php.)</p>
<p>Table 1 shows the citation rates for the five subject categories, which together
account for 90 % of the arXiv documents, for the documents with and without
relevance assessment. It can be seen that the citation rates for publications with
a relevance assessment are significantly higher for all categories shown than for
publications without a relevance rating in the same category (Pearson = 0.997).
The presentation of the citation rates in Table 2 for the individual groups of
documents with and without a relevance rating goes in the same direction: One
of the trends also lies in the observation that all publications with a relevance
rating have a higher citation count than the group whose documents were not
selected as relevant.</p>
        <p>If there is a correlation, it can be demonstrated at other points: When
looking at the results, it is noticeable that the citation rate changes depending on
the degree of relevance assessment. In the highest level of relevance assessment,
level 3, the citation rate of 76.2 is almost twice as high as for the non-assessed
documents. The citation rate increases continuously from the first assessment
level 0 to level 3. Level 2 is an exception, where the citation rate is lower than in
level 1, but still higher than the citation rate of the non-evaluated documents.</p>
<p>Are there differences in the composition of the groups that would explain the
differences in citation rates described above? The Journal Impact Factor (JIF)
does not show a difference for any of the groups. It differs only by fractions of
a decimal point. The documents without relevance rating have an average JIF
of 4.5; the documents with relevance rating have an average JIF between 4.4
and 4.7. This shows that there is no significant difference in the composition of
the individual groups as to whether they publish more in high- or low-impact
journals. The structure of all groups is the same in terms of average journal
impact. Thus, RQ1 can be answered to the effect that the impact of a journal
has no influence on the decision about the relevance of a document.</p>
      </sec>
      <sec id="sec-5-2">
        <title>Relevance and Citation Rates/Altmetrics Mentions</title>
        <p>If we look at RQ2, we can say that arXiv documents with relevance ratings
achieve a higher perception in terms of citation rate than unrated documents.
This observation cannot be directly transferred to Altmetrics, where there is
no difference in the average number of tweets or news articles. The only
measurable difference refers to the documents with a relevance rating of 3; here an
above-average number of tweets or mentions in patents can be observed. This
effect is the result of a skewed distribution and is presumably driven by two
publications: a publication with 253 tweets and another publication
with 50 recorded mentions in patents. Without these two publications,
there are no significant differences between documents with and without a
relevance rating. Thus, RQ2 can be answered to the effect that documents with a
relevance rating receive a higher number of citations on average but, with the
exception of Mendeley, there is no effect on Altmetrics.</p>
        <p>
An explanation of why we do not see an effect for Altmetrics in the arXiv
data may lie in the year of publication: the majority of the publications were
published between 2003 and 2009. During this time, social media were already
being used actively in society, but not yet in science. This changed only slowly
towards 2008, with the Altmetric Explorer, for example, being founded by Euan
Adie in 2011. It is not known to what extent publications prior to the founding
year of the Altmetric Explorer were retroactively re-indexed and to what extent
this is technically possible at all. This is because, in contrast to scientific journal
publications, communication in social media is fast and can also be deleted before
it has been indexed. These effects have to be taken into account when dealing
with Altmetrics, as well as the fact that the publications originate from several
publication years, so some had more time to generate attention than others. Whether
it is necessary "that older articles are compensated for lower altmetric scores
due to the lower social web use when they were published" [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ] is a question
that is legitimate but not the focus of this publication. Overall, however, it could be
shown that there is a connection between citations and Altmetric counts, as
also shown by Holmberg et al. [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ].
        </p>
      </sec>
      <sec id="sec-5-3">
        <title>Mendeley Readerships and Citation Rates</title>
        <p>
In the literature, the question of whether there is a correlation between
citations of scientific publications in Web of Science, Scopus, or Google Scholar
and Mendeley readerships (RQ3) has often been examined. Li &amp; Thelwall [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]
have investigated whether there is a correlation between citations from the three
mentioned databases and whether there is a connection between the number of
citations and the number of bookmarks of a publication on Mendeley. The result
of this investigation was a perfect correlation between the citation counts from
the three citation databases Web of Science, Scopus, and Google Scholar. This
result is also less surprising because Web of Science and Scopus overlap by about
95 %, and there is also a large overlap between these two databases and Google
Scholar.
        </p>
        <p>
          A correlation between citations from the three mentioned
citation databases and Mendeley was also measurable by Li &amp; Thelwall, but it is much
weaker than the correlation among the citation databases. This is not surprising,
since Mendeley readerships are not scientific citations. Bookmarking of
publications takes place for other reasons than citing a scientific paper. But also, Costas
et al. [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] found out that "Mendeley is the strongest social media source with
similar characteristics to citations in terms of their distribution across fields".
        </p>
<p>The present study is based on about 435,000 arXiv publications. However, it
was not possible to determine citation or Mendeley readership counts for all of
these documents: for 32,081 documents out of the total number, both citation
data and data from the Altmetric Explorer are available. This set contains
1037 documents with a relevance rating. This is where RQ3 comes into play:
Is there evidence of a link between Mendeley readerships and citations in the
arXiv documents with relevance ratings? A Pearson correlation coefficient of 0.83
was determined for the total set of 32,081 documents, which
indicates an existing correlation, even if the value is not perfect. Rather, the
result indicates that Mendeley and Web of Science do not entirely overlap
in terms of the perception of the documents.</p>
<p>The result is not significantly different if one takes the 1037 documents with
relevance rating out of this set; the Pearson coefficient is at about the same level, 0.8 (see
Table 4). It should be noted that the reasons for bookmarking a publication can
differ from the reasons for a citation. There are publications that are roughly
equal in both data sources, for example 10.1103/PhysRevE.67.026126, which
receives 1068 citations and 1192 Mendeley reads. There are also examples of
unequal perception: 10.1088/0067-0049/182/2/543 has 3151 citations but is
bookmarked "only" 405 times on Mendeley, 10.1088/0954-3899/33/1/001
has 3903 citations but is bookmarked only 24 times, and in the opposite direction
10.1142/S0218127410026721 is cited only eight times but is bookmarked 106
times on Mendeley. So outliers can occur in both directions, which can lead to
distortions in the measured correlation.</p>
<p>It becomes interesting if we take individual groups out of the entire set
of 1037 documents with relevance assessments: both for the group of 786
publications, which were singled out as possibly relevant but then rated 0 as
non-relevant, and for the group of 35 publications, which were rated 3 and thus placed
in the group of the most relevant documents, we get an even better correlation:
for the 786 publications rated 0 we get a Pearson correlation of 0.85, and for
the 35 publications rated 3 we get a correlation of 0.89. For the groups rated 1
(Pearson = 0.72) or 2 (Pearson = 0.76), we get a worse correlation value in each
case because, in these two groups, the number of outliers is higher. From our
point of view, the result is to be understood in such a way that a clear relevance
decision filters out the papers that receive roughly the same perception on both
sides, citations and Mendeley. Decisions in the middle of the relevance scale, on
the other hand, filter out papers where the perception may tend more to
one side. Thus, two things can be observed concerning RQ3: there is a link
between citation data and Mendeley readerships, and this becomes all the more visible
if the paper is part of a set of documents that have previously been subjected
to a corresponding relevance assessment.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Discussion and Outlook</title>
      <p>
        The results of our small study on the intersection of relevance assessments within
the iSearch collection and corresponding citation counts show that direct
relevance decisions of a single assessor and indirect decisions of many external
authors citing this work are related. What sounds intuitive and like common
sense is not fully backed by the literature as the connection between citations
and relevance is not undisputed. Ingwersen [
        <xref ref-type="bibr" rid="ref12">12</xref>
          ] explains that "citations are not
necessarily good markers of relevance, because impact and relevance might not
always be overlapping phenomena." While this might be true sometimes, in other
situations, references have been shown to improve retrieval quality as additional
keys to the contents [
        <xref ref-type="bibr" rid="ref7">7</xref>
          ]. One general conclusion from this contradiction is that
citations represent more the general popularity or perception of a document, which
is not the same as a relevance judgment.
      </p>
      <p>
Another thing we have to note about our work is that the differentiation of
direct and indirect relevance decisions is not an established concept in
information theory. While relevance is often described as multidimensional, layered, et
cetera, the terms direct and indirect are suggestions of the authors of this paper.
A concept that is aligned to the different levels and forms of relevance might
be the principle of polyrepresentation [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. Polyrepresentation might be the
common ground where the general popularity of a document measured through
citations and the concrete relevance come together.
      </p>
      <p>
        When looking at JIF, Kacem and Mayr [
        <xref ref-type="bibr" rid="ref13">13</xref>
          ] describe that users are not
influenced by high-impact and core journals while searching. This is in line with our
results, as we cannot measure a significant difference in the JIF of unrated,
non-relevant, or relevant documents. However, we have to keep in mind that judging
on static document lists to generate a test collection might be different from
interactive search sessions, which were the basis of the studies of Kacem and
Mayr. Regarding the connection between citations and Mendeley readerships,
the literature is confirmed. We can clearly reproduce the correlation between
these two entities. If one follows the implications of Altmetrics described at the
beginning, such a result also appears desirable, because this means that
Mendeley data also contain additional information that is not contained in the Web
of Science. This follows the goal of Altmetrics also to provide new information
and not just to be faster bibliometrics. We must conclude that the reasons for
citation and bookmarking are similar but not the same. A bookmark is a
reference to a publication that is believed to be of interest to others. This does not
necessarily imply that you have read the publication yourself.
      </p>
      <p>The impact of our work is that we could show a relation between direct
relevance assessments and indirect relevance signals originating from bibliometric
measures like citations. This relation is visible but not fully explainable. There
seems to be something inherent in relevant documents that lets them gather a
higher number of citations. We are sure that it is not the impact of the
corresponding journal; otherwise, there would be no uncited documents within
Nature. Popularity alone does not seem to explain this effect. Maybe citations in
relation to relevance assessments are a marker for "quality", although we are
aware that this term is highly controversial in the bibliometrics community.</p>
<p>It remains future work to evaluate and investigate the phenomena that
underlie the relationship we have seen. The principle of polyrepresentation
might be an excellent framework to bring together these different factors
originating from relevance theory, bibliometrics, and Altmetrics. Additionally, it
might help to design a retrieval study to follow these open questions further.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Archambault</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Beauchesne</surname>
            ,
            <given-names>O.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Caruso</surname>
          </string-name>
          , J.:
<article-title>Towards a multilingual, comprehensive and open scientific journal ontology</article-title>
          .
          <source>In: Proc. of ISSI 2011</source>
          . pp.
          <volume>66</volume>
–
          <fpage>77</fpage>
          . Durban South Africa (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Borlund</surname>
            ,
            <given-names>P.:</given-names>
          </string-name>
          <article-title>The concept of relevance in IR</article-title>
          .
          <source>Journal of the American Society for Information Science and Technology</source>
          <volume>54</volume>
          (
          <issue>10</issue>
          ),
          <volume>913</volume>
–925 (Aug
          <year>2003</year>
          ). https://doi.org/10.1002/asi.10286
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Breuer</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schaer</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tunger</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          : Relations Between Relevance Assessments,
          <source>Bibliometrics and Altmetrics (Mar</source>
          <year>2020</year>
          ). https://doi.org/10.5281/zenodo.3719285
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Carevic</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schaer</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>On the connection between citation-based and topical relevance ranking: Results of a pretest using isearch</article-title>
          .
          <source>In: Proc. of the First Workshop on Bibliometric-enhanced Information Retrieval co-located with ECIR</source>
          <year>2014</year>
          . pp.
          <volume>37</volume>
–
          <issue>44</issue>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Cole</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>A theory of information need for information retrieval that connects information to knowledge</article-title>
          .
          <source>Journal of the American Society for Information Science and Technology</source>
          <volume>62</volume>
          (
          <issue>7</issue>
          ),
          <fpage>1216</fpage>
          –
          <lpage>1231</lpage>
          (Jul
          <year>2011</year>
          ). https://doi.org/10.1002/asi.21541
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Costas</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zahedi</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wouters</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>The thematic orientation of publications mentioned on social media: Large-scale disciplinary comparison of social media metrics with citations</article-title>
          .
          <source>Aslib Journal of Information Management</source>
          <volume>67</volume>
          (
          <issue>3</issue>
          ),
          <fpage>260</fpage>
          –
          <lpage>288</lpage>
          (May
          <year>2015</year>
          ). https://doi.org/10.1108/AJIM-12-2014-0173
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Dabrowska</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Larsen</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Exploiting citation contexts for physics retrieval</article-title>
          .
          <source>In: Proc. of the Second Workshop on Bibliometric-enhanced Information Retrieval co-located with ECIR</source>
          <year>2015</year>
          . pp.
          <fpage>14</fpage>
          –
          <lpage>21</lpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Haustein</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Costas</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Larivière</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>Characterizing Social Media Metrics of Scholarly Papers: The Effect of Document Properties and Collaboration Patterns</article-title>
          .
          <source>PLOS ONE</source>
          <volume>10</volume>
          (
          <issue>3</issue>
          ), e0120495 (Mar
          <year>2015</year>
          ). https://doi.org/10.1371/journal.pone.0120495
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Heck</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schaer</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Performing Informetric Analysis on Information Retrieval Test Collections: Preliminary Experiments in the Physics Domain</article-title>
          .
          <source>In: Proc. of ISSI 2013</source>
          . vol.
          <volume>2</volume>
          , pp.
          <fpage>1392</fpage>
          –
          <lpage>1400</lpage>
          . Vienna, Austria (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Holbrook</surname>
            ,
            <given-names>J.B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Barr</surname>
            ,
            <given-names>K.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brown</surname>
            ,
            <given-names>K.W.</given-names>
          </string-name>
          :
          <article-title>We need negative metrics too</article-title>
          .
          <source>Nature</source>
          <volume>497</volume>
          (
          <issue>7450</issue>
          ),
          <fpage>439</fpage>
          –
          <lpage>439</lpage>
          (May
          <year>2013</year>
          ). https://doi.org/10.1038/497439a
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Holmberg</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bowman</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Didegah</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          , Lehtimaki, J.:
          <article-title>The Relationship Between Institutional Factors, Citation and Altmetric Counts of Publications from Finnish Universities</article-title>
          .
          <source>Journal of Altmetrics</source>
          <volume>2</volume>
          (
          <issue>1</issue>
          ), 5 (Aug
          <year>2019</year>
          ). https://doi.org/10.29024/joa.20
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Ingwersen</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Bibliometrics/Scientometrics and IR: A methodological bridge through visualization</article-title>
          (Jan
          <year>2012</year>
          ), http://www.promise-noe.eu/documents/10156/028a48d8-4ba8-463c-acbc-db75db67ea4d
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Kacem</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mayr</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Users are not influenced by high impact and core journals while searching</article-title>
          .
          <source>In: Proc. of the 7th International Workshop on Bibliometric-enhanced Information Retrieval co-located with ECIR</source>
          <year>2018</year>
          . pp.
          <fpage>63</fpage>
          –
          <lpage>75</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Thelwall</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>F1000, Mendeley and traditional bibliometric indicators</article-title>
          .
          <source>In: Proc. of the 17th International Conference on Science and Technology Indicators</source>
          . pp.
          <fpage>541</fpage>
          –
          <lpage>551</lpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Lykke</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Larsen</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lund</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ingwersen</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Developing a test collection for the evaluation of integrated search</article-title>
          .
          <source>In: Advances in Information Retrieval, 32nd European Conference on IR Research, ECIR 2010. Proceedings</source>
          . pp.
          <fpage>627</fpage>
          –
          <lpage>630</lpage>
          (
          <year>2010</year>
          ). https://doi.org/10.1007/978-3-642-12275-0_63
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Skov</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Larsen</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ingwersen</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Inter and intra-document contexts applied in polyrepresentation for best match IR</article-title>
          .
          <source>Information Processing &amp; Management</source>
          <volume>44</volume>
          (
          <issue>5</issue>
          ),
          <fpage>1673</fpage>
          –
          <lpage>1683</lpage>
          (Sep
          <year>2008</year>
          ). https://doi.org/10.1016/j.ipm.2008.05.006
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Thelwall</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Haustein</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Larivière</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sugimoto</surname>
            ,
            <given-names>C.R.</given-names>
          </string-name>
          :
          <article-title>Do Altmetrics Work? Twitter and Ten Other Social Web Services</article-title>
          .
          <source>PLoS ONE</source>
          <volume>8</volume>
          (
          <issue>5</issue>
          ), e64841 (May
          <year>2013</year>
          ). https://doi.org/10.1371/journal.pone.0064841
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Tunger</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Meier</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hartmann</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Altmetrics Feasibility Study</article-title>
          .
          <source>Tech. Rep. BMBF 421-47025-3/2</source>
          , Forschungszentrum Jülich (
          <year>2017</year>
          ), http://hdl.handle.net/2128/19648
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Voorhees</surname>
            ,
            <given-names>E.M.</given-names>
          </string-name>
          :
          <article-title>TREC: Continuing information retrieval's tradition of experimentation</article-title>
          .
          <source>Communications of the ACM</source>
          <volume>50</volume>
          (
          <issue>11</issue>
          ),
          <fpage>51</fpage>
          (Nov
          <year>2007</year>
          ). https://doi.org/10.1145/1297797.1297822
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>White</surname>
            ,
            <given-names>H.D.</given-names>
          </string-name>
          :
          <article-title>Relevance theory and citations</article-title>
          .
          <source>Journal of Pragmatics</source>
          <volume>43</volume>
          (
          <issue>14</issue>
          ),
          <fpage>3345</fpage>
          –
          <lpage>3361</lpage>
          (Nov
          <year>2011</year>
          ). https://doi.org/10.1016/j.pragma.2011.07.005
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>