Evaluating the Effectiveness of Research Grants with Journal Bibliometrics

Marcin Oleksy1,∗ , Przemysław Kazienko1 and Maciej Dzieżyc1
1
Wroclaw University of Science and Technology, Department of Artificial Intelligence, Wyb. Wyspiańskiego 27, 50-370 Wrocław, Poland


Abstract
We present and exploit WEIG – the Wroclaw Effectiveness Indicator for Grants, a scientometric meta-measure that aggregates the quality of scientific papers published within a given project, normalized by its budget. Variations of WEIG based on general journal quality indicators – Impact Factor (IF), Article Influence Score (AIS), and Polish Ministerial Scoring of Journals (PMSJ) – are applied to analyses of projects from two public agencies: the European Research Council (ERC) and the Polish National Science Centre (NSC). The studies on grants that ended between 2014 and 2021 revealed that NSC projects are more effective than ERC ones. The efficiency of science branches and scientific panels over the years is investigated. Some limitations of the proposed approach, as well as observed phenomena, are discussed.

Keywords
research efficiency, grant effectiveness, funding acknowledgement analysis, WEIG, IF, AIS, Polish Ministerial Scoring of Journals, European Research Council, National Science Centre




                                1. Introduction
The achievements of research activities and their results, such as grants or scientific papers, can be evaluated and compared by means of quantitative bibliometric measures. However, there is no commonly agreed measure for the quality of scientific output or the effectiveness of research grants. Still, we expect a simple and possibly interpretable metric to compare research centres, journals, conferences, or funding schemes. The Impact Factor (IF) appears to fulfill such a function for scientific journals, and the h-index for individual scientists. On the other hand, we should be aware that quantitative scientometrics applied to the evaluation of individual scientists, e.g., for promotion purposes, remains questionable. This issue was addressed in the San Francisco Declaration on Research Assessment [1] and the Leiden Manifesto [2].
   Supporting fundamental research by means of public funds is considered to have a positive impact on a country’s economy [3]. Therefore, public financing of basic studies is necessary in today’s world and “no nation can ‘free-ride’ on the world scientific system”. This paper concentrates on fundamental research, for which the primary expected outcomes are scientific papers.

BIR 2023: 13th International Workshop on Bibliometric-enhanced Information Retrieval at ECIR 2023, April 2, 2023
∗ Corresponding author.
Email: marcin.oleksy@pwr.edu.pl (M. Oleksy); kazienko@pwr.edu.pl (P. Kazienko); maciej.dziezyc@gmail.com (M. Dzieżyc)
Web: https://kazienko.eu/en (P. Kazienko)
ORCID: 0000-0001-7740-5557 (M. Oleksy); 0000-0001-5868-356X (P. Kazienko); 0000-0001-5461-6685 (M. Dzieżyc)
© 2023 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073







   To support economic progress and the competitiveness of their economies, most developed countries maintain dedicated public agencies to boost basic scientific studies. These include NSF and NIH in the USA, EPSRC in the UK, DFG in Germany, ANR in France, NSFC in China, NRF in Korea, JSPS and JST in Japan, as well as some international initiatives like the ERC (European Research Council), which is part of the European Union’s flagship Research and Innovation programme (Horizon Europe), a component of the EU’s long-term Multiannual Financial Framework. These public bodies support hundreds and thousands of grants, so they want to monitor project implementation and validate the results. Therefore, we present a measure that compares the scientific output and the effectiveness of projects on a massive scale.
   For that purpose, the Wroclaw Effectiveness Indicator for Grants (WEIG), a meta-measure, was initially proposed in [4] along with some preliminary analyses of Polish National Science Centre (NSC) grants. Next, it was further developed and applied to compare grants funded by the European Research Council (ERC) and NSC that ended by 2020 [5]. Here, we present further analyses of the effectiveness of research grants: we consider an additional journal metric, the Polish Ministerial Scoring of Journals (PMSJ); the year 2021 is added; and publications without IF are investigated.


2. Related Work
2.1. Funding Acknowledgement Analysis
The Web of Science (currently owned by Clarivate Analytics) added two features to the indexed scientific papers in 2008: the funding agency and the grant number. These fields were analyzed as early as 2011, showing, among other things, an important role of non-governmental agencies in Germany and Japan [6]. This line of work came to be called “funding acknowledgement analysis”.
   John Rigby showed a weak correlation between the number of declared funding sources and the impact of publications measured by the number of received citations [7]. This observation was based on 3,596 articles from the Journal of Biological Chemistry published in 2009. He also found that Web of Science has a clear advantage over the MEDLINE (PubMed) database when it comes to funding sources: MEDLINE covers only a few sources of funding, mainly from the USA [8]. He further suggested that the funding text can be used for mapping research outputs and the priorities of funding bodies.
   In 2017, Nicola Grassano et al. evaluated the recall and precision of funding information provided by Web of Science and PubMed. Their study was based on manually annotated data about funding sources from the full texts of 7,510 papers related to UK cancer research in 2011 [9]. They obtained high values for WoS: 93% recall and 94% precision. In contrast, recall for PubMed was 42% and precision 96%. Also, based on an e-mail survey sent to the authors of the articles, they concluded that only 3% of papers did not reveal all funding sources.
   Belén Álvarez-Bornstein et al. published a paper estimating the completeness and accuracy of Web of Science funding data [10]. In 87.8% of the articles that included funding text data, both the funding agency and grant number fields were correctly extracted. However, as the authors pointed out, there are problems with the precision of such data: (1) different funding bodies can be recognised as one body, (2) the country of the funding body is often missing, and (3) different grants from the same funding body can be recognised as one grant.
   In 2020, Liu et al. reported improvements in funding information in Web of Science (WoS) [11]. The authors suggested that the funding acknowledgement data in WoS should primarily be used for the analysis of papers published in English. Also, a case study published in the same year revealed that funding information in WoS is more reliable than in Scopus [12].
   In 2021, Álvarez-Bornstein and Montesi, in their literature review on funding acknowledgement analysis, found a lack of data normalisation, consistency, and completeness, especially for the PubMed and Scopus databases [13]. Although Web of Science is thought to possess the best funding acknowledgement data, it still has many problems related to papers in the Social Sciences and Humanities.

2.2. Research Assessment
The Declaration on Research Assessment (DORA) was announced at the Annual Meeting of the American Society for Cell Biology in San Francisco in 2012 [1]. It contains guidelines for funding agencies, institutions, publishers, organisations that supply metrics, and researchers:

   1. Be open and transparent by providing data and methods used to calculate all metrics.
   2. Provide the data under a licence that allows unrestricted reuse, and provide computational
      access to data, where possible.
   3. Be clear that inappropriate manipulation of metrics will not be tolerated; be explicit about
      what constitutes inappropriate manipulation and what measures will be taken to combat
      this.
   4. Account for the variation in article types (e.g., reviews versus research articles), and in
      different subject areas when metrics are used, aggregated, or compared.

  The Polish National Science Centre (NSC) signed DORA in 2018 [14] and the ERC in 2021 [15].
  In 2015, the Leiden Manifesto for research metrics was announced in Nature [2]. It covers ten principles for research evaluation intended to prevent the abuse of bibliometrics.

2.3. PBRF: Performance-based Research Funding
According to [16, 17], research funding can be treated as performance-based if: (1) research is evaluated ex-post; (2) the output or impact of scientific studies is estimated, and the funding (or part of it) is based on such an assessment; (3) funding is implemented at the national or regional level.
   Implementation of performance-based research funding in Australia resulted in a greater number of scientific papers with a simultaneous drop in the quality of research [18]. However, this finding was refuted in 2017 [19].
   A positive effect of PBRF application on research quantity was concluded from studies on data from 31 countries over the 1996-2016 period [20]. The author relied primarily on bibliometric evaluation. Zacharewicz et al. discovered in [21] that 12 out of 28 EU countries used no PBRF, 3 had limited PBRF, 11 applied a quantitative, bibliometric assessment, and 5 countries used a peer review process.




3. Scientometric Measures for Journals and Grants
3.1. Journal Impact Factor, Derivatives, and Similar Measures
Impact Factor (IF) is a measure for a scientific journal, defined as the average number of citations normalized by the number of articles published in recent years. For a given year $y$, $IF_y$ is the number of citations received in year $y$ by papers published in the two preceding years ($y-1$ and $y-2$), divided by the number of citable papers published in years $y-1$ and $y-2$:

$$IF_y = \frac{Citations_{y-1} + Citations_{y-2}}{CitableItems_{y-1} + CitableItems_{y-2}} \qquad (1)$$
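
To make Eq. (1) concrete, the following minimal Python sketch computes $IF_y$ from raw counts; the numbers are illustrative only and do not refer to any particular journal.

```python
def impact_factor(citations_prev1, citations_prev2, citable_prev1, citable_prev2):
    """IF for year y (Eq. 1): citations received in year y to papers published
    in years y-1 and y-2, divided by the citable items from those two years."""
    return (citations_prev1 + citations_prev2) / (citable_prev1 + citable_prev2)

# Illustrative counts: 1200 and 900 citations received in year y to papers
# from y-1 and y-2; 300 and 250 citable items published in those two years.
print(impact_factor(1200, 900, 300, 250))  # 2100 / 550 = 3.818...
```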
   The Impact Factor is often criticised as a measure of journal quality. IF does not account for the quality of citations, includes unlimited self-citations, is incomparable between domains, and concentrates on English-language journals. Moreover, its simple method of calculation incentivises publishers to manipulate the factor by promoting self-citations, preferring review articles, and decreasing the number of Citable Items [22, 23]. It has also been criticised for being unscientific [24].
   The Average Journal Impact Factor Percentile, here referred to as IF%, was introduced in 2016. It is a non-linear transformation of IF, which takes the percentile positions of a journal’s IF in each of its scientific categories and averages them. IF% was shown to have a smaller variation than IF and a distribution close to normal [25].
   Article Influence Score (AIS) is defined as the Eigenfactor (an idea similar to Google PageRank) divided by the number of citable articles. AIS is comparable to the 5-year IF [26].

3.2. Polish Ministerial Scoring of Journals (PMSJ)
Universities and public scientific bodies in Poland are evaluated by the Ministry of Education and Science separately in 47 scientific disciplines, grouped into branches of science: humanities (7 disciplines), engineering and technology (9), medical and health sciences (4), agricultural sciences (5), social sciences (11), natural sciences (7), theology (1), and the arts (3) [27]. For the evaluation, scientific entities report, among other things, the papers published by authors affiliated with their organisation. A Polish Ministerial Scoring of Journals (PMSJ) value is assigned to each paper according to the list announced by the Minister of Education and Science [28]. Each journal is assigned one of the scores 200 (best), 140, 100, 70, 40, or 20 (worst), as well as a list of disciplines to which the journal is relevant. In principle, the scoring is based on the journal’s IF; however, this is not an absolute rule. Since PMSJ is not a linear transformation of IF, we cannot claim that the sum of the IFs of two journals with PMSJ=100 is close to the IF of one journal with PMSJ=200. To some extent, PMSJ may be considered similar to IF%, but with less granularity and some manual corrections.
   The results of the evaluation for the years 2017-2021 are published in [29]; however, the entire process has not yet been finished due to appeals against the decisions.
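
As a minimal illustration of how PMSJ points are summed over a project’s papers, the sketch below uses a hypothetical journal-to-score mapping; the actual assignments come from the ministerial list [28]. It also makes the non-linearity caveat concrete.

```python
# Hypothetical mapping; real scores come from the ministerial list [28].
PMSJ = {"Journal A": 200, "Journal B": 100, "Journal C": 40}

# Journals of a project's papers (illustrative).
papers = ["Journal B", "Journal B", "Journal C"]
print(sum(PMSJ[j] for j in papers))  # 240

# Caveat: the two 100-point papers sum to 200 points, yet, since PMSJ is a
# non-linear function of IF, this is not equivalent in IF terms to one
# paper in a 200-point journal.
```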

3.3. WEIG - Wroclaw Effectiveness Indicator for Grants
The Wroclaw Effectiveness Indicator for Grants, $WEIG(p)$, for project $p$ is a meta-measure defined as a calculable outcome $O(p)$ of $p$ divided by $p$’s funding $F(p)$, here expressed in millions of euros:



Table 1
Example of two projects selected from panel PE6 within Physical Sciences and Engineering, which
terminated in 2017, together with their respective metrics
    Agency                           NSC        ERC
    Grant id          2013/09/B/ST6/02317     278212
    Budget [€]a                    154579    1638175
    #papers                             25         33
    ∑ IF                            46.266    184.708
    WEIG-IF                       327.841     112.752
    ∑ AIS                           13.931     95.960
    WEIG-AIS                         98.72     58.577
    ∑ PMSJ                           2150       4870
    WEIG-PMSJ                   15234.922    2972.820

a) 1 EUR = 4.7 PLN



$$WEIG(p) = \frac{O(p)}{F(p)} \qquad (2)$$
   Assume that project $p$ has a set $A_p$ of related scientific articles $a$, which result from $p$ and acknowledge it. We can use a bibliometric quality measure $M(a)$ for each such paper $a$. By aggregating $M(a)$ over all papers resulting from project $p$, we obtain the overall output quality of $p$: $O(p) = \sum_{a \in A_p} M(a)$. Then, $WEIG(p)$ is:

$$WEIG(p) = \frac{\sum_{a \in A_p} M(a)}{F(p)} \qquad (3)$$
   In this study, we analyzed three measures as $M(a)$: IF, AIS (Sec. 3.1), and PMSJ (Sec. 3.2); however, any other measure can also be considered, e.g., IF% or the number of $a$’s citations. As a result, we tested WEIG-IF, WEIG-AIS, and WEIG-PMSJ, respectively.
   WEIG can also be generalised to a set of projects $P$, i.e., all grants from a given scientific discipline (panel) or year:

$$WEIG(P) = \frac{O(P)}{F(P)} = \frac{\sum_{p \in P} O(p)}{\sum_{p \in P} F(p)} = \frac{\sum_{p \in P} \sum_{a \in A_p} M(a)}{\sum_{p \in P} F(p)} \qquad (4)$$
where $O(P)$ is the aggregated outcome (sum) over all projects $p \in P$, and $F(P)$ is their total funding, e.g., in millions of euros.
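
A minimal Python sketch of Eqs. (3) and (4) follows; the data structures and field names are our own illustration, not the actual processing code, and $M(a)$ is represented by a per-paper journal metric such as IF.

```python
def weig(papers, funding_meur, metric):
    """WEIG(p), Eq. (3): sum of M(a) over the papers acknowledging project p,
    divided by the project's funding F(p) in millions of euros."""
    return sum(metric(a) for a in papers) / funding_meur

def weig_set(projects, metric):
    """WEIG(P), Eq. (4): total outcome of all projects over their total funding."""
    outcome = sum(metric(a) for p in projects for a in p["papers"])
    funding = sum(p["funding_meur"] for p in projects)
    return outcome / funding

# Illustrative projects; each paper carries the journal metric M(a) (here IF).
projects = [
    {"papers": [{"if": 3.2}, {"if": 7.1}], "funding_meur": 0.155},
    {"papers": [{"if": 5.0}],              "funding_meur": 1.638},
]
m = lambda a: a["if"]
print(weig(projects[0]["papers"], projects[0]["funding_meur"], m))  # per project
print(weig_set(projects, m))                                        # aggregated
```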
   WEIG was initially proposed in [4] along with some preliminary analyses of NSC grants and then investigated in depth for both considered agencies in [5]. Here, we extend it with additional analyses, a new year (2021), and a new measure (PMSJ).




4. Effectiveness of Research Grants Funded by ERC and NSC
4.1. Research Setup
A detailed description of all processing steps can be found in [5]. Overall, the process consists of: (1) collecting data about scientific grants funded by ERC and NSC from their online databases; (2) identifying the outcome papers by their grant ids via the Web of Science web service; (3) for each such paper published in a journal, gathering a bibliometric measure from the Journal Citation Reports (IF, AIS) or from the online list published by the Polish Ministry of Education and Science [28]. Having the bibliometric measures (IF, AIS, PMSJ) for each paper assigned to each grant, the appropriate WEIG values are computed, including relative ones (Sec. 3.3). Aggregated WEIG values are calculated per scientific panel, year, and funding scheme.
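
The aggregation step can be sketched as follows; grant records are assumed to be already collected and matched with their papers, and all names here are hypothetical, not the pipeline’s actual interface.

```python
def aggregate_weig(grants, paper_metric):
    """Group grants by (panel, end_year) and compute WEIG(P) per group (Eq. 4)."""
    totals = {}  # (panel, end_year) -> (summed outcome, summed funding)
    for g in grants:
        key = (g["panel"], g["end_year"])
        outcome, funding = totals.get(key, (0.0, 0.0))
        outcome += sum(paper_metric[p] for p in g["papers"])
        funding += g["funding_meur"]
        totals[key] = (outcome, funding)
    return {key: o / f for key, (o, f) in totals.items()}

# Example: two PE6 grants ending in 2017; metric values keyed by paper id.
metric = {"w1": 3.2, "w2": 7.1, "w3": 5.0}
grants = [
    {"panel": "PE6", "end_year": 2017, "papers": ["w1", "w2"], "funding_meur": 0.155},
    {"panel": "PE6", "end_year": 2017, "papers": ["w3"], "funding_meur": 1.638},
]
print(aggregate_weig(grants, metric))  # {('PE6', 2017): 15.3 / 1.793 = 8.533...}
```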
   The project year was defined as the year in which a given grant ended. While calculating the WEIG values, we considered all of a project’s identified publications registered by the data collection date (April 2022). We believe that a reliable comparison between agencies or panels should be carried out only between grants completed in the same year.
   The year 2014 was treated as a reliable starting point for our studies, since relatively few grants of either agency ended before that year. Their funding schemes yielded the first mature results (many finished projects) only in 2014.




Figure 1: General effectiveness for funding agencies over time: WEIG-IF (a), WEIG-AIS (b), and WEIG-PMSJ (c), for the branches of science Physical Sciences and Engineering - PE (green), Life Sciences - LS (red), and Social Sciences and Humanities - SH (blue), separately for NSC projects (dotted lines) and ERC projects (solid lines). Note that the X-axis denotes the last year of the grants.




4.2. Effectiveness of Science Branches
Both financing agencies distinguish three corresponding science branches (see [5]):
    • Social Sciences and Humanities (SH in ERC = HS in NSC),
    • Physical Sciences and Engineering (PE in ERC = ST in NSC),
    • Life Sciences (LS in ERC = NZ in NSC).
   The WEIG values for the science branches are relatively stable regardless of the measure taken (see Fig. 1). However, we can observe a small general drop over time for Life Sciences (LS), both for NSC and ERC projects. A sudden drop is characteristic of almost all branches and is related to the COVID-19 pandemic: the loss of effectiveness starting in 2020 can be explained by the requested project extensions. A slight increase for Physical Sciences and Engineering projects funded by NSC may be related to the growing efficiency of Production and Processes Engineering and of Astronomy and Space Science grants. Interestingly, there is one significant difference between WEIG-PMSJ and the other metrics: Social Sciences and Humanities (SS&H) effectiveness does not differ from the other branches as significantly as in the case of WEIG-IF or WEIG-AIS. On the other hand, a very large part of SS&H scientific papers is published in periodicals with no IF (see Fig. 3). Overall, IF may not be an adequate metric to quantify output in this science branch, and WEIG-PMSJ may be complementary to the other presented metrics.

4.3. Effectiveness of Scientific Panels and Funding Schemes
The values of WEIG-IF, WEIG-AIS, WEIG-PMSJ, and funding for the projects that ended between 2014 and 2021 are presented for Physical Sciences and Engineering (PE) as well as Life Sciences (LS), broken down into individual panels (Fig. 2). Social Sciences and Humanities (SH) were disregarded, as there is no correspondence between most of the NSC and ERC panels in this branch (see [5]). There is a similarity in the effectiveness of the respective NSC and ERC panels. NSC grants are overall more effective than ERC ones; they are also characterized by significantly smaller funding. Universe Sciences (PE9), both for NSC and ERC, provides the highest overall effectiveness. At the same time, Mathematics (PE1) reveals a big difference between WEIG-IF and the WEIG variants based on the other measures. Computer Science and Informatics (CS&I) and Systems and Communication Engineering (SCE) grants, both for NSC and ERC, appear to be less effective in comparison to other Physical Sciences and Engineering panels. This may be connected with the specificity of scientific activity in these fields: a significant part of the articles for CS&I and SCE have been published in journals with no IF (see Fig. 3). In fact, these are very often conference proceedings.
   The WEIG measure can indicate some important differences between various types of projects. Undoubtedly, larger projects allocate relatively more resources to infrastructure development, etc., in comparison with smaller projects. This translates into the effectiveness of research grants measured with journal bibliometrics. The comparison of the effectiveness of small projects for young scientists against larger projects for advanced researchers is telling in this matter (see Fig. 4). Both PRELUDIUM and Starting Grant (StG) are more effective than the bigger projects (OPUS and Advanced Grant (AdG), respectively) in terms of the WEIG measure. However, smaller projects have much greater restrictions on financing the purchase or manufacture of scientific and research equipment.



Figure 2: (a-b) WEIG-IF, (c-d) WEIG-AIS, (e-f) WEIG-PMSJ, and (g-h) funding in millions of euros (1 EUR = 4.3 PLN), for all projects that ended in 2014-2021. Only panels from Physical Sciences and Engineering (PE) and Life Sciences (LS) are shown, separately for NSC (a, c, e) and ERC projects (b, d, f). The data refers to grants from all funding schemes and years. Panels with green bars are discussed in Sec. 4.3.




Figure 3: The percentage of publications with no IF among the outcomes of the NSC (green) and ERC (blue) projects assigned to the scientific panels (X-axis).




Figure 4: WEIG-PMSJ and funding (log scale) for funding schemes separately for NSC (PRELUDIUM
and OPUS) and ERC (Starting Grant (StG) and Advanced Grant (AdG)) projects.


5. Discussion: Other WEIG Variations
We are aware of the limitations and problems associated with the use of journal metrics, e.g., the IF measure [30]. First of all, a journal impact factor cannot be simply translated into individual achievements, e.g., the actual citations of individual articles. Second, IF is affected by various factors such as the research field (e.g., favouring fields whose literature rapidly becomes obsolete) or the journal and article type (e.g., review articles are heavily cited). Some limitations are related to the database, e.g., its coverage. It is therefore necessary to take the wider context into consideration while analysing and interpreting the results. It is difficult or even impossible to include all relevant factors in one measure (e.g., different kinds of grant results, the characteristics of the discipline, etc.). However, some of them can be incorporated (e.g., exchange rate fluctuations, costs of living, etc.). Simple experiments, such as adding a cost-of-living index (CLI) to the formula, may be a way to introduce such determinants into the WEIG measure:

$$WEIG(p) = \frac{O(p)}{F(p) \cdot CLI} \qquad (5)$$
   In this case, one would have to consider, e.g., that the CLI varies depending on the place of living and the time of estimation. Referring to the example in Table 1, after assuming adequate CLI values for the UK and Poland, the gap between the presented projects would decrease: WEIG-IF (CLI) for grant 2013/09/B/ST6/02317 would be 12490.768 instead of 327.841, and WEIG-IF (CLI) for grant 278212 would be 7836.285 instead of 112.752. Thus, the NSC-funded grant’s score would amount to 159.40% of the ERC grant’s score, instead of 290.76%. Such experiments could nevertheless be a step toward improving the WEIG measure.
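
The adjustment of Eq. (5) can be expressed as the following sketch; the CLI values below are placeholders chosen for illustration only and do not reproduce the numbers reported above.

```python
def weig_cli(outcome, funding_meur, cli):
    """CLI-adjusted WEIG, Eq. (5): outcome over funding scaled by a
    cost-of-living index for the place and time of the project."""
    return outcome / (funding_meur * cli)

# Placeholder CLI values (not those used in the paper); the adjustment
# rescales each project's funding by local costs before computing
# effectiveness, using the Table 1 outcomes and budgets in millions of EUR.
print(weig_cli(46.266, 0.155, 0.40))   # NSC grant, assumed CLI for Poland
print(weig_cli(184.708, 1.638, 0.70))  # ERC grant, assumed CLI for the UK
```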


6. Conclusions and Future Work
There are three main findings derived from our analyses of the WEIG values for grants funded by two public agencies, the European Research Council (ERC) and the Polish National Science Centre (NSC):
   1. Polish NSC grants are generally more effective than those supported by the European Union via ERC (about twice as effective for physical sciences and engineering as well as for humanities, ca. 50% more for life sciences). The difference is particularly noticeable for WEIG-PMSJ: three times greater;
   2. WEIG calculations using different measures provide similar results, i.e., they reveal the same phenomena. However, they are complementary in some aspects;
   3. There is a similarity in the effectiveness of the respective NSC and ERC panels for Physical Sciences and Engineering (PE) and Life Sciences (LS). This is connected with the specificity of scientific activity, including publishing in conference proceedings.
   In many cases, it is not possible to analyze all the data in detail. WEIG, in combination with other project information (such as scientific discipline, subject matter, implementing entity, or project size), can be used to filter information on ongoing grants. This can apply to searching for scientists to collaborate with, identifying promising research topics, or policy-making in the field of scientific research and technological development. Thus, it could be useful in tackling information overload in various contexts.
   Future work will focus on further development and analysis of derived WEIG measures, e.g., starting from the one suggested in Sec. 5. Along with the WEIG-based measures already tested in this paper, they would provide more comprehensive insights into research projects and could be used to minimize the impact of selected factors, such as local salary levels.


Acknowledgments
The authors would like to thank Web of Science Group, Clarivate Analytics for giving consent
for usage and aggregation of data acquired from their databases.



This work was financed by (1) the National Science Centre, Poland, project no.
2021/41/B/ST6/04471; (2) the Polish Ministry of Education and Science, CLARIN-PL; (3) the
European Regional Development Fund as a part of the 2014-2020 Smart Growth Operational
Programme, CLARIN – Common Language Resources and Technology Infrastructure, project
no. POIR.04.02.00-00C002/19; (4) the statutory funds of the Department of Artificial Intelligence,
Wroclaw University of Science and Technology; (5) the Polish Ministry of Education and Science
within the programme “International Projects Co-Funded”; (6) the European Union under the
Horizon Europe, grant no. 101086321 (OMINO). Views and opinions expressed are however
those of the author(s) only and do not necessarily reflect those of the European Union or the
European Research Executive Agency. Neither the European Union nor European Research
Executive Agency can be held responsible for them.


References
 [1] The American Society for Cell Biology, Read the Declaration – DORA, 2012. URL: https://sfdora.org/read/.
 [2] D. Hicks, P. Wouters, L. Waltman, S. de Rijcke, I. Rafols, Bibliometrics: The Leiden Manifesto for research metrics, Nature 520 (2015) 429–431. doi:10.1038/520429a.
 [3] A. J. Salter, B. R. Martin, The economic benefits of publicly funded basic research: a critical review, Research Policy 30 (2001) 509–532.
 [4] M. Dziezyc, P. Kazienko, Jaka jest efektywność projektów badawczych [What is the effectiveness of research projects?], Forum Akademickie 07-08 (2018). URL: https://prenumeruj.forumakademickie.pl/fa/2018/07-08/jaka-jest-efektywnosc-projektow-badawczych/.
 [5] M. Dzieżyc, P. Kazienko, Effectiveness of research grants funded by European Research Council and Polish National Science Centre, Journal of Informetrics 16 (2022) 101243. doi:10.1016/j.joi.2021.101243.
 [6] J. Wang, P. Shapira, Funding acknowledgement analysis: an enhanced tool to investigate
     research sponsorship impacts: the case of nanotechnology, Scientometrics 87 (2011)
     563–586. doi:10.1007/s11192-011-0362-5 .
 [7] J. Rigby, Looking for the impact of peer review: does count of funding acknowledgements
     really predict research impact?, Scientometrics 94 (2013) 57–73.
 [8] PubMed, Grant number information found in the GR field in MEDLINE/PubMed, 2021. URL: https://www.nlm.nih.gov/bsd/grant_acronym.html.
 [9] N. Grassano, D. Rotolo, J. Hutton, F. Lang, M. M. Hopkins, Funding data from publication acknowledgments: Coverage, uses, and limitations, Journal of the Association for Information Science and Technology 68 (2017) 999–1017. doi:10.1002/asi.23737.
[10] B. Álvarez-Bornstein, F. Morillo, M. Bordons, Funding acknowledgments in the Web of Science: completeness and accuracy of collected data, Scientometrics 112 (2017) 1793–1812.
[11] W. Liu, L. Tang, G. Hu, Funding information in web of science: An updated overview,
     Scientometrics 122 (2020) 1509–1524.
[12] W. Liu, Accuracy of funding information in scopus: a comparative case study, Scientomet-
     rics 124 (2020) 803–811.
[13] B. Álvarez-Bornstein, M. Montesi, Funding acknowledgements in scientific publications: A literature review, Research Evaluation (2021). doi:10.1093/reseval/rvaa038.



[14] National Science Centre, National Science Centre signs the DORA declaration, 2018. URL: https://www.ncn.gov.pl/aktualnosci/2018-10-23-ncn-sygnatariuszem-dora?language=en.
[15] European Research Council, ERC plans for 2022 announced, 2021. URL: https://erc.europa.eu/news/erc-2022-work-programme.
[16] D. Hicks, Performance-based university research funding systems, Research Policy 41
     (2012) 251 – 261.
[17] K. Jonkers, T. Zacharewicz, et al., Research performance based funding systems: A
     comparative assessment, Publications Office of the European Union, Luxembourg (2016).
[18] L. Butler, Explaining Australia’s increased share of ISI publications—the effects of a funding formula based on publication counts, Research Policy 32 (2003) 143–155.
[19] P. van den Besselaar, U. Heyman, U. Sandström, Perverse effects of output-based research funding? Butler’s Australian case revisited, Journal of Informetrics 11 (2017) 905–918.
[20] D. Checchi, M. Malgarini, S. Sarlo, Do performance-based research funding systems affect
     research production and impact?, Higher Education Quarterly 73 (2019) 45–69.
[21] T. Zacharewicz, B. Lepori, E. Reale, K. Jonkers, Performance-based research funding in EU
     Member States—a comparative assessment, Science and Public Policy 46 (2018) 105–115.
[22] M. E. Falagas, V. D. Kouranos, R. Arencibia-Jorge, D. E. Karageorgopoulos, Comparison of SCImago journal rank indicator with journal impact factor, The FASEB Journal 22 (2008) 2623–2628. doi:10.1096/fj.08-107938. PMID: 18408168.
[23] S. Ramin, A. S. Shirazi, Comparison between impact factor, SCImago journal rank indicator and Eigenfactor score of nuclear medicine journals, Nuclear Medicine Review 15 (2012) 132–136.
[24] K. Moustafa, The disaster of the impact factor, Science and Engineering Ethics 21 (2015) 139–142. doi:10.1007/s11948-014-9517-0.
[25] L. Yu, H. Yu, Does the average JIF percentile make a difference?, Scientometrics 109 (2016) 1979–1987. doi:10.1007/s11192-016-2156-2.
[26] C. Bergstrom, Eigenfactor: Measuring the value and prestige of scholarly journals, College
     & Research Libraries News 68 (2007) 314–316. doi:10.5860/crln.68.5.7804 .
[27] Rozporządzenie Ministra Edukacji i Nauki z dnia 11 października 2022 r. w sprawie dziedzin nauki i dyscyplin naukowych oraz dyscyplin artystycznych [Regulation of the Minister of Education and Science of 11 October 2022 on fields of science, scientific disciplines, and artistic disciplines], 2022. URL: https://isap.sejm.gov.pl/isap.nsf/DocDetails.xsp?id=WDU20220002202.
[28] Nowy, rozszerzony wykaz czasopism naukowych i recenzowanych materiałów z konferencji międzynarodowych [New, extended list of scientific journals and peer-reviewed proceedings of international conferences], 2021. URL: https://www.gov.pl/web/edukacja-i-nauka/nowy-rozszerzony-wykaz-czasopism-naukowych-i-recenzowanych-materialow-z-konferencji-miedzynarodowych.
[29] Wyniki ewaluacji działalności naukowej za lata 2017-2021 [Results of the evaluation of scientific activity for the years 2017-2021], 2022. URL: https://www.gov.pl/web/edukacja-i-nauka/wyniki-ewaluacji-dzialalnosci-naukowej-za-lata-2017-2021.
[30] P. O. Seglen, Why the impact factor of journals should not be used for evaluating research, BMJ 314 (1997) 497.



