<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Evaluation of Crowdsourced Peer Review using Synthetic Data and Simulations</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Michael Soprano</string-name>
          <email>michael.soprano@uniud.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Eddy Maddalena</string-name>
          <email>eddy.maddalena@uniud.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Francesca Da Ros</string-name>
          <email>francesca.daros@uniud.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Maria Elena Zuliani</string-name>
          <email>zuliani.mariaelena@spes.uniud.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Stefano Mizzaro</string-name>
          <email>stefano.mizzaro@uniud.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Mathematics</institution>
          ,
          <addr-line>Computer Science and Physics</addr-line>
          ,
          <institution>University of Udine</institution>
          ,
          <addr-line>Udine, Friuli-Venezia Giulia</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
<p>The scholarly publishing process relies on peer review to uphold the quality of scientific knowledge. However, challenges such as increasing submission volumes and potential malicious behavior undermine its effectiveness. In this study, we evaluate Readersourcing, an alternative peer review approach that leverages community-driven judgments. Using simulations with synthetic data based on a probabilistic model and a publicly available implementation, we assess six quantities and examine the impact of each component on the outcomes. Our findings show that the co-determination algorithm captures distinct aspects of manuscript judgments compared to simpler aggregation strategies. Key simulation parameters consistently influence the computed quantities across different settings. We also publicly release the data, code, and simulation runs.</p>
      </abstract>
      <kwd-group>
        <kwd>Scholarly Publishing</kwd>
        <kwd>Peer Review</kwd>
        <kwd>Evaluation</kwd>
        <kwd>Readersourcing</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The primary method for disseminating scientific knowledge is the scholarly publishing process, which
relies on peer review. In this process, a scientific article authored by individuals is assessed and evaluated
by peers with equivalent expertise. Although peer review is a well-established method for ensuring the
quality of scientific publications, it is not without drawbacks [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. These include challenges in managing
the increasing volume of submissions and the potential for malicious behavior by some stakeholders
[
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Several approaches to addressing these limitations have been discussed in the literature, including
outsourcing the review process to the broader scientific community itself [
        <xref ref-type="bibr" rid="ref3 ref4 ref5">3, 4, 5</xref>
        ].
      </p>
      <p>
        While recent advancements in Artificial Intelligence (AI) technologies make automating peer review
in the scholarly publishing process an appealing prospect, several issues and concerns warrant further
investigation. For instance, existing tools struggle to understand and interpret manuscripts within
the broader context of scientific literature [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], and AI-based approaches to peer review are prone to
systematic biases [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Some researchers suggest that a hybrid approach could involve cooperation
between humans and AI in the peer review process [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. However, the nuanced judgment and contextual
understanding that human reviewers provide remain crucial for ensuring the integrity and reliability of
scientific evaluation.
      </p>
      <p>
        In light of this, we specifically focus on the Readersourcing model (RSM), originally introduced by
Mizzaro [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], which provides a framework for enhancing the peer review process in scholarly publishing
through community-driven numerical judgments. RSM quantifies both the overall quality of an article
and the reputation of a scholar, considered as a reader and as an author, using a co-determination
algorithm. The primary challenge lies in aggregating these ratings into quality and reputation indices
and ultimately deriving a single index for each measure. The model has been implemented in a system
accessible to the research community [
        <xref ref-type="bibr" rid="ref10 ref9">9, 10</xref>
        ]. Although RSM has been evaluated using social network
metrics [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], the co-determination algorithm and the influence of its components on the computed
quantities remain insufficiently investigated.
      </p>
      <sec id="sec-1-0">
        <title>1.1. Aims</title>
        <p>In this work, we evaluate RSM through simulations on synthetic data generated using a probabilistic
model. First, we show that the model effectively captures distinct and meaningful aspects of judgments,
providing strong evidence for its adoption. Second, we validate its structural properties, confirming
that the model’s design aligns with the foundational principles outlined by Mizzaro [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]. Through a
detailed analysis of these properties, we highlight the model’s potential to enable the outsourcing of
the peer review process to the broader community of readers.</p>
        <p>Specifically, we investigate the following Research Questions (RQs):</p>
        <p>RQ1 How does the probabilistic approach used in our simulations influence the computed quantities
of the model? What improvements could enhance its outcomes?</p>
        <p>RQ2 How effectively does RSM capture meaningful and distinct aspects of judgments made by readers?
How do the insights generated by RSM compare with those from simpler aggregation strategies?</p>
        <p>RQ3 What is the impact of each component of the RSM model on the computed quantities? How do
the different components influence the overall results? How does their interaction contribute to
the model’s outcomes?</p>
        <p>The data, code, and all supplementary materials related to our study are publicly available to the
research community at: https://osf.io/kwv47/.</p>
      </sec>
      <sec id="sec-1-1">
        <title>1.2. Contributions</title>
        <p>Our contributions are as follows: (i) We show that the co-determination algorithm in RSM provides
meaningful differentiation compared to traditional, widely adopted aggregation strategies. (ii) We
identify key simulation parameters that significantly affect the model’s outputs, highlighting those
that play a prominent role in computing critical quantities. (iii) We confirm that these key parameters
consistently shape the co-determination process across various simulation settings. (iv) We publicly
release our simulation dataset to support further research and analysis.</p>
      </sec>
      <sec id="sec-1-2">
        <title>1.3. Outline</title>
        <p>The remainder of this paper is structured as follows: Section 2 provides an overview of the related
literature, Section 3 describes the methodology, Section 4 presents the results, and Section 5 discusses
the impact and outlines the limitations of our approach. Finally, Section 6 presents the conclusions and
indicates directions for future research.</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Scholarly publishing, which relies on peer review, is the primary method for disseminating scientific
knowledge. In this process, scientific articles authored by researchers are evaluated by peers with
comparable expertise and, if deemed of sufficient quality, are made available to the broader
community [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Although peer review is the cornerstone of evaluating the quality of scientific publications, it
has shortcomings [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. First, the system is under strain due to the large volume of submissions [
        <xref ref-type="bibr" rid="ref14 ref15">14, 15</xref>
        ]
and the time required to process them efficiently. In this regard, reviewers have been described as a
scarce resource [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Second, the peer review system is prone to bias [
        <xref ref-type="bibr" rid="ref16 ref17">16, 17</xref>
        ] and inconsistencies [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ].
      </p>
      <p>
        To address these limitations, several solutions have been proposed [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ], focusing on the transparency,
efficiency, quality, and equity of the process [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]. One example is the concept of open peer review, in
which the publication is accompanied by anonymous reviews [
        <xref ref-type="bibr" rid="ref21 ref22">21, 22</xref>
        ].
      </p>
      <p>
        More recently, efforts have been made to incorporate automated tools and AI into the peer review
process [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ]. For instance, Checco et al. [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ] developed an AI tool trained on 3,300 papers from three
conferences, along with their corresponding review evaluations. The tool was designed to predict the
review score of a new, unseen manuscript based solely on its textual content. Similarly, Boukhris and
Zaâbi [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ] proposed a GAN-BERT-based method to analyze the sentiment of reviewers’ comments and
automatically generate an objective final decision regarding the acceptance or rejection of a manuscript.
      </p>
      <p>
        As Large Language Models (LLMs) are tested in the field of generating scientific hypotheses [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ],
they have also been employed in the peer review process, prompting many journals to establish specific
policies to regulate their use [
        <xref ref-type="bibr" rid="ref27 ref28">27, 28</xref>
        ]. Latona et al. [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ] conducted experiments on the scores and reviews
of the 2024 International Conference on Learning Representations (ICLR 2024), finding that over 15%
of the reviews were written with the assistance of AI (verified through experiments with GPTZero).
Interestingly, these AI-generated reviews tended to assign higher scores compared to non-AI-generated
reviews. The potential of LLMs for assessing the quality of scientific papers has been explored by Liang
et al. [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ], who developed a pipeline employing Generative Pretrained Transformer 4 (GPT-4) to generate
comments on research articles. The results indicated that over half of the users rated GPT-4-generated
feedback as helpful or very helpful, with many finding it more beneficial than feedback from at least
one of the human reviewers. Similarly, Santu et al. [
        <xref ref-type="bibr" rid="ref31">31</xref>
        ] investigated the generation of meta-reviews by
leveraging three LLMs (LLaMA2, GPT-3.5, and PaLM2) using data from ICLR from 2020 to 2023. Their
qualitative analysis showed that GPT-3.5 and PaLM2 performed comparably overall, with both being
rated higher by humans than LLaMA2 for manuscript-level judgments. Notably, PaLM2 demonstrated
superior recall scores, while GPT-3.5 achieved better precision scores, highlighting the varying strengths
of these LLMs in generating meta-reviews.
      </p>
      <p>
        All in all, while AI and LLMs can undoubtedly enhance the efficiency of the peer review process,
they also raise ethical concerns, particularly regarding the transparency of the process, disclosure
agreements, and the potential replication and amplification of biases inherent in the data or systems [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
These challenges highlight the continued necessity of human intervention in the process [
        <xref ref-type="bibr" rid="ref32">32</xref>
        ].
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <sec id="sec-3-1">
        <title>3.1. The Readersourcing Model</title>
        <p>
          Readersourcing (RSM) is a crowdsourcing approach to peer review [
          <xref ref-type="bibr" rid="ref3 ref4">4, 3</xref>
          ], which can serve as either a
pre-publication alternative or a post-publication complement to the traditional peer review process.
Figure 1 illustrates the general framework, and Table 1 summarizes the notation. We provide only a
brief overview of the model. For additional details, see Mizzaro [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ].
        </p>
        <p>RSM involves three key entities: a set ℳ of manuscripts (also referred to as publications, articles, or
papers), a set 𝒜 of authors, and a set ℛ of readers. When a reader r ∈ ℛ reads a manuscript m ∈ ℳ,
they assign a numerical value j_{r,m} ∈ [0, 100], referred to as judgment (or rating).</p>
        <p>Each entity in the model is associated with a score:
• The manuscript score s_m of a manuscript m ∈ ℳ is calculated as an aggregation of its judgments,
serving as an indicator of its quality.
• The author score s_a of an author a ∈ 𝒜 is derived from the aggregation of the scores of the
manuscripts they have published, serving as an indicator of their reputation and skills.
• The reader score s_r of a reader r ∈ ℛ is determined by comparing their judgments on manuscripts
with those of other readers, serving as an indicator of their reputation and skills.</p>
        <p>Scores are dynamic and evolve over time based on user behavior and interactions. Each score is
paired with a steadiness value, denoted here with k, that reflects its stability: k_m for manuscripts, k_a for
authors, and k_r for readers. For example, an older manuscript with many evaluations tends to exhibit a
high steadiness value, whereas a newly registered reader will have a low steadiness value. Steadiness
influences how scores are updated: lower steadiness leads to faster score adjustments in response to
new inputs. As the score stabilizes, its steadiness increases.</p>
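        <p>To make the role of steadiness concrete, the following Python sketch shows one way a steadiness-weighted update could behave. It is purely illustrative: the actual co-determination algorithm is the one specified by Mizzaro [4], and the function name, the 1/(1 + k) weighting, and the unit steadiness increment are assumptions of ours.</p>

```python
def update_score(score, steadiness, judgment, steadiness_gain=1.0):
    """Illustrative steadiness-weighted update. NOT the RSM co-determination
    algorithm itself; the 1/(1 + steadiness) weighting and the unit
    steadiness increment are assumptions used only for illustration."""
    weight = 1.0 / (1.0 + steadiness)               # low steadiness -> large update
    new_score = score + weight * (judgment - score)
    return new_score, steadiness + steadiness_gain  # the entity stabilizes over time

fresh = update_score(50.0, 0.0, 80.0)    # fresh entity (k = 0): adopts the judgment fully
settled = update_score(50.0, 9.0, 80.0)  # settled entity (k = 9): moves only 10% of the way
```

        <p>Under this weighting, a brand-new manuscript or reader reacts strongly to each new judgment, while a long-established one barely moves, matching the qualitative behavior described above.</p>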
        <p>One can consider RSM as a tripartite graph whose nodes correspond to three sets: authors, manuscripts,
and readers. Authors are connected to the manuscripts they publish, and readers are connected to
the manuscripts they read. More formally, an edge exists between an author a ∈ 𝒜 and a manuscript
m ∈ ℳ if a publishes m (such edges are unweighted). Conversely, there is an edge between a reader
r ∈ ℛ and a manuscript m ∈ ℳ if r reads m; the weight of this edge corresponds to the judgment j_{r,m}
that r expresses on m.</p>
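        <p>A minimal way to represent this tripartite graph in code is with two edge sets, one unweighted (authorship) and one weighted by judgments; the entity names below are illustrative, not taken from the released implementation.</p>

```python
# Sketch of the RSM tripartite graph: unweighted authorship edges and
# reading edges weighted by the judgment j_{r,m} (names are illustrative).
authorship = {("a1", "m1"), ("a2", "m2")}   # (author, manuscript) pairs
judgments = {("r1", "m1"): 72.0,            # (reader, manuscript) -> judgment weight
             ("r1", "m2"): 40.0,
             ("r2", "m1"): 65.0}

def judgments_of(manuscript):
    """All judgments received by a manuscript, in insertion order."""
    return [j for (r, m), j in judgments.items() if m == manuscript]

received = judgments_of("m1")  # judgments that m1 received from its readers
```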
        <p>Note that if a user acts as both an author and a reader, they maintain two distinct scores and steadiness
values.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Simulations Flow and Assumptions</title>
        <p>The simulation flow is illustrated in Figure 2, while the parameters used in the simulation are summarized
in Table 2. Each simulation starts with a fixed number of authors and readers. In this study, these two
quantities are set to be equal, i.e., |𝒜| = |ℛ| (see Step 1 in Figure 2).</p>
        <p>
          It is well established that scholarly publications follow Power Law distributions [
          <xref ref-type="bibr" rid="ref33 ref34">33, 34</xref>
          ]. Informally,
in a Power Law distribution a small number of events occur with very high frequency, while most
events occur with low frequency. Its general form is given by P(x) ∼ x^(−α), where α is the scaling
exponent. Using this distribution, we model the following quantities:
• Number of manuscripts published by each author: The Power Law distribution representing
the number of manuscripts published by an author is denoted by P_a(x) and is parameterized
by α_a. To avoid extreme values, we impose an upper limit max on the maximum number of
manuscripts an author can publish (see Step 2 in Figure 2).
• Number of manuscripts read by each reader: The Power Law distribution for the number of
manuscripts read by each reader is denoted by P_r(x) and is parameterized by α_r (see Step 3 in
Figure 2).
• Number of reads per manuscript: The Power Law distribution for the number of times each
manuscript is read is denoted by P_m(x) and is parameterized by α_m (see Step 4 in Figure 2).
        </p>
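        <p>As an illustration of the bounded Power Law sampling described above, the sketch below draws per-author publication counts. The inverse-transform helper and the cap of 20 are assumptions of ours for the example, not taken from the released implementation.</p>

```python
import random

def sample_power_law(alpha, x_max, rng):
    """Draw an integer k in [1, x_max] with P(k) proportional to k**(-alpha),
    via inverse-transform sampling over the finite, normalized support."""
    weights = [k ** (-alpha) for k in range(1, x_max + 1)]
    u = rng.random() * sum(weights)
    acc = 0.0
    for k, w in enumerate(weights, start=1):
        acc += w
        if u <= acc:
            return k
    return x_max

rng = random.Random(42)
# Publication counts for 1,000 simulated authors with alpha_a = 1.1, capped at 20.
counts = [sample_power_law(1.1, 20, rng) for _ in range(1000)]
# Heavy head: single-manuscript authors dominate, while a few publish many.
share_of_ones = counts.count(1) / len(counts)
```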
        <p>The probability that a specific reader r ∈ ℛ reads a specific manuscript m ∈ ℳ is denoted as P_{r,m}(r, m).
Assuming the two events are independent, the joint probability that reader r reads manuscript m is
expressed as P_{r,m}(r, m) = P_r(r) · P_m(m) (see Step 5 in Figure 2). To model judgments of readers on
manuscripts, we proceed as follows. For each manuscript m ∈ ℳ, we draw a reference score (i.e., the
“ideal” score of the manuscript), denoted as g_m, from a Power Law distribution P_G(x) with parameter
α_G (see Step 6 in Figure 2).</p>
        <p>
          The judgment j_{r,m} assigned by reader r ∈ ℛ to manuscript m ∈ ℳ is drawn from a Gaussian
distribution with mean μ_m set to the reference score g_m and standard deviation σ_m. Judgment values
are bounded between 0 and 100 (see Step 7 in Figure 2). In the original work [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ], these values ranged
from 0 to 1.
        </p>
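        <p>Steps 6 and 7 can be sketched as follows. The clamping to [0, 100] follows the description above, while the helper name and the specific numbers (g_m = 56, σ_m = 5) are illustrative.</p>

```python
import random

def draw_judgment(g_m, sigma, rng):
    """Step 7: judgment of a reader on a manuscript, drawn from a Gaussian
    centered on the reference score g_m and clamped to [0, 100]."""
    return min(100.0, max(0.0, rng.gauss(g_m, sigma)))

rng = random.Random(0)
g_m = 56.0                                    # reference ("ideal") score (Step 6)
judgments = [draw_judgment(g_m, 5.0, rng) for _ in range(500)]
mean_judgment = sum(judgments) / len(judgments)  # sits close to g_m
```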
        <p>
          The simulation flow is based on the following assumptions, derived from the foundational work of
Mizzaro [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]:
(i) Each manuscript m ∈ ℳ has exactly one author.
(ii) The judgments j_{r,m} are independent.
(iii) The number of authors equals the number of readers (|𝒜| = |ℛ|).
(iv) The simulation does not enforce connections between all entities: some manuscripts may remain
unread, and some readers may not read any manuscripts.
        </p>
        <p>As discussed in Section 6, these assumptions will be addressed and refined in future work.</p>
        <p>[Figure 2 illustrates the simulation flow with worked example values. Step 1: generate authors and readers. Step 2: authors publish manuscripts following P_a. Step 3: readers read following P_r. Step 4: manuscripts are read following P_m. Step 5: the association between readers and manuscripts follows P_{r,m}, the joint probability between P_r and P_m. Step 6: manuscripts have a reference score that follows P_G. Step 7: judgments follow a Gaussian distribution (e.g., μ_{m_k} = g_{m_k} = 56 on the 0–100 scale).]</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Experimental Setup</title>
        <p>To conduct our simulations, we generate a set of configurations by choosing discrete values for six
parameters (see Table 2): the number of authors/readers |𝒜| = |ℛ|, the scaling exponents for each Power
Law (α_a, α_m, α_r, α_G), and the standard deviation σ_m of the Gaussian distribution centered on the
reference score g_m. We fix the upper limit on the number of manuscripts, max, to a constant value in all simulations,
thereby excluding it as a parameter.</p>
        <p>The simulations use a population of 250 authors/readers, which can be scaled up in future work.
We select exponents of 1.1, 1.2, and 1.3 to explore different power law steepness levels, where higher
values lead to more concentrated distributions. Standard deviations of 2.5, 5.0, 7.5, and 10.0 control
data variability: smaller values keep data closer to the mean, while larger values foster the presence of
outliers. These choices allow us to examine the simulation thoroughly under varied conditions. The
total number of configurations is the product of the possible values for each parameter: 1 population
size × 81 combinations of the four α exponents (3^4) × 4 standard deviation values, for a total of 324
configurations. We repeat each configuration 10 times to account for stochasticity, leading to 3,240 executions in total.</p>
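        <p>The configuration count above can be reproduced directly from the parameter grid; the variable names below are ours, but the values are those reported in this section.</p>

```python
from itertools import product

# Parameter grid from the experimental setup.
populations = [250]                     # |A| = |R|
alphas = [1.1, 1.2, 1.3]                # candidate values for alpha_a, alpha_m, alpha_r, alpha_G
sigmas = [2.5, 5.0, 7.5, 10.0]          # std. dev. of the judgment Gaussian
repetitions = 10

# One configuration per choice of (population, alpha_a, alpha_m, alpha_r, alpha_G, sigma).
configs = list(product(populations, alphas, alphas, alphas, alphas, sigmas))
n_configs = len(configs)                # 1 * 3**4 * 4 = 324
n_executions = n_configs * repetitions  # 324 * 10 = 3,240
```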
        <p>We conducted all experiments on a machine equipped with an Intel Core i7-10700 CPU, 64 GB of
DDR4 RAM, NVIDIA RTX A6000 and RTX 3090 GPUs, a 1 TB NVMe SSD, and a 4 TB HDD. We ran
the simulations using Python 3.9.21 in a Conda-based environment. To ensure compatibility with the
Parquet serialization format, we used NumPy 2.0.2 and pyarrow 18.1.0.</p>
        <p>We use the Python-based implementation of RSM, which is publicly available on GitHub: https://github.com/EddyMaddalena/Readersourcing_OO.</p>
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Evaluation</title>
        <p>3.4.1. Approach
Our analysis examines the relationship between simulation parameters and the quantities (score and
steadiness) computed by the RSM co-determination algorithm for each entity (authors, manuscripts,
and readers), based on individual judgments. We compare these computed values across all simulation
runs and repetitions, with each comparison focusing on a specific entity.</p>
        <p>To ensure reliable results, we exclude inactive entities from the simulation flow, such as unread
manuscripts, readers who do not provide judgments, and authors who publish only unread or excluded
manuscripts (see Section 3.2). We also omit entities with a steadiness of 0, as this indicates a lack of active
participation in the model. Given their absence from the computation, we refer to them as inert. In
contrast, we classify as active those entities that participate in the process, i.e., readers who provide
judgments, manuscripts that receive judgments, and authors who publish manuscripts that are actively
judged. From now on, we will generally refer to quantities using a single specifier: for instance, we will
write s_m for the manuscript score rather than indexing each individual manuscript.
3.4.2. Aggregation Functions
We evaluate the performance and distinctiveness of the co-determination algorithm in RSM by comparing
its outputs with aggregations derived from simpler, widely adopted strategies for combining individual
judgments.</p>
        <p>We examine the impact of three commonly used aggregation functions: the arithmetic mean,
geometric mean, and median. These functions are relevant in data aggregation tasks for different reasons:
the arithmetic mean summarizes central tendencies, the geometric mean is well suited for products and
rates, and the median is less affected by outliers. By comparing these results with those produced by
the co-determination algorithm, we aim to assess whether RSM provides meaningful differentiation
and additional insights beyond traditional aggregation methods.
3.4.3. Random Forest Regressor (RFR)
The Random Forest Regressor (RFR) is an ensemble learning method that constructs multiple decision
trees during training and outputs the average prediction from all trees. It is highly valued for its
ability to capture complex, non-linear relationships between input features and the dependent variable,
delivering robust performance with minimal hyperparameter tuning.</p>
        <p>In our study, we use RFR to model the relationship between the simulation parameters and the
quantities produced by the co-determination algorithm of RSM. We treat α_a, α_m, α_r, α_G, and σ_m (see
Table 2) as features, and s_a, s_m, s_r, k_a, k_m, and k_r (see Table 1) as outputs. We apply RFR separately to
the data for each of the three entities. For example, one RFR run compares α_a with s_a and k_a.</p>
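        <p>The role RFR plays here can be sketched with scikit-learn on synthetic stand-in data. The feature ranges mirror Table 2, but the output below is a made-up function chosen so that one parameter dominates; this is an illustration of the feature-importance step, not our actual pipeline.</p>

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Stand-in data: 5 features mimicking (alpha_a, alpha_m, alpha_r, alpha_G, sigma_m).
rng = np.random.default_rng(0)
X = rng.uniform(low=[1.1, 1.1, 1.1, 1.1, 2.5],
                high=[1.3, 1.3, 1.3, 1.3, 10.0], size=(500, 5))
# Hypothetical output that depends almost entirely on the 4th feature ("alpha_G").
y = 3.0 * X[:, 3] + 0.05 * rng.standard_normal(500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
importances = model.feature_importances_  # one importance score per parameter
dominant = int(importances.argmax())      # index 3, i.e. "alpha_G", dominates
```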
        <p>
          The feature importance rankings generated by the Random Forest help pinpoint which simulation
parameters most significantly affect the output quantities. We leverage these insights to evaluate
the model’s robustness and to confirm the importance of these parameters in subsequent analyses,
such as Analysis of Variance (ANOVA). We use the RandomForestRegressor implementation from
scikit-learn [
          <xref ref-type="bibr" rid="ref35">35</xref>
          ].
3.4.4. Analysis of Variance (ANOVA)
ANOVA [
          <xref ref-type="bibr" rid="ref36">36</xref>
          ] is a statistical method used to test for significant differences between the means of multiple
groups. Provided its underlying assumptions are satisfied [37, 38, 39, 40, 41], it evaluates the impact of
each feature on the outputs at the population level.
        </p>
        <p>We use one-way ANOVA to assess the influence of simulation parameters on the quantities computed
by the co-determination algorithm of RSM. We adopt the same definitions of features and outputs as
for RFR (see Section 3.4.3). By analyzing how these quantities vary across different parameter settings,
we aim to identify which parameters significantly affect the results, thus validating the insights from
our feature importance analysis. We use the f_oneway implementation from the scipy package [42].</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results</title>
      <sec id="sec-4-1">
        <title>4.1. RQ1: Effect of the Probabilistic Approach</title>
        <p>To evaluate the effect of our probabilistic approach in simulations (RQ1), we focus on inert entities. By
design, our approach allows their presence (see Section 3.2 and Section 3.4.1). To guide
future experimental designs, we investigate inert entities using a twofold approach.</p>
        <p>First, we examine this phenomenon quantitatively by counting the number of active and inert entities
in our simulations, ensuring that inert entities do not overwhelmingly dominate. Table 3 shows the
number of active authors, manuscripts, and readers. For each entity, we report the mean, minimum,
maximum, and the first, second, and third quartiles. Notably, authors tend to remain active longer in
the network due to higher connectivity; of the 250 simulated authors, 214 (85%) are active.</p>
        <p>Next, we analyze correlations to understand how these values relate to the input parameters, aiming
to identify potential interventions for tuning simulation parameters to reduce the occurrence of inert
entities. Figure 3 shows the correlation values between the simulation parameters and the number of
active entities (first to third columns), the number of inert entities (fourth to sixth columns), and the
ratio between active and inert entities (seventh to ninth columns). Positive correlations are shown in
blue, while negative correlations appear in light green.</p>
        <p>As the figure indicates, some simulation parameters increase the number of active entities, whereas
others increase the number of inert ones. Specifically, the α_G and σ_m parameters do not affect inert
entities. This is expected because they govern aspects related to individual judgments and do not
influence edge formation in the tripartite graph. In contrast, the three parameters α_a, α_m, and α_r, which
govern the Power Law distributions for authors, manuscripts, and readers, do affect inert entities. The
correlations show that these parameters decrease the number of active entities, increase the number of
inert ones, and ultimately increase the ratio between inert and active entities.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. RQ2: Comparison with Simpler Aggregation Strategies</title>
        <p>To evaluate how effectively RSM captures distinct aspects of manuscript judgments (RQ2), we compare
the model’s outputs with simpler strategies for aggregating individual judgments. Figure 4 presents the
resulting Kendall’s τ correlation coefficients. The first column shows correlations with the reference
score, while the second to fourth columns show correlations with judgments aggregated using the
arithmetic mean, geometric mean, and median, respectively. The final column shows the manuscript
score. Positive correlations are shown in blue, and negative correlations in light green.</p>
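        <p>The comparison in Figure 4 boils down to rank-correlating two per-manuscript aggregations with Kendall’s τ; the sketch below shows the computation on three illustrative manuscripts (the judgment values are made up).</p>

```python
from scipy.stats import kendalltau

# Illustrative per-manuscript judgments; values are made up for the example.
judgments = {"m1": [70, 75, 80], "m2": [40, 45, 50], "m3": [60, 62, 90]}

means = [sum(v) / len(v) for v in judgments.values()]      # arithmetic mean per manuscript
medians = [sorted(v)[len(v) // 2] for v in judgments.values()]  # median per manuscript

# Kendall's tau compares the *rankings* induced by the two aggregations.
tau, p = kendalltau(means, medians)  # here both rank the manuscripts identically
```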
        <p>The manuscript score s_m correlates moderately with the reference score but strongly with the simpler
aggregation strategies, namely the arithmetic mean, geometric mean, and median. This stronger
correlation is likely due to the effect of the normal distribution, as the median aligns well with such a
distribution. Furthermore, because the dataset is generated from skilled readers with highly consistent
and precise judgments, it minimizes the influence of rewarding better-performing readers. Consequently,
the model emphasizes overall trends, while its weaker correlation with the reference score suggests that
it captures a distinct dimension of judgment. In summary, the model places more weight on aggregated
judgment patterns than on the reference score. We discuss this further in Section 6.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. RQ3: Impact of Model Components on Quantities</title>
        <p>To evaluate the impact of individual features on the outputs (RQ3), we begin by examining their
correlations. Figure 5 shows Kendall’s τ correlation coefficients with respect to both simulation
parameters and computed quantities. In this figure, as well as in subsequent ones, positive correlations
are shown in blue, while negative correlations are shown in light green.</p>
        <p>In the lower-right section of the heatmap, we observe a strong correlation among the steadiness
values computed for each entity. High steadiness in one entity is associated with high steadiness in
others, indicating that steadiness is a system-wide property observed uniformly across all entities. In
the top-center blue section, the positive correlation among author scores indicates that authors with
higher manuscript scores tend to receive higher individual scores. An unexpected result is the negative
correlation between author and manuscript scores, on the one hand, and reader scores on the other:
when authors and manuscripts have higher scores, readers tend to have lower scores, and vice versa.
We discuss this further in Section 5.</p>
        <p>Focusing on the feature importance analysis conducted using RFR, Figure 6 shows the computed
feature importance scores, revealing each simulation parameter’s contribution. The first three columns
pertain to scores, while the fourth through sixth columns represent steadiness values.</p>
        <p>As expected, the α_G parameter has the greatest influence on steadiness, since variations in the
manuscript’s reference score cause fluctuations in steadiness. In contrast, the α_r parameter affects the
manuscript score by determining how many readers provide judgments. Meanwhile, the α_a parameter,
which governs the number of manuscripts published, influences the reader score. This suggests a
relationship between the number of published manuscripts and the consistency of reader judgments.</p>
        <p>Next, we present the results of our one-way ANOVA analysis. Figure 7 shows the F-statistic values,
which quantify the strength of the relationships between simulation parameters and computed quantities.
All values are statistically significant (p &lt; 0.0001), except for those in the a column.</p>
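<p>Such F-statistics can be computed with scipy.stats.f_oneway; the sketch below uses synthetic groups (one per parameter level) whose means shift with the level, so all names and values are illustrative:</p>

```python
# Hypothetical sketch: one-way ANOVA relating a discretized simulation
# parameter (three levels) to a computed quantity. Data are synthetic.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
# Three parameter levels; the quantity's mean shifts with the level.
groups = [rng.normal(loc=mu, scale=0.5, size=100) for mu in (0.0, 1.0, 2.0)]
f_stat, p_value = f_oneway(*groups)
print(f"F = {f_stat:.1f}, p = {p_value:.2e}")
```

A large F with a tiny p-value, as here, indicates that the parameter level explains far more variance in the quantity than the within-level noise.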
        <p>As expected, the manuscript score m is strongly influenced by all features, given that it arises from
interactions among the parameters. The m parameter, which influences all quantities, especially the
reader score r, affects judgment variability. When the standard deviation is low, errors in judgments
are smaller, effectively making the reader “skilled”. Examining higher variability could lead to additional
insights into the reader score, as noted in Section 6.</p>
        <p>Both the RFR and ANOVA analyses consistently indicate that simulation parameters significantly
influence the manuscript score m, with G and r exerting particularly strong effects. While RFR
highlights the relative importance of each feature, ANOVA confirms the statistical significance of these
impacts. The high importance of G revealed by RFR is corroborated by ANOVA, underscoring how
variations in the reference score drive changes in steadiness. Similarly, both analyses point to the
influence of the standard deviation parameter on the reader score, highlighting the role of judgment
variability in consistency.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Discussion and Limitations</title>
      <p>Our findings indicate that the probabilistic approach effectively accounts for inert entities within
simulations. Correlation analyses reveal that certain simulation parameters affect the number of inert
entities by reducing the number of active entities and increasing the number of inert ones. These
insights guide parameter tuning to minimize the occurrence of inert entities and refine experimental
designs in future studies (RQ1).</p>
      <p>Additionally, our results show that RSM captures distinct aspects of manuscript judgments by focusing
on aggregated judgment patterns, such as the mean and median, rather than the reference score. Its
strong correlation with simpler aggregation methods reflects overall judgment trends, while its weaker
correlation with the reference score suggests it captures a different dimension of judgments (RQ2).</p>
      <p>Further analysis of RSM model components reveals how each feature contributes to the final
results. The strong correlation among steadiness values highlights its role as a consistent, system-wide
property. Feature importance analysis suggests that a higher manuscript count leads to more stable
judgments, while ANOVA highlights the impact of standard deviation on score steadiness. Greater
variability in judgments produces less stable outcomes, reflecting the complex interactions of model
components (RQ3).</p>
      <p>A limitation of our approach is that reference scores follow a Power Law distribution, often resulting
in many manuscripts with near-zero scores. This issue is worsened by the bounded scale, where
clipping values outside the predefined range can distort judgments [43]. Low-quality manuscripts with
near-zero scores are considered “easy” to judge, which leads to small judgment errors. This dynamic
creates a countervariance effect in which skilled readers provide high-quality judgments while authors of
low-quality manuscripts receive lower scores. An unbounded scale, such as Magnitude Estimation [44],
could mitigate this issue.</p>
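<p>A small sketch of the clipping artifact described above, using an illustrative Pareto-type power law and a bounded [0, 1] scale; the distribution parameters are placeholders, not the ones used in our simulations:</p>

```python
# Hypothetical sketch: heavy-tailed reference scores clipped to a bounded
# scale pile up near zero and at the upper bound. Parameters illustrative.
import numpy as np

rng = np.random.default_rng(7)
raw = rng.pareto(a=2.0, size=10_000)     # heavy-tailed power-law draws
scores = np.clip(raw, 0.0, 1.0)          # bounded [0, 1] judgment scale
near_zero = float(np.mean(scores < 0.1))
clipped_top = float(np.mean(scores == 1.0))
print(f"{near_zero:.0%} of scores below 0.1, {clipped_top:.0%} clipped at 1.0")
```

An unbounded scale avoids the second distortion entirely, since no mass accumulates at an artificial upper bound.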
      <p>Another aspect requiring further attention is the treatment of inert nodes. While excluding them from
the analysis is reasonable (see Section 3.4.1), real-world publishing systems might indeed include unread
manuscripts and inactive readers. Exploring alternative approaches to inert nodes could provide further
insights, such as applying graph growth models where inert nodes are “drawn” to highly connected
nodes [45].</p>
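<p>A minimal sketch of the preferential-attachment idea, in which new links are drawn toward highly connected nodes; this is the textbook Barabási–Albert mechanism from [45] with illustrative parameters, not our simulation code:</p>

```python
# Hypothetical sketch: Barabasi-Albert-style preferential attachment.
# Each new node links to 2 existing nodes chosen with probability
# proportional to their current degree, so hubs keep attracting links.
import random

random.seed(0)
targets = [0, 1]            # degree-weighted pool of edge endpoints
degree = {0: 1, 1: 1}       # start from a single edge between nodes 0 and 1
for new in range(2, 200):
    chosen = set()
    while len(chosen) < 2:                  # two distinct attachment targets
        chosen.add(random.choice(targets))
    degree[new] = 0
    for old in chosen:
        degree[old] += 1
        degree[new] += 1
        targets.extend([old, new])          # update the weighted pool
top = sorted(degree.values(), reverse=True)[:5]
print("top-5 degrees:", top)
```

In such a model, inert nodes would not stay isolated: once they issue or receive a first link, they are drawn toward the already-popular manuscripts and readers.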
      <p>Our simulation flow also employs four bounded power laws, which introduce four additional
parameters and increase complexity. A potential solution is proposed by Antipov et al. [46], who reduce the
number of parameters by randomly selecting values from a suitably scaled power law distribution at
each iteration.</p>
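<p>A sketch of the lazy-parameter idea as we read Antipov et al. [46]: instead of tuning each exponent separately, draw every value from a single discrete power law at each iteration. The exponent and range below are illustrative:</p>

```python
# Hypothetical sketch: resample a parameter value from one discrete
# power law at every iteration instead of fixing four tuned exponents.
import random

random.seed(3)

def power_law_sample(beta: float, upper: int) -> int:
    """Draw k in [1, upper] with probability proportional to k**(-beta)."""
    weights = [k ** (-beta) for k in range(1, upper + 1)]
    return random.choices(range(1, upper + 1), weights=weights, k=1)[0]

# One draw per simulated iteration, no per-parameter tuning.
draws = [power_law_sample(beta=1.5, upper=50) for _ in range(1000)]
print("fraction of draws equal to 1:", sum(d == 1 for d in draws) / len(draws))
```

Small values dominate, mimicking a tuned small parameter, while occasional large draws retain the exploratory benefit of large settings.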
    </sec>
    <sec id="sec-6">
      <title>6. Conclusions and Future Research Directions</title>
      <p>In conclusion, our study shows the effectiveness of the co-determination algorithm in RSM,
highlighting key simulation parameters that significantly influence the model’s outputs and confirming their
consistent impact across various settings.</p>
      <p>
        Future research should aim to enhance RSM by comparing it with other models and exploring more
complex simulations with diverse scenarios and variables. Mizzaro [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] suggests improvements like
considering multiple authors per manuscript for greater realism. Other possibilities include modeling
manuscripts submitted to journals with varying acceptance thresholds and assigning multiple scores
for authors and readers. Additionally, we propose allowing readers to revise their judgments over time,
as the current model assumes fixed judgments within a set period.
      </p>
      <p>
        A possible comparison with existing models would involve evaluating RSM reader scores against
those from the co-determination algorithm in TrueReview [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Additionally, a network analysis using
the HITS algorithm [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] could be applied to reassess the updated model.
      </p>
      <p>
        Future simulations could benefit from real-world data, such as the 1.5 million preprints available
on arXiv [47]. As proposed by Mizzaro [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], one approach is to model various reader types with
different judgment patterns. For example, we could simulate readers who consistently overestimate,
underestimate, or align closely with the reference score by adjusting the standard deviation of the
Gaussian distribution that generates reader scores. Skilled readers would have low standard deviations,
while less skilled readers would have higher ones, enabling us to evaluate how closely their scores
reflect their skill levels.
      </p>
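<p>The reader types sketched above can be modeled by varying the bias and standard deviation of Gaussian judgment noise around the reference score; a hypothetical sketch, with all parameter values illustrative:</p>

```python
# Hypothetical sketch: reader types as Gaussian noise around the
# reference score; skill is the inverse of the noise standard deviation,
# and systematic over/underestimation is a fixed bias. Values illustrative.
import numpy as np

rng = np.random.default_rng(5)
reference = rng.random(500)              # reference scores in [0, 1]

def judge(reference, bias=0.0, sd=0.05, rng=rng):
    """Judgments = reference + systematic bias + Gaussian error, clipped."""
    return np.clip(reference + bias + rng.normal(0.0, sd, reference.size), 0, 1)

skilled = judge(reference, sd=0.02)                   # low error, well aligned
overestimator = judge(reference, bias=0.15, sd=0.10)  # systematically high
underestimator = judge(reference, bias=-0.15, sd=0.10)

for name, j in [("skilled", skilled), ("over", overestimator), ("under", underestimator)]:
    print(name, "mean abs error:", round(float(np.mean(np.abs(j - reference))), 3))
```

Comparing each reader type's computed score against its known noise level would then show how faithfully the model recovers skill.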
      <p>
        Finally, incorporating LLMs into simulations is becoming increasingly relevant due to their ability
to replicate human-like behavior. For instance, Park et al. [48] describe an agent-based framework
that emulates realistic individual behaviors and attitudes. We believe such a framework could enrich
simulations by modeling reader judgments influenced by demographic, ideological, and personality
traits. However, caution is advised when using LLMs as proxies for human decision-making [
        <xref ref-type="bibr" rid="ref32">32</xref>
        ].
      </p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>This research is partially supported by the PRIN 2022 Project – “MoT—The Measure of Truth: An
Evaluation-Centered Machine-Human Hybrid Framework for Assessing Information Truthfulness”
(Code No. 20227F2ZN3, CUP No. G53D23002800006), funded by the European Union – Next Generation
EU – PNRR M4 C2 I1.1, and by the Strategic Plan of the University of Udine–Interdepartment Project
on Artificial Intelligence (2020-25).</p>
      <p>[37] N. Ferro, Y. Kim, M. Sanderson, Using Collection Shards to Study Retrieval Performance Effect Sizes,
ACM Trans. Inf. Syst. 37 (2019). URL: https://doi.org/10.1145/3310364. doi:10.1145/3310364.
[38] N. Ferro, G. Silvello, A General Linear Mixed Models Approach to Study System Component Effects,
in: Proceedings of the 39th International ACM SIGIR Conference on Research and Development
in Information Retrieval, SIGIR ’16, Association for Computing Machinery, New York, NY, USA,
2016, pp. 25–34. doi:10.1145/2911451.2911530.
[39] N. Ferro, G. Silvello, Toward an anatomy of IR system component performances, Journal of the
Association for Information Science and Technology 69 (2018) 187–200. doi:10.1002/asi.23910.
[40] K. Roitero, B. Carterette, R. Mehrotra, M. Lalmas, Leveraging Behavioral Heterogeneity Across
Markets for Cross-Market Training of Recommender Systems, in: Companion Proceedings of the
Web Conference 2020, WWW ’20, Association for Computing Machinery, New York, NY, USA,
2020, pp. 694–702. doi:10.1145/3366424.3384362.
[41] F. Zampieri, K. Roitero, J. S. Culpepper, O. Kurland, S. Mizzaro, On Topic Difficulty in IR
Evaluation: The Effect of Systems, Corpora, and System Components, in: Proceedings of the
42nd International ACM SIGIR Conference on Research and Development in Information
Retrieval, SIGIR ’19, Association for Computing Machinery, New York, NY, USA, 2019, pp. 909–912.
doi:10.1145/3331184.3331279.
[42] P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski,
P. Peterson, W. Weckesser, J. Bright, et al., SciPy 1.0: fundamental algorithms for scientific
computing in Python, Nature Methods 17 (2020) 261–272. doi:10.1038/s41592-019-0686-2.
[43] M. Hubert, P. J. Rousseeuw, K. Vanden Branden, Robust Statistics: Theory and Methods, 2nd ed.,
Wiley, 2011. doi:10.1002/0470010940.
[44] E. Maddalena, S. Mizzaro, F. Scholer, A. Turpin, On Crowdsourcing Relevance Magnitudes
for Information Retrieval Evaluation, ACM Transactions on Information Systems 35 (2017).
doi:10.1145/3002172.
[45] F. Menczer, S. Fortunato, C. A. Davis, A First Course in Network Science, Cambridge University
Press, 2020. doi:10.1017/9781108653947.
[46] D. Antipov, M. Buzdalov, B. Doerr, Lazy Parameter Tuning and Control: Choosing All Parameters
Randomly from a Power-Law Distribution, 2023. doi:10.1007/s00453-023-01098-z.
[47] C. B. Clement, M. Bierbaum, K. P. O’Keefe, A. A. Alemi, On the Use of ArXiv as a Dataset, arXiv,
2019. doi:10.48550/arXiv.1905.00075. arXiv:1905.00075.
[48] J. S. Park, C. Q. Zou, A. Shaw, B. M. Hill, C. Cai, M. R. Morris, R. Willer, P. Liang, M. S. Bernstein,
Generative Agent Simulations of 1,000 People, arXiv, 2024. doi:10.48550/arXiv.2411.10109.
arXiv:2411.10109.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>W. Y.</given-names>
            <surname>Arms</surname>
          </string-name>
          ,
          <article-title>What are the alternatives to peer review? Quality control in scholarly publishing on the web</article-title>
          ,
          <source>JEP</source>
          <volume>8</volume>
          (
          <year>2002</year>
          ). doi:10.3998/3336451.0008.103.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S.</given-names>
            <surname>Jecmen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Yoon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Conitzer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. B.</given-names>
            <surname>Shah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Fang</surname>
          </string-name>
          ,
          <article-title>A Dataset on Malicious Paper Bidding in Peer Review</article-title>
          ,
          <source>in: Proceedings of the ACM Web Conference</source>
          <year>2023</year>
          , WWW '23,
          Association for Computing Machinery, New York, NY, USA,
          <year>2023</year>
          , p.
          <fpage>3816</fpage>
          -
          <lpage>3826</lpage>
          . doi:10.1145/3543507.3583424.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Mizzaro</surname>
          </string-name>
          ,
          <article-title>Readersourcing-A manifesto</article-title>
          ,
          <source>Journal of the American Society for Information Science and Technology</source>
          <volume>63</volume>
          (
          <year>2012</year>
          )
          <fpage>1666</fpage>
          -
          <lpage>1672</lpage>
          . doi:10.1002/asi.22668.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S.</given-names>
            <surname>Mizzaro</surname>
          </string-name>
          ,
          <article-title>Quality control in scholarly publishing: A new proposal</article-title>
          ,
          <source>Journal of the American Society for Information Science and Technology</source>
          <volume>54</volume>
          (
          <year>2003</year>
          )
          <fpage>989</fpage>
          -
          <lpage>1005</lpage>
          . doi:10.1002/asi.10296.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5] L. de Alfaro, M. Faella,
          <article-title>TrueReview: A Proposal for Post-Publication Peer Review</article-title>
          ,
          <source>Technical Report UCSC-SOE-16-13</source>
          , University of California, Santa Cruz,
          <year>2016</year>
          . URL: https://tr.soe.ucsc.edu/research/technical-reports/UCSC-SOE-16-13.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>R.</given-names>
            <surname>Schulz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Barnett</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Bernard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. J. L.</given-names>
            <surname>Brown</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Byrne</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Eckmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Gazda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Kilicoglu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. M.</given-names>
            <surname>Prager</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Salholz-Hillel</surname>
          </string-name>
          , G. ter Riet, T. Vines,
          <string-name>
            <given-names>C. J.</given-names>
            <surname>Vorland</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhuang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bandrowski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. L.</given-names>
            <surname>Weissgerber</surname>
          </string-name>
          ,
          <article-title>Is the future of peer review automated?</article-title>
          ,
          <source>BMC Research Notes</source>
          <volume>15</volume>
          (
          <year>2022</year>
          )
          <fpage>203</fpage>
          . doi:10.1186/s13104-022-06080-6.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Price</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. A.</given-names>
            <surname>Flach</surname>
          </string-name>
          ,
          <article-title>Computational support for academic peer review: a perspective from artificial intelligence</article-title>
          ,
          <source>Communications of the ACM</source>
          <volume>60</volume>
          (
          <year>2017</year>
          )
          <fpage>70</fpage>
          -
          <lpage>79</lpage>
          . doi:10.1145/2979672.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>F. A.</given-names>
            <surname>Mohammed Salah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. A.</given-names>
            <surname>Halbusi</surname>
          </string-name>
          , Debate:
          <article-title>Peer reviews at the crossroads-'To AI or not to AI?'</article-title>
          ,
          <source>Public Money &amp; Management</source>
          <volume>43</volume>
          (
          <year>2023</year>
          )
          <fpage>781</fpage>
          -
          <lpage>782</lpage>
          . doi:10.1080/09540962.2023.2264032.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.</given-names>
            <surname>Soprano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mizzaro</surname>
          </string-name>
          , Crowdsourcing Peer Review:
          <article-title>As We May Do</article-title>
          , in: Manghi, Paolo and Candela, Leonardo and Silvello, Gianmaria (Ed.), Digital Libraries: Supporting Open Science, volume
          <volume>988</volume>
          of Communications in Computer and Information Science, Springer,
          <year>2019</year>
          , pp.
          <fpage>259</fpage>
          -
          <lpage>273</lpage>
          . doi:10.1007/978-3-030-11226-4_21.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M.</given-names>
            <surname>Soprano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mizzaro</surname>
          </string-name>
          ,
          <article-title>Crowdsourcing Peer Review in the Digital Humanities?</article-title>
          ,
          <source>in: Book of Abstracts, 8th AIUCD Conference 2019 - Pedagogy</source>
          , Teaching, and
          <article-title>Research in the Age of Digital Humanities</article-title>
          ,
          <source>AIUCD '19</source>
          ,
          <year>2019</year>
          , p.
          <fpage>251</fpage>
          . URL: http://aiucd2019.uniud.it/wp-content/uploads/2020/03/AIUCD2019-BoA_DEF.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Soprano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Roitero</surname>
          </string-name>
          , S. Mizzaro,
          <article-title>HITS Hits Readersourcing: Validating Peer Review Alternatives Using Network Analysis</article-title>
          , in: M. K. Chandrasekaran, P. Mayr (Eds.),
          <source>Proceedings of the 4th Joint Workshop on Bibliometric-enhanced Information Retrieval and Natural Language Processing for Digital Libraries co-located with the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval</source>
          , volume
          <volume>2414</volume>
          <source>of CEUR Workshop Proceedings, CEUR-WS.org</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>70</fpage>
          -
          <lpage>82</lpage>
          . URL: http://ceur-ws.org/Vol-2414/paper7.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Drozdz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. R.</given-names>
            <surname>Ladomery</surname>
          </string-name>
          ,
          <article-title>The Peer Review Process: Past, Present, and Future</article-title>
          ,
          <source>British Journal of Biomedical Science</source>
          <volume>81</volume>
          (
          <year>2024</year>
          ). doi:10.3389/bjbs.2024.12054.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>C.</given-names>
            <surname>Kadaifci</surname>
          </string-name>
          , E. Isikli,
          <string-name>
            <surname>Y. I. Topcu</surname>
          </string-name>
          ,
          <article-title>Fundamental Problems in the Peer-Review Process and Stakeholders' Perceptions of Potential Suggestions for Improvement</article-title>
          ,
          <source>Learned Publishing</source>
          <volume>38</volume>
          (
          <year>2025</year>
          )
          <article-title>e1637</article-title>
          . doi:10.1002/leap.1637.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Publons</surname>
          </string-name>
          ,
          <source>2018 Global State of Peer Review, Publons Report</source>
          ,
          <year>2018</year>
          . URL: https://publons.com/static/Publons-Global-State-Of-Peer-Review-2018.pdf, accessed: 2024-12-18.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>A.</given-names>
            <surname>Team</surname>
          </string-name>
          , Peer Review:
          <article-title>How We Found 15 Million Hours of Lost Time</article-title>
          ,
          <source>AJE Scholarly Publishing Blog</source>
          ,
          <year>2023</year>
          . URL: https://www.aje.com/arc/peer-review-process-15-million-hours-lost-time/, accessed: 2024-12-18.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>O. M.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. L.</given-names>
            <surname>Davis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. B.</given-names>
            <surname>Pizza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Waterman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. C.</given-names>
            <surname>Dobson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Foster</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. C.</given-names>
            <surname>Jarvey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. N.</given-names>
            <surname>Jones</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Leuenberger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Nourn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. E.</given-names>
            <surname>Conway</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. M.</given-names>
            <surname>Fiser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z. A.</given-names>
            <surname>Hansen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hristova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Mack</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. N.</given-names>
            <surname>Saunders</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O. J.</given-names>
            <surname>Utley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. L.</given-names>
            <surname>Young</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. L.</given-names>
            <surname>Davis</surname>
          </string-name>
          ,
          <article-title>Peer review perpetuates barriers for historically excluded groups</article-title>
          ,
          <source>Nature Ecology &amp; Evolution</source>
          <volume>7</volume>
          (
          <year>2023</year>
          )
          <fpage>512</fpage>
          -
          <lpage>523</lpage>
          . doi:10.1038/s41559-023-01999-w.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>S.</given-names>
            <surname>Hafar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Bazerbachi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. H.</given-names>
            <surname>Murad</surname>
          </string-name>
          ,
          <article-title>Peer Review Bias: A Critical Review</article-title>
          ,
          <source>Mayo Clinic Proceedings</source>
          <volume>94</volume>
          (
          <year>2019</year>
          )
          <fpage>670</fpage>
          -
          <lpage>676</lpage>
          . doi:10.1016/j.mayocp.2018.09.004.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>J. L.</given-names>
            <surname>Blackburn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. D.</given-names>
            <surname>Hakel</surname>
          </string-name>
          ,
          <source>An Examination of Sources of Peer-Review Bias, Psychological Science</source>
          <volume>17</volume>
          (
          <year>2006</year>
          )
          <fpage>378</fpage>
          -
          <lpage>382</lpage>
          . doi:10.1111/j.1467-9280.2006.01715.x.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>W.</given-names>
            <surname>Kaltenbrunner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Pinfield</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Waltman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. B.</given-names>
            <surname>Woods</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Brumberg</surname>
          </string-name>
          ,
          <article-title>Innovating peer review, reconfiguring scholarly communication: an analytical overview of ongoing peer review innovation activities</article-title>
          ,
          <source>Journal of Documentation</source>
          <volume>78</volume>
          (
          <year>2022</year>
          )
          <fpage>429</fpage>
          -
          <lpage>449</lpage>
          . doi:10.1108/JD-01-2022-0022.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>L.</given-names>
            <surname>Waltman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Kaltenbrunner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Pinfield</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. B.</given-names>
            <surname>Woods</surname>
          </string-name>
          ,
          <article-title>How to improve scientific peer review: Four schools of thought</article-title>
          ,
          <source>Learned Publishing</source>
          <volume>36</volume>
          (
          <year>2023</year>
          )
          <fpage>334</fpage>
          -
          <lpage>347</lpage>
          . doi:10.1002/leap.1544.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>T.</given-names>
            <surname>Ross-Hellauer</surname>
          </string-name>
          ,
          <article-title>What is open peer review? A systematic review</article-title>
          ,
          <source>F1000Research</source>
          <volume>6</volume>
          (
          <year>2017</year>
          )
          <elocation-id>588</elocation-id>
          . doi:10.12688/F1000RESEARCH.11369.1.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Tennant</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Dugan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Graziotin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. C.</given-names>
            <surname>Jacques</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Waldner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Mietchen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Elkhatib</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. B.</given-names>
            <surname>Collister</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. K.</given-names>
            <surname>Pikas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Crick</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Masuzzo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Caravaggi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. R.</given-names>
            <surname>Berg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. E.</given-names>
            <surname>Niemeyer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Ross-Hellauer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mannheimer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Rigling</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. S.</given-names>
            <surname>Katz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. G.</given-names>
            <surname>Tzovaras</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Pacheco-Mendoza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Fatima</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Poblet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Isaakidis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. E.</given-names>
            <surname>Irawan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Renaut</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. R.</given-names>
            <surname>Madan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Matthias</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. N.</given-names>
            <surname>Kjaer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. P.</given-names>
            <surname>O'Donnell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Neylon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kearns</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Selvaraju</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Colomb</surname>
          </string-name>
          ,
          <article-title>A multi-disciplinary perspective on emergent and future innovations in peer review</article-title>
          ,
          <source>F1000Research</source>
          <volume>6</volume>
          (
          <year>2017</year>
          )
          <elocation-id>1151</elocation-id>
          . doi:10.12688/F1000RESEARCH.12037.1.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>K.</given-names>
            <surname>Kousha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Thelwall</surname>
          </string-name>
          ,
          <article-title>Artificial intelligence to support publishing and peer review: A summary and review</article-title>
          ,
          <source>Learned Publishing</source>
          <volume>37</volume>
          (
          <year>2024</year>
          )
          <fpage>4</fpage>
          -
          <lpage>12</lpage>
          . doi:10.1002/leap.1570.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>A.</given-names>
            <surname>Checco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bracciale</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Loreti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Pinfield</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Bianchi</surname>
          </string-name>
          ,
          <article-title>AI-assisted peer review</article-title>
          ,
          <source>Humanities and Social Sciences Communications</source>
          <volume>8</volume>
          (
          <year>2021</year>
          )
          <elocation-id>25</elocation-id>
          . doi:10.1057/s41599-020-00703-8.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>I.</given-names>
            <surname>Boukhris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Zaâbi</surname>
          </string-name>
          ,
          <article-title>A GAN-BERT based decision making approach in peer review</article-title>
          ,
          <source>Social Network Analysis and Mining</source>
          <volume>14</volume>
          (
          <year>2024</year>
          )
          <elocation-id>107</elocation-id>
          . doi:10.1007/s13278-024-01269-y.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>Y. J.</given-names>
            <surname>Park</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Kaplan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Ren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.-W.</given-names>
            <surname>Hsu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>Can ChatGPT be used to generate scientific hypotheses?</article-title>
          , arXiv,
          <year>2023</year>
          . doi:10.48550/arXiv.2304.12208. arXiv:2304.12208.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>V.</given-names>
            <surname>Mollaki</surname>
          </string-name>
          ,
          <article-title>Death of a reviewer or death of peer review integrity? the challenges of using AI tools in peer reviewing and the need to go beyond publishing policies</article-title>
          ,
          <source>Research Ethics</source>
          <volume>20</volume>
          (
          <year>2024</year>
          )
          <fpage>239</fpage>
          -
          <lpage>250</lpage>
          . doi:10.1177/17470161231224552.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>A.</given-names>
            <surname>Flanagin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kendall-Taylor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Bibbins-Domingo</surname>
          </string-name>
          ,
          <article-title>Guidance for Authors, Peer Reviewers, and Editors on Use of AI, Language Models, and Chatbots</article-title>
          ,
          <source>JAMA</source>
          <volume>330</volume>
          (
          <year>2023</year>
          )
          <fpage>702</fpage>
          -
          <lpage>703</lpage>
          . doi:10.1001/jama.2023.12500.
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>G. R.</given-names>
            <surname>Latona</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. H.</given-names>
            <surname>Ribeiro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. R.</given-names>
            <surname>Davidson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Veselovsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>West</surname>
          </string-name>
          ,
          <article-title>The AI Review Lottery: Widespread AI-Assisted Peer Reviews Boost Paper Scores and Acceptance Rates</article-title>
          , arXiv,
          <year>2024</year>
          . doi:10.48550/arXiv.2405.02150. arXiv:2405.02150.
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>W.</given-names>
            <surname>Liang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Cao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. Y.</given-names>
            <surname>Ding</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Vodrahalli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. S.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. A.</given-names>
            <surname>McFarland</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zou</surname>
          </string-name>
          ,
          <article-title>Can Large Language Models Provide Useful Feedback on Research Papers? A Large-Scale Empirical Analysis</article-title>
          ,
          <source>NEJM AI</source>
          <volume>1</volume>
          (
          <year>2024</year>
          )
          <elocation-id>AIoa2400196</elocation-id>
          . doi:10.1056/AIoa2400196.
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>S. K. K.</given-names>
            <surname>Santu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. K.</given-names>
            <surname>Sinha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Bansal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Knipper</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Sarkar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Salvador</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Mahajan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Guttikonda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Akter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Freestone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. C. W.</given-names>
            <suffix>Jr</suffix>
          </string-name>
          ,
          <article-title>Prompting LLMs to Compose Meta-Review Drafts from PeerReview Narratives of Scholarly Manuscripts</article-title>
          , arXiv,
          <year>2024</year>
          . doi:10.48550/arXiv.2402.15589. arXiv:2402.15589.
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Burtch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Fazelpour</surname>
          </string-name>
          ,
          <article-title>Take Caution in Using LLMs as Human Surrogates: Scylla Ex Machina</article-title>
          , arXiv,
          <year>2024</year>
          . doi:10.48550/arXiv.2410.19599. arXiv:2410.19599.
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>D. J.</given-names>
            <surname>de Solla Price</surname>
          </string-name>
          ,
          <article-title>Networks of Scientific Papers</article-title>
          ,
          <source>Science</source>
          <volume>149</volume>
          (
          <year>1965</year>
          )
          <fpage>510</fpage>
          -
          <lpage>515</lpage>
          . doi:10.1126/science.149.3683.510.
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>S. K.</given-names>
            <surname>Banshal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gupta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. H.</given-names>
            <surname>Lathabai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. K.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <article-title>Power Laws in altmetrics: An empirical analysis</article-title>
          ,
          <source>Journal of Informetrics</source>
          <volume>16</volume>
          (
          <year>2022</year>
          )
          <elocation-id>101309</elocation-id>
          . doi:10.1016/j.joi.2022.101309.
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [35]
          <string-name>
            <given-names>F.</given-names>
            <surname>Pedregosa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Varoquaux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gramfort</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Michel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Thirion</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Grisel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Blondel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Prettenhofer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Weiss</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Dubourg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Vanderplas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Passos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Cournapeau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Brucher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Perrot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>É.</given-names>
            <surname>Duchesnay</surname>
          </string-name>
          ,
          <article-title>Scikit-learn: Machine Learning in Python</article-title>
          ,
          <source>Journal of Machine Learning Research</source>
          <volume>12</volume>
          (
          <year>2011</year>
          )
          <fpage>2825</fpage>
          -
          <lpage>2830</lpage>
          . URL: http://jmlr.org/papers/v12/pedregosa11a.html.
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [36]
          <string-name>
            <given-names>S. F.</given-names>
            <surname>Olejnik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Algina</surname>
          </string-name>
          ,
          <article-title>Generalized Eta and Omega Squared Statistics: Measures of Effect Size for Some Common Research Designs</article-title>
          ,
          <source>Psychological Methods</source>
          <volume>8</volume>
          (
          <year>2004</year>
          )
          <fpage>434</fpage>
          -
          <lpage>47</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>