<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title/>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Literary Canonicity and Algorithmic Fairness: The Effect of Author Gender on Classification Models</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ida Marie S. Lassen</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pascale Feldkamp Moreira</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yuri Bizzoni</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kristoffer Nielbo</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Center for Humanities Computing, Aarhus University</institution>, <country country="DK">Denmark</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>000</volume>
      <fpage>0</fpage>
      <lpage>0001</lpage>
      <abstract>
        <p>This study examines gender biases in machine learning models that predict literary canonicity. Using algorithmic fairness metrics like equality of opportunity, equalised odds, and calibration within groups, we show that models violate the fairness metrics, especially by misclassifying non-canonical books by men as canonical. Feature importance analysis shows that text-intrinsic differences between books by men and women authors contribute to these biases. Men have historically dominated canonical literature, which may bias models towards associating men-authored writing styles with literary canonicity. Our study highlights how these biased models can lead to skewed interpretations of literary history and canonicity, potentially reinforcing and perpetuating existing gender disparities in our understanding of literature. This underscores the need to integrate algorithmic fairness in computational literary studies and digital humanities more broadly to foster equitable computational practices.</p>
      </abstract>
      <kwd-group>
        <kwd>bias</kwd>
        <kwd>algorithmic fairness</kwd>
        <kwd>gender bias</kwd>
        <kwd>computational literary studies</kwd>
        <kwd>canonicity</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        In recent years, computational literary studies have increasingly utilised machine learning (ML)
models to analyse and classify literary texts, e.g. to predict reader appreciation [
        <xref ref-type="bibr" rid="ref31 ref34">31, 34</xref>
        ] or
literary success [
        <xref ref-type="bibr" rid="ref18 ref21 ref45 ref9">18, 45, 21, 9</xref>
        ] with uptake in applications in the publishing industry1. Models often
rely on text-intrinsic features, contributing to the study of which text characteristics serve as
predictors for a given classification. While other studies have shown that literature assessment
can be biased by gender [
        <xref ref-type="bibr" rid="ref29 ref43">43, 29</xref>
        ] and ethnicity [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], focusing on text-intrinsic characteristics
might seem like a way to avoid such biases as it concentrates solely on the text.
      </p>
      <p>However, seemingly objective features can harbour social biases, reflecting disparities in the
underlying data. The present work examines gender biases in ML models that predict
literary canonicity, demonstrating how the uncritical use of ML models in humanities research can
lead to biased knowledge production, potentially skewing our understanding of literary history
and the phenomenon of canonicity. This has implications beyond academic research, as these
models could influence real-world applications, including the assessment of new manuscripts
by publishers based on predicted success or likeness to existing canon. By integrating insights
from algorithmic fairness into our analysis of predictive models, we aim to highlight the
potential for hidden biases in seemingly objective computational methods. Our analysis
demonstrates how these biases can affect our interpretation of literary history and canon formation,
and we emphasise the importance of critical reflection on ML methodologies in DH research.</p>
      <p>Our findings underscore that the significance of this work lies not only in the practical
application of prediction models but also in exposing the epistemic consequences of using biased
ML models to study literary phenomena. This approach invites researchers to consider how
computational methods may inadvertently reproduce or amplify existing biases in literary
history.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related works</title>
      <sec id="sec-2-1">
        <title>2.1. Predicting canonicity</title>
        <p>
          This study builds on prior research demonstrating the potential of ML classifiers to predict
various literary attributes, such as whether a book belongs to the literary canon, is written
by a Nobel laureate, is a bestseller, is longlisted for given awards, or receives a high rating
on GoodReads [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. To narrow the scope, we will focus on the attempt to predict canonicity.
While various studies focus on classifying canonical works and gauging their textual profile
[
          <xref ref-type="bibr" rid="ref12 ref33 ref6">6, 12, 33</xref>
          ], the limited resources in the literary field are rarely openly available. We thus focus
on one newly published dataset [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ], which served as the foundation for Bizzoni, Feldkamp,
Jacobsen, Thomsen, and Nielbo [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] and provides a rich and diverse collection of features of
literary works.
        </p>
        <p>
          In [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], the focus extended beyond classification accuracy to provide insights into the textual
features important for the classification models, seeking to understand the characteristics that
differentiate canonical from non-canonical books. The study found that “canonical texts have
the most distinctive profile across all dimensions and are therefore the easiest to classify in the
binary classification task” due to their denser nominal style, lower readability, less predictable
sentiment arcs, and higher perplexity.
        </p>
        <p>
          However, it is well-known that canonical literature – like the literary field more broadly –
has historically been dominated by men [
          <xref ref-type="bibr" rid="ref30 ref36 ref37">37, 30, 36</xref>
          ]. Still, studies that seek to predict some
form of canonicity or perceived literary quality rarely include reflections on how biases in
their data inform their results, and the cultural, temporal, or gendered dimensions of texts
are rarely mentioned. While Algee-Hewitt and McGurl [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] show how the “canon” significantly
changes depending on the approach taken, our study highlights the critical oversight of gender
imbalances inherent in literary datasets, which can inadvertently bias model outcomes.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Gender differences in literary texts</title>
        <p>
          Previous research on gender differences in literary texts highlights key issues to avoid. One
concern is treating these differences as fixed and universal markers of men’s and women’s
writing. For example, Burrows [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] shows that gendered patterns in writing styles changed
over time, with distinct differences found before 1860 but not after, indicating that gendered
styles are historically contingent.
        </p>
        <p>
          A second concern is the assumption of a binary gender model, where men’s writing is seen
as the default. Land [
          <xref ref-type="bibr" rid="ref26">26</xref>
          ] critiques such approaches for framing women’s writing as deviant, as
seen in studies like [
          <xref ref-type="bibr" rid="ref25 ref3">3, 25</xref>
          ], which rely on essentialist assumptions and risk reinforcing biased
interpretations of literary styles2.
        </p>
        <p>
          With that being said, studies have found linguistic and stylistic differences between texts by
men and women that are independent of topic and genre [
          <xref ref-type="bibr" rid="ref41">41</xref>
          ]. In the literary domain,
Argamon, Koppel, Fine, and Shimoni [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ] show that a high frequency of pronouns is a “strong female
marker”, which is supported by Newman, Groom, Handelman, and Pennebaker [
          <xref ref-type="bibr" rid="ref35">35</xref>
          ], who also
found that women’s language more frequently includes pronouns, social words, various
psychological process references, and verbs, as well as negations and home-related terms. Men,
on the other hand, used longer words, more numbers, articles, and prepositions than women
(p. 223).
        </p>
        <p>Hiatt [20] examined contemporary (1978) American prose and found that women use twice
as many emotional adverbs compared to men, while men use nearly twice as many pace
adverbs. She concludes that while there is a distinct feminine writing style, there is “far less
basis for labelling the feminine styles as hyperemotional than for labelling the masculine style
hypo-emotional” [20, p. 226].</p>
        <p>
          Hayward [19] tests whether readers can identify an author’s gender and concludes that
gender differences are subtler than genre differences. Koolen [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ] goes deeper into the question
of genre and examines the interaction of gender and genre, especially with regard to “false
labelling”, i.e., that works by women are more often labelled as “women’s books” regardless of
genre [
          <xref ref-type="bibr" rid="ref40">40</xref>
          ]. The findings suggest that while some romantic novels have distinct styles, novels by
women are heterogeneous and not distinguishable from those by men. Considering the
prevalence of biased mechanisms in the literary field (e.g., false labelling), it is possible that readers
focus on similarities among women authors and diferences among men authors rather than
the reverse.
        </p>
        <p>The literature reviewed here highlights the complexity of considering gender differences
in literary texts and not reducing these differences to essentialist notions about “how women
write.” In the following, we will use methods from algorithmic fairness to examine biases
in models used to predict canonicity. We do not claim to establish definitive conclusions
about the general differences between men’s and women’s writing; rather, we emphasise how
modelling a literary phenomenon inevitably mirrors the underlying data and that results could
differ if other datasets were used.</p>
        <p>Although questions about bias and fairness are increasingly discussed in ML
development and ML is increasingly applied in DH, algorithmic fairness insights are rarely
integrated into computational literary studies. While Bagga and Piper [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ] explored the
impact of bias on predictive accuracy and positive prediction balance in literary data, our study
presents a more comprehensive bias analysis informed by the methodologies of algorithmic
fairness. We aim to answer the following research questions:</p>
        <p>• RQ1: To what extent do ML models trained on (imbalanced) literary corpora exhibit
biases on author gender in classification tasks, particularly in predicting canonicity?
• RQ2: Which features in the dataset significantly differ between books by women and
men authors, and how do these features impact the bias in classification models?
These questions are, of course, contingent on the data analysed. Therefore, we zoom out and
include a question that addresses a broader concern:
• RQ3: How does the use of biased ML models affect the knowledge produced in
computational literary studies?</p>
        <p>2We use the terms “women and men authors” instead of the more commonly used “female and male authors” to
distinguish cultural gender (which is examined in this paper) from biological sex.</p>
        <p>
          This study focuses on binary gender categories, including only men and women authors.
We acknowledge that this does not capture the full spectrum of gender identities and that
gender is performative and shaped by discursive practices [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]. However, this approach aligns
with historical perspectives and addresses existing biases between men and women in literary
canonicity.
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Methods</title>
      <sec id="sec-3-0">
        <title>3.1. Data</title>
        <p>
          The dataset used in this work is the Chicago Corpus, which consists of 9,089 novels from
diverse genres published in the US between 1880 and 2000. The corpus was compiled based on the number
of libraries holding each novel, with a preference for more circulated works. The dataset was
made available with a recent paper [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ].3 The canon category is compiled from books by
authors in the Norton Anthology, the Penguin Classics series, and the top 1000 authors mentioned
in English syllabi (collected by the OpenSyllabus project), as shown in Table 1.
        </p>
        <p>A diverse set of stylistic, syntactic and narrative features were used in [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], which found that
“[t]he highest F1 score was achieved when all proposed features were included”. In addition
to these features, we have included normalised frequencies of part-of-speech (PoS) features, as
they have been highlighted as differing in the writings of men and women (see Section 2.2).4
3The textual features, including reception categories like ‘canon’, are described on Github.
4Including features that are potentially strong markers of gender is important because other features can act as
‘proxies’ for these. Ignoring them might not reduce bias, as the model could still pick up on these proxies. Including
them allows for a more comprehensive analysis of potential biases [5].</p>
      </sec>
      <sec id="sec-3-1">
        <title>3.2. Modelling</title>
        <p>
          To replicate the experiments in [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], we employed Random Forest (RF) models for the
classification task. RF models are known for their robustness to overfitting and ability to handle
nonlinear relationships. For fairness analysis, we utilised the Dalex library5, which provides
tools to explain, explore, and mitigate biases in ML models.
        </p>
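<p>The modelling step can be sketched as follows – a minimal illustration with scikit-learn's RandomForestClassifier on synthetic stand-in features. The data, split, and hyperparameters here are placeholders, not the Chicago Corpus or the Dalex pipeline:</p>

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-in for the 36 text-intrinsic features (e.g. perplexity,
# type-token ratio, PoS frequencies); the real study uses the Chicago Corpus.
n_books, n_features = 664, 36
X = rng.normal(size=(n_books, n_features))
# Binary target: 1 = canon, 0 = non-canon (balanced, as in the sampling setup).
y = rng.integers(0, 2, size=n_books)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

# Random Forest, as used for the canonicity classification task.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"accuracy: {accuracy:.2f}")
```

<p>With real features the accuracy would be interpretable; on this random stand-in data it only demonstrates the pipeline shape.</p>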
      </sec>
      <sec id="sec-3-2">
        <title>3.3. Algorithmic Fairness</title>
        <p>In this work, bias is defined as systematic deviations in predictions that favour or disadvantage
one group – here, authors – based on sensitive features (such as gender, ethnicity, religion, etc.).
To address this, we incorporate fairness analyses to identify and examine such biases.</p>
        <p>
          Group fairness is particularly relevant in our context as it seeks equitable treatment across
diferent groups of authors. This approach balances the distribution of treatments and
resources between groups to ensure that predictions do not disproportionately favour or
disadvantage one or multiple social groups [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ]. Equality of opportunity, equalised odds, and
calibration within groups are metrics used to estimate group fairness in predictive models. Integrating
these fairness considerations into DH research is crucial, as biased tools can lead to the
misrepresentation of corpora and minority groups, as highlighted in [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ].
        </p>
        <p>Equality of opportunity ensures that the opportunity to be classified as a true positive
instance is equal for all groups; that is, the true positive rate (TPR) should be the same for all
social groups considered:</p>
        <p>TPR_g = TP_g / (TP_g + FN_g) = TPR_h for all groups g and h. (1)</p>
        <p>In relation to the binary classifier for canonicity, equality of opportunity ensures that the
likelihood of correctly recognising a canon book is equal regardless of whether the book is
written by a man or a woman.</p>
        <p>Equalised odds extends beyond equality of opportunity by ensuring equality of the true
negative rate (TNR) and the false positive rate (FPR) for all the specified groups:</p>
        <p>TNR_g = TN_g / (TN_g + FP_g) = TNR_h and FPR_g = FP_g / (FP_g + TN_g) = FPR_h for all groups g and h. (2)</p>
        <p>For the canonicity classifier, equalised odds ensures that the likelihood of incorrectly classifying
a non-canon book as canon is equal regardless of whether the book is written by a man or a
woman.</p>
        <p>Calibration within groups ensures that the precision of the classifier is balanced, meaning
the proportion of correct positive predictions (true positives) out of all positive predictions is
the same for all groups:</p>
        <p>PPV_g = TP_g / (TP_g + FP_g) = PPV_h for all groups g and h. (3)</p>
        <p>For the canonicity classifier, this means that the books classified as ‘canon’ are actually canon
and that the accuracy of these predictions is consistent across books written by both men and
women.</p>
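<p>The three criteria above all reduce to comparing confusion-matrix rates per author group. A minimal, dependency-free sketch; the labels, predictions, and group tags below are invented toy values:</p>

```python
def group_rates(y_true, y_pred, groups):
    """Compute TPR, FPR and precision (PPV) per group from binary labels."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        tp = sum(1 for i in idx if y_true[i] == 1 and y_pred[i] == 1)
        fn = sum(1 for i in idx if y_true[i] == 1 and y_pred[i] == 0)
        fp = sum(1 for i in idx if y_true[i] == 0 and y_pred[i] == 1)
        tn = sum(1 for i in idx if y_true[i] == 0 and y_pred[i] == 0)
        rates[g] = {
            "TPR": tp / (tp + fn) if tp + fn else 0.0,  # equality of opportunity
            "FPR": fp / (fp + tn) if fp + tn else 0.0,  # equalised odds (with TNR)
            "PPV": tp / (tp + fp) if tp + fp else 0.0,  # calibration within groups
        }
    return rates

# Toy example: 1 = canon. Groups: 'm' = men authors, 'w' = women authors.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 0]
groups = ["m", "m", "m", "m", "w", "w", "w", "w"]
print(group_rates(y_true, y_pred, groups))
```

<p>A fairness audit then compares these per-group rates against each other, which is what Dalex automates.</p>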
        <p>
          Dalex reports various classification outcomes and calculates the fairness metrics outlined
above. The criteria are evaluated using the following:</p>
        <p>ε ≤ (metric for non-privileged group) / (metric for privileged group) ≤ 1/ε (4)</p>
        <p>with ε = 0.8, following the four-fifths rule [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. This threshold is widely used to detect significant
disparities in treatment between groups. The benefit of this approach is that it offers a clear
and standardised benchmark for assessing fairness, while its limitation is that it may not detect
subtle biases and could oversimplify complex fairness issues [
          <xref ref-type="bibr" rid="ref39">39</xref>
          ]. The groups considered in
our experiments are women and men authors, with men authors being the privileged group.
        </p>
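<p>The four-fifths criterion reduces to a bounded ratio check. A sketch, assuming ε = 0.8 as in the paper; the example metric values are invented:</p>

```python
def passes_four_fifths(metric_unprivileged, metric_privileged, epsilon=0.8):
    """Check epsilon <= unprivileged/privileged <= 1/epsilon (four-fifths rule)."""
    if metric_privileged == 0:
        return metric_unprivileged == 0
    ratio = metric_unprivileged / metric_privileged
    return epsilon <= ratio <= 1 / epsilon

# Example: FPR for women (non-privileged) vs. men (privileged) authors.
print(passes_four_fifths(0.10, 0.20))  # ratio 0.5, outside [0.8, 1.25] -> unfair
print(passes_four_fifths(0.18, 0.20))  # ratio 0.9, inside  [0.8, 1.25] -> fair
```

<p>Dalex applies this check to each of the per-group rates; a model is flagged as unfair when any monitored ratio falls outside the band.</p>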
        <p>
          The outlined criteria have been shown to be impossible to satisfy simultaneously, except for
trivial cases [
          <xref ref-type="bibr" rid="ref23 ref32">32, 23</xref>
          ]. This is a challenging finding because it is difficult to justify sacrificing
any of these criteria in a fair classifier. It emphasises the importance of conducting fairness
analysis and interpretation within the specific context of use, considering the underlying data
foundation. We prioritise equalised odds in the canonicity classifier to ensure fair treatment
of men and women authors by balancing FPR and TNR across genders. Without this, one
group could disproportionately influence what is deemed canonical. See Section 5 for further
discussion.
        </p>
        <p>Dalex was also used to estimate feature importance for the canon classifiers, employing a
permutation-based approach to compute feature importance. This assesses the contribution of
each feature to classification outcomes by systematically permuting them and calculating their
impact on model performance.</p>
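<p>Permutation-based importance can be sketched without Dalex: shuffle one feature column at a time and record the mean drop in accuracy. Dalex's exact loss function and defaults may differ, and the toy model below is hypothetical:</p>

```python
import random

def permutation_importance(predict, X, y, n_repeats=5, seed=0):
    """Mean accuracy drop when each feature column is shuffled in turn."""
    rng = random.Random(seed)
    acc = lambda Xs: sum(p == t for p, t in zip(predict(Xs), y)) / len(y)
    baseline = acc(X)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)  # break the feature-target link for column j only
            X_perm = [row[:j] + [col[i]] + row[j + 1:] for i, row in enumerate(X)]
            drops.append(baseline - acc(X_perm))
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy model: predicts 1 iff feature 0 is positive; feature 1 is pure noise.
predict = lambda X: [1 if row[0] > 0 else 0 for row in X]
X = [[1.0, 0.3], [-1.0, 0.5], [2.0, -0.1], [-2.0, 0.9]]
y = [1, 0, 1, 0]
imp = permutation_importance(predict, X, y)
print(imp)  # feature 0 should matter; feature 1 should not
```
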
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results</title>
      <p>
        In the first round of the experiments, we used the same sampling methods as in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] to ensure
balance between the positive and negative class: All 618 canon books are used with a random
sub-sample of 618 non-canon books. This process was repeated 20 times, and the average
accuracy was 0.72 – somewhat reproducing the accuracy of 0.75 reported in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. However, as
the gender distribution is not equal for either the positive or negative class, we cannot rule
out the effect of class imbalance when examining the fairness results, as models trained on
imbalanced datasets often develop a bias favouring the majority class [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ].
      </p>
      <p>Considering this, we bootstrapped a 50-50 gender distribution to conduct a more meaningful
bias analysis. Since the canon group contains few books by women authors (166 vs. 452 by men
authors), we randomly sampled 166 books by men authors from the canon group to achieve
gender balance, alongside 166 books by men and women authors from the non-canon group,
resulting in a total of n = 664. Each sampling selects a random subset of canon books by men
authors and non-canon books, and the entire process, including model training and fairness
analysis, is repeated 20 times. Sampling is conducted with replacement between rounds to
ensure variability between iterations. Hence, all 166 canon books by women authors are used in
each run together with a random subset of canon books by men authors.</p>
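<p>The sampling scheme can be sketched as follows. Note this is a simplification: the paper keeps all 166 canon books by women fixed within each run, whereas this dependency-free sketch resamples every (class, gender) cell with replacement; the book records are placeholders for Chicago Corpus entries:</p>

```python
import random

def sample_balanced(books, n_per_cell=166, seed=0):
    """Draw a gender- and class-balanced sample: each of the four
    (class, gender) cells contributes n_per_cell books, with replacement."""
    rng = random.Random(seed)
    sample = []
    for canon in (True, False):
        for gender in ("w", "m"):
            cell = [b for b in books if b["canon"] == canon and b["gender"] == gender]
            sample.extend(rng.choices(cell, k=n_per_cell))
    return sample

# Placeholder corpus mirroring the reported imbalance:
# 452 canon books by men, 166 by women, plus non-canon books.
books = (
    [{"canon": True, "gender": "m"}] * 452
    + [{"canon": True, "gender": "w"}] * 166
    + [{"canon": False, "gender": "m"}] * 4000
    + [{"canon": False, "gender": "w"}] * 2000
)
sample = sample_balanced(books)
print(len(sample))  # 664 = 4 cells x 166 books
```
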
      <p>When training on a 50-50 gender distribution, the average accuracy on the 20 runs remains
approximately the same, 0.71. One potential reason the accuracy is not affected by a smaller
data sample is that the balanced gender distribution may enhance the model’s ability to
generalise across diferent author groups, counteracting any potential loss of information from the
reduced sample size.</p>
      <sec id="sec-4-1">
        <title>4.1. Fairness</title>
        <p>Out of the 20 models trained on a 50-50 gender distribution for both the positive and negative
classes, 16 models are unfair according to the fairness criteria. Specifically, for 9 of the models,
the FPR is lower for women authors than for men authors, and for 7 models, the TPR is higher
for women authors than for men authors. The FPR results indicate that the models have a
greater tendency to classify non-canon books by men authors as canon, compared to non-canon
books by women authors, violating the equalised odds metric. The higher TPR for women
shows that the proportion of correctly recognised canon books is greater for women authors,
violating the equality of opportunity metric. To gain insights into these results, in the following
section, we summarise the feature distributions in the underlying data and feature importance
of the models.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Feature Importance</title>
        <p>4.2.1. Consistent Statistically Different Features
Before examining the predictive models’ feature importance, we first tested whether the
included features differed between books by women and men authors. To do so, we conducted
a Mann-Whitney U test with Bonferroni correction to account for multiple comparisons. This
was done for each sample process to ensure that the findings were robust and not related to
the random sample. Conducting the test on a 50-50 gender distribution sample rather than the
full (imbalanced) dataset minimises the influence of unequal group sizes, providing a clearer
understanding of each feature without the confounding effects of gender imbalance. The
following features are reported as statistically significant between books by men and women in
the canon set in more than half of the sampling rounds:
• Narrative features: The mean sentiment of all sentences in the book as well as the mean
sentiment of the first and last 10% of the book.
• The normalised frequencies of negation modifiers, auxiliaries, pronouns, verbs, and
nominal subjects.</p>
        <p>• The ratio between verbs and nouns.</p>
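<p>The testing step can be sketched with a dependency-free normal-approximation Mann-Whitney U and a Bonferroni-adjusted threshold. In practice scipy.stats.mannwhitneyu would be used, and the feature values below are invented:</p>

```python
import math

def mann_whitney_u(xs, ys):
    """Two-sided Mann-Whitney U test via the normal approximation
    (adequate for the sample sizes used here; no tie correction)."""
    n1, n2 = len(xs), len(ys)
    combined = sorted((v, 0 if i < n1 else 1) for i, v in enumerate(xs + ys))
    # Assign midranks so tied values share the same rank.
    values = [v for v, _ in combined]
    ranks, i = {}, 0
    while i < len(values):
        j = i
        while j < len(values) and values[j] == values[i]:
            j += 1
        ranks[values[i]] = (i + 1 + j) / 2
        i = j
    r1 = sum(ranks[v] for v in xs)
    u1 = r1 - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return u1, p

# Bonferroni correction across the 36 features: reject at alpha / 36.
alpha, n_tests = 0.05, 36
xs = [0.1, 0.4, 0.2, 0.8, 0.5, 0.9, 0.3, 0.7]  # e.g. a feature, women authors
ys = [1.1, 1.4, 1.2, 1.8, 1.5, 1.9, 1.3, 1.7]  # the same feature, men authors
u, p = mann_whitney_u(xs, ys)
print(p < alpha / n_tests)  # significant after Bonferroni correction
```
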
        <p>Thus, at least some of the 36 text-intrinsic characteristics differ between the canon books
by men and women authors, suggesting that there may be a distinct profile for women and
men canon authors. When performing a Mann-Whitney U test with Bonferroni correction
for multiple comparisons for the whole corpus of 9,000 novels, 31 out of 36 features exhibit a
statistically significant difference between books by men vs. books by women authors. Hence,
there are larger differences between books written by women and men in the whole corpus
than there are in the canon set. Next, we examined each model’s feature importance to see if
the differences in features between men and women drive the observed biases.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Feature Importance in Fair and Unfair Models</title>
        <p>For each model, we analysed feature importance and counted the presence of each feature in
both fair and unfair models, respectively. Using the Dalex Library, which identifies the top 10
most influential features, we counted the presence of these features across all models. Fig. 2
presents an overview of the important features in both fair and unfair models, as well as the
features that are reported as statistically significantly different between author genders within
the canon set.</p>
        <p>
          The frequency of negation modifiers, type-token-ratio, perplexity and approximate entropy
are often reported among the top ten features regardless of whether the classifier is fair or unfair
(w.r.t. the considered fairness criteria). Recalling the findings from [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], our results confirm the
discriminating power of the textual metric perplexity.
        </p>
        <p>The frequency of negation modifiers and auxiliaries is statistically significant between canon
books by men and women authors in all sample runs and an important feature in all and most
fair models, respectively. This might suggest that the canon vs. non-canon signal for these
features is stronger than the gender difference. This may also be the case for the type-token
ratio, which we report to be different between canon books by men and women in 30% of the
sample runs, but important for all models.</p>
        <p>Furthermore, the frequency of relative clause modifiers and the compressibility of the text
are also important features for distinguishing canon books from non-canon books. Both
features are reported more often for the fair models, indicating that despite compressibility being
reported as different for men and women authors in the canon group (in 25% of the sample
runs), this does not explain the observed bias.</p>
        <p>For the unfair models specifically, we find that the stop words and verb frequency are more
important than in the fair models. Verb frequency is reported as statistically significant
between books by men and women canon authors in 60% of the sample runs. It is reported as
important only in unfair models, indicating that relying on this feature might contribute to the
observed biases. Similarly, although the frequency of stop words is only reported as
statistically different in books by women and men canon authors in 5% of the sample runs, it might
still add to the observed biases when combined with other features.</p>
        <p>For the mean sentiment of the first and last 10% of the books, we see that they are
only important for the unfair models, while they are reported to be statistically significantly
different for books by men and women canon authors. This indicates that these features might
contribute to the observed biases. The mean sentiment of all sentences, which is reported as
statistically significantly diferent between men and women authors in all runs, is an important
feature for 20% of both the fair and the unfair models, and we can, therefore, not conclude how
it contributes to biases.</p>
        <p>Moreover, the frequencies of nominal subjects and the verb-noun ratio are reported as
different between canon books by men and women authors. However, these are not important for
the classifiers to tell non-canon from canon books. This suggests that while women canon
authors and men canon authors differ in these features, they are not important predictors for the
canon category as such. On the other hand, features such as approximate entropy, perplexity,
relative clause modifiers, use of stop words, and type-token ratio appear crucial for
determining canonicity. Notably, there are no substantial differences between men and women authors
regarding these features within the canon group, suggesting a shared canon style among men
and women canon writers w.r.t. these features.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Discussion</title>
      <p>The results presented in this paper show that while it is possible to predict canonicity based on
text-intrinsic features, it is crucial to consider social biases in these models, such as the effect
of author gender. Moreover, our results show that research in DH and computational literary
studies can benefit from insights from algorithmic fairness to increase awareness of social
biases ingrained into methods and datasets. In the following, we outline our main findings and
discuss them in relation to earlier work and fairness considerations.</p>
      <sec id="sec-5-1">
        <title>5.1. Features</title>
        <p>
          Regarding the feature importance results, it is important to note that with the 50-50 gender
distribution in this work and the inclusion of PoS frequencies, we do not reproduce the same
feature importances as reported in [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. While perplexity is confirmed as having discriminative
power in our results, nominal style, readability, and predictability of sentiment arcs do not
appear to be significant predictors of canonicity.
        </p>
        <p>
          As the experiments in [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] did not take gender into account, their models have been exposed
to more men authors than women authors – both of the canon and the non-canon group. In
contrast, our bootstrapped sampling process ensures that our models are exposed to an equal
number of texts by men and women authors. Predictability and nominal style were reported
as statistically significantly different in canon books by men and women authors, but these
features did not emerge as predictors of canonicity in our experiments. Therefore, it seems
plausible that by highlighting these exact features, the models in [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] might have picked up
on a style associated with men (canon) authors rather than canonicity itself. However, keep
in mind that our inclusion of PoS features in the analysis may also influence which features
are reported as most important. It is possible that these features remain important but appear
further down the list in our models.
        </p>
        <p>These results underscore the necessity of a careful sampling process when dealing with
imbalanced data. Our bootstrapping method, while straightforward, has the limitation
of discarding data points. A more refined approach would involve up-sampling texts by women
authors, generating additional samples that follow the distribution of the existing women-authored books.</p>
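        <p>The balanced sampling described above can be sketched in a few lines. The snippet below is a minimal illustration, assuming hypothetical column names ("canon", "gender") rather than the study’s actual pipeline:</p>

```python
import pandas as pd

def balanced_sample(df: pd.DataFrame, seed: int = 0) -> pd.DataFrame:
    """Draw a sample with a 50-50 gender split within both the canon and
    non-canon classes by down-sampling the larger gender group per class."""
    parts = []
    for _, class_df in df.groupby("canon"):
        n = class_df.groupby("gender").size().min()  # size of the smaller gender group
        for _, gender_df in class_df.groupby("gender"):
            parts.append(gender_df.sample(n=n, random_state=seed))
    return pd.concat(parts).reset_index(drop=True)

# toy corpus: 6 non-canon men, 2 non-canon women, 3 canon men, 3 canon women
toy = pd.DataFrame({
    "canon":  [0] * 8 + [1] * 6,
    "gender": ["m"] * 6 + ["f"] * 2 + ["m"] * 3 + ["f"] * 3,
})
sample = balanced_sample(toy)  # 2 + 2 non-canon rows, 3 + 3 canon rows
```

        <p>Sampling with replacement (a true bootstrap) would only require passing replace=True to sample.</p>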
      </sec>
      <sec id="sec-5-2">
        <title>5.2. False Positives</title>
        <p>
          Eleven of the 16 unfair models have a higher FPR for men authors than for women authors,
showing a tendency to over-include non-canon men in the canon, rather than non-canon women.
At the same time, the TPR is higher for women authors, indicating that the models have an
easier time recognising canon works by women than by men. Overall, this suggests that
the space of men-authored books is harder to divide into canonical and non-canonical works.
One potential explanation for this is that the distance from the canon group might be larger
for the non-canon women than for the non-canon men. The hypothesis is sketched in Fig.
3. Further work is needed to test whether this is the case, potentially through techniques like
embedding-based clustering of books based on text-intrinsic features used in the present study.
A closer examination of how genre plays into the efect observed is also needed, especially as
a larger distance between canon and non-canon women authors may be due to other efects
related to gender disparity. Women authors are shown to predominantly write in genres such
as romance, children’s literature, and young adult fiction [
          <xref ref-type="bibr" rid="ref28 ref42">42, 28</xref>
          ]. If genres like romance are
dominated by women authors and are less represented in canonical compilations [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], and if
genres are closely related to writing style [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ], the disparity between canon and non-canon
books (such as romance novels) by women authors may be larger.
        </p>
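        <p>The per-group error rates discussed above follow from simple per-gender tallies; a minimal sketch with made-up labels, not the study’s models:</p>

```python
from collections import defaultdict

def group_rates(y_true, y_pred, groups):
    """Per-group false-positive and true-positive rates.
    y_true/y_pred hold 0/1 canon labels; groups holds e.g. "m"/"f"."""
    t = defaultdict(lambda: {"fp": 0, "neg": 0, "tp": 0, "pos": 0})
    for true, pred, g in zip(y_true, y_pred, groups):
        if true == 1:
            t[g]["pos"] += 1
            t[g]["tp"] += int(pred == 1)
        else:
            t[g]["neg"] += 1
            t[g]["fp"] += int(pred == 1)
    return {g: {"FPR": d["fp"] / d["neg"], "TPR": d["tp"] / d["pos"]}
            for g, d in t.items()}

# made-up labels where non-canon men are over-included (higher FPR for "m")
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 0]
groups = ["m", "m", "m", "m", "f", "f", "f", "f"]
rates_by_gender = group_rates(y_true, y_pred, groups)
# "m": FPR 0.5, TPR 0.5; "f": FPR 0.0, TPR 1.0
```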
        <p>
          To avoid naturalising these findings, caution is required when speculating that non-canon
women authors align less with the canon style, and our study does not draw definitive
conclusions about the intrinsic qualities of men’s versus women’s writing. Previous studies have
identified gender differences in texts (see Section 2.2), but these findings do not always
generalise well [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. Our analysis reflects the underlying data of the Chicago Corpus, which
prioritises widely circulated books. If the disparity between canon and non-canon works is greater for
women authors than for men authors, it could result from differential reception [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ] and “false
labelling” of women’s works. This highlights how seemingly objective text-intrinsic features
can embody social biases, as extensively discussed in [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ].
        </p>
      </sec>
      <sec id="sec-5-3">
        <title>5.3. Impossibility considerations</title>
        <p>
          As discussed in Section 3.3, the impossibility theorem of algorithmic fairness [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ] shows that
different metrics for group fairness are incompatible with each other if the distribution of
positives varies between groups – a situation known as unequal base rates. In our experiments, we
ensured equal base rates through the 50-50 gender distribution for both the positive and
negative classes. Despite this, the majority of the models displayed biased predictions based on
author gender. Specifically, the FPR is higher for men authors than for women authors in 11
out of the 16 unfair models, leading to a violation of equalised odds.
        </p>
        <p>
          In real-world scenarios, the base rates are rarely equal. Therefore, addressing such
unfairness often involves accepting lower accuracy, a trade-off known as the parity-accuracy
trade-off [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ]. To balance accuracy and fairness and to choose which fairness metric to prioritise, it
is essential to consider the context of use and the intended goals carefully. For a canonicity
classifier aimed at understanding canonical literature, it is arguably important to avoid unequal
false positives, as this would result in one social group having disproportionate (false) influence
over what represents canonical literature. This consideration supports prioritising equalised
odds, which addresses fairness in terms of error rates across groups.
        </p>
        <p>
          The publishing industry presents another potential use case for binary classifiers
predicting categories such as ‘bestseller’ or ‘quality’ [
          <xref ref-type="bibr" rid="ref2 ref45">45, 2</xref>
          ]. If an ML classifier predicts the success
of new manuscripts, it is still preferable to avoid favouring one group over another, thus
supporting equalised odds. However, if human experts later sort the manuscripts, over-including
false positives is not as harmful as violating equal opportunity (where one group’s positive
instances are more likely to be disregarded). In such a use case, the cost of being falsely
disregarded is higher than being falsely recognised. Therefore, equality of opportunity is crucial to
ensure manuscripts with high potential are equally likely to be recognised, regardless of the
author’s group (e.g., gender, ethnicity).
        </p>
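        <p>The two criteria weighed here differ only in which error rates they constrain. A small check, where the tolerance eps and the rate values are illustrative assumptions rather than results from this study:</p>

```python
def fairness_gaps(rates, eps=0.05):
    """Compare two groups' error rates: equality of opportunity requires
    TPR parity; equalised odds requires both TPR and FPR parity (up to eps)."""
    (_, r1), (_, r2) = rates.items()
    tpr_gap = abs(r1["TPR"] - r2["TPR"])
    fpr_gap = abs(r1["FPR"] - r2["FPR"])
    return {
        "equal_opportunity": tpr_gap <= eps,
        "equalised_odds": tpr_gap <= eps and fpr_gap <= eps,
        "tpr_gap": tpr_gap,
        "fpr_gap": fpr_gap,
    }

# illustrative rates: similar TPRs but a much higher FPR for one group
rates = {"m": {"TPR": 0.70, "FPR": 0.30},
         "f": {"TPR": 0.72, "FPR": 0.15}}
result = fairness_gaps(rates)
# equal opportunity holds, equalised odds is violated by the FPR gap
```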
        <p>For a binary classifier used in the publishing industry, it is also crucial to consider the
fairness criterion of calibration within groups, as it ensures consistency between predicted
probabilities and actual outcomes within each group. Hence, if the classifier consistently predicts a
10% likelihood of bestseller status for manuscripts written by men, then roughly 10% of those
manuscripts should indeed turn out to be bestsellers when checked against the actual data,
and similarly for other groups. Lack of calibration within groups could lead to systematically
overconfident or underconfident predictions for certain groups.</p>
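        <p>A sketch of such a calibration check, using made-up probabilities and outcomes:</p>

```python
def calibration_by_group(probs, outcomes, groups):
    """For each group, compare the mean predicted probability with the
    observed rate of positive outcomes; calibration within groups
    requires the two to stay close for every group."""
    stats = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        mean_pred = sum(probs[i] for i in idx) / len(idx)
        observed = sum(outcomes[i] for i in idx) / len(idx)
        stats[g] = {"mean_pred": mean_pred, "observed": observed,
                    "gap": abs(mean_pred - observed)}
    return stats

# made-up scores: ~10% predicted bestseller probability for both groups,
# but only group "m"'s outcomes actually match that rate
probs    = [0.1] * 20
outcomes = [1] + [0] * 9 + [1, 1, 1] + [0] * 7
groups   = ["m"] * 10 + ["f"] * 10
stats = calibration_by_group(probs, outcomes, groups)
# "m" is well calibrated (gap ~0); "f" is under-predicted (gap ~0.2)
```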
        <p>
          The sections above show how biases can be embedded within ML models used to predict
literary phenomena. While the field of algorithmic fairness can help identify and address such
skewness, it is worth asking some more fundamental questions about the existing approaches
of using imbalanced literary corpora to classify literary works. One thing to keep in mind is that
developing predictive ML models relies on the assumption that a classification
schema exists which can serve as a ground truth. In other words, justifying a canonicity classifier
through its accuracy relies on an acceptance of the distinction between the canon novels and
the non-canon novels and the canon’s historical profile. Such considerations should not be seen
as a dismissal of the idea of a literary canon per se; rather, we aim to encourage reflections
about what happens when a contested classification schema is operationalised into predictive
models. Similar points are addressed by Piper [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]: “[W]hile statistical tests can measure the
functioning of the model (“the extent to which what we are observing exceeds the boundaries
of chance”), they cannot confirm “whether the model is an appropriate approximation of the
phenomenon that one is claiming to observe”” (quoted in [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]).
        </p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion and Future Works</title>
      <p>This study emphasises the critical role of algorithmic fairness in computational literary studies,
especially in addressing gender biases in classification models. Despite balanced training data,
our findings show that ML models still exhibit significant gender biases, misclassifying
non-canon books by men as canon more frequently than those by women, thus violating equalised
odds. This suggests that ignoring gender distribution in literary datasets can bias models
towards associating men-authored writing styles with canonicity and relatedly can lead to
misguided ideas about the textual characteristics of categories like canonical literature.</p>
      <p>Our results reveal that seemingly objective text-intrinsic features can harbour social biases,
highlighting the need to critically reflect on potential biases in datasets and corpora. By
integrating fairness considerations into ML model development and application in computational
literary studies, we can not only improve the reliability of the research results but also foster
inclusivity by ensuring the representation of all social groups, not just those historically
included in established canons (distribution plots for features statistically significant between men and women canon authors are provided in the appendix).</p>
      <p>Further research is needed to understand feature distributions across author genders and
their impact on biases. One approach is to use embedding-based clustering to analyse how
different author genders are located and distributed within and outside of the canon category.</p>
      <p>As pointed out in Section 4.3, some features (approximate entropy, perplexity, relative clause
modifiers, use of stop words, and type-token ratio) appear crucial for determining canonicity
while showing no substantial differences between men and women authors. This suggests a
shared canon style among the canon writers, and future work could examine whether these
features are consistent across different genres or literary movements within the canon and how
this evolves over time.</p>
      <p>A limitation of our experiments is that genres were not considered. Future research should
incorporate genre distinctions to ensure significant features of canon literature are not
conflated with genre-specific ones. This is particularly crucial in the sampling process to avoid
comparing canon books against genre literature.</p>
      <p>
        Another limitation is the influence of pressures from the publishing industry. Research has
shown that women writers often face constraints from publishers regarding their writing style
and subject matter [
        <xref ref-type="bibr" rid="ref44 ref7">44, 7</xref>
          ]. While this requires further investigation, it highlights the social
context shaping how literature is written, published, and distributed – factors that inevitably
influence literary data and the resulting predictive models.
      </p>
    </sec>
    <sec id="sec-7">
      <title>Online Resources</title>
      <p>See https://zenodo.org/records/12699037 for code.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Algee-Hewitt</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>McGurl</surname>
          </string-name>
          .
          <source>Between Canon and Corpus: Six Perspectives on 20th-Century Novels. Literary Lab Pamphlet</source>
          <volume>8</volume>
          . Stanford Literary Lab,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Archer</surname>
          </string-name>
          and
          <string-name>
            <given-names>M. L.</given-names>
            <surname>Jockers</surname>
          </string-name>
          .
          <article-title>The bestseller code: Anatomy of the blockbuster novel</article-title>
          . USA: St. Martin's Press,
          <year>2016</year>
          . doi: 10.5555/3098683.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Argamon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Koppel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Fine</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A. R.</given-names>
            <surname>Shimoni</surname>
          </string-name>
          . “
          <article-title>Gender, genre, and writing style in formal written texts”</article-title>
          .
          <source>In: Text &amp; talk 23.3</source>
          (
          <issue>2003</issue>
          ), pp.
          <fpage>321</fpage>
          -
          <lpage>346</lpage>
          . doi: 10.1515/text.2003.014.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bagga</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Piper</surname>
          </string-name>
          . “
          <article-title>Measuring the effects of bias in training data for literary classification”</article-title>
          .
          <source>In: Proceedings of the 4th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage</source>
          ,
          <source>Social Sciences, Humanities and Literature</source>
          .
          <year>2020</year>
          , pp.
          <fpage>74</fpage>
          -
          <lpage>84</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Barocas</surname>
          </string-name>
          and
          <string-name>
            <given-names>A. D.</given-names>
            <surname>Selbst</surname>
          </string-name>
          . “
          <article-title>Big data's disparate impact”</article-title>
          . In:Calif.
          <string-name>
            <given-names>L.</given-names>
            <surname>Rev</surname>
          </string-name>
          .
          <volume>104</volume>
          (
          <year>2016</year>
          ), p.
          <fpage>671</fpage>
          . doi: 24758720.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J.</given-names>
            <surname>Barré</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-B.</given-names>
            <surname>Camps</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T.</given-names>
            <surname>Poibeau</surname>
          </string-name>
          . “
          <article-title>Operationalizing Canonicity: A Quantitative Study of French 19th and 20th Century Literature”</article-title>
          .
          <source>In: Journal of Cultural Analytics 8.3</source>
          (
          <year>2023</year>
          ). doi: 10.22148/001c.88113.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>I.</given-names>
            <surname>Berensmeyer</surname>
          </string-name>
          “
          <article-title>Authors of Slender Means? Female Authorship in Mid-TwentiethCentury British Fiction”</article-title>
          .
          <source>In: Zeitschrift für Anglistik und Amerikanistik 70.4</source>
          (
          <issue>2022</issue>
          ), pp.
          <fpage>385</fpage>
          -
          <lpage>402</lpage>
          . doi: 10.1515/zaa-2022-2073.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bizzoni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Feldkamp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Jacobsen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. R.</given-names>
            <surname>Thomsen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K.</given-names>
            <surname>Nielbo</surname>
          </string-name>
          . “
          <article-title>Good Books are Complex Matters: Gauging Complexity Profiles Across Diverse Categories of Perceived Literary Quality”</article-title>
          .
          <source>In: arXiv preprint arXiv:2404.04022</source>
          (
          <year>2024</year>
          ).
          doi: 10.48550/arXiv.2404.04022.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bizzoni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Moreira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. R.</given-names>
            <surname>Thomsen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K. L.</given-names>
            <surname>Nielbo</surname>
          </string-name>
          . “
          <article-title>The fractality of sentiment arcs for literary quality assessment: The case of Nobel laureates”</article-title>
          .
          <source>In:Journal of Data Mining &amp; Digital Humanities</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bizzoni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. F.</given-names>
            <surname>Moreira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. M. S.</given-names>
            <surname>Lassen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. R.</given-names>
            <surname>Thomsen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K.</given-names>
            <surname>Nielbo</surname>
          </string-name>
          .
          <article-title>“A Matter of Perspective: Building a Multi-Perspective Annotated Dataset for the Study of Literary Quality”</article-title>
          .
          <source>In: Proceedings of the 2024 Joint International Conference on Computational Linguistics</source>
          ,
          <article-title>Language Resources and Evaluation (LREC-COLING</article-title>
          <year>2024</year>
          ).
          <year>2024</year>
          , pp.
          <fpage>789</fpage>
          -
          <lpage>800</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>K.</given-names>
            <surname>Bode</surname>
          </string-name>
          . “
          <article-title>Why you can't model away bias”</article-title>
          .
          <source>In:Modern Language Quarterly 81.1</source>
          (
          <issue>2020</issue>
          ), pp.
          <fpage>95</fpage>
          -
          <lpage>124</lpage>
          . doi: 10.1215/00267929-7933102.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>J.</given-names>
            <surname>Brottrager</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Stahl</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Arslan</surname>
          </string-name>
          . “
          <article-title>Predicting Canonization: Comparing Canonization Scores Based on Text-Extrinsic and -Intrinsic Features”</article-title>
          .
          <source>In:CEUR Workshop Proceedings</source>
          . Antwerp, Belgium: Ceur,
          <year>2021</year>
          , pp.
          <fpage>195</fpage>
          -
          <lpage>205</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>J.</given-names>
            <surname>Burrows</surname>
          </string-name>
          . “
          <article-title>Textual Analysis”</article-title>
          . In: A Companion to Digital Humanities. John Wiley &amp; Sons, Ltd,
          <year>2004</year>
          . Chap.
          <volume>23</volume>
          , pp.
          <fpage>323</fpage>
          -
          <lpage>347</lpage>
          . doi: 10.1002/9780470999875.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>J.</given-names>
            <surname>Butler</surname>
          </string-name>
          .
          <article-title>Gender Trouble: Feminism and the Subversion of Identity</article-title>
          . Routledge,
          <year>2006</year>
          . doi: 10.4324/9780203824979.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>P.</given-names>
            <surname>Chong</surname>
          </string-name>
          . “
          <article-title>Reading difference: How race and ethnicity function as tools for critical appraisal”</article-title>
          .
          <source>In: Poetics 39.1</source>
          (
          <issue>2011</issue>
          ), pp.
          <fpage>64</fpage>
          -
          <lpage>84</lpage>
          . doi: 10.1016/j.poetic.2010.11.003.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>S.</given-names>
            <surname>Das</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Stanton</surname>
          </string-name>
          , and
          <string-name>
            <given-names>N.</given-names>
            <surname>Wallace</surname>
          </string-name>
          . “
          <article-title>Algorithmic fairness”</article-title>
          .
          <source>In:Annual Review of Financial Economics 15.1</source>
          (
          <issue>2023</issue>
          ), pp.
          <fpage>565</fpage>
          -
          <lpage>593</lpage>
          . doi: 10.1146/annurev-financial-110921-125930.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>P.</given-names>
            <surname>Feldkamp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bizzoni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. R.</given-names>
            <surname>Thomsen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K. L.</given-names>
            <surname>Nielbo</surname>
          </string-name>
          .
          <source>Measuring Literary Quality. Proxies and Perspectives. Report. Darmstadt</source>
          ,
          <year>2024</year>
          . doi: 10.26083/tuprints-00027391.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>V.</given-names>
            <surname>Ganjigunte Ashok</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Feng</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Y.</given-names>
            <surname>Choi</surname>
          </string-name>
          . “
          <article-title>Success with Style: Using Writing Style to Predict the Success of Novels”</article-title>
          .
          <source>In:Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing</source>
          . Seattle, Washington, USA: Association for Computational Linguistics,
          <year>2013</year>
          , pp.
          <fpage>1753</fpage>
          -
          <lpage>1764</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>M.</given-names>
            <surname>Hayward</surname>
          </string-name>
          . “
          <article-title>Are texts recognizably gendered? An experiment and analysis”</article-title>
          .
          <source>In:Poetics 31.2</source>
          (
          <issue>2003</issue>
          ), pp.
          <fpage>87</fpage>
          -
          <lpage>101</lpage>
          . doi: 10.1016/S0304-422X(03)00005-6.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>M. P.</given-names>
            <surname>Hiatt</surname>
          </string-name>
          . “
          <article-title>The feminine style: Theory and fact”</article-title>
          .
          <source>In:College Composition &amp; Communication 29.3</source>
          (
          <issue>1978</issue>
          ), pp.
          <fpage>222</fpage>
          -
          <lpage>226</lpage>
          . doi: 10.2307/356931.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>S.</given-names>
            <surname>Jannatus Saba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. S.</given-names>
            <surname>Bijoy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Gorelick</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ismail</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. S.</given-names>
            <surname>Islam</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M. R.</given-names>
            <surname>Amin</surname>
          </string-name>
          . “
          <article-title>A Study on Using Semantic Word Associations to Predict the Success of a Novel”</article-title>
          .
          <source>In: Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics. Online: Association for Computational Linguistics</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>38</fpage>
          -
          <lpage>51</lpage>
          . doi: 10.18653/v1/2021.starsem-1.4.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>M. L.</given-names>
            <surname>Jockers</surname>
          </string-name>
          .
          <article-title>Macroanalysis: Digital Methods and Literary History. Topics in the digital humanities</article-title>
          . Urbana: University of Illinois Press,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>J.</given-names>
            <surname>Kleinberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mullainathan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Raghavan</surname>
          </string-name>
          . “
          <article-title>Inherent Trade-Offs in the Fair Determination of Risk Scores”</article-title>
          .
          <source>In:8th Innovations in Theoretical Computer Science Conference (ITCS</source>
          <year>2017</year>
          ). Ed. by
          <string-name>
            <given-names>C. H.</given-names>
            <surname>Papadimitriou</surname>
          </string-name>
          . Vol.
          <volume>67</volume>
          . Leibniz International Proceedings in Informatics (LIPIcs). Dagstuhl, Germany: Schloss
          <string-name>
            <surname>Dagstuhl-Leibniz-Zentrum fuer Informatik</surname>
          </string-name>
          ,
          <year>2017</year>
          ,
          <volume>43</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>43</lpage>
          :
          <fpage>23</fpage>
          . doi: 10.4230/LIPIcs.ITCS.2017.43.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>C.</given-names>
            <surname>Koolen</surname>
          </string-name>
          .
          <article-title>Women's books versus books by women</article-title>
          .
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>M.</given-names>
            <surname>Koppel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Argamon</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A. R.</given-names>
            <surname>Shimoni</surname>
          </string-name>
          . “
          <article-title>Automatically categorizing written texts by author gender</article-title>
          ”.
          <source>In: Literary and Linguistic Computing 17.4</source>
          (
          <year>2002</year>
          ), pp.
          <fpage>401</fpage>
          -
          <lpage>412</lpage>
          . doi: 10.1093/llc/17.4.401.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>K.</given-names>
            <surname>Land</surname>
          </string-name>
          . “
          <article-title>Predicting author gender using machine learning algorithms: Looking beyond the binary</article-title>
          ”.
          <source>In: Digital Studies/Le champ numérique 10.1</source>
          (
          <year>2020</year>
          ). doi: 10.16995/dscn.362.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>I. M. S.</given-names>
            <surname>Lassen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. D.</given-names>
            <surname>Kristensen-McLachlan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Almasi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Enevoldsen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K. L.</given-names>
            <surname>Nielbo</surname>
          </string-name>
          . “
          <article-title>Epistemic consequences of unfair tools</article-title>
          ”.
          <source>In: Digital Scholarship in the Humanities 39.1</source>
          (
          <year>2024</year>
          ), pp.
          <fpage>198</fpage>
          -
          <lpage>214</lpage>
          . doi: 10.1093/llc/fqad091.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>I. M. S.</given-names>
            <surname>Lassen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. F.</given-names>
            <surname>Moreira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bizzoni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. R.</given-names>
            <surname>Thomsen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K. L.</given-names>
            <surname>Nielbo</surname>
          </string-name>
          . “
          <article-title>Persistence of Gender Asymmetries in Book Reviews Within and Across Genres</article-title>
          ”.
          <source>In: CEUR Workshop Proceedings</source>
          . Vol.
          <volume>3558</volume>
          .
          <year>2023</year>
          , p.
          <fpage>14</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>I. M. S.</given-names>
            <surname>Lassen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bizzoni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Peura</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. R.</given-names>
            <surname>Thomsen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K. L.</given-names>
            <surname>Nielbo</surname>
          </string-name>
          . “
          <article-title>Reviewer Preferences and Gender Disparities in Aesthetic Judgments</article-title>
          ”.
          <source>In: CEUR Workshop Proceedings</source>
          <volume>3290</volume>
          (
          <year>2022</year>
          ), pp.
          <fpage>280</fpage>
          -
          <lpage>290</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>P.</given-names>
            <surname>Lauter</surname>
          </string-name>
          . “
          <article-title>Race and Gender in the Shaping of the American Literary Canon: A Case Study from the Twenties</article-title>
          ”.
          <source>In: Canons and Contexts</source>
          . Oxford University Press,
          <year>1991</year>
          , pp.
          <fpage>22</fpage>
          -
          <lpage>47</lpage>
          . doi: 10.1093/oso/9780195055931.003.0007.
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>S.</given-names>
            <surname>Maharjan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Arevalo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Montes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. A.</given-names>
            <surname>González</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T.</given-names>
            <surname>Solorio</surname>
          </string-name>
          . “
          <article-title>A Multi-task Approach to Predict Likability of Books</article-title>
          ”.
          <source>In: Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers</source>
          . Ed. by
          <string-name>
            <given-names>M.</given-names>
            <surname>Lapata</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Blunsom</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Koller</surname>
          </string-name>
          . Valencia, Spain: Association for Computational Linguistics,
          <year>2017</year>
          , pp.
          <fpage>1217</fpage>
          -
          <lpage>1227</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>T.</given-names>
            <surname>Miconi</surname>
          </string-name>
          . “
          <article-title>The impossibility of “fairness”: a generalized impossibility result for decisions</article-title>
          ”.
          <source>In: arXiv</source>
          (
          <year>2017</year>
          ). doi: 10.48550/arXiv.1707.01195.
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>M.</given-names>
            <surname>Mohseni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Redies</surname>
          </string-name>
          , and
          <string-name>
            <given-names>V.</given-names>
            <surname>Gast</surname>
          </string-name>
          . “
          <article-title>Approximate Entropy in Canonical and Non-Canonical Fiction</article-title>
          ”.
          <source>In: Entropy 24.2</source>
          (
          <year>2022</year>
          ), p.
          <fpage>278</fpage>
          . doi: 10.3390/e24020278.
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>P.</given-names>
            <surname>Moreira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bizzoni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Nielbo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. M.</given-names>
            <surname>Lassen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Thomsen</surname>
          </string-name>
          . “
          <article-title>Modeling Readers' Appreciation of Literary Narratives Through Sentiment Arcs and Semantic Profiles</article-title>
          ”.
          <source>In: Proceedings of the 5th Workshop on Narrative Understanding</source>
          .
          <year>2023</year>
          , pp.
          <fpage>25</fpage>
          -
          <lpage>35</lpage>
          . doi: 10.18653/v1/2023.wnu-1.5.
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [35]
          <string-name>
            <given-names>M. L.</given-names>
            <surname>Newman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. J.</given-names>
            <surname>Groom</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. D.</given-names>
            <surname>Handelman</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J. W.</given-names>
            <surname>Pennebaker</surname>
          </string-name>
          . “
          <article-title>Gender differences in language use: An analysis of 14,000 text samples</article-title>
          ”.
          <source>In: Discourse Processes 45.3</source>
          (
          <year>2008</year>
          ), pp.
          <fpage>211</fpage>
          -
          <lpage>236</lpage>
          . doi: 10.1080/01638530802073712.
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [36]
          <string-name>
            <given-names>E. L.</given-names>
            <surname>Overgaard</surname>
          </string-name>
          and
          <string-name>
            <given-names>I. M.</given-names>
            <surname>Granum</surname>
          </string-name>
          . “
          <article-title>Kønshierarki i kanonlitteratur: En kvantitativ undersøgelse af køn</article-title>
          ” [Gender hierarchy in canonical literature: A quantitative study of gender].
          <source>In: Dansknoter</source>
          <volume>3</volume>
          (
          <year>2023</year>
          ), pp.
          <fpage>46</fpage>
          -
          <lpage>49</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          [37]
          <string-name>
            <given-names>B. G.</given-names>
            <surname>Pace</surname>
          </string-name>
          . “
          <article-title>The Textbook Canon: Genre, Gender, and Race in US Literature Anthologies</article-title>
          ”.
          <source>In: The English Journal 81.5</source>
          (
          <year>1992</year>
          ), pp.
          <fpage>33</fpage>
          -
          <lpage>38</lpage>
          . doi: 10.2307/819892.
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          [38]
          <string-name>
            <given-names>A.</given-names>
            <surname>Piper</surname>
          </string-name>
          . “
          <article-title>Think small: on literary modeling</article-title>
          ”.
          <source>In: PMLA 132.3</source>
          (
          <year>2017</year>
          ), pp.
          <fpage>651</fpage>
          -
          <lpage>658</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          [39]
          <string-name>
            <given-names>P. L.</given-names>
            <surname>Roth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Bobko</surname>
          </string-name>
          , and
          <string-name>
            <given-names>F. S.</given-names>
            <surname>Switzer III</surname>
          </string-name>
          . “
          <article-title>Modeling the behavior of the 4/5ths rule for determining adverse impact: Reasons for caution</article-title>
          ”.
          <source>In: Journal of Applied Psychology 91.3</source>
          (
          <year>2006</year>
          ), p.
          <fpage>507</fpage>
          . doi: 10.1037/0021-9010.91.3.507.
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          [40]
          <string-name>
            <given-names>J.</given-names>
            <surname>Russ</surname>
          </string-name>
          .
          <article-title>How to Suppress Women's Writing</article-title>
          . Austin, Texas, USA: University of Texas Press,
          <year>1983</year>
          . doi: 10.7560/316252.
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          [41]
          <string-name>
            <given-names>R.</given-names>
            <surname>Sarawgi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Gajulapalli</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Y.</given-names>
            <surname>Choi</surname>
          </string-name>
          . “
          <article-title>Gender Attribution: Tracing Stylometric Evidence Beyond Topic and Genre</article-title>
          ”.
          <source>In: Proceedings of the Fifteenth Conference on Computational Natural Language Learning</source>
          . Ed. by
          <string-name>
            <given-names>S.</given-names>
            <surname>Goldwater</surname>
          </string-name>
          and
          <string-name>
            <given-names>C.</given-names>
            <surname>Manning</surname>
          </string-name>
          . Portland, Oregon, USA: Association for Computational Linguistics,
          <year>2011</year>
          , pp.
          <fpage>78</fpage>
          -
          <lpage>86</lpage>
          . doi: 10.5555/2018936.2018946.
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          [42]
          <string-name>
            <given-names>M.</given-names>
            <surname>Thelwall</surname>
          </string-name>
          . “
          <article-title>Book genre and author gender: Romance&gt;Paranormal-Romance to Autobiography&gt;Memoir</article-title>
          ”.
          <source>In: Journal of the Association for Information Science and Technology 68.5</source>
          (
          <year>2017</year>
          ), pp.
          <fpage>1212</fpage>
          -
          <lpage>1223</lpage>
          . doi: 10.1002/asi.23768.
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          [43]
          <string-name>
            <given-names>S.</given-names>
            <surname>Touileb</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Øvrelid</surname>
          </string-name>
          , and
          <string-name>
            <given-names>E.</given-names>
            <surname>Velldal</surname>
          </string-name>
          . “
          <article-title>Gender and sentiment, critics and authors: a dataset of Norwegian book reviews</article-title>
          ”.
          <source>In: Proceedings of the Second Workshop on Gender Bias in Natural Language Processing</source>
          . Barcelona, Spain (Online): Association for Computational Linguistics,
          <year>2020</year>
          , pp.
          <fpage>125</fpage>
          -
          <lpage>138</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>
          [44]
          <string-name>
            <given-names>G.</given-names>
            <surname>Tuchman</surname>
          </string-name>
          and
          <string-name>
            <given-names>N. E.</given-names>
            <surname>Fortin</surname>
          </string-name>
          .
          <article-title>Edging women out: Victorian novelists, publishers and social change</article-title>
          . Vol.
          <volume>13</volume>
          . Oxfordshire, England, UK: Routledge,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref45">
        <mixed-citation>
          [45]
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Yucesoy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Varol</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Eliassi-Rad</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.-L.</given-names>
            <surname>Barabási</surname>
          </string-name>
          . “
          <article-title>Success in books: predicting book sales before publication</article-title>
          ”.
          <source>In: EPJ Data Science 8.1</source>
          (
          <year>2019</year>
          ), p.
          <fpage>31</fpage>
          . doi: 10.1140/epjds/s13688-019-0208-6.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>