<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>When the Scale is Unclear - Analysis of the Interpretation of Rating Scales in Human Evaluation of Text Simplification</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Regina Stodden</string-name>
          <email>regina.stodden@hhu.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Heinrich Heine University Düsseldorf</institution>
          ,
          <addr-line>Universitätsstraße 1, 40225 Düsseldorf</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2021</year>
      </pub-date>
      <fpage>84</fpage>
      <lpage>95</lpage>
      <abstract>
        <p>In the evaluation of text simplification, human ratings are of the highest importance as automatic metrics are not yet sufficient. However, so far, no best practices for human evaluation of text simplification exist. Hence, several different rating scales and definitions of evaluation dimensions are used to evaluate text simplification system outputs. Also, the scales lack analysis regarding their reliability and interpretation. Therefore, in this paper, we analyse the interpretation of the scales of the evaluation dimensions meaning preservation and simplicity based on simplification pairs with no change. Our analysis shows that annotators interpreted the scale of the simplicity dimension differently: on the one hand, the lowest value was interpreted to describe that the simplified sentence is more complex than the original sentence, and on the other hand, that the simplified sentence is as complex as the original sentence. Overall, the paper emphasises that best practices for human evaluation of text simplification are needed to reduce misinterpretation of the scales.</p>
      </abstract>
      <kwd-group>
        <kwd>text simplification</kwd>
        <kwd>human evaluation</kwd>
        <kwd>scale interpretation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Text simplification is the manual or automatic process of generating a simpler version of a
complex text or sentence while preserving its meaning. Simplified texts are easier to understand,
for example, for non-native speakers or people with lower literacy. Besides simplicity, meaning
preservation and grammaticality are important criteria for a good simplification of a text. Thus,
these criteria are also used to evaluate automatic text simplification systems [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. For this purpose, the
original text and its generated simplified version are aligned into a simplification pair. This pair
can be evaluated manually or automatically [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        So far, manual evaluation is the most reliable method to
judge text simplification [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], as, for example, the existing automatic evaluation metrics either
focus only on lexical changes, e.g., SARI [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], or on meaning preservation, e.g., BLEU [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
Nevertheless, human evaluation also has its weaknesses because no best practice for text simplification
evaluation exists. Currently, three dimensions are most often used in research, i.e., meaning
preservation, simplicity and fluency [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Even though there is agreement on these dimensions, there
is no agreement on the question and scale used for evaluation (see [
        <xref ref-type="bibr" rid="ref4 ref5 ref6 ref7">4, 5, 6, 7</xref>
        ]). Even though Likert
scales [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] are often used in text simplification evaluation and other evaluation tasks, many
options exist for using and interpreting a Likert scale [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>In this paper, we analyse the interpretation of different existing scales, including Likert scales,
of human evaluation in 6 text simplification datasets. We investigate whether different scale
interpretations exist by looking at human ratings of simplification pairs for which the original
and the simplified sentences are identical. In detail, we answer the following research questions:
I) Do human annotators agree on one label in the judgment of simplicity of identical sentence
pairs, e.g., the middle or the lowest score value? II) Do human annotators agree on one label in
the judgment of meaning preservation of identical sentence pairs, i.e., the highest score value?
III) Do human annotators stick to their interpretation of a rating scale in all of their ratings?</p>
      <p>In the following, we will first summarise the state of the art in current manual evaluation
of text simplification. Then, we describe our methods and data and build our hypotheses.
Afterwards, we present our results, conclude with a final interpretation and discussion of
the results, and mention possible future work.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        The human evaluation of natural language processing tasks is very costly and time-consuming;
hence, automatic metrics are developed and optimised. For text simplification evaluation,
several directions also exist, e.g., evaluation on multiple references [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], evaluation without
any reference [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] or evaluation of structural simplifications [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. However, all of these metrics still
have some limitations; hence, they should only be used for quickly comparing and assessing
different text simplification systems [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        For a more detailed evaluation, human evaluation is required. In human text simplification
evaluation, common evaluation dimensions exist, i.e., meaning preservation, simplicity, and
grammaticality [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], but there is no agreement on the questions asked per evaluation dimension
or the scale used for evaluation.
      </p>
      <p>
        For the same dimension, e.g., fluency (also called grammaticality), several definitions and
questions exist: in [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], they ask the raters if the output sentence is grammatical, [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] ask if the
simplified sentence is grammatical and fluent, and [
        <xref ref-type="bibr" rid="ref13">13</xref>
          ] state that ”fluency indicates if the output
is syntactically correct”. Even though the statements sound similar, they emphasise different points
and, hence, the raters may focus on different points during the rating. Especially if a rater is
not an expert in text simplification, minor differences may lead to incomparable results. There
is also a discussion of whether sentence pairs should be rated by experts or crowd workers of
the target group [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        Furthermore, there is no agreement on a rating scale: most approaches prefer Likert scales
(see [
        <xref ref-type="bibr" rid="ref5 ref6">6, 5</xref>
        ]) but others prefer continuous scales (see [
        <xref ref-type="bibr" rid="ref7">7</xref>
          ]. However, Likert scales are also used
differently, e.g., a scale ranging from 1 to 5 (see [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]) or -2 to +2 (see [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]). Following [
        <xref ref-type="bibr" rid="ref9">9</xref>
          ], Likert scales can
also be used differently regarding other aspects, e.g., single-item vs. multi-item, same distance
between consecutive points (ordinal vs. interval), odd or even number of points, each point
labeled vs. only end points labeled, descending vs. ascending order, negatively or positively
stated items.
      </p>
      <p>
        In text simplification evaluation, the most common rating scales are 5-point Likert scales,
e.g., [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], a scale from -2 to +2, e.g., [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], and a continuous scale from 0 to 100, e.g., [
        <xref ref-type="bibr" rid="ref14 ref7">14, 7</xref>
        ]. On
the one hand, [
        <xref ref-type="bibr" rid="ref7">7</xref>
          ] argue that a continuous scale leads to more consistency in inter-annotator
agreement in text simplification evaluation, as already shown for machine translation. On the
other hand, [
        <xref ref-type="bibr" rid="ref6">6</xref>
          ] prefer a Likert scale with negative to positive scale points including a neutral
middle point because it is helpful for rating sentence pairs in which the simplified sentence
is more complex than or equally complex as the original sentence. However, both scales include a middle
point. Following [
        <xref ref-type="bibr" rid="ref15 ref9">9, 15</xref>
          ], annotators interpret the middle point as, e.g., ”undecided”,
”neutral”, or ”no opinion”, which might not always be the interpretation the scale developers
have intended. Overall, the different scales and their interpretations make it difficult to compare
the ratings of different system outputs and, therefore, distort text simplification evaluation.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Method</title>
      <sec id="sec-3-0">
        <title>3.1. Data</title>
        <p>
          As we want to analyse the interpretation of rating scales by different annotators, a dataset
with ratings of at least 2 annotators is required. Therefore, in our analysis, we focus on
QATS [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]1, HSplit [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]2, PWKP test [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]3, ASSET [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]4, human-likert and system-likert [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]5,
and Fusion [
        <xref ref-type="bibr" rid="ref18 ref19">18, 19</xref>
        ]6. An overview of their relevant evaluation dimensions, scales and number
of raters per dataset is given in Table 1.
      </p>
        <p>Additionally, in all datasets, grammaticality is also rated. However, it is rated only absolutely
on the simplified sentence and not in relation to the original sentence, so we do not consider it
in the analysis. The simplicity rating of QATS is also not considered for the same reason.</p>
        <p>1The data is available online https://qats2016.github.io/shared.html.</p>
        <p>2The human judgements of HSplit are available online https://github.com/eliorsulem/simplification-acl2018.</p>
        <p>
          3Due to a currently dead link to the system outputs of the sentence pairs, we instead copied the system outputs
provided in EASSE [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ] in the given order. However, the sentence pairs of 2 system outputs could not be found.
Hence, our version of the dataset contains only 500 sentence pairs. The human judgements are available online
https://github.com/eliorsulem/SAMSA/blob/master/Human_evaluation_benchmark.ods. The original sentences and
system outputs are available in EASSE https://github.com/feralvam/easse/tree/master/easse/resources/data.
        </p>
        <p>4The human judgements of ASSET are available online https://github.com/facebookresearch/asset/tree/master/human_ratings.</p>
        <p>5The human judgements are available online http://dl.fbaipublicfiles.com/questeval/simplification_human_evaluations.tar.gz.</p>
        <p>6The data will be available here https://cs.pomona.edu/~dkauchak/simplification/. Currently, it is only available upon request from the authors.</p>
      </sec>
      <sec id="sec-3-1">
        <title>3.2. Hypotheses</title>
        <p>
          The dataset selection already showed the differences between the human evaluation approaches in text
simplification. Even though the name and the idea behind the evaluation dimensions are very similar,
the judgements are collected I) on scales with different sizes, i.e., 3, 5 and 100, II) on scales
with different point names, i.e., ”good” to ”bad” or ”strongly disagree” to ”strongly agree”,
III) by crowd workers or experts, IV) on different item types, i.e., questions or statements,
V) on different types of simplification pairs, i.e., manually or automatically simplified sentences,
VI) on sources which are reused for text simplification, e.g., English Wikipedia and Simple
English Wikipedia (in HSplit), or which are directly designed for text simplification (in ASSET,
human-likert), and VII) on sentence pairs with different aspirations regarding the simplicity level, e.g.,
the simplified sentence must be simpler or the simplified sentence can also be more complex.
These points make it difficult to compare judgements of text simplification
systems reported in system papers. In the following, we will analyse whether more problems in human
evaluation exist. Therefore, we analyse whether the annotators consistently understand the scales in
each of the datasets.</p>
        <p>To analyse the interpretation of the scales, we compare the ratings of simplification pairs
in which no change was made from the original to the simplified sentence. These sentence
pairs are further called no-change pairs. As complexity assessment is a subjective task, different
ratings of the simplifications are expected. But if the simplified sentence is identical to the
original sentence, the rating can be expected to be the same, because it is not the absolute simplicity
of the sentence that is measured but the change/simplification, which does not exist in this case.
Hence, we use the no-change pairs of the datasets to check whether different interpretations of
the rating scales exist. An overview of the proportion of no-change pairs per dataset and the
datasets' sizes is given in Table 2.</p>
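        <p>To make the selection criterion concrete, the following is a minimal sketch of how no-change pairs could be collected; the record layout (keys such as original and simplified) is our illustrative assumption and not the format of the released datasets:</p>
        <preformat>
# Hypothetical record layout: each pair holds the source sentence,
# the (possibly unchanged) output sentence, and the human ratings.
def no_change_pairs(pairs):
    """Return the pairs whose simplified sentence is identical to the original."""
    return [
        pair for pair in pairs
        if pair["original"].strip() == pair["simplified"].strip()
    ]
        </preformat>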
        <p>We will focus on the analysis of the evaluation dimensions of simplicity and meaning
preservation. The interpretation of the grammaticality dimension could not be analysed as, in all
datasets, grammaticality was only rated for the simplified sentence but not for the original
sentence. In the analysis, we will verify the following hypotheses, which are based on the
dataset and scale descriptions in the previous section.</p>
        <p>Hypothesis 1: In HSplit and Fusion, the simplicity ratings of no-change pairs are equal to
the neutral element, i.e., 0.</p>
        <p>
          The simplicity ratings in HSplit are judged on a scale ranging from -2 to +2 including the neutral
element 0. Following the scale definition in [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ], the neutral element of the scale indicates that
the simplicity of the original and the simplified sentence of a pair are the same. Hence, we
hypothesise that the simplicity ratings of no-change pairs in HSplit and Fusion are equal to
0. A score of -2 indicates a more complex simplified sentence and +2 an easier simplified
sentence compared to the original sentence.
        </p>
        <p>
          Hypothesis 2: In ASSET, human-likert, and system-likert, the simplicity ratings of no-change
pairs are equal to the lowest element of the scale, i.e., 0, as it indicates the worst simplification.
        </p>
        <p>
          In ASSET, human-likert, and system-likert, the annotators rate the relative simplicity of a
simplification pair based on their level of agreement with a given statement. The scale ranges from 0
(strongly disagree) to 100 (strongly agree). Hence, the lowest value indicates a rejection of the
statement, which is interpretable as the worst simplification. In the rating instructions [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ], the
question is raised how to annotate the sentence pair if the original and the simplified sentence
are exactly the same. The answer refers to the formulation of the dimension that some change
should have been made. However, it does not indicate an expected behaviour of the annotators,
e.g., not judging an identical pair or judging it with a specific value. Hence, we can only assume
that the lowest score, i.e., 0, indicates both that the simplified sentence is more complex than the
original sentence and that the simplified sentence is as simple/complex as the original
sentence. Following this interpretation, a score of 50 would indicate that the simplified sentence
is roughly 50% simpler than the original sentence.
        </p>
        <p>Hypothesis 3: The meaning preservation rating is equal to the maximum element in QATS,
HSplit, PWKP test, ASSET, human-likert, system-likert, and Fusion.</p>
        <p>In no-change pairs, the meaning of the original sentence is exactly the same as in the simplified
sentence. As meaning preservation measures, in all corpora, the extent to which the meaning
is preserved in the simplified sentence compared to the original one, we hypothesise the highest
possible value for no-change pairs in the evaluation dimension of meaning preservation. The
highest possible value for QATS and PWKP test7 is 3, for HSplit 5, and for ASSET, human-likert
and system-likert 100, respectively.</p>
        <p>
          7In PWKP test, the meaning preservation score is based on the averaged reversed ratings of information gain
and information loss (see [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]).
        </p>
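        <p>To illustrate this reversal, a small sketch under our own assumptions (the 1 to 3 scale bounds and all names are illustrative and not taken from [11]):</p>
        <preformat>
def reverse(rating, max_point=3, min_point=1):
    """Reverse a rating on a closed scale, e.g., 1 becomes 3 and 3 becomes 1."""
    return (max_point + min_point) - rating

def pwkp_meaning_preservation(info_gain, info_loss):
    """Average the reversed information-gain and information-loss ratings."""
    return (reverse(info_gain) + reverse(info_loss)) / 2
        </preformat>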
        <p>
          In contrast, the lowest possible values would indicate that the simplified sentence has a
completely different meaning than the original sentence. Even if the scale has a middle element,
this element does not have to indicate a neutral element as for the simplicity scale in HSplit.
Following [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], the middle element can also express the indecision of the rater, which is more likely in
this case.
        </p>
        <p>Hypothesis 4: If different interpretations of the scales exist, the rater groups' ratings
significantly differ for sentence pairs in which the original and the simplified sentences are not
identical.</p>
        <p>
          If at least one of the previous hypotheses can be disproved, the rating behaviour of the
annotators will be analysed in more detail. Deviations from the scores assumed in the hypotheses lead to
the assumption that the raters understood the rating scales differently. To evaluate the extent
of the misunderstanding, we compare the ratings per sentence pair, including sentence pairs
with changes, of different rating groups, e.g., groups preferring the highest or the middle value of the
scale. For example, if a rater group rated the simplicity of no-change pairs of ASSET with 50
and not with the assumed score of 0, we have a closer look at their simplicity ratings on pairs with
a change. If a rater group prefers 50 for no-change pairs, they most likely annotate the pairs
with a change differently than the rater group preferring 0. Hence, it is hypothesised that the
ratings of such rater groups significantly differ from each other.
        </p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results and Discussion</title>
      <p>
        Each of the selected datasets contains some no-change pairs which are rated by a different
number of annotators. An overview of the number of no-change pairs and annotators of
no-change pairs per dataset is provided in Table 2. For the ratings of ASSET, human-likert, and
system-likert, we normalised the human judgements by each annotator's individual mean and standard
deviation, following the description in [
        <xref ref-type="bibr" rid="ref14 ref7">7, 14</xref>
        ]. In the following, we will analyse the raters’
interpretation of the dimensions simplicity and meaning preservation to disprove or corroborate
the hypotheses.
      </p>
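      <p>For illustration, a minimal sketch of this per-annotator normalisation (a standard z-score; the data layout and names are ours and not taken from [7, 14]):</p>
      <preformat>
import statistics

def normalise_per_annotator(ratings_by_annotator):
    """Z-score each annotator's ratings by their own mean and standard deviation."""
    normalised = {}
    for annotator, ratings in ratings_by_annotator.items():
        mean = statistics.mean(ratings)
        stdev = statistics.stdev(ratings)  # assumes at least two distinct ratings
        normalised[annotator] = [(r - mean) / stdev for r in ratings]
    return normalised
      </preformat>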
      <sec id="sec-4-1">
        <title>4.1. Simplicity Rating</title>
        <p>In HSplit, the ratings of the experts are consistent and corroborate Hypothesis 1. All ratings of
the 346 identical sentence pairs agree on the assumed neutral value of 0, except for one annotator
in three of overall 7840 annotation records (0.03%).</p>
        <p>In Fusion, on average, only 6 of 338 no-change sentence pairs (1.88%) are not scored with
the neutral value as assumed in Hypothesis 1. The overall average score of all no-change pairs'
simplicity judgements of all three annotators is equal to -0.0026±0.05. Interestingly, deviations
in both directions exist, i.e., towards more simple and towards more difficult.</p>
        <p>In ASSET, the annotators do not agree in their ratings of the no-change pairs on the
dimension of simplicity. For each pair, roughly half of the annotators decide on the minimum
value, as hypothesised, and roughly the other half on the middle value. One annotator
per no-change pair rates simplicity with the highest possible score. In contrast to Hypothesis 2,
the simplicity ratings in ASSET are not always equal to the lowest element.</p>
        <p>Similar to the annotators' behaviour in ASSET, in human-likert and system-likert, the
annotators can be split into three rating groups: preferring 0, 50, or 100 for no-change pairs.
Again, in contrast to Hypothesis 2, the simplicity ratings in human-likert and system-likert
are not always equal to the lowest element. Further analysis is required to check whether these
ratings are made by mistake or due to different scale interpretations (see subsection 4.3).</p>
        <p>
          The results of these datasets show that some crowd workers and experts interpret the
simplicity scale as hypothesised. In contrast, the number of points, and whether the points range from negative to
positive or are only positive, seem to influence the interpretation of the scale: both datasets with a
scale ranging between -2 and +2 achieved a higher consistency than those with the scale ranging between
0 and 100. The different interpretations might be due to different understandings of the middle
point of the scale [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ].
        </p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Meaning Preservation Rating</title>
        <p>For the dimension of meaning preservation, the human ratings in human-likert, system-likert
and ASSET largely meet the values assumed in the hypotheses, i.e., close to 100. For all 5 identical
pairs, more than 80% of the raters give a score in the maximum category (80 to 100). But for some of the
sentence pairs, one of the annotators rates the meaning preservation either with a value
between 0 and 19 or between 40 and 59. Hence, a small proportion also interpreted the scale differently
than hypothesised in Hypothesis 3. In comparison to the simplicity rating scale, the meaning
preservation scale seems easier to understand, which might be due to a clearer formulation of
the scale item.</p>
        <p>The annotators of HSplit again all agree on the same rating, here the maximum value, except
for 8 out of 346 identical pairs (2.31%). Furthermore, in QATS all no-change pairs are rated with
the highest possible value, which is ”good”. In Fusion, 15 of the 338 no-change pairs (4.43%) were
rated with a different value than the highest value. The overall average score of all no-change
pairs' meaning preservation judgements of all three annotators is equal to 4.98±0.12. Hence,
Hypothesis 3 can also be corroborated for HSplit, QATS and Fusion.</p>
        <p>In contrast, in PWKP test, the ratings are below the values hypothesised. Each of the
annotators rated the no-change pairs with a score ranging on average from 2.275 to 2.525. Three
of the five annotators annotated half of the no-change pairs with the highest value, but another
rater only selected it for 30% of the pairs. Hence, for PWKP test, Hypothesis 3 is disproved.
However, it must be considered that the alignment of PWKP test was reproduced. Hence, the
results of PWKP test must be interpreted with caution because the observed effects might be due to a
misalignment.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Consistent Interpretations</title>
        <p>Roughly half of the annotators in ASSET, human-likert and system-likert annotated
simplicity with the lowest value and the other half with the middle value. As stated in Hypothesis
4, we analyse whether the annotators stick to their scale interpretation or not.</p>
        <p>In system-likert and human-likert, 16 of 34 annotators rated more than one no-change pair
on the simplicity dimension. 10 of the 16 annotators are consistent in their ratings (see Table 3):
they rated all of their no-change pairs either with a score between 0 and 19 or between 40 and 59. Looking
closer at the ratings, 5 of the 10 raters decided on a score between 0 and 19 for all of their
no-change pairs, as hypothesised in Hypothesis 2, and the other half on a score between 80 and
100. However, 6 of the 16 raters alternate between the lowest, middle or highest value; hence,
they seem to have no clear scale interpretation.</p>
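        <p>To make the consistency criterion concrete, a small sketch under our own assumptions (binning scores by the 20-point categories used above; the names are illustrative):</p>
        <preformat>
def rating_bin(score, width=20):
    """Map a 0-100 score to a coarse category: 0-19, 20-39, ..., 80-100."""
    return min(int(score) // width, 4)

def is_consistent(no_change_scores):
    """A rater is consistent if all their no-change ratings fall into one category."""
    return len({rating_bin(s) for s in no_change_scores}) == 1
        </preformat>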
        <p>
          In ASSET, 20 crowd workers annotated more than one no-change pair. 13 of them always
annotated the same value for all simplicity ratings of their no-change pairs (see Table 4). Similar
to system-likert and human-likert, the annotators are split into nearly equally sized groups
preferring either the lowest or the highest value for simplicity. Overall, we can confirm that
different simplicity scale interpretations occur in system-likert, human-likert and ASSET. The
different understandings of the lowest value might be due to an unintended misinterpretation
of the middle value [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ].
        </p>
        <p>To further investigate the different scale interpretations also on simplification pairs with a
change, we divided the raters into groups based on their preferred score on the no-change pairs,
i.e., preference-1 and preference-50. The groups are compared sentence-wise on the evaluation
dimension of simplicity.</p>
        <p>In the averages of the simplicity ratings of both groups, the different interpretations are also
present. In system-likert and human-likert, the rater group preference-1 (5 raters, 911 ratings,
M=52.87±40.18) has an overall lower simplicity average than the rater group preference-50
(5 raters, 634 ratings, M=63.77±33.88) on simplification pairs with a change. The same applies
to ASSET: preference-1 (7 raters, 571 ratings, M=35.58±37.24), preference-50 (6 raters,
292 ratings, M=44.43±33.22).</p>
        <p>Comparing all sentence pairs with changes rated by both rater groups using a
Mann-Whitney U test, the simplicity ratings differ significantly between both groups in system-likert
and human-likert (U=252213.0, p≤0.01) and in ASSET (U=64127.0, p≤0.01). Hence, it seems that
both groups interpret the simplicity scale differently but apply their respective interpretations to
all rated pairs. Hypothesis 4 can be corroborated.</p>
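        <p>A minimal sketch of this group comparison (the grouping threshold and all names are our illustrative assumptions; only the use of the Mann-Whitney U test itself follows the description above):</p>
        <preformat>
from statistics import mean
from scipy.stats import mannwhitneyu

def split_by_preference(raters, threshold=25):
    """Group raters by their average simplicity score on no-change pairs."""
    low, high = [], []
    for rater in raters:
        if mean(rater["no_change_scores"]) >= threshold:
            high.append(rater)   # e.g., the preference-50 group
        else:
            low.append(rater)    # e.g., the preference-1 group
    return low, high

def compare_groups(scores_low, scores_high):
    """Mann-Whitney U test on the two groups' ratings of pairs with changes."""
    u_stat, p_value = mannwhitneyu(scores_low, scores_high,
                                   alternative="two-sided")
    return u_stat, p_value
        </preformat>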
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion and Future Work</title>
      <p>Concerning the research questions asked, in the datasets analysed, human annotators (experts
and crowd workers) mostly agree on one label, i.e., the highest value of the scale, in the
judgements of meaning preservation. In contrast, the analysis has also shown that different
scale interpretations exist for the evaluation dimension of simplicity in the datasets with
crowdsourced human ratings on a continuous scale. Some raters prefer the lowest value and some the
middle value of the scale to indicate the same level of simplicity in no-change pairs. However,
the values are not chosen randomly: a clear distinction between raters who annotate the lowest
or the neutral element on several no-change pairs is possible. This leads to the assumption that they
did not rate the lowest or the middle element by mistake but understood the scale differently.</p>
      <p>Following the analysis results, on the one hand, the interpretation of the simplicity scale is
consistent when rated by experts or when using a neutral element for simplicity. On the other hand,
crowd workers had different interpretations of the simplicity scale, i.e., either the lowest or the
middle element of the scale indicates no change in simplicity. The scale and the annotations could
also be made clearer, e.g., by reformulating the definition or the scale endpoints; the crowd workers could
become more certain by seeing more examples before the annotation; or one could rely only on
(trained) experts.</p>
      <p>In contrast, the expert ratings in HSplit and PWKP test and the crowd worker ratings in Fusion
regarding all evaluation dimensions, and the ratings in ASSET regarding meaning preservation,
are congruent with the values hypothesised. Overall, a deeper analysis of the interpretation of
human rating scales in text simplification is required. Therefore, a user study could be conducted
in which several sentence pairs with and without changes would be rated on different scales or
with different instructions by crowd workers and experts.</p>
      <p>
        Not only the different scale interpretations by human raters but also the different
implementations of the scales limit the comparison of human evaluations of text
simplification. Hence, best practices, as published e.g. for natural language generation [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ], are
in high demand for text simplification. We hope that this paper increases the awareness of
problems in text simplification evaluation and kicks off a discussion regarding these challenges,
e.g., training of human annotators or showing examples to them, developing clear and precise
statements or questions for evaluation dimensions, the best number of points of a scale, e.g., 0 to
100 or -2 to +2, or rating by experts or crowd workers.
      </p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>This research is part of the PhD program ”Online Participation”, supported by the North
Rhine-Westphalian (German) funding scheme ”Forschungskolleg”. We thank the anonymous and
non-anonymous reviewers for their valuable feedback during the preparation of this paper.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>F.</given-names>
            <surname>Alva-Manchego</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Scarton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Specia</surname>
          </string-name>
          ,
          <article-title>Data-driven sentence simplification: Survey and benchmark</article-title>
          ,
          <source>Computational Linguistics</source>
          <volume>46</volume>
          (
          <year>2020</year>
          )
          <fpage>135</fpage>
          -
          <lpage>187</lpage>
          . URL: https://aclanthology.org/
          <year>2020</year>
          .cl-
          <volume>1</volume>
          .4. doi:
          <volume>10</volume>
          .1162/coli_a_
          <fpage>00370</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>W.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Napoles</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Pavlick</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Callison-Burch</surname>
          </string-name>
          ,
          <article-title>Optimizing statistical machine translation for text simplification</article-title>
          ,
          <source>Transactions of the Association for Computational Linguistics</source>
          <volume>4</volume>
          (
          <year>2016</year>
          )
          <fpage>401</fpage>
          -
          <lpage>415</lpage>
          . URL: https://aclanthology.org/Q16-1029. doi:
          <volume>10</volume>
          .1162/tacl_ a_
          <fpage>00107</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>K.</given-names>
            <surname>Papineni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Roukos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Ward</surname>
          </string-name>
          , W.-J. Zhu,
          <article-title>Bleu: a method for automatic evaluation of machine translation, in: Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics</article-title>
          , Philadelphia, Pennsylvania, USA,
          <year>2002</year>
          , pp.
          <fpage>311</fpage>
          -
          <lpage>318</lpage>
          . URL: https://aclanthology.org/P02-1040. doi:
          <volume>10</volume>
          . 3115/1073083.1073135.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S.</given-names>
            <surname>Stajner</surname>
          </string-name>
          ,
          <article-title>Automatic text simplification for social good: Progress and challenges, in: Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021</article-title>
          ,
          <article-title>Association for Computational Linguistics</article-title>
          , Online,
          <year>2021</year>
          , pp.
          <fpage>2637</fpage>
          -
          <lpage>2652</lpage>
          . URL: https://aclanthology.org/
          <year>2021</year>
          .findings-acl.
          <volume>233</volume>
          . doi:
          <volume>10</volume>
          .18653/v1/
          <year>2021</year>
          .findings-acl.
          <volume>233</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
            <surname>Maddela</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Alva-Manchego</surname>
          </string-name>
          , W. Xu,
          <article-title>Controllable text simplification with explicit paraphrasing, in: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Association for Computational Linguistics</article-title>
          , Online,
          <year>2021</year>
          , pp.
          <fpage>3536</fpage>
          -
          <lpage>3553</lpage>
          . URL: https://aclanthology.org/
          <year>2021</year>
          .naacl-main.
          <volume>277</volume>
          . doi:
          <volume>10</volume>
          .18653/v1/
          <year>2021</year>
          .naacl-main.
          <volume>277</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>E.</given-names>
            <surname>Sulem</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Abend</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rappoport</surname>
          </string-name>
          ,
          <article-title>Simple and efective text simplification using semantic and neural methods, in: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Association for Computational Linguistics</article-title>
          , Melbourne, Australia,
          <year>2018</year>
          , pp.
          <fpage>162</fpage>
          -
          <lpage>173</lpage>
          . URL: https://aclanthology.org/P18-1016. doi:
          <volume>10</volume>
          .18653/v1/
          <fpage>P18</fpage>
          -1016.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>F.</given-names>
            <surname>Alva-Manchego</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Martin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bordes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Scarton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Sagot</surname>
          </string-name>
          , L. Specia,
          <article-title>ASSET: A dataset for tuning and evaluation of sentence simplification models with multiple rewriting transformations, in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics</article-title>
          , Online,
          <year>2020</year>
          , pp.
          <fpage>4668</fpage>
          -
          <lpage>4679</lpage>
          . URL: https://aclanthology.org/
          <year>2020</year>
          .acl-main.
          <volume>424</volume>
          . doi:
          <volume>10</volume>
          .18653/v1/
          <year>2020</year>
          .acl-main.
          <volume>424</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>R.</given-names>
            <surname>Likert</surname>
          </string-name>
          ,
          <article-title>A technique for the measurement of attitudes</article-title>
          ,
          <source>Archives of Psychology</source>
          <volume>22</volume>
          (
          <year>1932</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S. Y. Y.</given-names>
            <surname>Chyung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Roberts</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Swanson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hankinson</surname>
          </string-name>
          ,
          <article-title>Evidence-based survey design: The use of a midpoint on the likert scale</article-title>
          ,
          <source>Performance Improvement</source>
          <volume>56</volume>
          (
          <year>2017</year>
          )
          <fpage>15</fpage>
          -
          <lpage>23</lpage>
          . URL: https://onlinelibrary.wiley.com/doi/abs/10.1002/pfi.21727. doi:https://doi.org/10.1002/ pfi.21727. arXiv:https://onlinelibrary.wiley.com/doi/pdf/10.1002/pfi.21727.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>L.</given-names>
            <surname>Martin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Humeau</surname>
          </string-name>
          , P.-E. Mazaré,
          <string-name>
            <surname>É. de La Clergerie</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Bordes</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          <string-name>
            <surname>Sagot</surname>
          </string-name>
          ,
          <article-title>Reference-less quality estimation of text simplification systems</article-title>
          ,
          <source>in: Proceedings of the 1st Workshop on Automatic Text Adaptation (ATA)</source>
          ,
          <article-title>Association for Computational Linguistics</article-title>
          , Tilburg, the Netherlands,
          <year>2018</year>
          , pp.
          <fpage>29</fpage>
          -
          <lpage>38</lpage>
          . URL: https://aclanthology.org/W18-7005. doi:
          <volume>10</volume>
          .18653/v1/
          <fpage>W18</fpage>
          -7005.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>E.</given-names>
            <surname>Sulem</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Abend</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rappoport</surname>
          </string-name>
          ,
          <article-title>Semantic structural evaluation for text simplification</article-title>
          ,
          <source>in: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</source>
          , Volume
          <volume>1</volume>
          (
          <string-name>
            <surname>Long</surname>
            <given-names>Papers)</given-names>
          </string-name>
          ,
          <source>Association for Computational Linguistics</source>
          , New Orleans, Louisiana,
          <year>2018</year>
          , pp.
          <fpage>685</fpage>
          -
          <lpage>696</lpage>
          . URL: https://aclanthology.org/N18-1063. doi:
          <volume>10</volume>
          .18653/v1/
          <fpage>N18</fpage>
          -1063.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>S.</given-names>
            <surname>Narayan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Gardent</surname>
          </string-name>
          ,
          <article-title>Hybrid simplification using deep semantics and machine translation</article-title>
          ,
          <source>in: Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume</source>
          <volume>1</volume>
          :
          <string-name>
            <surname>Long</surname>
            <given-names>Papers)</given-names>
          </string-name>
          ,
          <source>Association for Computational Linguistics</source>
          , Baltimore, Maryland,
          <year>2014</year>
          , pp.
          <fpage>435</fpage>
          -
          <lpage>445</lpage>
          . URL: https://aclanthology.org/P14-1041. doi:
          <volume>10</volume>
          .3115/v1/
          <fpage>P14</fpage>
          -1041.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wan</surname>
          </string-name>
          ,
          <article-title>Neural sentence simplification with semantic dependency information</article-title>
          ,
          <source>Proceedings of the AAAI Conference on Artificial Intelligence</source>
          <volume>35</volume>
          (
          <year>2021</year>
          )
          <fpage>13371</fpage>
          -
          <lpage>13379</lpage>
          . URL: https://ojs.aaai.org/index.php/AAAI/article/view/17578.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>T.</given-names>
            <surname>Scialom</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Martin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Staiano</surname>
          </string-name>
          ,
          <string-name>
            <surname>Éric Villemonte de la Clergerie</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          <string-name>
            <surname>Sagot</surname>
          </string-name>
          ,
          <article-title>Rethinking automatic evaluation in sentence simplification</article-title>
          ,
          <year>2021</year>
          . arXiv:
          <volume>2104</volume>
          .
          <fpage>07560</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>J. T.</given-names>
            <surname>Nadler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Weston</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. C.</given-names>
            <surname>Voyles</surname>
          </string-name>
          ,
          <article-title>Stuck in the middle: The use and interpretation of mid-points in items on questionnaires</article-title>
          ,
          <source>The Journal of General Psychology</source>
          <volume>142</volume>
          (
          <year>2015</year>
          )
          <fpage>71</fpage>
          -
          <lpage>89</lpage>
          . URL: https://doi.org/10.1080/00221309.
          <year>2014</year>
          .
          <volume>994590</volume>
          . doi:
          <volume>10</volume>
          .1080/00221309.
          <year>2014</year>
          .
          <volume>994590</volume>
          . arXiv:https://doi.org/10.1080/00221309.
          <year>2014</year>
          .
          <volume>994590</volume>
          , pMID:
          <fpage>25832738</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>S.</given-names>
            <surname>Štajner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Popović</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Saggion</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Specia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Fishel</surname>
          </string-name>
          ,
          <article-title>Shared task on quality assessment for text simplification</article-title>
          ,
          <source>in: Proceedings of the Workshop on Quality Assessment for Text Simplification (QATS)</source>
          ,
          <article-title>Association for Computational Linguistics</article-title>
          , Portorož, Slovenia,
          <year>2016</year>
          , pp.
          <fpage>22</fpage>
          -
          <lpage>37</lpage>
          . URL: http://www.lrec-conf.
          <source>org/proceedings/lrec2016/workshops/ LREC2016Workshop-QATS_Proceedings.pdf#page=28.</source>
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>F.</given-names>
            <surname>Alva-Manchego</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Martin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Scarton</surname>
          </string-name>
          , L. Specia, EASSE:
          <article-title>Easier automatic sentence simplification evaluation</article-title>
          ,
          <source>in: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing</source>
          (
          <article-title>EMNLP-IJCNLP): System Demonstrations, Association for Computational Linguistics</article-title>
          , Hong Kong, China,
          <year>2019</year>
          , pp.
          <fpage>49</fpage>
          -
          <lpage>54</lpage>
          . URL: https://aclanthology.org/D19-3009. doi:
          <volume>10</volume>
          .18653/v1/
          <fpage>D19</fpage>
          -3009.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>M.</given-names>
            <surname>Schwarzer</surname>
          </string-name>
          ,
          <article-title>Crowdsourcing Text Simplification with Sentence Fusion, Bachelor thesis</article-title>
          , Pomona College,
          <year>2018</year>
          . URL: https://cs.pomona.edu/classes/cs190/thesis_examples/ Schwarzer.18.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>M.</given-names>
            <surname>Schwarzer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Tanprasert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Kauchak</surname>
          </string-name>
          ,
          <article-title>Improving human text simplification with sentence fusion</article-title>
          ,
          <source>in: Proceedings of the Fifteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-15)</source>
          , Association for Computational Linguistics, Mexico City, Mexico,
          <year>2021</year>
          , pp.
          <fpage>106</fpage>
          -
          <lpage>114</lpage>
          . URL: https://aclanthology.org/
          <year>2021</year>
          .textgraphs-
          <volume>1</volume>
          .
          <fpage>10</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>F.</given-names>
            <surname>Alva-Manchego</surname>
          </string-name>
          ,
          <article-title>Automatic Sentence Simplification with Multiple Rewriting Transformations</article-title>
          ,
          <source>Phd thesis</source>
          , University of Shefield, Shefield, UK,
          <year>2020</year>
          . URL: https://etheses. whiterose.ac.uk/28690/.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>C. van der</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gatt</surname>
          </string-name>
          , E. van Miltenburg,
          <string-name>
            <given-names>S.</given-names>
            <surname>Wubben</surname>
          </string-name>
          , E. Krahmer,
          <article-title>Best practices for the human evaluation of automatically generated text</article-title>
          ,
          <source>in: Proceedings of the 12th International Conference on Natural Language Generation</source>
          , Association for Computational Linguistics, Tokyo, Japan,
          <year>2019</year>
          , pp.
          <fpage>355</fpage>
          -
          <lpage>368</lpage>
          . URL: https://aclanthology.org/W19-8643. doi:
          <volume>10</volume>
          .18653/ v1/
          <fpage>W19</fpage>
          -8643.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>