<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
<journal-title>Paris, France
janos.borst@uni-leipzig.de (J. Borst); jannis.klaehn@uni-leipzig.de (J. Klähn);
burghardt@informatik.uni-leipzig.de (M. Burghardt)</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Death of the Dictionary? - The Rise of Zero-Shot Sentiment Classification</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Janos Borst</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jannis Klähn</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Manuel Burghardt</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Computational Humanities, Leipzig University</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>000</volume>
      <fpage>0</fpage>
      <lpage>0002</lpage>
      <abstract>
        <p>In our study, we conduct a comparative analysis between dictionary-based sentiment analysis and entailment zero-shot text classification for German sentiment analysis. We evaluate the performance of a selection of dictionaries on eleven data sets, including four domain-specific data sets with a focus on historic German language. Our results demonstrate that, in the majority of cases, zero-shot text classification outperforms general-purpose dictionary-based approaches but falls short of the performance achieved by specifically fine-tuned models. Notably, the zero-shot approach exhibits superior performance, particularly in historic German cases, surpassing both general-purpose dictionaries and even a broadly trained sentiment model. These findings indicate that zero-shot text classification holds significant promise as an alternative, reducing the necessity for domain-specific sentiment dictionaries and narrowing the availability gap of off-the-shelf methods for German sentiment analysis. Additionally, we thoroughly discuss the inherent trade-offs associated with the application of these approaches.</p>
      </abstract>
      <kwd-group>
        <kwd>sentiment analysis</kwd>
        <kwd>zero-shot text classification</kwd>
        <kwd>sentiment dictionary</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Sentiment analysis plays an important role in digital humanities, allowing researchers to
uncover attitudes and emotions expressed in text. However, when the text and domain differ from
the available datasets, some off-the-shelf methods or models become significantly less useful.
Since the data of interest to the humanities often diverge in language and subject from
computer science reference datasets, and are rarely fully digitised, let alone annotated, alternative
methods that do not require fine-tuning a large language model (LLM) or a custom curated
dictionary become particularly interesting. We target the domain of historical German language,
specifically historical stock market reports and literature, for which there seems to be a lack of
readily available domain-specific packages and models.</p>
      <p>While there is the established approach of using sentiment dictionaries and the modern
approach of fine-tuning LLMs, both lead to significant workloads in aggregating and curating
domain-specific data or annotations when deviating from off-the-shelf methods. Recent
approaches – namely zero-shot text classification – promise to achieve similar results without
manual dataset creation. While fine-tuning neural networks remains the gold standard for
optimal performance, we explore whether zero-shot sentiment classification can serve as a
substitute for the dictionary-based baseline, discussing its advantages and drawbacks.</p>
      <p>Current sentiment analysis methods fall into two main categories: dictionary-based and
machine learning-based approaches. Dictionaries are the more traditional way to tackle
sentiment analysis and are still actively used. In short, the procedure is to use expert
knowledge to craft domain- and task-specific lists of negative and positive words with
respective sentiment evaluations to build a word-sentiment mapping. These words’ occurrences in
texts are then aggregated, and their sentiment valuations’ ratio or sum determines the text’s
sentiment index.</p>
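      <p>The word-sentiment aggregation described above can be sketched in a few lines; the four-entry lexicon and the simple thresholding rule are hypothetical stand-ins for a real resource such as SentiWS:</p>
      <preformat>
```python
# Hypothetical mini-lexicon; real dictionaries such as SentiWS map
# thousands of German words to sentiment values.
LEXICON = {"gut": 1.0, "zuversichtlich": 1.0, "schlecht": -1.0, "schwach": -1.0}

def sentiment_index(text: str) -> float:
    """Sum the sentiment values of all lexicon words occurring in the text."""
    tokens = text.lower().split()
    return sum(LEXICON.get(tok, 0.0) for tok in tokens)

def classify(text: str) -> str:
    """Map the aggregated index to a polarity label."""
    score = sentiment_index(text)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```
      </preformat>
      <p>This sketch also illustrates a limitation discussed below: a negated phrase such as ‘nicht gut’ is scored exactly like its plain form, since the lexicon matches words in isolation.</p>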
      <p>
        While this approach offers advantages like computational efficiency and explainability, it
often requires the creation of domain-specific dictionaries or results in performance drops when
using general-purpose ones [
        <xref ref-type="bibr" rid="ref3">13</xref>
        ]. Many decisions must be made regarding preprocessing,
including casing, stemming, and POS-filtering, all of which can impact performance.
Additionally, linguistic challenges such as handling negation and metaphors, which are not easily
captured in word lists, need consideration. Text quality is also crucial, as the approach requires
matching word strings regardless of orthographic or grammatical errors.
      </p>
      <p>
        The emergence of LLMs such as BERT [
        <xref ref-type="bibr" rid="ref15">16</xref>
        ] and GPT [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] has led to the increased popularity
of machine learning methods, as they solve many of these problems. The tokenisation process
makes them robust against orthographic mistakes, and the contextualisation makes it possible to
spot negations and contextual semantics. In turn, however, they come with the downside of
fine-tuning, requiring substantial manual effort in annotating task-specific data and
computational cost to adapt and run inference with often over a million parameters.
      </p>
      <p>
        Consequently, there is a growing interest in exploring more efficient approaches like
zero-shot learning [
        <xref ref-type="bibr" rid="ref20 ref50 ref6">6, 52, 21</xref>
        ]. Zero-shot learning offers the potential to automate sentiment
analysis tasks by eliminating the need for manual data labeling. Zero-shot learning has already
demonstrated promising results in general text classification tasks [
        <xref ref-type="bibr" rid="ref20">67, 21</xref>
        ] and in application to
sentiment analysis [
        <xref ref-type="bibr" rid="ref41 ref48 ref50 ref55">52, 43, 57, 22, 50</xref>
        ]. This approach comes with the advantages of robustness
against orthographic mistakes, not having to label data, either as training data or as word
lists, and the capability to detect contextualised semantics. However, it has a larger
computational inference time, as it is still mostly based on neural networks. This is why we think
zero-shot models could be a compromise: approaching the performance of neural language models
and closing the availability gap of off-the-shelf methods for sentiment analysis, while keeping
the advantages over dictionary-based approaches.
      </p>
      <p>In this paper we analyse the performance of these three approaches – a variety of
dictionaries, zero-shot learning and a fine-tuned transformer model – for German sentiment
analysis and aim to demonstrate the usefulness of the zero-shot sentiment
classification method for application to the German language. To obtain a more valid result, we test
these models not only on our target domain (historical German) but also on many contemporary
German sentiment datasets, such as reviews and tweets.</p>
      <p>Our contributions are:
• A comparison of dictionary-based sentiment analysis and zero-shot sentiment
classification with regard to performance and inference time.¹
• A discussion of advantages and drawbacks of these approaches and their usefulness for
practical purposes, with a focus on digital humanities datasets.
¹Code to reproduce the results is available at https://github.com/JaBorst/deathofthedictionary</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Application of dictionary-based sentiment analysis is still an active field of research [
        <xref ref-type="bibr" rid="ref26 ref33 ref34 ref37 ref38 ref45">29, 36,
28, 40, 39, 47, 35</xref>
        ]. With the advent of transformer-based classification, a more sophisticated
approach has emerged [
        <xref ref-type="bibr" rid="ref7">27</xref>
        ]. The use of computer-assisted text analysis has also become
sufficiently established in the humanities and social sciences that performance comparisons of
different methods with their own content focus have gained pertinence [
        <xref ref-type="bibr" rid="ref1 ref2 ref59">9, 1, 62, 2</xref>
        ]. A major
criticism is that off-the-shelf dictionaries, i.e. existing vocabularies for emotion or trend
analysis, are highly domain-dependent in their classification performance [1] and do not provide
satisfactory results without revalidation [13]. Furthermore, dictionaries are language-bound
and cannot be translated without verification due to the ambiguity of the words they contain.
      </p>
      <p>
        The prevalence of English dictionaries is a common problem in the field, leading to resource
imbalances. In a comparison of different polarity resources in German, [25] found that both
quantity and quality differed considerably. Additionally, these manually created resources have
proven to be error-prone [55]. Moreover, the creation of these annotations is often influenced by
domain-specific factors, limiting their generalisability [13, p. 19]. For many use cases,
domain-specific dictionaries are required, and while extremely labor-intensive and time-consuming to
create, they are still applied in individual cases [
        <xref ref-type="bibr" rid="ref37 ref38">28, 40, 39</xref>
        ]. However, as [
        <xref ref-type="bibr" rid="ref18">19</xref>
        ] show in their
comparison of different German dictionaries and datasets, domain-specific dictionaries do not
perform well for other applications [
        <xref ref-type="bibr" rid="ref18">40, 19</xref>
        ].
      </p>
      </p>
      <p>
        Hybrid methods that combine machine learning with semi-automatic word list creation or
dictionary expansion have been proposed as promising approaches. These methods are
cumbersome due to the cumulative validation steps required [
        <xref ref-type="bibr" rid="ref16 ref36">56, 38, 17</xref>
        ]. Dictionaries offer the
advantage of low-threshold and resource-efficient applicability without requiring training data
[
        <xref ref-type="bibr" rid="ref45">47</xref>
        ]. Nevertheless, compared to supervised learning methods, both off-the-shelf and specially
created dictionaries, including self-implemented and commercial options, consistently show
significantly worse performance [
        <xref ref-type="bibr" rid="ref1 ref16 ref5 ref59 ref9">5, 9, 17, 1, 62</xref>
        ].
      </p>
      </p>
      <p>
        In supervised learning, neural networks have emerged as the state of the art for
sentiment text classification over the last years. Especially fine-tuning transformer-based
LLMs, such as BERT [
        <xref ref-type="bibr" rid="ref15">16</xref>
        ], is nowadays the de facto standard in solving text classification
tasks [yangXLNetgeneralizedAutoregressive2019a, 31]. The main drawback of applying
LLMs to new domain-specific tasks is the need for annotated data and the necessary hardware
to compute, which can be substantial [
        <xref ref-type="bibr" rid="ref9">49</xref>
        ]. Achieving domain adaptation of LLM-based text
classification models through fine-tuning often comes with the computational cost of having
to update millions of parameters for every data point, which can be rather difficult and even
infeasible at times. In recent years, there has been a significant focus on developing methods that
reduce the reliance on large training data sets, leading to the emergence of few-shot models
[
        <xref ref-type="bibr" rid="ref11 ref12 ref4 ref57">12, 11, 4, 60</xref>
        ] and even zero-shot models [
        <xref ref-type="bibr" rid="ref46 ref6">6, 67, 48</xref>
        ]. These models enable text classification
tasks to be performed without the need for task-specific fine-tuning or manual data labeling.
The application of zero-shot text classification models not only eliminates the necessity for
manual data annotation but also mitigates the computational costs associated with fine-tuning.
Therefore, we systematically investigate the performance of zero-shot against dictionaries for
the task of sentiment analysis on German texts for both general and domain-specific use cases.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Experiments</title>
      <sec id="sec-3-1">
        <title>3.1. Dictionaries</title>
        <p>
          In this section we briefly describe the experimental setting. We explain the application of the
dictionaries and zero-shot methods and list the datasets we used to compare them.
In order to obtain fair comparisons for German dictionaries, we decided upon three generally
applicable German off-the-shelf dictionaries (SentiWS, BAWL-R, GermanPolarityClues) with
a wide reputation [
          <xref ref-type="bibr" rid="ref1 ref56">41, 58, 59</xref>
          ]. In addition, a finance-specific dictionary, BPW, was tested for the
special dataset BBZ [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ], as well as a literature-specific dictionary, SentiLitKrit (SLK) [18]. Both
SentiWS and BAWL-R offer valence-based sentiment classification, meaning that each word in
the dictionary is weighted by a numerical value, whereas the other dictionaries only allow for
polarity-based sentiment assignment. For the annotation of the datasets with the presented
off-the-shelf dictionaries, we follow the approach of [1], [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ], as well as [
          <xref ref-type="bibr" rid="ref59">62</xref>
          ], who all use the R
library quanteda and a similar pre-processing. In our case, the quanteda extension quanteda
sentiment was used, and only punctuation and numbers were removed².
        </p>
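        <p>The difference between valence-based and polarity-based assignment can be illustrated with toy entries; the weights below are invented for illustration, not the actual SentiWS or GermanPolarityClues values:</p>
        <preformat>
```python
# Toy entries: a valence-based dictionary weights each word, while a
# polarity-based one only records the sign. The weights are invented.
VALENCE = {"zuversichtlich": 0.7, "schwach": -0.5}
POLARITY = {"zuversichtlich": 1, "schwach": -1}

def mean_score(text: str, lexicon: dict) -> float:
    """Average the lexicon values of all matched tokens (0.0 if none match)."""
    hits = [lexicon[t] for t in text.lower().split() if t in lexicon]
    return sum(hits) / len(hits) if hits else 0.0
```
        </preformat>
        <p>For the phrase ‘zuversichtlich aber schwach’, the valence-based variant yields a slightly positive mean (0.1), while the polarity-based one cancels out to 0.0 – the kind of nuance that weighted dictionaries can capture and pure polarity lists cannot.</p>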
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Zero-Shot Text Classification</title>
        <p>As a zero-shot model, we use textual entailment classification – also called natural language
inference (NLI) – following the task description proposed in [67]. In this approach a sentence
pair, called premise and hypothesis, is classified as ‘entailment’, ‘contradiction’ or ‘neutral’,
based on how well the hypothesis logically follows from the premise. For zero-shot classification
we form hypotheses using the target labels. These hypotheses are created using the template:
“The sentiment is [blank]”³. The blank is then filled with the sentiment categories ‘negative’,
‘neutral’ and ‘positive’. The model generates probability scores for each premise and
hypothesis pair, corresponding to the different entailment classes. From these scores, we identify the
hypothesis with the highest probability of entailment as the classification outcome, and assign
the corresponding category. This methodology is applied to achieve zero-shot sentiment
classification, as illustrated in Figure 1.</p>
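        <p>The procedure can be sketched as follows; here <code>nli_entailment_prob</code> is a hypothetical stub standing in for the pretrained NLI model, so only the control flow of the entailment formulation is shown:</p>
        <preformat>
```python
# Sketch of entailment-based zero-shot sentiment classification.
# `nli_entailment_prob` is a hypothetical stub; in practice a pretrained
# NLI model returns P(entailment) for each premise-hypothesis pair.
TEMPLATE = "Die Stimmung ist {}."
LABELS = ["negativ", "neutral", "positiv"]

def nli_entailment_prob(premise: str, hypothesis: str) -> float:
    # Stand-in scores: a real model would run one forward pass per pair.
    return 0.9 if "zuversichtlich" in premise and "positiv" in hypothesis else 0.1

def zero_shot_sentiment(premise: str) -> str:
    # Build one hypothesis per label and return the label whose
    # hypothesis the model considers most entailed by the premise.
    scores = {label: nli_entailment_prob(premise, TEMPLATE.format(label))
              for label in LABELS}
    return max(scores, key=scores.get)
```
        </preformat>
        <p>With the huggingface transformers library, essentially this logic is wrapped by the zero-shot-classification pipeline, where the template is passed via the <code>hypothesis_template</code> argument.</p>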
        <p>
          Although there is some criticism that the performance of these models relies on spurious
correlations of superficial text elements [
          <xref ref-type="bibr" rid="ref3">33</xref>
          ], this model – and variants of it – performs
very well, especially in sentiment classification [
          <xref ref-type="bibr" rid="ref50 ref9">69, 52</xref>
          ]. We choose the entailment
²https://github.com/quanteda/quanteda.sentiment. Besides the simpler quanteda variant, there are also more
complex dictionary approaches, such as VADER [
          <xref ref-type="bibr" rid="ref3">23</xref>
          ]. However, as VADER was developed for the English language
and the integrated translation option would not be cost-free for the datasets tested here, it was decided not to use
VADER.
³Translated from German: “Die Stimmung ist [blank].”
        </p>
        <p>Figure 1: The premise ‘Die Stimmung der Börse war zuversichtlich.’ is paired with the
hypotheses ‘Die Stimmung ist positiv.’, ‘Die Stimmung ist neutral.’ and ‘Die Stimmung ist negativ.’</p>
        <p>
approach also because of its flexibility and accessibility. As entailment model we use a
pretrained NLI model⁴ from huggingface [
          <xref ref-type="bibr" rid="ref62">65</xref>
          ], which was trained on machine-translated versions
of multiple NLI datasets (MNLI [
          <xref ref-type="bibr" rid="ref3">63</xref>
          ], ANLI [
          <xref ref-type="bibr" rid="ref35">37</xref>
          ], SNLI [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]) and tested on the German part of
the XNLI [15] dataset.
        </p>
        <p>Since we aim at comparing these models with zero knowledge about domain-specific
assumptions or vocabulary, we use the same template for all datasets. For application purposes the hypothesis
template can have substantial impact on the quality of classification and is part of the
optimization process, similar to prompt engineering [30]. The problem with optimizing the hypothesis
template is the need for some annotated data for evaluation, which is another reason why we
opt for a simple, fixed and generic template.</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Finetuned or State-of-the-Art</title>
        <p>As for the finetuned and state-of-the-art performance on each dataset we include two
comparisons: Firstly, we include the latest developments in the field as reported in other papers,
showcasing the current state of the art (SOTA). Secondly, we employ a German sentiment
model developed by [20]. This off-the-shelf model is trained on various German sentiment
datasets and serves as another benchmark for comparison. It is worth exploring whether this
broadly trained model, without domain-specific adaptation, generalises well on out-of-domain
datasets, e.g. German historic language.</p>
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Data</title>
        <p>We chose the datasets by availability and by mentions of SOTA results in recent research
publications. Nevertheless, the availability of non-English datasets remains a severely limiting
factor and also poses a significant problem for the subsequent training of specialised language
models. In addition to seven contemporary datasets based on social media posts or reviews, we
also selected four domain-specific datasets based on historical German, which we outline
below:</p>
        <p>
          BBZ [
          <xref ref-type="bibr" rid="ref58">61</xref>
          ]: A set of 772 sentences sampled from the Berliner Börsenzeitung (BBZ) between
1872 and 1930. The dataset contains sentence-level annotations with negative, neutral and
positive labels.
        </p>
        <p>German Novel Dataset (GND) [68]: A crowd-sourced collection of 270 ternary labelled
sentences (positive, neutral, negative) from the German Novel Corpus (GNC).</p>
        <p>
          Lessing [
          <xref ref-type="bibr" rid="ref44">46</xref>
          ]: A set of 200 sampled speeches from Gotthold Ephraim Lessing’s plays, manually
annotated by five experts with binary labels (positive, negative).
        </p>
        <p>
          SentiLitKrit [
          <xref ref-type="bibr" rid="ref17">18</xref>
          ]: A sample corpus for the SentiLitKrit (SLK) dictionary consisting of
manually annotated literature reviews for the period 1870-1889 with 1,010 binary annotated
sentences.
        </p>
        <p>GermEval 2017 [64]: A collection of tweets and news about the Deutsche Bahn between
2015 and 2016. We use the predefined synchronous test set with 2,566 examples labelled with
positive, neutral or negative.</p>
        <p>
          PotTs [
          <xref ref-type="bibr" rid="ref51">53</xref>
          ]: A collection of tweets from 2013 on elections and political events with 7,504
items. The labels are positive, neutral or negative.
        </p>
        <p>
          SB10k [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]: Originally a set of over 9,000 tweets collected in 2013, divided into the categories
positive, negative and neutral. Since the original dataset only publishes the Twitter links,
we resort to a pre-assembled version by [20] with 7,476 entries and the labels positive, neutral
or negative.
        </p>
        <p>
          Amazon Reviews DE [
          <xref ref-type="bibr" rid="ref24">26</xref>
          ]: A multilingual corpus of amazon product reviews based on
star-ratings between 2015 and 2019. We use the German part of this corpus, which contains
5,000 test set elements.
        </p>
        <p>Filmstarts and Holidaycheck [20]: These datasets are sets of reviews for either films
or hotels crawled from the respective websites. We use the datasets as described by [20] to
ensure comparability and also exclude ratings with 3 stars, which would correspond to the
label neutral. The resulting sets include 55,260 items for Filmstarts and around 3.3 million
items for Holidaycheck.</p>
        <p>
          SCARE [
          <xref ref-type="bibr" rid="ref40">42</xref>
          ]: This dataset contains around 735,000 reviews for various apps from the Google
Playstore. It contains positive, neutral or negative labels.
        </p>
        <p>Except for the Amazon reviews, all datasets are unbalanced. Unless stated otherwise, no
pre-processing was conducted, and if no dedicated test dataset was available, the entire dataset
was annotated. A detailed table of dataset sizes and composition can be found in Appendix A.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results &amp; Discussion</title>
      <p>Performance In Table 1, the micro F1 evaluation scores for all datasets and approaches are
presented. It is important to note that when comparing against [20], a problem arises, as we
were unable to reproduce the exact test sets used in their reported values. Therefore, there
is a possibility that our evaluation of their model may differ from the SOTA value extracted
from their work. Furthermore, to reflect the criticism of the high technical barrier faced by
social scientists, an out-of-the-box approach of implementing the Guhr et al. model using the
Huggingface pipeline was applied.</p>
      <p>The experimental findings indicate a consistent pattern in the performance of zero-shot text
classification, which falls between the application of available dictionaries and the SOTA
approach in micro and macro F1 (Table 1 and Table 2). This pattern holds true not only for
contemporary data such as Amazon reviews or tweets but also generalises to the historical
examples like BBZ, GND, SentiLitKrit and Lessing. Close inspection of label-wise metrics, as
seen in Table 3, reveals that this happens despite zero-shot struggling with the neutral class.
This also explains its high performance in binary polarity cases. The performance on positive
and negative polarity is high, with the exception of SB10k and GermEval; this will be discussed
below.</p>
      <p>While the off-the-shelf model by [20] achieves a slightly better result than zero-shot
classification on contemporary data, it fails to generalise effectively to the historical and literature
domain.</p>
      <table-wrap>
        <caption>
          <p>Label-wise scores (precision | recall | F1) for the negative, neutral and positive classes.</p>
        </caption>
        <table>
          <thead>
            <tr><th>Dataset</th><th>Negative</th><th>Neutral</th><th>Positive</th></tr>
          </thead>
          <tbody>
            <tr><td>BBZ Gold</td><td>0.698 | 0.642 | 0.669</td><td>0.469 | 0.545 | 0.504</td><td>0.821 | 0.792 | 0.807</td></tr>
            <tr><td>Lessing [<xref ref-type="bibr" rid="ref44">46</xref>]</td><td>0.853 | 0.755 | 0.801</td><td>-</td><td>0.558 | 0.704 | 0.623</td></tr>
            <tr><td>SentiLitKrit [<xref ref-type="bibr" rid="ref17">18</xref>]</td><td>0.603 | 0.770 | 0.676</td><td>-</td><td>0.894 | 0.793 | 0.841</td></tr>
            <tr><td>GND [68]</td><td>0.475 | 0.775 | 0.589</td><td>0.700 | 0.112 | 0.194</td><td>0.409 | 0.754 | 0.530</td></tr>
            <tr><td>GermEval2017 [64]</td><td>0.503 | 0.767 | 0.608</td><td>0.769 | 0.097 | 0.173</td><td>0.076 | 0.847 | 0.140</td></tr>
            <tr><td>PotTs [<xref ref-type="bibr" rid="ref51">53</xref>]</td><td>0.425 | 0.806 | 0.557</td><td>0.415 | 0.109 | 0.172</td><td>0.599 | 0.673 | 0.634</td></tr>
            <tr><td>SB10k [<xref ref-type="bibr" rid="ref14">14</xref>]</td><td>0.318 | 0.746 | 0.446</td><td>0.687 | 0.076 | 0.138</td><td>0.327 | 0.822 | 0.468</td></tr>
            <tr><td>Amazon Reviews [<xref ref-type="bibr" rid="ref24">26</xref>]</td><td>0.744 | 0.769 | 0.757</td><td>0.358 | 0.244 | 0.290</td><td>0.756 | 0.852 | 0.801</td></tr>
            <tr><td>Filmstarts [20]</td><td>0.649 | 0.797 | 0.715</td><td>-</td><td>0.913 | 0.831 | 0.870</td></tr>
            <tr><td>Holiday Check [20]</td><td>0.663 | 0.805 | 0.727</td><td>-</td><td>0.973 | 0.946 | 0.959</td></tr>
            <tr><td>SCARE [<xref ref-type="bibr" rid="ref40">42</xref>]</td><td>0.741 | 0.845 | 0.789</td><td>-</td><td>0.940 | 0.891 | 0.915</td></tr>
          </tbody>
        </table>
      </table-wrap>
      <p>Given that the model was trained on contemporary or similar domain and language
style datasets, this is not surprising, but it also illustrates that even modern language models
without fine-tuning in the envisaged target domain achieve only mediocre results.</p>
      <p>Generally, our tests show a strong inconsistency in the results of the dictionaries, which
is independent of the intended use. In the two cases, GermEval 2017 and SB10k, where
zero-shot performs worse than dictionaries, we see a pattern of texts of very low quality. These
datasets contain annotations of varying quality and appear to be somewhat inconsistent. This
is why being trained on this type of data grants a substantial advantage. However, the dictionary
approach, especially using BPW, although designed for financial contexts, seems to work
well in these cases. Moreover, a larger vocabulary or a combination of dictionaries, such as the
2012 version of GPC, which also contains the SentiWS vocabulary, does not necessarily lead to
better results. In the case of the SLK dataset with its purpose-built dictionary, the enormous
effort required to create the dictionary is not reflected in a significantly better performance, as
can be seen in Table 1.</p>
      <p>
        For the Lessing dataset, an additional argument can be made regarding the inherent
incoherence of sentiment annotations. The ambiguity in sentiment often leads to low inter-annotator
agreement during the annotation process [
        <xref ref-type="bibr" rid="ref5">45</xref>
        ]. In this context, the zero-shot algorithm
demonstrates its effectiveness by aligning with the majority decision in determining sentiment.
      </p>
      <p>Nevertheless, based on these performance observations, we argue that the results provide
evidence supporting the viability of zero-shot text classification as a potential alternative, if
not a replacement, for general-purpose polarity dictionaries. Particularly in use cases where
no annotated training data or domain-specific dictionaries are available, but where the
linguistic complexity or subject matter differs from that of the existing general-purpose
models/dictionaries, the zero-shot approach presented here delivers significantly better results and
a higher consistency of performance, provided that the quality of the source text is not too low.
Trade-offs As is often the case, there are several trade-offs to consider. In Table 5 we mark
these trade-offs for the methods with -, o and +, denoting disadvantage, neutral or advantage.
LLM tokenisation eases preprocessing, enhances robustness against orthographic errors
and handles contextual semantics, issues dictionary-based methods struggle with. Adapting
dictionaries or LLMs to specific domains can be costly. Zero-shot models hold a clear advantage due to
their flexibility without adaptation. In cases where dictionaries are not adapted to the specific
domain, the entailment zero-shot approach would deliver better performance in most cases.
Fine-tuning of the language models will deliver the best performance in any case, if trained
for the specific task. The dictionary approach takes the clear win in inference time and
explainability. During inference, entailment zero-shot and fine-tuning are both slower than
dictionaries. The factor of around 4x (3x for shorter texts) between zero-shot and Guhr et al.
stems from the fact that the entailment formulation introduces a forward pass per label, which in
our case is two or three, and the base model for zero-shot has three times the parameters (109M
vs 330M).</p>
      <p>Dictionaries offer clear explanations for algorithmic decision-making, directly tracking each
word’s contribution to sentiment scores. However, performance may not align with this
theoretical comprehensibility, as indicated in the evaluation of the financial BPW dictionary. In
contrast, neural classifiers are often regarded as black boxes, but there are ongoing efforts to
explain token influences on classification results [NIPS2017_8a20a862, 54, 51], albeit through
mathematical approximations. Since this is a more indirect measure, we assess this as neutral
(o) for now.</p>
      <p>Another point to consider is that the inference and also the training time, if necessary, depend
strongly on the hardware used. While the dictionary approaches are very efficient and do not
need special hardware, neural network based classifiers often gain speed significantly from
using GPUs, with the limiting factor often being the VRAM. Luckily, during inference the
requirements are somewhat lower than during training, and the model we used in particular can run
easily on consumer-grade GPUs [70].</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>In our study, we conducted a comparative analysis of three approaches to German sentiment
analysis: dictionary-based, zero-shot, and fine-tuning. Although there are certain trade-offs,
the viability of zero-shot text classification for sentiment analysis as a possible alternative to
dictionary-based methods can be reasonably argued, particularly in cases where a fine-tuned
model cannot be applied or trained sufficiently, either because of a lack of training data or due to
more specific domains that deviate from the standard approaches trained on tweets or reviews.
Especially in binary cases there seems to be a clear advantage in applying zero-shot models to
alleviate data labeling labour while retaining substantial performance. We also emphasise that this
paper was not concerned with fine-tuning or further engineering the prompt: in future work,
the zero-shot approach’s weakness on neutral labels could be addressed by designing a better hypothesis
template.</p>
      <p>We argue that zero-shot text classification for polarity sentiment could also contribute to
bridging the gap in model availability for languages other than English. In our research, we
specifically focused on an entailment-based zero-shot approach. However, with the
introduction of advanced language models like GPT-4 or LLaMA, the performance of zero-shot text
classification is expected to further widen the gap between dictionary approaches and zero-shot
text classification and even bring zero-shot results closer to SOTA values.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgements</title>
      <p>This research is funded by the German Research Foundation (DFG) under project number BU
3502/1-1.</p>
      <p>Appendix A shows an exact breakdown of how many positive, neutral and negative examples
are in each dataset.</p>
<p>[Appendix table residue: a single column of negative-instance counts per data set (260, 139,
292, 89, 780, 1569, 1130, 2000, 15608, 379683, 196953); the corresponding data-set labels were
lost in extraction.]</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
<string-name>
  <given-names>W.</given-names>
  <surname>van Atteveldt</surname>
</string-name>
,
<string-name>
  <given-names>M. A. C. G.</given-names>
  <surname>van der Velden</surname>
</string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Boukes</surname>
          </string-name>
          . “
          <article-title>The Validity of Sentiment Analysis: Comparing Manual Annotation, Crowd-Coding, Dictionary Approaches, and Machine Learning Algorithms”</article-title>
          .
<source>In: Communication Methods and Measures 15.2</source>
          (
          <issue>2021</issue>
          ), pp.
          <fpage>121</fpage>
          -
          <lpage>140</lpage>
. doi: 10.1080/19312458.2020.1869198.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>C.</given-names>
            <surname>Baden</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Pipal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Schoonvelde</surname>
          </string-name>
          , and
<string-name>
  <given-names>M. A. C. G.</given-names>
  <surname>van der Velden</surname>
</string-name>
.
          “
          <article-title>Three Gaps in Computational Text Analysis Methods for Social Sciences: A Research Agenda”</article-title>
          .
<source>In: Communication Methods and Measures 16.1</source>
          (
          <issue>2022</issue>
          ), pp.
          <fpage>1</fpage>
          -
          <lpage>18</lpage>
. doi: 10.1080/19312458.2021.2015574.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>C.</given-names>
            <surname>Bannier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Pauls</surname>
          </string-name>
, and
          <string-name>
            <given-names>A.</given-names>
            <surname>Walter</surname>
          </string-name>
          . “
          <article-title>Content analysis of business communication: introducing a German dictionary”</article-title>
          .
<source>In: Journal of Business Economics 89.1</source>
          (
          <issue>2019</issue>
          ), pp.
          <fpage>79</fpage>
          -
          <lpage>123</lpage>
. doi: 10.1007/s11573-018-0914-8.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Barzilay</surname>
          </string-name>
          . “
<article-title>Few-shot Text Classification with Distributional Signatures”</article-title>
          .
<source>In: International Conference on Learning Representations</source>
          .
          <year>2020</year>
          ,
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>P.</given-names>
            <surname>Barberá</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. E.</given-names>
            <surname>Boydstun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Linn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>McMahon</surname>
          </string-name>
, and
<string-name>
  <given-names>J.</given-names>
  <surname>Nagler</surname>
</string-name>
. “
<article-title>Automated Text Classification of News Articles: A Practical Guide”</article-title>
          .
<source>In: Political Analysis 29.1</source>
          (
          <issue>2021</issue>
          ), pp.
          <fpage>19</fpage>
          -
          <lpage>42</lpage>
. doi: 10.1017/pan.2020.8.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>F.</given-names>
            <surname>Barbieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. Espinosa</given-names>
            <surname>Anke</surname>
          </string-name>
, and
<string-name>
  <given-names>J.</given-names>
  <surname>Camacho-Collados</surname>
</string-name>
          .
          <article-title>“XLM-T: Multilingual Language Models in Twitter for Sentiment Analysis and Beyond”</article-title>
          .
<source>In: Proceedings of the Thirteenth Language Resources and Evaluation Conference. Marseille, France: European Language Resources Association</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>258</fpage>
          -
          <lpage>266</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>V.</given-names>
            <surname>Barriere</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Balahur</surname>
          </string-name>
          . “
          <article-title>Improving Sentiment Analysis over non-English Tweets using Multilingual Transformers and Automatic Translation for Data-Augmentation”</article-title>
          .
          <source>In: Proceedings of the 28th International Conference on Computational Linguistics</source>
          . Ed. by
          <string-name>
            <given-names>D.</given-names>
            <surname>Scott</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Bel</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Zong</surname>
          </string-name>
          . Stroudsburg, PA, USA:
          <source>International Committee on Computational Linguistics</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>266</fpage>
          -
          <lpage>271</lpage>
. doi: 10.18653/v1/2020.coling-main.23.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J.</given-names>
            <surname>Borst</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Wehrheim</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Burghardt</surname>
          </string-name>
. “
<article-title>“Money Can't Buy Love?” Creating a Historical Sentiment Index for the Berlin Stock Exchange, 1872-1930”</article-title>
.
<source>In: Digital Humanities 2023: Book of Abstracts</source>
. Zenodo. Ed. by
          <string-name>
            <given-names>A.</given-names>
            <surname>Baillot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Tasovac</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Scholger</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.</given-names>
            <surname>Vogeler</surname>
          </string-name>
          .
          <year>2023</year>
          , pp.
          <fpage>365</fpage>
          -
          <lpage>367</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.</given-names>
            <surname>Boukes</surname>
          </string-name>
          , B. van de Velde, T. Araujo, and
          <string-name>
            <given-names>R.</given-names>
            <surname>Vliegenthart</surname>
          </string-name>
          . “
<article-title>What's the Tone? Easy Doesn't Do It: Analyzing Performance and Agreement Between Off-the-Shelf Sentiment Analysis Tools”</article-title>
          .
<source>In: Communication Methods and Measures 14.2</source>
          (
          <issue>2020</issue>
          ), pp.
          <fpage>83</fpage>
          -
          <lpage>104</lpage>
. doi: 10.1080/19312458.2019.1671966.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S. R.</given-names>
            <surname>Bowman</surname>
          </string-name>
          , G. Angeli,
          <string-name>
            <given-names>C.</given-names>
            <surname>Potts</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C. D.</given-names>
            <surname>Manning</surname>
          </string-name>
          . “
          <article-title>A large annotated corpus for learning natural language inference”</article-title>
          .
<source>In: Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics</source>
          ,
          <year>2015</year>
          , pp.
          <fpage>632</fpage>
          -
          <lpage>642</lpage>
. doi: 10.18653/v1/D15-1075.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>J.</given-names>
            <surname>Bragg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Cohan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lo</surname>
          </string-name>
, and
<string-name>
  <given-names>I.</given-names>
  <surname>Beltagy</surname>
</string-name>
. “
<article-title>FLEX: Unifying Evaluation for Few-Shot NLP”</article-title>
          .
          <source>In: NeurIPS</source>
          <year>2021</year>
          .
          <year>2021</year>
          ,
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>T.</given-names>
            <surname>Brown</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Mann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ryder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Subbiah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Kaplan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Dhariwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Neelakantan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Shyam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Sastry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Askell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Agarwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Herbert-Voss</surname>
          </string-name>
          , G. Krueger,
          <string-name>
            <given-names>T.</given-names>
            <surname>Henighan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Child</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ramesh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ziegler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Winter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Hesse</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Chen</surname>
          </string-name>
          , E. Sigler,
          <string-name>
            <given-names>M.</given-names>
            <surname>Litwin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gray</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Chess</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Clark</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Berner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>McCandlish</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Radford</surname>
          </string-name>
,
<string-name>
  <given-names>I.</given-names>
  <surname>Sutskever</surname>
</string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Amodei</surname>
          </string-name>
          . “
          <article-title>Language Models are Few-Shot Learners”</article-title>
          .
<source>In: Advances in Neural Information Processing Systems</source>
          . Ed. by
          <string-name>
            <given-names>H.</given-names>
            <surname>Larochelle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ranzato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Hadsell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Balcan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Lin</surname>
          </string-name>
          . Vol.
          <volume>33</volume>
          . Curran Associates, Inc.,
          <year>2020</year>
          , pp.
          <fpage>1877</fpage>
          -
          <lpage>1901</lpage>
. url: https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
<string-name>
  <given-names>C.-H.</given-names>
  <surname>Chan</surname>
</string-name>
,
<string-name>
  <given-names>J.</given-names>
  <surname>Bajjalieh</surname>
</string-name>
,
<string-name>
  <given-names>L.</given-names>
  <surname>Auvil</surname>
</string-name>
,
<string-name>
  <given-names>H.</given-names>
  <surname>Wessler</surname>
</string-name>
,
<string-name>
  <given-names>S.</given-names>
  <surname>Althaus</surname>
</string-name>
,
<string-name>
  <given-names>K.</given-names>
  <surname>Welbers</surname>
</string-name>
, W. van Atteveldt, and
<string-name>
  <given-names>M.</given-names>
  <surname>Jungblut</surname>
</string-name>
          . “
<article-title>Four best practices for measuring news sentiment using 'off-the-shelf' dictionaries: a large-scale p-hacking experiment”</article-title>
          .
<source>In: Computational Communication Research</source>
          <volume>3</volume>
          .1 (
          <issue>2021</issue>
          ), pp.
          <fpage>1</fpage>
          -
          <lpage>27</lpage>
. url: https://computationalcommunication.org/ccr/article/view/40.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>M.</given-names>
            <surname>Cieliebak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Deriu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Egger</surname>
          </string-name>
          , and
          <string-name>
            <given-names>F.</given-names>
            <surname>Uzdilli</surname>
          </string-name>
          .
          <article-title>“A Twitter Corpus and Benchmark Resources for German Sentiment Analysis”</article-title>
          .
<source>In: Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media</source>
          . Ed. by L.-W. Ku and
          <string-name>
            <given-names>C.-T.</given-names>
            <surname>Li</surname>
          </string-name>
          . Stroudsburg, PA, USA: Association for Computational Linguistics,
          <year>2017</year>
          , pp.
          <fpage>45</fpage>
          -
          <lpage>51</lpage>
. doi: 10.18653/v1/W17-1106.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>J.</given-names>
            <surname>Devlin</surname>
          </string-name>
          , M.-
          <string-name>
            <given-names>W.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lee</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K.</given-names>
            <surname>Toutanova</surname>
          </string-name>
          . “BERT:
          <article-title>Pre-training of Deep Bidirectional Transformers for Language Understanding”</article-title>
          .
<source>In: CoRR</source>
          (
          <year>2019</year>
). arXiv: 1810.04805 [cs]. url: http://arxiv.org/abs/1810.04805.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>T.</given-names>
            <surname>Dobbrick</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Jakob</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.-H.</given-names>
            <surname>Chan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Wessler</surname>
          </string-name>
          . “
          <article-title>Enhancing Theory-Informed Dictionary Approaches with “Glass-box” Machine Learning: The Case of Integrative Complexity in Social Media Comments”</article-title>
          .
<source>In: Communication Methods and Measures 16.4</source>
          (
          <issue>2022</issue>
          ), pp.
          <fpage>303</fpage>
          -
          <lpage>320</lpage>
. doi: 10.1080/19312458.2021.1999913.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>K.</given-names>
            <surname>Du</surname>
          </string-name>
          and
          <string-name>
            <given-names>K.</given-names>
            <surname>Mellmann</surname>
          </string-name>
          .
          <article-title>Sentimentanalyse als Instrument literaturgeschichtlicher Rezeptionsforschung</article-title>
          . Working Paper. Göttingen,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>J.</given-names>
            <surname>Fehle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Schmidt</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
<surname>Wolff</surname>
          </string-name>
          .
          <article-title>Lexicon-based Sentiment Analysis in German: Systematic Evaluation of Resources and Preprocessing Techniques</article-title>
          .
          <year>2021</year>
. doi: 10.5283/epub.50833.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
[20]
          <string-name>
            <given-names>O.</given-names>
            <surname>Guhr</surname>
          </string-name>
          ,
<string-name>
  <given-names>A.-K.</given-names>
  <surname>Schumann</surname>
</string-name>
,
<string-name>
  <given-names>F.</given-names>
  <surname>Bahrmann</surname>
</string-name>
, and
<string-name>
  <given-names>H. J.</given-names>
  <surname>Böhme</surname>
</string-name>
          . “
<article-title>Training a Broad-Coverage German Sentiment Classification Model for Dialog Systems”</article-title>
          .
<source>In: Proceedings of the Twelfth Language Resources and Evaluation Conference. Marseille, France: European Language Resources Association</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>1627</fpage>
          -
          <lpage>1632</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>K.</given-names>
            <surname>Halder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Akbik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Krapac</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Vollgraf</surname>
          </string-name>
          . “
<article-title>Task-Aware Representation of Sentences for Generic Text Classification”</article-title>
          .
          <source>In: Proceedings of the 28th International Conference on Computational Linguistics. International Committee on Computational Linguistics</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>3202</fpage>
          -
          <lpage>3213</lpage>
. doi: 10.18653/v1/2020.coling-main.285.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
[22]
          <string-name>
            <given-names>M.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Xue</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Gao</surname>
          </string-name>
          , R. Cheng, and
          <string-name>
            <given-names>Z.</given-names>
            <surname>Su.</surname>
          </string-name>
<article-title>“Multi-Label Few-Shot Learning for Aspect Category Detection”</article-title>
          .
<article-title>In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th</article-title>
          <source>International Joint Conference on Natural Language Processing</source>
          (Volume
          <volume>1</volume>
          : Long Papers).
          <source>Association for Computational Linguistics</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>6330</fpage>
          -
          <lpage>6340</lpage>
. doi: 10.18653/v1/2021.acl-long.495.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>C.</given-names>
            <surname>Hutto</surname>
          </string-name>
          and
<string-name>
  <given-names>E.</given-names>
  <surname>Gilbert</surname>
</string-name>
.
          “
          <article-title>VADER: A Parsimonious Rule-Based Model for Sentiment Analysis of Social Media Text”</article-title>
          .
<source>In: Proceedings of the International AAAI Conference on Web and Social Media</source>
          <volume>8</volume>
          .1 (
          <issue>2014</issue>
          ), pp.
          <fpage>216</fpage>
          -
          <lpage>225</lpage>
. doi: 10.1609/icwsm.v8i1.14550. url: https://ojs.aaai.org/index.php/ICWSM/article/view/14550.
[24]
          <string-name>
            <given-names>A.</given-names>
            <surname>Idrissi-Yaghir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Schäfer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Bauer</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C. M.</given-names>
            <surname>Friedrich</surname>
          </string-name>
          . “
<article-title>Domain Adaptation of Transformer-Based Models Using Unlabeled Data for Relevance and Polarity Classification of German Customer Feedback”</article-title>
          .
<source>In: SN Computer Science 4.2</source>
          (
          <year>2023</year>
          ).
<source>doi: 10.1007/s42979-022-01563-6.</source>
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>B. M. J.</given-names>
            <surname>Kern</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Baumann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. E.</given-names>
            <surname>Kolb</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Sekanina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Hofmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Wissik</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Neidhardt</surname>
          </string-name>
          .
          <article-title>A Review and Cluster Analysis of German Polarity Resources for Sentiment Analysis</article-title>
          .
          <year>2021</year>
. doi: 10.4230/oasics.ldk.2021.37.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>P.</given-names>
            <surname>Keung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Lu</surname>
          </string-name>
,
<string-name>
  <given-names>G.</given-names>
  <surname>Szarvas</surname>
</string-name>
, and
          <string-name>
            <given-names>N. A.</given-names>
            <surname>Smith</surname>
          </string-name>
          . “
          <article-title>The Multilingual Amazon Reviews Corpus”</article-title>
          .
          <source>In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)</source>
          .
          <source>Online: Association for Computational Linguistics</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>4563</fpage>
          -
          <lpage>4568</lpage>
. doi: 10.18653/v1/2020.emnlp-main.369.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>E.</given-names>
            <surname>Kim</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Klinger</surname>
          </string-name>
          .
          <article-title>A Survey on Sentiment and Emotion Analysis for Computational Literary Studies</article-title>
          .
          <year>2019</year>
. doi: 10.17175/2019_008.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>T.</given-names>
            <surname>Kolb</surname>
          </string-name>
          ,
<string-name>
  <given-names>K.</given-names>
  <surname>Sekanina</surname>
</string-name>
          ,
          <string-name>
            <given-names>B. M. J.</given-names>
            <surname>Kern</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Neidhardt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Wissik</surname>
          </string-name>
          ,
and
          <string-name>
            <given-names>A.</given-names>
            <surname>Baumann</surname>
          </string-name>
          . “
<article-title>The ALPIN Sentiment Dictionary: Austrian Language Polarity in Newspapers”</article-title>
.
<source>In: Proceedings of the Thirteenth Language Resources and Evaluation Conference (LREC 2022)</source>
. Ed. by N. Calzolari et al.
          <year>2022</year>
          , pp.
          <fpage>4708</fpage>
          -
          <lpage>4716</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>S.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ma</surname>
          </string-name>
,
<string-name>
  <given-names>J.</given-names>
  <surname>Meng</surname>
</string-name>
,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhuang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T.-Q.</given-names>
            <surname>Peng</surname>
          </string-name>
          . “
          <article-title>Detecting Sentiment toward Emerging Infectious Diseases on Social Media: A Validity Evaluation of Dictionary-Based Sentiment Analysis”</article-title>
          .
<source>In: International Journal of Environmental Research and Public Health</source>
          <volume>19</volume>
          .11 (
          <year>2022</year>
). doi: 10.3390/ijerph19116759.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>P.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Yuan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Fu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Hayashi</surname>
          </string-name>
          , and
<string-name>
  <given-names>G.</given-names>
  <surname>Neubig</surname>
</string-name>
. “
<article-title>Pre-Train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing”</article-title>
          .
          <source>In: ACM Comput. Surv. 55.9</source>
          (
          <year>2023</year>
). doi: 10.1145/3560815.
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ott</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Goyal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Du</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Joshi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Levy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lewis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zettlemoyer</surname>
          </string-name>
          , and
<string-name>
  <given-names>V.</given-names>
  <surname>Stoyanov</surname>
</string-name>
.
<article-title>RoBERTa: A Robustly Optimized BERT Pretraining Approach</article-title>
.
<year>2019</year>
. arXiv: 1907.11692 [cs.CL].
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Lundberg</surname>
          </string-name>
          and
<string-name>
  <given-names>S.-I.</given-names>
  <surname>Lee</surname>
</string-name>
. “
<article-title>A unified approach to interpreting model predictions”</article-title>
          .
          <source>In: Advances in neural information processing systems</source>
. Ed. by
<string-name>
  <given-names>I.</given-names>
  <surname>Guyon</surname>
</string-name>
,
          <string-name>
            <given-names>U. V.</given-names>
            <surname>Luxburg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bengio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Wallach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Fergus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Vishwanathan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Garnett</surname>
          </string-name>
          . Vol.
          <volume>30</volume>
          . Curran Associates, Inc.,
          <year>2017</year>
          ,
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>T.</given-names>
            <surname>Ma</surname>
          </string-name>
,
<string-name>
  <given-names>J.-G.</given-names>
  <surname>Yao</surname>
</string-name>
,
          <string-name>
            <given-names>C.-Y.</given-names>
            <surname>Lin</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T.</given-names>
            <surname>Zhao</surname>
          </string-name>
.
          “
<article-title>Issues with Entailment-based Zero-shot Text Classification”</article-title>
          .
          <source>In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing</source>
          (Volume
          <volume>2</volume>
          : Short Papers).
          <source>Association for Computational Linguistics</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>786</fpage>
          -
          <lpage>796</lpage>
. doi: 10.18653/v1/2021.acl-short.99.
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>G.</given-names>
            <surname>Manias</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mavrogiorgou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kiourtis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Symvoulidis</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Kyriazis</surname>
          </string-name>
          . “
          <article-title>Multilingual text categorization and sentiment analysis: a comparative analysis of the utilization of multilingual approaches for classifying twitter data”</article-title>
          .
<source>In: Neural Computing and Applications</source>
          (
          <year>2023</year>
          ).
doi: 10.1007/s00521-023-08629-3.
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [35]
          <string-name>
            <given-names>A.</given-names>
            <surname>Mengelkamp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Koch</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Schumann</surname>
          </string-name>
. “
<article-title>Creating Sentiment Dictionaries: Process Model and Quantitative Study for Credit Risk”</article-title>
.
<source>In: Proceedings of the 9th European Conference on Social Media. 1</source>
          .
          <year>2022</year>
          , pp.
          <fpage>121</fpage>
          -
          <lpage>129</lpage>
. doi: 10.25968/opus-2449.
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [36]
          <string-name>
            <given-names>K.</given-names>
            <surname>Müller</surname>
          </string-name>
          . “
          <article-title>German forecasters' narratives: How informative are German business cycle forecast reports?</article-title>
          ”
<source>In: Empirical Economics 62.5</source>
          (
          <issue>2022</issue>
          ), pp.
          <fpage>2373</fpage>
          -
          <lpage>2415</lpage>
. doi: 10.1007/s00181-021-02100-9.
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [37]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Nie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Williams</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Dinan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bansal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Weston</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Kiela</surname>
          </string-name>
. “
<article-title>Adversarial NLI: A New Benchmark for Natural Language Understanding”</article-title>
.
<source>In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics</source>
          .
          <source>Online: Association for Computational Linguistics</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>4885</fpage>
          -
          <lpage>4901</lpage>
          .
doi: 10.18653/v1/2020.acl-main.441.
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [38]
          <string-name>
            <given-names>M.</given-names>
            <surname>Palmer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Roeder</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Muntermann</surname>
          </string-name>
          . “
<article-title>Induction of a sentiment dictionary for financial analyst communication: a data-driven approach balancing machine learning and human intuition”</article-title>
          .
<source>In: Journal of Business Analytics 5.1</source>
          (
          <issue>2022</issue>
          ), pp.
          <fpage>8</fpage>
          -
          <lpage>28</lpage>
. doi: 10.1080/2573234x.2021.1955022.
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          [39]
          <string-name>
            <given-names>M.</given-names>
            <surname>Pöferlein</surname>
          </string-name>
          . “
          <article-title>Sentiment Analysis of German Texts in Finance: Improving and Testing the BPW Dictionary”</article-title>
          .
<source>In: Journal of Banking and Financial Economics</source>
          <year>2021</year>
          .
          <volume>2</volume>
          (
          <issue>16</issue>
          ) (
          <year>2021</year>
          ), pp.
          <fpage>5</fpage>
          -
          <lpage>24</lpage>
. doi: 10.7172/2353-6845.jbfe.2021.2.1.
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          [40]
          <string-name>
            <given-names>C.</given-names>
            <surname>Puschmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Karakurt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Amlinger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Gess</surname>
          </string-name>
          , and
          <string-name>
            <given-names>O.</given-names>
            <surname>Nachtwey</surname>
          </string-name>
. “
<article-title>RPC-Lex: A dictionary to measure German right-wing populist conspiracy discourse online”</article-title>
.
<source>In: Convergence (London, England)</source>
          <volume>28</volume>
          .4 (
          <issue>2022</issue>
          ), pp.
          <fpage>1144</fpage>
          -
          <lpage>1171</lpage>
. doi: 10.1177/13548565221109440.
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          [41]
          <string-name>
            <given-names>R.</given-names>
            <surname>Remus</surname>
          </string-name>
,
<string-name>
  <given-names>U.</given-names>
  <surname>Quasthoff</surname>
</string-name>
, and
<string-name>
  <given-names>G.</given-names>
  <surname>Heyer</surname>
</string-name>
. “
<article-title>SentiWS - A Publicly Available German-language Resource for Sentiment Analysis”</article-title>
          .
<source>In: Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)</source>
          . Valletta,
          <source>Malta: European Language Resources Association (ELRA)</source>
          ,
          <year>2010</year>
          , p.
          <fpage>2</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          [42]
          <string-name>
            <given-names>M.</given-names>
            <surname>Sänger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Leser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kemmerer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Adolphs</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Klinger</surname>
          </string-name>
          . “
          <article-title>SCARE - The Sentiment Corpus of App Reviews with Fine-grained Annotations in German”</article-title>
          .
<source>In: Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC</source>
          <year>2016</year>
          ). Ed. by
<string-name>
  <given-names>N.</given-names>
  <surname>Calzolari</surname>
</string-name>
(Conference Chair),
          <string-name>
            <given-names>K.</given-names>
            <surname>Choukri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Declerck</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Grobelnik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Maegaard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Mariani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Moreno</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Odijk</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Piperidis</surname>
          </string-name>
          . Paris, France:
          <source>European Language Resources Association (ELRA)</source>
          ,
          <year>2016</year>
          ,
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          [43]
          <string-name>
            <given-names>A.</given-names>
            <surname>Sarkar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Reddy</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R. S.</given-names>
            <surname>Iyengar</surname>
          </string-name>
          . “
          <article-title>Zero-Shot Multilingual Sentiment Analysis Using Hierarchical Attentive Network and BERT”</article-title>
          .
<source>In: Proceedings of the 2019 3rd International Conference on Natural Language Processing and Information Retrieval. NLPIR</source>
          <year>2019</year>
          . New York, NY, USA: Association for Computing Machinery,
          <year>2019</year>
          , pp.
          <fpage>49</fpage>
          -
          <lpage>56</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          [44]
          <string-name>
            <given-names>T.</given-names>
            <surname>Schmidt</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Burghardt</surname>
          </string-name>
          . “
          <article-title>An Evaluation of Lexicon-based Sentiment Analysis Techniques for the Plays of Gotthold Ephraim Lessing”</article-title>
          .
<source>In: Proceedings of the Second Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature</source>
, August 25, 2018, Santa Fe, New Mexico, USA. Ed. by
          <string-name>
            <given-names>B.</given-names>
            <surname>Alex</surname>
          </string-name>
          . Stroudsburg, PA: Association for Computational Linguistics,
          <year>2018</year>
          , pp.
          <fpage>139</fpage>
          -
          <lpage>149</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          [45]
          <string-name>
            <given-names>T.</given-names>
            <surname>Schmidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Burghardt</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K.</given-names>
            <surname>Dennerlein</surname>
          </string-name>
. “
<article-title>„Kann man denn auch nicht lachend sehr ernsthaft sein?“ - Zum Einsatz von Sentiment Analyse-Verfahren für die quantitative Untersuchung von Lessings Dramen”</article-title>
. In: Book of Abstracts.
          <year>2018</year>
          ,
        </mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>
          [46]
          <string-name>
            <given-names>T.</given-names>
            <surname>Schmidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Burghardt</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K.</given-names>
            <surname>Dennerlein</surname>
          </string-name>
          . “
          <article-title>Sentiment Annotation of Historic German Plays: An Empirical Study on Annotation Behavior”</article-title>
          .
<source>In: Proceedings of the Workshop on Annotation in Digital Humanities (annDH 2018)</source>
. Ed. by
          <string-name>
            <given-names>S.</given-names>
            <surname>Kübler</surname>
          </string-name>
          and
<string-name>
  <given-names>H.</given-names>
  <surname>Zinsmeister</surname>
</string-name>
. Sofia, Bulgaria,
          <year>2018</year>
          , pp.
          <fpage>47</fpage>
          -
          <lpage>52</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref45">
        <mixed-citation>
          [47]
          <string-name>
            <given-names>T.</given-names>
            <surname>Schmidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Dangel</surname>
          </string-name>
          , and
<string-name>
  <given-names>C.</given-names>
  <surname>Wolff</surname>
</string-name>
.
<article-title>SentText: A Tool for Lexicon-based Sentiment Analysis in Digital Humanities</article-title>
          .
          <source>Universität Regensburg</source>
          ,
          <year>2021</year>
. doi: 10.5283/epub.44943.
        </mixed-citation>
      </ref>
      <ref id="ref46">
        <mixed-citation>
          [48]
          <string-name>
            <given-names>E.</given-names>
            <surname>Schonfeld</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ebrahimi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Sinha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Darrell</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Z.</given-names>
            <surname>Akata</surname>
          </string-name>
. “
<article-title>Generalized Zero- and Few-Shot Learning via Aligned Variational Autoencoders”</article-title>
.
<source>In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)</source>
. IEEE,
          <year>2019</year>
          , pp.
          <fpage>8239</fpage>
          -
          <lpage>8247</lpage>
. doi: 10.1109/cvpr.2019.00844.
        </mixed-citation>
      </ref>
      <ref id="ref47">
        <mixed-citation>
          [49]
          <string-name>
            <given-names>R.</given-names>
            <surname>Schwartz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Dodge</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Smith</surname>
          </string-name>
          , and
          <string-name>
            <given-names>O.</given-names>
            <surname>Etzioni</surname>
          </string-name>
. “
<article-title>Green AI”</article-title>
.
<source>In: Communications of the ACM</source>
          <volume>63</volume>
          (
          <year>2019</year>
          ), pp.
          <fpage>54</fpage>
          -
          <lpage>63</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref48">
        <mixed-citation>
          [50]
          <string-name>
            <given-names>R.</given-names>
            <surname>Seoh</surname>
          </string-name>
,
<string-name>
  <given-names>I.</given-names>
  <surname>Birle</surname>
</string-name>
,
          <string-name>
            <given-names>M.</given-names>
            <surname>Tak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.-S.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Pinette</surname>
          </string-name>
          ,
and
          <string-name>
            <given-names>A.</given-names>
            <surname>Hough</surname>
          </string-name>
          . “
<article-title>Open Aspect Target Sentiment Classification with Natural Language Prompts”</article-title>
          .
<source>In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>6311</fpage>
          -
          <lpage>6322</lpage>
. doi: 10.18653/v1/2021.emnlp-main.509.
        </mixed-citation>
      </ref>
      <ref id="ref49">
        <mixed-citation>
          [51]
          <string-name>
            <given-names>A.</given-names>
            <surname>Shrikumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Greenside</surname>
          </string-name>
          ,
and
          <string-name>
            <given-names>A.</given-names>
            <surname>Kundaje</surname>
          </string-name>
          . “
<article-title>Learning important features through propagating activation differences”</article-title>
          .
<source>In: Proceedings of the 34th International Conference on Machine Learning - Volume 70. ICML'17. JMLR.org</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>3145</fpage>
          -
          <lpage>3153</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref50">
        <mixed-citation>
          [52]
          <string-name>
            <given-names>L.</given-names>
            <surname>Shu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Liu</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Chen</surname>
          </string-name>
          .
          <source>Zero-Shot Aspect-Based Sentiment Analysis</source>
          .
          <year>2022</year>
. arXiv: 2202.01924 [cs.CL].
        </mixed-citation>
      </ref>
      <ref id="ref51">
        <mixed-citation>
          [53]
          <string-name>
            <given-names>U.</given-names>
            <surname>Sidarenka</surname>
          </string-name>
          . “
          <article-title>PotTS: The Potsdam Twitter Sentiment Corpus”</article-title>
          .
<source>In: Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)</source>
          . Portorož,
          <source>Slovenia: European Language Resources Association (ELRA)</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>1133</fpage>
          -
          <lpage>1141</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref52">
        <mixed-citation>
          [54]
          <string-name>
            <given-names>K.</given-names>
            <surname>Simonyan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Vedaldi</surname>
          </string-name>
          ,
and
          <string-name>
            <given-names>A.</given-names>
            <surname>Zisserman</surname>
          </string-name>
          . “
<article-title>Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps”</article-title>
          .
<source>In: Workshop at International Conference on Learning Representations</source>
          .
          <year>2014</year>
.
        </mixed-citation>
      </ref>
      <ref id="ref53">
        <mixed-citation>
          [55]
          <string-name>
            <given-names>H.</given-names>
            <surname>Song</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Tolochko</surname>
          </string-name>
          ,
<string-name>
            <given-names>J.-M.</given-names>
            <surname>Eberl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Eisele</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Greussing</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Heidenreich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Lind</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Galyga</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H. G.</given-names>
            <surname>Boomgaarden</surname>
          </string-name>
. “
          <article-title>In Validations We Trust? The Impact of Imperfect Human Annotations as a Gold Standard on the Quality of Validation of Automated Content Analysis”</article-title>
          .
<source>In: Political Communication 37.4</source>
          (
<year>2020</year>
          ), pp.
          <fpage>550</fpage>
          -
          <lpage>572</lpage>
. doi: 10.1080/10584609.2020.1723752.
        </mixed-citation>
      </ref>
      <ref id="ref54">
        <mixed-citation>
          [56]
          <string-name>
            <given-names>A.</given-names>
            <surname>Stoll</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Wilms</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Ziegele</surname>
          </string-name>
          . “
<article-title>Developing an Incivility Dictionary for German Online Discussions - a Semi-Automated Approach Combining Human and Artificial Knowledge”</article-title>
          .
          <source>In: Communication Methods and Measures</source>
          (
          <year>2023</year>
          ), pp.
          <fpage>1</fpage>
          -
          <lpage>19</lpage>
. doi: 10.1080/19312458.2023.2166028.
        </mixed-citation>
      </ref>
      <ref id="ref55">
        <mixed-citation>
          [57]
          <string-name>
            <given-names>S. G.</given-names>
            <surname>Tesfagergish</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kapočiūtė-Dzikienė</surname>
          </string-name>
          ,
and
          <string-name>
            <given-names>R.</given-names>
            <surname>Damaševičius</surname>
          </string-name>
          . “
          <article-title>Zero-Shot Emotion Detection for Semi-Supervised Sentiment Analysis Using Sentence Transformers and Ensemble Learning”</article-title>
          .
<source>In: Applied Sciences 12.17</source>
          (
          <year>2022</year>
          ), p.
          <fpage>8662</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref56">
        <mixed-citation>
          [58]
<string-name>
            <given-names>M. L. H.</given-names>
            <surname>Võ</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Conrad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Kuchinke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Urton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. J.</given-names>
            <surname>Hofmann</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Jacobs</surname>
          </string-name>
          . “
          <article-title>The Berlin Affective Word List Reloaded (BAWL-R)”</article-title>
          .
<source>In: Behavior Research Methods 41.2</source>
          (
<year>2009</year>
          ), pp.
          <fpage>534</fpage>
          -
          <lpage>538</lpage>
. doi: 10.3758/brm.41.2.534.
        </mixed-citation>
      </ref>
      <ref id="ref57">
        <mixed-citation>
          [60]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Yao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. T.</given-names>
            <surname>Kwok</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L. M.</given-names>
            <surname>Ni</surname>
          </string-name>
          . “
          <article-title>Generalizing from a Few Examples: A Survey on Few-Shot Learning”</article-title>
          .
<source>In: ACM Comput. Surv. 53.3</source>
          (
          <year>2020</year>
). doi: 10.1145/3386252.
        </mixed-citation>
      </ref>
      <ref id="ref58">
        <mixed-citation>
          [61]
          <string-name>
            <given-names>L.</given-names>
            <surname>Wehrheim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Borst</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Burghardt</surname>
          </string-name>
          ,
and
          <string-name>
            <given-names>A.</given-names>
            <surname>Niekler</surname>
          </string-name>
. “
          <article-title>„Auch heute war die Stimmung im Allgemeinen fest.“ Zero-Shot Klassifikation zur Bestimmung des Media Sentiment an der Berliner Börse zwischen 1872 und 1930”</article-title>
          . In: Konferenzabstracts DHd2023: Open Humanities, Open Culture.
          <year>2023</year>
          , pp.
          <fpage>90</fpage>
          -
          <lpage>94</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref59">
        <mixed-citation>
          [62]
          <string-name>
            <given-names>T.</given-names>
            <surname>Widmann</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Wich</surname>
          </string-name>
. “
          <article-title>Creating and Comparing Dictionary, Word Embedding, and Transformer-Based Models to Measure Discrete Emotions in German Political Text”</article-title>
          .
          <source>In: Political Analysis</source>
          (
          <year>2022</year>
          ), pp.
          <fpage>1</fpage>
          -
          <lpage>16</lpage>
. doi: 10.1017/pan.2022.15.
        </mixed-citation>
      </ref>
      <ref id="ref60">
        <mixed-citation>
          <string-name>
            <given-names>A.</given-names>
            <surname>Williams</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Nangia</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Bowman</surname>
          </string-name>
          . “
<article-title>A broad-coverage challenge corpus for sentence understanding through inference”</article-title>
          .
          <source>In: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)</source>
          .
          <source>Association for Computational Linguistics</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>1112</fpage>
          -
          <lpage>1122</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref61">
        <mixed-citation>
          <string-name>
            <given-names>M.</given-names>
            <surname>Wojatzki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Ruppert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Holschneider</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Zesch</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Biemann</surname>
          </string-name>
. “
          <article-title>GermEval 2017: Shared Task on Aspect-based Sentiment in Social Media Customer Feedback”</article-title>
          .
          <source>In: Proceedings of the GermEval 2017 - Shared Task on Aspect-based Sentiment in Social Media Customer Feedback</source>
          . Berlin, Germany,
          <year>2017</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>12</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref62">
        <mixed-citation>
          [65]
          <string-name>
            <given-names>T.</given-names>
            <surname>Wolf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Debut</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Sanh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Chaumond</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Delangue</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Moi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Cistac</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Rault</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Louf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Funtowicz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Davison</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Shleifer</surname>
          </string-name>
,
          <string-name>
            <given-names>P.</given-names>
            <surname>von Platen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Jernite</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Plu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. Le</given-names>
            <surname>Scao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gugger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Drame</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Lhoest</surname>
          </string-name>
          ,
and
          <string-name>
            <given-names>A.</given-names>
            <surname>Rush</surname>
          </string-name>
. “
          <article-title>Transformers: State-of-the-Art Natural Language Processing”</article-title>
          .
<source>In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. Online: Association for Computational Linguistics</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>38</fpage>
          -
          <lpage>45</lpage>
          .
doi: 10.18653/v1/2020.emnlp-demos.6. url: https://aclanthology.org/2020.emnlp-demos.6
        </mixed-citation>
      </ref>
      <ref id="ref63">
        <mixed-citation>
          [66]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Xian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Schiele</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Z.</given-names>
            <surname>Akata</surname>
          </string-name>
          . “
          <article-title>Zero-Shot Learning - The Good, the Bad and the Ugly”</article-title>
          .
          <source>In: IEEE Computer Vision and Pattern Recognition (CVPR)</source>
          .
          <year>2017</year>
.
          <string-name>
            <given-names>W.</given-names>
            <surname>Yin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hay</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Roth</surname>
          </string-name>
          . “
<article-title>Benchmarking Zero-shot Text Classification: Datasets, Evaluation and Entailment Approach”</article-title>
          .
<source>In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP</source>
          <year>2019</year>
, Hong Kong, China, November 3-7, 2019.
        </mixed-citation>
      </ref>
      <ref id="ref64">
        <mixed-citation>
Association for Computational Linguistics
          ,
          <year>2019</year>
          , pp.
          <fpage>3912</fpage>
          -
          <lpage>3921</lpage>
          .
doi: 10.18653/v1/D19-1404.
        </mixed-citation>
      </ref>
      <ref id="ref65">
        <mixed-citation>
          <string-name>
            <given-names>A.</given-names>
            <surname>Zehe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Becker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Jannidis</surname>
          </string-name>
          ,
and
          <string-name>
            <given-names>A.</given-names>
            <surname>Hotho</surname>
          </string-name>
          . “
          <article-title>Towards Sentiment Analysis on German Literature”</article-title>
          .
<source>In: KI 2017: Advances in Artificial Intelligence</source>
          . Ed. by
          <string-name>
            <given-names>G.</given-names>
            <surname>Kern-Isberner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Fürnkranz</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Thimm</surname>
          </string-name>
          . Vol.
          <volume>10505</volume>
          . Lecture Notes in Computer Science. Cham: Springer International Publishing,
          <year>2017</year>
          , pp.
          <fpage>387</fpage>
          -
          <lpage>394</lpage>
          .
doi: 10.1007/978-3-319-67190-1_36.
        </mixed-citation>
      </ref>
      <ref id="ref66">
        <mixed-citation>
          [69]
          <string-name>
            <given-names>R. H.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. X.</given-names>
            <surname>Fan</surname>
          </string-name>
          , and
<string-name>
            <given-names>R.</given-names>
            <surname>Zhang</surname>
          </string-name>
          .
<article-title>ConEntail: An Entailment-based Framework for Universal Zero and Few Shot Classification with Supervised Contrastive Pretraining</article-title>
          . Dubrovnik, Croatia,
          <year>2023</year>
. doi: 10.18653/v1/2023.eacl-main.142. url: https://aclanthology.org/2023.eacl-main.142
          .
        </mixed-citation>
      </ref>
      <ref id="ref67">
        <mixed-citation>
          [70]
          <string-name>
            <given-names>F.</given-names>
            <surname>Ziegner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Borst</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Niekler</surname>
          </string-name>
, and
          <string-name>
            <given-names>M.</given-names>
            <surname>Potthast</surname>
          </string-name>
          .
          <source>Using Language Models on Low-end Hardware</source>
          .
          <year>2023</year>
. arXiv: 2305.02350 [cs.CL].
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>