<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>A Comparison of “X” Sentiment Analysis Investigating the Impact of COVID-19 on “Essential Jobs”</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ali Vahdatnia</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Danoosh Peachkah</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Michael Tiemann</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Federal Institute for Vocational Education and Training (BIBB)</institution>
          ,
          <addr-line>Bonn</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Koblenz, Department of Computer Science</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <author-notes>
        <corresp>Corresponding author: tiemann@bibb.de (M. Tiemann).</corresp>
        <fn fn-type="equal"><p>These authors contributed equally.</p></fn>
      </author-notes>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
        <p>This paper investigates transformations of “Essential Jobs” during the COVID-19 pandemic through sentiment analysis of social media data, specifically focusing on “X” posts. The study employs a comprehensive methodology consisting of traditional and modern sentiment analysis tools as well as advanced deep learning approaches to examine job-related sentiments in both English and German. The research demonstrates that the “Twitter-XLM-RoBERTa” model outperforms other sentiment analysis tools in both its base and enhanced implementations, challenging the assumption that deep learning enhancements necessarily improve sentiment analysis performance. The findings indicate significant variations between “Essential Job” designations. However, the high proportion of “No-Data” classifications and the linguistic variability between the English and German datasets suggest methodological limitations.</p>
      </abstract>
      <kwd-group>
        <kwd>Essential Jobs</kwd>
        <kwd>“X” Social Media Analysis</kwd>
        <kwd>COVID-19 Pandemic</kwd>
        <kwd>Sentiment Analysis</kwd>
        <kwd>Deep Learning</kwd>
        <kwd>Cross-lingual Analysis</kwd>
        <kwd>Job Classification</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Social media has emerged as a potent instrument for capturing and shaping public opinion during
significant events. Platforms such as “X” (formerly “Twitter”) can be seen as real-time indicators of
societal sentiment. These platforms facilitate the aggregation of dynamic insights into public sentiment
and reactions, thereby offering researchers unprecedented opportunities to study human behavior and
opinion at scale.</p>
      <p>In order to comprehend the substantial and unstructured data generated on social media platforms
such as “X”, researchers employ techniques such as sentiment analysis, which categorizes posts as
positive, negative, or neutral. The efficacy of this process is augmented through the integration of
machine learning and Natural Language Processing (NLP), a combination fostering the extraction of
emotional meaning from textual data. NLP facilitates the nuanced interpretation of user-generated
content, while machine learning models can learn from labeled datasets to classify new posts with a high
degree of accuracy. Deep learning, a subset of machine learning, has further advanced this capability by
more effectively handling the complexity of human language, especially with the introduction of Large
Language Models (LLMs). These models have been demonstrated to possess the capacity to comprehend
implicit sentiments and contextual meaning, rendering them particularly well-suited for the analysis of
extensive social media datasets with enhanced precision and scalability.</p>
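      <p>As a minimal illustration of the task these tools solve (mapping a post to a positive, negative, or neutral label), the sketch below uses an invented toy lexicon; the tools compared later in this paper rely on transformer models rather than word lists.</p>

```python
# Toy lexicon-based sentiment classifier. The word lists and scoring rule are
# invented for illustration only; the tools evaluated in this paper use
# transformer models such as Twitter-XLM-RoBERTa instead.
POSITIVE = {"thank", "hero", "great", "essential", "support"}
NEGATIVE = {"underpaid", "risk", "bad", "unfair", "exhausted"}

def classify(post):
    # crude tokenization: split on whitespace, strip punctuation, lowercase
    tokens = [t.strip(".,!?#@").lower() for t in post.split()]
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    if score > 0:
        return "positive"
    if score == 0:
        return "neutral"
    return "negative"
```

      <p>For example, classify("Thank you, essential workers!") yields "positive" under this toy lexicon, whereas a real tool would also weigh context, negation, and emojis.</p>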
      <p>These methods make it possible to use social media data to study public opinion on societal changes, such as the evolving definition of Essential Jobs over the course of the pandemic. Conventional data collection methodologies, such as surveys, face significant limitations, as they are both time- and resource-intensive. Social media provides a viable alternative, offering the capacity to capture real-time, spontaneous public reactions. This paper addresses the research gap on why more and more jobs were regarded as essential during the pandemic by conducting a comprehensive analysis of sentiments expressed on “X” with respect to Essential Jobs, building upon extant research on the German workforce. The objective of this study is to find hints on whether shifts in what constitutes an Essential Job were institutionally driven or arose “organically” from public discourse. Specifically, we focus on two research questions:</p>
      <list list-type="bullet">
        <list-item><p>Which sentiment analysis method is the most effective for assessing changes in sentiment towards Essential Jobs? (RQ1)</p></list-item>
        <list-item><p>How might contemporary deep learning algorithms improve the performance of the sentiment analysis process? (RQ2)</p></list-item>
      </list>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Tiemann et al. [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] look into the societal and economic assessment of Essential Jobs during the COVID-19
pandemic. The analysis involves comparing two lists of Essential Jobs, namely the Berlin List — Essential
Jobs before and in the initial phase of the pandemic — and the Extended List — jobs that were added to
the Essential Jobs list as the pandemic continued. They find differences in wages, prestige, workload,
and degrees of qualification, analysing data from the 2018 BIBB/BAuA Employment Survey [ ? ] to
understand these differences across occupations throughout the pandemic. The results emphasize
discrepancies in remuneration and occupational prestige between Essential and Non-Essential Jobs.
One resulting research gap is to investigate whether jobs were added to the Essential Jobs list
“organically” over the course of the pandemic, which could have shown in changing sentiments towards
them or in the discussions surrounding them. By using sentiment analysis techniques and
incorporating deep learning methods on “X” data, we want to deepen our understanding of sentiments
towards occupations, as the initial analyses relied only on standard methods. Other literature
studying Essential Jobs focuses on the effects of the COVID-19 pandemic on workers classified as Essential
and Non-Essential; as an example see van Zoonen et al. [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        Miah et al. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] explored a novel system for cross-lingual sentiment analysis utilizing transformers and
Large Language Models in an ensemble architecture. The study focuses on the efficacy of pre-trained
sentiment analysis models such as Twitter-RoBERTa-Base-Sentiment-Latest,
BERT-base-multilingual-uncased-sentiment, and GPT-3 from OpenAI. A hybrid paradigm for examining comments on the
YouTube social media platform was presented by Jelodar et al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. The framework utilizes semantic and sentiment analysis
approaches to identify meaningful latent topics and levels of sentiment in user comments. It employs
Latent Dirichlet Allocation (LDA) for topic modeling and VADER for sentiment analysis. The study of
Albahli et al. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] aims to analyze public sentiment about COVID-19 vaccinations by using “X” data and
implementing a deep learning method. The research utilizes historical and real-time data obtained via
web scraping from “X”, as well as the VADER sentiment analysis tool. The authors introduce a model as
a solution to the shortcomings of previous studies. Badi et al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] conducted an extensive investigation
of the sentiment analysis of “X” posts on COVID-19 vaccinations, with a special focus on AstraZeneca
and Pfizer.
      </p>
      <p>
        In order to explore the predictability of trends using recognized patterns over “X” social media
sentiment analysis, Di Tollo et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] employ a combination of stochastic neural networks, the BERT
model as an NLP technique, and an external evolutionary algorithm to optimize parameters for robustly
accurate predictions. According to Hameleers [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], while the digital media ecosystem has evolved into a
crucial component of society and one of the most demanding communication tools, it also creates an
environment subject to the strategic exploitation of platform architectures and algorithmic systems to
magnify misleading content. These platforms and tech companies serve as crucial intermediaries that
can either restrict or amplify disinformation through their algorithmic design and content moderation
policies.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Data and Methods</title>
      <sec id="sec-3-0">
        <title>3.1. Data</title>
        <p>
          This paper is centered on the analysis of public opinion shared on X. Due to modifications in X policies in 2023, it has become impracticable to retrieve the most recent X posts. Fortunately, research involving German occupations had already been conducted in the same year, and this paper builds upon that previous study [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. The X data from that study is re-analysed here, as it was the most recent data available at the time.
        </p>
        <p>
          The data collection process was executed in accordance with the protocol established in the preceding study (see [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]). The process was initiated by accessing X’s API. The dataset under consideration contains approximately 3.5 million tweets related to occupations, accompanied by supplementary metadata.
        </p>
      </sec>
      <sec id="sec-3-1">
        <title>3.2. Methods</title>
        <p>The initial step in this research will be the conceptualization and implementation of a variety of
sentiment analysis tools, as well as their subsequent evaluation to determine the most effective tool.
Subsequently, deep learning algorithms will be integrated with the previously mentioned sentiment
analysis tools to assess the impact of deep learning models on the aforementioned tools. This analysis
will identify the most effective tool for determining whether a job is essential or not.</p>
      </sec>
      <sec id="sec-3-2">
        <title>Effectiveness of Sentiment Analysis Tools (RQ1)</title>
        <p>To address the first research question, a process of conceptualizing and implementing the tools is needed. This process involves a multi-step approach, incorporating various sentiment analysis methods and addressing the challenges posed by multilingual data. Ultimately, to find the most effective tool, a comprehensive performance evaluation will be developed and conducted.</p>
      </sec>
      <sec id="sec-3-3">
        <title>Impact of Deep Learning on Sentiment Analysis Tools (RQ2)</title>
        <p>This section examines the integration of deep learning models with previously introduced sentiment analysis tools in order to respond to
the second research question. Studies show that deep learning models are indeed practical for both
sentiment analysis and emotion classification tasks, and they can outperform conventional machine
learning in general for these tasks. Particularly, Convolutional Neural Networks (CNN) and Recurrent
Neural Networks (RNN) variants have become increasingly popular for sentiment analysis tasks due
to their state-of-the-art performance, as many studies have successfully applied these deep learning
models for sentiment analysis (Kastrati et al., 2024). The endeavor focuses on harnessing the capabilities
of both deep learning and machine learning techniques to enhance the accuracy and robustness of
sentiment analysis.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Analysis and Evaluation</title>
      <sec id="sec-4-1">
        <title>4.1. Evaluation Results of the Sentiment Analysis Tools (RQ1)</title>
        <p>Measuring the performance of the sentiment analysis tools is the first mandatory step in analyzing and evaluating their outcomes. This is accomplished through the utilization of continuous measurements, including MSE, MAE, R-squared, and correlation coefficients, as well as discrete metrics, such as accuracy, precision, recall, and F1-score. For brevity, this paper reports detailed results only for the discrete metrics; detailed results for the continuous metrics can be obtained from the authors.</p>
        <p>A cumulative score is introduced to give a fair perspective on the best sentiment analysis tool overall. To examine the impact and magnitude of the individual metrics, they are fed into the score first individually and then jointly: initially only the three error metrics, R-squared, and the correlation metrics were taken into account, and then the remaining metrics were included stepwise to reach a cumulative score. Interestingly, the best tool for both languages did not change after these inclusions.</p>
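        <p>The exact weighting behind the cumulative score is not reproduced here; as a hedged sketch of the idea, the snippet below simply ranks each tool per metric and sums the ranks stepwise, so that adding metrics one by one shows whether the leading tool changes (all scores are invented):</p>

```python
# Illustrative cumulative scoring of tools across several metrics.
# The paper does not publish its exact weighting, so this sketch ranks
# the tools per metric and sums the ranks (lower total = better).
def cumulative_ranking(results, metric_order):
    """results: {tool: {metric: value}}; all metrics here are 'higher is better'."""
    totals = {tool: 0 for tool in results}
    for metric in metric_order:
        # rank 0 = best for this metric
        ordered = sorted(results, key=lambda t: -results[t][metric])
        for rank, tool in enumerate(ordered):
            totals[tool] += rank
    return sorted(totals, key=lambda t: totals[t])  # best tool first

# invented evaluation scores for three tools
tools = {
    "twtr_xlm_rob": {"accuracy": 0.81, "f1": 0.79, "recall": 0.80},
    "gpt4o":        {"accuracy": 0.77, "f1": 0.76, "recall": 0.78},
    "vader":        {"accuracy": 0.64, "f1": 0.60, "recall": 0.62},
}
print(cumulative_ranking(tools, ["accuracy", "f1", "recall"]))
```

        <p>Feeding the metrics in stepwise simply means calling the function with a growing metric_order list and checking whether the top-ranked tool stays the same.</p>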
        <p>For the English dataset, the initial evaluations showed that Twitter-XLM-RoBERTa (the ‘twtr_xlm_rob’ model) consistently outperformed the other tools across all continuous metrics; its superior performance can be attributed to its strong correlation with the labels and its lower error rates. The high correlation coefficients indicate a strong positive relationship between predicted and actual sentiment scores, and the low error metrics suggest that the Twitter-XLM-RoBERTa predictions are closer to the true labels than those of the other tools. GPT4o showed the second-best performance in the continuous evaluation: although not as strong as Twitter-XLM-RoBERTa, it still demonstrated a good correlation with the labels and relatively low error rates, outperforming the remaining tools.</p>
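        <p>The continuous metrics named above are standard; as a self-contained reference, they can be computed as follows (the label and prediction vectors are invented):</p>

```python
import math

# Continuous evaluation metrics used in the paper, in plain Python.
def mse(y_true, y_pred):
    # mean squared error: penalizes large deviations quadratically
    return sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    # mean absolute error: average absolute deviation
    return sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

def pearson(y_true, y_pred):
    # Pearson correlation coefficient between gold and predicted scores
    n = len(y_true)
    mx = sum(y_true) / n
    my = sum(y_pred) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(y_true, y_pred))
    sx = math.sqrt(sum((a - mx) ** 2 for a in y_true))
    sy = math.sqrt(sum((b - my) ** 2 for b in y_pred))
    return cov / (sx * sy)

labels = [1.0, -1.0, 0.0, 1.0]  # hypothetical gold sentiment scores
preds  = [0.8, -0.6, 0.1, 0.7]  # hypothetical model outputs
print(round(mse(labels, preds), 3), round(mae(labels, preds), 3), round(pearson(labels, preds), 3))
```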
        <p>For the German part, GPT4o emerged as the top performer in the continuous evaluation, achieving
the highest scores in most metrics, which indicates its superior ability to predict sentiment scores
that closely align with the true labels, showing both high correlation and low error rates. Despite
trailing GPT4o by a small margin, Twitter-XLM-RoBERTa displayed strong predictive capabilities,
especially in minimizing absolute errors, and placed second in the continuous evaluation.</p>
        <p>Similar to the continuous perspective, a cumulative score is calculated considering all key classification
metrics, including accuracy, precision, recall, F1-score, macro-average, and micro-average, to have a
thorough assessment of classification performance.</p>
        <p>As with the continuous evaluation, Twitter-XLM-RoBERTa again emerged as the top performer across all discrete metrics, indicating the tool’s strength in classifying “X” posts into the correct sentiment categories, with consistent performance across the different sentiment classes (Table 1). XLM-RoBERTa-German outperformed the remaining tools in the discrete evaluation, though not as strongly as Twitter-XLM-RoBERTa, and thus earned the second-best ranking.</p>
        <p>For German “X” posts (Table 2), Twitter-XLM-RoBERTa again surpassed all other tools, demonstrating its ability to properly classify sentiments across multiple groups while maintaining a balanced performance in both precision and recall. Although GPT4o performed strongly and ranked second in the discrete evaluation, it was not as dominant as in the continuous evaluation. This reveals that while it excels at predicting sentiment scores on a continuous scale, its performance in discrete classification is good but not superior to Twitter-XLM-RoBERTa.</p>
        <p>Focusing on the ROC curves of the tools employed for English “X” posts, Twitter-XLM-RoBERTa consistently outperforms the others, with the highest AUC values across the different classes, ranging from 0.69 to 0.82 (a good to excellent performance). XLM-RoBERTa-German and GPT4o came next; both demonstrated moderate to good performance, though not comparable to Twitter-XLM-RoBERTa. For German “X” posts, Twitter-XLM-RoBERTa showed the best performance, with an AUC of 0.89 and a ROC curve consistently above all others, indicating superior discrimination ability across all thresholds. The second-best performer, with an AUC of 0.75, was GPT4o, which also exhibited good discriminative power.</p>
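        <p>AUC values like those reported above can be understood through the rank-sum formulation: the AUC equals the probability that a randomly chosen positive example is scored higher than a randomly chosen negative one. A minimal sketch with invented scores:</p>

```python
# One-vs-rest AUC via the rank-sum (Mann-Whitney) formulation.
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:          # positive ranked above negative
                wins += 1.0
            elif p == n:       # ties count half
                wins += 0.5
    return wins / (len(pos) * len(neg))

# hypothetical probabilities that each post is 'positive', with gold labels
scores = [0.9, 0.8, 0.4, 0.35, 0.7, 0.2]
labels = [1,   0,   1,   0,    1,   0]
print(auc(scores, labels))
```

        <p>An AUC of 0.5 corresponds to the diagonal line mentioned above (random ranking), while 1.0 means every positive post is scored above every negative one.</p>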
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Evaluation Results of the Enhanced Tools (RQ2)</title>
        <p>In the next step, the results of the most efficient model are evaluated to ensure they are accurate, which also provides the opportunity to compare them with the results of RQ1. The relative performance of the enhanced sentiment analysis tools is therefore evaluated using the same continuous and discrete metrics as in Section 4.1.</p>
        <p>The continuous evaluation results show varying performance across the enhanced sentiment analysis tools and their variants. The gradient-boosting enhancement of the Twitter-XLM-RoBERTa model (twtr_xlm_rob_GB) and the enhanced GPT4o models demonstrate superior performance in capturing the nuances of sentiments on a continuous scale across various metrics and evaluation approaches, indicating that both models predict sentiment scores accurately and consistently outperform the other tools. Moreover, the ensemble variant of each tool tends to perform better than the base-enhanced counterpart using the CNN-LSTM model.</p>
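        <p>The enhancement idea can be sketched as stacking: the base tool's class probabilities become input features for a second-stage learner. The paper's second stages are CNN-LSTM, gradient-boosting, and random-forest models; in the hedged sketch below a single decision stump stands in for them, and all numbers are invented:</p>

```python
# Sketch of the 'enhancement' as stacking: a base tool's P(positive) scores
# are features for a second-stage classifier. A one-feature decision stump
# stands in for the paper's gradient-boosting / random-forest stages.
def train_stump(features, labels):
    """Pick the threshold on the 'positive' probability that classifies
    the training examples most accurately."""
    best = (None, -1)
    for thr in sorted(set(features)):
        correct = sum(
            1 for f, y in zip(features, labels)
            if (f >= thr) == (y == "positive")
        )
        if correct > best[1]:
            best = (thr, correct)
    return best[0]

def predict(thr, f):
    return "positive" if f >= thr else "not-positive"

# invented base-tool P(positive) values for six labeled posts
probs = [0.91, 0.85, 0.40, 0.78, 0.15, 0.30]
gold  = ["positive", "positive", "not-positive", "positive", "not-positive", "not-positive"]
thr = train_stump(probs, gold)
print(thr, [predict(thr, p) for p in probs])
```

        <p>A real gradient-boosting stage would fit many such stumps on the residuals of their predecessors rather than a single one.</p>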
        <p>Regarding the discrete viewpoint (Table 3), the gradient-boosting form of the enhanced Twitter-XLM-RoBERTa, along with its random-forest variant (twtr_xlm_rob_RF), consistently outperforms the other tools in discrete classification on almost all performance metrics, followed by the gradient-boosting version of the XLM-RoBERTa-German model, which shows competitive performance in terms of accuracy and the other metrics.</p>
        <p>The ROC curves and AUC values in Figure 2 show that the enhanced Twitter-XLM-RoBERTa with both the random-forest and the gradient-boosting ensemble emerges as the overall best tool for sentiment analysis, demonstrating superior performance across all sentiment classes. Its ability to distinguish between the different sentiment classes is significantly better than that of the other models, as evidenced by consistently showing the highest AUC values across the different classes, often above 0.80, and curves furthest from the diagonal line. The GPT4o family exhibits competitive performance, particularly for certain sentiment classes.</p>
        <p>The GPT4o model with both gradient-boosting and random forest enhancements performs best across
all metrics in the continuous evaluation. Technically, it shows the highest correlation coefficients and
the best error metrics – lowest MSE, MAE, and RMSE. While the Twitter-XLM-RoBERTa model with
gradient boosting was not as strong as the GPT4o models, it still shows good performance, particularly
in terms of Pearson correlation and error metrics.</p>
        <p>In the discrete evaluation (Table 4), the Twitter-XLM-RoBERTa model with gradient boosting performs
best overall, demonstrating particularly strong precision and a good balance between precision and
recall, as reflected in its high F1-Score. The same model with base enhancement comes next, with slightly
lower but still impressive scores. The GPT4o model with gradient boosting is the third-best performer
in this category, indicating strong performance, particularly in terms of F1-Score and Macro-Average.</p>
        <p>Based on the ROC curves shown in Figure 2 and related AUC values, the best tool appears to be
Twitter-XLM-RoBERTa with gradient boosting with ROC curves consistently above others for most
classes and AUC values close to 0.8, indicating strong discriminative power across all sentiment classes.
The ROC curves for the gradient-boosting type of GPT4o are also among the top performers, with
considerable AUC values, showcasing decent performance for some classes. The Twitter-XLM-RoBERTa
model with base enhancement also shows competitive performance, with ROC curves often close to the
top two models.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Summary</title>
        <p>In order to determine the best model overall, the evaluation results of the sentiment analysis tools without enhancement from external deep learning and machine learning models are referred to as ERQ1. They are compared to the evaluation results of the enhanced sentiment analysis tools (ERQ2) to reach a thorough examination of the second research question. This is done by investigating both the continuous and the discrete results for the English and German tools, which strengthens the quality of the examination.</p>
        <p>For English, the best overall tool comes from ERQ1, namely the Twitter-XLM-RoBERTa model. This
tool achieved the highest evaluation score, outperforming its enhanced variants in ERQ2. Notably, it
is the best performer for both continuous and discrete metrics in ERQ1, demonstrating its robustness
across different evaluation criteria. The German language analysis revealed a more complex picture, as
the best overall performance came from ERQ1, but with a mixed result. In fact, GPT4o performed best
for continuous metrics, while Twitter-XLM-RoBERTa excelled in discrete metrics.</p>
        <p>The analysis reveals that the best overall tool across both languages is Twitter-XLM-RoBERTa, and that the enhancements bring no significant improvements over the baseline sentiment analysis tools (Table 5). The analysis of the core tools further confirms that it is indeed the best overall core tool, appearing five times (the maximum) among the top-performing models, across both the base and the enhanced CNN-LSTM with machine-learning ensemble models and both languages.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions and Outlook</title>
      <p>The Twitter-XLM-RoBERTa base model ultimately emerged as the best sentiment analysis tool, demonstrating superior overall performance on both the English and German datasets, highlighting the effectiveness of a robust pre-trained language model and transformer-based architecture, and enabling effective cross-lingual sentiment detection. In a next step, a comprehensive assessment framework was implemented to determine whether jobs can be predicted as being Essential Jobs. The approach entailed using the selected Twitter-XLM-RoBERTa model to conduct sentiment analysis on the “X” posts for all available datasets and aggregating the outcomes for each unique job title as a predictor of job essentiality. Three methods were introduced to assess the essentiality of job titles: the Berlin Method solely compares the test subject to the corresponding ground truth of the initial list of Essential Jobs; the Temporal Method compares the changes to the list of Essential Jobs over the pandemic’s duration to the initial list on the same dataset; and Both Methods combines the two, acting as a meta-classifier of the essentiality assessment mechanism that makes the final decision based on the results of the other two methods.</p>
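      <p>Since the exact decision rules are not spelled out above, the following is a hypothetical reconstruction of how the three methods could be combined; the agreement rule and the sentiment threshold are chosen purely for illustration:</p>

```python
# Hypothetical reconstruction of the three assessment methods; the concrete
# rules and the 0.1 threshold are invented for illustration only.
def berlin_method(job, avg_sentiment, berlin_list):
    # compare a job against the ground truth of the initial list
    if avg_sentiment is None:
        return "No-Data"
    return "Essential" if job in berlin_list else "Non-Essential"

def temporal_method(early_sentiment, late_sentiment, threshold=0.1):
    # compare sentiment early vs. late in the pandemic
    if early_sentiment is None or late_sentiment is None:
        return "No-Data"
    return "Essential" if (late_sentiment - early_sentiment) > threshold else "Non-Essential"

def both_methods(b, t):
    """Meta-classifier: agreement decides; disagreement or missing data is No-Data."""
    if "No-Data" in (b, t):
        return "No-Data"
    return b if b == t else "No-Data"

b = berlin_method("nurse", 0.42, {"nurse", "cashier"})
t = temporal_method(0.10, 0.35)
print(b, t, both_methods(b, t))
```

      <p>Under these invented rules, a job is only labeled Essential when both perspectives agree, which also illustrates how missing data in either method propagates into the dominant No-Data class discussed below.</p>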
      <p>The findings obtained from the final assessment reveal a significant degree of complexity and variability, underscoring the challenges inherent in sentiment-based assessments, specifically in less-popular and intricate contexts such as job essentiality. The following offers an in-depth interpretation of the findings:</p>
      <p>Berlin Method: The Berlin Method exhibited a consistent trend across both languages, with the majority of unique job titles in both the English and German versions of the Extended List being categorized as No-Data, suggesting a potential bias or overgeneralization in the classification process. The next highest proportion of jobs was allocated to the Essential category, with about half of the No-Data amount, highlighting the acceptable performance of the method in discovering Essential Jobs. The overall outcome of the Berlin Method may reflect the inherent limitations of the ground-truth dataset or of the aggregation mechanism, which could have failed to capture subtle variations in sentiment for certain job titles.</p>
      <p>Temporal Method: The Temporal Method results revealed greater variability between the two languages. While jobs on the Extended List were predominantly classified as No-Data in both languages, the remaining German jobs were spread fairly evenly across the other classes, whereas the English dataset classified nearly no job as Essential. This disparity could be attributed to restrictive classification criteria, variations in the temporal distribution of the datasets, differing patterns of sentiment evolution across languages, or the impact of language-specific nuances on sentiment scoring. The relatively balanced distribution across the remaining classes in both languages may otherwise suggest that the Temporal Method failed to capture a broader range of sentiment dynamics compared to the Berlin Method.</p>
      <p>Both Methods: The “Both Methods” approach demonstrated significant diversity in the categorization between the Essential and Non-Essential classes. The No-Data category was again the dominant class; for the English Added dataset, almost one-third of the remaining items were classified as Essential and a limited number of jobs as Non-Essential, while the German dataset exhibited an equal split between the Essential and Non-Essential categories. This suggests that the integration of the Berlin and Temporal Methods, while robust in combining complementary perspectives and yielding better classifications, may still be influenced by the inherent biases or limitations of each approach. Notably, the excess of No-Data classifications, together with the variation in Non-Essential classifications between languages, underscores the difficulty of definitively identifying job essentiality through sentiment analysis alone.</p>
      <p>This research has made significant contributions to understanding the dynamics of Essential Job
classifications during the COVID-19 pandemic through advanced sentiment analysis of social media data
and highlights both methodological advances and significant analytical challenges. While
Twitter-XLM-RoBERTa emerged as the superior sentiment analysis tool among all competitors on both the English
to surpass baseline performance. The new dual-stream comparative evaluation approach, which merges
Berlin-based reference analysis with temporal sentiment evolution, sheds light on the intricate nature
of Essential Job classifications. Significant methodological constraints, including concerns with data
quality, language variation, and threshold setting, are highlighted through the considerable proportion
of classifications becoming No-Data and substantial distinctions between the outcomes of English
and German datasets. While the established framework seems promising in analyzing job essentiality
through sentiment analysis, its practical application faces considerable constraints, particularly in
predicting prospective Essential Jobs. Future developments ought to focus on enhanced job title grouping
strategies, refined multilingual model adaptation, and more sophisticated comparative analysis design
to minimize classification ambiguity. Despite these limitations, this research establishes a foundational
framework for investigating job essentiality through sentiment analysis during global crises, providing
valuable insights into the intersection of public sentiment and social policy determination.</p>
      <p>In this study it was decided not to concentrate on job title pre-processing, separation, and grouping, but instead to perform only a minimal amount of data cleaning and pre-processing. As a consequence, the study’s conclusions are based on raw job titles, leading to significant job title variation. Future research could investigate how the prediction of job essentiality could be improved by more rigorous data pre-processing and grouping of job titles. Filtering jobs and focusing on those with a minimum number of tweets related to them could also prove interesting, as could job-specific sentiment analysis instead of aggregated sentiments for groups of jobs. Lastly, fine-tuning for multilingual contexts could also improve results.</p>
      <p>During the preparation of this work, the authors used DeepL for grammar and spelling checking. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the publication’s content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] M. Tiemann, S. Udelhofen, L. Fournier, What social media can tell us about essential occupations, in: INFORMATIK 2023 - Designing Futures: Zukünfte gestalten, Gesellschaft für Informatik e.V., Bonn, 2023, pp. 1983-1992. doi:10.18420/inf2023_198.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] W. van Zoonen, R. E. Rice, C. L. ter Hoeven, Sensemaking by employees in essential versus non-essential professions during the COVID-19 crisis: A comparison of effects of change communication and disruption cues on mental health, through interpretations of identity threats and work meaningfulness, Management Communication Quarterly 36 (2022) 318-349.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name><given-names>M. S. U.</given-names> <surname>Miah</surname></string-name>,
          <string-name><given-names>M. M.</given-names> <surname>Kabir</surname></string-name>,
          <string-name><given-names>T. B.</given-names> <surname>Sarwar</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Safran</surname></string-name>,
          <string-name><given-names>S.</given-names> <surname>Alfarhood</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Mridha</surname></string-name>,
          <article-title>A multimodal approach to cross-lingual sentiment analysis with ensemble of transformer and LLM</article-title>,
          <source>Scientific Reports</source>
          <volume>14</volume> (<year>2024</year>) <fpage>9603</fpage>.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name><given-names>H.</given-names> <surname>Jelodar</surname></string-name>,
          <string-name><given-names>Y.</given-names> <surname>Wang</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Rabbani</surname></string-name>,
          <string-name><given-names>S. B. B.</given-names> <surname>Ahmadi</surname></string-name>,
          <string-name><given-names>L.</given-names> <surname>Boukela</surname></string-name>,
          <string-name><given-names>R.</given-names> <surname>Zhao</surname></string-name>,
          <string-name><given-names>R. S. A.</given-names> <surname>Larik</surname></string-name>,
          <article-title>A NLP framework based on meaningful latent-topic detection and sentiment analysis via fuzzy lattice reasoning on YouTube comments</article-title>,
          <source>Multimedia Tools and Applications</source>
          <volume>80</volume> (<year>2021</year>) <fpage>4155</fpage>-<lpage>4181</lpage>.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name><given-names>S.</given-names> <surname>Albahli</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Nawaz</surname></string-name>,
          <article-title>TSM-CV: Twitter sentiment analysis for COVID-19 vaccines using deep learning</article-title>,
          <source>Electronics</source>
          <volume>12</volume> (<year>2023</year>) <fpage>3372</fpage>.
        </mixed-citation>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name><given-names>M.</given-names> <surname>Kastrati</surname></string-name>,
          <string-name><given-names>Z.</given-names> <surname>Kastrati</surname></string-name>,
          <string-name><given-names>A. S.</given-names> <surname>Imran</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Biba</surname></string-name>,
          <article-title>Leveraging distant supervision and deep learning for Twitter sentiment and emotion classification</article-title>,
          <source>Journal of Intelligent Information Systems</source>
          <volume>62</volume> (<year>2024</year>) <fpage>1045</fpage>-<lpage>1070</lpage>.
        </mixed-citation>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name><given-names>G.</given-names> <surname>di Tollo</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Andria</surname></string-name>,
          <string-name><given-names>G.</given-names> <surname>Filograsso</surname></string-name>,
          <article-title>The predictive power of social media sentiment: Evidence from cryptocurrencies and stock markets using NLP and stochastic ANNs</article-title>,
          <source>Mathematics</source>
          <volume>11</volume> (<year>2023</year>) <fpage>3441</fpage>.
        </mixed-citation>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name><given-names>H.</given-names> <surname>Badi</surname></string-name>,
          <string-name><given-names>I.</given-names> <surname>Badi</surname></string-name>,
          <string-name><given-names>K.</given-names> <surname>El Moutaouakil</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Khamjane</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Bahri</surname></string-name>,
          <article-title>Sentiment analysis and prediction of polarity vaccines based on Twitter data using deep NLP techniques</article-title>,
          <source>Radioelectronic and Computer Systems</source>
          (<year>2022</year>) <fpage>19</fpage>-<lpage>29</lpage>.
        </mixed-citation>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name><given-names>M.</given-names> <surname>Hameleers</surname></string-name>,
          <article-title>Disinformation as a context-bound phenomenon: Toward a conceptual clarification integrating actors, intentions and techniques of creation and dissemination</article-title>,
          <source>Communication Theory</source>
          <volume>33</volume> (<year>2023</year>) <fpage>1</fpage>-<lpage>10</lpage>.
        </mixed-citation>
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name><given-names>A.</given-names> <surname>Vahdatnia</surname></string-name>,
          <string-name><given-names>D.</given-names> <surname>Peachkah</surname></string-name>,
          <article-title>Monitoring of digitization and sustainability on Twitter</article-title>,
          in: <source>INFORMATIK 2023 - Designing Futures: Zukünfte gestalten</source>,
          Gesellschaft für Informatik e.V., Bonn,
          <year>2023</year>, pp. <fpage>1973</fpage>-<lpage>1982</lpage>.
          doi:<pub-id pub-id-type="doi">10.18420/inf2023_197</pub-id>.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>