<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Navigating Crypto Conversations: A Multi-Level Approach to Classify Cryptocurrency Opinions on Social Media</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Kavya G</string-name>
          <email>kavyamujk@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sonith D</string-name>
          <email>sonithksd@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>H L Shashirekha</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science, Mangalore University</institution>
          ,
          <addr-line>Mangalore, Karnataka</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Forum for Information Retrieval Evaluation</institution>
          ,
          <addr-line>FIRE</addr-line>
        </aff>
      </contrib-group>
      <abstract>
        <p>In the rapidly evolving cryptocurrency market, understanding public sentiment is pivotal for grasping market trends. People express their views on cryptocurrencies across a variety of social media platforms like Twitter, Reddit, Facebook, and various other online forums. Analyzing sentiments expressed on these channels provides a more nuanced perspective on public opinion. However, this task is complicated by the inherent diversity and often unstructured nature of social media content. Posts on social media usually differ greatly in format, tone, and language, which poses significant challenges for accurate sentiment classification. In this direction, "CryptOQA - Understanding CryptoCurrency Related Opinions and Questions from Social Media Posts" - a shared task organized at the Forum for Information Retrieval Evaluation (FIRE) 2024, invites the research community to address the challenges of identifying the category of opinions published on social media related to cryptocurrencies in the English language. To address these challenges, in this paper, we - team MUCS, describe the models proposed for Task 1: "Opinion Classification from CryptoCurrency related Social Media Posts" of the shared task. We submitted two models: i) Unique_Label_LSTM - a Long Short-Term Memory (LSTM) model with a unique labeling concept and ii) HCC_LSTM - a Hierarchical Classifier Chain (HCC) using LSTMs, to classify the given unlabeled English Reddit and Twitter cryptocurrency opinion texts into one of the predefined hierarchical categories. Among the submitted models, HCC_LSTM obtained macro F1 scores of 0.574 and 0.328 for Twitter and Reddit opinion posts, securing 4th and 5th ranks respectively.</p>
      </abstract>
      <kwd-group>
        <kwd>Cryptocurrency</kwd>
        <kwd>Opinion Classification</kwd>
        <kwd>Unique Labeling</kwd>
        <kwd>Hierarchical Classifier Chain</kwd>
        <kwd>Machine Learning and Deep Learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Cryptocurrency represents a revolutionary shift in the financial landscape, characterized by digital or
virtual currencies that leverage cryptographic techniques for secure transactions. Unlike traditional
currencies issued by governments, cryptocurrencies operate on decentralized networks based on
blockchain technology. Major cryptocurrencies such as Bitcoin, Ethereum, and Litecoin have gained
widespread recognition and adoption, becoming significant players in global financial markets. As these
digital assets grow in prominence, so does the importance of understanding public sentiments/opinions
surrounding them.</p>
      <p>
        Social media platforms have emerged as vital sources of information and opinion, offering
real-time insights into public attitudes and behaviors. Platforms such as Twitter, Reddit, and Facebook
serve as venues where users actively discuss and share their perspectives on various topics, including
cryptocurrencies [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. These discussions can significantly influence market perceptions and trends,
making it crucial to monitor and analyze the opinions expressed in these social media conversations.
Classifying the sentiments about cryptocurrency related social media posts provides valuable insights
into public opinion and can help stakeholders to make informed decisions. Accurate classification of
opinions ranging from positive and negative to neutral and objective can assist investors in predicting
market movements, understanding consumer behavior, and shaping marketing strategies [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. However,
the task of opinion classification in this context is fraught with challenges due to the diverse and often
unstructured nature of social media content. One of the primary challenges is the inherent variability
and ambiguity of social media language. Posts on social media are usually short, informal, and laden
with slang, making it difficult for traditional text classification models to accurately interpret the hidden
sentiments in the given text [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Additionally, the context in which opinions are expressed can vary
widely, further complicating the classification process. These issues necessitate the use of sophisticated
models that can handle the complexities of social media language and classify the opinions more
accurately.
      </p>
      <p>To address the challenges of identifying cryptocurrency related opinion posts in social media
platforms, "CryptOQA - Understanding CryptoCurrency related Opinions and Questions from Social Media
Posts" shared task organized at FIRE 2024, invites the research community to develop models to
detect the categories of cryptocurrency posts. The shared task consists of two tasks: Task 1 - Opinion
Classification from CryptoCurrency related Social Media Posts and Task 2 - Question Answering from
CryptoCurrency related Social Media Posts. We participated in only Task 1 which is about the
classification of Twitter and Reddit cryptocurrency related social media opinion posts. The dataset provided by
organizers of the shared task consists of 5,000 posts per platform, annotated at three levels of hierarchy
as shown in Figure 1.</p>
      <p>Many real-world scenarios require hierarchical classification, despite the fact that most of the research
is focused on flat classification problems. As the classes are arranged in a hierarchy in hierarchical
classification, the models have to learn the dependencies between classes in the hierarchy to predict
the class label for the unseen instance. Hierarchical classification becomes important when data is
organized in multiple levels satisfying the class and sub-class relationship. The challenges of hierarchical
classification are usually addressed through strategies such as Breadth First Search (BFS) and Depth First
Search (DFS), incorporating contextual information from higher to lower levels in the hierarchy. To
explore the strategies of detecting cryptocurrency opinion posts in English on social media platforms, in
this paper, we - team MUCS, describe the models submitted to Task 1 of the shared task. We implemented
two learning models: i) Unique_Label_LSTM - a LSTM model with a unique labeling concept and ii)
HCC_LSTM - a HCC using LSTMs, to tackle the nuances of opinion classification. By leveraging these
models, we aim to enhance the accuracy of sentiment analysis and provide more robust insights into
public opinion regarding cryptocurrencies.</p>
      <p>The rest of the paper is organized as follows: Section 2 reviews recent literature on cryptocurrency
related opinion mining and Section 3 focuses on the description of the proposed models followed by
the experiments and results in Section 4. The paper concludes with future works in Section 5.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>The cryptocurrency market has evolved rapidly in recent years, presenting unique challenges and opportunities
for analysis. Understanding public sentiments and predicting market trends are critical for navigating
this volatile landscape. A range of studies have explored different learning approaches to address
these challenges, particularly in the context of sentiment analysis in social media posts related to
cryptocurrency. Some notable works are described below:</p>
      <p>
        Aslam et al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] explored a combination of Long Short-Term Memory (LSTM) and Gated Recurrent
Unit (GRU) networks, utilizing Term Frequency-Inverse Document Frequency (TF-IDF) of words, Bag
of Words (BoW) and Word2Vec, for the detection of sentiments and emotions in cryptocurrency related
tweets. LSTM-GRU model trained with BoW features achieved 99% and 92% accuracies for sentiment
analysis and emotion prediction, respectively. Kim et al. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] proposed a novel framework, the
Self-Attention-based Multiple LSTM (SAM-LSTM) model, for predicting Bitcoin (BTC) prices by leveraging a
change point detection technique to segment time-series data for improved normalization. When tested
on real-world BTC price data, their model achieved notable results with a Mean Absolute Error (MAE)
of 0.3462, Root Mean Square Error (RMSE) of 0.5035, Mean Squared Error (MSE) of 0.2536, and Mean
Absolute Percentage Error (MAPE) of 1.3251, demonstrating its effectiveness in BTC price prediction.
To study the influence of social media like Twitter on cryptocurrencies, Sahal [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] proposed integrating
Bidirectional LSTM (BiLSTM) networks with Embeddings from Language Models (ELMo), to identify the
most profitable cryptocurrencies by contextually-based sentiment analysis. Their proposed approach
achieved an accuracy of 86.30%, demonstrating its effectiveness in predicting price changes based on
social media data. Huang et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] focus on predicting cryptocurrency price fluctuations by analyzing
sentiments from social media, particularly from Sina-Weibo, a major Chinese platform. The authors
proposed a novel approach that includes capturing Weibo posts, creating a crypto-specific sentiment
dictionary, and utilizing LSTM based Recurrent Neural Network (RNN) and auto regression models
in conjunction with historical price data. Their proposed LSTM model outperformed auto-regressive
models with precision and recall of 0.87 and 0.94 respectively.
      </p>
      <p>
        To investigate whether Twitter data related to cryptocurrencies can enhance trading strategies for
Bitcoin, Colianni et al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] proposed Machine Learning (ML) techniques (Bernoulli Naive Bayes (BNB),
Logistic Regression (LR), Multinomial NB (MNB), and Support Vector Machines (SVM)) trained with
binary BoW features. Among their proposed models, the BNB classifier achieved the highest accuracy,
with a day-to-day prediction accuracy of 95% and an hour-to-hour accuracy of 76.23%.
Torba et al. [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] presented Hierarchical Text Classification (HTC) as a generative task using advanced
neural models, consisting of an open framework that facilitates experimentation with various aspects of
HTC models, including traversal strategies for class trees (BFS vs. DFS, root-to-leaf vs. leaf-to-root),
constraints on hierarchy coherence during decoding, and the use of label names versus acronyms.
This work provides datasets, metrics, and tools for error analysis, enabling researchers to test these
modeling choices comprehensively. By evaluating these factors, the authors aim to clarify how
different architectural and modeling decisions influence HTC outcomes and promote transparency
and reproducibility in the field of classification. Hua et al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] proposed Hierarchical Graph Neural
Network models (Text Graph Convolutional Network (TextGCN), Simple Graph Convolution (SGC),
Text Level GCN (TextLevelGCN), and Hierarchical Graph Attention neural network (HieGAT)) to
enhance text classification by effectively leveraging word-level, sentence-level, and document-level
features. Experimenting on various benchmark datasets (20NG, R8, R52, Ohsumed, and MR), their
proposed HieGAT model outperformed other models.
      </p>
      <p>The related work illustrates that researchers have employed various approaches such as LSTM-GRU
ensembles, self-attention mechanisms, and hierarchical graph neural networks using diverse features
and data sources, to analyze cryptocurrency sentiments. While these models have demonstrated their
effectiveness, the evolving nature of cryptocurrency markets and social media content suggests there is
still considerable scope for further research and innovation.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <p>Hierarchical classification organizes labels into a multi-level structure, enabling more detailed and
context-aware categorization by considering the relationships between different levels of labels. The
proposed methodology includes pre-processing the given data followed by model building as described
in the following sub-sections:</p>
      <sec id="sec-3-1">
        <title>3.1. Pre-processing</title>
        <p>Pre-processing is an essential step that converts raw text data into a clean and organized format, thereby
improving the effectiveness and accuracy of ML models. This step is necessary because raw text
data often contains noise, inconsistencies, and irrelevant information that can hinder model accuracy
and efficiency. Pre-processing standardizes the data, making it consistent and easier for the models
to understand and analyze. In this study, numeric tokens are converted into words; URLs,
user mentions, hashtags, special characters, and punctuation are removed; and NaN values are filled with
empty strings. Additionally, stopwords are removed using resources available in the Natural Language
Toolkit (NLTK, https://www.nltk.org/) to ensure the text is effectively prepared for analysis. For the Reddit dataset, the
'title' and 'selftext' fields are concatenated before applying pre-processing.</p>
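        <p>As an illustrative sketch (not the authors' exact code), the cleaning steps described above can be approximated in plain Python. The regular expressions and the small stand-in stopword list are assumptions for demonstration; the paper uses the full NLTK stopword resource, and the number-to-word conversion step is omitted here:</p>

```python
import re
import string

# Small stand-in stopword list; the paper uses the full NLTK stopword resource.
STOPWORDS = {"the", "is", "a", "an", "to", "of", "and", "in", "on", "for"}

def preprocess(text):
    """Clean a social media post: drop URLs, mentions, hashtags,
    punctuation, and stopwords, as described in Section 3.1."""
    if text is None:                                 # NaN-style values become empty strings
        return ""
    text = text.lower()
    text = re.sub(r"http\S+|www\.\S+", " ", text)    # remove URLs
    text = re.sub(r"[@#]\w+", " ", text)             # remove user mentions and hashtags
    text = text.translate(str.maketrans("", "", string.punctuation))
    tokens = [t for t in text.split() if t not in STOPWORDS]
    return " ".join(tokens)

# For Reddit posts, 'title' and 'selftext' are concatenated before cleaning.
post = {"title": "Is #Bitcoin the future?", "selftext": "See https://example.com @user"}
clean = preprocess(post["title"] + " " + (post["selftext"] or ""))
print(clean)  # → "future see"
```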
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Model Building</title>
        <p>
          LSTM networks are a type of RNN which excel at capturing long-term dependencies and understanding
contextual relationships within text data [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. They are powerful models for handling sequential data,
making them particularly effective for text processing tasks such as sentiment analysis. The LSTM architecture
used here includes an Input layer, an Embedding layer, an LSTM layer, a Dense layer that serves as the output layer, and an
optional Dropout layer. While the Input layer accepts text in sequence format, the Embedding layer
transforms the text into a dense vector representation using Keras embeddings
(https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding). This involves using the
Keras tokenizer to convert text data into sequences of integers, where each unique word is mapped to
a distinct integer, providing a numerical representation of the given text. These sequences of integers
are then padded to a uniform length, ensuring that all input sequences are of the same length for
consistent model input. The integer sequences are then mapped to continuous vectors that encode
semantic relationships between words, resulting in dense, fixed-size vector representations. This is
followed by a 64-unit LSTM layer, which processes the embedded sequences to capture dependencies
over long text sequences. Further, the network includes two Dense layers: one with 32 units and
ReLU activation for introducing non-linearity, and another with a softmax activation function to output
class probabilities for the final classification. The Dropout layer helps to prevent overfitting by randomly
setting a fraction of the input units to zero during training. The model is compiled with the Adam
optimizer and sparse categorical cross-entropy loss, and trained over 5 epochs with a batch size of 32.
        </p>
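        <p>The tokenize-and-pad step above can be sketched in plain Python as a minimal stand-in for the Keras Tokenizer and pad_sequences utilities. The index-assignment order and the post-padding side are illustrative choices (Keras pads at the front by default):</p>

```python
def fit_vocab(texts):
    """Map each unique word to a distinct integer, starting at 1
    (0 is reserved for padding), in order of first appearance."""
    vocab = {}
    for text in texts:
        for word in text.split():
            if word not in vocab:
                vocab[word] = len(vocab) + 1
    return vocab

def texts_to_padded_sequences(texts, vocab, maxlen):
    """Convert texts to integer sequences, then truncate/pad to maxlen."""
    seqs = []
    for text in texts:
        seq = [vocab[w] for w in text.split() if w in vocab][:maxlen]
        seq = seq + [0] * (maxlen - len(seq))   # post-padding with zeros
        seqs.append(seq)
    return seqs

corpus = ["bitcoin price rises", "bitcoin falls"]
vocab = fit_vocab(corpus)             # {'bitcoin': 1, 'price': 2, 'rises': 3, 'falls': 4}
padded = texts_to_padded_sequences(corpus, vocab, maxlen=4)
print(padded)                         # [[1, 2, 3, 0], [1, 4, 0, 0]]
```

<p>The padded integer matrix is what the Embedding layer consumes; each integer is then looked up as a dense vector.</p>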
        <p>The two proposed models: i) Unique_Label_LSTM and ii) HCC_LSTM, make use of the LSTM model
to learn the relationships between the classes arranged in a hierarchy. While Unique_Label_LSTM makes
use of only one LSTM as a flat classifier, HCC_LSTM uses three LSTMs as a chain of classifiers. Further,
the models differ in the way labels are considered to build the models. The description of building the
models is given below:
• Unique_Label_LSTM model - unique labels are generated by concatenating the labels in
the path from the root of the subtree to each leaf node from level 1 to level 3 in the
hierarchy of labels shown in Figure 1. This concept results in eight unique labels - NOISE,
OBJECTIVE, SUBJECTIVE_NEUTRAL_SENTIMENTS, SUBJECTIVE_NEUTRAL_QUESTIONS,
SUBJECTIVE_NEUTRAL_ADVERTISEMENTS, SUBJECTIVE_NEUTRAL_MISCELLANEOUS,
SUBJECTIVE_NEGATIVE and SUBJECTIVE_POSITIVE, for the given hierarchy of labels. The labels
'NOISE' and 'OBJECTIVE' are used directly, as they appear in level 1 and do not branch
further. This labeling arrangement yields a flat multi-class classifier with eight labels,
which allows for nuanced classification of opinions, ensuring the systematic assignment of each
unseen input to its most relevant category based on its content and context.
• HCC_LSTM model - employs a hierarchical approach with a local classifier dedicated
to each level of the hierarchy, as shown in Figure 2. At each level, the model is trained using
data filtered by the predictions of the previous level, allowing it to refine its classifications
hierarchically, resulting in a robust framework for hierarchical text classification.</p>
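        <p>The unique-label construction can be illustrated as follows; the hierarchy dictionary is a sketch of the label tree in Figure 1, not the authors' data structure:</p>

```python
# Sketch of the three-level label hierarchy from Figure 1.
HIERARCHY = {
    "NOISE": {},
    "OBJECTIVE": {},
    "SUBJECTIVE": {
        "NEUTRAL": {"SENTIMENTS": {}, "QUESTIONS": {},
                    "ADVERTISEMENTS": {}, "MISCELLANEOUS": {}},
        "NEGATIVE": {},
        "POSITIVE": {},
    },
}

def unique_labels(tree, prefix=""):
    """Concatenate the labels along each root-to-leaf path with '_'."""
    labels = []
    for name, children in tree.items():
        path = f"{prefix}_{name}" if prefix else name
        if children:
            labels.extend(unique_labels(children, path))
        else:
            labels.append(path)
    return labels

print(unique_labels(HIERARCHY))
# ['NOISE', 'OBJECTIVE', 'SUBJECTIVE_NEUTRAL_SENTIMENTS', 'SUBJECTIVE_NEUTRAL_QUESTIONS',
#  'SUBJECTIVE_NEUTRAL_ADVERTISEMENTS', 'SUBJECTIVE_NEUTRAL_MISCELLANEOUS',
#  'SUBJECTIVE_NEGATIVE', 'SUBJECTIVE_POSITIVE']
```

<p>This yields exactly the eight flat labels named above, so a single multi-class LSTM can be trained on them.</p>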
        <p>While the Unique_Label_LSTM model is a flat classifier, HCC_LSTM is a hierarchical classifier, and both
aim to achieve accurate, context-aware predictions for new, unseen text.</p>
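        <p>The level-wise prediction flow of HCC_LSTM can be sketched with placeholder classifiers; the predict_* callables stand in for the three trained LSTMs and are assumptions for illustration:</p>

```python
def classify_hierarchical(text, predict_l1, predict_l2, predict_l3):
    """Chain three level classifiers: each lower level runs only on
    instances routed to it by the level above (as in Figure 2)."""
    l1 = predict_l1(text)                 # NOISE / OBJECTIVE / SUBJECTIVE
    if l1 != "SUBJECTIVE":
        return [l1]                       # level-1 leaves stop here
    l2 = predict_l2(text)                 # NEUTRAL / NEGATIVE / POSITIVE
    if l2 != "NEUTRAL":
        return [l1, l2]
    return [l1, l2, predict_l3(text)]     # SENTIMENTS / QUESTIONS / ...

# Toy stand-ins for the trained LSTM classifiers.
path = classify_hierarchical(
    "is btc a good buy?",
    predict_l1=lambda t: "SUBJECTIVE",
    predict_l2=lambda t: "NEUTRAL",
    predict_l3=lambda t: "QUESTIONS",
)
print(path)  # ['SUBJECTIVE', 'NEUTRAL', 'QUESTIONS']
```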
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Experiments and Results</title>
      <p>The Reddit and Twitter cryptocurrency opinion classification datasets provided by the organizers of the
shared task consist of Train sets only, and these Train sets are imbalanced. The level-wise distributions
of the given Twitter and Reddit datasets are shown in Figures 3 and 4 respectively, and
sample texts from the Twitter and Reddit datasets are shown in Tables 1 and 2 respectively. As the datasets
consist of only Train sets, 20% of each Train set, selected at random, is held out as a Validation set to evaluate
the performances of the models, and the remainder is used for training. The performances of the proposed
models evaluated on the Validation set based on macro F1 score are shown in Table 3 for both the Twitter
and Reddit datasets.</p>
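      <p>Macro F1, the evaluation metric used here, averages per-class F1 scores with equal weight, which matters for these imbalanced sets. A minimal stand-alone computation (a sketch, equivalent in spirit to scikit-learn's f1_score with average='macro'):</p>

```python
def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 over all classes seen in y_true/y_pred."""
    classes = sorted(set(y_true) | set(y_pred))
    scores = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        # F1 = 2*TP / (2*TP + FP + FN); 0 when the class never appears correctly
        f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
        scores.append(f1)
    return sum(scores) / len(scores)

y_true = ["NOISE", "NOISE", "OBJECTIVE", "SUBJECTIVE"]
y_pred = ["NOISE", "OBJECTIVE", "OBJECTIVE", "SUBJECTIVE"]
print(round(macro_f1(y_true, y_pred), 3))  # → 0.778
```

<p>Because each class contributes equally regardless of its frequency, a model that ignores rare classes is penalized heavily, unlike with accuracy or micro-averaged F1.</p>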
      <p>As the shared task participants were allowed to submit the predictions of only two models on the
Test sets, we trained the models on the given Train sets and obtained predictions on the Test sets
provided by the organizers. These predictions were evaluated by the organizers based on macro F1 score;
the proposed HCC_LSTM model outperformed the other submitted model, obtaining macro F1 scores of 0.574 and 0.328
for the Twitter and Reddit datasets and securing 4th and 5th ranks respectively.
Comparisons of the macro F1 scores of all participating teams are shown in Figures 5 and 6 respectively.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion and Future Work</title>
      <p>In this paper, we - team MUCS, describe the models submitted to Task 1: 'Opinion Classification from
CryptoCurrency related Social Media Posts' of the shared task "CryptOQA - Understanding
CryptoCurrency related Opinions and Questions from Social Media Posts" at FIRE 2024, which aims to distinguish between
categories of cryptocurrency related Twitter and Reddit opinion posts in English. We submitted two
models: i) Unique_Label_LSTM - an LSTM model with a unique labeling concept and ii) HCC_LSTM - an
HCC using LSTMs, to classify the given unlabeled English Reddit and Twitter opinion texts into one of
the predefined hierarchical categories. Among the submitted models, HCC_LSTM obtained macro F1
scores of 0.574 and 0.328 for Twitter and Reddit opinion posts, securing 4th and 5th ranks respectively.
In future work, we will investigate other approaches to capture more nuanced opinions about cryptocurrencies from
social media data and improve the performance of the learning models.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the author(s) used ChatGPT for grammar and spelling
checking. After using this tool/service, the author(s) reviewed and edited the content as needed and take(s)
full responsibility for the publication's content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Sougata</given-names>
            <surname>Sarkar</surname>
          </string-name>
          , Gourav Sen, et al.,
          <article-title>Understanding CryptoCurrency Related Opinions and Questions from Social Media Posts (CryptOQA 2024)</article-title>
          , in: Forum for Information Retrieval Evaluation (FIRE),
          <year>2024</year>
          . URL: https://sites.google.com/view/cryptoqa-2024.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Khan</surname>
          </string-name>
          , et al.,
          <source>Predicting Cryptocurrency Value, Based on Sentimental Analysis of Social Media Post</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>M. de Guerra Narciso</surname>
          </string-name>
          ,
          <source>Cryptocurrency Analysis Based on User-Generated Social Media Content</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>N.</given-names>
            <surname>Aslam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Rustam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. B.</given-names>
            <surname>Washington</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Ashraf</surname>
          </string-name>
          ,
          <article-title>Sentiment Analysis and Emotion Detection on Cryptocurrency Related Tweets using Ensemble LSTM-GRU Model</article-title>
          , in: IEEE Access, volume
          <volume>10</volume>
          , IEEE,
          <year>2022</year>
          , pp.
          <fpage>39313</fpage>
          -
          <lpage>39324</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>G.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.-H.</given-names>
            <surname>Shin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. G.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Lim</surname>
          </string-name>
          ,
          <article-title>A Deep Learning-based Cryptocurrency Price Prediction Model that Uses On-chain Data</article-title>
          , in: IEEE Access, volume
          <volume>10</volume>
          , IEEE,
          <year>2022</year>
          , pp.
          <fpage>56232</fpage>
          -
          <lpage>56248</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>R.</given-names>
            <surname>Sahal</surname>
          </string-name>
          ,
          <source>Predicting Optimal Cryptocurrency using Social Media Sentimental Analysis</source>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>X.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Tang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Surbiryala</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Iosifidis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <article-title>Lstm Based Sentiment Analysis for Cryptocurrency Prediction</article-title>
          ,
          <source>in: Database Systems for Advanced Applications: 26th International Conference, DASFAA</source>
          <year>2021</year>
          , Taipei, Taiwan,
          <source>April 11-14</source>
          ,
          <year>2021</year>
          , Proceedings,
          <source>Part III 26</source>
          , Springer,
          <year>2021</year>
          , pp.
          <fpage>617</fpage>
          -
          <lpage>621</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Colianni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Rosales</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Signorotti</surname>
          </string-name>
          ,
          <source>Algorithmic Trading of Cryptocurrency Based on Twitter Sentiment Analysis, in: CS229 Project</source>
          , volume
          <volume>1</volume>
          ,
          <year>2015</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>4</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>F.</given-names>
            <surname>Torba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Gravier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Laclau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kammoun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Subercaze</surname>
          </string-name>
          ,
          <article-title>A Study on Hierarchical Text Classification as a Seq2seq Task</article-title>
          ,
          <source>in: European Conference on Information Retrieval</source>
          , Springer,
          <year>2024</year>
          , pp.
          <fpage>287</fpage>
          -
          <lpage>296</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S.</given-names>
            <surname>Hua</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Jing</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>A Semantic Hierarchical Graph Neural Network for Text Classification</article-title>
          ,
          <source>in: arXiv preprint arXiv:2209.07031</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A.</given-names>
            <surname>Rao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Spasojevic</surname>
          </string-name>
          ,
          <article-title>Actionable and Political Text Classification Using Word Embeddings and LSTM</article-title>
          , in: arXiv preprint arXiv:1607.02501,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>