<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>gerber at Touché: Ideology and Power Identification in Parliamentary Debates 2024</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Christian Gerber</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Tübingen</institution>
          ,
          <addr-line>72070 Tübingen</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <abstract>
        <p>In democratic countries, national parliaments shape laws and policies through deliberative processes that reflect underlying political ideologies and power structures. This paper presents a system developed for the Ideology and Power Identification Shared Task to classify parliamentary debates into categories indicative of ideology and power dynamics. Using a Convolutional Neural Network (CNN) architecture enhanced with hyper-parameter optimisation, this system processes multilingual data from the ParlaMint corpus. Key preprocessing steps include cleaning, tokenisation and conversion of text into integer sequences. The CNN model consists of embedding, convolutional, max-pooling and dense layers with a sigmoid activation function for binary classification. Our evaluation, based on precision, recall and F1 score, shows that the model successfully classifies ideology and power dynamics in parliamentary debates, achieving an average F1 score of 0.676 for power identification and 0.632 for political orientation. These results demonstrate the potential of the model for analysing complex parliamentary discourse.</p>
      </abstract>
      <kwd-group>
        <kwd>CNN</kwd>
        <kwd>Ideology and Power Identification</kwd>
        <kwd>NLP</kwd>
        <kwd>Touché</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Since the dawn of civilisation, politics has been a fundamental part of human society. From early
tribal councils to the complex governmental structure of modern society, politics has shaped the way
societies are organised, governed and led. Throughout history, political systems have evolved to meet
the changing needs of society and to adapt to new challenges and opportunities. Without politics it would
be difficult to maintain order in a large society; politics is therefore essential for the stability, security
and development of a country, providing a structure through which decisions are made and power
is distributed [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. It influences every aspect of our lives, from the laws that are made and the resources
that are distributed to the education, welfare and security of citizens. Understanding this political
communication and how political speakers present themselves is vital for a functioning society. These
speeches often rely on indirect speech and are quite complex. Nevertheless, it is important to
analyse parliamentary debates in order to gain critical insight into how political ideologies and power
dynamics influence legislative outcomes and, more importantly, the lives of citizens. Parliamentary
debates are an essential part of democratic processes. Politicians and their associated parties serve as
the voice of their constituents by expressing political ideologies, negotiating and making decisions [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
This shapes the characterisation of a nation’s political landscape. These debates provide a rich corpus
for analysis, reflecting the political climate and the ideologies of individual speakers. However,
the inherent complexity and subtlety of political language, combined with the volume of textual data
generated during parliamentary proceedings, present significant challenges for computational analysis.
Traditional or simpler approaches, such as analysing short, direct posts on X (formerly Twitter), often
do not capture the nuanced expressions of political ideology and party affiliation in parliamentary
discourse. The shared task [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] focuses on identifying two variables associated with speakers in a
parliamentary debate, using machine learning and natural language processing: their political ideology,
and whether they belong to a governing party or a party in opposition. A large corpus is provided for
training. Both tasks were addressed with the CNN model described in Section 3.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Background</title>
      <p>
        The analysis of political discourse has a rich history, with early studies focusing on the rhetorical
strategies used by political representatives. While early work [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ][
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] emphasised the importance of
persuasive elements used by politicians to shape public opinion and policy, recent advances include the
integration of machine learning techniques into political discourse analysis. A study by Abercrombie
and Batista-Navarro [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] provides a large and systematic review of sentiment and position-taking
analysis in parliamentary debates. They discuss different approaches, ranging from sentiment analysis,
classification and position scaling to the analysis of political speeches, in order to highlight the strengths
and limitations of these methods. They found that, within the overall area of sentiment analysis
in political text, there are eight tasks, for example emotion analysis, agreement and alignment
detection and, most relevant for this paper, ideology and party affiliation detection. These tasks have
been tackled using a wide range of approaches, from supervised to unsupervised machine learning
methods, including neural networks. The use of convolutional neural networks (CNNs) has become
increasingly popular due to their ability to capture complex patterns within textual data. Several
comprehensive reviews provide insights into how deep learning models, such as CNNs, are applied
to various text classification tasks, including sentiment analysis and political text analysis [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ][
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. In
addition, the analysis of X’s tweets has become increasingly popular in recent years. COVID-19
tweets were analysed by Aslan et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. They used FastText Skipgram for information extraction,
a convolutional neural network (CNN) model for feature extraction, and an adaptive optimisation
algorithm (AOA) for feature selection. In Dehghani and Yazdanparast’s paper [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], they present several
machine learning and deep learning models to analyse the sentiment of Persian political tweets. They
applied Gaussian Naive Bayes, Gradient Boosting, Logistic Regression, Decision Trees, Random Forests,
as well as a combination of CNN and LSTM to classify the polarities of tweets. The results showed that
the CNN-LSTM model had the highest classification accuracy, showcasing the effectiveness of CNN models in
political text analysis.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. System Overview</title>
      <p>The following section provides a detailed description of the system developed to identify ideology
and power in a political speech. The aim of these two tasks is to classify parliamentary debates into
categories that reflect ideology or power dynamics. First, the input data is pre-processed and then
CNN’s local feature extraction is used to convert textual information into numerical vectors. This
description outlines the components, resources and methods used to build and fine-tune the model. The
software used for this implementation is Python with libraries including TensorFlow, Keras, Scikit-learn,
Keras-tuner and others used for machine learning and data processing. The system uses a Convolutional
Neural Network (CNN) architecture and employs various hyperparameter optimisation techniques to
improve performance. Finally, the model is evaluated based on precision, recall and F1 score.</p>
      <sec id="sec-3-1">
        <title>3.1. Dataset</title>
        <p>
          The data for this task comes from ParlaMint [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], a set of multilingual comparable corpora of parliamentary
debates. A selection of speeches was collected and made available as a training set [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. The data in the
training set was sampled to reduce potential confounding variables (e.g. speaker identity) and provided
in tab-separated text files. The fields in the data include:
• id: Unique ID for each text.
• speaker: Unique ID for each speaker, allowing multiple speeches from the same speaker.
• sex: Binary/biological sex of the speaker (Female, Male, Unspecified/Unknown).
• text: Transcribed text of the parliamentary speech, potentially including line breaks and other
special sequences.
• text_en: Automatic translation of the text to English, which may be empty for speeches originally
in English or missing for some non-English speeches.
• label: Binary/numeric label indicating political orientation (0 for left, 1 for right) or power
identification (1 for opposition, 0 for coalition/governing party).
        </p>
        <p>For the system described in this paper, only the fields ’id’, ’speaker’, ’text’ and ’label’ were used.
Furthermore, the training data covered 29 different parliaments, including those of Austria, Belgium, Denmark
and many more.</p>
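        <p>As an illustration, a minimal loading sketch is shown below. It assumes the tab-separated layout and field names listed above and uses pandas; the file name is hypothetical, and the exact loading code is not specified by the task.</p>
        <preformat>
# Minimal sketch (not the official loader): read one training file and keep
# only the fields used by this system. The file name is hypothetical.
import pandas as pd

def load_speeches(path: str) -> pd.DataFrame:
    # The training data is distributed as tab-separated text files.
    df = pd.read_csv(path, sep="\t", quoting=3)  # quoting=3 == csv.QUOTE_NONE
    # Only 'id', 'speaker', 'text' and 'label' are used in this paper.
    return df[["id", "speaker", "text", "label"]]

speeches = load_speeches("orientation-train.tsv")  # hypothetical file name
        </preformat>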
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Data Preprocessing</title>
        <p>
          Data preprocessing is the process of preparing raw data before it is used to build machine learning
models, and involves several steps. As parliamentary speeches are sometimes published in different
ways, there is a lot of inconsistency and redundancy, which makes data cleaning necessary. The
following preprocessing steps were applied:
• Remove line breaks and non-alphabetic characters
• Convert text to lower case
At its core, the system uses the tokeniser from TensorFlow [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ], which converts the input text into sequences of integers,
with the vocabulary size limited to 10,000 words.
        </p>
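        <p>The sketch below illustrates these preprocessing steps under the stated assumptions (cleaning with a simple regular expression, a 10,000-word vocabulary, padding to the tuned sequence length of 600 from Section 3.5); it is not the exact code used here.</p>
        <preformat>
# Minimal preprocessing sketch: strip line breaks and non-alphabetic characters,
# lower-case the text, then map it to padded integer sequences.
import re
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

def clean(text: str) -> str:
    text = re.sub(r"[^A-Za-z\s]", " ", text)   # drop non-alphabetic characters
    return re.sub(r"\s+", " ", text).strip().lower()

texts = [clean(t) for t in speeches["text"]]

tokenizer = Tokenizer(num_words=10_000)        # vocabulary limited to 10,000 words
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)
padded = pad_sequences(sequences, maxlen=600)  # max_sequence_length (Section 3.5)
        </preformat>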
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Model Architecture</title>
        <p>
          Convolutional Neural Networks (CNNs) are a type of artificial neural network that learns directly
from data. A CNN is a feed-forward network that can extract features from data with convolutional
structures [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. As a result of the convolutional layer, the input data is filtered and a feature map is
created that illustrates the particular attributes associated with the data points [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. A CNN is able to
detect local and deep features in text by using layers that automatically learn feature hierarchies. The
CNN model in this paper classifies the input text into two categories: for political
orientation, 0 is left and 1 is right, and for power identification, 0 indicates coalition (or ruling party)
and 1 indicates opposition. Figure 1 shows the proposed deep learning architecture using CNN. First,
the embedding layer converts the pre-tokenised integer sequences into dense vectors of fixed size in an
embedding matrix. One-dimensional convolution (Conv1D) is used for feature extraction. The next
layer is max-pooling, which reduces the network parameters, resulting in a faster training process and
easier handling of overfitting problems [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. As a final layer, the dense layer with a sigmoid activation
function is used to generate predictions. The model is built using the Adam optimiser and the binary
cross-entropy loss function. Precision, recall and F1-score are used as evaluation metrics.
        </p>
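        <p>A possible Keras realisation of this architecture is sketched below: embedding, Conv1D, max-pooling and a dense sigmoid output, compiled with Adam and binary cross-entropy. The vocabulary size, embedding dimension and sequence length follow the tuned values in Section 3.5; the number of filters and the kernel size shown here are assumptions, since only their search ranges are reported.</p>
        <preformat>
# Sketch of the CNN described above (layer sizes partly assumed).
from tensorflow.keras import Sequential, Input
from tensorflow.keras.layers import Embedding, Conv1D, GlobalMaxPooling1D, Dense

def build_model(max_nb_words=10_000, embedding_dim=250,
                max_sequence_length=600, num_filters=128, kernel_size=5):
    model = Sequential([
        Input(shape=(max_sequence_length,)),                  # padded integer sequences
        Embedding(max_nb_words, embedding_dim),               # dense word vectors
        Conv1D(num_filters, kernel_size, activation="relu"),  # local feature extraction
        GlobalMaxPooling1D(),                                 # max-pooling layer
        Dense(1, activation="sigmoid"),                       # binary prediction
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
        </preformat>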
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Training Process</title>
        <p>The model is trained on the processed text data, split into training and validation sets to evaluate
performance during training. The ratio of the split was 80-20. The data is carefully prepared so that no
speaker appears in both the training and validation sets.
The training was done in a GPU-enabled environment.</p>
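        <p>A speaker-disjoint 80-20 split can be obtained, for example, with scikit-learn’s GroupShuffleSplit, as sketched below; the exact splitting code is not specified, so this only illustrates the constraint that no speaker appears in both sets.</p>
        <preformat>
# Sketch: 80-20 split with speakers kept disjoint between training and validation.
from sklearn.model_selection import GroupShuffleSplit

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, val_idx = next(splitter.split(padded, speeches["label"],
                                         groups=speeches["speaker"]))

X_train, X_val = padded[train_idx], padded[val_idx]
y_train, y_val = speeches["label"].values[train_idx], speeches["label"].values[val_idx]

model = build_model()
model.fit(X_train, y_train, validation_data=(X_val, y_val),
          epochs=8, batch_size=32)  # tuned values from Section 3.5
        </preformat>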
      </sec>
      <sec id="sec-3-5">
        <title>3.5. Hyperparameter Tuning</title>
        <p>
          There are eight different hyperparameters in the neural network, each of which can take a different
value. For the number of epochs, the values 4 to 12 were considered, and for the batch size 2, 4, 8, 16, 32 and 64.
To find the best parameters and optimise the performance of the CNN, the Bayesian optimisation tuner
from KerasTuner [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ] was used. Bayesian optimisation is a sequential design strategy for optimising
complex models where the decision-making process is not easily interpretable [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. The
goal was to optimise for validation accuracy. This was achieved by trying different combinations of
the vocabulary size (max_nb_words), embedding dimension (embedding_dim), sequence length
(max_sequence_length), number of convolutional layers (num_conv_layers), number of filters (num_filters) and
kernel size (3, 5, 7). The Bayesian optimisation was run twice per combination, with a maximum of
10 trials. The best values for the number of epochs and the batch size were determined manually. These
values are summarised in Table 1.
In the end, the model worked best under the following conditions:
• Epochs: 8
• Batch size: 32
• max_nb_words: 10,000
• embedding_dim: 250
• max_sequence_length: 600
• num_conv_layers: 1
        </p>
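        <p>The sketch below shows how such a search can be set up with KerasTuner’s Bayesian optimisation tuner, run twice per combination with at most 10 trials and optimising validation accuracy. The candidate values for hyperparameters whose ranges are not listed in the paper are assumptions.</p>
        <preformat>
# Sketch of the Bayesian hyperparameter search with KerasTuner.
import keras_tuner as kt
from tensorflow.keras import Sequential, Input
from tensorflow.keras.layers import Embedding, Conv1D, GlobalMaxPooling1D, Dense

def build_tunable_model(hp):
    embedding_dim = hp.Choice("embedding_dim", [100, 150, 200, 250])  # assumed range
    num_filters = hp.Choice("num_filters", [64, 128, 256])            # assumed range
    kernel_size = hp.Choice("kernel_size", [3, 5, 7])                 # as in the paper
    model = Sequential([
        Input(shape=(600,)),
        Embedding(10_000, embedding_dim),
        Conv1D(num_filters, kernel_size, activation="relu"),
        GlobalMaxPooling1D(),
        Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

tuner = kt.BayesianOptimization(build_tunable_model,
                                objective="val_accuracy",  # optimise validation accuracy
                                max_trials=10,             # maximum of 10 trials
                                executions_per_trial=2)    # each combination run twice
tuner.search(X_train, y_train, validation_data=(X_val, y_val),
             epochs=8, batch_size=32)
        </preformat>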
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results</title>
      <sec id="sec-4-1">
        <title>4.1. Evaluation Metrics</title>
        <p>To evaluate the performance of the model, precision, recall and F1 scores were used for
the two different tasks. The first task was to identify the ideology of the speaker’s party and the other
task was to identify the power relation. For each task, the metrics are calculated from the
confusion matrix, which has four values (taking label 1, i.e. "right" ideology or "opposition" party, as the positive class, consistent with the label definitions in Section 3.1):
• True Positives (TP): The number of speeches with label 1 ("right" ideology/"opposition" party) that are
correctly classified as 1
• True Negatives (TN): The number of speeches with label 0 ("left" ideology/"coalition" party) that are
correctly classified as 0
• False Positives (FP): The number of speeches classified as 1 ("right" ideology/"opposition" party) that
actually have label 0 ("left" ideology/"coalition" party)
• False Negatives (FN): The number of speeches classified as 0 ("left" ideology/"coalition" party) that
actually have label 1 ("right" ideology/"opposition" party)
Precision indicates what proportion of predicted positives are actually positive.</p>
        <p>Recall measures the proportion of actual positives that are correctly classified. The F1-score is a number
between zero and one that represents the harmonic mean of precision and recall:</p>
        <p>Precision = TP / (TP + FP) (1)</p>
        <p>Recall = TP / (TP + FN) (2)</p>
        <p>F1-score = 2 * Precision * Recall / (Precision + Recall) (3)</p>
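        <p>For reference, these metrics can be computed on the validation split with scikit-learn as sketched below; the official task scores reported in Section 4.2 were produced by the TIRA submission system.</p>
        <preformat>
# Sketch: precision, recall and F1 on the validation set.
from sklearn.metrics import precision_score, recall_score, f1_score

y_pred = (model.predict(X_val) > 0.5).astype(int).ravel()  # threshold sigmoid output

precision = precision_score(y_val, y_pred)
recall = recall_score(y_val, y_pred)
f1 = f1_score(y_val, y_pred)
print(f"precision={precision:.4f} recall={recall:.4f} f1={f1:.4f}")
        </preformat>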
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Evaluation</title>
        <p>
          This section presents the results of each dataset in each task. The parameters used to train the deep
learning model, as discussed in Section 3.5 on hyperparameter tuning, were determined by Bayesian
optimisation. The results were evaluated using the submission system provided by TIRA [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ] and were
published by Touché [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. Table 2 shows the results of the Power Identification task and Table 3 shows
the results of the Political Orientation task. The highest scores are highlighted in green and the lowest
in red. The overall F1 scores are 0.6758 for power identification and 0.6322 for political orientation. These
scores indicate a moderate level of accuracy in the models, reflecting their ability to classify political
ideology and power. For the identification of speaker power, the model had the highest precision
for Hungary (0.8776), meaning that it had the highest proportion of true positives out of all positive
predictions. Conversely, Ukraine had the lowest precision (0.5135). Recall was also highest for Hungary
(0.8694), meaning that the model was able to successfully identify most of the true instances of political
power in speeches. Again, Ukraine had the lowest score (0.5138). The highest F1 score is also found in
Hungary (0.8727). This indicates a good balance between precision and recall, while the lowest F1 score
in Ukraine (0.4756) indicates significant room for improvement in both precision and recall. This shows
that the model was able to identify the political power of a speaker quite well in countries like Hungary,
Turkey and Galicia, while it had difficulties with speakers in Ukraine, Italy and Bosnia and Herzegovina.
The results for the orientation task, shown in Table 3, indicate that the model achieved the highest
prediction for the Turkish dataset in terms of precision (0.8404), recall (0.8423) and F1 score (0.8410).
Conversely, the model gave the lowest results for Latvia in terms of precision (0.5301) and recall
(0.5083). The lowest F1 score (0.4456) is for the recognition of orientation for speakers from Bosnia and
Herzegovina. According to the results, the CNN model performed better for countries such as Turkey,
Poland and Spain, and less accurately for countries such as Bosnia and Herzegovina, Latvia and Croatia.
This suggests that the model needs further improvement for these countries.
        </p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>
        This paper presents a CNN model, developed for the Touché Lab at CLEF 2024, that can be used to
identify the ideology and power of a speaker in parliamentary debates. Through careful data pre-processing
and the application of hyperparameter optimisation techniques, the model achieves a satisfactory level
of accuracy. The model was evaluated on both tasks by TIRA, using datasets from different countries.
The results indicate that the model performs moderately well, with an overall F1 score of 0.6758 for
power identification and 0.6322 for political orientation. For power identification, the CNN model
performed best for the Hungarian dataset with an F1 score of 0.8727 and for ideology identification it
was the Turkish dataset with an F1 score of 0.8410. However, the performance of the model in regions
such as Ukraine and Bosnia and Herzegovina shows that there is room for improvement. This suggests
a need for further refinement, possibly through more specific data pre-processing or the integration of
additional contextual data. One promising direction is the use of monolingual or multilingual pre-trained
language models. Architectures such as Google’s BERT or mBERT are typically trained on large corpora
covering a variety of writing styles and many topics (e.g. science, novels, news), and can capture the
semantics and meaning of sentences in a language. Such models could be used in data pre-processing
for word embedding, producing text vectorisations that serve as input to the neural network. In addition,
other layers could be added to the CNN model, such as LSTM layers [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
Future work could also focus on expanding the dataset, improving the model architecture and exploring
additional features to further improve classification performance.
      </p>
      <p>In conclusion, this research represents a fundamental step towards more automated analysis of
parliamentary debates, paving the way for deeper insights into the ideological and power dynamics that
shape legislative outcomes. The methods and results presented here contribute to the broader discourse
on political communication and its computational analysis, and highlight the potential for further
innovation in this important area.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments References</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>P.</given-names>
            <surname>Chilton</surname>
          </string-name>
          ,
          <source>Analysing Political Discourse: Theory and Practice</source>
          , Routledge,
          <year>2004</year>
          . URL: https://books.google.de/books?id=un1buuNipQIC.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Charteris-Black</surname>
          </string-name>
          , Analysing Political Speeches: Rhetoric, Discourse and Metaphor, Bloomsbury Publishing,
          <year>2018</year>
          . URL: https://books.google.de/books?id=1fhGEAAAQBAJ.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.</given-names>
            <surname>Kiesel</surname>
          </string-name>
          , Ç. Çöltekin,
          <string-name>
            <given-names>M.</given-names>
            <surname>Heinrich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Fröbe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Alshomary</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. D.</given-names>
            <surname>Longueville</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Erjavec</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Handke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kopp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ljubešić</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Meden</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Mirzakhmedova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Morkevičius</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Reitis-Munstermann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Scharfbillig</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Stefanovitch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Wachsmuth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Potthast</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Stein</surname>
          </string-name>
          , Overview of Touché 2024:
          <article-title>Argumentation Systems</article-title>
          , in: L.
          <string-name>
            <surname>Goeuriot</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <string-name>
            <surname>Mulhem</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          <string-name>
            <surname>Quénot</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <string-name>
            <surname>Schwab</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <string-name>
            <surname>Soulier</surname>
          </string-name>
          ,
          <string-name>
            <surname>G. M. D. Nunzio</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <string-name>
            <surname>Galuščáková</surname>
          </string-name>
          ,
          <string-name>
            <surname>A. G. S. de Herrera</surname>
          </string-name>
          , G. Faggioli, N. Ferro (Eds.),
          <source>Experimental IR Meets Multilinguality, Multimodality, and Interaction. Proceedings of the Fifteenth International Conference of the CLEF Association (CLEF</source>
          <year>2024</year>
          ), Lecture Notes in Computer Science, Springer, Berlin Heidelberg New York,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>G.</given-names>
            <surname>Abercrombie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Batista-Navarro</surname>
          </string-name>
          ,
          <article-title>Sentiment and position-taking analysis of parliamentary debates: a systematic literature review</article-title>
          ,
          <source>Journal of Computational Social Science</source>
          <volume>3</volume>
          (
          <year>2020</year>
          )
          <fpage>245</fpage>
          -
          <lpage>270</lpage>
          . URL: https://doi.org/10.1007/s42001-019-00060-w. doi:
          <volume>10</volume>
          .1007/s42001-019-00060-w.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Peng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <article-title>A survey of convolutional neural networks: Analysis, applications, and prospects</article-title>
          ,
          <source>IEEE Transactions on Neural Networks and Learning Systems</source>
          PP (
          <year>2021</year>
          )
          <fpage>1</fpage>
          -
          <lpage>21</lpage>
          . doi:
          <volume>10</volume>
          .1109/TNNLS.
          <year>2021</year>
          .
          <volume>3084827</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S.</given-names>
            <surname>Pouyanfar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Sadiq</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Tian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Tao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. P.</given-names>
            <surname>Reyes</surname>
          </string-name>
          ,
          <string-name>
            <surname>M.-L. Shyu</surname>
            , S.-C. Chen,
            <given-names>S. S.</given-names>
          </string-name>
          <string-name>
            <surname>Iyengar</surname>
          </string-name>
          ,
          <article-title>A survey on deep learning: Algorithms, techniques, and applications</article-title>
          ,
          <source>ACM Comput. Surv</source>
          .
          <volume>51</volume>
          (
          <year>2018</year>
          ). URL: https://doi.org/10.1145/3234150. doi:
          <volume>10</volume>
          .1145/3234150.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Aslan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kızıloluk</surname>
          </string-name>
          , E. Sert,
          <article-title>Tsa-cnn-aoa: Twitter sentiment analysis using cnn optimized via arithmetic optimization algorithm</article-title>
          ,
          <source>Neural Computing and Applications</source>
          <volume>35</volume>
          (
          <year>2023</year>
          )
          <fpage>10311</fpage>
          -
          <lpage>10328</lpage>
          . URL: https://doi.org/10.1007/s00521-023-08236-2. doi:
          <volume>10</volume>
          .1007/s00521-023-08236-2.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>Dehghani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yazdanparast</surname>
          </string-name>
          ,
          <article-title>Political sentiment analysis of persian tweets using cnn-lstm model</article-title>
          ,
          <year>2023</year>
          . URL: https://arxiv.org/abs/2307.07740. arXiv:
          <volume>2307</volume>
          .
          <fpage>07740</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>T.</given-names>
            <surname>Erjavec</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ogrodniczuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Osenova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ljubešić</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Simov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Pančur</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Rudolf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kopp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Barkarson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Steingrímsson</surname>
          </string-name>
          , Çöltekin, J. de Does,
          <string-name>
            <given-names>K.</given-names>
            <surname>Depuydt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Agnoloni</surname>
          </string-name>
          , G. Venturi,
          <string-name>
            <given-names>M. Calzada</given-names>
            <surname>Pérez</surname>
          </string-name>
          , L. D. de Macedo, C. Navarretta, G. Luxardo,
          <string-name>
            <given-names>M.</given-names>
            <surname>Coole</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Rayson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Morkevičius</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Krilavičius</surname>
          </string-name>
          , R. Darģis,
          <string-name>
            <given-names>O.</given-names>
            <surname>Ring</surname>
          </string-name>
          , R. van Heusden,
          <string-name>
            <given-names>M.</given-names>
            <surname>Marx</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Fišer</surname>
          </string-name>
          ,
          <article-title>The parlamint corpora of parliamentary proceedings</article-title>
          ,
          <source>Language resources and evaluation 57</source>
          (
          <year>2022</year>
          )
          <fpage>415</fpage>
          -
          <lpage>448</lpage>
          . doi:
          <volume>10</volume>
          .1007/s10579-021-09574-0.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M.</given-names>
            <surname>Abadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Barham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Davis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Dean</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Devin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ghemawat</surname>
          </string-name>
          , G. Irving,
          <string-name>
            <given-names>M.</given-names>
            <surname>Isard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kudlur</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Levenberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Monga</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Moore</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. G.</given-names>
            <surname>Murray</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Steiner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Tucker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Vasudevan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Warden</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wicke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <article-title>Tensorflow: A system for large-scale machine learning</article-title>
          ,
          <year>2016</year>
          . arXiv:
          <volume>1605</volume>
          .
          <fpage>08695</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>L.</given-names>
            <surname>Alzubaidi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. J.</given-names>
            <surname>Humaidi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Al-Dujaili</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Duan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Al-Shamma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Santamaría</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Fadhel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Al-Amidie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Farhan</surname>
          </string-name>
          ,
          <article-title>Review of deep learning: concepts, CNN architectures, challenges, applications, future directions</article-title>
          ,
          <source>J Big Data</source>
          <volume>8</volume>
          (
          <year>2021</year>
          )
          <fpage>53</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>D.</given-names>
            <surname>Yogatama</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. A.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <source>Bayesian optimization of text representations</source>
          ,
          <year>2015</year>
          . arXiv:
          <volume>1503</volume>
          .
          <fpage>00693</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>J.</given-names>
            <surname>Snoek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Larochelle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. P.</given-names>
            <surname>Adams</surname>
          </string-name>
          ,
          <article-title>Practical bayesian optimization of machine learning algorithms</article-title>
          ,
          <year>2012</year>
          . arXiv:
          <volume>1206</volume>
          .
          <fpage>2944</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>M.</given-names>
            <surname>Fröbe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wiegmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Kolyada</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Grahm</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Elstner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Loebe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hagen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Stein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Potthast</surname>
          </string-name>
          ,
          <article-title>Continuous Integration for Reproducible Shared Tasks with TIRA.io</article-title>
          , in: J.
          <string-name>
            <surname>Kamps</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <string-name>
            <surname>Goeuriot</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          <string-name>
            <surname>Crestani</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Maistro</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          <string-name>
            <surname>Joho</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          <string-name>
            <surname>Davis</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <string-name>
            <surname>Gurrin</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          <string-name>
            <surname>Kruschwitz</surname>
            ,
            <given-names>A</given-names>
          </string-name>
          . Caputo (Eds.),
          <source>Advances in Information Retrieval. 45th European Conference on IR Research (ECIR</source>
          <year>2023</year>
          ), Lecture Notes in Computer Science, Springer, Berlin Heidelberg New York,
          <year>2023</year>
          , pp.
          <fpage>236</fpage>
          -
          <lpage>241</lpage>
          . doi:
          <volume>10</volume>
          .1007/
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>