<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>EEG Anomalies Detection and Removal Using Generative Adversarial Networks</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Amine Ahardane</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Silverio Manganaro</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Roberta Avanzato</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Electrical, Electronics and Computer Engineering, University of Catania</institution>
          ,
          <addr-line>Catania</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <fpage>24</fpage>
      <lpage>34</lpage>
      <abstract>
<p>The human brain generates electrical activity that can be measured through electroencephalography (EEG). These signals are often contaminated by noise, which makes accurate analysis and interpretation difficult; such analysis is essential for clinical applications such as epilepsy diagnosis, cognitive neuroscience, and brain-computer interfaces. Traditional denoising techniques frequently fall short in effectively distinguishing between signal and noise, especially when the noise sources exhibit complex and nonlinear characteristics. This paper explores the application of Generative Adversarial Networks (GANs) to denoising EEG signals, offering a data-driven approach to learning the complex structures of both clean and noisy EEG data. We detail the training of a classifier to distinguish between normal and abnormal EEG signals, the development of an AutoEncoder to compress and reconstruct signals, and the use of a Wasserstein GAN (WGAN) to manipulate abnormal signals towards normality in the latent space. Our results demonstrate the potential of GAN-based methods for enhancing EEG signal quality, paving the way for more accurate clinical analyses.</p>
      </abstract>
      <kwd-group>
        <kwd>EEG Denoising</kwd>
        <kwd>Generative Adversarial Networks</kwd>
        <kwd>AutoEncoders</kwd>
        <kwd>Deep Learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>The human brain is a complex network of connected neurons that produces electrical activity, which can be measured through EEG. Although EEG signals are very useful for understanding brain functioning, they are heavily contaminated by numerous sources of noise of very different kinds: environmental interference, muscle activity, and electrical artifacts [1]. For instance, myogenic artifacts are created by muscle movements around the head and face, while ocular artifacts are generated by eye blinks and movements. Moreover, brain signals are complex and non-stationary, which makes the separation of neural activity from noise a challenging task. Denoising EEG signals is important because it allows us to better analyze and interpret them; this matters in various settings, such as clinical applications for the diagnosis of epilepsy, cognitive neuroscience, and brain-computer interfaces [2, 3].</p>
      <p>
        To meet these challenges, traditional denoising techniques, such as band-pass filtering, independent component analysis (ICA) [4, 5, 6], and wavelet and other domain transforms [
        <xref ref-type="bibr" rid="ref45">7, 2, 8</xref>
        ], have been put into practice. For example, linear filtering methods may remove significant portions of the EEG signal along with the noise, leading to a loss of critical information. ICA rests on the assumption that the sources are statistically independent, which might not hold in practical scenarios. Wavelet transforms do provide a multi-resolution approach but can be computation-intensive, and they cannot deal with noise that is highly non-stationary. These techniques often tend to remove the most important features of the original EEG signal, especially when the noise sources are complex and nonlinear [9, 10].
      </p>
      <p>
        In recent years, machine learning techniques such as Generative Adversarial Networks (GANs) have shown great potential for denoising EEG signals. GANs were first introduced by Goodfellow et al. in 2014 [11, 12]; they consist of two neural networks, the generator and the discriminator. The generator tries to generate data samples that could come from the real data distribution, while the discriminator tries to differentiate between real data examples and those created by the generator. During training, the GAN updates its weights so that the generator learns to fool the discriminator. This adversarial training process makes GANs capable of learning complex structures in the data, which is useful for problems such as denoising where typical methods fall short.
      </p>
      <p>
        Traditional GANs have several training problems, such as mode collapse and instability during training. To handle these problems, Arjovsky et al. (2017) [
        <xref ref-type="bibr" rid="ref49">13</xref>
        ] introduced Wasserstein Generative Adversarial Networks, or WGANs [14]. The methodology presented in this paper comprises three principal stages:
      </p>
      <sec id="sec-1-1">
        <title>1. Classifier Training: Initially, a classifier is devel</title>
        <p>oped to accurately discern between normal and
abnormal EEG signals using the provided dataset.</p>
        <p>This classifier forms the basis of our subsequent
denoising process by ensuring the precise
identiifcation of signal types.
2. AutoEncoder Development: Subsequently, an ity to handle multiple artifcat types, having the potential
AutoEncoder is constructed and trained exclu- of being used in applications in portable EEG devices.
sively on normal EEG data. This network is Similarly we have (Yang An et al. 2022 [19]) approach
tasked with compressing and reconstructing the that proposes a new loss function to retain original
inforEEG signals, thereby establishing a representa- mation and energy in the filtered signals, demonstrating
tive latent space that captures the salient features that the performances are comparable to manual
denoisof normal brain activity. This latent space serves ing methods while at the same time, we have a significant
as a critical reference for subsequent processing. reduction of the processing time.In this method, we have
3. WGAN Training: In the final stage, a Wasser- also an incorporation of a new normalization method
stein Generative Adversarial Network (WGAN) ensures stable generation of EEG signals by the GAN
is employed to refine the latent representations model, allowing for automatic denoising across diferent
of abnormal signals. The generator within the subjects’ data[20].</p>
        <p>WGAN is designed to transform these abnormal Another notable contribution in this domain is
prelatent representations so that they more closely sented by (Wang et al. 2022 [21]) in their work titled "An
resemble those of normal signals, efectively de- improved Generative Adversarial Network for Denoising
ceiving the discriminator. This transformation EEG signals of brain-computer interface systems[22, 23].
facilitates an improved reconstruction of the orig- The authors propose a novel GAN-based framework
inal signal, ultimately converting unhealthy sig- that includes a generator with BiLSTM and LSTM layers
nals into representations that are consistent with and a discriminator composed of multiple CNN layers.
healthy EEG characteristics—a process that holds This architecture aims to address the limitations of
presignificant potential for clinical applications. vious models by reducing mode collapse and improving
the convergence stability during training. The study</p>
        <p>By leveraging the advanced capabilities of GAN-based demonstrates that their improved GAN significantly
outarchitectures, the proposed approach addresses several performs traditional methods and other deep learning
limitations associated with traditional denoising tech- approaches, particularly in scenarios with high noise
levniques. The integration of classifier training, AutoEn- els. Their results show enhanced performance in terms of
coder development, and WGAN-based latent space ma- root mean square error (RMSE) and Pearson correlation
nipulation provides a comprehensive and robust solution coeficient (CC), making it a robust solution for real-time
for EEG signal denoising. This integrated methodology EEG denoising in brain-computer interface (BCI)
sysnot only enhances the quality of EEG signals but also tems.
increases their utility in clinical diagnostics and cogni- The improved GAN framework not only excels in
detive neuroscience research, paving the way for the de- noising accuracy but also enhances the robustness of EEG
velopment of more efective brain-computer interface signal processing by efectively handling artifacts such as
technologies. eye blinks and muscle movements. The authors utilized
the EEGdenoiseNet (Zhang et al. 2022 [24]) dataset to
2. Related Works benchmark their model, which includes a diverse range
of artifact-contaminated EEG signals. The proposed
There are several recent studies that deals with denois- model’s ability to maintain high performance across
varying EEG signals using various methodologies. Among ing signal-to-noise ratios (SNR) underscores its potential
these, (Peng Yi et al. 2021 [15]) using transformer, pro- for practical applications in BCI systems, where real-time
poses a novel approach integrating non-local and local processing and reliability are crucial [25, 26].
self-similarity of EEG signals through a 1-D EEG signal Moreover, this study show the importance of
incorpodenoising network with a 2-D transformer.Using self- rating BiLSTM and LSTM layers in the generator network
similarity characteristics, EEGDnet has a improvements to capture temporal dependencies in EEG signals. The
in removing ocular and muscle artifacts compared to discriminator network, consisting of multiple CNN
layother state-of-the-art models. ers, ensures that the generated clean EEG signals closely</p>
        <p>In (Eoin Brophy et al. 2022 [16]), the focus is on utiliz- resemble the true signals, thus enhancing the overall
ing Generative Adversarial Networks (GANs) to denoise quality of the denoised output.</p>
        <p>EEG time series data. We have to deal with artifacts In addition to the structural improvements, the
traininduced in real-world Brain-Computer Interface (BCI) ing strategy employed by Wang et al. achieves superior
applications[17, 18], which degrade performance.Using performance. By carefully balancing the learning rates
GANs the model is able to, given noisy EEG signals trans- and optimizing the loss functions for both the generator
forming it into clean ones,demonstrating promising re- and discriminator, the model is able to avoid the common
sults in quantitative metrics such as power spectral den- problems of the GAN, overfitting and non-convergence.
sity and signal-to-noise ratio. This study has the capabil- Other interesting works that apply GAN-based
models for anomaly detection (Zenati et al. 2018 [27]). In signal from this transformed latent representation will
this works, state-of-the-art performances are shown on be an enhanced version of the original abnormal signal.
image and network intrusion datasets. The approach con- This enhancement process exploit the combination of
sists of using a GAN that learns a latent representation classification, compression, and adversarial training to
of normal samples in such a way that the GAN is able improve the quality and normalcy of abnormal data.
to detect anomalies by measuring reconstruction and
discriminator-based loss. High performance is shown on 3.1. Phase 1: Classifier
the MNIST and KDD99 datasets in the method proposed.</p>
        <p>The proposed method by (Niu et al. 2020 [28]), is called In the first step, we train a very simple binary classifier.
LSTM-based VAE-GAN. This solves the ineficiency of The classifier learns the diference between normal and
real-time space-to-latent space mapping when doing abnormal data, in fact it’s role is to accurately identify
abanomaly detection. Here we exploit the temporal de- normal signals, which will later be processed to enhance
pendencies that an LSTM Network can capture and the their quality.
reconstruction and discrimination abilities of VAE and
GAN respectively. 3.2. Phase 2: AutoEncoder
(Zhang et al. 2023 [29]) is a novel GAN-based model
for unsupervised anomaly detection in multivariate time We now train an AutoEncoder (as shown in Figure 2)
series (MTS). In this work, the authors use a self-training using only the normal data. The job of this AutoEncoder
framework wherein they have a teacher model gen- is to compress and reconstruct each given signal. Once it
erating high-quality pseudo-labels iteratively training learns what the typical data distribution is, the
AutoEna student model. STAD-GAN involves a generator- coder can then be able to reconstruct signals that reflect
discriminator structure with a neural network classifier. normal behavior.</p>
        <p>The generator maps the normal data distribution. The
discriminator amplifies the reconstruction error of abnor- 3.3. Phase 3: WGAN
mal data to enhance recognition performance. It means
that the performance of the anomaly classifier will be For the final phase, we train a Wasserstein Generative
Adimproved through self-training by iteratively refining the versarial Network (WGAN). The Generator in the WGAN
dataset[30]. works on the encoded anomalous latent-space
represen</p>
        <p>Our approach ofers a diferent contribution. Unlike tation in such a way that the latter becomes more like the
existing methodologies that often rely on direct corre- latent representations of normal signals, those obtained
spondences between healthy and unhealthy signals for by pre-trained AutoEncoders. The Discriminator’s role
denoising, our study uses a comprehensive data-driven is to distinguish between the true normal latent
repreapproach without such direct correspondences, by trans- sentations and the manipulated ones produced by the
forming various types of unhealthy signals into healthy Generator. The objective is to train the Generator to
proones within the same architectural framework. duce enhanced versions of the abnormal signals that fool</p>
        <p>Many methods proposed in the literature typically ad- the Discriminator into classifying them as normal.
dress only one type of artifact or abnormality, these meth- The objective function for Wasserstein Generative
Adods limits versatility and applicability across diferent versarial Networks (WGAN) is given by:
scenarios and they often require a specialized design or
customization for each specific type of artifact or abnor- min max Ex∼ P [(x)] − Ez∼ P [((z))] (1)
mality, which can be both time-consuming and resource-  ∈
intensive. Here, P denotes the real data distribution, P denotes</p>
        <p>Specifically, our model is designed to handle poten- the prior distribution of the input noise z to the Generator
tially a diverse range of signal abnormalities and artifacts, , and  is the set of 1-Lipschitz functions which the
ensuring that it can be retrained and adapted to improve Discriminator  is a part of.
any given signal’s quality without the need for significant In particular, we will enforce the 1-Lipschitz constraint,
modifications. a gradient penalty term is added to the loss function. The
modified objective with gradient penalty (WGAN-GP) is:</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>3. Proposed Model</title>
      <p>Our approach to enhance abnormal data consists of three
main phases. The goal of our approach is to transform the
latent representation of an abnormal signal to resemble
those of normal signals. By doing so, the reconstructed
ℒ = Ex∼ P [(x)] − Ez∼ P [((z))]
+ Ex^∼ Px^ ︀[ (‖∇x^ (xˆ)‖2 − 1)2]︀
(2)
where Px^ is the distribution of the interpolated
samples between the real and generated data, and  is the
penalty coeficient.</p>
    </sec>
    <sec id="sec-3">
      <title>4. Implementation</title>
      <p>More specifically, our method is developed in three principal phases: training of the classifier to differentiate between normal and abnormal signals, training of the AutoEncoder for signal compression and reconstruction, and training of the Wasserstein Generative Adversarial Network (WGAN) that transforms unhealthy signals into healthy ones.</p>
      <p>Each phase is described in detail below, with extensive descriptions of data preparation, architectural choices, training processes, and integration steps. Note that in our experiments we used Weights &amp; Biases (wandb), a popular tool for tracking machine learning experiments. It is a strong platform for logging metrics, visualizing results, and collaborating on project implementations. In our implementation, we integrated wandb to monitor the training and evaluation processes of our models. In Table 1, we report the various hyperparameters of the models.</p>
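      <p>As an illustration of this logging setup, the sketch below shows how training and validation metrics could be sent to wandb; the project name, configuration values, metric keys, and training/evaluation helpers are placeholders rather than the exact ones used in our runs.</p>
      <preformat>
# Minimal sketch of experiment tracking with Weights and Biases (wandb).
# Project name, config values, and metric keys are illustrative placeholders.
import wandb

run = wandb.init(project="eeg-gan-denoising",
                 config={"lr": 1e-4, "batch_size": 64, "epochs": 100})

for epoch in range(run.config.epochs):
    train_loss = train_one_epoch()   # assumed to be defined elsewhere
    val_loss = evaluate()            # assumed to be defined elsewhere
    wandb.log({"epoch": epoch, "train_loss": train_loss, "val_loss": val_loss})

run.finish()
      </preformat>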
      <sec id="sec-3-1">
        <title>4.1. Phase 1: Classifier</title>
        <sec id="sec-3-1-1">
          <title>4.1.1. Data Preparation</title>
          <p>The main dataset (shown in Figure 1) used in our experiments consists of EEG signals recorded under various conditions. The Healthy dataset consists of over 1,500 one- and two-minute EEG recordings obtained from 109 volunteers. The EEG recordings were collected using a 64-channel setup with the BCI2000 system. Each subject performed 14 experimental runs, which included:
1. Baseline, eyes open
2. Baseline, eyes closed
3. Task 1: Open and close left or right fist
4. Task 2: Imagine opening and closing left or right fist
5. Task 3: Open and close both fists or both feet
6. Task 4: Imagine opening and closing both fists or both feet
7. Task 1 (repeat)
8. Task 2 (repeat)
9. Task 3 (repeat)
10. Task 4 (repeat)
11. Task 1 (repeat)
12. Task 2 (repeat)
13. Task 3 (repeat)
14. Task 4 (repeat)</p>
          <p>During these tasks, subjects performed or imagined performing the movement corresponding to the stimuli presented on the screen. All of these tasks are designed to evoke different types of motor and imagery behaviors and carry the following event-type tag-based indicators:
• T0: Rest
• T1: Onset of motion (real or imagined) of the left fist (Tasks 1 and 2) or both fists (Tasks 3 and 4)
• T2: Onset of motion (real or imagined) of the right fist (Tasks 1 and 2) or both feet (Tasks 3 and 4)</p>
        </sec>
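        <p>For illustration only, recordings of this kind (EDF files annotated with T0/T1/T2 events) could be segmented into labeled epochs roughly as in the sketch below; the file name, epoch window, and use of the MNE library are assumptions made for this example, not a description of our exact preprocessing.</p>
        <preformat>
# Sketch: load one EDF run and cut it into labeled epochs using the
# T0/T1/T2 annotations. Path and epoch length are placeholders.
import mne

raw = mne.io.read_raw_edf("S001R03.edf", preload=True, verbose=False)
events, event_id = mne.events_from_annotations(raw)   # maps T0/T1/T2 to codes

epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=0.0, tmax=4.0, baseline=None, preload=True)
X = epochs.get_data()        # shape: (n_epochs, n_channels, n_times)
y = epochs.events[:, -1]     # event code per epoch (rest vs. movement onset)
        </preformat>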
        <sec id="sec-3-1-1">
          <title>The Unhealthy dataset comprises raw 18-channel EEG</title>
          <p>recordings from 7 human participants with orthopedic
impairment during motor imagery (MI) tasks. Due to
component removal, some subjects have slightly fewer
channels (14 or 15) to eliminate noisy channels and
improve data quality.</p>
          <p>Participants performed a series of MI-related trials
across three sessions, each consisting of 40 trials with
four diferent MI tasks presented in random order. Each
trial included:
• 3 seconds of fixation cross
• 4 seconds of visual cue
• 3 seconds of letters indicating the ready state
• 5 seconds of imaginary movement</p>
        </sec>
        <sec id="sec-3-1-2">
          <title>The dataset includes filtered EEG data (8-30 Hz with a</title>
          <p>notch filter) and labels for 10 movement types and a rest
state. The corresponding electrode names are provided
in TSV files, ensuring a 1:1 mapping with the CSV data.</p>
          <p>Note that we have healthy and unhealthy signals, and
no correspondences between them ( so given an
unhealthy signal, we don’t have the corresponding healthy
one) .
4.1.3. Supplementary Dataset</p>
        </sec>
        <sec id="sec-3-1-3">
          <title>After training the classifier for WGAN, we tested it using</title>
          <p>the Epilepsy2 dataset, which contains single-channel EEG
measurements from 500 subjects. [31]</p>
          <p>Each subject’s brain activity was recorded for 23.6
seconds, resulting in a comprehensive collection of EEG
data. To facilitate detailed analysis and model training,
the dataset was divided and shufled into 11,500 samples,
each representing a 1-second segment of EEG data
sampled at 178 Hz. This shufling mitigates sample-subject
association and ensures a robust evaluation environment.</p>
          <p>The dataset is divided into three groups: 60 samples for
training, 20 samples for validation, and 11,420 samples
for the test. The validation set was appended directly to
the end of the training file to allow easier reproducibility
in case a validation set is required. The small size of
the training set will actually assist in testing the transfer
learning capabilities, and the test set retains the original
distribution for a fair evaluation.</p>
          <p>The dataset contains five unique classification labels
related to diferent conditions or measurement locations:</p>
        </sec>
        <sec id="sec-3-1-4">
          <title>1. Eyes open</title>
          <p>2. Eyes closed
3. EEG measured in a healthy brain region
4. EEG measured in the region of a tumor
5. Subject experiencing a seizure episode</p>
        </sec>
        <sec id="sec-3-1-5">
          <title>Our classifier achieved an impressive accuracy of 95%</title>
          <p>on this dataset, demonstrating its efectiveness in
distinguishing between seizure and non-seizure states. For
reference, the Mini Rocket classifier, a well-known model
in the field, achieves a test accuracy of 96.25%,
highlighting the competitive performance of our approach.</p>
          <p>The actual classifier, autoencoder and wgan they will
be trained using the main dataset.
4.1.4. Classifier Architecture</p>
        </sec>
        <sec id="sec-3-1-6">
          <title>We implemented a binary classifier using a neural net</title>
          <p>work architecture. The classifier consists of multiple fully
connected (dense) layers, each followed by a Rectified
Linear Unit (ReLU) activation function, and a final
Sigmoid activation function to output the probability of the
signal being abnormal. The detailed architecture is as
follows:
• Input Layer: Accepts the preprocessed EEG
signal.
• Hidden Layers:
– First layer: 512 neurons, ReLU activation.
– Second layer: 256 neurons, ReLU
activation.</p>
          <p>– Third layer: 64 neurons, ReLU activation.</p>
          <p>• Output Layer: 1 neuron, Sigmoid activation.</p>
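          <p>A minimal PyTorch sketch of this classifier is given below; the layer sizes follow the list above, while the input dimension is a placeholder that depends on how the EEG segments are flattened.</p>
          <preformat>
# Sketch of the binary classifier described above (PyTorch).
# INPUT_DIM is a placeholder for the length of a flattened EEG segment.
import torch.nn as nn

INPUT_DIM = 1024   # assumption: set to the actual segment length

classifier = nn.Sequential(
    nn.Linear(INPUT_DIM, 512), nn.ReLU(),    # first hidden layer
    nn.Linear(512, 256),       nn.ReLU(),    # second hidden layer
    nn.Linear(256, 64),        nn.ReLU(),    # third hidden layer
    nn.Linear(64, 1),          nn.Sigmoid()  # probability of the signal being abnormal
)
          </preformat>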
        </sec>
        <sec id="sec-3-1-7">
          <title>This architecture was chosen for its balance between complexity and performance that allows the model to classify efectively between unhealthy and healthy signal.</title>
          <p>4.1.5. Training the Classifier</p>
        </sec>
        <sec id="sec-3-1-8">
          <title>The classifier was trained using the Adam optimizer with</title>
          <p>a binary cross-entropy loss function. The training process
involved multiple epochs, where the model parameters
were updated iteratively to minimize the loss function.
During training, the following steps were performed:
• Forward Pass: The input data was passed
through the network to obtain the output
probabilities.
• Loss Calculation: The binary cross-entropy loss
between the predicted probabilities and the true
labels was calculated.
• Backward Pass: Gradients were computed using
backpropagation, and the network weights were
updated using the Adam optimization algorithm.</p>
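          <p>The corresponding training loop could be sketched as follows; the data loader, learning rate, and number of epochs are placeholder assumptions.</p>
          <preformat>
# Sketch of the classifier training loop: Adam optimizer and binary cross-entropy.
# `classifier` is the model from the previous sketch; `train_loader` yields
# (signals, labels) batches and is assumed to exist.
import torch

optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-4)
criterion = torch.nn.BCELoss()

for epoch in range(50):                         # epoch count is illustrative
    for signals, labels in train_loader:
        probs = classifier(signals)             # forward pass
        loss = criterion(probs, labels.float().unsqueeze(1))   # loss calculation
        optimizer.zero_grad()
        loss.backward()                         # backward pass
        optimizer.step()                        # Adam update
          </preformat>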
        </sec>
        <sec id="sec-3-1-9">
          <title>The training loss is described in Figure 4 and its correspondig loss of evaluation dataset Figure 3</title>
        </sec>
      </sec>
      <sec id="sec-3-2">
        <title>4.2. Phase 2: AutoEncoder</title>
        <sec id="sec-3-2-1">
          <title>4.2.1. AutoEncoder Architecture</title>
          <p>The AutoEncoder was designed to compress and reconstruct the EEG signals. It consists of two main parts: the encoder and the decoder. The encoder reduces the dimensionality of the input data to a latent space, capturing the essential features, while the decoder reconstructs the data back to its original form from this compressed representation.
• Encoder:
– First layer: 128 neurons, ReLU activation.
– Second layer: 64 neurons, ReLU activation.
– Third layer: 32 neurons (latent space), ReLU activation.</p>
          <p>This architecture allows the AutoEncoder to learn a compressed representation of normal EEG signals, which is needed for the training of the WGAN.</p>
        </sec>
        <sec id="sec-3-2-2">
          <title>4.2.2. Training the AutoEncoder</title>
          <p>The AutoEncoder was trained using the mean squared error (MSE) loss function, which measures the difference between the input and the reconstructed output. The training involved the following steps:
• Forward Pass: the normal EEG signals were passed through the encoder to obtain the latent representation, and then through the decoder to reconstruct the signals.
• Loss Calculation: the MSE loss was computed between the original and reconstructed signals.
• Backward Pass: gradients were computed, and the network weights were updated using the Adam optimizer.</p>
          <p>The training was carried out over several epochs while checking the performance of the model on the validation set in terms of the reconstruction error. The objective was to minimize the reconstruction error, which should allow the AutoEncoder to reconstruct normal EEG signals accurately. The training loss is shown in Figure 6.</p>
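          <p>A compact PyTorch sketch of this AutoEncoder and its MSE training step is shown below; the mirrored decoder layout and the input dimension are assumptions made for illustration.</p>
          <preformat>
# Sketch of the AutoEncoder: encoder 128-64-32 (latent) as listed above,
# with a mirrored decoder (an assumption), trained with MSE on normal signals only.
import torch
import torch.nn as nn

INPUT_DIM = 1024   # placeholder for the flattened segment length

encoder = nn.Sequential(
    nn.Linear(INPUT_DIM, 128), nn.ReLU(),
    nn.Linear(128, 64),        nn.ReLU(),
    nn.Linear(64, 32),         nn.ReLU(),        # 32-dimensional latent space
)
decoder = nn.Sequential(                         # mirrored layout (assumption)
    nn.Linear(32, 64),         nn.ReLU(),
    nn.Linear(64, 128),        nn.ReLU(),
    nn.Linear(128, INPUT_DIM),
)

params = list(encoder.parameters()) + list(decoder.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)
mse = nn.MSELoss()

for signals in normal_loader:                    # loader over normal signals only (assumed)
    recon = decoder(encoder(signals))            # forward pass
    loss = mse(recon, signals)                   # reconstruction error
    optimizer.zero_grad(); loss.backward(); optimizer.step()
          </preformat>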
        </sec>
        <sec id="sec-3-2-2">
          <title>In order to provide stable training dynamics and avoid</title>
          <p>problems like mode collapse, training of the WGAN was
performed with Wasserstein loss and gradient penalty
(WGAN-GP). The training process involved alternating
between optimizing the discriminator and the generator.</p>
          <p>The following steps were performed during training:
• Discriminator Training:
– Real latent representations (from the
encoder) and fake latent representations
(from the generator) were fed into the
discriminator.
– The discriminator’s loss was calculated
based on its ability to discern real from
fake representations.
• Generator Training:
– Gradients were computed, and the discrim- signals were performed to make sure that we remove
abinator’s weights were updated. normalities but also preserve essential features of the
EEG signals.</p>
          <p>With this method, we will efectively enhance the
qual– The generator produced fake latent repre- ity of EEG signals and transform unhealthy signals to
sentations from abnormal signals. healthy ones. The deployment of our model provides a
– These representations were fed into the scalable solution for EEG signal denoising applicable in
discriminator. many clinical and research applications.
– The generator’s loss was calculated based</p>
          <p>on its ability to fool the discriminator.
– Gradients were computed, and the genera- 5. Results</p>
          <p>tor’s weights were updated.</p>
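            <p>The sketch below illustrates one WGAN-GP update on the latent representations, including the gradient penalty of Eq. (2); the penalty coefficient, the number of critic steps per generator step, and the variable names are assumptions for this example.</p>
            <preformat>
# Sketch of one WGAN-GP training iteration on latent representations.
# generator, disc, d_opt, g_opt and the latent batches are assumed to exist.
import torch

LAMBDA_GP = 10.0   # gradient-penalty coefficient (common default, an assumption here)
N_CRITIC = 5       # discriminator updates per generator update (assumption)

def gradient_penalty(disc, real, fake):
    # Interpolate between real and generated latent vectors, as in Eq. (2).
    alpha = torch.rand(real.size(0), 1, device=real.device)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    grads = torch.autograd.grad(disc(interp).sum(), interp, create_graph=True)[0]
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()

for _ in range(N_CRITIC):                          # critic is trained more often
    fake = generator(abnormal_latent).detach()
    d_loss = disc(fake).mean() - disc(normal_latent).mean() \
             + LAMBDA_GP * gradient_penalty(disc, normal_latent, fake)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

g_loss = -disc(generator(abnormal_latent)).mean()  # generator tries to fool the critic
g_opt.zero_grad(); g_loss.backward(); g_opt.step()
            </preformat>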
        </sec>
        <sec id="sec-3-2-3">
          <title>The discriminator was trained more frequently than</title>
          <p>the generator to maintain a balance between the two
networks. Thanks to this adversarial training process, the
generator learned how to convert signals from unhealthy
to healthy. The training loss is described in Figure 7</p>
        </sec>
      </sec>
      <sec id="sec-3-3">
        <title>4.4. Integration and Testing</title>
        <p>After training each component, we integrated the models (shown in Figure 8) to form a cohesive system, in which the role of each sub-model is as follows:
• Classification: the classifier identifies abnormal signals in the EEG data.
• Encoding: the identified abnormal signals are encoded into latent representations using the AutoEncoder's encoder.
• Transformation: the WGAN generator transforms these latent representations to match the distribution of normal latent representations.
• Reconstruction: the AutoEncoder's decoder reconstructs the enhanced signals from these transformed latent representations.</p>
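        <p>Put together, a minimal sketch of this four-step inference pipeline could look as follows; the function name, threshold, and model handles are placeholders, and the trained classifier, encoder, generator, and decoder are assumed to be available.</p>
        <preformat>
# Sketch of the integrated inference pipeline: classify, encode, transform, decode.
import torch

def enhance(signal, classifier, encoder, generator, decoder, threshold=0.5):
    """Return the signal unchanged if it looks normal, otherwise its enhanced version."""
    with torch.no_grad():
        p_abnormal = classifier(signal)              # step 1: classification
        if p_abnormal.item() >= threshold:
            z = encoder(signal)                      # step 2: encoding
            z_enhanced = generator(z)                # step 3: latent transformation
            return decoder(z_enhanced)               # step 4: reconstruction
        return signal
        </preformat>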
        <sec id="sec-3-3-1">
          <title>At inference time (shown in Figure 5) the output of</title>
          <p>the generator will be the input of the decoder to have
the corresponding Healthy signal. The combined system
was additionally analyzed with other performance
measurements such as: accuracy, RMSE, MPC. mean entropy.
Additionally, qualitative assessments of the reconstructed</p>
        </sec>
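        <p>For reference, the kinds of quantitative checks mentioned above could be computed roughly as in the sketch below; the exact metric definitions used to produce Table 2 may differ.</p>
        <preformat>
# Sketch of the evaluation metrics: latent-space RMSE, Pearson correlation,
# and the entropy change of a sample before and after enhancement.
import numpy as np
from scipy.stats import pearsonr, entropy

def latent_rmse(z_generated, z_target):
    return np.sqrt(np.mean((z_generated - z_target) ** 2))

def pearson_cc(signal_a, signal_b):
    return pearsonr(signal_a, signal_b)[0]

def entropy_change(before, after, bins=64):
    # Histogram-based signal entropy; the binning choice is an assumption.
    h_before = entropy(np.histogram(before, bins=bins, density=True)[0] + 1e-12)
    h_after = entropy(np.histogram(after, bins=bins, density=True)[0] + 1e-12)
    return h_after - h_before   # negative values indicate reduced signal complexity
        </preformat>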
        <sec id="sec-3-3-2">
          <title>The evaluation of our model’s performance included sev</title>
          <p>eral key metrics: Root Mean Square Error (RMSE)
calculated in the latent space, Pearson correlation, classifier
accuracy, and entropy changes of the samples after
manipulation. The results are summarized in Table 2.</p>
          <p>The RMSE in the latent space for GAN architecture
is very low (0.004166), meaning that the signals
reconstructed with the use of GAN are close to the expected
latent representations. This further means that the RMSE
value is quite low and close to zero, thus suggesting the
fact that GAN can successfully capture the structure of
signals.</p>
          <p>The mean Pearson correlation for the GAN is -0.116,
which indicates a slight negative correlation between the
generated signals and the true signals. While it is not a
strong negative correlation, it still denotes that there is
more room of improvements for the model to generate
signals that closely resemble the true signal patterns.</p>
          <p>The classifier, which identify the healthy samples for
the unhealthy one, achieved an accuracy of 0.9047. This
high accuracy shows that the classifier is efective in
diferentiating between the EEG recordings of normal
and abnormal cases, giving a good basis for further
processing by the GAN. In an entropy perspective, after
manipulation was performed on the 2334 samples, 585
samples showed increased mean entropy while 1728
samples showed decreased mean entropy. Entropy represents
a measure of randomness or disorder in the signals. If the
entropy has increased, this can be taken as an indication
that the signals grew to be more complex, possibly
indicating the introduction of noise or artifacts. On the other
hand, a decrease in entropy means a decrease in signal
complexity, which may be explained by efective signal
denoising and enhancement by the GAN. In Figure 9, is that synergistically integrates a classifier, an
AutoEnshown an example of transforming an unhealthy signal coder, and a WGAN. Using the unique capabilities of
to healthy one, its clear that probably by adding to the each component, the proposed method efectively
distinloss also a constraint to the amplitude, we will obtain a guishes between normal and abnormal signals, learns a
better signal. compact latent representation of healthy EEG patterns,
and transforms noisy or abnormal latent representations
towards normality. The experimental results,
characterized by a low RMSE in the latent space and robust
classifier performance, underscore the potential of the
framework to achieve meaningful denoising while
preserving critical signal features.</p>
          <p>Our integrated approach addresses several of the
limitations inherent in traditional signal processing
techniques, ofering a data-driven alternative that
accommodates the complex, non-stationary, and nonlinear nature
of EEG noise. Despite demonstrating promising results,
particularly in terms of classifier accuracy and latent
space convergence, the study also reveals areas where
further renfiement is warranted, such as improving the
Pearson correlation between reconstructed and true
signals and optimizing entropy measures in the enhanced
Figure 9: Example of denoising an unhealthy signal output.</p>
          <p>Future work should explore the scalability of the
frame</p>
          <p>These results show promise for enhancing abnormal work across broader and more diverse datasets,
includEEG signals using a GAN-based approach. Particularly ing multi-channel recordings and clinical datasets
enencouraging is the very low RMSE and high classifier compassing a wider range of neurological conditions.
accuracy, while the entropy results provide further in- Moreover, the real-time implementation of the denoising
sight into the nature of signal transformations. Further framework could significantly enhance its applicability
improvements in the model could potentially address the in BCI systems. Integrative research eforts that
comnegative correlation and refine the signal manipulation bine EEG with other neuroimaging modalities, such as
process to achieve even better outcomes. fMRI or MEG, may further enrich the diagnostic
precision and clinical relevance of the proposed methodology.</p>
          <p>In general, the promising results of this study pave the
6. Conclusions and Future Works way for advanced EEG signal processing paradigms,
offering valuable insights into both cognitive neuroscience
This study has presented a novel framework for enhanc- research and practical clinical diagnostics.
ing abnormal EEG signals using a GAN-based approach</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Declaration on Generative AI</title>
      <sec id="sec-4-1">
        <title>During the preparation of this work, the authors used</title>
        <p>ChatGPT, Grammarly in order to: Grammar and spelling
check, Paraphrase and reword. After using this
tool/service, the authors reviewed and edited the content as
needed and take full responsibility for the publication’s
content.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <source>ume 13589 LNAI</source>
          ,
          <year>2023</year>
          , p.
          <fpage>3</fpage>
          -
          <lpage>20</lpage>
          . doi:
          <volume>10</volume>
          .1007/
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          978-3-
          <fpage>031</fpage>
          -23480-
          <issue>4</issue>
          _
          <fpage>1</fpage>
          . [9]
          <string-name>
            <given-names>B.</given-names>
            <surname>Ladjal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. E.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bechouat</surname>
          </string-name>
          , M. Se-
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <volume>120</volume>
          (
          <year>2024</year>
          )
          <fpage>14703</fpage>
          -
          <lpage>14725</lpage>
          . [10]
          <string-name>
            <given-names>S.</given-names>
            <surname>Russo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. E.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          ,
          <string-name>
            <surname>A</surname>
          </string-name>
          . Tibermacine,
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <article-title>Analyzing eeg patterns in young adults exposed to [1</article-title>
          ]
          <string-name>
            <given-names>N.</given-names>
            <surname>Boutarfaia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Russo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. E.</given-names>
            <surname>Tiber</surname>
          </string-name>
          <article-title>- diferent acrophobia levels: a vr study</article-title>
          , Frontiers
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <article-title>macine, Deep learning for eeg-based motor imagery in Human Neuroscience 18 (</article-title>
          <year>2024</year>
          ). doi:
          <volume>10</volume>
          .3389/
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <article-title>classification: Towards enhanced human-machine fnhum</article-title>
          .
          <year>2024</year>
          .
          <volume>1348154</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <source>interaction and assistive robotics, life</source>
          <volume>2</volume>
          (
          <year>2023</year>
          )
          <article-title>4</article-title>
          . [11]
          <string-name>
            <given-names>I. J.</given-names>
            <surname>Goodfellow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Pouget-Abadie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mirza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Xu</surname>
          </string-name>
          , [2]
          <string-name>
            <given-names>V.</given-names>
            <surname>Ponzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Russo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Wajda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Brociek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Napoli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Warde-Farley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ozair</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Courville</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bengio</surname>
          </string-name>
          ,
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <article-title>Analysis pre and post covid-19 pandemic rorschach Generative Adversarial Networks, arXiv preprint</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <article-title>test data of using em algorithms</article-title>
          and gmm mod- arXiv:
          <fpage>1406</fpage>
          .2661 (
          <year>2014</year>
          ). URL: https://arxiv.org/abs/
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          els,
          <source>in: CEUR Workshop Proceedings</source>
          , volume
          <volume>3360</volume>
          ,
          <fpage>1406</fpage>
          .
          <fpage>2661</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <year>2022</year>
          , p.
          <fpage>55</fpage>
          -
          <lpage>63</lpage>
          . [12]
          <string-name>
            <given-names>A.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Guettala</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. E.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          , Ef[3]
          <string-name>
            <given-names>N. N.</given-names>
            <surname>Dat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Ponzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Russo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Vincelli</surname>
          </string-name>
          , et al.,
          <article-title>Sup- ifcient one-stage deep learning for text detection</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <article-title>assistant by means of end-to-end visual target nav- matica 72 (</article-title>
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <article-title>igation and reinforcement learning approaches</article-title>
          , in: [13]
          <string-name>
            <given-names>L. B.</given-names>
            <surname>Martin Arjovsky</surname>
          </string-name>
          , Soumith Chintala, GWasser-
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          <source>CEUR Workshop Proceedings</source>
          , volume
          <volume>3118</volume>
          ,
          <string-name>
            <surname>CEUR- stein</surname>
            <given-names>GAN</given-names>
          </string-name>
          ,
          <source>arXiv preprint arXiv:1701.07875</source>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          <string-name>
            <surname>WS</surname>
          </string-name>
          ,
          <year>2021</year>
          , pp.
          <fpage>51</fpage>
          -
          <lpage>63</lpage>
          . URL: https://arxiv.org/abs/1701.07875. [4]
          <string-name>
            <given-names>S.</given-names>
            <surname>Makeig</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. J.</given-names>
            <surname>Bell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.-P.</given-names>
            <surname>Jung</surname>
          </string-name>
          , T. J. Sejnowski, [14]
          <string-name>
            <given-names>C.</given-names>
            <surname>Napoli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Napoli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Ponzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Puglisi</surname>
          </string-name>
          , S. Russo,
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          <source>Information Processing Systems</source>
          (
          <year>1996</year>
          ).
          <article-title>URL: caregivers</article-title>
          ,
          <source>in: CEUR Workshop Proceedings</source>
          , vol-
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          https://www.researchgate.net/publication/ ume 3686,
          <year>2024</year>
          , p.
          <fpage>1</fpage>
          -
          <lpage>10</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          2242002_Independent_Component_Analysis_of_ [15]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Peng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Kecheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zhaoqi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Di</surname>
          </string-name>
          , P. Xiaorong,
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          <string-name>
            <given-names>Electroencephalographic_Data. R.</given-names>
            <surname>Yazhou</surname>
          </string-name>
          , EEGDnet: Fusing Non-Local and Local [5]
          <string-name>
            <given-names>I. E.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          , W. Guettala,
          <article-title>Self-Similarity for 1-D EEG Signal Denoising with</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          <string-name>
            <given-names>C.</given-names>
            <surname>Napoli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Russo</surname>
          </string-name>
          , Enhancing sentiment anal- 2-
          <string-name>
            <given-names>D</given-names>
            <surname>Transformer</surname>
          </string-name>
          , arXiv preprint arXiv:
          <volume>2109</volume>
          .
          <fpage>04235</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          <article-title>ysis on seed-iv dataset with vision transformers: A (</article-title>
          <year>2021</year>
          ). URL: https://arxiv.org/pdf/2109.04235.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          <article-title>comparative study</article-title>
          ,
          <source>in: Proceedings of the 2023</source>
          <volume>11th</volume>
          [16]
          <string-name>
            <given-names>B.</given-names>
            <surname>Eoin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Peter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Andrew</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Maarten</surname>
          </string-name>
          , B. Geral-
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          <article-title>ogy: IoT and smart</article-title>
          city,
          <year>2023</year>
          , pp.
          <fpage>238</fpage>
          -
          <lpage>246</lpage>
          .
          <string-name>
            <surname>Real-World BCI Applications Using</surname>
            <given-names>GANs</given-names>
          </string-name>
          , [6]
          <string-name>
            <given-names>S. I.</given-names>
            <surname>Illari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Russo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Avanzato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Napoli</surname>
          </string-name>
          , A cloud- Frontiers in Neuroscience (
          <year>2022</year>
          ). URL: https:
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          <article-title>and follow-up of hospitalized patients</article-title>
          ,
          <source>in: CEUR articles/10</source>
          .3389/fnrgo.
          <year>2021</year>
          .805573/full.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          <source>Workshop Proceedings</source>
          , volume
          <volume>2694</volume>
          ,
          <year>2020</year>
          , p.
          <fpage>29</fpage>
          -
          <lpage>[</lpage>
          17]
          <string-name>
            <given-names>S.</given-names>
            <surname>Russo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ahmed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. E.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Napoli</surname>
          </string-name>
          , En-
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          35.
          <article-title>hancing eeg signal reconstruction in cross-domain [7</article-title>
          ]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Weidong</surname>
          </string-name>
          , L. Yingyuan,
          <string-name>
            <surname>EEG</surname>
          </string-name>
          <article-title>Multiresolution adaptation using cyclegan</article-title>
          , in: 2024 International
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          <string-name>
            <surname>port</surname>
          </string-name>
          ,
          <source>Defense Technical Information Center</source>
          ,
          <year>2001</year>
          .
          <source>Systems (ICTIS)</source>
          , IEEE,
          <year>2024</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          URL: https://apps.dtic.mil/sti/tr/pdf/ADA411450. [18]
          <string-name>
            <given-names>A.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. E.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          , M. Zouai,
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          <string-name>
            <given-names>pdf . A.</given-names>
            <surname>Rabehi</surname>
          </string-name>
          ,
          <article-title>Eeg classification using contrastive learn[8</article-title>
          ]
          <string-name>
            <given-names>N.</given-names>
            <surname>Brandizzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Fanti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Gallotta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Russo</surname>
          </string-name>
          , L. Ioc
          <article-title>- ing and riemannian tangent space representations,</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          <string-name>
            <surname>chi</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <string-name>
            <surname>Nardi</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <string-name>
            <surname>Napoli</surname>
          </string-name>
          , Unsupervised pose es- in: 2024 International Conference on Telecommuni-
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          <article-title>timation by means of an innovative vision trans- cations and Intelligent Systems (ICTIS)</article-title>
          , IEEE,
          <year>2024</year>
          ,
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          former, in: Lecture Notes in Computer Science pp.
          <fpage>1</fpage>
          -
          <lpage>7</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          <source>(including subseries Lecture Notes in Artificial In-</source>
          [19]
          <string-name>
            <given-names>Y.</given-names>
            <surname>An</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. K.</given-names>
            <surname>Lam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. H.</given-names>
            <surname>Ling</surname>
          </string-name>
          , Auto-Denoising
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          <string-name>
            <surname>work</surname>
          </string-name>
          ,
          <string-name>
            <surname>Sensors</surname>
          </string-name>
          (
          <year>2022</year>
          ). URL: https://www.mdpi.com/
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          1424-
          <fpage>8220</fpage>
          /22/5/1750. [20]
          <string-name>
            <given-names>I.</given-names>
            <surname>Naidji</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Guettala</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. E</surname>
          </string-name>
          . Tiber-
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          <source>in: ICYRIME</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>51</fpage>
          -
          <lpage>59</lpage>
          . [21]
          <string-name>
            <given-names>S.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Luo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Shen</surname>
          </string-name>
          ,
          <article-title>An improved gener-</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          <article-title>ings of the 2022 China Automation Congress (CAC)</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          (
          <year>2022</year>
          ). URL: https://ieeexplore.ieee.org/document/
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          10055145. [22]
          <string-name>
            <given-names>B.</given-names>
            <surname>Nail</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Djaidir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. E.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          , C. Napoli,
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          <string-name>
            <surname>tem</surname>
          </string-name>
          ,
          <source>Diagnostyka</source>
          <volume>25</volume>
          (
          <year>2024</year>
          ). [23]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bouchelaghem</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. E.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Balsi</surname>
          </string-name>
          , M. Mo-
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          <article-title>tics litter detection</article-title>
          , in: 2024 IEEE Mediterranean
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          <string-name>
            <surname>Symposium (M2GARSS),</surname>
            <given-names>IEEE</given-names>
          </string-name>
          ,
          <year>2024</year>
          , pp.
          <fpage>36</fpage>
          -
          <lpage>40</lpage>
          . [24]
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Wei</surname>
          </string-name>
          , Eegdenoisenet: A
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          <article-title>eeg denoising</article-title>
          , arXiv preprint arXiv:
          <year>2009</year>
          .11662
        </mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>
          (
          <year>2021</year>
          ). URL: (https://arxiv.org/abs/
          <year>2009</year>
          .11662. [25]
          <string-name>
            <given-names>B.</given-names>
            <surname>Nail</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Atoussi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Saadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. E.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          ,
        </mixed-citation>
      </ref>
      <ref id="ref45">
        <mixed-citation>
          <source>tional 8</source>
          (
          <year>2024</year>
          )
          <fpage>104</fpage>
          . [26]
          <string-name>
            <given-names>C.</given-names>
            <surname>Napoli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Napoli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Ponzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Puglisi</surname>
          </string-name>
          , S. Russo,
        </mixed-citation>
      </ref>
      <ref id="ref46">
        <mixed-citation>
          <source>ume 3686</source>
          ,
          <year>2024</year>
          , p.
          <fpage>1</fpage>
          -
          <lpage>10</lpage>
          . [27]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Houssam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Chuan-Sheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bruno</surname>
          </string-name>
          , M. Gaurav,
        </mixed-citation>
      </ref>
      <ref id="ref47">
        <mixed-citation>
          <string-name>
            <surname>arXiv</surname>
          </string-name>
          (
          <year>2018</year>
          ). URL: https://arxiv.org/abs/
          <year>1802</year>
          .06222. [28]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Niu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <surname>LSTM-Based</surname>
            <given-names>VAE</given-names>
          </string-name>
          -GAN
        </mixed-citation>
      </ref>
      <ref id="ref48">
        <mixed-citation>
          (
          <year>2020</year>
          ). URL: https://www.mdpi.com/1424-8220/20/
        </mixed-citation>
      </ref>
      <ref id="ref49">
        <mixed-citation>
          <volume>13</volume>
          /3738. [29]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhijie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Wenzhong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Wangxiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Linming</surname>
          </string-name>
          ,
        </mixed-citation>
      </ref>
      <ref id="ref50">
        <mixed-citation>
          <string-name>
            <given-names>Intelligent</given-names>
            <surname>Systems</surname>
          </string-name>
          and Technology (
          <year>2023</year>
          ). URL:
        </mixed-citation>
      </ref>
      <ref id="ref51">
        <mixed-citation>
          https://dl.acm.org/doi/10.1145/3572780. [30]
          <string-name>
            <given-names>A.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Akrour</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Khamar</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. E</surname>
          </string-name>
          . Tiber-
        </mixed-citation>
      </ref>
      <ref id="ref52">
        <mixed-citation>
          <article-title>response to diferent auditory stimuli</article-title>
          ,
          <source>in: 2024</source>
        </mixed-citation>
      </ref>
      <ref id="ref53">
        <mixed-citation>
          <source>and Intelligent Systems (ICTIS)</source>
          , IEEE,
          <year>2024</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          . [31]
          <string-name>
            <given-names>P.</given-names>
            <surname>Rabich</surname>
          </string-name>
          , Dataset:
          <fpage>Epilepsy2</fpage>
          ,
          <year>2023</year>
          . URL:
        </mixed-citation>
      </ref>
      <ref id="ref54">
        <mixed-citation>
          description.php?Dataset=
          <fpage>Epilepsy2</fpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>