<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Intelligent system for hyperspectral image processing based on generative adversarial networks</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Victor Sineglazov</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oleksii Shcherban</string-name>
        </contrib>
      </contrib-group>
      <fpage>167</fpage>
      <lpage>179</lpage>
      <abstract>
        <p>This paper addresses the challenge of hyperspectral image classification under conditions of limited labeled data and class imbalance. An improved method based on the AC-WGAN-GP architecture is proposed to enhance classification performance through dataset augmentation with synthetic samples generated via class-aware sampling and label embedding. The generator, discriminator, and classifier were modified accordingly, resulting in high classification accuracy on standard benchmark datasets. The proposed approach demonstrates strong potential for applications in remote sensing and precision agriculture.</p>
      </abstract>
      <kwd-group>
        <kwd>hyperspectral images</kwd>
        <kwd>generative adversarial networks</kwd>
        <kwd>AC-WGAN-GP</kwd>
        <kwd>classification</kwd>
        <kwd>synthetic samples</kwd>
        <kwd>class-aware sampling</kwd>
        <kwd>label embedding</kwd>
        <kwd>precision agriculture</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
    </sec>
    <sec id="sec-2">
      <title>2. Hyperspectral and multispectral imaging</title>
      <p>
        Hyperspectral imaging (HSI) captures hundreds of contiguous spectral bands across the visible to
SWIR range, enabling fine-grained detection of crop conditions such as nutrient deficiency, disease,
and water stress [
        <xref ref-type="bibr" rid="ref12 ref2">2, 12</xref>
        ]. In contrast, multispectral imaging (MSI) uses fewer (3–15) broader bands
targeting key wavelengths, supporting vegetation indices like NDVI and EVI [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
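      <p>As an illustration of such an index, NDVI is computed per pixel from the near-infrared (NIR) and red reflectances as (NIR − Red) / (NIR + Red); a minimal sketch (the small eps guard against division by zero is our addition, not part of the index definition):</p>

```python
import numpy as np

def ndvi(nir, red, eps=1e-8):
    """Normalized Difference Vegetation Index per pixel: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)
```

      <p>Applied band-wise to a multispectral scene, this yields a per-pixel map in [−1, 1], with dense healthy vegetation near the upper end.</p>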
      <p>
        HSI is valuable in precision agriculture due to its ability to distinguish visually similar crops and
detect early plant stress [
        <xref ref-type="bibr" rid="ref13 ref14">13, 14</xref>
        ]. However, its high dimensionality complicates storage and
analysis, often requiring PCA, ICA, or autoencoders. Moreover, deep learning models demand large
annotated datasets, which are costly to produce [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. Spectral similarity among classes further
complicates classification, often mitigated via spatial context or data augmentation [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
HSI combines spectroscopy and imaging, producing a 3D cube with rich spectral-spatial data [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ].
Pixel-wise classification supports tasks like target detection, change monitoring, and crop mapping
[
        <xref ref-type="bibr" rid="ref17 ref18">17, 18</xref>
        ]. While early methods used SVM or KNN, modern approaches rely on CNNs, 3D-CNNs, and
hybrid networks for improved performance [
        <xref ref-type="bibr" rid="ref19 ref8 ref9">8, 9, 19</xref>
        ].
      </p>
      <p>
        Data collection is supported by platforms like satellites (e.g., Sentinel-2, Landsat), which offer
wide coverage but lower resolution and weather limitations [
        <xref ref-type="bibr" rid="ref10 ref6">6, 10</xref>
        ]; UAVs, which provide high
resolution but limited scalability [
        <xref ref-type="bibr" rid="ref11 ref20 ref21 ref22 ref23 ref24">11, 20–24</xref>
        ]; and open-access datasets (e.g., HISUI) that drive
development of HSI classification methods. Combining platforms enables flexible monitoring
tailored to agricultural needs.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Using generative adversarial networks for HSI classification tasks</title>
      <p>
        Generative adversarial networks (GANs) are effective in generating structured data, including
hyperspectral images, helping reduce dependence on large labeled datasets—a key benefit in
data-scarce settings [
        <xref ref-type="bibr" rid="ref15 ref25">15, 25</xref>
        ]. This study utilizes the AC-WGAN-GP architecture, which combines class
conditioning (AC-GAN), Wasserstein loss for stability, and gradient penalty regularization [
        <xref ref-type="bibr" rid="ref20 ref26">20, 26</xref>
        ].
      </p>
      <p>
        The model generates realistic synthetic samples with specified labels, preserving diversity even
with limited training data. Architectural enhancements further address class imbalance and
spectral similarity, improving classification performance in both HSI and MSI contexts—relevant
for precision agriculture [
        <xref ref-type="bibr" rid="ref18 ref27">18, 27</xref>
        ]. Such hybrid designs can be generalized across AI systems for
improved adaptability and robustness [28–30].
      </p>
      <sec id="sec-3-1">
        <title>3.1. AC-WGAN-GP architecture and its characteristics</title>
        <p>
          The AC-WGAN-GP architecture includes a generator (G), discriminator (D), and auxiliary
classifier (C). The generator receives Gaussian noise, spectral features (e.g., PCA), and class labels
(one-hot or embedded) to produce synthetic spectral samples. Its structure comprises 1D
transposed convolutions (Deconv1D) with ReLU activations and a final Tanh layer, using batch
normalization for training stability [
          <xref ref-type="bibr" rid="ref25 ref31">25, 31</xref>
          ]. Related research on the detection of synthetic visual
content highlights parallels between hyperspectral image generation and deepfake detection. For
instance, recent studies have explored neural network-based systems for identifying biometric
image manipulations [32, 33], demonstrating architectural strategies that can be adapted to
improve the robustness of GAN-based HSI generation and classification.
        </p>
        <p>Figure 1 shows the data flow and interaction among components, each optimized within a
unified training framework.
The discriminator, built from 1D Conv layers with Leaky-ReLU activations, evaluates sample
authenticity. It outputs a linear value and uses Wasserstein loss with gradient penalty, ensuring
stable training under distribution shifts [26].</p>
        <p>
          The auxiliary classifier performs multi-class classification on both real and generated samples. It
consists of a Conv layer, flattening, and a fully connected Softmax output. Batch normalization is
excluded to preserve spectral sensitivity. It also guides the generator to produce correctly labeled
data [
          <xref ref-type="bibr" rid="ref20 ref34">20, 34</xref>
          ].
        </p>
        <p>
          Together, G, D, and C form a closed feedback loop: the generator is informed by both
discriminator and classifier, promoting realism and class accuracy—crucial for imbalanced or
overlapping spectral classes [
          <xref ref-type="bibr" rid="ref15 ref18 ref35">15, 18, 35</xref>
          ].
        </p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Training process and loss functions</title>
        <p>
          The AC-WGAN-GP model trains the generator (G), discriminator (D), and auxiliary classifier (C) in
an alternating fashion to ensure stable and controlled generation. Batch Normalization (BN)
accelerates convergence, while Gradient Penalty (GP) enforces Lipschitz continuity for stable
training [
          <xref ref-type="bibr" rid="ref25 ref26">25, 26</xref>
          ].
        </p>
        <p>The discriminator uses the Wasserstein loss with GP:</p>
        <p>L_D = \mathbb{E}_{\tilde{x} \sim p_g}[D(\tilde{x})] - \mathbb{E}_{x \sim p(x)}[D(x)] + \lambda \mathbb{E}_{\hat{x} \sim p(\hat{x})}[(\|\nabla_{\hat{x}} D(\hat{x})\|_2 - 1)^2].  (1)</p>
        <p>This balances the Wasserstein distance and gradient regularization to prevent mode collapse.
The generator minimizes:</p>
        <p>L_G = -\mathbb{E}_{\tilde{x} \sim p_g}[D(\tilde{x})] - \mathbb{E}[\log p(C = c \mid \tilde{x})],  (2)</p>
        <p>combining realism (discriminator) and class consistency (classifier).</p>
        <p>The classifier is trained on both real and synthetic data:</p>
        <p>L_C = \mathbb{E}[\log p(C = c \mid x)] + \mathbb{E}[\log p(C = c \mid \tilde{x})],  (3)</p>
        <p>ensuring inter-class discrimination even under spectral overlap and imbalance.</p>
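        <p>The gradient penalty term of the discriminator loss can be sketched as follows. This is an illustrative NumPy version in which the critic gradient is supplied analytically; in the paper's TensorFlow setup it would come from automatic differentiation. Function and variable names are our own:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def gradient_penalty_term(d_grad, x_real, x_fake, lam=10.0):
    """Sketch of the WGAN-GP penalty for a batch of 1D spectra.

    d_grad(x) must return dD/dx evaluated at x (here supplied analytically;
    in practice an autodiff framework provides it).
    """
    eps = rng.uniform(size=(x_real.shape[0], 1))    # per-sample mixing weight
    x_hat = eps * x_real + (1.0 - eps) * x_fake     # interpolate real and fake samples
    grads = d_grad(x_hat)                           # gradient of the critic at x_hat
    norms = np.linalg.norm(grads, axis=1)           # L2 norm of the gradient per sample
    return lam * np.mean((norms - 1.0) ** 2)        # lambda * E[(||grad|| - 1)^2]
```

      <p>For a toy critic D(x) = sum(x), the gradient is a vector of ones, so the penalty depends only on the band count, which makes the term easy to sanity-check.</p>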
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Hyperspectral image processing using AC-WGAN-GP</title>
      <p>
        HSI processing faces the “curse of dimensionality”: hundreds of spectral channels increase training
complexity and risk overfitting, especially with limited labeled data [
        <xref ref-type="bibr" rid="ref15 ref25">15, 25</xref>
        ].
      </p>
      <p>
        AC-WGAN-GP addresses this by generating synthetic samples that preserve spectral and
semantic class properties. Conditional generation via class labels and classifier guidance improves
data diversity and reduces overfitting [
        <xref ref-type="bibr" rid="ref18 ref20 ref26">18, 20, 26</xref>
        ]. Labeled HSI data is scarce due to costly expert
annotation, leading to class imbalance. AC-WGAN-GP augments datasets, particularly rare
classes, improving balance and training efficacy [
        <xref ref-type="bibr" rid="ref15 ref27">15, 27</xref>
        ].
      </p>
      <p>
        Spectral overlap between classes causes classification ambiguity. The model maintains semantic
consistency through classifier feedback, enhancing discrimination [
        <xref ref-type="bibr" rid="ref14 ref25">14, 25</xref>
        ]. The improved
AC-WGAN-GP shows gains in average accuracy (AA) and Cohen’s kappa (κ) on benchmarks like
Indian Pines, Salinas, and Pavia University [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. Mode collapse is mitigated through gradient
penalty and auxiliary classification, ensuring diversity and training stability [26, 36].
      </p>
      <p>In summary, AC-WGAN-GP effectively processes hyperspectral data under limited-label
conditions, enhancing classification through synthetic, spectrally valid augmentation.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Problem formulation of hyperspectral image classification using AC-WGAN-GP</title>
      <p>
        In hyperspectral image classification tasks involving generative models, evaluation metrics play a
crucial role in objectively comparing model performance and quantifying improvements resulting
from architectural modifications. In this study, we employ three widely adopted metrics: Overall
Accuracy (OA), Average Accuracy (AA), and Cohen’s Kappa coefficient (κ), which are standard
in hyperspectral classification research [
        <xref ref-type="bibr" rid="ref18 ref20 ref25">18, 20, 25</xref>
        ].
      </p>
      <p>While global metrics such as OA, AA, and κ assess overall performance, per-class metrics
(Precision, Recall, and F1-score) reveal how well individual classes are classified.</p>
      <p>Overall Accuracy (OA) is a standard metric in HSI classification that measures the proportion of
correctly predicted samples among all test samples:</p>
      <p>OA = \frac{1}{N} \sum_{i=1}^{\bar{C}} h_{ii},  (4)</p>
      <p>where N is the total number of test samples, \bar{C} is the number of classes, and h_{ii} represents the
correctly classified samples of class i (the diagonal of the confusion matrix).</p>
      <p>Average Accuracy (AA) evaluates classification performance across all classes equally,
regardless of class size. It is calculated as the mean of per-class accuracies:</p>
      <p>AA = \frac{1}{\bar{C}} \sum_{i=1}^{\bar{C}} \frac{h_{ii}}{N_i},  (5)</p>
      <p>where \bar{C} is the number of classes, h_{ii} the correctly classified samples for class i, and N_i the total
number of test samples in class i.</p>
      <p>
        The Kappa coefficient (κ) measures agreement between predicted and true labels while
accounting for chance. Unlike OA, it reflects class distribution, making it suitable for imbalanced
datasets [
        <xref ref-type="bibr" rid="ref20 ref25">20, 25</xref>
        ]. It is computed as:
      </p>
      <p>\kappa = \frac{N \sum_{i=1}^{\bar{C}} h_{ii} - \sum_{i=1}^{\bar{C}} h_{i+} h_{+i}}{N^2 - \sum_{i=1}^{\bar{C}} h_{i+} h_{+i}},  (6)</p>
      <p>where N is the total number of test samples, \bar{C} the number of classes, h_{ii} the correct predictions,
h_{i+} the actual counts, and h_{+i} the predicted counts per class.</p>
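      <p>All three metrics follow directly from the confusion matrix; a minimal NumPy sketch (the function name and the matrix orientation, rows as true classes, are our choices):</p>

```python
import numpy as np

def hsi_metrics(conf):
    """Compute OA, AA, and Cohen's kappa from a confusion matrix.

    conf[i, j] = number of class-i test samples predicted as class j.
    """
    conf = np.asarray(conf, dtype=float)
    N = conf.sum()                          # total test samples
    diag = np.diag(conf)                    # correctly classified per class (h_ii)
    oa = diag.sum() / N                     # overall accuracy
    aa = np.mean(diag / conf.sum(axis=1))   # mean of per-class accuracies
    # chance-corrected agreement: marginal product term sum(h_i+ * h_+i)
    marg = (conf.sum(axis=1) * conf.sum(axis=0)).sum()
    kappa = (N * diag.sum() - marg) / (N ** 2 - marg)
    return oa, aa, kappa
```

      <p>For a perfectly diagonal confusion matrix all three values equal 1; class imbalance pulls AA and kappa below OA, which is why the paper reports all three.</p>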
    </sec>
    <sec id="sec-7">
      <title>6. Proposed method</title>
      <p>The improved AC-WGAN-GP retains the classical conditional GAN structure comprising a
generator (G), a discriminator (D), and an auxiliary classifier (C), but introduces targeted
modifications to address class imbalance, spectral overlap, mode collapse, and training instability
(Figure 2).</p>
      <p>A revised training strategy complements the architecture. Conditional samples are generated
using class embeddings and PCA vectors, followed by clustering in spectral space—applied only to
real training data. Synthetic samples closest to the cluster centers (measured by cosine similarity)
are selected and merged with real data for classifier training.</p>
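      <p>The selection step above can be sketched as follows: assuming each class’s cluster center has been computed from real training spectra only, we keep the synthetic samples most similar (by cosine similarity) to their class center. Function names and the top_k parameter are illustrative assumptions:</p>

```python
import numpy as np

def select_synthetic(synth, labels, centers, top_k=2):
    """Per class, keep the top_k synthetic spectra closest (cosine similarity)
    to that class's cluster center; centers maps class id -> center vector."""
    keep = []
    for c, center in centers.items():
        idx = np.flatnonzero(labels == c)          # synthetic samples of class c
        if idx.size == 0:
            continue
        v = synth[idx]
        sims = (v @ center) / (np.linalg.norm(v, axis=1)
                               * np.linalg.norm(center) + 1e-12)
        keep.extend(idx[np.argsort(sims)[::-1][:top_k]])  # most similar first
    return np.sort(np.array(keep))
```

      <p>The retained indices are then merged with the real training set, as described above, before classifier training.</p>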
      <p>Crucially, all processing is confined to training data, avoiding test set leakage. This improves
evaluation rigor and reproducibility, ensuring fair performance assessment under realistic
constraints.</p>
      <sec id="sec-7-1">
        <title>6.1. Improved generator architecture G</title>
        <p>In the improved AC-WGAN-GP, the generator synthesizes conditional hyperspectral samples by
combining class and spectral information. The baseline design with Deconv1D layers, ReLU, and
batch normalization suffered from mode collapse and weak control via one-hot labels (Figure 3).</p>
        <p>The updated architecture incorporates a Class-aware Sampling and Label Embedding (CS+LE)
module, encoding labels into dense vectors and concatenating them with PCA features and
Gaussian noise. This richer input better captures class identity and spectral variation.</p>
        <p>To improve training stability and representation of minority classes, ResNet-style Deconv1D
blocks with skip connections were added. A cross-attention mechanism aligns label and spectral
embeddings with generator features, enhancing semantic coherence. Spectral Dropout is applied in
intermediate layers to zero out entire spectral bands, improving robustness. The output is
generated via UpSampling1D and a Conv1D layer with Tanh activation, ensuring normalized
spectra. This architecture produces more diverse, class-consistent, and spectrally realistic samples.</p>
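        <p>Of these additions, Spectral Dropout is the simplest to illustrate: unlike standard dropout, it removes whole spectral bands at once. A minimal sketch, assuming samples are stored as (batch, bands) arrays:</p>

```python
import numpy as np

def spectral_dropout(x, drop_prob=0.1, rng=None):
    """Zero out entire spectral bands (columns) rather than individual values,
    so the model cannot rely on any single band. x: (batch, bands)."""
    if rng is None:
        rng = np.random.default_rng()
    mask = rng.uniform(size=x.shape[1]) >= drop_prob  # one keep/drop draw per band
    return x * mask                                   # broadcast band mask over batch
```

        <p>During training this forces the generator to spread class-discriminative information across bands, improving robustness to missing or noisy channels.</p>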
      </sec>
      <sec id="sec-7-2">
        <title>6.2. Improved discriminator architecture D</title>
        <p>In AC-WGAN-GP, the discriminator assesses how closely generated spectra resemble real
hyperspectral data and provides feedback to the generator. The initial version used Conv1D layers
with LeakyReLU and batch normalization, but the latter conflicts with WGAN-GP’s requirement for
sample independence, causing instability (Figure 4).</p>
        <p>To address this, batch normalization was replaced with LayerNorm, which operates per sample
and ensures stable training with gradient penalty. Each Conv1D layer is followed by LeakyReLU
and LayerNorm for consistent processing. To mitigate mode collapse, a Minibatch Discrimination
layer was added to detect similarity across samples, encouraging output diversity. The final output
is a scalar critic score from Flatten and Dense(1), as required by the WGAN formulation. Overall,
these changes improve training stability, prevent mode collapse, and enhance the model’s ability to
distinguish synthetic from real spectra.</p>
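        <p>The intuition behind minibatch discrimination can be sketched with a simplified statistic (the original layer uses learned projection tensors; operating on raw features here is an assumption for illustration). Each sample is augmented with a measure of how close it sits to the rest of the batch, so a collapsed batch of near-identical samples becomes easy for the critic to flag:</p>

```python
import numpy as np

def minibatch_similarity(features):
    """Minibatch-discrimination style statistic: for each sample, the summed
    negative-exponential L1 distance to every other sample in the batch.
    High values indicate a low-diversity (possibly mode-collapsed) batch."""
    f = np.asarray(features, dtype=float)
    l1 = np.abs(f[:, None, :] - f[None, :, :]).sum(axis=2)  # pairwise L1 distances
    sim = np.exp(-l1)
    return sim.sum(axis=1) - 1.0   # exclude self-similarity exp(0) = 1
```

        <p>Concatenating this statistic to the critic’s features gives the discriminator a batch-level signal that penalizes a generator producing near-duplicate spectra.</p>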
      </sec>
      <sec id="sec-7-3">
        <title>6.3. Improved architecture of the classifier C</title>
        <p>The auxiliary classifier C in AC-WGAN-GP predicts class labels for real and synthetic samples and
guides the generator. The initial design with a single Conv1D layer and Softmax output was too
shallow to handle spectral similarity and rare classes (Figure 5).
To improve performance, the classifier was deepened with three Conv1D layers followed by ReLU
and batch normalization. Class labels are passed as dense embeddings, encoding semantic
relationships. Extracted features are flattened and concatenated with the label embedding, then
processed by a dense layer (256 units) used for both classification and contrastive loss. This
enhances class separation while maintaining intra-class compactness.</p>
      </sec>
      <sec id="sec-7-4">
        <title>6.4. Loss functions of the improved AC-WGAN-GP</title>
        <p>The improved AC-WGAN-GP architecture employs a multi-component loss formulation to enable
efficient and stable training across all network modules. Each component of the loss not only
incorporates core adversarial objectives common to classical GANs but also introduces
domain-specific terms tailored to the challenges of hyperspectral classification.</p>
      </sec>
      <sec id="sec-7-5">
        <title>6.4.1. Generator loss</title>
        <p>Unlike in traditional GANs, the generator in AC-WGAN-GP is optimized not only through
adversarial feedback from the discriminator but also by enforcing alignment with class conditions
and spectral context.</p>
        <p>The basic Wasserstein loss component for the generator is given by:</p>
        <p>L_{WGAN} = -\mathbb{E}_{z,c}[D(G(z, c))],  (7)</p>
        <p>where z denotes the latent noise vector; c is the conditional class label; G(z, c) is the generated
spectral sample; and D(G(z, c)) is the “realism” score assigned by the discriminator.</p>
        <p>1. Cosine Similarity with PCA Vectors (Cosine PCA Loss) ensures that the generated spectrum
aligns with the average PCA vector of its target class:</p>
        <p>L_{PCA} = \mathbb{E}[1 - \cos(\hat{x}, x_{PCA})].  (8)</p>
        <p>2. Cosine Alignment Loss enforces the classifier’s internal feature representation f to align
with the class embedding vector e:</p>
        <p>L_{align} = \mathbb{E}[1 - \cos(f, e)].  (9)</p>
        <p>3. Categorical Cross-Entropy penalizes the generator if the classifier fails to recognize the
correct class of a generated sample:</p>
        <p>L_{ce} = \mathbb{E}_{z,c}[-\log P_{cls}(c \mid G(z, c))].  (10)</p>
        <p>The full generator loss is then defined as:</p>
        <p>L_G = L_{WGAN} + \lambda_{PCA} \cdot L_{PCA} + \lambda_{align} \cdot L_{align} + \lambda_{ce} \cdot L_{ce},  (11)</p>
        <p>where \lambda_{PCA}, \lambda_{align}, and \lambda_{ce} are weighting coefficients that control the contribution of each loss
component. These are tuned empirically based on data characteristics, class imbalance, and desired
classification performance.</p>
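        <p>For a single generated sample, the four generator-loss components combine as follows; a minimal NumPy sketch with unit weighting coefficients (all variable names are our own):</p>

```python
import numpy as np

def cosine(a, b, eps=1e-12):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def generator_loss(d_score, x_hat, x_pca, feat, emb, p_cls_c,
                   w_pca=1.0, w_align=1.0, w_ce=1.0):
    """Single-sample sketch of the combined generator loss:
    Wasserstein term + cosine-PCA + cosine-alignment + cross-entropy."""
    l_wgan = -d_score                    # -D(G(z, c)): reward realistic samples
    l_pca = 1.0 - cosine(x_hat, x_pca)   # pull spectrum toward class PCA vector
    l_align = 1.0 - cosine(feat, emb)    # align classifier feature with class embedding
    l_ce = -np.log(p_cls_c + 1e-12)      # penalize a misclassified generated sample
    return l_wgan + w_pca * l_pca + w_align * l_align + w_ce * l_ce
```

        <p>When the sample is realistic, spectrally aligned, and correctly classified, the three regularizers vanish and only the Wasserstein term remains, which matches the role of the weighting coefficients described above.</p>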
      </sec>
      <sec id="sec-7-6">
        <title>6.4.2. Discriminator loss</title>
        <p>In the AC-WGAN-GP framework, the discriminator functions as a critic that estimates the
divergence between real and generated spectral samples. Unlike in classical GANs, where the
discriminator performs binary classification, the WGAN formulation approximates the Wasserstein
distance between real and synthetic distributions. The discriminator loss is defined as:</p>
        <p>L_D = \mathbb{E}_{\tilde{x} \sim p_g}[D(\tilde{x})] - \mathbb{E}_{x \sim p_{data}}[D(x)] + \lambda \cdot L_{gp},  (12)</p>
        <p>L_{gp} = \mathbb{E}_{\hat{x} \sim p(\hat{x})}[(\|\nabla_{\hat{x}} D(\hat{x})\|_2 - 1)^2],  (13)</p>
        <p>where p_g is the distribution of generated samples (from the generator); p_{data} is the distribution
of real training samples; \tilde{x} = G(z, c) is a generated spectrum; x is a real spectral sample; \hat{x} is a linear
interpolation between x and \tilde{x}; and \lambda is a hyperparameter controlling the weight of the gradient
penalty term L_{gp}, which ensures 1-Lipschitz continuity.</p>
      </sec>
      <sec id="sec-7-7">
        <title>6.4.3. Classifier loss</title>
        <p>The auxiliary classifier in AC-WGAN-GP is responsible for both class prediction and learning
discriminative features for regularization. Its loss function comprises several components aimed at
maximizing classification accuracy while structuring the feature space.</p>
        <p>1. Categorical Cross-Entropy (Class-Weighted). This standard classification loss is weighted to
compensate for class imbalance:</p>
        <p>L_{ce} = -\mathbb{E}_{x,y}[\omega_y \cdot \log P_{cls}(y \mid x)],  (14)</p>
        <p>where x is the spectral sample, y is the true class label, P_{cls}(y \mid x) is the predicted probability,
and \omega_y is the inverse class frequency weight.</p>
        <p>2. Contrastive Loss. This term promotes closeness of features from the same class and separation
between features from different classes:</p>
        <p>L_{contrast} = \begin{cases} \|f_i - f_j\|^2, &amp; \text{if } y_i = y_j \\ \max(0, \delta - \|f_i - f_j\|)^2, &amp; \text{if } y_i \neq y_j \end{cases}  (15)</p>
        <p>where f_i, f_j are feature vectors and \delta is a margin parameter.</p>
        <p>3. Cosine Alignment Loss. Aligns the feature vector with the corresponding class embedding:</p>
        <p>L_{align} = \mathbb{E}_{x,y}[1 - \cos(f(x), e_y)],  (16)</p>
        <p>where f(x) is the feature vector from the classifier and e_y is the embedding of class y.</p>
        <p>4. Embedding Divergence Loss. Regularizes class embeddings to prevent their collapse in latent
space:</p>
        <p>L_{div} = \sum_{i \neq j} \frac{1}{\|e_i - e_j\|^2 + \varepsilon},  (17)</p>
        <p>where e_i, e_j are embeddings of different classes, and \varepsilon is a small positive constant to avoid
division by zero.</p>
        <p>The total classifier loss is:</p>
        <p>L_C = L_{ce} + \lambda_{contrast} \cdot L_{contrast} + \lambda_{align} \cdot L_{align} + \lambda_{div} \cdot L_{div},  (18)</p>
        <p>where \lambda_{contrast}, \lambda_{align}, and \lambda_{div} are hyperparameters controlling the contribution of each
regularization component. These are selected empirically based on task complexity and class
imbalance.</p>
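        <p>The two less common regularizers, the contrastive loss and the embedding divergence, can be sketched as follows. We use the standard pairwise contrastive formulation with margin delta (an assumption consistent with the description above); names are illustrative:</p>

```python
import numpy as np

def contrastive_loss(f_i, f_j, same_class, delta=1.0):
    """Pull same-class features together; push different-class features
    at least delta apart (standard contrastive formulation)."""
    d = np.linalg.norm(f_i - f_j)
    return d ** 2 if same_class else max(0.0, delta - d) ** 2

def embedding_divergence(embeddings, eps=1e-8):
    """Penalize class embeddings that collapse onto each other:
    sum over ordered pairs of 1 / (||e_i - e_j||^2 + eps)."""
    e = np.asarray(embeddings, dtype=float)
    total = 0.0
    for i in range(len(e)):
        for j in range(len(e)):
            if i != j:
                total += 1.0 / (np.sum((e[i] - e[j]) ** 2) + eps)
    return total
```

        <p>Well-separated embeddings keep the divergence term small, while near-identical ones blow it up, which is exactly the collapse the regularizer is meant to prevent.</p>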
      </sec>
    </sec>
    <sec id="sec-8">
      <title>7. Results</title>
      <sec id="sec-8-1">
        <title>7.1. Experimental setup and execution specifics</title>
        <p>All experiments were conducted using PyCharm Community Edition 2024.3.4 with Python 3.9 and
the TensorFlow 2.19.0 framework. The development environment ran on Windows 10 on a local
machine. Synthetic samples were generated in online mode without being saved to disk, reducing
memory usage and preventing data duplication. Spectral vectors were reduced to 30 components
using Principal Component Analysis (PCA). The training and test splits were performed with strict
class separation, eliminating potential data leakage.</p>
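        <p>The dimensionality-reduction step can be sketched in plain NumPy (the actual pipeline presumably used a library implementation such as scikit-learn; that, and the function name, are assumptions):</p>

```python
import numpy as np

def pca_reduce(x, n_components=30):
    """Project spectra onto the top principal components.

    x: (n_samples, n_bands) -> (n_samples, n_components).
    """
    xc = x - x.mean(axis=0)                       # center each spectral band
    # principal axes via SVD of the centered data (rows of vt, variance-ordered)
    _, _, vt = np.linalg.svd(xc, full_matrices=False)
    return xc @ vt[:n_components].T
```

        <p>To avoid leakage, the projection should be fitted on the training split only and then applied to the test spectra, consistent with the strict split described above.</p>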
      </sec>
      <sec id="sec-8-2">
        <title>7.2. Analysis of incremental improvements in AC-WGAN-GP</title>
      </sec>
      <sec id="sec-8-3">
        <title>7.3. Analysis of results at different training set sizes</title>
        <p>The classification performance was evaluated on three benchmark HSI datasets—Salinas, Indian
Pines, and KSC—at varying training ratios. As shown in Table 2, all datasets demonstrate clear
improvement in OA, AA, and κ with increased training size. Prediction maps for the studied datasets
are shown in Figures 6–8.</p>
        <p>Per-class F1 analysis reveals that classes with stable and well-separated spectral signatures (e.g.,
Stubble, Woods, Water) achieve F1 scores above 90–95%. In contrast, classes with limited samples or
high spectral overlap (e.g., Oats, Corn, Oak Forest) show lower F1 scores (40–70%).</p>
        <p>The improved model achieves reliable classification even under extreme label scarcity,
achieving over 90% OA on the Salinas and KSC datasets using only 5% of labeled data, and over 68%
on Indian Pines—one of the most challenging datasets.</p>
      </sec>
    </sec>
    <sec id="sec-9">
      <title>Conclusions</title>
      <p>This study presented an enhanced AC-WGAN-GP architecture for conditional hyperspectral
sample generation, addressing class imbalance, spectral overlap, and training instability.
Improvements include class-aware sampling, label embeddings, PCA-based conditioning, ResNet
deconvolutions, cross-attention, and spectral dropout in the generator; layer normalization and
minibatch discrimination in the discriminator; and weighted categorical, contrastive, and
embedding divergence losses in the classifier.</p>
      <p>The method demonstrated strong performance under limited data and imbalanced class
distributions, with all evaluations performed without test data leakage. Future research will extend
this approach to 3D hyperspectral data and semantic segmentation tasks.</p>
    </sec>
    <sec id="sec-10">
      <title>Declaration on Generative AI</title>
      <p>While preparing this work, the authors used the AI programs Grammarly Pro to correct text
grammar and Strike Plagiarism to search for possible plagiarism. After using these tools, the authors
reviewed and edited the content as needed and take full responsibility for the publication’s content.</p>
      <p>[26] J. Feng, N. Zhao, R. Shang, X. Zhang, L. Jiao, Self-Supervised Divide-and-Conquer Generative
Adversarial Network for Classification of Hyperspectral Images, IEEE Transactions on
Geoscience and Remote Sensing 60 (2022) 1–17. doi:10.1109/TGRS.2022.3202908
[27] Y. Lu, D. Chen, E. Olaniyi, Y. Huang, GANs for Image Augmentation in Agriculture: A
Systematic Review, Computers and Electronics in Agriculture 200 (2022). doi:10.1016/j.compag.2022.107208
[28] M. Zgurovsky, V. Sineglazov, E. Chumachenko, Formation of Hybrid Artificial Neural
Networks Topologies, in: Artificial Intelligence Systems Based on Hybrid Neural Networks,
volume 904 of Studies in Computational Intelligence, Springer, Cham, 2021. doi:10.1007/978-3-030-48453-8_3
[29] V. M. Sineglazov, K. D. Riazanovskiy, O. I. Chumachenko, Multicriteria Conditional
Optimization based on Genetic Algorithms, System Research and Information Technologies
(2020). doi:10.20535/SRIT.2308-8893.2020.3.07
[30] A. V. Iatsyshyn, et al., Application of Augmented Reality Technologies for Education Projects
Preparation, in: Cloud Technologies in Education, 2643, 2020, 134–160.
[31] M. Zaliskyi, R. Odarchenko, S. Gnatyuk, Y. Petrova, A. Chaplits, Method of Traffic Monitoring
for DDoS Attacks Detection in e-Health Systems and Networks, in: Informatics &amp; Data-Driven
Medicine, 2255, 2018, 193–204.
[32] V. Dudykevych, H. Mykytyn, K. Ruda, The Concept of a Deepfake Detection System of
Biometric Image Modifications based on Neural Networks, in: 2022 IEEE 3rd KhPI Week on
Advanced Technology: Conference Proceedings, 2022, 585–588.
[33] V. Dudykevych, S. Yevseiev, G. Mykytyn, K. Ruda, H. Hulak, Detecting Deepfake
Modifications of Biometric Images using Neural Networks, in: Cybersecurity Providing in
Information and Telecommunication Systems, 3654, 2024, 391–397.
[34] V. Kharchenko, I. Chyrka, Detection of Airplanes on the Ground using YOLO Neural Network,
in: Int. Conf. on Mathematical Methods in Electromagnetic Theory, 2018, 294–297. doi:10.1109/MMET.2018.8460392
[35] O. Zaporozhets, V. Isaienko, K. Synylo, Trends on Current and Forecasted Aircraft Hybrid
Electric Architectures and Their Impact on Environment, Energy 211 (2020). doi:10.1016/j.energy.2020.118814
[36] Z. Hu, Y. Khokhlachova, V. Sydorenko, I. Opirskyy, Method for Optimization of Information
Security Systems Behavior under Conditions of Influences, Int. J. Intell. Syst. Appl. 9 (12)
(2017) 46–58. doi:10.5815/ijisa.2017.12.05</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Bioucas-Dias</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Plaza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Camps-Valls</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Scheunders</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Nasrabadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Chanussot</surname>
          </string-name>
          ,
          <source>Hyperspectral Remote Sensing Data Analysis and future Challenges</source>
          ,
          <source>IEEE Geoscience and Remote Sensing Magazine</source>
          <volume>1</volume>
          (
          <year>2013</year>
          ). doi:
          <volume>10</volume>
          .1109/MGRS.
          <year>2013</year>
          .2244672
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>P.</given-names>
            <surname>Ghamisi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Plaza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Plaza</surname>
          </string-name>
          ,
          <article-title>Advanced Spectral Classifiers for Hyperspectral Images: A Review</article-title>
          ,
          <source>IEEE Geoscience and Remote Sensing Magazine</source>
          <volume>5</volume>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>X. X.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Tuia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Mou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.-S.</given-names>
            <surname>Xia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Fraundorfer</surname>
          </string-name>
          ,
          <article-title>Deep Learning in Remote Sensing: A Comprehensive Review and List of Resources</article-title>
          ,
          <source>IEEE Geoscience and Remote Sensing Magazine</source>
          <volume>5</volume>
          (
          <year>2017</year>
          ). doi:
          <volume>10</volume>
          .1109/MGRS.
          <year>2017</year>
          .2762307
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Song</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Fang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Ghamisi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Benediktsson</surname>
          </string-name>
          ,
          <article-title>Deep Learning for Hyperspectral Image Classification: An Overview</article-title>
          ,
          <source>IEEE Transactions on Geoscience and Remote Sensing</source>
          <volume>57</volume>
          (
          <year>2019</year>
          )
          <fpage>6690</fpage>
          -
          <lpage>6709</lpage>
          . doi:10.1109/TGRS.2019.2907932
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>B.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. D.</given-names>
            <surname>Dao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Shang</surname>
          </string-name>
          ,
          <article-title>Recent Advances of Hyperspectral Imaging Technology and Applications in Agriculture</article-title>
          ,
          <source>Remote Sensing</source>
          <volume>12</volume>
          (
          <year>2020</year>
          )
          <fpage>2659</fpage>
          . doi:10.3390/rs12162659
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>B. G.</given-names>
            <surname>Ram</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Oduor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Igathinathane</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Howatt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <article-title>A Systematic Review of Hyperspectral Imaging in Precision Agriculture: Analysis of Its Current State and Future Prospects</article-title>
          ,
          <source>Comput. Electron. Agriculture</source>
          <volume>222</volume>
          (
          <year>2024</year>
          ). doi:10.1016/j.compag.2024.109037
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>H.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Kong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Hou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>A Critical Review on Applications of Hyperspectral Remote Sensing in Crop Monitoring</article-title>
          ,
          <source>Experimental Agriculture</source>
          <volume>58</volume>
          (
          <year>2022</year>
          ). doi:10.1017/S0014479722000278
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>V.</given-names>
            <surname>Sineglazov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kot</surname>
          </string-name>
          ,
          <article-title>Design of Hybrid Neural Networks of the Ensemble Structure</article-title>
          ,
          <source>Eastern-European J. Enterprise Technol.</source>
          <volume>1</volume>
          (
          <year>2021</year>
          )
          <fpage>31</fpage>
          -
          <lpage>45</lpage>
          . doi:10.15587/1729-4061.2021.225301
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.</given-names>
            <surname>Zgurovsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Sineglazov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Chumachenko</surname>
          </string-name>
          ,
          <article-title>Classification and Analysis of Topologies of Known Artificial Neurons and Neural Networks</article-title>
          ,
          <source>in: Artificial Intelligence Systems Based on Hybrid Neural Networks</source>
          , vol.
          <volume>904</volume>
          , Springer, Cham,
          <year>2021</year>
          . doi:10.1007/978-3-030-48453-8_1
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>I.</given-names>
            <surname>Goodfellow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Pouget-Abadie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mirza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Warde-Farley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ozair</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Courville</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bengio</surname>
          </string-name>
          ,
          <article-title>Generative Adversarial Nets</article-title>
          ,
          <source>in: Advances in Neural Information Processing Systems</source>
          ,
          <volume>27</volume>
          ,
          <year>2014</year>
          . doi:10.48550/arXiv.1406.2661
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A.</given-names>
            <surname>Odena</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Olah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Shlens</surname>
          </string-name>
          ,
          <article-title>Conditional Image Synthesis with Auxiliary Classifier GANs</article-title>
          , arXiv,
          <year>2016</year>
          . doi:10.48550/arXiv.1610.09585
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>I.</given-names>
            <surname>Gulrajani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Ahmed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Arjovsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Dumoulin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. C.</given-names>
            <surname>Courville</surname>
          </string-name>
          ,
          <article-title>Improved Training of Wasserstein GANs</article-title>
          ,
          <source>in: Advances in Neural Information Processing Systems</source>
          ,
          <volume>30</volume>
          ,
          <year>2017</year>
          ,
          <fpage>5767</fpage>
          -
          <lpage>5777</lpage>
          . doi:10.48550/arXiv.1704.00028
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>M.</given-names>
            <surname>Arjovsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chintala</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bottou</surname>
          </string-name>
          ,
          <article-title>Wasserstein GAN</article-title>
          , arXiv,
          <year>2017</year>
          . doi:10.48550/arXiv.1701.07875
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>A.</given-names>
            <surname>Abbas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Jain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gour</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Vankudothu</surname>
          </string-name>
          ,
          <article-title>Tomato Plant Disease Detection using Transfer Learning with C-GAN Synthetic Images</article-title>
          ,
          <source>Comput. Electron. Agriculture</source>
          <volume>187</volume>
          (
          <year>2021</year>
          ). doi:10.1016/j.compag.2021.106279
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>C.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Meng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Cao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <article-title>AC-WGAN-GP: Generating Labeled Samples for Improving Hyperspectral Image Classification with Small-Samples</article-title>
          ,
          <source>Remote Sensing</source>
          <volume>14</volume>
          (
          <year>2022</year>
          )
          <fpage>4910</fpage>
          . doi:10.3390/rs14194910
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>A.</given-names>
            <surname>Radford</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Metz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chintala</surname>
          </string-name>
          ,
          <article-title>Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks</article-title>
          , arXiv,
          <year>2016</year>
          . doi:10.48550/arXiv.1511.06434
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <article-title>Semisupervised Hyperspectral Image Classification based on Generative Adversarial Networks</article-title>
          ,
          <source>IEEE Geoscience and Remote Sensing Letters</source>
          <volume>15</volume>
          (
          <year>2018</year>
          )
          <fpage>212</fpage>
          -
          <lpage>216</lpage>
          . doi:10.1109/LGRS.2017.2780890
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>Z.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Xia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Ghamisi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Fan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Zu</surname>
          </string-name>
          ,
          <article-title>HyperViTGAN: Semisupervised GAN with Transformer for HSI Classification</article-title>
          ,
          <source>IEEE J. Selected Topics Appl. Earth Observations Remote Sensing</source>
          <volume>15</volume>
          (
          <year>2022</year>
          ). doi:10.1109/JSTARS.2022.3192127
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>M.</given-names>
            <surname>Zgurovsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Sineglazov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Chumachenko</surname>
          </string-name>
          ,
          <article-title>Classification and Analysis of Multicriteria Optimization Methods</article-title>
          ,
          <source>in: Artificial Intelligence Systems Based on Hybrid Neural Networks</source>
          ,
          <volume>904</volume>
          ,
          <year>2021</year>
          . doi:10.1007/978-3-030-48453-8_2
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>Limited Agricultural Spectral Dataset Expansion based on Generative Adversarial Networks</article-title>
          ,
          <source>Comput. Electron. Agriculture</source>
          <volume>215</volume>
          (
          <year>2023</year>
          ). doi:10.1016/j.compag.2023.108385
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>V.</given-names>
            <surname>Sokolov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Skladannyi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Platonenko</surname>
          </string-name>
          ,
          <article-title>Video Channel Suppression Method of Unmanned Aerial Vehicles</article-title>
          ,
          <source>in: IEEE 41st Int. Conf. on Electronics and Nanotechnology</source>
          (
          <year>2022</year>
          )
          <fpage>473</fpage>
          -
          <lpage>477</lpage>
          . doi:10.1109/ELNANO54667.2022.9927105
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>P.</given-names>
            <surname>Skladannyi</surname>
          </string-name>
          , et al.,
          <article-title>Adaptive Methods for Embedding Digital Watermarks to Protect Audio and Video Images in Information and Communication Systems</article-title>
          ,
          <source>in: Classic, Quantum, and Post-Quantum Cryptography</source>
          , vol.
          <volume>4016</volume>
          ,
          <year>2025</year>
          ,
          <fpage>13</fpage>
          -
          <lpage>31</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Kostiuk</surname>
          </string-name>
          , et al.,
          <article-title>Application of Statistical and Neural Network Algorithms in Steganographic Synthesis and Analysis of Hidden Information in Audio and Graphic Files</article-title>
          ,
          <source>in: Classic, Quantum, and Post-Quantum Cryptography</source>
          , vol.
          <volume>4016</volume>
          ,
          <year>2025</year>
          ,
          <fpage>45</fpage>
          -
          <lpage>65</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>N.</given-names>
            <surname>Dovzhenko</surname>
          </string-name>
          , et al.,
          <article-title>Research of UAV and Sensor Network Integration Features for Routing Optimization and Energy Consumption Reduction</article-title>
          ,
          <source>in: Cybersecurity Providing in Information and Telecommunication Systems II</source>
          , vol.
          <volume>3826</volume>
          (
          <year>2024</year>
          )
          <fpage>236</fpage>
          -
          <lpage>241</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Ghamisi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Benediktsson</surname>
          </string-name>
          ,
          <article-title>Generative Adversarial Networks for Hyperspectral Image Classification</article-title>
          ,
          <source>IEEE Transactions on Geoscience and Remote Sensing</source>
          ,
          <volume>56</volume>
          (
          <year>2018</year>
          )
          <fpage>5046</fpage>
          -
          <lpage>5063</lpage>
          . doi:10.1109/TGRS.2018.2805286
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>