<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Discriminator-Guided Unlearning: A Framework for Selective Forgetting in Conditional GANs</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Byeongcheon Lee</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sangmin Kim</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sungwoo Park</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Seungmin Rho</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mi Young Lee</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Industrial Security, Chung-Ang University</institution>
          ,
          <addr-line>Seoul 06974</addr-line>
          ,
          <country>Republic of Korea</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Security Convergence, Chung-Ang University</institution>
          ,
          <addr-line>Seoul 06974</addr-line>
          ,
          <country>Republic of Korea</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
<p>The advancement of generative artificial intelligence (AI) has made machine unlearning essential for resolving data privacy and copyright issues. While retraining models from scratch is effective, it is computationally expensive, and fine-tuning, a common alternative, suffers from catastrophic forgetting. To overcome this problem, this study proposes a two-step framework for auxiliary classifier-based generative adversarial networks called "Discriminator-Guided Unlearning." Instead of directly targeting the generator, we intentionally weaken the ability of the discriminator to recognize specific classes. Feedback from this weakened discriminator guides the generator to avoid generating images of those classes. Experiments suggest that our framework achieves forgetting performance comparable to retrained models while maintaining the quality of the remaining classes, effectively mitigating "catastrophic forgetting." Our approach provides a promising direction for building trustworthy AI.</p>
      </abstract>
      <kwd-group>
        <kwd>Machine Unlearning</kwd>
        <kwd>Image Generation Model</kwd>
        <kwd>Generative Adversarial Networks</kwd>
        <kwd>Selective Forgetting</kwd>
        <kwd>Data Privacy</kwd>
        <kwd>AI Safety</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Generative artificial intelligence (AI) techniques, such as generative adversarial networks (GANs) [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ],
have achieved remarkable success in various computer vision domains [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ]. These generative AI
techniques require massive datasets for training to produce high-quality outputs. However, this poses
serious ethical and legal issues, as the models can indiscriminately learn and reproduce copyrighted
images, sensitive personal information, and biased data [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. These issues, in conjunction with the need
for regulations such as the "right to be forgotten" in the General Data Protection Regulation (GDPR),
are further increasing demand for "machine unlearning," the technique of removing the influence of
specific data from trained models [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>
        In machine unlearning, the simplest approach is to remove the data to be forgotten and then retrain the
model from scratch using the remaining dataset. While this approach has the advantage of completely
removing desired information, the enormous time and computational resources required for training
make it impractical [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. A more practical alternative approach is to fine-tune a trained model with the
remaining dataset, but this approach may lead to the "catastrophic forgetting" problem [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. This problem
not only removes target information but also loses knowledge about other data that the model needs to
retain, significantly degrading overall performance.
      </p>
      <p>
        Therefore, we propose a practical and effective selective unlearning framework that avoids both the
excessive cost of retraining and the performance degradation of fine-tuning. The proposed framework
is composed of a novel two-step method of “discriminator-guided unlearning," which is specialized for
auxiliary classifier-based GAN (ACGAN) [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. In the first step of the proposed framework, the ability of
a discriminator to recognize a particular class is intentionally weakened, and in the second step, this
‘confused’ discriminator is utilized to guide the generator to stop generating images of that class. The
main contributions of this study are as follows:
1. Unlike existing unlearning methods that focus on directly modifying the generator, we propose a
novel unlearning mechanism that induces confusion in the discriminator to prevent recognition
of specific classes and then uses its feedback to guide the generator away from producing images
of the target classes.
2. Through extensive experiments on representative image datasets including MNIST, FashionMNIST,
SVHN, and CIFAR-10, we show that the proposed framework can mitigate the catastrophic
forgetting problem by suppressing the generation of classes to be forgotten while maintaining
high-quality image generation for classes to be retained.
      </p>
      <p>The remainder of this paper is organized as follows. In Section 2, we introduce the theoretical background
of GANs and machine unlearning methods. In Section 3, we describe the details of the proposed
framework and then present experimental results in Section 4. Finally, in Section 5, we summarize our
study and present the conclusions.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Theoretical Background</title>
      <p>This section briefly reviews GANs and machine unlearning methods, which are the core technologies
of our research, and explains why we adopted ACGAN and a discriminator-based unlearning strategy.</p>
      <sec id="sec-2-1">
        <title>2.1. Generative Adversarial Networks (GANs)</title>
        <p>
          GANs, first proposed by Goodfellow [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ], utilize a competitive process of a generator, which produces a
synthetic image from noise, and a discriminator, which distinguishes between real and synthetic images.
While GANs have significantly advanced the field of generative modeling [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ], the primary structure of
a GAN is limited by the fact that it uses only random noise as input, making it difficult to control the
data generation process precisely. To overcome this limitation, conditional GANs (cGANs) [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] have
been proposed. cGANs leverage additional condition information, such as class labels, within both the
generator and discriminator to control the image generation process in a desired direction.
        </p>
        <p>
          In this study, we adopted the ACGAN [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] as the primary model for the proposed framework to further
maximize the advantages of cGAN. ACGAN is a structure that adds an auxiliary classifier to the cGAN,
which allows the discriminator to determine the authenticity of an image while also classifying the
generated image. As a result, both the quality of the generated image and the learning stability of
the GAN are greatly improved. Furthermore, since the core of our research is "discriminator-guided
unlearning," which aims to weaken the ability of a discriminator to distinguish between specific classes,
the ACGAN structure, with its explicit classification function within the discriminator, is well-suited to
our research objective.
        </p>
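<p>To make the two-headed structure concrete, the following is a minimal PyTorch sketch of an ACGAN-style discriminator; the convolutional backbone and layer sizes here are illustrative assumptions, not the ResNet and self-attention architecture used later in the experiments.</p>

```python
import torch
import torch.nn as nn

class ACGANDiscriminator(nn.Module):
    """Two-headed ACGAN discriminator: a source head for real/fake and an
    auxiliary class head, sharing one feature extractor (layer sizes are
    illustrative assumptions, not the paper's actual architecture)."""
    def __init__(self, img_channels=1, num_classes=10, feat_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(img_channels, 64, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, feat_dim, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.src_head = nn.Linear(feat_dim, 1)            # real/fake score
        self.cls_head = nn.Linear(feat_dim, num_classes)  # auxiliary classifier

    def forward(self, x):
        h = self.features(x)
        return self.src_head(h).squeeze(1), self.cls_head(h)

disc = ACGANDiscriminator()
src, cls = disc(torch.randn(8, 1, 32, 32))
print(src.shape, cls.shape)  # torch.Size([8]) torch.Size([8, 10])
```

<p>The shared feature extractor is what makes discriminator-side unlearning attractive: weakening the class head's judgment on one class directly alters the feedback the generator receives for that class.</p>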
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Machine Unlearning</title>
        <p>
          Machine unlearning research explores how to efficiently remove the influence of specific data from
trained models. As comprehensively surveyed by [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ], research has shifted to more efficient
"approximate unlearning" methodologies to overcome the practical limitations of retraining.
        </p>
        <p>
          Most of these methodologies focus on directly modifying the generator to induce forgetting. These
include improving the fine-tuning process via techniques like label reversal [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ], using
gradient-descent-based methodologies [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ], directly altering model parameters or features [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ], and utilizing knowledge
distillation [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]. This line of inquiry continues to evolve, with recent work by [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ], for instance,
focusing on unlearning specific identities from the latent space of a GAN. While many methods target
the generator, other components of the machine learning pipeline have also been leveraged. A different
line of work by [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ], for example, explores using an auxiliary discriminator trained for membership
inference as a guiding signal for unlearning in classification models.
        </p>
        <p>However, most of these methodologies risk degrading the performance of other classes when removing
information from the target class. Therefore, we propose a novel approach that allows for effective
unlearning while preserving the quality of generation for other classes by initially focusing on the
discriminator.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Proposed Framework</title>
      <p>In this paper, we propose a novel two-step selective unlearning framework for ACGAN models to
effectively remove a designated forgetting class c_f. The overall flow of the proposed framework is summarized
in Figure 1. In the first step, the goal is to “confuse” the discriminator (D_soft) so that it cannot clearly
distinguish the forgetting class (c_f). In the second step, we utilize the feedback from this confused
discriminator to guide the pre-trained original generator (G_orig) to no longer generate images of
class c_f.</p>
      <sec id="sec-3-1">
        <title>3.1. Step 1: Discriminator Soft Forgetting</title>
        <p>The first step of unlearning is the "soft forget" process described in the left panel of Figure 1. In this
step, a new generator (G_soft) and a new discriminator (D_soft) are trained together from scratch.
The main goal of this phase is to selectively weaken the ability of the discriminator to determine the
forgetting class c_f. To do this, the source discrimination loss is modified when a real image (x_f) that
belongs to the forgetting class c_f is input to the discriminator.</p>
        <p>
          In the ACGAN used in this experiment with hinge loss, the discriminator D is trained to output D_src(x) ≥ 1
for a real image x and D_src(G(z, c)) ≤ −1 for a fake image G(z, c). In the proposed framework, we
introduce a hyperparameter, a ‘soft target’ s ∈ [0, 1], to confuse the discriminator for real images
x_f of the forgetting class c_f. The loss function is a weighted sum that treats the image as ‘real’ with
probability s and ‘fake’ with probability (1 − s).
        </p>
        <p>Let x denote a real image of class c, x_f denote a real image of the forgetting class c_f, z denote
the latent vector, and D_src denote the source output of the discriminator. In the third term (the unlearning
term), if s = 1, the loss is equal to the standard hinge loss. If s = 0, the discriminator is trained to classify real
images of class c_f as fake. Through this process, we ultimately obtain a discriminator (D_soft) whose
judgment on class c_f is intentionally weakened.</p>
        <p>Mathematically, the total GAN loss for the discriminator, L_D,GAN, is expressed as follows:</p>
        <p>L_D,GAN = E_{x, c ≠ c_f}[ max(0, 1 − D_src(x)) ]   (real, non-forget classes)
+ E_{z, c′}[ max(0, 1 + D_src(G(z, c′))) ]   (fake, all classes)
+ E_{x_f}[ s · max(0, 1 − D_src(x_f)) + (1 − s) · max(0, 1 + D_src(x_f)) ]   (real, forget class)   (1)</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Step 2: Generator and Discriminator Final Fine-tuning</title>
        <p>The second step corresponds to the right panel of Figure 1. The goal of this step is to complete the
final training using the ‘confused’ discriminator (D_soft) obtained in step 1, and to ensure that the
pre-trained original generator (G_orig) no longer produces images of the forgetting class c_f.</p>
        <p>In this step, the loss function of the discriminator remains the same as in step 1, which means that
the discriminator continues to be trained in a confused state for class c_f. The key change is in the loss
function of the generator. When the generator attempts to generate an image of the forgetting class
c_f, it is penalized in the opposite direction to the standard GAN loss. The GAN loss for the generator,
L_G,GAN, is split into two parts:</p>
        <p>L_G,GAN = E_{z, c ≠ c_f}[ −D_src(G(z, c)) ]   (standard loss for normal classes)
+ E_z[ D_src(G(z, c_f)) ]   (penalty for forget class)   (2)</p>
        <p>The first term is the standard hinge generator loss, which trains the output D_src of the discriminator to be closer to
+1 (to look real) when the generator produces an image of a class other than c_f. The second term, on
the other hand, teaches the generator to minimize the output D_src of the discriminator for class c_f, i.e.,
to push it closer to −1 (to look more fake). Since D_soft already tends to give a low score for class c_f, the
generator will naturally avoid generating images of this class in the process of minimizing this penalty
term. The total loss of the generator is L_G = L_G,GAN + λ · L_G,cls, where L_G,cls is the auxiliary
classification loss of the ACGAN and λ is a weighting hyperparameter.</p>
      </sec>
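<p>To make the loss arithmetic of Eqs. (1) and (2) concrete, the following is a minimal numpy sketch; the function and variable names (the d_* score arrays and soft target s) are our own illustrative choices, and the snippet covers only the loss computation, not the training loop.</p>

```python
import numpy as np

def hinge_real(d):
    # max(0, 1 - D_src(x)), batch-averaged: real images should score at least +1
    return np.maximum(0.0, 1.0 - d).mean()

def hinge_fake(d):
    # max(0, 1 + D_src(.)), batch-averaged: fake images should score at most -1
    return np.maximum(0.0, 1.0 + d).mean()

def d_loss_step1(d_real_keep, d_fake, d_real_forget, s):
    # Eq. (1): retained-class reals as real, all fakes as fake, and forget-class
    # reals treated as real with weight s and as fake with weight (1 - s).
    return (hinge_real(d_real_keep)
            + hinge_fake(d_fake)
            + s * hinge_real(d_real_forget)
            + (1.0 - s) * hinge_fake(d_real_forget))

def g_loss_step2(d_fake_keep, d_fake_forget):
    # Eq. (2): standard hinge generator term for retained classes, plus a
    # penalty that pushes forget-class outputs toward the fake region.
    return -d_fake_keep.mean() + d_fake_forget.mean()

# Toy scores: with s = 0, real forget-class images are treated entirely as fake.
loss_d = d_loss_step1(np.array([1.5]), np.array([-1.5]), np.array([1.0]), s=0.0)
print(loss_d)  # 2.0
```

<p>Note how setting s = 1 recovers the ordinary hinge loss, while s = 0 fully inverts the label of real forget-class images, which is exactly the "confusion" mechanism described above.</p>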
    </sec>
    <sec id="sec-4">
      <title>4. Experimental Results</title>
      <p>This section details the experimental setup and results. In particular, we evaluate the
proposed framework from four aspects: qualitative results, quantitative results, runtime (efficiency), and attack-based
robustness.</p>
      <sec id="sec-4-1">
        <title>4.1. Experimental Setup</title>
        <p>
          Our experiments were conducted on the representative image generation benchmark datasets MNIST,
FashionMNIST, SVHN, and CIFAR-10. All experiments were performed on a system equipped with two
NVIDIA RTX 5000 Ada Generation GPUs. To ensure consistency and reliability, we utilized the same
ACGAN architecture in all experiments, which combines the structural advantages of ResNet [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ] with
self-attention [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ]. The main hyperparameters used in the experiments are summarized in Table 1.
        </p>
        <p>[Table 1: training epochs for the Original/Retrain, Step 1 (Soft Forgetting), and Step 2/Finetune phases on MNIST, FashionMNIST, SVHN, and CIFAR-10; the numeric values were not recoverable from the source layout.]</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Qualitative Results</title>
        <p>To visually analyze the effectiveness of the proposed discriminator-guided unlearning framework, we
compare images generated using a fixed noise vector for each model. Figure 2 shows these qualitative
results. Each row corresponds to a different dataset (MNIST, SVHN, Fashion-MNIST, CIFAR-10), and
each column represents the output of a different model: (a) Original, (b) Retrain, (c) Finetune, (d) our
proposed model. In all experiments, specific classes are targeted for forgetting (‘3’ in MNIST, ‘3’ in
SVHN, ‘coat’ in Fashion-MNIST, ‘cat’ in CIFAR-10).</p>
        <p>The results consistently suggest the efectiveness of the proposed framework. As shown in columns
(a) and (b), the original model successfully generates all classes, and the retrained model suppresses the
generation of forgotten classes while maintaining high quality for the remaining classes. In contrast,
the fine-tuned model (c) exhibited a "catastrophic forgetting" phenomenon. This phenomenon is most
evident in the complex CIFAR-10 dataset, where a degradation in image generation quality for the
remaining class is observed.</p>
        <p>The proposed framework (d) successfully unlearns the forgetting target class in all datasets, producing
images of that class with unrecognizable noise patterns. Moreover, the image quality of the remaining
nine classes is preserved at almost the same level as the retrained model (b). This consistency is
maintained regardless of the complexity of the data, from simple black and white numbers in MNIST to
complex color objects in CIFAR-10.</p>
        <p>In summary, the qualitative results demonstrate that the proposed framework selectively and
effectively removes the target class without "catastrophic forgetting."</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Quantitative Results</title>
        <p>For the quantitative evaluation, we use the Fréchet Inception Distance (FID), the Kernel Inception Distance (KID, built on the squared maximum mean discrepancy, MMD²), and the Inception Score (IS):</p>
        <p>FID(r, g) = ||μ_r − μ_g||² + Tr(Σ_r + Σ_g − 2(Σ_r Σ_g)^{1/2})   (3)</p>
        <p>MMD²(P, Q) = E_{x,x′∼P}[k(x, x′)] + E_{y,y′∼Q}[k(y, y′)] − 2 E_{x∼P, y′∼Q}[k(x, y′)]   (4)</p>
        <p>IS(G) = exp( E_{x∼p_g}[ KL( p(y|x) ‖ p(y) ) ] )   (5)</p>
        <p>
The interpretation of these metrics is nuanced depending on the evaluation scenario. In the retention
scenario, the standard interpretation holds, as the goal is to preserve high-quality generation for the
remaining classes. Conversely, in the forgetting scenario, the objective is to fail at generating the target
class; therefore, a higher FID/KID score is interpreted as more effective unlearning, as it signifies that
the generated outputs have successfully collapsed into a noise distribution far from the real data. The
results clearly show the superiority of our method in achieving a balance between effective forgetting
and knowledge preservation on all test datasets.</p>
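<p>As a sanity check on these definitions, the following numpy sketch computes a simplified FID (assuming diagonal covariances, so the matrix square root becomes elementwise; a full implementation would use a proper matrix square root such as scipy.linalg.sqrtm) and a biased RBF-kernel MMD² estimate. The function names are our own illustrative choices.</p>

```python
import numpy as np

def fid_diag(mu_r, var_r, mu_g, var_g):
    # Eq. (3) specialized to diagonal covariances: the trace term reduces to
    # the elementwise expression var_r + var_g - 2 * sqrt(var_r * var_g).
    return float(np.sum((mu_r - mu_g) ** 2)
                 + np.sum(var_r + var_g - 2.0 * np.sqrt(var_r * var_g)))

def mmd2_rbf(X, Y, gamma=1.0):
    # Eq. (4) with an RBF kernel k(a, b) = exp(-gamma * ||a - b||^2),
    # using the simple biased estimator (diagonal kernel terms included).
    def k(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * sq)
    return float(k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean())

# Identical distributions give zero distance under both metrics.
mu, var = np.zeros(4), np.ones(4)
print(fid_diag(mu, var, mu, var))  # 0.0
```

<p>Under the forgetting scenario described above, large values of these distances for the target class indicate that its generated samples have drifted far from the real data distribution.</p>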
        <p>In the forgetting scenario, which measures how well the target class is removed, our method achieves
a much more complete unlearning effect than all baseline models. The forgetting FID/KID scores are
significantly higher, indicating that the images generated for the target class have successfully collapsed
into unrecognizable noise. For example, on FashionMNIST, the forgetting FID (417.23) is significantly
higher than that of the fine-tuned model (131.30). More importantly, this strong forgetting ability does not lead
to "catastrophic forgetting." In the retention scenario, our framework consistently and significantly
outperforms the fine-tuning model. For MNIST and FashionMNIST, retention performance is better than
the retrained baseline model, suggesting a positive generalization effect from our unlearning process.
Even on the more complex CIFAR-10 dataset, performance remains competitive with the retrained
model.</p>
      </sec>
      <sec id="sec-4-4">
        <title>4.4. Execution Time</title>
        <p>In this section, to demonstrate efficiency, we compare the execution times of retraining, fine-tuning, and
our proposed framework. As shown in Table 3, the execution phase of our framework is significantly
faster than full retraining, offering a crucial advantage for time-sensitive removal requests. More
importantly, unlike fine-tuning approaches that offer comparable speed but suffer from the catastrophic
forgetting phenomenon, our framework maintains high image quality comparable to the much slower
retrained models.</p>
      </sec>
      <sec id="sec-4-5">
        <title>4.5. Attack-Based Evaluation</title>
        <p>This section presents qualitative evidence from a model inversion attack and quantitative evidence from a
membership inference attack (MIA). The aim is to quantify, more directly and rigorously, the
absence of specific learned information. A model inversion attack visually probes the conceptual
knowledge of the model. In contrast, an MIA aims to determine whether a specific data point was part of
the original training set of the model by exploiting subtle differences in the output of the model on
training data versus unseen data. Therefore, for an unlearned class, an effective unlearning method
should obscure these behavioral differences, thereby reducing the attack accuracy.</p>
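<p>The MIA evaluation principle above can be sketched as a simple threshold attack; the scores and function name here are hypothetical illustrations, not the attack model actually used in the experiments. The attacker guesses "member" whenever the model's score for a sample exceeds a threshold, so an accuracy near 0.5 means members are indistinguishable from non-members, i.e., effective unlearning.</p>

```python
import numpy as np

def mia_accuracy(member_scores, nonmember_scores, threshold):
    # Count correct guesses: members above the threshold, non-members at or
    # below it, then normalize by the total number of samples.
    hits = (member_scores > threshold).sum() + (~(nonmember_scores > threshold)).sum()
    return hits / (len(member_scores) + len(nonmember_scores))

acc = mia_accuracy(np.array([0.9, 0.8]), np.array([0.1, 0.7]), 0.5)
print(acc)  # 0.75
```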
        <p>[Figure 3: model inversion reconstructions on MNIST and SVHN.]</p>
        <p>The qualitative results in Figure 3 are unequivocal. The reconstructed image for the forgotten class ‘3’
in our model (d) is an unrecognizable artifact, mirroring the ‘Retrain’ gold standard (b). This contrasts
with the Finetune model (c), which fails to fully forget the target class and exhibits severe catastrophic
forgetting on retained digits in the SVHN dataset. Our method, however, preserves the quality of all
other digits, providing clear visual proof of selective and robust unlearning.</p>
        <p>The quantitative MIA results in Table 4 further solidify these findings. Our framework consistently
achieves the lowest attack accuracy on the Forget Set—even surpassing the Retrain baseline on complex
datasets—while maintaining a high accuracy on the Retain Set. This confirms that our strong forgetting
performance does not induce catastrophic forgetting.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions</title>
      <p>In this study, we propose a selective unlearning framework based on a "discriminator-guided" method
to effectively remove specific classes from generative models. The framework introduces a two-step
structure. First, it selectively weakens the ability of the discriminator to recognize the target class and
then uses this altered feedback to guide the unlearning process of the generator.</p>
      <p>The experimental results, validated across four benchmark datasets, show that this indirect approach
presents a promising solution to the unlearning problem. The framework achieves strong forgetting
performance comparable to a retrained model, effectively suppressing the generation of target-class
images. It successfully mitigates the “catastrophic forgetting" problem associated with simple fine-tuning,
maintaining stable image generation quality for retained classes. Furthermore, the effectiveness of the
framework was additionally confirmed through attack-based evaluations, including model inversion
and membership inference attacks. These findings indicate that discriminator-guided unlearning is a
practical and effective direction for removing specific data from generative models, demonstrating a
superior balance between performance, efficiency, and verifiability.</p>
      <p>However, a practical limitation is that its time efficiency relies on the first step being prepared in
advance for anticipated unlearning requests. Future work will focus on applying this framework to
other types of generative models and validating its effectiveness on high-resolution image datasets.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>This research was supported by the "Regional Innovation System &amp; Education (RISE)" through the
Seoul RISE Center, funded by the Ministry of Education (MOE) and the Seoul Metropolitan Government
(2025-RISE-01-024-04), and in part by the National Research Foundation of Korea (NRF) grant funded by
the Korea government (MSIT) (RS-2025-00518960, 50%).</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the author(s) used OpenAI ChatGPT-4 and DeepL for grammar
and spelling checking. After using these tools/services, the author(s) reviewed and edited the
content as needed and take(s) full responsibility for the publication’s content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>I. J.</given-names>
            <surname>Goodfellow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Pouget-Abadie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mirza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Warde-Farley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ozair</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Courville</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bengio</surname>
          </string-name>
          , Generative adversarial nets,
          <source>Advances in neural information processing systems</source>
          <volume>27</volume>
          (
          <year>2014</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>T.</given-names>
            <surname>Karras</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Laine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Aila</surname>
          </string-name>
          ,
          <article-title>A style-based generator architecture for generative adversarial networks</article-title>
          ,
          <source>in: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>4401</fpage>
          -
          <lpage>4410</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>R.</given-names>
            <surname>Rombach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Blattmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lorenz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Esser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Ommer</surname>
          </string-name>
          ,
          <article-title>High-resolution image synthesis with latent diffusion models</article-title>
          ,
          <source>in: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>10684</fpage>
          -
          <lpage>10695</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>N.</given-names>
            <surname>Carlini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Tramer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Wallace</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Jagielski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Herbert-Voss</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Roberts</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Brown</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Song</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Erlingsson</surname>
          </string-name>
          , et al.,
          <article-title>Extracting training data from large language models</article-title>
          ,
          <source>in: 30th USENIX security symposium (USENIX Security 21)</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>2633</fpage>
          -
          <lpage>2650</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Q.-V.</given-names>
            <surname>Dang</surname>
          </string-name>
          ,
          <article-title>Right to be forgotten in the age of machine learning</article-title>
          ,
          <source>in: International Conference on Advances in Digital Science</source>
          , Springer,
          <year>2021</year>
          , pp.
          <fpage>403</fpage>
          -
          <lpage>411</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>L.</given-names>
            <surname>Bourtoule</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Chandrasekaran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. A.</given-names>
            <surname>Choquette-Choo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Jia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Travers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Papernot</surname>
          </string-name>
          ,
          <article-title>Machine unlearning</article-title>
          ,
          <source>in: 2021 IEEE symposium on security and privacy (SP)</source>
          , IEEE,
          <year>2021</year>
          , pp.
          <fpage>141</fpage>
          -
          <lpage>159</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>McCloskey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. J.</given-names>
            <surname>Cohen</surname>
          </string-name>
          ,
          <article-title>Catastrophic interference in connectionist networks: The sequential learning problem</article-title>
          ,
          <source>in: Psychology of learning and motivation</source>
          , volume
          <volume>24</volume>
          , Elsevier,
          <year>1989</year>
          , pp.
          <fpage>109</fpage>
          -
          <lpage>165</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Odena</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Olah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Shlens</surname>
          </string-name>
          ,
          <article-title>Conditional image synthesis with auxiliary classifier gans</article-title>
          ,
          <source>in: International conference on machine learning, PMLR</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>2642</fpage>
          -
          <lpage>2651</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.</given-names>
            <surname>Mirza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Osindero</surname>
          </string-name>
          ,
          <article-title>Conditional generative adversarial nets</article-title>
          ,
          <source>arXiv preprint arXiv:1411.1784</source>
          (
          <year>2014</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Cai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Xiong</surname>
          </string-name>
          ,
          <article-title>A survey of machine unlearning in generative ai models: Methods, applications, security, and challenges</article-title>
          ,
          <source>IEEE Internet of Things Journal</source>
          (
          <year>2025</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>L.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.-g.</given-names>
            <surname>Ye</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <article-title>Finetune and label reversal: Privacy-preserving unlearning strategies for gan models in cloud computing</article-title>
          ,
          <source>Computer Standards &amp; Interfaces</source>
          <volume>93</volume>
          (
          <year>2025</year>
          )
          <fpage>103976</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>A.</given-names>
            <surname>Golatkar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Achille</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Soatto</surname>
          </string-name>
          ,
          <article-title>Eternal sunshine of the spotless net: Selective forgetting in deep networks</article-title>
          ,
          <source>in: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>9304</fpage>
          -
          <lpage>9312</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>S.</given-names>
            <surname>Moon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Cho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <article-title>Feature unlearning for pre-trained gans and vaes</article-title>
          ,
          <source>in: Proceedings of the AAAI Conference on Artificial Intelligence</source>
          , volume
          <volume>38</volume>
          ,
          <year>2024</year>
          , pp.
          <fpage>21420</fpage>
          -
          <lpage>21428</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>H.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. S.</given-names>
            <surname>Woo</surname>
          </string-name>
          ,
          <article-title>Layer attack unlearning: Fast and accurate machine unlearning via layer level attack and knowledge distillation</article-title>
          ,
          <source>in: Proceedings of the AAAI Conference on Artificial Intelligence</source>
          , volume
          <volume>38</volume>
          ,
          <year>2024</year>
          , pp.
          <fpage>21241</fpage>
          -
          <lpage>21248</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>J.</given-names>
            <surname>Seo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-H.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.-Y.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Moon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.-M.</given-names>
            <surname>Park</surname>
          </string-name>
          ,
          <article-title>Generative unlearning for any identity</article-title>
          ,
          <source>in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition</source>
          ,
          <year>2024</year>
          , pp.
          <fpage>9151</fpage>
          -
          <lpage>9161</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>A.</given-names>
            <surname>Zhavoronkin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Pautov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Kalmykov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Sevriugov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Kovalev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O. Y.</given-names>
            <surname>Rogov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. V.</given-names>
            <surname>Oseledets</surname>
          </string-name>
          ,
          <article-title>Ungan: machine unlearning strategies through membership inference</article-title>
          ,
          <volume>540</volume>
          (
          <year>2024</year>
          )
          <fpage>46</fpage>
          -
          <lpage>60</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>K.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <article-title>Deep residual learning for image recognition</article-title>
          ,
          <source>in: Proceedings of the IEEE conference on computer vision and pattern recognition</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>770</fpage>
          -
          <lpage>778</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Goodfellow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Metaxas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Odena</surname>
          </string-name>
          ,
          <article-title>Self-attention generative adversarial networks</article-title>
          ,
          <source>in: International conference on machine learning, PMLR</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>7354</fpage>
          -
          <lpage>7363</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>