<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Reconstruction of the image captured by TempestSDR</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Filip Tuch</string-name>
          <email>tuch1@uniba.sk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Richard Ostertág</string-name>
          <email>richard.ostertag@fmph.uniba.sk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science, Faculty of Mathematics</institution>
          ,
          <addr-line>Physics and Informatics</addr-line>
          ,
          <institution>Comenius University</institution>
          ,
          <addr-line>Bratislava</addr-line>
          ,
          <country country="SK">Slovakia</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
        <p>A computer's image is sent to the monitor as a stream of RGB pixels, with their intensities encoded as voltage levels in the signal traveling through the cable. During transmission, these signals give off unwanted electromagnetic emissions, which can be picked up using TempestSDR software to reconstruct the original image. However, the resulting image is often noisy and hard to read. With standard software-defined radios (SDR), the image can only be eavesdropped from a short distance, about 50 cm. In this paper, we propose two methods to improve the quality of the captured image. First, we used a directional Yagi-Uda antenna, a signal amplifier, and a band-pass filter. This setup extended the capture distance to around 20 meters. Even so, the image remained noisy, so we turned to deep learning for our second approach. We created scripts to automate TempestSDR and the dataset generation process, producing a large dataset covering typical everyday computer use. We selected two convolutional neural networks designed for image reconstruction, trained them on our dataset, and evaluated their performance using standard image quality metrics. The results showed a significant improvement in reconstructed image quality across all tested metrics. Finally, we integrated the trained models into TempestSDR, making high-quality image reconstruction much easier for users. Our findings highlight a potential vulnerability of display devices and emphasize the need for preventive measures to enhance their security.</p>
      </abstract>
      <kwd-group>
        <kwd>electromagnetic side-channel attack</kwd>
        <kwd>TempestSDR</kwd>
        <kwd>convolutional neural network</kwd>
        <kwd>image processing</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Confidentiality and privacy are fundamental pillars of information security. While conventional
approaches focus primarily on mechanisms like authentication, encryption, and access control,
hardware-based vulnerabilities receive considerably less attention. Among these, side-channel attacks exploit
unintended information leakage arising from standard hardware operations.</p>
      <p>
        Electromagnetic (EM) emissions have long been recognized as a source of interference in radio
communications. Historically, most efforts focused on minimizing such emissions to prevent signal
degradation. However, military and intelligence institutions also recognized a more critical implication:
the possibility of intercepting these emissions to reconstruct sensitive data. Because early research on
this topic remained classified [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], it contributed to a widespread perception that such attacks required
specialized, expensive equipment and were thus impractical for the general public. As a result, consumer
electronics were rarely designed with protection against side-channel attacks in mind.
      </p>
      <p>
        Computer monitors (and display devices in general) present a particularly viable target in this
context. During image transmission from a computer to a monitor, unintentional EM emissions are
inadvertently produced. These emissions can be intercepted and analyzed to reconstruct the original
screen content. The first public demonstration of such an attack was conducted by Wim van Eck in
1985, who successfully recovered images from analogue CRT monitors [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Although LCD monitors and
modern digital interfaces were initially believed to be immune to this threat, Markus Kuhn demonstrated
in 2005 that digital systems can, in fact, be even more susceptible [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>
        Today, this type of attack can be executed using TempestSDR [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], a software-defined radio (SDR)
application, capable of reconstructing images from captured EM emissions in real time. However, using
standard equipment (such as a common omnidirectional antenna), our measurements showed that the
effective range for a successful image reconstruction is limited to under 50 centimeters. Furthermore,
the reconstructed images are typically significantly compromised by noise and frequently lack clarity.</p>
      <p>CEUR Workshop Proceedings (ISSN 1613-0073).</p>
      <p>This paper investigates methods for extending the efective attack range and improving the quality
of reconstructed images. The following section outlines the principles of image transmission, its
inherent vulnerabilities, and explains how TempestSDR processes EM emissions to recover image
content. The subsequent sections present two methods for improving both the range and visual clarity
of the reconstructed output.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Video interfaces and their vulnerabilities</title>
      <p>Several video interfaces are used to transmit image data from a computer to a display device. These
include both analogue and digital standards, each of which operates in a way that makes them susceptible
to EM leakage.</p>
      <p>
        VGA interface. Video Graphics Array (VGA) [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] is an analogue display interface that transmits
image data using separate voltage signals for the red, green, and blue (RGB) channels. Each color
component is encoded as an analogue voltage signal ranging from 0 V to 0.7 V. In addition, two digital
synchronization signals, HSync and VSync (typically 0 V or 5 V), are used to coordinate horizontal and
vertical refresh cycles.
      </p>
      <p>
        The video signal is transmitted as a continuous stream of pixels. These pixels form horizontal scan
lines, with each scan line ending in a short blanking interval marked by the HSync signal. Once all
scan lines for a frame have been transmitted, a vertical blanking follows, indicated by the VSync signal.
This process repeats continuously, producing a sequence of full-screen image frames.</p>
      <p>
        HDMI interface. High-Definition Multimedia Interface (HDMI) [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] is a digital alternative to analogue
interfaces such as VGA. Unlike VGA, which transmits pixel intensities through analogue voltage levels,
HDMI transmits data serially using Transition-Minimized Differential Signaling (TMDS). The interface
uses four TMDS channels, one for each RGB channel, and one for clock signals.
      </p>
      <p>
        Each pixel’s color components are represented as 8-bit values, which are converted into 10-bit
TMDS symbols. These symbols are then transmitted as a high-speed serial data stream. Like VGA, the
stream of pixels is organized into horizontal scan lines, which in turn form complete image frames.
Synchronization signals are embedded within the stream, analogous to HSync and VSync.</p>
      <p>
        EM susceptibility. The core principle behind the attacks demonstrated by Van Eck and Kuhn is that
image transmission relies on rapidly changing voltages to represent pixel colour intensities [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ]. These
voltage transitions emit EM radiation in the VHF (from 30 MHz to 300 MHz) and UHF (from 300 MHz to
3 GHz) radio bands.
      </p>
      <p>When analyzing captured emissions from analogue standards, TempestSDR leverages the fact that
image data is transmitted at a rate defined by the monitor’s pixel clock frequency f_p, which is the
product of display width w, height h, and frame rate r. Then, the time required to transmit a single
pixel is
t_p = 1/(w ⋅ h ⋅ r),
(1)
which results in impulses in the captured signal occurring at multiples of t_p. In essence, the amplitude
of each impulse corresponds to the intensity of the pixel being transmitted at that moment. As a result,
the signal can be properly segmented according to pixel transmission periods, and the amplitude of
each impulse can be used to infer the approximate grayscale value for the corresponding pixel.</p>
      <p>For digital transmission, where each pixel intensity is represented using b bits, the bit period is
t_b = 1/(w ⋅ h ⋅ r ⋅ b).
(2)
TempestSDR reconstructs the image by averaging the bit-level signal over each pixel period. Note
that for digital transmissions, in which Display Stream Compression (DSC) is used, e.g. HDMI 2.1, an
attack with the described approach is not feasible. However, it may still be possible to eavesdrop on the
computer screen itself, rather than the cable.</p>
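      <p>As a worked illustration of equations (1) and (2), the timings for the baseline 1024×768 @ 60 Hz monitor can be computed directly. The VESA total frame size of 1344×806 (an assumed standard timing that includes the blanking intervals) explains why the actual pixel clock is about 65 MHz rather than the visible-pixels-only value:</p>

```python
# Worked example of equations (1) and (2) for a 1024x768 @ 60 Hz monitor.
# The simplified formulas count only visible pixels; real pixel clocks also
# include blanking, so the VESA total of 1344x806 is shown for comparison.

def pixel_period(width, height, fps):
    """Time to transmit one pixel: t_p = 1 / (w * h * r)."""
    return 1.0 / (width * height * fps)

def bit_period(width, height, fps, bits):
    """Time to transmit one bit of a digital stream: t_b = 1 / (w * h * r * b)."""
    return 1.0 / (width * height * fps * bits)

visible = pixel_period(1024, 768, 60)   # visible pixels only
total = pixel_period(1344, 806, 60)     # VESA total, incl. blanking

print(f"pixel clock (visible only): {1 / visible / 1e6:.1f} MHz")   # ~47.2 MHz
print(f"pixel clock (with blanking): {1 / total / 1e6:.1f} MHz")    # ~65.0 MHz
print(f"second harmonic: {2 / total / 1e6:.0f} MHz")                # ~130 MHz
print(f"TMDS bit period (10 bits/pixel): {bit_period(1344, 806, 60, 10) * 1e9:.2f} ns")
```

The ~130 MHz second harmonic is the frequency TempestSDR is tuned to in the baseline attack described below.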
      <p>Beyond signal demodulation, TempestSDR also offers automatic detection of the target monitor’s
resolution and refresh rate, allowing relatively straightforward attacks even in cases where these
parameters are initially unknown.</p>
      <p>Baseline attack. An illustrative eavesdropping attempt involves a VGA monitor with a resolution
of 1024×768 pixels operating at a 60 Hz refresh rate. Using a USRP x310 software-defined radio, the
emissions were captured at a distance of approximately 50 centimeters. TempestSDR was tuned to the
second harmonic frequency of the monitor’s pixel clock, which, in this case, is approximately 130 MHz.
A close-up comparison of the original (reference) image and the reconstructed output obtained through
this baseline setup is shown in Figure 1.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Extending the range of attack</title>
      <p>Although much attention is typically given to software-based processes such as signal demodulation,
the ability to receive a clean and sufficiently strong signal at a distance is equally important. In
electromagnetic (EM) side-channel attacks, this makes hardware optimization a critical factor.</p>
      <p>Without appropriate hardware, the effective capture range is limited to very short distances
(approximately 50 centimeters in our baseline setup). To extend this range, we employ a high-gain antenna,
a low-noise amplifier, and a band-pass filter. This setup not only increases the reception range, but also
improves the signal-to-noise ratio, thereby enhancing the quality of the reconstructed images.</p>
      <p>Directional antennas. The most common types of directional antennas applicable for this scenario
are the log-periodic antenna and the Yagi-Uda antenna. While both are directional, they differ in
bandwidth and frequency gain. Log-periodic antennas typically cover a wider frequency range (e.g.
from 100 MHz to 1 GHz), whereas Yagi-Uda antennas have a narrow bandwidth, around 3% from their
centre frequency. On the other hand, Yagi-Uda antennas commonly offer a higher gain compared to
log-periodic antennas.</p>
      <p>
        Log-periodic antennas are more suitable when the frequency of interest is not known in advance,
making them ideal for exploratory attacks. In contrast, the Yagi-Uda antenna is optimal when the
monitor specifications are known, allowing the antenna to be tuned precisely to the required frequency
for maximum signal gain. Previous works also support this rationale: van Eck originally used a Yagi-Uda
antenna [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], while Kuhn employed both [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. More recently, Meulemeester demonstrated a successful
image reconstruction from a distance of 80 meters using two professional high-gain log-periodic
antennas, double amplification, and band-pass filtering [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>
        Our Yagi-Uda implementation. Given our knowledge of the target monitor’s characteristics, we
designed a 5-element Yagi-Uda antenna with approximately 8 dBd gain tuned to the second harmonic
frequency of the monitor’s pixel clock (approximately 130 MHz). The antenna geometry was calculated
using an online Yagi-Uda calculator from Changpuak [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. We decided to build the antenna from
aluminium for lightness and compactness. Both straight and folded dipole variants were implemented.
      </p>
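      <p>For intuition about the antenna's physical dimensions, a rough half-wave dipole estimate at the second harmonic can be sketched as follows; the 0.95 shortening factor is a common rule of thumb for thin elements, and the actual geometry came from the calculator [8]:</p>

```python
# Back-of-the-envelope sizing of the driven element at the second harmonic
# (~130 MHz). The 0.95 shortening factor is an assumed rule of thumb; the
# real element lengths were taken from the Changpuak calculator.

C = 299_792_458  # speed of light, m/s

def half_wave_dipole_length(freq_hz, k=0.95):
    """Approximate physical length of a half-wave dipole for a given frequency."""
    wavelength = C / freq_hz
    return k * wavelength / 2

print(f"wavelength at 130 MHz: {C / 130e6:.2f} m")            # ~2.31 m
print(f"driven element length: {half_wave_dipole_length(130e6):.2f} m")  # ~1.10 m
```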
      <p>The antenna was connected to the SDR using a 50 Ω coaxial cable. Since the folded dipole has a higher
characteristic impedance (typically from 200 Ω to 300 Ω), we implemented a 4:1 balun to match it to the
SDR’s 50 Ω input. A TQP3M9037 low-noise amplifier was added in line, providing an additional 20 dB
of gain. Using a Tektronix MDO4104C multi-domain oscilloscope, we observed substantial interference
in the 90-120 MHz band, mostly from FM radio broadcasts. To protect the SDR’s input channel and to
isolate the desired signal, we applied a band-pass filter tuned to 125-135 MHz. A schematic setup is
shown in Figure 2.</p>
      <p>
        Results. With inspiration from [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], we created a reference image (see Figure 3) incorporating
diminishing text and standard image quality symbols, including SMPTE color bars, a Tuning Signals test card,
and a checkerboard pattern.
      </p>
      <p>We tested the implemented system in a hallway with a metal structure, containing various reflective
and absorptive objects. The target device, a laptop connected to a monitor via an HDMI-to-VGA
converter, and a basic unshielded VGA cable, was positioned at one end of the hallway. The attacker
remained in their room, while the Yagi-Uda antenna was placed in the hallway to capture emissions
from the target. The test scenario is depicted in Figure 4, and reconstruction results from various
distances are shown in Figure 5. Both folded and straight dipoles achieved similar results.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Recovering images with deep learning</title>
      <p>While using optimized hardware allowed us to capture clearer signals from longer distances, the
reconstructed images often remained degraded due to noise and distortion. To address this issue, we
explored the application of deep learning techniques, specifically convolutional neural networks (CNNs),
for image post-processing and enhancement.</p>
      <p>Problem definition. The image restoration task can be formally stated as recovering a clean
image x from a degraded observation y, defined by the relationship y = x + v, where v represents
degradation, e.g. noise.</p>
      <p>
        In the context of electromagnetic side-channel attacks, previous works have demonstrated the
feasibility of using deep learning to restore screen content from captured emissions [
        <xref ref-type="bibr" rid="ref10 ref9">9, 10</xref>
        ]. However,
these works often relied on artificially generated low-resolution patches, which may not accurately
reflect the challenges present in real-world attacks.
      </p>
      <p>
        Our approach instead focuses on reconstructing entire frames obtained from actual TempestSDR
captures. A similar objective was pursued in the Deep-Tempest paper [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], which also attempted to
restore complete images from side-channel attacks. However, their method mainly processed
complex-valued IQ (in-phase and quadrature) images captured using gr-tempest (a TempestSDR
reimplementation in GNU Radio), whereas we rely solely on amplitude images extracted from
TempestSDR. With this approach, we try to make the attack more suitable for practical, real-time
attack scenarios.
      </p>
      <sec id="sec-4-2">
        <title>DnCNN architecture</title>
        <p>DnCNN [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] is a CNN specifically designed for image denoising. Unlike
traditional models that attempt to predict a clean image directly from a noisy input, DnCNN is trained to
estimate the residual image, that is, the difference between the noisy and the clean image. This strategy
simplifies the task by focusing the model on removing the noise component rather than reconstructing
the entire image from scratch.</p>
        <p>Let the noisy input image be y = x + v, where x is the clean image and v is additive noise. The model
learns to approximate the residual mapping ℛ(y) ≈ v, such that the clean image can be recovered via
x = y − ℛ(y). The network parameters Θ are optimized by minimizing the mean squared error (MSE)
between the desired residual images and the estimated ones from the noisy input:
ℓ(Θ) = 1/(2N) ∑ᵢ₌₁ᴺ ∥ ℛ(yᵢ; Θ) − (yᵢ − xᵢ) ∥²,
(3)
where {(yᵢ, xᵢ)}ᵢ₌₁ᴺ denotes a set of N noisy-clean image pairs [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ].
        </p>
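        <p>The residual-learning objective in (3) can be illustrated numerically. The predictor below is a toy stand-in for a trained network, not the DnCNN implementation itself; it only demonstrates that a perfect residual estimate drives the loss to zero and recovers the clean image as x = y − ℛ(y):</p>

```python
import numpy as np

# Numerical illustration of the residual-learning loss in equation (3):
# the network output R(y) is compared against the true residual y - x.

def residual_loss(predicted_residuals, noisy, clean):
    """l(theta) = 1/(2N) * sum_i || R(y_i) - (y_i - x_i) ||^2"""
    n = len(noisy)
    total = 0.0
    for r, y, x in zip(predicted_residuals, noisy, clean):
        total += np.sum((r - (y - x)) ** 2)
    return total / (2 * n)

rng = np.random.default_rng(0)
clean = [rng.random((8, 8)) for _ in range(4)]
noisy = [x + 0.1 * rng.standard_normal((8, 8)) for x in clean]

# A perfect predictor outputs exactly the noise, so the loss is zero and the
# clean image is recovered via x_hat = y - R(y).
perfect = [y - x for y, x in zip(noisy, clean)]
print(residual_loss(perfect, noisy, clean))   # 0.0
```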
        <p>The architecture uses a deep convolutional structure without pooling layers. DnCNN with depth
D consists of:
• Conv + ReLU (first layer) – 64 convolutional filters of size 3 × 3 × c, where c is the number
of input channels,
• Conv + BN + ReLU (hidden layers 2 to D − 1) – 64 convolutional filters of size 3 × 3 × 64,
with batch normalization between the convolution and activation function,
• Conv (final layer) – c filters of size 3 × 3 × 64.</p>
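        <p>The stated parameter count can be sanity-checked with a short calculation, assuming 3 × 3 kernels throughout, bias-free convolutions in the batch-normalized layers, and two learnable parameters per BN channel:</p>

```python
# Sanity check of the stated ~0.5e6 parameter count for D = 17 and a grayscale
# input. Assumptions: 3x3 kernels, no bias in the BN-normalized hidden convs,
# and 2 learnable parameters (scale, shift) per BN channel.

def dncnn_params(depth=17, channels=1, filters=64, k=3):
    first = k * k * channels * filters + filters           # Conv + ReLU (with bias)
    hidden = (depth - 2) * (k * k * filters * filters      # Conv (no bias) ...
                            + 2 * filters)                 # ... + BN scale/shift
    last = k * k * filters * channels + channels           # final Conv (with bias)
    return first + hidden + last

print(f"{dncnn_params():,} parameters")  # 556,097 -> roughly 0.5e6, as stated
```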
        <p>We selected this network for its simplicity, relatively low parameter count (0.5 ⋅ 10⁶ for D = 17 layers),
and its effectiveness.</p>
      </sec>
      <sec id="sec-4-3">
        <title>DRUNet architecture</title>
        <p>DRUNet [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] is a flexible and high-performing CNN designed for various
image restoration tasks, including denoising, deblurring, and super-resolution.</p>
        <p>
          DRUNet formulates the task as an optimization problem, where the goal is to obtain the clean image
x from the degraded image y = T(x) + v, where T represents degradation independent of noise, and v
is noise [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ].
        </p>
        <p>From a Bayesian perspective, DRUNet predicts the approximation x̂ of the clean image x by solving
the Maximum A Posteriori (MAP) problem, which is given as:
x̂ = arg max log p(y | x) + log p(x),
(4)
where log p(y | x) represents the log-probability of the observation y given x, and log p(x)
represents a prior probability of the clean image, independent of the observation y. Formally, we can
rewrite (4) as a minimization of the function:
x̂ = arg min 1/(2σ²) ∥ y − T(x) ∥² + ℛ(x),
(5)
where 1/(2σ²) ∥ y − T(x) ∥² is the data term, which ensures that the solution respects the degradation, while
ℛ(x) is the regularization term, which enforces the desired properties on the solution [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ].
        </p>
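        <p>As a toy instance of (5), choosing T as the identity and a hypothetical Tikhonov prior ℛ(x) = λ∥x∥² yields a closed-form minimizer, which a few lines of code can verify. DRUNet itself learns the prior implicitly, so this is only illustrative:</p>

```python
import numpy as np

# Toy instance of objective (5) with T = identity and the (hypothetical)
# Tikhonov prior R(x) = lam * ||x||^2. Setting the derivative of
# (1/(2*sigma^2)) * ||y - x||^2 + lam * ||x||^2 to zero gives the closed form
# x_hat = y / (1 + 2 * lam * sigma^2): the observation shrunk toward zero.

def map_denoise(y, sigma, lam):
    return y / (1 + 2 * lam * sigma ** 2)

y = np.array([1.0, -2.0, 0.5])
x_hat = map_denoise(y, sigma=1.0, lam=0.5)
print(x_hat)  # [ 0.5  -1.    0.25]
```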
        <p>
          DRUNet’s architecture is based on the U-Net model, which features an encoder-decoder structure
with skip connections. Each encoder stage consists of convolutional layers that downsample the input
and extract features. The decoder stages then upsample these features to reconstruct the original
image. DRUNet consists of a four-layer decoder (and encoder) with channel depths of 64, 128, 256,
and 512, respectively. Each stage includes four residual blocks with the ReLU activation function, and
downsampling/upsampling is performed via strided/transposed convolutions [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ].
        </p>
        <p>DRUNet normally expects both a degraded image and a noise level map as its input. Since the exact
noise profile in our case is unknown, the noise map channel is set to zero.</p>
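        <p>A minimal sketch of assembling such an input, with the all-zero noise-level map stacked as an extra channel; the exact tensor layout in the real implementation may differ:</p>

```python
import numpy as np

# Sketch of the DRUNet input: degraded image plus an all-zero noise-level map
# as an extra channel (layout is an assumption for illustration).

def build_drunet_input(image):
    """Stack a (C, H, W) image with a zero noise map -> (C + 1, H, W)."""
    noise_map = np.zeros((1,) + image.shape[1:], dtype=image.dtype)
    return np.concatenate([image, noise_map], axis=0)

img = np.random.rand(1, 64, 64).astype(np.float32)   # grayscale TempestSDR frame
inp = build_drunet_input(img)
print(inp.shape)  # (2, 64, 64)
```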
        <p>
          We selected DRUNet for its strong denoising performance on various benchmark datasets [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ], and
for its proven use in similar applications such as Deep-Tempest [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ].
        </p>
        <sec id="sec-4-3-1">
          <title>Dataset creation.</title>
          <p>To simulate realistic screen content, we manually collected 650 screenshots of
common desktop environments, popular websites, and various graphical user interfaces. Since
improving text legibility is a primary objective of this work, we extended the dataset with 200 synthetically
generated images containing randomly rendered text. These were created using a custom Python script
built on the Selenium framework, which randomly selects font styles and sizes, renders the text in
a browser window, and captures the resulting screen.</p>
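          <p>The generation step can be sketched as follows. The real script drives a browser via Selenium and screenshots the rendered page, so only the randomized HTML construction is shown here; the font names and size range are illustrative assumptions:</p>

```python
import random
import string
import tempfile
from pathlib import Path

# Hedged sketch of the synthetic text generator: produce a page with randomly
# chosen fonts and sizes. The actual dataset script renders pages like this in
# a browser through Selenium and captures the screen.

FONTS = ["Arial", "Georgia", "Courier New", "Verdana"]  # illustrative choices

def random_text_page(n_lines=10):
    lines = []
    for _ in range(n_lines):
        text = "".join(random.choices(string.ascii_letters + " ", k=60))
        font = random.choice(FONTS)
        size = random.randint(10, 32)
        lines.append(f'<p style="font-family:{font}; font-size:{size}px">{text}</p>')
    return "<html><body>" + "\n".join(lines) + "</body></html>"

out = Path(tempfile.mkdtemp()) / "sample.html"
out.write_text(random_text_page())
print(out.read_text()[:40])
```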
          <p>We fully automated the data acquisition process in TempestSDR using the PyAutoGUI library. A
custom script was developed to iteratively display each reference image on the target monitor and trigger
a snapshot in TempestSDR. For every reference image, eight captures were performed, each from
a different distance and antenna orientation. The captures include images reconstructed from VGA and
HDMI cables, as well as from HDMI-VGA converter emissions. These variations simulate real-world
conditions, as even small changes in antenna placement can significantly affect the output image.</p>
          <p>The snapshots produced in TempestSDR contain not only the reconstructed image but also padding
caused by synchronization signals present during image transmission. Since the selected CNNs operate
on pixel-wise comparisons between reference and captured images, this padding had to be removed. To
accomplish this, we implemented a script that uses template matching from the OpenCV library. If the
script failed to locate the reference image within the snapshot, the snapshot was deemed invalid and
excluded from the dataset. Otherwise, the padding was removed, and the resulting image was retained
for training.</p>
          <p>Finally, the dataset was divided into training, testing, and validation subsets in a ratio of 80:10:10.
The ratio of images captured from HDMI, VGA cable, and HDMI-VGA converter was preserved in the
resulting subsets.</p>
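          <p>The padding-removal step above can be sketched with a plain NumPy stand-in for OpenCV's matchTemplate: scan the snapshot for the offset where the reference image fits best, then crop it out. The real script uses the OpenCV implementation; this version only shows the idea:</p>

```python
import numpy as np

# Simplified stand-in for the template-matching step that strips the
# synchronization padding: exhaustive sum-of-squared-differences search for
# the reference image inside the snapshot, followed by a crop.

def locate_and_crop(snapshot, reference):
    H, W = snapshot.shape
    h, w = reference.shape
    best, best_pos = None, None
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            ssd = np.sum((snapshot[i:i + h, j:j + w] - reference) ** 2)
            if best is None or ssd < best:
                best, best_pos = ssd, (i, j)
    i, j = best_pos
    return snapshot[i:i + h, j:j + w], best_pos

ref = np.arange(12.0).reshape(3, 4)
snap = np.zeros((6, 8))
snap[2:5, 3:7] = ref                      # embed reference with padding around it
crop, pos = locate_and_crop(snap, ref)
print(pos)                                # (2, 3)
```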
          <p>Training. Both DnCNN and DRUNet were trained on the training subset with the goal of minimizing
the MSE loss function. The initial learning rate was set to 10⁻⁴, and the Adam optimizer was employed.
The input image pairs were split into patches of size 256 × 256 pixels by the dataloader, and the batch
size was set to 32. Each model was trained for 100 epochs, with checkpoints saved every 10 epochs.</p>
          <p>Results. After training the models, we evaluated the performance with standard image quality metrics:
Peak Signal-to-Noise Ratio (PSNR), Mean Squared Error (MSE), and Structural Similarity Index Measure
(SSIM). To better assess text readability, we additionally measured the Character Error Rate (CER),
defined as the ratio of incorrectly recognised characters to the total number of characters in the reference
image. We also recorded the average reconstruction time per image.</p>
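          <p>Two of these metrics have compact reference implementations; the functions below follow the standard definitions and are not the exact evaluation scripts used in this work:</p>

```python
import numpy as np

# Reference implementations of PSNR and CER. PSNR follows the standard
# definition; CER is the Levenshtein edit distance between the OCR output and
# the reference text, divided by the reference length.

def psnr(reference, test, max_val=255.0):
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

def cer(reference, hypothesis):
    # Classic dynamic-programming edit distance, one row at a time.
    prev = list(range(len(hypothesis) + 1))
    for i, r in enumerate(reference, 1):
        cur = [i]
        for j, h in enumerate(hypothesis, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (r != h)  # substitution
                           ))
        prev = cur
    return prev[-1] / len(reference)

print(round(cer("tempest", "tempesl"), 3))   # 0.143  (1 substitution / 7 chars)
```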
          <p>From the results in Table 1, it is evident that using DRUNet significantly enhances the quality of
reconstructed images. The CER was reduced by 40%, greatly enhancing text readability in the images.
While DnCNN also outperforms TempestSDR, it lags behind DRUNet across all image quality metrics.
Its primary advantage is faster inference, requiring only half the reconstruction time compared to
DRUNet. A real-world example of image reconstruction using both trained models is shown in Figure 6.</p>
          <p>Integration into TempestSDR. Currently, in order to perform an attack using the trained models,
the user has to manually capture an image in TempestSDR, remove padding from the captured image,
run a separate script for image enhancement using a trained model, and then save the improved result.
To streamline and simplify the attack process, we integrated a model inference functionality directly
into TempestSDR.</p>
          <p>Two new buttons were added to the GUI: Select Model (DRUNet or DnCNN) and Load Model (to
specify the path to the trained model). Once loaded, the user can utilize the Take snapshot and enhance
option to automatically enhance the captured image using the selected model. While removing the
padding present in TempestSDR snapshots, contour detection is used to locate the region corresponding
to the screen content. The added functionality can be seen in Figure 7.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>We demonstrated that electromagnetic side-channel attacks on monitors, previously limited by short
range and poor image quality, can be greatly improved through inexpensive hardware and software
enhancements. By designing a directional Yagi-Uda antenna and applying a band-pass filter and
amplifier, we extended the effective capture distance from 50 cm to 20 m. To further improve image
quality, we trained CNNs on real TempestSDR captures, achieving up to a 40% improvement in text
recognition accuracy. Finally, we integrated the models into TempestSDR, enabling practical and
repeatable attacks with one-click image restoration.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>This publication is the result of support under the Operational Program Integrated Infrastructure for
the project: Advancing University Capacity and Competence in Research, Development and Innovation
(ACCORD, ITMS2014+:313021X329), co-financed by the European Regional Development Fund.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used GPT-4o for grammar and spell check. After using
the tool, the authors reviewed and edited the content as needed and take full responsibility for the
publication’s content.
The sources for our modified TempestSDR, dataset, and trained models are available online:</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>National Security Agency</surname>
          </string-name>
          ,
          <article-title>TEMPEST: A Signal Problem</article-title>
          , https://www.nsa.gov/portals/75/documents/news-features/declassified-documents/cryptologic-spectrum/tempest.pdf,
          <year>1972</year>
          . Unclassified paper.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>W.</given-names>
            <surname>van Eck</surname>
          </string-name>
          ,
          <article-title>Electromagnetic radiation from video display units: An eavesdropping risk?</article-title>
          ,
          <source>Computers &amp; Security</source>
          <volume>4</volume>
          (
          <year>1985</year>
          )
          <fpage>269</fpage>
          -
          <lpage>286</lpage>
          . doi:10.1016/0167-4048(85)90046-X.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M. G.</given-names>
            <surname>Kuhn</surname>
          </string-name>
          ,
          <article-title>Electromagnetic Eavesdropping Risks of Flat-Panel Displays</article-title>
          , in: Privacy Enhancing Technologies, Springer Berlin Heidelberg,
          <year>2005</year>
          , pp.
          <fpage>88</fpage>
          -
          <lpage>107</lpage>
          . doi:10.1007/11423409_7.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Marinov</surname>
          </string-name>
          ,
          <article-title>Remote video eavesdropping using a software-defined radio platform</article-title>
          ,
          <source>GitHub repository of TempestSDR software</source>
          ,
          <year>2014</year>
          . URL: https://github.com/martinmarinov/TempestSDR.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <article-title>VGA XGA Technical Reference</article-title>
          , IBM,
          <year>1992</year>
          . Manual.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <article-title>High-Definition Multimedia Interface Specification</article-title>
          , HDMI Licensing, LLC,
          <year>2006</year>
          . Manual.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>P.</given-names>
            <surname>de Meulemeester</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Scheers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. A. E.</given-names>
            <surname>Vandenbosch</surname>
          </string-name>
          ,
          <article-title>Eavesdropping a (Ultra-)High-Definition Video Display from an 80 Meter Distance Under Realistic Circumstances</article-title>
          , in: 2020
          <source>IEEE International Symposium on Electromagnetic Compatibility &amp; Signal/Power Integrity (EMCSI)</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>517</fpage>
          -
          <lpage>522</lpage>
          . doi:10.1109/EMCSI38923.2020.9191457.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Changpuak</surname>
          </string-name>
          ,
          <article-title>Yagi-Uda Antenna Calculator</article-title>
          , https://www.changpuak.ch/electronics/yagi_uda_antenna_DL6WU.php,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>J.</given-names>
            <surname>Galvis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Morales</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Kasmi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Vega</surname>
          </string-name>
          ,
          <article-title>Denoising of Video Frames Resulting From Video Interface Leakage Using Deep Learning for Efficient Optical Character Recognition</article-title>
          ,
          <source>IEEE Letters on Electromagnetic Compatibility Practice and Applications</source>
          <volume>3</volume>
          (
          <year>2021</year>
          )
          <fpage>82</fpage>
          -
          <lpage>86</lpage>
          . doi:10.1109/LEMCPA.2021.3073663.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>F.</given-names>
            <surname>Lemarchand</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Marlin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Montreuil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Nogues</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Pelcat</surname>
          </string-name>
          ,
          <article-title>Electro-Magnetic Side-Channel Attack Through Learned Denoising and Classification</article-title>
          , in:
          <source>ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>2882</fpage>
          -
          <lpage>2886</lpage>
          . doi:10.1109/ICASSP40776.2020.9053913.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>S.</given-names>
            <surname>Fernández</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Martínez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Varela</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Musé</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Larroca</surname>
          </string-name>
          ,
          <article-title>Deep-TEMPEST: Using Deep Learning to Eavesdrop on HDMI from its Unintended Electromagnetic Emanations</article-title>
          , in:
          <source>Proceedings of the 13th Latin-American Symposium on Dependable and Secure Computing (LADC '24)</source>
          ,
          <year>2024</year>
          . doi:10.1145/3697090.3697094.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>K.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Zuo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Meng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <article-title>Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising</article-title>
          ,
          <source>IEEE Transactions on Image Processing</source>
          <volume>26</volume>
          (
          <year>2017</year>
          )
          <fpage>3142</fpage>
          -
          <lpage>3155</lpage>
          . doi:10.1109/TIP.2017.2662206.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>K.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Zuo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Van Gool</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Timofte</surname>
          </string-name>
          ,
          <article-title>Plug-and-play image restoration with deep denoiser prior</article-title>
          ,
          <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
          <volume>44</volume>
          (
          <year>2022</year>
          )
          <fpage>6360</fpage>
          -
          <lpage>6376</lpage>
          . doi:10.1109/TPAMI.2021.3088914.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          • Modified TempestSDR on GitHub: https://github.com/filippt1/TempestSDR_Enhanced,
          • Dataset on Hugging Face: https://huggingface.co/datasets/filippt1/TempestSDR_Enhanced_Dataset,
          • Trained models on Google Drive: https://drive.google.com/drive/folders/1zFWvRVtZ-9s4WG3DEF2Kivu6meQ9UQvB?usp=sharing.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>