<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>New Approaches Based on PRNU-CNN for Image Camera Source Attribution in Forensic Investigations</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Giorgio De Magistris</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rafał Grycuk</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lorenzo Mandelli</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rafał Scherer</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computational Intelligence, Czestochowa University of Technology</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Computer, Control and Management Engineering, Sapienza University of Rome</institution>
        </aff>
      </contrib-group>
      <fpage>67</fpage>
      <lpage>72</lpage>
      <abstract>
<p>Digital image forensics currently relies mainly on PRNU noise as a fingerprint to attribute an image to a particular camera. However, the PRNU is usually extracted manually using Maximum Likelihood estimation from multiple images from the same source device. In this paper we show that the PRNU can be learned in a data-driven fashion using a ResNet-based neural network. We also show that it is possible to train a neural network for camera attribution directly on the residual noise, which contains both the PRNU and a random component. We show that both approaches are valid, as we obtained results comparable with the state of the art.</p>
      </abstract>
      <kwd-group>
<kwd>PRNU</kwd>
        <kwd>CNN</kwd>
        <kwd>Image Classification</kwd>
        <kwd>Digital Camera Identification</kwd>
        <kwd>Forensic</kwd>
        <kwd>Pattern Noise</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
<p>Camera source identification consists in the attribution of an image to the digital camera from which it was originally captured, using only image features and no external information. Camera source attribution plays a pivotal role in forensic investigations, particularly in the realm of digital imagery and video analysis. The source information holds significant value for several reasons, making it a critical aspect of modern forensic examinations.</p>
      <p>One of the primary reasons camera source attribution is crucial is its role in authenticating evidence. In any criminal investigation, the authenticity of evidence is paramount. By determining the camera source, the courts can verify whether an image or video is an original capture or if it has been tampered with or manipulated. This verification process is crucial for establishing the chain of custody and ensuring the evidence presented in court is reliable and admissible.</p>
      <p>Camera source attribution also helps in determining the integrity of images. With the advent of sophisticated photo and video editing software, the risk of forged or altered visuals has increased. However, each camera model possesses unique characteristics that act as digital fingerprints.</p>
      <p>Furthermore, camera source attribution aids investigators in linking suspects to crime scenes. Surveillance cameras, smartphones, and other digital devices often capture photographs and videos that serve as crucial evidence in criminal investigations. By determining the camera source, investigators can establish a connection between a suspect and a specific crime scene or event. This information becomes vital in establishing a suspect's presence at a particular location and time, strengthening the case against them.</p>
      <p>Camera source attribution also aids in tracking cybercriminal activity, particularly in cases involving child exploitation, cyberbullying, or online harassment. By identifying the camera source, law enforcement can trace the origin of illegal or harmful content, leading to the identification and apprehension of offenders. This proactive approach helps protect potential victims and curtail criminal activities.</p>
      <p>Moreover, standardized camera source attribution practices facilitate cooperation among law enforcement agencies. Criminal activities often transcend jurisdictional boundaries, and evidence may be collected by different agencies. By following consistent attribution practices, professionals can seamlessly exchange and analyze visual evidence, enhancing the overall effectiveness of criminal investigations.</p>
      <p>The rest of the paper is structured as follows: section 2 reviews relevant related works, section 3 describes the dataset, and section 4 describes the proposed method; in particular, section 4.1 describes a convolutional network that is trained to classify the source directly from the residual noise, which contains both the PRNU and a random component, while section 4.2 introduces a different convolutional network that is trained on the isolated PRNU. The results are presented in section 5 and conclusions are drawn in section 6.</p>
      <p>SYSTEM 2024: 10th Scholar's Yearly Symposium of Technology, Engineering and Mathematics, Rome, December 2-5, 2024. Contact: rafal.grycuk@pcz.pl (R. Grycuk); rafal.scherer@pcz.pl (R. Scherer). ORCID: 0000-0002-3076-4509 (G. De Magistris); 0009-0008-8447-0830 (L. Mandelli). © 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related works</title>
      <p>
Camera attribution techniques are based on the analysis of the sensor pattern noise (SPN) that is introduced by the acquisition device. This signal, however, is the sum of two components: a random component, which depends on different factors in the image acquisition process, and a deterministic component, which depends on intrinsic properties of the image sensor. This second component, called Photo Response Non-Uniformity (PRNU) noise, should be approximately the same in different images acquired by the same device and can be used as a fingerprint of the device itself. The PRNU component of an image can be estimated from multiple images coming from the same device as follows. First, the image signal I is separated from the residual noise W using a low-pass filter F:

W = I − F(I) (1)

Then the deterministic component K is separated from the random component by averaging the residual noise of multiple images [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] or using a more sophisticated minimum variance estimator like in [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ]:

K = ∑_{i=1}^{N} W_i I_i / ∑_{i=1}^{N} I_i² (2)

where N is the number of images used to estimate the PRNU, I_i is an input image and W_i is the residual noise obtained through high-pass filtering. Once the deterministic component K_c of the camera c is known, the attribution of a new input image I is computed by thresholding the correlation between the residual noise of the input image W = I − F(I) and the PRNU of the sensor, K_c:

P(I ∈ c) = δ(τ(W, K_c)) (3)

where δ is the Dirac function and τ is a thresholding function.</p>
      <p>In the last few years deep learning has revolutionized the field of computer vision; in particular, many state-of-the-art approaches in computer vision tasks such as classification and segmentation are based on convolutional neural networks [
        <xref ref-type="bibr" rid="ref4 ref5 ref6 ref7 ref8">4, 5, 6, 7, 8</xref>
        ]. End-to-end deep learning approaches have been successfully applied also to image source identification, formulating the attribution as a classification task. Some approaches apply convolutional neural networks directly to raw images [
        <xref ref-type="bibr" rid="ref10 ref11 ref9">9, 10, 11</xref>
        ]; however, the application of a domain transformation before processing is often preferred to this approach [
        <xref ref-type="bibr" rid="ref12 ref13 ref14 ref15">12, 13, 14, 15</xref>
        ]. The authors of [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] instead extract the PRNU manually and then use a convolutional network for the classification.</p>
      <p>In this paper we present two approaches: the first consists in a simple convolutional network applied directly to the residual noise, while the second uses a ResNet-based CNN to extract the PRNU from a single image and then the same convolutional network for classification. With both approaches we obtain results comparable with the state of the art.
      </p>
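The estimator and decision rule above can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: a 3×3 box blur stands in for the low-pass denoising filter F (the paper uses a wavelet-based filter), and the threshold value tau is an arbitrary placeholder that would be calibrated on real data.

```python
import numpy as np

def denoise(img):
    # Stand-in low-pass filter F: a 3x3 box blur (the paper uses a wavelet filter)
    H, W = img.shape
    pad = np.pad(img, 1, mode='edge')
    return sum(pad[i:i + H, j:j + W] for i in range(3) for j in range(3)) / 9.0

def residual(img):
    # W = I - F(I): the high-frequency residual noise
    return img - denoise(img)

def estimate_prnu(images):
    # K = sum_i(W_i * I_i) / sum_i(I_i^2), the minimum-variance estimator
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for I in images:
        num += residual(I) * I
        den += I * I
    return num / (den + 1e-12)

def ncc(a, b):
    # Normalized cross-correlation between two arrays
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def attribute(img, K, tau=0.05):
    # Attribute img to the camera if corr(residual, K * I) exceeds the threshold
    return ncc(residual(img), K * img) > tau
```

On synthetic flat images carrying a fixed multiplicative pattern plus random noise, the estimated K correlates strongly with the true pattern, which is what makes the thresholding in equation (3) workable.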
    </sec>
    <sec id="sec-3">
      <title>3. Dataset</title>
      <p>We tested the proposed method on the Vision dataset
[16], that contains labelled images acquired by common
devices. In particular the dataset contains flat and natural
images for each device, where natural images represent
common scenes while flat images represent homogeneous
backgrounds, without edges, and can be used to extract
the PRNU. Some samples from the dataset are shown
in figure 1. We used flat images to compute the
residual noise and the PRNU target. We cropped all images,
keeping only the top left corner of the image with size
256 × 256.</p>
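The cropping step can be written in one line of NumPy; the helper name is ours, for illustration only.

```python
import numpy as np

def crop_top_left(img, size=256):
    # Keep only the top-left corner of the image, as done for all dataset images
    return img[:size, :size]
```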
    </sec>
    <sec id="sec-4">
      <title>4. Method</title>
      <p>In this section we describe the proposed method, and in
particular we describe the two approaches mentioned
in section 1: in section 4.1 the input of the network is
the residual noise, with both deterministic and random
components, while in section 4.2 the input of the
classification network is the PRNU extracted by a second
neural network. The two approaches share the same
convolutional neural network for classification described
in section 4.3.</p>
      <sec id="sec-4-1">
        <title>4.1. Classification from Residual Noise</title>
<p>In this section we describe in detail the process we used to isolate the residual noise from the input signal. We used a method based on wavelet decomposition. This method is based on the assumption that the wavelet coefficients are modeled as i.i.d. Gaussian variables with zero mean and variance given by a deterministic, unknown, spatially varying variance field. Given the variance field, the noise-free image wavelet coefficients are estimated with a Minimum Mean Squared Error procedure. The extraction of the residual noise then consists in the estimation of the variance field and the estimation of the clean coefficients using the variance field. This process can be summarized in the following steps:</p>
        <p>Step 1. Calculate the fourth-level wavelet decomposition of the noisy image. The following steps are described for h(i, j) as an example; the same steps are taken for the other subbands. Denote the vertical, horizontal and diagonal subbands as v(i, j), h(i, j), d(i, j), where (i, j) runs through an index set J that depends on the decomposition level.</p>
<sec id="sec-4-1-1">
          <title>Steps 2–4</title>
          <p>Step 2. For each subband, estimate the local variance of the noise-free coefficients with four local windows of increasing size and take the minimum of the 4 variances as the final estimate σ̂²(i, j).</p>
          <p>Step 3. Use a Wiener filter to denoise the wavelet coefficients:</p>
          <p>ĥ(i, j) = h(i, j) σ̂²(i, j) / (σ̂²(i, j) + σ₀²)</p>
          <p>and similarly for v(i, j) and d(i, j), with (i, j) ∈ J.</p>
          <p>Step 4. Repeat Steps 1–3 for each level and each color channel. The denoised image is obtained by applying the inverse wavelet transform to the denoised wavelet coefficients.</p>
          <p>
            For further details we refer the reader to [
            <xref ref-type="bibr" rid="ref15">17, 18, 15,
19, 20</xref>
            ]. The isolated residual noise is the input of the
convolutional neural network described in section 4.3,
that predicts the source camera.
          </p>
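The steps above can be sketched end to end. This is a simplified single-level Haar illustration, not the paper's fourth-level decomposition: σ₀² is assumed known, and the four variance windows are plain box filters.

```python
import numpy as np

def haar2d(x):
    # Step 1 (one level): 2-D Haar decomposition into approximation + 3 detail subbands
    p, q = x[0::2, 0::2], x[0::2, 1::2]
    r, s = x[1::2, 0::2], x[1::2, 1::2]
    return (p + q + r + s) / 2, (p + q - r - s) / 2, (p - q + r - s) / 2, (p - q - r + s) / 2

def ihaar2d(a, h, v, d):
    # Inverse of haar2d
    H, W = a.shape
    x = np.empty((2 * H, 2 * W))
    x[0::2, 0::2] = (a + h + v + d) / 2
    x[0::2, 1::2] = (a + h - v - d) / 2
    x[1::2, 0::2] = (a - h + v - d) / 2
    x[1::2, 1::2] = (a - h - v + d) / 2
    return x

def local_variance(c, sigma0_sq):
    # Step 2: local variance of the clean coefficients for 4 window sizes,
    # keeping the minimum of the 4 estimates
    H, W = c.shape
    ests = []
    for w in (3, 5, 7, 9):
        p = w // 2
        pad = np.pad(c * c, p, mode='edge')
        m = sum(pad[i:i + H, j:j + W] for i in range(w) for j in range(w)) / (w * w)
        ests.append(np.maximum(m - sigma0_sq, 0.0))
    return np.minimum.reduce(ests)

def wiener(c, sigma0_sq):
    # Step 3: Wiener filtering of a detail subband
    var = local_variance(c, sigma0_sq)
    return c * var / (var + sigma0_sq)

def residual_noise(img, sigma0_sq=1.0):
    # Steps 1-4 (single level, single channel): denoise, invert, subtract
    a, h, v, d = haar2d(img)
    denoised = ihaar2d(a, wiener(h, sigma0_sq), wiener(v, sigma0_sq), wiener(d, sigma0_sq))
    return img - denoised
```

A flat, noise-free image yields a zero residual, while any sensor noise survives the high-pass subtraction and becomes the classifier input.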
        </sec>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Classification from PRNU</title>
<p>The most common method for extracting the PRNU is Maximum Likelihood Estimation (MLE), but MLE requires multiple images from the same device and therefore cannot extract the PRNU noise from a single image. In this paper we propose a novel approach based on deep learning to extract the PRNU component from the residual noise. According to [21], when the original mapping is closer to an identity mapping, the residual mapping is easier to optimize. Therefore we propose a ResNet-based CNN, in particular the CSI-CNN architecture [22] shown in Figure 2.</p>
        <p>In order for the model to learn the specific target PRNU, the input data is the residual noise, the target is the PRNU noise extracted with MLE, and the loss function is the mean squared error (MSE). The PRNU extracted from the ResNet-based CNN is then used to train the convolutional classifier described in the next section. The complete workflow is illustrated in Figure 3.</p>
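The training setup (input = residual noise, target = MLE-extracted PRNU, loss = MSE) can be illustrated without the full CSI-CNN. The sketch below substitutes a single trainable 3×3 convolution for the network and trains it by plain gradient descent; the layer, learning rate and epoch count are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def conv_same(x, k):
    # 3x3 'same' cross-correlation with zero padding
    H, W = x.shape
    pad = np.pad(x, 1)
    return sum(k[i, j] * pad[i:i + H, j:j + W] for i in range(3) for j in range(3))

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def train(pairs, lr=0.05, epochs=200):
    # pairs: list of (residual_noise, prnu_target); loss = MSE, as in the paper's setup
    k = np.zeros((3, 3))
    k[1, 1] = 1.0  # start from the identity mapping, which residual learning favors
    for _ in range(epochs):
        for x, t in pairs:
            e = conv_same(x, k) - t          # prediction error
            H, W = x.shape
            pad = np.pad(x, 1)
            # analytic gradient of the MSE with respect to each filter tap
            g = np.array([[np.mean(e * pad[i:i + H, j:j + W]) for j in range(3)]
                          for i in range(3)])
            k -= lr * 2 * g
    return k
```

Trained this way, the filter shrinks toward a Wiener-like attenuation of the noisy residual, lowering the MSE against the PRNU target below that of the raw residual itself.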
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Convolutional Classifier</title>
      </sec>
    </sec>
    <sec id="sec-5">
<title>5. Results</title>
      <p>Our classifier is trained to recognize ten camera classes (K = 10): Samsung GalaxyS3Mini, Apple iPhone4s, Apple iPhone5c, Huawei P9, LG D290, Lenovo P70A, Sony XperiaZ1Compact, Microsoft Lumia640LTE, Wiko Ridge4G and Xiaomi RedmiNote3; two of these devices are from the same brand. In total, there are 2194 images. As described in the previous section, we tested two configurations: in the first, the input of the classifier is the residual noise, while in the second the ResNet-based neural network is trained to predict the PRNU from raw images and the obtained PRNU is then used to train the classifier. Figure 5 shows the learning curve of the ResNet-based neural network. Figure 6 shows the confusion matrices for both configurations. With both approaches we reached state-of-the-art results, as shown in Table 1.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>In this paper we presented two novel deep learning approaches for camera attribution from raw images, obtaining results comparable with the state of the art. Moreover, we showed that the PRNU, which is the fingerprint of the image sensor, can be isolated from the input image using a data-driven approach. We also showed that a neural network can be trained to classify the target source camera directly from the residual noise, which contains both the PRNU and a random component, obtaining even better results.</p>
      <p>While significant progress has been made in addressing image source attribution, there are still areas that warrant further exploration and development. A potential avenue for future research and improvement of this work could be the application of similar techniques to compressed images. This is particularly important because popular social media platforms compress uploaded content, hindering source attribution. Another important line of research could be the application of these models to AI-generated content. The advent of AI-generated images, driven by advancements in machine learning and deep learning algorithms, has significant implications across various domains. While these technologies offer exciting possibilities and creative opportunities, they also raise important ethical, legal and societal concerns.</p>
    </sec>
    <sec id="sec-7">
      <title>7. Acknowledgment</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.</given-names>
            <surname>Lukas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Fridrich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Goljan</surname>
          </string-name>
          ,
          <article-title>Digital camera identification from sensor pattern noise</article-title>
          ,
          <source>IEEE Transactions on Information Forensics and Security</source>
          <volume>1</volume>
          (
          <year>2006</year>
          )
          <fpage>205</fpage>
          -
          <lpage>214</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Fridrich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Goljan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lukas</surname>
          </string-name>
          ,
          <article-title>Determining image origin and integrity using sensor noise</article-title>
          ,
          <source>IEEE Transactions on Information Forensics and Security</source>
          <volume>3</volume>
          (
          <year>2008</year>
          )
          <fpage>74</fpage>
          -
          <lpage>90</lpage>
. doi:10.1109/TIFS.2007.916285.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>R.</given-names>
            <surname>Caldelli</surname>
          </string-name>
          , I. Amerini,
          <string-name>
            <given-names>C. T.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>Prnu-based image classification of origin social network with cnn</article-title>
          ,
          <source>in: 2018 26th European Signal Processing Conference (EUSIPCO)</source>
          , IEEE,
          <year>2018</year>
          , pp.
          <fpage>1357</fpage>
          -
          <lpage>1361</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>G.</given-names>
            <surname>De Magistris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Caprari</surname>
          </string-name>
          , G. Castro,
          <string-name>
            <given-names>S.</given-names>
            <surname>Russo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Iocchi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Nardi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Napoli</surname>
          </string-name>
          ,
<article-title>Vision-based holistic scene understanding for context-aware human-robot interaction</article-title>, <source>Lecture Notes in Artificial Intelligence</source> <volume>13196</volume> (<year>2022</year>) <fpage>310</fpage>-<lpage>325</lpage>. doi:10.1007/978-3-031-08421-8_21.
        </mixed-citation>
      </ref>
<ref id="ref5">
        <mixed-citation>[5] <string-name><given-names>R.</given-names> <surname>Brociek</surname></string-name>, <string-name><given-names>G.</given-names> <surname>De Magistris</surname></string-name>, <string-name><given-names>F.</given-names> <surname>Cardia</surname></string-name>, <string-name><given-names>F.</given-names> <surname>Coppa</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Russo</surname></string-name>, <article-title>Contagion prevention of covid-19 by means of touch detection for retail stores</article-title>, in: <source>2021 Scholar's Yearly Symposium of Technology, Engineering and Mathematics, SYSTEM 2021</source>, <year>2021</year>, pp. <fpage>89</fpage>-<lpage>94</lpage>.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] <string-name><given-names>D.</given-names> <surname>Shullani</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Fontani</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Iuliani</surname></string-name>, <string-name><given-names>O. A.</given-names> <surname>Shaya</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Piva</surname></string-name>, <article-title>Vision: a video and image dataset for source identification</article-title>, <source>EURASIP Journal on Information Security</source> <volume>2017</volume> (<year>2017</year>) <fpage>1</fpage>-<lpage>16</lpage>.</mixed-citation>
      </ref>
<ref id="ref6">
        <mixed-citation>[6] <string-name><given-names>C.</given-names> <surname>Napoli</surname></string-name>, <string-name><given-names>V.</given-names> <surname>Ponzi</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Puglisi</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Russo</surname></string-name>, <string-name><given-names>I. E.</given-names> <surname>Tibermacine</surname></string-name>, <article-title>Exploiting robots as healthcare resources for epidemics management and support caregivers</article-title>, in: <source>CEUR Workshop Proceedings</source>, volume <volume>3686</volume>, <year>2024</year>, p. <fpage>1</fpage>-<lpage>10</lpage>.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] <string-name><given-names>M. K.</given-names> <surname>Mihcak</surname></string-name>, <string-name><given-names>I.</given-names> <surname>Kozintsev</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Ramchandran</surname></string-name>, <article-title>Spatially adaptive statistical modeling of wavelet image coefficients and its application to denoising</article-title>, in: <source>1999 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings. ICASSP99 (Cat. No. 99CH36258)</source>, volume <volume>6</volume>, IEEE, <year>1999</year>, pp. <fpage>3253</fpage>-<lpage>3256</lpage>.</mixed-citation>
      </ref>
<ref id="ref7">
        <mixed-citation>[7] <string-name><given-names>E.</given-names> <surname>Iacobelli</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Russo</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Napoli</surname></string-name>, <article-title>A machine learning based real-time application for engagement detection</article-title>, in: <source>CEUR Workshop Proceedings</source>, volume <volume>3695</volume>, <year>2023</year>, p. <fpage>75</fpage>-<lpage>84</lpage>.</mixed-citation>
      </ref>
<ref id="ref8">
        <mixed-citation>[8] <string-name><given-names>N.</given-names> <surname>Brandizzi</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Fanti</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Gallotta</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Russo</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Iocchi</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Nardi</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Napoli</surname></string-name>, <article-title>Unsupervised pose estimation by means of an innovative vision transformer</article-title>, in: <source>Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)</source>, volume <volume>13589 LNAI</volume>, <year>2023</year>, p. <fpage>3</fpage>-<lpage>20</lpage>. doi:10.1007/978-3-031-23480-4_1.</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[18] <string-name><given-names>C.</given-names> <surname>Napoli</surname></string-name>, <string-name><given-names>G.</given-names> <surname>Pappalardo</surname></string-name>, <string-name><given-names>E.</given-names> <surname>Tramontana</surname></string-name>, <article-title>Improving files availability for bittorrent using a diffusion model</article-title>, in: <source>Proceedings of the Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises, WETICE</source>, <year>2014</year>, p. <fpage>191</fpage>-<lpage>196</lpage>. doi:10.1109/WETICE.2014.65.</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>[19] <string-name><given-names>G.</given-names> <surname>Lo Sciuto</surname></string-name>, <string-name><given-names>G.</given-names> <surname>Capizzi</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Shikler</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Napoli</surname></string-name>, <article-title>Organic solar cells defects classification by using a new feature extraction algorithm and an ebnn with an innovative pruning algorithm</article-title>, <source>International Journal of Intelligent Systems</source> <volume>36</volume> (<year>2021</year>) <fpage>2443</fpage>-<lpage>2464</lpage>. doi:10.1002/int.22386.</mixed-citation>
      </ref>
<ref id="ref9">
        <mixed-citation>[9] <string-name><given-names>Y.</given-names> <surname>Chen</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Huang</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Ding</surname></string-name>, <article-title>Camera model identification with residual neural network</article-title>, in: <source>2017 IEEE International Conference on Image Processing (ICIP)</source>, IEEE, <year>2017</year>, pp. <fpage>4337</fpage>-<lpage>4341</lpage>.</mixed-citation>
      </ref>
<ref id="ref10">
        <mixed-citation>[10] <string-name><given-names>V.</given-names> <surname>Ponzi</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Russo</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Wajda</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Napoli</surname></string-name>, <article-title>A comparative study of machine learning approaches for autism detection in children from imaging data</article-title>, in: <source>CEUR Workshop Proceedings</source>, volume <volume>3398</volume>, <year>2022</year>, p. <fpage>9</fpage>-<lpage>15</lpage>.</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>[20] <string-name><given-names>C.</given-names> <surname>Napoli</surname></string-name>, <string-name><given-names>G.</given-names> <surname>Pappalardo</surname></string-name>, <string-name><given-names>E.</given-names> <surname>Tramontana</surname></string-name>, <article-title>An agent-driven semantical identifier using radial basis neural networks and reinforcement learning</article-title>, in: <source>CEUR Workshop Proceedings</source>, volume <volume>1260</volume>, <year>2014</year>. URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-84919742629&amp;partnerID=40&amp;md5=c3ee8a3fa1716b39215326edfc67d955.</mixed-citation>
      </ref>
      </ref>
<ref id="ref11">
        <mixed-citation>[11] <string-name><given-names>X.</given-names> <surname>Ding</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Chen</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Tang</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Huang</surname></string-name>, <article-title>Camera identification based on domain knowledge-driven deep multi-task learning</article-title>, <source>IEEE Access</source> <volume>7</volume> (<year>2019</year>) <fpage>25878</fpage>-<lpage>25890</lpage>.</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>[21] <string-name><given-names>K.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Zuo</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Chen</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Meng</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Zhang</surname></string-name>, <article-title>Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising</article-title>, <source>IEEE Transactions on Image Processing</source> <volume>26</volume> (<year>2017</year>).</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>S.</given-names>
            <surname>Falciglia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Betello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Russo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Napoli</surname>
          </string-name>
          ,
          <article-title>Learning visual stimulus-evoked eeg manifold for neural image classification</article-title>
          ,
          <source>Neurocomputing</source>
          <volume>588</volume>
          (
          <year>2024</year>
          ). doi: 10.1016/j.neucom.2024.127654.
          [22]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Rong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. D.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <article-title>Image source identification using convolutional neural networks in iot environment</article-title>
          ,
          <source>Wireless Communications and Mobile Computing</source>
          <volume>2021</volume>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>S.</given-names>
            <surname>Russo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. E.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Chebana</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Nahili</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Starczewski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Napoli</surname>
          </string-name>
          ,
          <article-title>Analyzing eeg patterns in young adults exposed to different acrophobia levels: a vr study</article-title>
          ,
          <source>Frontiers in Human Neuroscience</source>
          <volume>18</volume>
          (
          <year>2024</year>
          ). doi: 10.3389/fnhum.2024.1348154.
          [23]
          <string-name>
            <given-names>F.</given-names>
            <surname>Marra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Gragnaniello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Verdoliva</surname>
          </string-name>
          ,
          <article-title>On the vulnerability of deep learning to adversarial attacks for camera model identification</article-title>
          ,
          <source>Signal Processing: Image Communication</source>
          <volume>65</volume>
          (
          <year>2018</year>
          )
          <fpage>240</fpage>
          -
          <lpage>248</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>C.</given-names>
            <surname>Napoli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Bonanno</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Capizzi</surname>
          </string-name>
          ,
          <article-title>An hybrid neuro-wavelet approach for long-term prediction of solar wind</article-title>
          , in:
          <source>Proceedings of the International Astronomical Union</source>
          , volume
          <volume>6</volume>
          ,
          <year>2010</year>
          , p.
          <fpage>153</fpage>
          -
          <lpage>155</lpage>
          . doi: 10.1017/S174392131100679X.
          [24]
          <string-name>
            <given-names>L.</given-names>
            <surname>Bondi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Baroffio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Güera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Bestagini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. J.</given-names>
            <surname>Delp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Tubaro</surname>
          </string-name>
          ,
          <article-title>First steps toward camera model identification with convolutional neural networks</article-title>
          ,
          <source>IEEE Signal Processing Letters</source>
          <volume>24</volume>
          (
          <year>2016</year>
          )
          <fpage>259</fpage>
          -
          <lpage>263</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>M.</given-names>
            <surname>Wozniak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Napoli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Tramontana</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Capizzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Lo Sciuto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Nowicki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Starczewski</surname>
          </string-name>
          ,
          <article-title>A multiscale image compressor with rbfnn and discrete</article-title>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>