<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>I. Irum, M. Sharif, M. Raza, S. Mohsin, A Nonlinear Hybrid Filter for Salt &amp; Pepper Noise
Removal from Color Images, Journal of Applied Research and Technology</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.1016/S1665-6423(15)30015-8</article-id>
      <title-group>
        <article-title>Vladimir Lukin, Sergey Krivenko, Fangfang Li, Sergey Abramov and Viktor Makarichev</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>National Aerospace University</institution>
          ,
          <addr-line>17 Chkalova Street, Kharkiv, 61070</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2021</year>
      </pub-date>
      <volume>13</volume>
      <issue>2015</issue>
      <fpage>79</fpage>
      <lpage>85</lpage>
      <abstract>
        <p>It is well known that image processing efficiency considerably depends on image properties. By operations of image processing we mean quality assessment, noise characteristic estimation, lossless and lossy compression, denoising, etc. In many papers, such terms as “image complexity”, “rich image content”, and “highly textural image” are used. Their meaning is intuitively clear and described verbally, but quantitative estimation and analysis of them are limited. In this paper, we show that there is a very high correlation between the performance of different image processing operations, such as the characteristics of lossless and lossy compression, blind variance estimation, denoising, and so on. We also recall which quantitative parameters indirectly describe image complexity for different operations of image processing.</p>
      </abstract>
      <kwd-group>
        <kwd>image complexity</kwd>
        <kwd>performance</kwd>
        <kwd>image processing operations</kwd>
        <kwd>rank correlation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Images have become an essential part of our life [
        <xref ref-type="bibr" rid="ref1 ref2 ref3">1-3</xref>
        ]. They are applied in numerous areas like
entertainment and advertising [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ], ecological monitoring, agriculture [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], forestry [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], etc. Images are
subject to many different operations at the stages of acquisition, transfer, pre-processing,
enhancement, and so on [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. These operations include quality assessment [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], blind evaluation of noise type and
characteristics [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], denoising and enhancement [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], lossless and lossy compression [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], object
detection and classification [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], etc.
      </p>
      <p>Certainly, not all of these operations are employed in an image processing chain for a given image
and a given application; some of them are not needed or can be skipped. Meanwhile, practically every
scientist or researcher dealing with image processing knows that some images are relatively easy to
process successfully, while the processing of others is problematic (difficult, or does not bring obvious
success). The latter usually happens for images that have a small percentage of pixels belonging to
homogeneous regions and contain a lot of small-sized objects, edges, and textures. For example,
designers of nonlinear filters (denoising techniques) for grayscale and color images know [12] that it is
considerably more difficult to get good results for the test image Baboon than for the test images Lena
and Peppers. This is one of the reasons why it is usually recommended to test the performance of an
image processing method on a set of images with different properties. This was also the reason for
creating image databases like LIVE (http://live.ece.utexas.edu/research/quality/), TID2008
(http://www.ponomarenko.info/tid2008.htm), TID2013 (http://www.ponomarenko.info/tid2013.htm),
Kodak (http://r0k.us/graphics/kodak/), etc. Nowadays, there is a tendency toward creating medical (for
example, the SICAS Medical Image Repository, https://www.smir.ch) and remote sensing (for instance,
ESA “Sentinel-2”, https://sentinel.esa.int/web/sentinel/missions/sentinel-2) image databases for
different purposes, in particular for training modern neural networks (deep learning), for which a huge
volume of training and verification data is needed. There are also the texture image database USC-SIPI
(https://sipi.usc.edu/database/), the European Space Agency database of color images
(https://www.esa.int/ESA_Multimedia/Images), and many others. The existence of such databases
offers such opportunities as statistical analysis of indicators of image processing efficiency [13],
detection of outlying results, anomalies and “strange” images [14], thorough comparisons of methods’
performance [15], scatter-plot obtaining and regression (curve fitting) [16, 17], etc.</p>
      <p>2022 Copyright for this paper by its authors.</p>
      <p>As a result of such analysis for a particular application, images are often classified into simple
structure and complex structure (textural) ones, although there are no objective criteria for such a
classification (the criteria are mostly intuitive or expressed verbally). Because of this, the goal of this
paper is to generalize the problem, to present some criteria characterizing image complexity, and to
demonstrate the inter-connection between such criteria and image processing efficiency.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Lossless and lossy compression</title>
      <p>Probably the best-known characteristic (criterion) of image complexity is its entropy, which directly
influences the image compression ratio for lossless compression (coding) techniques [18, 19]. Let us give
one example. Consider the 25 color test images used in the databases TID2008 and TID2013. Twenty-four
of them are natural scene images and one (#25) is artificial. Small copies of these images are given in
Fig. 1. As one can see, the images are of different complexity, where the test image #13 is the most complex
since it is practically impossible to find homogeneous (non-textural) fragments in it. Meanwhile, there
are also quite simple structure images like, e.g., the test images ## 3 or 20 with rather large
homogeneous regions.</p>
      <p>These properties manifest themselves in the entropy E calculated for all 25 test images. Its values are
given in Fig. 2,a. They range from about 6 for the test image # 2 to about 7.5 for the test
images ## 13, 14, and 19. Alongside, Fig. 2 shows the values of compression ratio (CR) attained by
ZIP. Comparison of the data in Figures 2,a and 2,b clearly shows that a larger CR is usually obtained for
images having smaller entropy values, and vice versa. The Spearman rank order correlation coefficient
(SROCC) is equal to -0.62, and its quite large absolute value shows that there is a rather strict rank
correlation between E and CR.</p>
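      <p>The entropy-vs-CR relation above can be illustrated with a short, self-contained sketch. It is an illustration only: the synthetic "images", their size, and the use of zlib (a DEFLATE coder, as used by ZIP) are our assumptions, not the exact setup of the paper.</p>

```python
import math
import random
import zlib

def entropy(pixels):
    """Shannon entropy E (bits per pixel) of an 8-bit image given as a flat sequence."""
    hist = [0] * 256
    for v in pixels:
        hist[v] += 1
    n = len(pixels)
    return -sum((h / n) * math.log2(h / n) for h in hist if h)

def compression_ratio(pixels):
    """CR of a ZIP-style (DEFLATE) lossless coder: raw size / compressed size."""
    raw = bytes(pixels)
    return len(raw) / len(zlib.compress(raw, 9))

random.seed(0)
flat = [128] * 65536                                      # one large homogeneous region
textured = [random.randrange(256) for _ in range(65536)]  # noise-like "texture"

# A simple-structure image has low entropy and compresses well;
# a complex (textural) one has high entropy and a CR close to 1.
assert entropy(flat) < entropy(textured)
assert compression_ratio(flat) > compression_ratio(textured)
```

      <p>Ranking E against CR in this way over a set of real images is exactly what the SROCC of -0.62 summarizes.</p>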
      <p>Let us also present some data for grayscale images. In the paper [14], a set containing 61 images has
been analyzed. This set included standard test images such as Baboon, Lena and others, a lot of texture
images from the USC-SIPI database, and some others. Entropy values have been determined, and they
are shown by dots of different colors in Fig. 3. As one can see, E values range from 0.5 to 7.9, whilst the CR
for lossless compression, CRlossless, varies from 100.7 to 1. Correlation factors for these two indicators
(parameters) are the following: -0.74 for Pearson correlation, -0.88 for Spearman rank correlation, and
-0.76 for Kendall rank correlation. In other words, a larger CRlossless usually relates to a simpler structure
image with smaller entropy. In [14], it has been proposed to divide images into three groups: 1) complex
structure images having E&gt;7; 2) middle complexity images for which 6≤E≤7; 3) simple structure images
with E&lt;6. Fig. 4 shows examples for all three groups. The entropy values are equal to 5.82, 6.80, and
7.36, respectively.</p>
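      <p>The three-group division proposed in [14] reduces to two entropy thresholds; a minimal sketch (the function name is ours):</p>

```python
def complexity_group(e):
    """Three-group image classification by entropy E, as proposed in [14]:
    complex for E > 7, middle for 6 <= E <= 7, simple for E < 6."""
    if e > 7.0:
        return "complex"
    if e >= 6.0:
        return "middle"
    return "simple"

# The entropy values quoted for the three example images in the text:
print([complexity_group(e) for e in (5.82, 6.80, 7.36)])  # -> ['simple', 'middle', 'complex']
```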
      <p>It is also interesting that among the simple structure images (E&lt;6) there are three images that have very
small E (about 3 and less). Analysis shows that these are artificially created images or scanned copies
of such images (see Fig. 5) that have large homogeneous regions (or a large percentage of pixels that
belong to such regions). Further analysis will show that their performance characteristics for image processing
operations can differ a lot from the performance characteristics for natural scene images. This indicates that
such artificial images must be used with care in the analysis of different operations of image
processing (denoising, compression, classification), since conclusions and tendencies obtained from
the corresponding studies can be wrong or, at least, deviate from the tendencies and conclusions for the most
typical images met in practice. In other words, if there is no possibility to carry out studies concerning
image processing other than by using artificially created images, it is worth making such images as close to
natural ones as possible.</p>
      <p>
        In lossy image compression [
        <xref ref-type="bibr" rid="ref10">10, 14, 16</xref>
        ], there is a much wider variety of performance
characteristics. First, CR can be varied, and this is done in different ways in different coders: CR can
be controlled via a quality factor, a scaling factor, a quantization step, bits per pixel (BPP), and so on.
Second, since distortions (losses) are introduced, they can be characterized using various criteria.
These can be traditional criteria such as the mean square error (MSE) or the peak signal-to-noise ratio (PSNR),
which is strictly connected with MSE; meanwhile, visual quality metrics (see [14, 26] and the metric
PSNR-HVS-M, http://www.ponomarenko.info/psnrhvsm.htm) are also widely used since visual inspection is
often the ultimate purpose of compressed images.
      </p>
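      <p>The strict connection between PSNR and MSE mentioned above is PSNR = 10·log10(peak²/MSE); a minimal sketch with toy pixel values of our own choosing:</p>

```python
import math

def mse(ref, img):
    """Mean square error between two equally sized 8-bit images (flat sequences)."""
    return sum((a - b) ** 2 for a, b in zip(ref, img)) / len(ref)

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB, strictly tied to MSE."""
    m = mse(ref, img)
    return float("inf") if m == 0 else 10.0 * math.log10(peak * peak / m)

ref = [100, 120, 140, 160]
deg = [102, 118, 143, 158]   # a slightly distorted copy: MSE = 5.25
print(round(psnr(ref, deg), 2))  # -> 40.93
```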
      <p>Figure 1: Small copies of the 25 test images from the databases TID2008 and TID2013</p>
      <p>Visual quality metrics take into account some peculiarities of the human visual system (HVS). For
example, the metric PSNR-HVS-M (see its official website, http://www.ponomarenko.info/psnrhvsm.htm)
takes into account two properties: a larger sensitivity to distortions in low spatial
frequencies and the masking effect of texture. PSNR-HVS-M is expressed in dB and has smaller values
for more degraded images. Due to the aforementioned properties, PSNR-HVS-M is usually larger than
PSNR. This holds until the distortions due to lossy compression become too large and spatially
correlated; then the masking effect disappears and PSNR and PSNR-HVS-M become practically equal, or
PSNR-HVS-M even turns out to be smaller than PSNR. Such an example is shown in Fig. 5,a for the
coder SPIHT (http://www.spiht.com/). Recall that since compression parameters can be varied, we have
to analyze the so-called rate/distortion curves, which can be presented in different ways. For SPIHT, this
is the dependence of a metric on BPP whilst, e.g., for the coder BPG (better portable graphics,
https://bellard.org/bpg/) these are dependences of a metric on the parameter Q that controls compression
(Q takes integer values from 1 to 51, and a larger Q results in worse quality). Examples of
rate/distortion curves are given in Fig. 5,b. Note that usually the rate/distortion curves are smooth
monotonous functions (Fig. 5,b). Meanwhile, they can be “not very smooth”, like in Fig. 5,a, and even
not monotonous, as happens for “strange images” – see the examples in Fig. 5,c and 5,d.</p>
      <p>The behavior of the rate/distortion curves for particular images has common features but is individual.
The examples in Fig. 6 clearly show this for two coders: SPIHT and AGU. As one can see, there are
substantial differences in PSNR values for the same BPP for SPIHT and for the same QS for AGU.</p>
      <p>Figure 5: PSNR and PSNR-HVS-M vs. BPP for SPIHT, the test image Goldhill (a); PSNR and PSNR-HVS-M
vs. Q for BPG, the test image Goldhill (b); PSNR vs. quantization step (QS) for the coder AGU
(http://www.ponomarenko.info/agu.htm) for “strange images” (c); PSNR-HVS-M vs. QS for AGU for
“strange images” (d)</p>
      <p>Figure 6: Examples of rate/distortion curves for particular images for different coders: PSNR vs. BPP
for SPIHT (a) and PSNR vs. QS for AGU (b)</p>
      <p>Although some curves intersect, it is quite probable that if a metric value for image 1 is larger
than for image 2 at BPP1, it will also be larger at BPP2. The same holds for the rate/distortion curves of
the coder AGU – the curves for more complex structure images go “below” the curves for simpler
structure images.</p>
      <p>It is easy to show that entropy, which characterizes image complexity, is highly correlated with the
performance of lossy image compression. But it is also possible to show that other performance
parameters are highly correlated as well. We have analyzed CR (the compression ratio for ZIP, highly
correlated with entropy) and the parameters PSNR40, PSNR35, PSNR30, PSNRHVSM40,
PSNRHVSM35, and PSNRHVSM30 where, e.g., PSNR40 means the PSNR provided by the AGU coder for
QS=40. It turns out that there is correlation close to unity (both Pearson and Spearman) for such pairs of
parameters as PSNR40 and PSNR35, PSNR30 and PSNR40, PSNR35 and PSNR30, PSNRHVSM40
and PSNRHVSM35, PSNRHVSM30 and PSNRHVSM40, PSNRHVSM35 and PSNRHVSM30. There
is also a rather high correlation (about 0.6) for such pairs of parameters as PSNR30 and PSNRHVSM30,
PSNR35 and PSNRHVSM35. This means that parameters such as, e.g., PSNR30 or PSNR35 are able
to characterize image complexity. Larger values of these parameters relate to simpler structure images.
The correlation of these parameters with CR is not high – the SROCC is of the order of 0.13.</p>
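      <p>The SROCC used throughout this section is simply the Pearson correlation computed on ranks. A self-contained sketch (the toy entropy/CR values are ours, chosen to be perfectly anti-monotone):</p>

```python
def ranks(values):
    """Average 1-based ranks with tie handling, as used in Spearman's rho."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # mean rank of the tied group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def srocc(x, y):
    """Spearman rank order correlation = Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Entropy up, compression ratio down -> strong negative rank correlation:
e  = [6.0, 6.4, 6.9, 7.2, 7.5]
cr = [2.5, 2.1, 1.8, 1.4, 1.1]
print(round(srocc(e, cr), 2))   # -> -1.0
```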
      <p>Note that there is no necessity to carry out compression and decompression of an image compressed
by AGU with QS=30 to determine PSNR30 and compare it to a threshold in order to classify this image. There
is a very fast method that allows predicting the PSNR for a given QS by estimating the MSE in a limited
number (e.g., 500) of 8x8 blocks for which the direct DCT is carried out.</p>
      <p>All data and conclusions that are given above relate to images that are practically noise-free. If
images to be compressed are noisy, lossless compression is practically useless since it produces very
small values of CR whilst lossy compression has specific features that will be considered later.</p>
      <p>Two things are worth noting here. First, analysis of entropy or of the CR for ZIP can be useful in other
applications. For example, analysis of the CR for electrocardiograms compressed by ZIP has made it possible to
assess ECG quality. Analysis of entropy can also indicate that images have been pre-compressed. For
example, Sentinel sites offer images that have been slightly compressed by JPEG2000. This leads to
entropy values that are slightly smaller than for uncompressed images. For example, for the visualized
three-channel images in Fig. 7, the entropy values are equal to 4.15, 5.67, 4.46, and 5.72, respectively.
As one can see, the entropy values are larger for the images SS2 and SS4, which are obviously more textural.
These images are compressed worse (the CR for them is smaller than for the images SS1 and SS3 for the
same quality characterized by PSNR or PSNR-HVS-M) and they are classified worse [18], irrespective
of whether the compression techniques are based on DCT or on atomic functions [18].</p>
    </sec>
    <sec id="sec-3">
      <title>3. Noise characteristic estimation, image filtering and lossy compression of noisy images</title>
      <p>
        In many practical situations, the noise type and/or characteristics are unknown a priori, and this creates
problems for image processing since many techniques exploit this information to provide high
efficiency (good examples are the well-known local-statistic Lee and sigma filters, which have versions for
additive and multiplicative noise). Thus, the noise type and characteristics have to be estimated once for a
certain type of images with stable noise characteristics, or each time if these characteristics can vary. If
the number of such images or their components (as in hyperspectral data) is large, it is difficult to carry
out such estimation “manually” (in an interactive way) using special tools of image analysis. Then one
has to apply automatic or blind methods for estimation of noise characteristics [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>Let us quickly recall what characteristics of the noise can be estimated. If the noise is white, these
are statistical characteristics such as the noise variance for additive noise, the relative variance if the noise is purely
multiplicative, and two or several parameters if the noise is signal-dependent. If the noise is not white, then its
spatial spectral or correlation characteristics have to be estimated. The most complicated situation is,
probably, that of noise with non-stationary characteristics, where variations of spatial and spectral
properties are possible.</p>
      <p>The main problem in blind estimation of noise characteristics (BENC) is to separate the noise from the
signal/noise (image/noise) mixture. Pre-filtering and further analysis of the difference data is often
inefficient, since filtering without exploiting noise characteristics is far from perfect and, thus, the
difference signal, in addition to the noise, contains errors due to imperfect pre-filtering. Most existing
approaches try to detect (extract) image regions (blocks) that are most probably homogeneous and for
which it is possible to obtain local estimates of noise characteristics that are quite close to the true ones.
For this purpose, different methods and mathematical tools are used. Fractal theory can be employed
to discriminate texture-informative and noise-informative regions. Robust estimation and robust curve
fitting techniques are used to minimize the influence of abnormal (highly erroneous) local estimates of
noise characteristics obtained in heterogeneous (locally active) blocks falling into texture regions and/or
edge/detail neighborhoods [21].</p>
      <p>Figure 7: Three-channel Sentinel-2 images SS1 (a), SS2 (b), SS3 (c), and SS4 (d)</p>
      <p>Here we do not want to go into theory and practice of BENC deeply. Our desire is to demonstrate
that image complexity and noise characteristic estimates are highly correlated. For this purpose, let us
consider the most typical case of zero mean additive white Gaussian noise (AWGN). Some data are
presented for the so-called minimal inter-quantile method of noise variance estimation in grayscale
images in [21]. It is shown that the estimation bias and variance are considerably larger for the highly textural
image Baboon compared to the middle complexity images Barbara and Goldhill.</p>
      <p>We have analyzed a more complex case, where it has been supposed that the noise is independent
in the color (RGB) components and has equal variances. A modification of the method [21] has been
applied: instead of obtaining local estimates in blocks in the spatial domain, estimates of the noise standard
deviation have been obtained in 8x8 blocks using robust processing of discrete cosine transform (DCT)
coefficients. An example of the obtained estimates of the noise standard deviation is presented in Fig.
8. Note that it is usually desired to get standard deviation estimates that differ from the true values
by no more than 10%. Because of this, two horizontal lines show the desired margins of estimates.</p>
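      <p>DCT-domain robust estimation can be sketched as follows. This is a simplified stand-in rather than the exact method of [21]: for each 8x8 block it takes a scaled median of the absolute AC DCT coefficients (a MAD-style estimator; 0.6745 is the median of |N(0,1)|), and then the median over blocks. The synthetic homogeneous blocks are our assumption:</p>

```python
import math
import random

def dct2_8x8(block):
    """Orthonormal 2-D DCT-II of an 8x8 block (direct separable formula)."""
    def c(u):
        return math.sqrt(1 / 8) if u == 0 else math.sqrt(2 / 8)
    out = [[0.0] * 8 for _ in range(8)]
    for u in range(8):
        for v in range(8):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / 16)
                    * math.cos((2 * y + 1) * v * math.pi / 16)
                    for x in range(8) for y in range(8))
            out[u][v] = c(u) * c(v) * s
    return out

def sigma_estimate(blocks):
    """Per-block noise std estimates from AC DCT coefficients
    (median of |AC| / 0.6745), then the median over blocks."""
    per_block = []
    for b in blocks:
        d = dct2_8x8(b)
        ac = sorted(abs(d[u][v]) for u in range(8) for v in range(8) if (u, v) != (0, 0))
        per_block.append(ac[len(ac) // 2] / 0.6745)
    per_block.sort()
    return per_block[len(per_block) // 2]

random.seed(1)
true_sigma = 10.0
# Homogeneous blocks corrupted by AWGN (the easy case for such estimators):
blocks = [[[128 + random.gauss(0, true_sigma) for _ in range(8)] for _ in range(8)]
          for _ in range(64)]
print(round(sigma_estimate(blocks), 1))
```

      <p>Because the DCT is orthonormal, AWGN keeps its standard deviation in the transform domain, so the estimate lands near the true value of 10; on textural blocks the AC coefficients also carry signal energy, which is exactly why such estimates become abnormally large.</p>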
      <p>As one can see, there are some estimates below the lower margin, e.g., for the red and green components
of the test image # 6. These estimates are smaller than desired due to the noise saturation (clipping) effects
observed for very bright (almost white) and very dark (almost black) regions of images.</p>
      <p>To give a better explanation of basic principle of operation for many BENC methods, consider the
histograms of local estimate distribution for three cases. Fig. 9,a shows a typical distribution obtained
for the red component of the test image # 3 from TID2008 (Fig. 1) for the noise variance equal to 65.
The local estimates of the noise variance have been obtained in 5x5 blocks in spatial domain. As one
can see, there is a mode of local estimates (approximately equal to the true value) that corresponds to
normal local estimates, and a right-hand heavy tail that relates to abnormal local estimates. Thus, the
task is to find the distribution mode quite accurately. However, there are cases when there are two
modes where one mode corresponds to erroneous local estimates (Fig. 9,b) that result from clipping. It
is also possible that there are very many abnormal local estimates that form their own mode that has no
relation to the true value of the noise variance.</p>
      <p>One observation is that the estimates of the noise standard deviation are the largest for the textural images
## 5, 8, 13, 18. This means that, despite the efforts to minimize the influence of image content on noise
characteristic estimation, this influence still exists.</p>
      <p>Fig. 10 shows the noise variance estimates for the inter-quantile method operating in the spatial domain.
Again, most estimates are within the required limits (20% with respect to the true value). However, there
are two estimates (for the image # 20, green and red components) that are smaller than required due to
clipping effects and a few estimates that are larger than required due to high complexity of image
content.</p>
      <p>We have calculated SROCC between variance estimates and E (calculated for noise-free
components of color images). The SROCC values are equal to 0.53, 0.28, and 0.18 for red, green, and
blue components, respectively. Therefore, the estimates have a certain correlation with entropy.</p>
      <p>One problem is that this SROCC relates the variance estimates to the entropy of noise-free
images, which are absent in practice. Hence, we have also calculated E for the noisy images and
determined the SROCC between E and the estimates in Fig. 10. The values are equal to 0.31, 0.07, and 0.03,
respectively. Thus, for noisy images, entropy cannot serve as a feature characterizing their
complexity, and some other parameters (features) should be employed. (Note that the E values for noisy
images range from 6.25 for the image # 2 to 7.62 for the image # 25.)</p>
      <p>^2n 150
300
250
200
100
50
0
150
a
150
b
200
250</p>
      <p>300
200
250</p>
      <p>300
Blue component
Green component</p>
      <p>Red component</p>
      <p>Meanwhile, as auxiliary information, the inter-quantile method [21] is able to produce an estimate
of the percentage p of homogeneous image blocks. Such estimates for the images in TID2008 corrupted by
AWGN with the same variance in all components are represented in Fig. 11. As one can see, this
percentage is the largest for simple structure images ## 3, 15, and 23 that really contain quite large
quasi-homogeneous regions. On the contrary, the percentage estimates are the smallest for highly
textural images ## 5, 8, 18. For the image # 13 which is textural as well, the percentage estimate is
larger since the local estimate histogram has a very specific appearance (Fig. 10,c).</p>
      <p>Let us now analyze how image complexity influences visual quality of noisy images and efficiency
of their denoising. It has been mentioned earlier that noise can be fully or partially masked by textures
and then the image quality is perceived as better than when the noise is visible. Because of masking effects,
PSNR-HVS-M is larger than PSNR for a given image if the masking effect takes place. Fig. 12 shows
PSNR-HVS-M values (thick lines of the corresponding color) for component noisy images of TID2008
for the case of AWGN with variance equal to 65 (PSNR is about 30 dB). As one can see, all
PSNR-HVS-M values are larger than 30 dB, where the largest values take place for the most textural (complex
structure) images ## 1, 5, 6, 13, 14, 25, 18. Meanwhile, the smallest values are observed for the simple
structure images ## 3, 15, 16, 20, and 23. The SROCC for PSNR-HVS-M and E calculated for noisy
images is equal to 0.55, 0.59, and 0.57 for the three color components, respectively. This shows that the
correlation is quite high but not large enough to accurately predict performance characteristics.</p>
      <p>Figure 12: PSNR-HVS-M k (n), dB, versus the image number n for the 25 test images</p>
      <p>Concerning image denoising, there are numerous existing methods. Our goal is not to study and
compare them, but to analyze general tendencies of denoising efficiency. In this sense, it is worth
considering potential and attained (practical) efficiency of image filtering. In the sense of potential
efficiency, very interesting results have been obtained by P. Milanfar et al. in [22-25]. We have
determined the potential efficiency for grayscale images corrupted by AWGN and processed by non-local
filters. Some data have been obtained for component images of TID2008 (see the data in Table 1 for AWGN
variance equal to 65). Potential (lower bound) efficiency is characterized by MSElb, the smaller the
better. As one can see, MSElb for different color components are practically the same. This is explained
by the fact that color components of the same image are highly correlated and, thus, are practically of
the same complexity. The smallest MSElb are observed for simpler structure images, e.g., images ## 10
and 23. In turn, the largest values are observed for the most complex structure images ## 13, 1, and 5.
It is interesting that MSElb is only 1.5 times smaller than noise variance, i.e. even potentially the noise
removal efficiency is very low.</p>
      <p>Among possible methods of image denoising, consider the DCT-based filter. It is one of the best
transform-based filters, being only a little worse than the BM3D filter [15]. The MSE values
obtained for the DCT-based filter (denoted as MSEDCT) are given in Table 1. As one can see, MSEDCT
are almost the same as MSElb for complex structure images. Meanwhile, there are substantial differences
between MSEDCT and MSElb (they differ severalfold) for simple structure images.</p>
      <p>However, this does not mean that the efficiency of denoising for simple structure images is low.
Comparison of PSNR-HVS-M before and after filtering (Fig. 12) shows that the largest differences in
visual quality take place just for simple structure images whilst filtering can be practically useless for
the complex structure images (see data for the image # 13). Thus, image complexity plays the key role
in image filtering efficiency. Additional materials concerning noise variance estimation and filtering
efficiency can be found in [26].</p>
      <p>Keeping this in mind, we have started to develop a direction of studies dealing with the prediction of
image denoising efficiency and with deciding on the expedience of applying filtering to a given image.
It contains a thorough analysis of filtering efficiency for quite many test images, a set of modern filters
and several quality metrics. The outcomes of this study are very interesting. First, it is shown that
filtering efficiency can be predicted for many modern filters including transform-based, nonlocal and
other ones, i.e. filters based on different principles. Second, it is demonstrated that filtering efficiency
for many modern filters is of the same order and close to potential efficiency for texture images. Third,
improvement of many metrics due to filtering can be quite accurately predicted even if one or two input
parameters are used.</p>
      <p>Here it is worth recalling the main principle and the assumptions put into the basis of filtering efficiency
prediction. The following has been assumed: 1) there is a parameter that adequately characterizes
filtering efficiency (e.g., improvement of PSNR or PSNR-HVS-M due to filtering); 2) there exist one
or several parameters that adequately characterize image and noise properties; 3) there is a rather strict
dependence between input parameter(s) and a considered parameter characterizing filtering efficiency
where this dependence is established in advance (before filtering) and it is quite simple; 4) input
parameter(s) can be calculated easily and quickly (faster than image denoising) to make possible a
decision on whether it is worth filtering a given image or whether it is reasonable to skip denoising and save
time and resources.</p>
      <p>It has been shown that all these assumptions are valid and efficient prediction is possible. Many
variants have been proposed. One of the main input parameters is Pkσ (or its variants for
signal-dependent noise), where k is a factor and σ is the AWGN standard deviation, supposed to be known in
advance. The parameter Pkσ is determined as the mean of local estimates Pkσ(n), n=1,…,N, in N 8x8 pixel
blocks. A local estimate Pkσ(n) is calculated as Nkσ(n)/63 for the n-th 8x8 pixel block where Nkσ(n) is
the number of DCT coefficients that have absolute values smaller than kσ (the DCT coefficient that
corresponds to the block mean is excluded from analysis, k is usually set equal to 2 although other
variants are possible, e.g., P0.5σ can be analyzed).</p>
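      <p>The parameter Pkσ can be computed directly from this definition. A sketch under our own assumptions: an orthonormal DCT implementation, and pure-noise blocks standing in for a simple-structure noisy image.</p>

```python
import math
import random

def dct2_8x8(block):
    """Orthonormal 2-D DCT-II of an 8x8 block."""
    def c(u):
        return math.sqrt(1 / 8) if u == 0 else math.sqrt(2 / 8)
    return [[c(u) * c(v) * sum(block[x][y]
                               * math.cos((2 * x + 1) * u * math.pi / 16)
                               * math.cos((2 * y + 1) * v * math.pi / 16)
                               for x in range(8) for y in range(8))
             for v in range(8)] for u in range(8)]

def p_k_sigma(blocks, sigma, k=2.0):
    """P_ksigma: mean over N blocks of N_ksigma(n)/63, where N_ksigma(n) counts
    DCT coefficients with |coef| < k*sigma, the DC (block-mean) term excluded."""
    total = 0.0
    for b in blocks:
        d = dct2_8x8(b)
        n_small = sum(1 for u in range(8) for v in range(8)
                      if (u, v) != (0, 0) and abs(d[u][v]) < k * sigma)
        total += n_small / 63.0
    return total / len(blocks)

random.seed(2)
sigma = 10.0
# Homogeneous blocks + AWGN: nearly all AC coefficients fall below 2*sigma,
# so P_2sigma comes out close to 1 ("simple structure and/or intense noise").
noise_blocks = [[[128 + random.gauss(0, sigma) for _ in range(8)] for _ in range(8)]
                for _ in range(32)]
print(round(p_k_sigma(noise_blocks, sigma), 2))
```

      <p>On a textural image, many AC coefficients would exceed kσ because they carry signal energy, pushing Pkσ down, which is exactly the behavior the text describes.</p>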
      <p>The parameter Pkσ jointly characterizes image complexity and noise intensity. Its small values (for
example, P0.5σ about 0.15) correspond to complex structure images and/or low intensity noise. On the
contrary, large values (for example, P0.5σ about 0.35) relate to simple structure images and/or high
intensity noise for which the filtering efficiency is usually high, i.e. large improvements of PSNR and
PSNR-HVS-M are observed. The difference in these improvements for images of different complexity
can be up to 10 dB.</p>
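      <p>As a rough illustration of how P0.5σ could drive a skip-or-filter decision: the thresholds below only paraphrase the ~0.15 and ~0.35 values quoted above and are not taken from the paper.</p>

```python
def expected_filtering_gain(p):
    """Hypothetical interpretation of P_0.5sigma: ~0.15 was quoted for complex
    structure / weak noise, ~0.35 for simple structure / intense noise.
    The 0.20 and 0.30 cut-offs are illustrative, not from the paper."""
    if p >= 0.30:
        return "high expected gain: filtering is worth applying"
    if p <= 0.20:
        return "low expected gain: denoising may be skipped"
    return "intermediate: predict the metric improvement before deciding"

print(expected_filtering_gain(0.35))
print(expected_filtering_gain(0.15))
```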
      <p>The use of more than one input parameter improves prediction making it more accurate. This can
be illustrated by the plot in Fig. 13 where points show the estimated parameters for images used in
analysis and the regression plane is presented (fitted). Note that it is not a problem to calculate the noisy
image PSNR if the noise variance is known a priori or pre-estimated with high accuracy, whilst σ has to be
known for the calculation of P0.5σ as well. Also note that more complicated but more accurate prediction
approaches have been proposed recently based on trained neural networks [26]. They are applicable to
images corrupted by signal-dependent spatially correlated noise (speckle) typical for synthetic aperture
radar images produced by Sentinel-1. An advantage of this approach is that, based on prediction carried
out for the Lee filter with different scanning window sizes, the optimal size is recommended.</p>
      <p>One idea that follows from the results obtained in [26] is that image quality and complexity can be
assessed using neural network training approach.</p>
      <p>The last application considered here is lossy compression of noisy images. A known peculiarity
is the noise filtering effect [27]. Another peculiarity is that performance can be analyzed with respect
to the noisy image (subject to compression) and to the noise-free (ideal) image. Dependences of the traditional type,
i.e., when a metric is determined for the compressed image and the image subject to compression, behave
as conventionally expected: the compressed image quality decreases as CR increases. Here
it is worth mentioning the recently obtained dependences for the BPG coder; an example is given in
Fig. 14,a. As one can see, the dependences are almost linear in the range from Q≈15 to Q≈36, where
PSNR is approximately equal to the input PSNR (about 26.7 dB for the considered case of AWGN with
σ=12); here Q is the parameter that controls compression in the BPG coder.</p>
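<p>The input PSNR mentioned above follows directly from the noise variance; a one-line check (Python, 8-bit data assumed):</p>

```python
import math

# Input PSNR of an 8-bit image corrupted by AWGN with standard deviation
# sigma: PSNR_inp = 10 * log10(255^2 / sigma^2); no image data are needed.
def input_psnr(sigma):
    return 10.0 * math.log10(255.0 ** 2 / sigma ** 2)

print(input_psnr(12.0))  # roughly 26.5 dB for sigma = 12 (variance 144)
```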
      <p>Meanwhile, the dependences of PSNRtc calculated between the compressed noisy and noise-free (true)
images behave in a specific manner (Fig. 14,b). Maxima of these curves can be observed for such
Q that PSNRnc≈PSNRinp=10lg(255²/σ²). Then, if σ² is known, it is easy to determine Q.
Figure 14: Examples of dependences PSNRnc(Q) (a) and PSNRtc(Q) (b) for three test grayscale images
corrupted by AWGN with σ²=144</p>
      <p>The argument for which the maximum takes place is the so-called optimal operation point (OOP). If it
exists (and this happens for simple-structure images corrupted by rather intensive noise), it is worth
compressing a noisy image at the OOP or in its neighborhood (we say neighborhood because the noise standard
deviation may be known or pre-estimated only approximately). An example of image compression at the OOP is
given in Fig. 15. The noise filtering effect is seen in homogeneous image regions.</p>
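<p>If σ is known, the search for the OOP neighborhood can be sketched as follows (Python; compress_decompress is a hypothetical stand-in for a real coder such as BPG, and images are flat lists of intensities for simplicity):</p>

```python
import math

# Sketch of locating the OOP neighborhood when the noise variance is known.
# compress_decompress(img, q) is a hypothetical stand-in for a real coder;
# the OOP neighborhood corresponds to PSNRnc close to the input PSNR.

def psnr(a, b):
    """PSNR (dB) between two equal-size 8-bit images given as flat lists."""
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return float("inf") if mse == 0 else 10.0 * math.log10(255.0 ** 2 / mse)

def find_oop_q(noisy, sigma, compress_decompress, q_values):
    # Pick the q whose decompressed output is closest (in PSNR w.r.t. the
    # noisy input) to PSNR_inp = 10 * log10(255^2 / sigma^2).
    psnr_inp = 10.0 * math.log10(255.0 ** 2 / sigma ** 2)
    return min(q_values,
               key=lambda q: abs(psnr(compress_decompress(noisy, q), noisy) - psnr_inp))
```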
      <p>In turn, if the OOP does not exist, it is worth using a slightly smaller Q than would correspond to the OOP (e.g., Q=33 for
the case considered in Fig. 14 for the test image Diego, which is highly textural). Thus, again, the image
processing parameters should be adapted to image complexity.</p>
      <p>The procedure for predicting OOP existence has not been developed and tested for the BPG coder yet.
However, it has been developed and tested for the DCT-based coder AGU [25]. Not surprisingly, the
parameter P2σ again allows prediction (see the scatter-plot in Fig. 16). This time it allows predicting the
difference of a metric between the potential OOP and the noisy image. A fifth-order polynomial fits the data
very well and shows that for P2σ&gt;0.81 the OOP exists with high probability. Recall that this happens
if an image has a quite simple structure and/or the noise is intensive.</p>
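<p>This prediction scheme can be illustrated as follows (Python with NumPy; the scatter data below are invented, whereas the real fit in [25] uses the points of Fig. 16):</p>

```python
import numpy as np

# Illustration of OOP-existence prediction: fit a fifth-order polynomial of
# P_2sigma to the metric improvement at the potential OOP, then declare that
# an OOP exists when the predicted improvement is positive. The sample data
# below are invented for demonstration only.
p2s = np.array([0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90, 0.95])
improvement = np.array([-2.0, -1.6, -1.1, -0.6, -0.1, 0.5, 1.2, 2.0])  # dB

coeffs = np.polyfit(p2s, improvement, 5)  # fifth-order polynomial fit

def oop_exists(p):
    """OOP is predicted to exist if the fitted improvement is positive."""
    return bool(np.polyval(coeffs, p) > 0)
```

With this invented data the fitted curve crosses zero near P2σ≈0.81, mirroring the threshold reported for AGU.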
      <p>We hope that the methodology for predicting OOP existence for the BPG coder will be developed
soon. Meanwhile, not only OOP existence can be predicted (it is supposed that an OOP according to a
given metric exists if the predicted metric improvement is positive); the metric value itself can be
predicted as well.</p>
      <p>The presented materials show that, for the case of noisy images, not entropy but other parameters
relate to image complexity. An image can be considered simple if the efficiency of its processing is high
(the metric improvement due to filtering is large, an OOP exists for lossy compression). Obviously, noise
properties play a role too.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusions</title>
      <p>In this paper, we have tried to provide a systematic analysis of image complexity and its influence
on image processing performance. It has been shown that, for noise-free images, entropy can be a
good characteristic of image complexity that strongly affects lossless compression and has a considerable
impact on the characteristics of lossy compression.</p>
      <p>If noise is present, entropy becomes almost useless for characterizing image complexity. Other
statistical parameters that jointly describe image properties and noise intensity can be used as indicators
of image complexity. Image complexity considerably influences the potential and practical efficiency
of noise suppression in terms of conventional and visual quality metrics. It also has an impact on the existence
of an optimal operation point in lossy compression of images corrupted by different types of noise for
various compression techniques.</p>
      <p>Meanwhile, the task of designing a unified metric characterizing image complexity has not been
solved yet. Perhaps the use of neural networks able to combine different indicators in a nonlinear manner can
help.</p>
    </sec>
    <sec id="sec-5">
      <title>5. References</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>E.</given-names>
            <surname>Provenzi</surname>
          </string-name>
          (Ed.), Special issue:
          <source>Color Image Processing</source>
          ,
          <year>2018</year>
          . doi: 10.3390/books978-3-03842-958-6.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>D. K.</given-names>
            <surname>Pillai</surname>
          </string-name>
          ,
          <article-title>New Computational Models for Image Remote Sensing and Big Data</article-title>
          , in: P. Swarnalatha, P. Sevugan, (Eds.),
          <source>Big Data Analytics for Satellite Image Processing and Remote Sensing, IGI Global</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>21</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>D.</given-names>
            <surname>Shen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Wu</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.-I.</given-names>
            <surname>Suk</surname>
          </string-name>
          ,
          <article-title>Deep learning in medical image analysis</article-title>
          ,
          <source>Annual review of biomedical engineering 19</source>
          (
          <year>2017</year>
          ),
          <fpage>221</fpage>
          -
          <lpage>248</lpage>
          . doi: 10.1146/annurev-bioeng-071516-044442.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Pizurica</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Platisa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Ruzic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Cornelis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Dooms</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Martens</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Dubois</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Devolder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>De Mey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Daubechies</surname>
          </string-name>
          ,
          <article-title>Digital Image Processing of The Ghent Altarpiece: Supporting the painting's study and conservation treatment</article-title>
          ,
          <source>IEEE Signal Processing Magazine</source>
          <volume>32</volume>
          (
          <year>2015</year>
          )
          <fpage>112</fpage>
          -
          <lpage>122</lpage>
          . doi: 10.1109/MSP.2015.2411753.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>E.V.</given-names>
            <surname>Bataeva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.A.</given-names>
            <surname>Polyakova</surname>
          </string-name>
          ,
          <article-title>Motive-analysis of TV-advertising visual content</article-title>
          ,
          <source>Sotsiologicheskiy Zhurnal</source>
          ,
          <volume>24</volume>
          (
          <year>2018</year>
          )
          <fpage>66</fpage>
          -
          <lpage>89</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>N.</given-names>
            <surname>Kussul</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lavreniuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Shelestov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Skakun</surname>
          </string-name>
          ,
          <article-title>Crop inventory at regional scale in Ukraine: Developing in season and end of season crop maps with multi-temporal optical and SAR satellite imagery</article-title>
          ,
          <source>European Journal of Remote Sensing</source>
          <volume>51</volume>
          (
          <year>2018</year>
          )
          <fpage>627</fpage>
          -
          <lpage>636</lpage>
          . doi: 10.1080/22797254.2018.1454265.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.L.</given-names>
            <surname>Mitchell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rosenqvist</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Mora</surname>
          </string-name>
          ,
          <article-title>Current remote sensing approaches to monitoring forest degradation in support of countries measurement, reporting and verification (MRV) systems for REDD+</article-title>
          ,
          <source>Carbon Balance Manage</source>
          <volume>12</volume>
          (
          <year>2017</year>
          ). doi: 10.1186/s13021-017-0078-9.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>K.</given-names>
            <surname>Okarma</surname>
          </string-name>
          ,
          <article-title>Current Trends and Advances in Image Quality Assessment</article-title>
          ,
          <source>Elektronika Ir Elektrotechnika</source>
          ,
          <volume>25</volume>
          (
          <year>2019</year>
          )
          <fpage>77</fpage>
          -
          <lpage>84</lpage>
          . doi: 10.5755/j01.eie.25.3.23681.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <article-title>Parameter estimation of signal-dependent random noise in CMOS/CCD image sensor based on numerical characteristic of mixed Poisson noise samples</article-title>
          .
          <source>Sensors</source>
          <volume>18</volume>
          (
          <year>2018</year>
          ). doi: 10.3390/s18072276.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>I.</given-names>
            <surname>Blanes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Magli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Serra-Sagrista</surname>
          </string-name>
          ,
          <article-title>A Tutorial on Image Compression for Optical Space Imaging Systems</article-title>
          ,
          <source>IEEE Geoscience and Remote Sensing Magazine</source>
          <volume>2</volume>
          (
          <year>2014</year>
          )
          <fpage>8</fpage>
          -
          <lpage>26</lpage>
          . doi: 10.1109/MGRS.2014.2352465.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Radosavljevic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Brkljac</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Lugonja</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Crnojevic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Trpovski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Xiong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Vukobratovic</surname>
          </string-name>
          ,
          <article-title>Lossy Compression of Multispectral Satellite Images with Application to Crop Thematic Mapping: A HEVC Comparative Study</article-title>
          ,
          <source>Remote Sens</source>
          <volume>12</volume>
          (
          <year>2020</year>
          )
          <fpage>1590</fpage>
          . doi: 10.3390/rs12101590.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>