<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>On Measures of Visual Contrast and Their Use in Image Processing</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
<institution>L2TI, Institut Galilée, Université Sorbonne Paris Nord</institution>
          ,
          <country country="FR">France</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Visual contrast is one of the most studied notions in the field of Visual Neuroscience and Psychophysics. It is a measure associated with a psycho-physical sensation that is not easy to define in an objective and unique way. Indeed, the contrast as defined through Weber's famous experiment is associated with the subjective notion of just noticeable difference between a stimulus observed against a uniform background. In this paper a critical review of different representative measures of visual contrast and their uses in various applications is presented. This study also provides some insights on how to define the contrast and how to choose the most appropriate measure for developing contrast-based methods for visual information processing and analysis. Perspectives and challenges that remain to be addressed are also discussed in light of new trends in visual information processing. Through this study, it becomes clear that the concept of contrast and its use are highly application-dependent and that there is no universal contrast measure. It is also shown that, given the large number of psycho-physical parameters involved, it is not easy to define a contrast measure that is easy to use in the various methods of image processing and analysis. Simplifying contrast models without neglecting the most fundamental aspects seems to be the most pragmatic and practicable solution.</p>
      </abstract>
      <kwd-group>
        <kwd>Contrast Measures</kwd>
<kwd>Just Noticeable Difference (JND)</kwd>
        <kwd>Visual Contrast</kwd>
        <kwd>Image Processing and Analysis</kwd>
        <kwd>Perceptual Image compression</kwd>
        <kwd>Perceptual watermarking</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
      <p>
Visual contrast is one of the most studied notions in the field of Psychophysics
and Visual Neuroscience [
        <xref ref-type="bibr" rid="ref13 ref20">13, 20</xref>
        ]. It is a measure associated with a
psychophysical sensation that is not easy to define objectively and in a unique way.
Indeed, contrast as defined through Weber's famous experiment is associated
with the subjective notion of Just Noticeable Difference (JND) between a
stimulus observed against a uniform background. From the study conducted by
Hecht [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ] it seems that Bouguer (1760) was the first to study the differential
sensitivity of the Human Visual System (HVS). The idea of measuring the just
noticeable increment needed to discern one stimulus from another was later
studied by Weber and Fechner [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Indeed, the idea of analysing and measuring the
minimum increment of intensity between two stimuli in order to discern them
is at the basis of the very definition of what is commonly known as the
Weber-Fechner contrast [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. In fact it was Fechner, a few years later, who formalized
Weber's results in an experimental and theoretical framework. The Weber-Fechner
law has led to many experiments and sophisticated models for understanding
the notion of perceptual contrast [
        <xref ref-type="bibr" rid="ref25 ref33">25,33</xref>
        ]. Since these pioneering works, many visual
contrast measures have been proposed in the literature [
        <xref ref-type="bibr" rid="ref16 ref24 ref31 ref35 ref39 ref41 ref42">16, 24, 31, 35, 39, 41, 42</xref>
        ].
      </p>
      <p>
        Taking psycho-physical factors such as perceptual
contrast into account in a visual information processing and transmission system is essentially
motivated by the fact that the observer is the key element in any chain of acquisition,
processing and transmission of visual information. Indeed, the human observer
is often the ultimate judge in the evaluation of the different stages of the
processing and transmission chain. It is therefore quite natural to develop
methods inspired by the mechanisms of the human visual system in order to
incorporate perceptual criteria that meet the observer's requirements. It is worth
noting that among the perceptual aspects of the HVS, visual contrast is one
of the most widely investigated psycho-visual aspects in vision research [
        <xref ref-type="bibr" rid="ref13 ref24 ref25 ref48">13, 24, 25, 48</xref>
        ].
Following this principle and reasoning, several methods of image processing
and analysis based on contrast measures have been developed [
        <xref ref-type="bibr" rid="ref3 ref48">3, 48</xref>
        ]. Indeed,
contrast plays a prominent role in important applications such as medical
imaging [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ], image quality enhancement (IQE) [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] and image quality assessment
(IQA) [
        <xref ref-type="bibr" rid="ref14 ref15">14, 15</xref>
        ], image fusion [
        <xref ref-type="bibr" rid="ref44 ref9">9, 44</xref>
        ], and other applications such as image
quantization and compression [
        <xref ref-type="bibr" rid="ref12 ref28">12, 28</xref>
        ]. In this article we review some of these
applications and discuss the most suitable contrast measures in each case. This
is not an exhaustive study of all known methods; we limit ourselves
to a few representative studies. The main objectives of this contribution are as
follows:
- to discuss the fundamental criteria and factors that define visual contrast,
and to present a critical review of the most representative contrast measures
and associated models,
- to provide a brief description and discussion of some applications in the field
of visual information processing based on perceptual contrast measures,
- to provide some insights on how to use and choose the appropriate contrast
measure in selected visual information processing and analysis applications.
The paper is organized as follows: first, some historical contrast models are
discussed in Section 2, followed by a classification and discussion of some
representative contrast measures in Sections 3, 4, and 5. Section 6 is dedicated to some
selected contrast-based applications. Finally, the paper ends with concluding
remarks, challenges and some future directions of research in Section 7.
      </p>
      <p>
        Perceptual Contrast: History and Basic Notions
This section presents a brief historical review of the research on contrast
measures and associated models developed by the scientific research community in
psycho-physics, optics, neuroscience and digital visual information processing.
The notion of visual contrast was introduced in a clear and well-defined
theoretical and experimental framework for the first time by Fechner [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. Since
this pioneering work based on Weber's experiments and model, several studies
have been carried out to enrich existing models with advances on both the
theoretical and experimental levels [
        <xref ref-type="bibr" rid="ref24 ref25 ref33 ref35">24, 25, 33, 35</xref>
        ]. The introduction of the frequency
sensitivity aspect of contrast has been established through psycho-physical
experiments [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. The directional selectivity of the HVS has been clearly demonstrated
by Hubel and Wiesel [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ], the two Nobel Prize laureates. Other aspects related to
the distance or viewing angle parameter have been introduced explicitly
according to optical models [
        <xref ref-type="bibr" rid="ref33">33</xref>
        ] or implicitly through a multi-resolution representation
of visual contrast [
        <xref ref-type="bibr" rid="ref14 ref44">14, 44</xref>
        ]. However, the colour aspect was neglected for a long
time in the first experiments. This is due to the fact that the notion of contrast
is much more related to the detection of details, and more particularly the
contours of objects, which is traditionally regarded as an achromatic process [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
However, it has been shown that the colour aspect also plays a major role in
contrast sensitivity [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]. Other spatio-temporal and 3D aspects could be
incorporated in the definition of the spatio-temporal contrast. It is worth noticing
that, to the best of the author's knowledge, there is no measure of contrast in
an analytical form that integrates all these psycho-physical and geometrical
parameters. Rather, simplified and mathematically tractable expressions are often
used to define an objective measure of visual contrast. The essential criteria and
HVS properties that could be taken into account in the visual contrast measure
are given below.
      </p>
      <p>- sensitivity to the relative change of luminance
- luminance adaptation phenomena
- frequency selectivity
- directional sensitivity
- multiscale/multiresolution aspects
- color aspects
- viewing distance (or viewing angle)
- temporal aspects (in the case of spatio-temporal visual signals)
Note that it is not easy to define a contrast measure that integrates all these
properties and criteria. Simplifications are often used, keeping only a few
properties and aspects to establish a measure of visual contrast. There are two ways
to define contrast, depending on whether we associate a single value with the entire
image or a value with each pixel or group of pixels. In the first case it is global
contrast, while in the second case it is local contrast. The contrast measure can be
computed in the spatial domain, the frequency domain, and even in multi-resolution
or multi-scale representations.</p>
      <p>It should be noted that, given the different forms of representation of the
visual signal, the large number of contrast measures proposed in the literature
and the different contexts and fields of application, it is not easy to classify
all the contrast measures. In the following we classify contrast measures into
three categories, namely psycho-physics and neuroscience based contrasts, local
structure based contrasts, and statistical information based contrasts. Here we
limit ourselves to a few representative contrast measures from each category.</p>
      <p>
        Psycho-physics and neuroscience based contrasts
In this first category we present some representative contrast measures based on
psycho-physical experiments or on models from theoretical and experimental studies in
neuroscience. Figure 1 illustrates the simultaneous contrast phenomenon. It also
illustrates one of the most relevant parameters that should be taken into account
in the contrast definition, that is, the influence of the surround and background
luminance on the visual perception of stimuli. This figure represents the foveal
image model used in several psycho-physical experiments such as the Moon and
Spencer experiment [
        <xref ref-type="bibr" rid="ref33">33</xref>
        ].
      </p>
      <p>
        What should be observed in Figure 1 is the effect of the background on
the appearance of the small dark disc and the ring around it. Indeed, as can
be seen, despite having the same luminance and the same gradient in the five
configurations, the disc and the ring appear different according to their position
in the non-uniform background. It can be concluded from this example that the
gradient alone cannot account for the visual appearance of visual signals. The
background against which the object is observed is important. This observation
leads us to question the many contrast measures based solely on the luminance
difference between stimuli, i.e. the gradient. This is one of the reasons why it is
important to define contrast as a measure of relative variation in luminance, i.e.
a relative ratio. This is the case with the contrast measure proposed by Weber
and Fechner [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] described below.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Weber-Fechner contrast</title>
      <p>
        Weber-Fechner contrast is one of the simplest contrast measures [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. However,
it only applies to simple scenarios in which a uniform luminance background
of level L contains an object with an incremental luminance \Delta L. The aim of the
Weber-Fechner (W-F) contrast is to determine the value of \Delta L, referred to as the
JND, which makes the object (target) just visible. The W-F contrast measure
is defined by:
      </p>
      <p>
        C_W = \frac{\Delta L}{L} \quad (1)
One of the most important results of the experiments conducted by Weber and
Fechner is that this ratio remains constant over a fairly wide range of luminance
values. This value is called the Just Noticeable Contrast (JNC) and is of the
order of 0.02. It should be noted that many sophisticated contrast measures, used
in visual information processing and coding, are in one form or another based
on the Weber-Fechner definition [
        <xref ref-type="bibr" rid="ref14 ref2 ref35 ref42 ref44 ref50">2, 14, 35, 42, 44, 50</xref>
        ].
      </p>
    </sec>
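As an illustration, Eq. (1) maps directly to code. The following Python sketch (the function and constant names are ours, not from the paper) computes the Weber-Fechner contrast and applies the JNC threshold of about 0.02 discussed above:

```python
def weber_contrast(l_target, l_background):
    """Weber-Fechner contrast C_W = delta_L / L (Eq. 1)."""
    if l_background <= 0:
        raise ValueError("background luminance must be positive")
    return (l_target - l_background) / l_background

# Weber and Fechner's key finding: the just noticeable contrast (JNC)
# stays roughly constant (about 0.02) over a wide luminance range.
JNC = 0.02

def is_just_visible(l_target, l_background):
    """Predict whether a target on a uniform background is noticeable."""
    return abs(weber_contrast(l_target, l_background)) >= JNC
```

With a background of 100 cd/m², an increment of 2 cd/m² sits exactly at the threshold: `weber_contrast(102, 100)` returns 0.02, while an increment of 1 cd/m² is predicted to be invisible.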
    <sec id="sec-3">
      <title>Michelson contrast</title>
      <p>
        The Michelson contrast was rst introduced in a purely physical context and
concerns the measurement of the visibility of interference fringes produced by
thin lms [
        <xref ref-type="bibr" rid="ref32">32</xref>
        ]. However, it has been widely used in psycho-physical experiments
and more particularly the study of the frequency sensitivity of the HVS to
perceptual contrast [
        <xref ref-type="bibr" rid="ref13 ref45">13, 45</xref>
        ]. The Michelson contrast is defined as follows:
      </p>
      <p>
        C_M = \frac{L_{max} - L_{min}}{L_{max} + L_{min}} \quad (2)
where L_{min} and L_{max} correspond to the minimum and maximum luminance
values in the optical image, respectively. Although this contrast has been used
extensively by the scientific community of vision research, it has several
limitations. Indeed, its use in the case of natural images can lead to over- or
underestimation of contrast, in particular in the case of images contaminated by
impulse-type noise. This is also the case for images containing singularities
or isolated points, even if they are perceptually invisible. Moreover, it does not
take into account the influence of the background, and in particular the
luminance adaptation phenomenon. It also does not consider the frequency aspect,
although the stimulus signal shape is designed from a sinusoidal function. It is
nevertheless surprising that such a simple contrast model has been widely used
by the vision research community for almost a century.</p>
    </sec>
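Eq. (2) is straightforward to compute on a digital image. The sketch below (ours, with NumPy as an assumed dependency) also illustrates the over-estimation problem discussed above, where a single impulse-noise pixel dominates the measure:

```python
import numpy as np

def michelson_contrast(image):
    """Michelson contrast C_M = (Lmax - Lmin) / (Lmax + Lmin) (Eq. 2)."""
    l = np.asarray(image, dtype=float)
    lmax, lmin = l.max(), l.min()
    if lmax + lmin == 0:
        return 0.0          # convention for an all-black image
    return (lmax - lmin) / (lmax + lmin)

flat = np.full((8, 8), 100.0)       # perceptually uniform patch
noisy = flat.copy()
noisy[0, 0] = 255.0                 # one impulse-noise pixel
print(michelson_contrast(flat))     # 0.0
print(michelson_contrast(noisy))    # ~0.44, although the patch looks uniform
```

A single outlier pixel drives the measure from 0 to about 0.44, which is exactly the over-estimation behaviour criticized in the text.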
    <sec id="sec-4">
      <title>Moon-Spencer contrast</title>
      <p>
        The main idea behind Moon and Spencer's [
        <xref ref-type="bibr" rid="ref33">33</xref>
        ] model is to apply Holladay's
principle [
        <xref ref-type="bibr" rid="ref33">33</xref>
        ] that any non-uniform background may be replaced by another uniform
luminance that produces the same perceptual effect. This leads to the definition
of a luminance adaptation level L_A that can be calculated according to a simplified model.
Based on this principle, Moon and Spencer proposed a simple model to
express the adaptation luminance, which is given below.
      </p>
      <p>
        L_A = \alpha_S L_S + \alpha_B L_B \quad (3)
      </p>
      <p>
        where L_S and L_B are the luminance of the surround (immediate neighborhood)
and that of the background (or far surround), respectively. The two
weighting parameters \alpha_S and \alpha_B are set experimentally to the values 0.923 and 0.077,
respectively. Moon and Spencer define the minimum perceptible contrast as:
      </p>
      <p>
        C_{min} = \begin{cases} \dfrac{C_W}{L_S}\,\big(A + \sqrt{L_A}\big)^2 &amp; \text{if } L_A \ge L_S \\ \dfrac{C_W}{L_S}\,\big(A + L_A/\sqrt{L_S}\big)^2 &amp; \text{if } L_A &lt; L_S \end{cases} \quad (4)
where C_W corresponds to the Weber-Fechner JNC contrast, and A is a constant
determined experimentally from psycho-visual tests and Hecht's law [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ] which
is equal to 0.808. Note that this contrast is more interesting in the sense that
it corresponds more closely to realistic configurations. It has been successfully
used and adapted to digital images in various applications [
        <xref ref-type="bibr" rid="ref27 ref28 ref34 ref6">6, 27, 28, 34</xref>
        ].
      </p>
    </sec>
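A sketch of the Moon-Spencer threshold model, using the constants given in the text (A = 0.808, surround/background weights 0.923 and 0.077). The exact piecewise expression is reconstructed here from the extraction-damaged formula and should be checked against [33] before reuse:

```python
import math

C_W = 0.02                       # Weber-Fechner just noticeable contrast
A = 0.808                        # constant from Hecht's law
ALPHA_S, ALPHA_B = 0.923, 0.077  # surround / background weights

def adaptation_luminance(l_surround, l_background):
    """L_A = alpha_S * L_S + alpha_B * L_B."""
    return ALPHA_S * l_surround + ALPHA_B * l_background

def minimum_perceptible_contrast(l_surround, l_background):
    """Moon-Spencer minimum perceptible contrast (our reconstruction)."""
    l_a = adaptation_luminance(l_surround, l_background)
    l_s = l_surround
    if l_a >= l_s:
        return (C_W / l_s) * (A + math.sqrt(l_a)) ** 2
    return (C_W / l_s) * (A + l_a / math.sqrt(l_s)) ** 2
```

One sanity check supporting this reading: the two branches coincide when L_A = L_S, so the threshold is continuous at the branch boundary.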
    <sec id="sec-5">
      <title>Lillesaeter contrast</title>
      <p>
        Noting the asymmetry in Weber's de nition of contrast, Lillesaeter proposed two
measures of contrast [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ]. In the first one only luminance is taken into account,
while in the second definition he proposes to include the geometry and shapes
of the objects observed in the image. Indeed, it is observed that negative
and positive Weber-Fechner contrasts with the same absolute increment are not
perceived equally. The Lillesaeter contrast is then defined as
      </p>
      <p>
        C = \log \frac{L_O}{L_B} \quad (5)
      </p>
      <p>where L_O and L_B correspond to the average luminance of the object and the
background, respectively. Note that when L_O and L_B are very close to each
other the Lillesaeter contrast is equivalent to the Weber-Fechner contrast
measure:
      </p>
      <p>
        C = \log L_O - \log L_B \approx \frac{L_O - L_B}{L_B} \quad \text{(if } |C| \ll 1\text{)} \quad (6)
      </p>
      <p>The second definition of the Lillesaeter contrast incorporates the object contour
geometry as perceived by humans. The idea of taking into account the geometry
of perceived objects is relevant but impractical for the evaluation of contrast in
digital images. Indeed, it leads to the computation of a curvilinear integral, which
requires exact knowledge of the object contours. This leads inevitably to an
ill-posed problem, namely image segmentation.</p>
    </sec>
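The log form of the Lillesaeter contrast and its near-equivalence to the Weber-Fechner ratio for small contrasts can be checked numerically (a sketch; the function names are ours):

```python
import math

def lillesaeter_contrast(l_object, l_background):
    """Lillesaeter's luminance contrast C = log(L_O / L_B)."""
    return math.log(l_object / l_background)

def weber_ratio(l_object, l_background):
    """Weber-Fechner form (L_O - L_B) / L_B."""
    return (l_object - l_background) / l_background

# For |C| << 1 the two measures nearly coincide, but the log form is
# symmetric: swapping object and background only flips the sign, which
# addresses the asymmetry of Weber's definition noted in the text.
```

For instance, with L_O = 105 and L_B = 100 the two values differ by about 0.001, while the log measure for the swapped pair is exactly the negative of the original.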
    <sec id="sec-6">
      <title>DOG based contrast</title>
      <p>
        Based on models describing the retinal ganglion cells and Lateral Geniculate
Nucleus (LGN) responses [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ] to visual stimuli, Tadmor and
Tolhurst [
        <xref ref-type="bibr" rid="ref42">42</xref>
        ] proposed three measures of local contrast. The principle is based on
the use of linear filtering by two isotropic Gaussian Impulse Responses (IR) with
differently sized kernels. The bandpass behaviour of the HVS is then modelled
through the Difference of Gaussians (DOG) model [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ]. The two Gaussian IRs
are associated with two regions, a center zone c and a surround zone s, to
mimic the receptive fields of the ON and OFF cells [
        <xref ref-type="bibr" rid="ref20 ref29">20, 29</xref>
        ].
      </p>
      <p>The two responses are given by:</p>
      <p>
        I_c(i,j) = (I * h_c)(i,j) \quad (7) \qquad \text{and} \qquad I_s(i,j) = (I * h_s)(i,j) \quad (8)
      </p>
      <p>where I is the input image signal, and h_c and h_s are the two Gaussian IRs associated
with the center and surround zones, respectively.</p>
      <p>The convolutions are performed in sliding windows c(i,j) and s(i,j) of
odd size [-3\sigma_c, +3\sigma_c] \times [-3\sigma_c, +3\sigma_c] and [-3\sigma_s, +3\sigma_s] \times [-3\sigma_s, +3\sigma_s], respectively.</p>
      <p>Three local contrasts are then defined as follows:</p>
      <p>
        C_{1,DOG}(i,j) = \frac{I_c(i,j) - I_s(i,j)}{I_c(i,j)} \quad (9)
      </p>
      <p>
        C_{2,DOG}(i,j) = \frac{I_c(i,j) - I_s(i,j)}{I_s(i,j)} \quad (10)
      </p>
      <p>
        C_{3,DOG}(i,j) = \frac{I_c(i,j) - I_s(i,j)}{I_c(i,j) + I_s(i,j)} \quad (11)
      </p>
      <p>The global contrast is derived by averaging the local contrasts. It can be noticed
that these contrasts take into account neither the directional and frequency
selectivity nor the colorfulness aspects. Note that this contrast, expressed as a ratio
between a differential signal component and a low-pass component, is somehow
inspired by Weber-Fechner's simple model and corresponds well to the notion of
visual contrast.</p>
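The three Tadmor-Tolhurst local contrasts above can be sketched with plain NumPy. The separable Gaussian blur stands in for the center/surround impulse responses h_c and h_s, and the small epsilon guarding the divisions is our addition for numerical safety:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian low-pass filter with [-3*sigma, +3*sigma] support."""
    radius = max(1, int(round(3 * sigma)))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, radius, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

def dog_contrasts(img, sigma_c=1.0, sigma_s=2.0):
    """Tadmor-Tolhurst style local contrasts C1, C2, C3 built from the
    center (I_c) and surround (I_s) Gaussian responses."""
    f = np.asarray(img, dtype=float)
    ic = gaussian_blur(f, sigma_c)    # center response I_c
    isur = gaussian_blur(f, sigma_s)  # surround response I_s
    eps = 1e-12                       # our guard against division by zero
    c1 = (ic - isur) / (ic + eps)             # (I_c - I_s) / I_c
    c2 = (ic - isur) / (isur + eps)           # (I_c - I_s) / I_s
    c3 = (ic - isur) / (ic + isur + eps)      # (I_c - I_s) / (I_c + I_s)
    return c1, c2, c3
```

On a uniform image all three maps are zero, consistent with the Weber-Fechner intuition of a differential signal divided by a reference signal.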
      <p>Local structure based contrast measures
In this section, we introduce and discuss some representative contrast measures
that could be used in digital image processing and analysis applications. These
are essentially measures that explicitly or implicitly incorporate some features
of the HVS.</p>
    </sec>
    <sec id="sec-7">
      <title>Edginess based contrast measure</title>
      <p>
        It is well established that one of the primitives of the image signal most
related to contrast, and therefore to the visibility of detail, is contour
information. Indeed, the most representative contours of the image signal correspond to
the spatial frequencies where the contrast sensitivity function (CSF) reaches its maximum values [
        <xref ref-type="bibr" rid="ref13 ref29 ref45">13, 29, 45</xref>
        ].
Inspired by the contrast defined by Gordon and Rangayyan in their contrast
enhancement method [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ], Beghdadi and Le Negrate introduced a new measure of
contrast incorporating edginess information [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. This local contrast measure is
computed using a sliding window W_{ij} of odd size. For each window W_{ij}, the mean
edge gray level E(i,j) at the center pixel (i,j) is computed as
      </p>
      <p>
        E(i,j) = \frac{\sum_{(k,l) \in W_{ij}} \phi(\Delta_{kl})\, f(k,l)}{\sum_{(k,l) \in W_{ij}} \phi(\Delta_{kl})} \quad (12)
      </p>
      <p>In (12), f(k,l) corresponds to the gray level at the pixel (k,l) and \phi(\Delta_{kl}) is
an increasing monotonic function of the gradient operator \Delta_{kl} at (k,l). A simple
choice would be \Delta_{kl}^n, with n &gt; 0. The local contrast is expressed as:
      </p>
      <p>
        C(i,j) = \frac{|f(i,j) - E(i,j)|}{f(i,j) + E(i,j)} \quad (13)
      </p>
      <p>Note that this contrast measure takes into account neither the frequency
selectivity nor the directional selectivity aspects. Furthermore, when used in
contrast enhancement (CE) it may introduce halo effects around the edges as a result of
over-enhancement [
        <xref ref-type="bibr" rid="ref38 ref5">5, 38</xref>
        ].
      </p>
      <p>Toet was the first to propose a contrast measure taking into account the
multiresolution aspect for image fusion [
        <xref ref-type="bibr" rid="ref44">44</xref>
        ]. It is based on the Burt and Adelson
pyramid decomposition scheme [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. The contrast is expressed as
      </p>
      <p>
        C_k(i,j) = \frac{g_k(i,j) - g_{k-1}(i,j)}{g_{k-1}(i,j)} \quad (14)
      </p>
      <p>where the components g_k(i,j) and g_{k-1}(i,j) are the gray levels of pixel (i,j) in
the Gaussian pyramid at the kth and (k-1)th levels, respectively. Note that this
expression of local contrast is also inspired by Weber-Fechner's intuitive
definition. Indeed, the numerator is nothing more than a differential signal (intensity
increment) and the denominator is the signal against which the increment is
measured (reference signal). Here also, the directional sensitivity and the
colorfulness aspects are not taken into account. This contrast measure has inspired
several works and led to various interesting applications such as image fusion [
        <xref ref-type="bibr" rid="ref44">44</xref>
        ],
contrast enhancement [
        <xref ref-type="bibr" rid="ref40">40</xref>
        ] and image distortion prediction [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
      </p>
    </sec>
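A sketch of the edginess-based local contrast described above. The gradient magnitude plays the role of the monotonic weighting function, and the normalization |f − E| / (f + E) is our reading of the measure; the window size and the epsilon guard are illustrative choices:

```python
import numpy as np

def edginess_contrast(img, win=3):
    """Beghdadi-Le Negrate style local contrast (sketch): within each window,
    a gradient-weighted mean edge gray level E is computed, and the contrast
    at the center pixel is |f - E| / (f + E)."""
    f = np.asarray(img, dtype=float)
    gy, gx = np.gradient(f)
    grad = np.hypot(gx, gy)          # gradient magnitude as the weight phi
    h, w = f.shape
    r = win // 2
    c = np.zeros_like(f)
    for i in range(r, h - r):
        for j in range(r, w - r):
            fw = f[i - r:i + r + 1, j - r:j + r + 1]
            gw = grad[i - r:i + r + 1, j - r:j + r + 1]
            denom = gw.sum()
            if denom < 1e-12:
                continue             # flat window: zero contrast
            e = (gw * fw).sum() / denom      # mean edge gray level E(i,j)
            c[i, j] = abs(f[i, j] - e) / (f[i, j] + e + 1e-12)
    return c
```

On a flat patch the measure is zero everywhere, while pixels near a step edge receive a contrast close to 1, which is what drives the over-enhancement and halo effects mentioned above when the measure is used for contrast enhancement.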
    <sec id="sec-8">
      <title>Bandlimited contrast</title>
      <p>
        By exploiting the results of psycho-physical experiments on the frequency
sensitivity of the HVS to contrast, and in particular its behaviour as a band-pass
filter, Peli introduced the notion of band-limited contrast [
        <xref ref-type="bibr" rid="ref35">35</xref>
        ]. The image signal
is analyzed using a bank of cos-log type isotropic band-pass filters to extract
the different components describing the signal at different frequency bands. The
image component captured by the kth channel is given by
      </p>
      <p>
        g_k(i,j) = (f * h_k)(i,j) \quad (15)
      </p>
      <p>where h_k is the impulse response corresponding to the kth band-pass filter and
g_k the associated filtered component. For each pixel (i,j) in the kth component,
the contrast is expressed as:
      </p>
      <p>
        C_k(i,j) = \frac{g_k(i,j)}{b_k(i,j)} \quad (16)
      </p>
      <p>where the baseband signal b_k is given by the sum of all components below the kth band:
      </p>
      <p>
        b_k(i,j) = \sum_{m=0}^{k-1} g_m(i,j) \quad (17)</p>
      <p>Here, too, it can be assumed that this measure is somehow inspired by the
intuitive idea of Weber and Fechner's model. Indeed, the numerator is the
differential signal, i.e. the band-pass signal, and the denominator contains the sum of all
the lower-frequency components, i.e. the baseband signal. This ratio does measure a
relative change in signal amplitude, just as in the Weber-Fechner contrast model.</p>
      <p>Like Toet's contrast, Peli's contrast can be used for complex natural images,
unlike other contrasts such as Weber's or Michelson's. However, the lack of
directional selectivity and colorfulness aspects in Peli's contrast limits its use in some
real-world applications where these aspects play prominent roles.</p>
    </sec>
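Peli's band-limited contrast can be approximated with differences of Gaussian low-pass images standing in for the cos-log filter bank. This substitution is ours (Peli's actual filters are cos-log band-pass filters in the frequency domain), but it preserves the structure of the measure: each band-pass component divided by the low-pass content below it:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian low-pass filter with [-3*sigma, +3*sigma] support."""
    radius = max(1, int(round(3 * sigma)))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, radius, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

def peli_contrasts(img, sigmas=(1.0, 2.0, 4.0)):
    """Peli-style band-limited contrasts: each band-pass component (difference
    of successive low-pass images) divided by the low-pass image below it."""
    f = np.asarray(img, dtype=float)
    lows = [f] + [gaussian_blur(f, s) for s in sigmas]
    eps = 1e-12                      # our guard against division by zero
    contrasts = []
    for k in range(len(sigmas)):
        band = lows[k] - lows[k + 1]     # band-pass component
        base = lows[k + 1]               # everything below the band
        contrasts.append(band / (base + eps))
    return contrasts
```

As expected from the definition, a uniform image yields zero contrast in every band, since every band-pass component vanishes.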
    <sec id="sec-9">
      <title>Daly contrast</title>
      <p>
        The contrast model proposed by Daly [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] is essentially based on the cortex
transform introduced by Watson [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. It should be noted that in this contrast model
both frequency selectivity and directional selectivity are taken into account by
means of two families of linear filters. The input signal f is first analysed by
means of two cascades of isotropic band-pass linear filters called "dom" and
"fan", corresponding to the frequency selectivity for the former and the
directional selectivity for the latter [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. The filtered versions of the input signal f
are given by:
      </p>
      <p>
        g_{kl}(i,j) = (h_{kl} * f)(i,j) \quad (18)
      </p>
      <p>
        where k and l are the dom and fan filter indices as defined in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The Daly contrast
is then defined by
      </p>
      <p>
        C_{kl}(i,j) = \frac{g_{kl}(i,j) - \bar{g}_{kl}}{\bar{g}_{kl}} \quad (19)
      </p>
      <p>where \bar{g}_{kl} is the mean of the band (k,l). Note that this measure is unstable
because the denominator may tend to zero. Daly proposes two solutions
to overcome this problem. He introduces two contrast measures where the
denominator is replaced in one case by the baseband signal mean and in the other
by the baseband signal calculated at each pixel. These two modified contrast
measures are given by:
      </p>
      <p>
        C_{kl}(i,j) = \frac{g_{kl}(i,j) - \bar{g}_{kl}}{\bar{g}_K} \quad (20)
      </p>
      <p>and</p>
      <p>
        C_{kl}(i,j) = \frac{g_{kl}(i,j) - \bar{g}_{kl}}{g_K(i,j)} \quad (21)
      </p>
      <p>
        Note that the Daly contrast model does not incorporate the colorfulness aspect. This
contrast measure has been used successfully in the design of image distortion
prediction models [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
      </p>
    </sec>
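The two stabilized variants of Daly's local contrast can be sketched as follows (a sketch of ours: `band` is one dom/fan response g_kl and `baseband` the low-frequency reference signal g_K; the actual filters come from the cortex transform and are not reproduced here):

```python
import numpy as np

def daly_contrasts(band, baseband):
    """Stabilized Daly-style local contrasts: the band-pass deviation is
    divided either by the baseband mean (global normalization) or by the
    baseband value at each pixel (local normalization)."""
    g = np.asarray(band, dtype=float)
    base = np.asarray(baseband, dtype=float)
    c_global = (g - g.mean()) / base.mean()   # baseband-mean normalization
    c_local = (g - g.mean()) / base           # per-pixel baseband normalization
    return c_global, c_local
```

The global variant is cheap and stable; the local one adapts to slow luminance changes across the image, at the price of sensitivity wherever the baseband is small.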
    <sec id="sec-10">
      <title>Isotropic contrast</title>
      <p>
        The consideration of the multi-scale aspect in the visual contrast measure is
to some extent related to the characteristics of the HVS [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ]. One of the first
contrast measures based on wavelet analysis was introduced by Winkler and
Vandergheynst [
        <xref ref-type="bibr" rid="ref50">50</xref>
        ]. It is surprising that most of the contrast measures proposed
so far did not explicitly take into account the multiscale aspect. However, it
should be noted that Toet's measure is somehow quite close in that it
introduces the multi-resolution aspect. The main idea of the measure proposed
by Winkler and Vandergheynst was to overcome the limitations of the contrast
proposed by Peli. They proposed a contrast measure using a directional wavelet
decomposition based on a translation-invariant multiresolution representation
using 2-D analytic filters. By combining the different analytic oriented filter
responses they derived the isotropic contrast, expressed as follows:
      </p>
      <p>
        C_k(i,j) = \frac{\sqrt{\sum_l |g_{kl}(i,j)|^2}}{(\phi_k * f)(i,j)} \quad (22)
      </p>
      <p>where g_{kl}(i,j) is the gray level at pixel (i,j) in the band-limited directional
filtered image obtained by filtering the input signal f by the directional wavelet
at resolution k and direction l. Similarly to Peli's contrast, the denominator
corresponds to the baseband signal, i.e. the signal filtered with the scaling function \phi_k
at scale k. It has been demonstrated that, in contrast to Peli's model, this
new contrast gives a flat response to sinusoidal patterns [
        <xref ref-type="bibr" rid="ref50">50</xref>
        ]. However, it is
important to note that although in its design this contrast uses directional filters,
it provides an isotropic contrast measure. This could be beneficial for certain
applications where directionality is not important, such as in the case of digital
watermarking [
        <xref ref-type="bibr" rid="ref46">46</xref>
        ].
      </p>
    </sec>
    <sec id="sec-11">
      <title>Directional bandlimited contrast</title>
      <p>
        Based on the work of Peli and Daly, Dauphin et al. [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] proposed a contrast
where directional selectivity is taken into account in the final contrast measure.
Indeed, other contrasts such as those of Daly and Winkler-Vandergheynst
integrate directional selectivity in the analysis of signal components, but the final
contrast is rather isotropic. In contrast, in the model defined in [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] the final
contrast measure is anisotropic in the sense that the final response emphasizes the
most directionally salient signal components. A non-linear operation
of max type is thus used for the calculation of the local contrast. The image
signal is analysed using a multichannel Gabor decomposition. The local directional
bandlimited contrast is computed as
      </p>
      <p>
        C_k(i,j) = \frac{\max_l |g_{kl}(i,j)|}{\bar{g}_k(i,j)} \quad (23)
      </p>
      <p>where g_{kl}(i,j) is the gray level associated with the frequency sub-band k and
one of the four directions (0, \pi/4, \pi/2, 3\pi/4) represented by l. The
normalization term \bar{g}_k(i,j) represents the total energy of the background below the band
k, which is obtained by filtering the original image by a Gaussian filter with an
appropriate standard deviation.
This local bandlimited directional contrast does not incorporate the colorfulness
aspect. It has been compared to Peli's contrast and has been proven more efficient
and less complex than the wavelet-based contrast proposed in [
        <xref ref-type="bibr" rid="ref50">50</xref>
        ].
The RAMMG contrast measure proposed by Rizzi et al. [
        <xref ref-type="bibr" rid="ref39">39</xref>
        ] is one of the few
measures of contrast that incorporate both multi-resolution and colour aspects.
The image signal is decomposed using a pyramidal scheme in the CIELAB colour
space. At each level of resolution, each pixel is associated with a local contrast
defined as the response of a pseudo-Laplacian, computed by convolving a
difference signal D_k with a neighbourhood mask S:
      </p>
      <p>
        C_k(i,j) = (D_k * S)(i,j) \quad (25)
      </p>
      <p>where D_k(i,j) is the absolute difference of the luminance between the current
pixel (i,j) and the central pixel at the kth level of resolution. The RAMMG global
contrast is obtained by averaging all local contrasts across all the different levels
of the pyramidal decomposition.</p>
      <p>
        C_{RAMMG} = \frac{1}{K} \sum_{k=1}^{K} \frac{1}{W_k H_k} \sum_{i=1}^{W_k} \sum_{j=1}^{H_k} C_k(i,j) \quad (27)
      </p>
      <p>
        where K is the total number of decomposition levels and W_k and H_k represent
the width and height of the image at the kth level, respectively. A very similar
global contrast measure called the Global Contrast Factor (GCF) has been proposed
in [
        <xref ref-type="bibr" rid="ref31">31</xref>
        ]. The GCF contrast has been found somehow consistent with subjective ranking
of a relatively wide range of natural images with varying contrast. However, it
should be noted that both the RAMMG and GCF contrasts are not relative measures of
the variation in the energy of the image signal and therefore cannot be included
in the family of contrasts conforming to the notion of contrast as defined by
Weber-Fechner.
      </p>
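<p>A minimal single-channel sketch of this multilevel recipe (a per-pixel 8-neighbour pseudo-Laplacian contrast at each level of a down-sampling pyramid, then a grand average) may look as follows. It is a simplified illustration rather than the original implementation: it operates on a single gray-level image instead of the CIELAB lightness channel, and uses plain 2×2 block means as the pyramid.

```python
import numpy as np

def neighbour_mean(img):
    """Average of the 8 neighbours of each pixel (edge-replicated borders)."""
    p = np.pad(img, 1, mode='edge')
    acc = np.zeros_like(img, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            acc += p[1 + dy:1 + dy + img.shape[0], 1 + dx:1 + dx + img.shape[1]]
    return acc / 8.0

def rammg_contrast(img, levels=3):
    """Average, over a down-sampling pyramid, of the per-pixel absolute
    difference between each pixel and the mean of its 8 neighbours."""
    img = img.astype(float)
    per_level = []
    for _ in range(levels):
        per_level.append(np.abs(img - neighbour_mean(img)).mean())
        h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2   # crop to even size
        if h < 2 or w < 2:
            break
        img = img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return float(np.mean(per_level))
```

A flat image yields a contrast of exactly zero, while any local structure produces a positive value.</p>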
      <sec id="sec-11-1">
        <title>Statistical features based contrasts</title>
        <p>Very few contrasts measures based on statistical information have been
introduced in the literature. These measures are often related to the pixel values
distribution, such as the grey-level histogram or the 2D distribution computed
from the grey-level cooccurrence matrix (GLCM). Here we limit ourselves to
three contrast metrics based on some simple statistical features of pixel values.
5.1</p>
      </sec>
    </sec>
    <sec id="sec-12">
      <title>Texture contrast measure</title>
      <p>
Haralick was the first to introduce the idea of using statistical invariant features for texture analysis. The set of these spatial descriptors, introduced in [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ], is based on the GLCM computed from the digital image. Among Haralick's texture descriptors, a global contrast is defined. It is computed as follows.
      </p>
      <p>K 1 K 1
CH = X X )(i
i=0 j=0
j)2pij
(28)
Where i and j are the grey-levels of adjacent pixels in a de ned neighbourhood
and pij is the joint mass probability function computed from the GLCM.
Although this measure has always been identi ed as a contrast, it does not meet
the basic criteria to be truly considered as a measure of contrast in the usual
sense and in line with psycho-physical experiences and the notion of contrast
that has been well established since the 19th century.</p>
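<p>Equation (28) is straightforward to evaluate on a quantized image. The sketch below builds the normalized GLCM for horizontally adjacent pixel pairs (one common choice among the offsets Haralick averages over) and computes C_H:

```python
import numpy as np

def glcm_horizontal(img, levels):
    """Normalized gray-level co-occurrence matrix for horizontal neighbours."""
    a, b = img[:, :-1].ravel(), img[:, 1:].ravel()
    m = np.zeros((levels, levels), dtype=float)
    np.add.at(m, (a, b), 1.0)          # count each (left, right) gray-level pair
    return m / m.sum()

def haralick_contrast(img, levels):
    """C_H = sum_{i,j} (i - j)^2 p_ij, computed from the GLCM (Eq. 28)."""
    p = glcm_horizontal(img, levels)
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())
```

A constant image gives C_H = 0, while an image of alternating vertical stripes with gray-levels 0 and 1 gives C_H = 1, since every adjacent pair differs by one level.</p>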
    </sec>
    <sec id="sec-13">
      <title>Mutual information based contrast measure</title>
      <p>
Another way to exploit inter-pixel correlation, directly related to image contrast, is to consider a measure of mutual information extracted from the GLCM. A new global contrast measure based on mutual information was thus introduced for the first time to quantify side effects, such as saturation or the halo effect, that may result from contrast enhancement [
        <xref ref-type="bibr" rid="ref38">38</xref>
        ]. This contrast measure is defined by:
      </p>
<p>C_MI = Σ_{i=0}^{K−1} Σ_{j=0}^{K−1} p_{xy}(i, j) log₂( p_{xy}(i, j) / (p_x(i) p_y(j)) ),   (29)
where p_{xy} is the joint probability mass function of the gray-levels, and p_x and p_y are the marginal probabilities computed from the GLCM. While this contrast is simple to compute, it does not provide information directly related to visual contrast, as it is purely based on a statistical analysis of the distribution of signal values.</p>
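<p>Equation (29) can be implemented directly from the same co-occurrence counts; the sketch below again uses horizontally adjacent pixel pairs:

```python
import numpy as np

def mi_contrast(img, levels):
    """C_MI (Eq. 29): mutual information of the GLCM joint distribution."""
    a, b = img[:, :-1].ravel(), img[:, 1:].ravel()
    pxy = np.zeros((levels, levels), dtype=float)
    np.add.at(pxy, (a, b), 1.0)        # joint histogram of adjacent gray-levels
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    prod = px[:, None] * py[None, :]
    mask = pxy > 0                     # the 0 * log 0 terms are taken as 0
    return float((pxy[mask] * np.log2(pxy[mask] / prod[mask])).sum())
```

For a constant image C_MI = 0, while for a perfect binary checkerboard, where each pixel fully determines its right neighbour, C_MI reaches 1 bit.</p>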
    </sec>
    <sec id="sec-14">
      <title>Root Mean Square contrast</title>
      <p>
The Root Mean Square (RMS) of the luminance in natural images was considered by Bex and Makous [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] as a potential contrast measure in their study of human observers' sensitivity to contrast. As noticed by these authors, this measure, when divided by the average luminance of the image, is a good predictor of the relative contrast.
      </p>
      <p>
Another version of this RMS based contrast has been proposed by Frazor and Geisler [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] to make it more suitable for natural images. Local contrast is measured in different randomly selected patches of the image. The contrast associated with a given patch is calculated as follows:
C_RMS = [ Σ_{i=1}^{N} w_i (L_i − L̄)² / L̄² ]^{1/2},   (30)
where N is the total number of pixels in the patch, L_i is the luminance of the ith pixel, L̄ is the patch luminance and w_i is a windowed isotropic weighting function (normalized so that Σ_i w_i = 1) given by:
w_i ∝ (1/2)(cos(π r_i / p) + 1) for r_i ≤ p, and w_i = 0 otherwise,   (31)
with r_i = √((x_i − x_c)² + (y_i − y_c)²),   (32)
where p is the radius of the patch, (x_i, y_i) is the position of the ith pixel within the patch, and (x_c, y_c) is the centre of the patch. The patch luminance is given by
L̄ = Σ_{i=1}^{N} w_i L_i.   (33)
      </p>
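<p>The weighted RMS recipe of Eqs. (30)-(33) translates almost line by line into code. In the sketch below, the raised-cosine form of the window w_i is an illustrative assumption; any smooth isotropic window that decays to zero at the patch radius plays the same role.

```python
import numpy as np

def patch_rms_contrast(patch):
    """Weighted RMS contrast of a square luminance patch (Eqs. 30-33)."""
    n = patch.shape[0]
    c = (n - 1) / 2.0                       # patch centre (x_c, y_c)
    y, x = np.indices(patch.shape)
    r = np.hypot(x - c, y - c)              # r_i, distance to the centre
    radius = n / 2.0                        # p, the patch radius
    w = np.where(r <= radius, 0.5 * (np.cos(np.pi * r / radius) + 1.0), 0.0)
    w /= w.sum()                            # normalize the weights to sum to 1
    mean_l = (w * patch).sum()              # weighted patch luminance L-bar
    var = (w * (patch - mean_l) ** 2).sum()
    return float(np.sqrt(var) / mean_l)     # RMS deviation relative to L-bar
```

A uniform patch yields zero contrast; any luminance variation inside the window yields a positive value.</p>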
<p>It should be noticed that none of these statistical information based contrast measures takes into account the spatial frequency and directional content of the visual signal. Nor do they incorporate other important aspects such as the influence of the luminance of near and far surrounds, the viewing distance and the chromatic aspect. Table 1 summarizes the key features of these representative local and global contrast measures.</p>
      <p>
Concluding remarks. From this brief review, we conclude that it is difficult to find a simple contrast measure that incorporates all the relevant perceptual aspects related to the notion of contrast. Therefore, simple measures derived from psycho-physical experiments are most often used and adapted to digital images to solve a number of real problems in image and video processing, analysis and compression. Despite the enormous amount of work dedicated to visual contrast, it is still not easy to evaluate and compare the different contrast measures objectively. However, a few studies limited to subjective evaluation have been carried out in the literature [
        <xref ref-type="bibr" rid="ref41">41</xref>
        ].
      </p>
      <p>
Moreover, this critical analysis of the existing contrast measures raises some relevant questions. One important question is: what are the most relevant visual signal characteristics that a contrast measure should capture? To the best of the author's knowledge, none of the published works has addressed this issue properly, although some interesting studies have been dedicated to this critical question [
        <xref ref-type="bibr" rid="ref36">36</xref>
        ]. Peli conducted a thorough experimental study in 1997 on how to define the contrast measure [
        <xref ref-type="bibr" rid="ref36">36</xref>
        ]. To be consistent with the findings on human visual perception, the study provides guidelines confirming that computational contrast metrics should take into account multiscale aspects [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ].
      </p>
      <p>
It is also worth noticing that there are no clear and objective criteria, nor ground-truth data, to exploit in comparing the proposed contrast measures, although some attempts have been made to study the different measures of contrast in digital images [
        <xref ref-type="bibr" rid="ref41">41</xref>
        ].
      </p>
<p>A Brief Overview of the Use of Contrast in VIPA. An important issue to consider is how the contrast measure is defined and used in developing various methods for Visual Information Processing and Analysis (VIPA). We limit ourselves here to a few applications where the notion of contrast plays a predominant role. It should be noted that the choice of contrast measure depends on the application; it is not always easy to make an appropriate choice, and pragmatic solutions, based mainly on empirical feedback, are often used.</p>
    </sec>
    <sec id="sec-15">
      <title>Image Quality Assessment and Enhancement</title>
      <p>
Most objective HVS-based image quality methods explicitly or implicitly incorporate a contrast measure [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Peli's contrast and its variants have been
successfully incorporated in the design of IQA measures [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] and distortion predictor
models [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
      </p>
      <p>[Table 1 near here: a chronological summary (1992-2015) of the representative local and global contrast measures reviewed above, with their key features.]</p>
      <p>
        Another application where the concept of contrast is important
concerns the improvement of image quality, and in particular Contrast Enhancement (CE). CE methods can be roughly classified into two categories, namely direct methods and indirect methods [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. The main idea of the direct methods is to estimate the local contrast, to amplify it by means of a monotonic transformation, and to deduce the pixel intensity corresponding to this new contrast. The contrast defined by Beghdadi and Le Negrate [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] and its variants have been successfully used in many CE methods [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. The edginess based contrast measure has also been extended to stereo images by incorporating depth information [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]. Another direct method, proposed by Tan et al. [
        <xref ref-type="bibr" rid="ref43">43</xref>
        ] and based on the bandlimited contrast [
        <xref ref-type="bibr" rid="ref35">35</xref>
        ], operates in the discrete cosine transform domain. Unfortunately, most CE methods suffer from side effects, and there is no unified framework to control them. Indeed, this pre-processing, which aims at amplifying the visibility of details by increasing gradient and sharpness, may introduce artefacts such as noise amplification, saturation and halo effects. It is therefore useful to quantify these undesirable side effects. Many objective measures for Contrast Enhancement Evaluation (CEE) based on local or global contrast have been studied in [
        <xref ref-type="bibr" rid="ref37">37</xref>
        ]. A critical study of CEE metrics revealed that the mutual information contrast measure is the most promising in terms of simplicity and efficacy [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. However, this metric has its limitations, and it would be worthwhile to develop other measures that integrate the multi-scale and multi-directional aspects. From this point of view, the three contrast measures defined in [
        <xref ref-type="bibr" rid="ref16 ref50 ref8">8, 16, 50</xref>
        ] are good candidates.
      </p>
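<p>The principle of the direct methods can be illustrated with a toy sketch (a hedged illustration only, not the method of Beghdadi and Le Negrate, which relies on an edge-based local mean): estimate a signed local contrast against a local mean, amplify it with a monotonic transform, here a simple gain, and rebuild the pixel intensities.

```python
import numpy as np

def direct_contrast_enhancement(img, gain=1.5):
    """Direct CE sketch: amplify the deviation of each pixel from the mean of
    its 4 neighbours, then rebuild the intensities (values assumed in [0, 1])."""
    p = np.pad(img, 1, mode='edge')
    mean_local = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
    contrast = img - mean_local              # signed local contrast
    out = mean_local + gain * contrast       # amplified contrast, same local mean
    return np.clip(out, 0.0, 1.0)
```

The clipping step is precisely where the saturation side effect discussed above originates: amplified values that leave the display range are truncated.</p>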
    </sec>
    <sec id="sec-16">
      <title>Visual Data Protection and Compression</title>
      <p>
The protection and coding of visual data are two classic and very active research problems. Here, we focus more precisely on image watermarking using perceptual approaches [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. The aim is to insert a watermark that is both invisible and resistant to various attacks. This is a very difficult problem, in which one tries to achieve the best compromise between two antagonistic criteria: robustness and transparency. Indeed, robustness requires putting more energy into the watermark, which inevitably makes it visible and therefore breaks transparency. The visibility of the watermark is closely linked to the notion of JND defined in the contrast measure. In this type of application, where one seeks to insert the watermark in a robust and transparent way, the multi-scale or pyramidal approach through JND measures related to the contrasts defined in [
        <xref ref-type="bibr" rid="ref44 ref50">44,50</xref>
        ] is the most promising solution, as shown by the studies in [
        <xref ref-type="bibr" rid="ref34 ref46">34, 46</xref>
        ]. The other application where contrast plays an important role is image compression with quality control. This involves using contrast as a measure of the visibility of the distortions and artifacts inherent in lossy compression methods. The JND measure, again closely related to contrast, and the visual masking phenomenon are the most important parameters to consider in the quantization and coefficient selection scheme in the transform domain. The idea of exploiting contrast in the quantization of the image signal dates back to the work of Kretz [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ] based on
the contrast of Moon and Spencer [
        <xref ref-type="bibr" rid="ref33">33</xref>
        ]. Since this pioneering work, other more sophisticated contrast measures have been successfully introduced into image compression and video coding models [
        <xref ref-type="bibr" rid="ref12 ref47">12, 47</xref>
        ]. Another interesting perceptual coding scheme, based on the Watson-Solomon Contrast Gain Control (CGC) model [
        <xref ref-type="bibr" rid="ref49">49</xref>
        ], has been proposed in [
        <xref ref-type="bibr" rid="ref51">51</xref>
        ] for High Efficiency Video Coding
(HEVC). It is worth noticing that most visually lossless coding methods in the literature exploit the contrast measure in an explicit or implicit manner [
        <xref ref-type="bibr" rid="ref3 ref48">3, 48</xref>
        ].
The only contrast model for perceptual coding and quantization that appears to be both complete and efficient is the one introduced recently in [
        <xref ref-type="bibr" rid="ref51">51</xref>
        ]. We therefore recommend this model for perceptual coding.
      </p>
    </sec>
    <sec id="sec-17">
      <title>Image fusion</title>
      <p>
Image fusion is becoming an active field of research, especially with the revival of artificial intelligence based approaches. The use of perceptual information, and particularly of contrast, in visual data fusion schemes seems to be the most promising approach in various applications [
        <xref ref-type="bibr" rid="ref40 ref8">8, 40</xref>
        ]. There are several ways to merge information, depending on the application and the data analysis method used. A first approach is to exploit the multi-scale representation of visual information in the development of the fusion scheme. Perceptual contrast is one of the signal features that can be used in its design: the idea is to exploit the most relevant perceptual information contained in the contrast map, by using the contrast measure in the weighting function of the fusion scheme. One attractive image fusion approach based on this idea is to use the directional wavelet based contrast, as done in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. This strategy could be used not only in multimodal medical imaging but also in various applications such as multi-modal video-surveillance, multi-focus computational photography and hyperspectral imaging, to name a few. It is also the case for the contrast enhancement method based on a perceptual fusion scheme proposed in [
        <xref ref-type="bibr" rid="ref40">40</xref>
        ].
      </p>
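<p>As a small illustration of this weighting strategy, the following single-scale sketch fuses two registered gray-level images, weighting each pixel by a simple local contrast map, here the absolute deviation from the 4-neighbour mean. Practical schemes such as the directional wavelet approach apply the same idea per sub-band of a multi-scale decomposition.

```python
import numpy as np

def local_contrast_map(img, eps=1e-6):
    """Absolute difference between each pixel and its 4-neighbour mean."""
    p = np.pad(img, 1, mode='edge')
    nb = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
    return np.abs(img - nb) + eps           # eps keeps the weights well defined

def contrast_weighted_fusion(a, b):
    """Per-pixel convex combination of two images, weighted by local contrast."""
    wa, wb = local_contrast_map(a), local_contrast_map(b)
    return (wa * a + wb * b) / (wa + wb)
```

Because the per-pixel weights are positive and normalized, the fused value always lies between the two input values, and high-contrast structure dominates wherever only one input carries detail.</p>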
      <sec id="sec-17-1">
        <title>Conclusions and challenges</title>
<p>Through this panoramic and chronological study of visual contrast and its underlying models and experimental studies, it becomes clear that it is not easy to express a contrast measure in which all the relevant psycho-physical factors and parameters related to the notion of visual contrast are taken into account. Note also that the notion of contrast and its use are very application-dependent, and that the best way to exploit a visual contrast model to solve real problems is to simplify it while keeping its most fundamental aspects.</p>
<p>Furthermore, it is difficult to classify the existing definitions and measures related to the notion of contrast. This is mainly due to the various forms and representations of visual and optical signals. Indeed, the development of imaging technologies has led to various image modalities. It is therefore now necessary to rethink and define the concept of contrast according to the modality of the visual signal under consideration.</p>
<p>It should also be noted that the absence of a universal definition of visual contrast has opened the door to formulations and extensions of this concept in other contexts, where one seeks only to quantify the difference between two stimuli or elements of the signal. This has led, for example, to the signal gradient being treated as a contrast measure in several studies. A unifying framework with clear criteria for defining contrast would certainly help avoid such confusion and mistakes, and would allow vision research results to be properly exploited in developing efficient contrast based VIPA methods.</p>
<p>We can also see through this study that the measurement of perceptual contrast in colour images has not been sufficiently studied. To the author's knowledge, there is at present no well established definition or measure of chromatic contrast recognized by the scientific community in the fields of vision research and digital image processing.</p>
<p>The temporal aspect is also important to introduce into the contrast measure. Another aspect that could be taken into account is inter-channel interactions in the definition of multi-scale contrast. With the renewed interest in artificial intelligence approaches for solving complex problems, and in particular feature-based learning approaches, contrast may play a key role in the design of perceptual loss functions in convolutional neural network architectures.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Watson</surname>
            ,
            <given-names>A.B.</given-names>
          </string-name>
          :
          <article-title>The cortex transform: Rapid computation of simulated neural images</article-title>
          .
          <source>Computer Vision Graphics and Image Processing</source>
          <volume>39</volume>
          (
          <issue>3</issue>
          ),
          <volume>311</volume>
          –
          <fpage>327</fpage>
          (
          <year>1987</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Beghdadi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dauphin</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bouzerdoum</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Image analysis using local band directional contrast</article-title>
          .
          <source>In: Proc of the International Symposium on Intelligent Multimedia, Video and Speech Processing</source>
          , ISIMP'
          <volume>04</volume>
          (
          <year>2004</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Beghdadi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Larabi</surname>
            ,
            <given-names>M.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bouzerdoum</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Iftekharuddin</surname>
            ,
            <given-names>K.M.:</given-names>
          </string-name>
          <article-title>A survey of perceptual image processing methods</article-title>
          .
          <source>Signal Processing: Image Communication</source>
          <volume>28</volume>
          (
          <issue>8</issue>
          ),
          <volume>811</volume>
          –
          <fpage>831</fpage>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Beghdadi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Le Negrate</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Contrast enhancement technique based on sc detection of edges</article-title>
          .
          <source>Computer Vision</source>
          , Graphics, and
          <source>Image Processing</source>
          <volume>46</volume>
          (
          <issue>2</issue>
          ),
          <volume>162</volume>
          –
          <fpage>174</fpage>
          (
          <year>1989</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Beghdadi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Qureshi</surname>
            ,
            <given-names>M.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Deriche</surname>
            ,
            <given-names>M.:</given-names>
          </string-name>
          <article-title>A critical look to some contrast enhancement evaluation measures</article-title>
          .
          <source>In: 2015 Colour and Visual Computing Symposium (CVCS)</source>
          . pp.
          <volume>1</volume>
          –
          <issue>6</issue>
          .
          <string-name>
            <surname>IEEE</surname>
          </string-name>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Belkacem-Boussaid</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Beghdadi</surname>
            ,
            <given-names>A.:</given-names>
          </string-name>
          <article-title>A new image smoothing method based on a simple model of spatial processing in the early stages of human vision</article-title>
          .
          <source>IEEE Transactions on Image Processing</source>
          <volume>9</volume>
          (
          <issue>2</issue>
          ),
          <volume>220</volume>
          –
          <fpage>226</fpage>
          (
          <year>2000</year>
          ). https://doi.org/10.1109/83.821735
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Bex</surname>
            ,
            <given-names>P.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Makous</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          :
          <article-title>Spatial frequency, phase, and the contrast of natural images</article-title>
          .
          <source>JOSA A</source>
          <volume>19</volume>
          (
          <issue>6</issue>
          ),
          <volume>1096</volume>
          –
          <fpage>1106</fpage>
          (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Bhatnagar</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Raman</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>A new image fusion technique based on directive contrast. ELCVIA: electronic letters on computer vision and image analysis 8(2</article-title>
          ),
          <volume>18</volume>
          –
          <fpage>38</fpage>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Bhatnagar</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wu</surname>
            ,
            <given-names>Q.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          :
          <article-title>Directive contrast based multimodal medical image fusion in nsct domain</article-title>
          .
          <source>IEEE transactions on multimedia 15(5)</source>
          ,
          <volume>1014</volume>
          –
          <fpage>1024</fpage>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Burt</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Adelson</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>The laplacian pyramid as a compact image code</article-title>
          .
          <source>IEEE Transactions on communications 31(4)</source>
          ,
          <volume>532</volume>
          –
          <fpage>540</fpage>
          (
          <year>1983</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Chandler</surname>
            ,
            <given-names>D.M.:</given-names>
          </string-name>
          <article-title>Seven challenges in image quality assessment: past, present, and future research</article-title>
          .
          <source>ISRN Signal Processing</source>
          <year>2013</year>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Chandler</surname>
            ,
            <given-names>D.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hemami</surname>
            ,
            <given-names>S.S.:</given-names>
          </string-name>
          <article-title>Dynamic contrast-based quantization for lossy wavelet image compression</article-title>
          .
          <source>IEEE Transactions on Image Processing</source>
          <volume>14</volume>
          (
          <issue>4</issue>
          ),
          <volume>397</volume>
          –
          <fpage>410</fpage>
          (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Cornsweet</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          : Visual Perception. Academic Press (
          <year>1970</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Daly</surname>
            ,
            <given-names>S.J.:</given-names>
          </string-name>
          <article-title>Visible di erences predictor: an algorithm for the assessment of image delity</article-title>
          .
          <source>In: Human Vision</source>
          , Visual Processing, and
          <article-title>Digital Display III</article-title>
          . vol.
          <volume>1666</volume>
          , pp.
          <volume>2</volume>
          –
          <fpage>16</fpage>
          . International Society for Optics and
          <string-name>
            <surname>Photonics</surname>
          </string-name>
          (
          <year>1992</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Damera-Venkata</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kite</surname>
            ,
            <given-names>T.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Geisler</surname>
            ,
            <given-names>W.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Evans</surname>
            ,
            <given-names>B.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bovik</surname>
            ,
            <given-names>A.C.</given-names>
          </string-name>
          :
          <article-title>Image quality assessment based on a degradation model</article-title>
          .
          <source>IEEE Transactions on Image Processing</source>
          <volume>9</volume>
          (
          <issue>4</issue>
          ),
          <fpage>636</fpage>
          –
          <lpage>650</lpage>
          (
          <year>2000</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Dauphin</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Beghdadi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>de Lesegno</surname>
            ,
            <given-names>P.V.</given-names>
          </string-name>
          :
          <article-title>A local directional bandlimited contrast</article-title>
          .
          <source>In: Seventh International Symposium on Signal Processing and Its Applications</source>
          . vol.
          <volume>2</volume>
          , pp.
          <fpage>197</fpage>
          –
          <lpage>200</lpage>
          . IEEE (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Fechner</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Elemente der Psychophysik (Leipzig: Breitkopf &amp; Härtel)</article-title>
          . English translation of Vol. 1 by H.E. Adler (1966). Holt, Rinehart and Winston, New York (
          <year>1860</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Frazor</surname>
            ,
            <given-names>R.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Geisler</surname>
            ,
            <given-names>W.S.:</given-names>
          </string-name>
          <article-title>Local luminance and contrast in natural images</article-title>
          .
          <source>Vision Research</source>
          <volume>46</volume>
          (
          <issue>10</issue>
          ),
          <fpage>1585</fpage>
          –
          <lpage>1598</lpage>
          (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Gordon</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rangayyan</surname>
            ,
            <given-names>R.M.</given-names>
          </string-name>
          :
          <article-title>Feature enhancement of film mammograms using fixed and adaptive neighborhoods</article-title>
          .
          <source>Applied Optics</source>
          <volume>23</volume>
          (
          <issue>4</issue>
          ),
          <fpage>560</fpage>
          –
          <lpage>564</lpage>
          (
          <year>1984</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Grossberg</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mingolla</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Todorovic</surname>
            ,
            <given-names>D.:</given-names>
          </string-name>
          <article-title>A neural network architecture for preattentive vision</article-title>
          .
          <source>IEEE Transactions on Biomedical Engineering</source>
          <volume>36</volume>
          (
          <issue>1</issue>
          ),
          <fpage>65</fpage>
          –
          <lpage>84</lpage>
          (
          <year>1989</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Hachicha</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Beghdadi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cheikh</surname>
            ,
            <given-names>F.A.</given-names>
          </string-name>
          :
          <article-title>Combining depth information and local edge detection for stereo image enhancement</article-title>
          .
          <source>In: 2012 Proceedings of the 20th European Signal Processing Conference (EUSIPCO)</source>
          . pp.
          <fpage>250</fpage>
          –
          <lpage>254</lpage>
          . IEEE
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Hansen</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gegenfurtner</surname>
            ,
            <given-names>K.R.</given-names>
          </string-name>
          :
          <article-title>Color contributes to object-contour perception in natural scenes</article-title>
          .
          <source>Journal of Vision</source>
          <volume>17</volume>
          (
          <issue>3</issue>
          ),
          <fpage>14</fpage>
          –
          <lpage>14</lpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Haralick</surname>
            ,
            <given-names>R.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shanmugam</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dinstein</surname>
            ,
            <given-names>I.H.</given-names>
          </string-name>
          :
          <article-title>Textural features for image classification</article-title>
          .
          <source>IEEE Transactions on Systems, Man, and Cybernetics (6)</source>
          ,
          <fpage>610</fpage>
          –
          <lpage>621</lpage>
          (
          <year>1973</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Haun</surname>
            ,
            <given-names>A.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Peli</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>Measuring the perceived contrast of natural images</article-title>
          .
          <source>In: SID Symposium Digest of Technical Papers</source>
          . vol.
          <volume>42</volume>
          , pp.
          <fpage>302</fpage>
          –
          <lpage>304</lpage>
          . Wiley Online Library (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Hecht</surname>
            ,
            <given-names>S.:</given-names>
          </string-name>
          <article-title>The visual discrimination of intensity and the Weber-Fechner law</article-title>
          .
          <source>The Journal of General Physiology</source>
          <volume>7</volume>
          (
          <issue>2</issue>
          ),
          <fpage>235</fpage>
          –
          <lpage>267</lpage>
          (
          <year>1924</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Hubel</surname>
            ,
            <given-names>D.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wiesel</surname>
            ,
            <given-names>T.N.</given-names>
          </string-name>
          :
          <article-title>Brain mechanisms of vision</article-title>
          .
          <source>WH Freeman</source>
          (
          <year>1979</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <surname>Iordache</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Beghdadi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>de Lesegno</surname>
            ,
            <given-names>P.V.</given-names>
          </string-name>
          :
          <article-title>Pyramidal perceptual filtering using Moon and Spencer contrast</article-title>
          .
          <source>In: Proceedings of the 2001 International Conference on Image Processing</source>
          . vol.
          <volume>3</volume>
          , pp.
          <fpage>146</fpage>
          –
          <lpage>149</lpage>
          . IEEE (
          <year>2001</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <surname>Kretz</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Subjectively optimal quantization of pictures</article-title>
          .
          <source>IEEE Transactions on Communications</source>
          <volume>23</volume>
          (
          <issue>11</issue>
          ),
          <fpage>1288</fpage>
          –
          <lpage>1292</lpage>
          (
          <year>1975</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          29.
          <string-name>
            <surname>Levine</surname>
            ,
            <given-names>M.D.</given-names>
          </string-name>
          :
          <article-title>Vision in man and machine</article-title>
          . McGraw-Hill College
          (
          <year>1985</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          30.
          <string-name>
            <surname>Lillesaeter</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          :
          <article-title>Complex contrast, a definition for structured targets and backgrounds</article-title>
          .
          <source>JOSA A</source>
          <volume>10</volume>
          (
          <issue>12</issue>
          ),
          <fpage>2453</fpage>
          –
          <lpage>2457</lpage>
          (
          <year>1993</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          31.
          <string-name>
            <surname>Matkovic</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Neumann</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Neumann</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Psik</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Purgathofer</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          :
          <article-title>Global contrast factor - a new approach to image contrast</article-title>
          .
          <source>Computational Aesthetics</source>
          <year>2005</year>
          ,
          <fpage>159</fpage>
          –
          <lpage>168</lpage>
          (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          32.
          <string-name>
            <surname>Michelson</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Studies in Optics</article-title>
          .
          <source>The Univ. of Chicago Science Series</source>
          , University Press (
          <year>1927</year>
          ), https://books.google.fr/books?id=FXazQgAACAAJ
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          33.
          <string-name>
            <surname>Moon</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Spencer</surname>
            ,
            <given-names>D.E.</given-names>
          </string-name>
          :
          <article-title>Visual data applied to lighting design</article-title>
          .
          <source>JOSA</source>
          <volume>34</volume>
          (
          <issue>10</issue>
          ),
          <fpage>605</fpage>
          –
          <lpage>617</lpage>
          (
          <year>1944</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          34.
          <string-name>
            <surname>Nguyen</surname>
            ,
            <given-names>P.B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Beghdadi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Luong</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Perceptual watermarking using a new just-noticeable-difference model</article-title>
          .
          <source>Signal Processing: Image Communication</source>
          <volume>28</volume>
          (
          <issue>10</issue>
          ),
          <fpage>1506</fpage>
          –
          <lpage>1525</lpage>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          35.
          <string-name>
            <surname>Peli</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>Contrast in complex images</article-title>
          .
          <source>JOSA A</source>
          <volume>7</volume>
          (
          <issue>10</issue>
          ),
          <fpage>2032</fpage>
          –
          <lpage>2040</lpage>
          (
          <year>1990</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          36.
          <string-name>
            <surname>Peli</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>In search of a contrast metric: Matching the perceived contrast of Gabor patches at different phases and bandwidths</article-title>
          .
          <source>Vision Research</source>
          <volume>37</volume>
          (
          <issue>23</issue>
          ),
          <fpage>3217</fpage>
          –
          <lpage>3224</lpage>
          (
          <year>1997</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          37.
          <string-name>
            <surname>Qureshi</surname>
            ,
            <given-names>M.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Beghdadi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Deriche</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Towards the design of a consistent image contrast enhancement evaluation measure</article-title>
          .
          <source>Signal Processing: Image Communication</source>
          <volume>58</volume>
          ,
          <fpage>212</fpage>
          –
          <lpage>227</lpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          38.
          <string-name>
            <surname>Qureshi</surname>
            ,
            <given-names>M.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Deriche</surname>
            ,
            <given-names>M.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Beghdadi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mohandes</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>An information based framework for performance evaluation of image enhancement methods</article-title>
          .
          <source>In: 2015 International Conference on Image Processing Theory, Tools and Applications (IPTA 2015), Orleans, France, November 10-13, 2015</source>
          . pp.
          <fpage>519</fpage>
          –
          <lpage>523</lpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          39.
          <string-name>
            <surname>Rizzi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Algeri</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Medeghini</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Marini</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>A proposal for contrast measure in digital images</article-title>
          .
          <source>In: Second European Conference on Color in Graphics, Imaging, and Vision (CGIV)</source>
          . pp.
          <fpage>187</fpage>
          –
          <lpage>192</lpage>
          . Society for Imaging Science and Technology, Aachen (
          <year>2004</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          40.
          <string-name>
            <surname>Saleem</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Beghdadi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boashash</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>A distortion-free contrast enhancement technique based on a perceptual fusion scheme</article-title>
          .
          <source>Neurocomputing</source>
          <volume>226</volume>
          ,
          <fpage>161</fpage>
          –
          <lpage>167</lpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          41.
          <string-name>
            <surname>Simone</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pedersen</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hardeberg</surname>
            ,
            <given-names>J.Y.</given-names>
          </string-name>
          :
          <article-title>Measuring perceptual contrast in digital images</article-title>
          .
          <source>Journal of Visual Communication and Image Representation</source>
          <volume>23</volume>
          (
          <issue>3</issue>
          ),
          <fpage>491</fpage>
          –
          <lpage>506</lpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          42.
          <string-name>
            <surname>Tadmor</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tolhurst</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Calculating the contrasts that retinal ganglion cells and LGN neurones encounter in natural scenes</article-title>
          .
          <source>Vision Research</source>
          <volume>40</volume>
          (
          <issue>22</issue>
          ),
          <fpage>3145</fpage>
          –
          <lpage>3157</lpage>
          (
          <year>2000</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          43.
          <string-name>
            <surname>Tang</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Peli</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Acton</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Image enhancement using a contrast measure in the compressed domain</article-title>
          .
          <source>IEEE Signal Processing Letters</source>
          <volume>10</volume>
          (
          <issue>10</issue>
          ),
          <fpage>289</fpage>
          –
          <lpage>292</lpage>
          (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>
          44.
          <string-name>
            <surname>Toet</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Image fusion by a ratio of low-pass pyramid</article-title>
          .
          <source>Pattern Recognition Letters</source>
          <volume>9</volume>
          (
          <issue>4</issue>
          ),
          <fpage>245</fpage>
          –
          <lpage>253</lpage>
          (
          <year>1989</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref45">
        <mixed-citation>
          45.
          <string-name>
            <surname>Triantaphillidou</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jarvis</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Psarrou</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gupta</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Contrast sensitivity in images of natural scenes</article-title>
          .
          <source>Signal Processing: Image Communication</source>
          <volume>75</volume>
          ,
          <fpage>64</fpage>
          –
          <lpage>75</lpage>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref46">
        <mixed-citation>
          46.
          <string-name>
            <surname>Vandergheynst</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kutter</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Winkler</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Wavelet-based contrast computation and application to digital image watermarking</article-title>
          .
          <source>In: Wavelet Applications in Signal and Image Processing VIII</source>
          . vol.
          <volume>4119</volume>
          , pp.
          <fpage>82</fpage>
          –
          <lpage>93</lpage>
          . International Society for Optics and Photonics
          (
          <year>2000</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref47">
        <mixed-citation>
          47.
          <string-name>
            <surname>Watson</surname>
            ,
            <given-names>A.B.</given-names>
          </string-name>
          :
          <article-title>DCT quantization matrices visually optimized for individual images</article-title>
          .
          <source>In: Human Vision, Visual Processing, and Digital Display IV</source>
          . vol.
          <volume>1913</volume>
          , pp.
          <fpage>202</fpage>
          –
          <lpage>216</lpage>
          . International Society for Optics and Photonics (
          <year>1993</year>
          )
          )
        </mixed-citation>
      </ref>
      <ref id="ref48">
        <mixed-citation>
          48.
          <string-name>
            <surname>Watson</surname>
            ,
            <given-names>A.B.</given-names>
          </string-name>
          (ed.):
          <article-title>Digital Images and Human Vision</article-title>
          . MIT Press, Cambridge, MA, USA (
          <year>1993</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref49">
        <mixed-citation>
          49.
          <string-name>
            <surname>Watson</surname>
            ,
            <given-names>A.B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Solomon</surname>
            ,
            <given-names>J.A.</given-names>
          </string-name>
          :
          <article-title>Model of visual contrast gain control and pattern masking</article-title>
          .
          <source>J. Opt. Soc. Am. A</source>
          <volume>14</volume>
          (
          <issue>9</issue>
          ),
          <fpage>2379</fpage>
          –
          <lpage>2391</lpage>
          (Sep
          <year>1997</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref50">
        <mixed-citation>
          50.
          <string-name>
            <surname>Winkler</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vandergheynst</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Computing isotropic local contrast from oriented pyramid decompositions</article-title>
          .
          <source>In: Proceedings of the 1999 International Conference on Image Processing (ICIP 99)</source>
          . vol.
          <volume>4</volume>
          , pp.
          <fpage>420</fpage>
          –
          <lpage>424</lpage>
          . IEEE (
          <year>1999</year>
          )
          )
        </mixed-citation>
      </ref>
      <ref id="ref51">
        <mixed-citation>
          51.
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Alam</surname>
            ,
            <given-names>M.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chandler</surname>
            ,
            <given-names>D.M.:</given-names>
          </string-name>
          <article-title>Visually lossless perceptual image coding based on natural-scene masking models</article-title>
          .
          <source>Recent Advances in Image and Video Coding</source>
          p.
          <fpage>1</fpage>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>