 A Wavelet and HSV Pansharpening Technology of High
              Resolution Satellite Images

      Vita Kashtan1 [0000-0002-0395-5895], Volodymyr Hnatushenko2 [0000-0003-3140-3788]
             1 Oles Honchar Dnipro National University, Dnipro, 49010, Ukraine
                 2 Dnipro University of Technology, Dnipro, 49005, Ukraine

                    vitalionkaa@gmail.com, vvgnat@ukr.net



       Abstract. High resolution satellite images are used to monitor environmental
       changes, produce maps, support military intelligence and forecast natural
       disasters. Such images contain spatial dissimilarities due to differences in the
       radiometric resolution, spectral characteristics and acquisition times of high
       resolution satellite sensors (multispectral and panchromatic). The use of
       pansharpened high spatial resolution images significantly increases the
       possibilities of thematic recognition. Pansharpening is a technique that combines
       the spatial details of a panchromatic image with the several spectral bands of a
       lower resolution multispectral image. To date, a large number of fusion methods
       have been proposed. However, most available methods are not effective for the
       latest very high resolution data, such as WorldView-3 satellite imagery. Most
       methods for increasing spatial resolution produce artifacts: objects that are not
       present in the original scene but appear in the resulting image. In this paper, we
       present a pansharpening technology for high resolution satellite images that
       integrates bicubic interpolation, the HSV color system and the wavelet transform.
       The aim of the proposed technology is to obtain a high resolution multispectral
       satellite image after geometric correction of the primary multispectral images and
       an optimal wavelet decomposition into approximation and detail coefficients
       according to the chosen linear forms of the information value function. The
       proposed technology was verified on a number of different satellite data sets. The
       experimental evaluations are carried out on WorldView-3 images. Visual and
       quantitative analyses show that the presented technology achieves high spectral
       and spatial quality and outperforms some existing pansharpening methods.

       Keywords: Pansharpening, Satellite Image, High Resolution, Panchromatic,
       Multispectral, Wavelet Transform, Resampling.


1      Introduction

High resolution satellites, such as WorldView-2 and WorldView-3, provide very
valuable data about the Earth, e.g., for urban damage detection, environmental
monitoring, weather forecasting, map-making and military intelligence [1-3]. Satellite
images are characterized by their spatial, spectral, radiometric and temporal
resolutions, but for most
practical applications only the spatial and spectral resolutions are considered.
Generally, satellites acquire images in different frequency ranges of the visible and
non-visible spectrum, called monochrome images. Most modern satellite systems that
monitor the Earth can obtain multispectral (MUL) and panchromatic (PAN) images of
different spatial resolutions. All other things being equal, panchromatic images have a
higher spatial resolution. Depending on its frequency range, each monochrome image
contains different information about the object; each monochrome image is
represented as a band [4]. A multispectral image from a high spatial resolution sensor
contains four bands (Red, Green, Blue and Near-Infrared) or eight bands (Coastal,
Blue, Green, Yellow, Red, Red Edge, Near-Infrared 1, Near-Infrared 2). The combination of
these bands produces a new color image [5].


2      State of the Art

At present, there are different pansharpening methods, which can generally be divided
into: Brovey transform, principal component analysis, independent component
analysis, Gram-Schmidt, and intensity-hue-saturation (IHS) transform [6-13]. The
typical algorithm of the component substitution fusion technique is the IHS transform
fusion algorithm, which provides a visually good high resolution multispectral image, but spectral
distortion occurs [14]. Rahmani et al. proposed a modified IHS pansharpening method
[14]. An image adaptive coefficient for IHS was found to obtain a more accurate
spectral resolution. Work [15] proposed a pansharpening with multiscale normalized
nonlocal means filter. This filter computes each pixel value as the weighted average
of all pixels over a sliding window, and the pixel weight depends upon the distance
from the center pixel. The decomposed high frequency details are added into each
band of the multispectral image and the final smooth fused image is obtained. Almost
all of these approaches are fast and easy to implement, but they suffer from spectral
distortion because the PAN image does not cover exactly the same spectral band as the
MUL image, and they are not efficient for the latest generation of imagery.
    In the last decade, image fusion techniques based on Multi Resolution Analysis
methods, like wavelet transform [12, 16] and “à-trous” wavelet transform have be-
come significant due to their ability to capture the information present at different
scales. This is accomplished by using wavelet basis functions, owing to their desirable
properties such as multiscale decomposition and space-frequency localization [17]. In [18]
a component substitution fusion method is proposed to reduce color distortion. The
method proposed in [19] divides the multispectral and panchromatic images into
several pixel groups using the k-means algorithm; the panchromatic image is then
estimated by a weighted summation of the MUL bands, and the fused image is
generated by ratio enhancement [19]. In [20] the authors suggested a pansharpening algorithm using a guided
filter that has good properties such as edge-preserving and structure transferring. The
underlying idea of the approach in [21] is to consider the spectral difference of each
pixel between multispectral image and panchromatic image, and to adaptively inject
the PAN details into the MUL image. An improved image fusion method was proposed
through the improvement of the fused spectra of mixed pixels [22]. In paper [23] the
authors consider the application of nonlinear image decomposition schemes based on
morphological operators to data fusion. Work [24] proposes a new regularized model-
based pan-sharpening method for the images with local dissimilarities. An adjustment
matrix is introduced into the global spatial similarity regulariser to reduce the effect of
the contrast inversion [24]. Recent research has shown that deep neural networks have
achieved superior performance in image pansharpening [25-29]. However, neural
network methods take more time than traditional fusion methods.
   The analysis of the existing pansharpening approaches showed that most methods
for increasing spatial resolution lead to artifacts, i.e., objects that are not present in the
original scene but appear in the resulting image. Moreover, most existing
pansharpening methods introduce spectral distortions.


3      Pansharpening Technology

We propose an efficient pansharpening technology for high resolution satellite images
that integrates bicubic interpolation, the HSV color transform and the wavelet
transform. The technology scheme is shown in Fig. 1.
   The main processing steps are:
1. Uploading the high resolution images received from the WorldView-3 satellite:
   panchromatic (PAN) and multispectral (MUL) in the true-color composition (R, G,
   B) and NIR in the false-color composition (NIR, B, R).
2. Resampling MUL and NIR based on bicubic interpolation [30]:
   $v(x, y) = \sum_{i=0}^{3} \sum_{j=0}^{3} a_{ij} \cdot P_{ij}$,   (1)

where $a_{ij}$ – the interpolation coefficients; $P_{ij}$ – the intensities of the image being scaled.
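For illustration, this resampling step can be sketched in Python with OpenCV's bicubic
mode, which interpolates over a 4×4 neighborhood as in Eq. (1) (a minimal sketch; the
array names and sizes are placeholders):

    import cv2
    import numpy as np

    # Placeholder MUL band at 1.24 m resolution and the target PAN grid size.
    mul_band = np.random.rand(400, 400).astype(np.float32)
    pan_h, pan_w = 1600, 1600

    # Bicubic resampling of the MUL band to the PAN grid (Eq. 1);
    # INTER_CUBIC applies the 4x4-neighborhood weights a_ij internally.
    mul_up = cv2.resize(mul_band, (pan_w, pan_h),
                        interpolation=cv2.INTER_CUBIC)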
3. A characteristic feature of many images obtained by real satellite scanner systems
   is a significant proportion of dark areas and a relatively small number of areas with
   high brightness. That is why one of the first steps of the algorithm is histogram
   equalization of the images. The discrete transformation of the brightness scale is as
   follows:
   $z'_i = z_m \sum_{k=0}^{i} p(z_k)$,   (2)

   where $z'_i$ – the converted brightness value corresponding to brightness $z_i$ of
the original scale; $p(z_k)$ – the normalized brightness histogram of the original
image ($k = 0 \ldots 255$); $z_m$ – the maximum value of the output brightness scale.
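A minimal Python sketch of this equalization step, implementing Eq. (2) directly for
an 8-bit band (the function name is illustrative):

    import numpy as np

    def equalize_band(img: np.ndarray, z_m: int = 255) -> np.ndarray:
        # Normalized brightness histogram p(z_k), k = 0..255.
        hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
        p = hist / hist.sum()
        # Brightness-scale transformation (Eq. 2): z'_i = z_m * sum_k p(z_k).
        lut = np.round(z_m * np.cumsum(p)).astype(np.uint8)
        return lut[img]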

4. Transform the multispectral image (MUL) from its RGB components into hue,
   saturation and value (HSV) components:
   $\begin{bmatrix} V \\ V_1 \\ V_2 \end{bmatrix} =
   \begin{bmatrix} 1/3 & 1/3 & 1/3 \\ 1/\sqrt{6} & 1/\sqrt{6} & -2/\sqrt{6} \\ 1/\sqrt{2} & -1/\sqrt{2} & 0 \end{bmatrix}
   \begin{bmatrix} R \\ G \\ B \end{bmatrix}, \quad
   H = \operatorname{arctg}\dfrac{V_2}{V_1}, \quad
   S = \sqrt{V_1^2 + V_2^2}$.   (3)
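The forward transform (3) can be sketched in Python as a per-pixel matrix product
followed by the polar conversion to H and S (a sketch under the assumption of a float
RGB array of shape (H, W, 3)):

    import numpy as np

    # Triangular transform matrix from Eq. (3).
    M = np.array([[1 / 3,           1 / 3,           1 / 3],
                  [1 / np.sqrt(6),  1 / np.sqrt(6), -2 / np.sqrt(6)],
                  [1 / np.sqrt(2), -1 / np.sqrt(2),  0.0]])

    def rgb_to_vhs(rgb):
        """rgb (H, W, 3) -> intensity V, hue H and saturation S (Eq. 3)."""
        v, v1, v2 = np.einsum('ij,hwj->ihw', M, rgb)
        hue = np.arctan2(v2, v1)   # H = arctg(V2 / V1)
        sat = np.hypot(v1, v2)     # S = sqrt(V1^2 + V2^2)
        return v, hue, sat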




                                   Fig. 1. Technology scheme

5. The next stage is applying the wavelet transform (Fig. 2), which consists of the
   following stages:
   5.1. Decompose the PAN image into approximation coefficients (LL) and detail
coefficients (LH, HL and HH, containing the vertical, horizontal and diagonal image
features) up to the fourth decomposition level of the discrete wavelet transform (DWT)
with the biorthogonal bior2.2 wavelet:
   $PAN \to P_{LL}^{N} + \sum_{i=1}^{N} \left( P_{LH}^{i} + P_{HL}^{i} + P_{HH}^{i} \right)$,   (4)

   where $P_{LL}^{N}$ – the approximation coefficients at level $N$; $P_{LH}^{i}$,
$P_{HL}^{i}$, $P_{HH}^{i}$ – the horizontal, vertical and diagonal detail coefficients
at level $i$, respectively.
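Assuming the PyWavelets package, this decomposition (and likewise stages 5.2-5.3
below) reduces to a single call; 'bior2.2' is the biorthogonal wavelet named above and
level=4 gives the fourth decomposition level:

    import numpy as np
    import pywt

    pan = np.random.rand(1600, 1600)  # placeholder for the PAN image

    # 4-level DWT (Eq. 4): one approximation matrix P_LL^4 followed by a
    # triple of detail matrices (horizontal, vertical, diagonal) per level.
    coeffs_pan = pywt.wavedec2(pan, 'bior2.2', level=4)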
   5.2. Apply the DWT to the luminance component Vm of the multispectral image.
The image is decomposed to the fourth level, yielding approximation and detail
coefficient matrices:
   $V_m \to V_{m\,LL}^{N} + \sum_{i=1}^{N} \left( V_{m\,LH}^{i} + V_{m\,HL}^{i} + V_{m\,HH}^{i} \right)$,   (5)

   where $V_{m\,LL}^{N}$ – the approximation coefficients at level $N$;
$V_{m\,LH}^{i}$, $V_{m\,HL}^{i}$, $V_{m\,HH}^{i}$ – the horizontal, vertical and
diagonal detail coefficients at level $i$.




                              Fig. 2. Wavelet reconstruction

   5.3. Apply the same DWT to the NIR image. The matrix $NIR_{LL}$ holds the
approximation coefficients; the matrices $NIR_{LH}$, $NIR_{HL}$ and $NIR_{HH}$
hold the detail coefficients:

   $NIR \to NIR_{LL}^{N} + \sum_{i=1}^{N} \left( NIR_{LH}^{i} + NIR_{HL}^{i} + NIR_{HH}^{i} \right)$,   (6)

   where $NIR_{LL}^{N}$ – the approximation coefficients at level $N$;
$NIR_{LH}^{i}$, $NIR_{HL}^{i}$, $NIR_{HH}^{i}$ – the horizontal, vertical and
diagonal detail coefficients at level $i$.
   5.4. After decomposition, substitution is performed by combining the Vm
approximation with the PAN detail coefficients at each level, and likewise the NIR
approximation with the PAN detail coefficients. For each of Vm and NIR a single set
of fused image coefficients is obtained, so two sets of fused coefficients are produced
per level. The coefficient merging rule is:
   $V_{m\,LL}^{N} + \sum_{i=1}^{N} \left( P_{LH}^{i} + P_{HL}^{i} + P_{HH}^{i} \right) \to V_{mw}; \qquad
   NIR_{LL}^{N} + \sum_{i=1}^{N} \left( P_{LH}^{i} + P_{HL}^{i} + P_{HH}^{i} \right) \to NIR_{w}$.

   5.5. Applying the inverse discrete wavelet transform (IDWT) with the same
biorthogonal wavelet to the matrices obtained in the previous step yields the new
intensity components Vmw and NIRw. The IDWT reconstructs new intensity
components in which the features of the PAN image and those of the initial intensity
components are integrated. After the inverse wavelet transform, two new fused images
are obtained.
6. Transform back from HSV to the RGB color space, using the Hm and Sm
   components of the multichannel image and the new Vmw component obtained after
   the wavelet transform.
7. Taking the new RGB channels from the true-color composite and the new NIRw
   channel from the false-color composite. The result is a four-channel image of high
   spatial resolution.
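Steps 6-7 invert Eq. (3). A sketch of the back-transform, assuming the forward matrix
M from the sketch after Eq. (3) and the fused components Vmw, Hm and Sm:

    import numpy as np

    M = np.array([[1 / 3,           1 / 3,           1 / 3],
                  [1 / np.sqrt(6),  1 / np.sqrt(6), -2 / np.sqrt(6)],
                  [1 / np.sqrt(2), -1 / np.sqrt(2),  0.0]])
    M_inv = np.linalg.inv(M)

    def vhs_to_rgb(v, hue, sat):
        """Inverse of Eq. (3): rebuild V1, V2 from H and S, then invert M."""
        v1 = sat * np.cos(hue)  # since H = arctg(V2/V1), S = |(V1, V2)|
        v2 = sat * np.sin(hue)
        return np.einsum('ij,jhw->hwi', M_inv, np.stack([v, v1, v2]))

    # rgb_new = vhs_to_rgb(vmw, hm, sm); stacking rgb_new with NIRw gives
    # the four-channel high spatial resolution image of step 7.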


4      Results

4.1    Visual Analysis
The testing WorldView-3 data set consists of 8-band multispectral data with 1.24 m
spatial resolution and a panchromatic image with 0.31 m spatial resolution. Fig. 3
shows a 400×400 pixel detail of the whole scene, which contains buildings, grass, trees
and roads. The MUL subscene in the RGB composition (bands 5-3-2) is shown in Fig.
3a, the NIR color composition (bands 7-2-5) in Fig. 3b, and the full-resolution
panchromatic image in Fig. 3c. The results obtained with the Brovey transform, the
Gram-Schmidt method and the proposed technology are shown in Figs. 3(d)-3(f),
respectively. A visual comparison of the results makes it possible to assert that the
spatial resolution of the original multispectral data is improved. Buildings and roads
in the resulting image are much sharper than in the original image. In the result of the
proposed technology, the spatial details appear as sharp as those in the panchromatic
image, and the spectral information is faithfully preserved without any obvious color
distortion.




 Fig. 3. Satellite images of WorldView-3: a) multichannel; b) NIR color composition; c) pan-
chromatic; d) Brovey-transform; e) Gram-Schmidt methods; f) the proposed technology result.
4.2     Quantitative Analysis
As the visual analysis is very subjective and depends on the interpreter, a number of
statistical analyses were performed [31-33]. To evaluate the spectral and spatial quali-
ty of pansharpened images we used relative dimensionless global error in synthesis
(ERGAS) [11, 33]. Table 1 shows the ERGASspectral values obtained by known
pansharpening methods (HSV, PCA, Gram-Schmidt, Wavelet) and by the image
synthesized with the developed technology. It is clear from its definition that low
ERGAS index values represent high image quality. One of the main difficulties is the
quantitative assessment of visual image quality. To assess visual quality, an approach
based on the calculation of information entropy is often used. Image entropy is a
statistical feature that reflects the average information content of an image [16]. Fig. 4
shows a graphical representation of the entropy values for the original images and the
multichannel image synthesized by our technology. The entropy of the synthesized
image far exceeds that of the initial multichannel image, which indicates that the new
technology improves the information content and object detail of multichannel images.

                       Table 1. Quality evaluation of ERGASspectral
                Methods                                        ERGASspectral
                   HSV                                            6.84
                   PCA                                            6.21
              Gram-Schmidt                                        5.72
                 Wavelet                                          5.48
              New technology                                      4.10
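For reproducibility, a minimal sketch of the ERGAS computation following its
standard definition [33], where the ratio argument is the PAN-to-MUL pixel-size ratio
(0.31/1.24 for WorldView-3); the function and argument names are illustrative:

    import numpy as np

    def ergas(fused, reference, ratio=0.31 / 1.24):
        """fused, reference: (bands, H, W) arrays; lower ERGAS is better."""
        terms = [np.mean((f - r) ** 2) / np.mean(r) ** 2
                 for f, r in zip(fused, reference)]
        return 100.0 * ratio * np.sqrt(np.mean(terms))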

    The correlation coefficient (CORR) is an important indicator reflecting the differ-
ence between the fused image and the original image [33]. This value ranges from -1
to 1. The best correspondences between fused and original image data show the high-
est correlation values. Table 2 shows the CORR values for the Gram-Schmidt, wavelet,
HSV, PCA and new technology image fusion methods. The best results are obtained
by the proposed technology, and the PCA method presents acceptable results. All other
methods have very low correlation values.

                            Table 2. Correlation value CORR
        Methods                  R               G                B             NIR
      PCA                       0.75            0.79             0.87           0.84
      Gram-Schmidt              0.86            0.85             0.84           0.81
      HSV                       0.48            0.57             0.54           0.55
      Wavelet                   0.72            0.82             0.71           0.73
      New technology            0.96            0.95             0.97           0.95
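The per-band CORR values above are Pearson correlation coefficients between a fused
band and the corresponding original band; a one-line sketch:

    import numpy as np

    def corr(fused_band, original_band):
        """Pearson correlation between a fused and an original band."""
        return np.corrcoef(fused_band.ravel(), original_band.ravel())[0, 1]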

   The SSIM shows the similarity with the original image. The structural similarity
image quality paradigm is based on the assumption that the human visual system is
highly adapted for extracting structural information from the scene [32]. Table 3
shows the SSIM values for the fused images in comparison with the multispectral
image. All methods except the Wavelet method and the proposed technology score
near zero, which confirms that they retain only a slight similarity with the original
image.
   Quantitative analysis shows that existing methods for increasing spatial resolution
lead to artifacts. The proposed technology increases the spatial resolution of multi-
spectral aerospace images without color distortion.




               Fig. 4. Graphical representation of entropy values for images
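The entropy values plotted in Fig. 4 follow the usual Shannon definition over the
normalized brightness histogram; a sketch for an 8-bit band:

    import numpy as np

    def image_entropy(band):
        """Shannon entropy (in bits) of an 8-bit image band."""
        p = np.bincount(band.ravel(), minlength=256) / band.size
        p = p[p > 0]                       # drop empty histogram bins
        return float(-np.sum(p * np.log2(p)))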

                  Table 3. Value of SSIM of the pansharpenined results
        Methods                  R                 G                B           NIR
     Gram-Schmidt              0.055             0.047            0.047        0.054
        Brovey                 0.001             0.002            0.004        0.011
        ModIHS                 0.028             0.044            0.045        0.052
        Wavelet                0.453             0.561            0.491        0.512
     New technology            0.711             0.792            0.771        0.790
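The per-band SSIM values can be computed with scikit-image (a sketch; data_range
must match the band's dynamic range):

    import numpy as np
    from skimage.metrics import structural_similarity

    def band_ssim(fused_band, original_band):
        """SSIM between a fused band and the original band (float arrays)."""
        rng = float(original_band.max() - original_band.min())
        return structural_similarity(fused_band, original_band,
                                     data_range=rng)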


5      Conclusions

In this paper, we present a new pansharpening technology for high resolution satellite
images. In particular, it utilizes the merit of HSV fusion in smoothly integrating spatial
resolution information and the merit of wavelet fusion in preserving color information.
The visual evaluation shows that the color of the fusion results of the proposed
wavelet-HSV pansharpening technology is very close to the color of the original MUL
images for every data set, whereas the colors of the stand-alone HSV fusion results
and of the wavelet fusion results are substantially distorted.
Visual and quantitative analyses show that our presented technology preserves the
original spectral features, can achieve high spectral and spatial quality and outper-
forms existing pansharpening methods.
    Our further research will focus on improving the fusion accuracy of multichannel
image fusion.
References
 1. Hnatushenko, V.V., Mozgovyi, D.K., Vasyliev, V.V., Kavats, O.O.: Satellite Monitoring
    of Consequences of Illegal Extraction of Amber in Ukraine. Scientific Bulletin of National
    Mining University, State Higher Educational Institution "National Mining University",
    Dnipropetrovsk, no. 2 (158), pp. 99-105, (2017).
 2. Zhang, Y.: Problems in the fusion of commercial high-resolution satellite as well as Land-
    sat 7 images and initial solutions. International Archives of Photogrammetry Remote Sens-
    ing and Spatial Information Sciences, 34(4), p. 587–592, (2012).
 3. Hordiiuk, D.M., Hnatushenko, V.V.: Neural network and local laplace filter methods ap-
    plied to very high resolution remote sensing imagery in urban damage detection. 2017
    IEEE International Young Scientists Forum on Applied Physics and Engineering (YSF),
    (2017). doi:10.1109/ysf.2017.8126648.
 4. Gnatushenko, V.: The use of geometrical methods in multispectral image processing.
    Journal of Automation and Information Sciences, Volume 35 (12), 1-8, (2003).
    doi: 10.1615/JAutomatInfScien.v35.i12.10.
 5. Kashtan, V., Hnatushenko, V., Shedlovska, Y.: Processing technology of multispectral re-
    mote sensing images. International Young Scientists Forum on Applied Physics 2017,
    p. 355-358. Lviv (2017). doi:10.1109/YSF.2017.8126647.
 6. Meng, X., Shen, H., Li, H., Zhang, L., & Fu, R.: Review of the pansharpening methods for
    remote sensing images based on the idea of meta-analysis: Practical discussion and chal-
    lenges. Information Fusion, 46, 102–113, (2019). doi:10.1016/j.inffus.2018.05.006 .
 7. Vivone, G., Alparone, L., Chanussot, J., Dalla Mura, M., Garzelli, A., Licciardi, G. A.,
    Restaino, R., and Wald, L.: A critical comparison among pansharpening algorithms. IEEE
    Trans. Geosci. Remote Sens., vol. 53, no. 5, pp. 2565-2586, (2015).
 8. Ghassemian, H.: A review of remote sensing image fusion methods. Information Fusion,
    32, 75–89, (2016). doi:10.1016/j.inffus.2016.03.003.
 9. Xu, Q., Li, B., Zhang, Y., Ding, L.: High-Fidelity Component Substitution Pansharpening
    by the Fitting of Substitution Data. IEEE Transactions on Geoscience and Remote Sensing,
    vol. 52, no. 11, pp. 7380-7392, (2014). doi: 10.1109/TGRS.2014.2311815.
10. Sulaiman, A.G., Elashmawi, W.H., & El-Tawel, G.S.: A Robust Pan-Sharpening Scheme
    for Improving Resolution of Satellite Images in the Domain of the Nonsubsampled Shear-
    let Transform. Sensing and Imaging, 21(1), (2019). doi:10.1007/s11220-019-0268-5.
11. Hnatushenko, V., Hnatushenko, Vik., Kavats, O., Shevchenko, V.: Pansharpening technol-
    ogy of high resolution multispectral and panchromatic satellite images. Scientific Bulletin
    of National Mining University, Issue 4, 91-98 (2015).
12. Vivone, G., Alparone, L.; Chanussot, J.; Dalla Mura, M.; Garzelli, A.; Licciardi, G.A.;
    Restaino, R., Wald, L.: A critical comparison among pansharpening algorithms. IEEE
    Trans. Geosci. Remote Sens. 53, p. 2565–2586 (2015).
13. Aishwarya, N, Abirami, S. and Amutha, R.: Multifocus image fusion using Discrete
    Wavelet Transform and Sparse Representation. 2016 International Conference on Wireless
    Communications, Signal Processing and Networking (WiSPNET), Chennai, 2016, pp.
    2377-2382, (2016). doi: 10.1109/WiSPNET.2016.7566567.
14. Rahmani, S, Strait, M, Merkurjev, D, Moeller, M, Wittman, T.: An adaptive IHS
    pansharpening method. IEEE Geosci. Remote Sens. Lett., vol. 7, no. 4, pp.746-750 (2010).
15. Haitao, Yin, Shutao, Li.: Pansharpening with multiscale normalized nonlocal means filter:
    a two-step approach. IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no.
    10, pp. 5734-5745 (2015).
16. Kashtan, V., Hnatushenko, V.: Computer Technology of High Resolution Satellite Image
    Processing Based on Packet Wavelet Transform. International Workshop on Conflict
    Management in Global Information Networks. CMiGIN 2019, p.370-380, Lviv (2019).
17. Li, X., Xu, F., Lyu, X., Tong, Y., Chen, Z., Li, S., & Liu, D.: A Remote-Sensing Image
    Pan-Sharpening Method Based on Multi-Scale Channel Attention Residual Network. IEEE
    Access, 8, 27163–27177, (2020). doi:10.1109/access.2020.2971502.
18. Shahdoosti, H.R., Javaheri, N.: Pansharpening of Clustered MS and Pan Images Consider-
    ing Mixed Pixels, IEEE Geoscience and Remote Sensing Letters, 14, 826-830, (2017).
19. Xu, Q., Zhang, Y., Li, B., Ding, L.: Pansharpening using regression of classified MS and
    Pan images to reduce color distortion. IEEE Geosci. Remote Sens. Lett, 12, p. 28–32
    (2015).
20. Liu, J., Liang, S.: Pan-sharpening using a guided filter. International Journal of Remote
    Sensing, 37:8, 1777-1800, (2016). doi: 10.1080/01431161.2016.1163749.
21. Zheng, Y., Dai, Q., Tu, Z., Wang, L.: Guided image filtering-based pan-sharpening meth-
    od: A case study of GaoFen-2 imagery. ISPRS International Journal of Geo-Information,
    6, p. 404 (2017). doi:10.3390/ijgi6120404.
22. Li, H., Jing, L., Tang, Y., Wang, L.: An image fusion method based on image segmenta-
    tion for high-resolution remotely-sensed imagery. Remote Sens., 10, p. 790 (2018).
23. Restaino, R., Vivone, G., Dalla Mura, M., Chanussot, J.: Fusion of multispectral and pan-
    chromatic images based on morphological operators. IEEE Trans. Image-Process, 25,
    p. 2882–2895 (2016).
24. Wang, W., Liu, H., Liang, L., Liu, Q., Xie, G.: A regularized model-based pan-sharpening
    method for remote sensing images with local dissimilarities. International Journal of Re-
    mote Sensing, 1–26, (2018). doi:10.1080/01431161.2018.1539269.
25. Wei, Y., Yuan, Q., Shen, H., and Zhang, L.: Boosting the accuracy of multispectral image
    pansharpening by learning a deep residual network. IEEE Geosci. Remote Sens. Lett., vol.
    14, no. 10, pp. 1795-1799, (2017).
26. Huang, W., Xia, L., Wei, Z., Liu, H., and Tang, S.: A new pan-sharpening method with
    deep neural networks. IEEE Geosci. Remote Sens. Lett., vol. 12, no. 5, pp. 1037-1041,
    (2015).
27. Azarang, A. and Ghassemian, H.: A new pansharpening method using multi resolution
    analysis framework and deep neural networks. In 2017 3rd Int. Conf. on Pattern Recogni-
    tion and Image Analysis (IPRIA), (2017).
28. Scarpa, G., Vitale, S. and Cozzolino, D.: Target-Adaptive CNN-Based Pansharpening.
    IEEE Transactions on Geoscience and Remote Sensing, vol. 56, no. 9, pp. 5443-5457,
    (2018). doi: 10.1109/TGRS.2018.2817393.
29. Li, X., Yan, H., Xie, W., Kang, L., & Tian, Y.: An Improved Pulse-Coupled Neural Net-
    work Model for Pansharpening. Sensors, 20(10), 2764, (2020). doi:10.3390/s20102764.
30. Aiazzi, B., Baronti, S., Selva, M., Alparone, L.: Bi-cubic interpolation for shift-free pan-
    sharpening. ISPRS J. Photogramm. Remote Sens., 86, 65–76, (2013).
31. Kwan, C., Budavari, B., Bovik, A. C., & Marchisio, G.: Blind Quality Assessment of
    Fused WorldView-3 Images by Using the Combinations of Pansharpening and Hyper-
    sharpening Paradigms. IEEE Geoscience and Remote Sensing Letters, 14(10), 1835–1839,
    (2017). doi:10.1109/lgrs.2017.2737820.
32. Wang, Z., Bovik, A., C., Sheikh, H.,R., Simoncelli, E., P.: Image Quality Assessment:
    From Error Visibility to Structural Similarity. IEEE Transactions on Image Processing,
    vol. 13, No. 4, pp. 600-612 (2004).
33. Jagalingam, P., Hegde, A.V.: A Review of Quality Metrics for Fused Image. Aquatic Pro-
    cedia. 4:133–142, (2015). doi: 10.1016/j.aqpro.2015.02.019.