                         Tumor Detection in Mammography Images using Discrete
                         Wavelet Transform and Bayes Fusion Technique

                         Abdelkader Zitouni1, Fatiha Benkouider1, Fatima Chouireb1 and Mourad Reggab1

                         1
                              University Amar Telidji, 37G Route de Ghardaia, Laghouat 03000, Algeria


                                             Abstract
                                             This research presents a supervised classification algorithm based on information fusion for
                                             detecting masses in mammography images. The discrete wavelet transform preserves
                                             information at both high and low frequencies and offers strong discriminatory power
                                             between areas with high similarity, which motivates its use here to improve image
                                             segmentation. In the first stage, the proposed technique applies this feature extraction
                                             approach to mammography images in order to obtain additional information. In the second
                                             stage, the estimated feature vector of each pixel is sent to a neural network classifier for
                                             initial labeling. In the third stage, a Bayes fusion method combines, within a sliding
                                             window, the scores obtained by the neural network for each pixel. The performance of the
                                             proposed segmentation algorithm was evaluated on mammography images from the
                                             Mammographic Image Analysis Society (MIAS) dataset. The classification results achieved
                                             by the proposed fusion system show higher precision in detecting masses on mammography
                                             images, which are one of the signs of breast cancer.

                                             Keywords
                                             Image segmentation, Masses Detection, Breast Cancer, Neural Network, Wavelets, Bayes
                                             fusion.

                         1. Introduction
   One of the leading causes of death worldwide among women is breast cancer [1]. Studies on breast
cancer have demonstrated that early detection of these abnormalities is a very important factor in
cancer treatment and allows better recovery for most patients [2].
   Medical imaging is a robust and reliable diagnostic method for breast-related diseases, and it
can be produced by various equipment in the medical field, such as ultrasound (USG), MRI, CT-
Scan / CAT-Scan, and mammography [3].
   Mammography is the major screening tool used for the detection of breast cancer at an
early stage, and several image processing techniques have been applied to mammogram interpretation
in order to assist radiologists in detecting and identifying possible abnormalities [4-18]. [4] and [5]
presented automated classification of breast cancer lesions using neural networks and a deep belief
network; [6], [7], [8] used the gray-level co-occurrence matrix for mammogram classification; [10], [11]
proposed mammogram classification schemes using the 2-D discrete wavelet transform and local binary patterns.
   Generally, masses (space-occupying lesions) and calcifications (tiny flecks of calcium, like grains
of salt) are the two abnormalities present in mammogram images. The pre-processing and feature
extraction process is an important stage in identifying the presence of tumors. So, referring to the

                         RIF'23 : The 12th Seminary of Computer Science Research at Feminine, March 09, 2023, Constantine, Algeria
                         EMAIL: a.zitouni@lagh-univ.dz (A. Zitouni); fbenkouider@gmail.com (F. Benkouider); f.chouireb@lagh-univ.dz (F. Chouireb);
                         m.reggab@lagh-univ.dz (M. Reggab)
                                             2023 Copyright for this paper by its authors.
                                        Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
                                        CEUR Workshop Proceedings (CEUR-WS.org)


advantages and disadvantages of the methods and algorithms developed by previous
researchers, in this paper we attempt to use one type of feature extraction technique with the purpose
of detecting masses on mammography images. This is a structural feature obtained from the
wavelet transform coefficients. For the pre-processing step, which aims to improve image
quality, for instance through contrast enhancement for better image visualization [1], there are different
types of filtering techniques. In this study the contrast of the mammogram images was adjusted by
histogram adjustment, which improves the contrast of the output image by spreading out the intensity
values.
    So, in the first stage of our work, we have used the discrete wavelet transform as a feature extraction
strategy in order to obtain more information that enables the classifier to discriminate between the
different areas of the mammography image. In the second stage, an appropriate classification
algorithm is applied to the set of features extracted in the previous stage. The
backpropagation artificial neural network classifier, introduced in [19], [20], is chosen as one of the
most well-known classifiers. The estimated feature vector of each pixel is sent to the neural
network classifier for initial labeling.
    A sliding window, whose class is assigned to its central pixel, is used. However, this central pixel
also belongs to neighboring windows that may be classified into other classes. Consequently, in the
third stage, in order to obtain a more precise segmentation result, a Bayes fusion method is used for
each pixel to combine the score results of the several windows that contain this central pixel. The
performance of the proposed segmentation algorithm was verified on mammography images from the MIAS
dataset [21]. The obtained results lead to higher classification precision in detecting masses, which are
one of the signs of breast cancer.
    The rest of this manuscript is divided into three sections. Section 2 describes the background
theory of the techniques used in this paper. Section 3 then details the
suggested segmentation process and the performance achieved by the proposed fusion technique.
Finally, in section 4, we conclude and suggest possibilities for future work.

2. Feature Extraction Algorithm and Fusion Theory

    2.1 The wavelet analysis
   Since the work of Grossman and Morlet [22], the wavelet transform has emerged as a powerful
tool for solving problems in many applications. The wavelet transform decomposes the input signal
into a series of wavelet functions ψ_{a,b}(t) derived from a mother wavelet ψ(t) by dilation
(factor a) and translation (factor b) operations. Figure 1 illustrates some examples of wavelets that are
commonly used in image processing [23].




Figure 1: Examples of wavelets.
(a) Morlet wavelet, (b) Mexican hat wavelet, (c) Meyer wavelet


   Wavelet analysis transforms a finite-energy signal in the spatial domain into another finite-energy
signal in the spatio-frequency domain. The components C_{a,b} of this new signal, described
in equation (1), are called wavelet coefficients. In an image, these coefficients provide information on
the local variation of the grey levels around a given pixel: the more significant this variation, the
higher they are [24].

                                       C_{a,b} = ∫ f(t) ψ_{a,b}(t) dt                                 (1)

   where

                                       ψ_{a,b}(t) = (1/√a) ψ((t − b)/a)                               (2)


   The most important advantage of wavelets compared with other frequency methods, such as the Fourier
transform, is that they offer both frequency and spatial locality [25]. In 1989, Mallat [26] suggested a
multi-resolution decomposition algorithm based on the wavelet transform. The algorithm decomposes an
input image into a set of detail images and an approximation image using a filter bank comprising a
high-pass filter (HP) and a low-pass filter (LP). At each decomposition level the size of the
transformed images is reduced by a factor of two [24]. The discrete wavelet transform of a 2-D image
can be obtained by performing the filtering consecutively along the horizontal and vertical directions
(separable filter bank) [27]. Four images are then created at each level. Figure 2 shows an example of
a one-level decomposition of an image.




Figure 2: Use of 1-stage discrete wavelet transform (DWT)

    In Figure 2, the DWT decomposes the image into four orthogonal sub-bands: low-low (LL), high-low
(HL), low-high (LH), and high-high (HH), containing the approximation, horizontal, vertical, and
diagonal information respectively. The approximation image is the smoothed version of the original image: it
contains global information similar to the original image, with the number of rows and the
number of columns being half those of the original image. The horizontal, vertical, and diagonal images contain the
details and represent the fluctuations of the pixel intensity in the horizontal, vertical, and diagonal
directions; they consist mostly of low-intensity areas, with high-intensity areas found only on the
edges of the image objects.
   The values of the transformed coefficients in the detail and approximation images (sub-band images)
provide the features that capture useful discrimination information for mass
segmentation [28].

   1. Wavelet’s Choice: In our research, we have used a second-order biorthogonal spline wavelet.
This wavelet is used for mass analysis due to its excellent localization in the frequency and spatial
domains and its sensitivity to local singularities and correlations in the image [24].
   2. Indices’ Calculation: One of the most widely used indices for characterizing masses in the spatio-
frequency plane is the energy measure. Because the transformed images have different
frequencies, scales and orientations, the energy index is a local measure of the wavelet coefficient
distribution according to scale, orientation and frequency. It has been used successfully for the
segmentation and classification of masses [24].

   The expression of the energy is given by [24]:

                                       E = (1/N) Σ_{(i,j)∈R} c²(i,j)                                  (3)

   The second index, used in conjunction with the energy, is the local mean of the
wavelet coefficients, given by [24]:

                                       M = (1/N) Σ_{(i,j)∈R} |c(i,j)|                                 (4)

   where N denotes the number of pixels, designated by the indices (i, j), enclosed in the area R.
   These indices are computed on a sliding window W. The local mean and the energy
on the sliding window are calculated from the resulting sub-band images. So the feature vector of
each window is made of eight parameters V = [E_LL, E_LH, E_HL, E_HH, M_LL, M_LH, M_HL, M_HH], as seen in
Figure 3.
   Several tests were carried out on a series of window sizes ranging from 5×5 to 25×25. The highest
correct classification rate was reached for a window of size 11×11. The feature vector produced
for each window is used as input to the neural network classifier for primary labeling, and the
score delivered by the neural network for the window is assigned to its central pixel.
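The energy and local-mean indices defined above can be sketched as follows; the sub-band images here are random stand-ins, and the 1/N normalization is assumed:

```python
import numpy as np

def window_features(subband, i, j, w=11):
    """Energy and local mean of absolute wavelet coefficients over a
    w x w window centred on pixel (i, j) of one sub-band image."""
    h = w // 2
    win = subband[max(i - h, 0):i + h + 1, max(j - h, 0):j + h + 1]
    N = win.size
    energy = np.sum(win ** 2) / N          # energy index
    mean = np.sum(np.abs(win)) / N         # local mean of |coefficients|
    return energy, mean

# Hypothetical sub-band images (LL, LH, HL, HH) for illustration
rng = np.random.default_rng(0)
subbands = [rng.normal(size=(64, 64)) for _ in range(4)]

# Eight-parameter vector V = [E_LL, E_LH, E_HL, E_HH, M_LL, M_LH, M_HL, M_HH]
V = ([window_features(s, 32, 32)[0] for s in subbands] +
     [window_features(s, 32, 32)[1] for s in subbands])
print(len(V))  # 8
```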




Figure 3: Wavelet features extraction stage

    2.2 Theory of Bayes’ Fusion
   Bayes fusion was one of the first techniques used to combine images at the decision level.
This model has been chosen by several authors because it has a very well-defined context with known
mathematical properties [29].
    Consider H_1, H_2, ..., H_N to be a collection of mutually exclusive hypotheses. They satisfy the
following conditions:

                                  { H_i ∩ H_j = ∅,  i ≠ j
                                  { H_1 ∪ H_2 ∪ ... ∪ H_N = Θ                                         (5)

   where Θ represents the hypothesis space (that is to say, the set of fused image classes). The
hypotheses are mutually exclusive and form a partition of Θ.
   Consider x_1 and x_2 to be two characteristic primitives from two different images, representing
the same object, or the same hypothesis H_i. Bayesian theory computes the likelihood of the
hypothesis given the two measures x_1 and x_2 through Bayes' rule [30]:

                P(H_i | x_1, x_2) = P(H_i) P(x_1, x_2 | H_i) / Σ_{k=1..N} P(H_k) P(x_1, x_2 | H_k)    (6)

   where P(x_1, x_2 | H_i) represents the joint probability of observing both measures (x_1, x_2) once the
hypothesis H_i is realized, and P(H_i) is the prior probability of the hypothesis H_i, which reflects the
possibility of occurrence of the hypothesis in the general case [30].
   If x_1 and x_2 are two independent random variables, the conditional probability P(x_1, x_2 | H_i),
also called the likelihood function, becomes a separable function of the two variables x_1 and x_2:

                              P(x_1, x_2 | H_i) = P(x_1 | H_i) P(x_2 | H_i)                           (7)

   Hence equation (6) takes the following form:

              P(H_i | x_1, x_2) = P(H_i) P(x_1 | H_i) P(x_2 | H_i)
                                  / Σ_{k=1..N} P(H_k) P(x_1 | H_k) P(x_2 | H_k)                       (8)

   Consequently, in order to determine the posterior probabilities P(H_i | x_1, x_2), we first need to
compute the prior probabilities P(H_i) for all hypotheses H_i, i going from 1 to N, and the likelihood
functions P(x_j | H_i) for each image primitive x_j and for each hypothesis. To model the likelihood
functions, we work under the Gaussian hypothesis [30]:

                       P(x_j | H_i) = (1 / (√(2π) σ_i)) exp(−(x_j − x̄_i)² / (2σ_i²))                 (9)

   where x̄_i denotes the mean and σ_i the standard deviation of the Gaussian expression.

    Once the probabilities have been combined via equation (8), a decision criterion must be
selected to decide which hypothesis H_i should be chosen according to all posterior
probabilities. Many criteria are suggested in the literature; the maximum of the posterior probability is
the most commonly used criterion: it selects the hypothesis H_i having the highest probability
P(H_i | x_1, x_2) [31].
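A compact sketch of this fusion rule and the MAP decision criterion, with illustrative (made-up) priors and Gaussian parameters:

```python
import numpy as np

def gaussian_likelihood(x, mean, std):
    """Gaussian likelihood P(x | H_i) as in the Gaussian hypothesis above."""
    return np.exp(-(x - mean) ** 2 / (2 * std ** 2)) / (np.sqrt(2 * np.pi) * std)

def bayes_fuse(x1, x2, priors, means, stds):
    """Posterior P(H_i | x1, x2) under the independence assumption,
    followed by the maximum-a-posteriori (MAP) decision."""
    post = np.array([priors[i]
                     * gaussian_likelihood(x1, means[i], stds[i])
                     * gaussian_likelihood(x2, means[i], stds[i])
                     for i in range(len(priors))])
    post /= post.sum()                     # normalization (denominator of the rule)
    return post, int(np.argmax(post))      # MAP criterion

# Two hypotheses with illustrative class-conditional parameters
priors = [0.5, 0.5]
means, stds = [0.0, 3.0], [1.0, 1.0]
posterior, decision = bayes_fuse(2.5, 2.8, priors, means, stds)
print(decision)  # measures near 3 favour hypothesis 1
```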

3. The Proposed Segmentation Algorithm
   Data obtained from mammographic images are often noisy, incomplete, inconsistent, and low
in contrast. Therefore, pre-processing is needed in medical image processing to improve image
quality, remove unwanted noise, preserve the edges within an image, and make the feature extraction
phase more reliable [1].
   There are different types of filtering techniques for pre-processing. So, in the first step of our
work, the contrast of the mammogram images was adjusted by histogram adjustment, which
improves the contrast of the output image by spreading out the intensity values.
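The exact histogram adjustment is not specified in the text; a percentile-based linear stretch is one common realization and is sketched below (the percentile bounds are illustrative assumptions):

```python
import numpy as np

def stretch_contrast(img, lo_pct=1, hi_pct=99):
    """Histogram adjustment by linear stretching: map the [lo, hi]
    percentile range of the input onto the full [0, 255] range."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    out = (img.astype(float) - lo) / (hi - lo)
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)

# A low-contrast patch: intensities confined to [60, 120]
img = np.linspace(60, 120, 64, dtype=np.uint8).reshape(8, 8)
stretched = stretch_contrast(img)
print(stretched.min(), stretched.max())  # intensities spread towards 0 and 255
```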
   After that, in the second step, the proposed segmentation method applies the wavelet transform as a
feature extraction strategy to the mammography images in order to obtain more information from this data set.
The parameters of the feature set were selected as described in the previous sections. After
feature extraction, the estimated feature vector of each pixel is sent to the neural network classifier
for primary labeling. The MLP neural network classifier, introduced in [19], [20], is chosen as one of the most well-known
classifiers. For the number of hidden layers and the number of
neurons in each layer, we adopt the rule proposed in [34], since there is no general rule other than
rules of thumb such as those proposed in [35], [36]: the size of the hidden layer is 75% of that of the input layer.
   For the transfer functions, we retain those most used in the literature, namely the logistic function
and the hyperbolic tangent function. The gradient backpropagation algorithm is used to train
the neural network.
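The network shape described above (hidden layer at 75% of the input layer, tanh and logistic transfer functions) can be sketched as follows; the weights here are random stand-ins for the backpropagation-trained ones:

```python
import numpy as np

rng = np.random.default_rng(1)

n_in = 8                      # eight wavelet features per window (the vector V)
n_hidden = int(0.75 * n_in)   # hidden layer at 75% of the input layer: 6 neurons
n_out = 2                     # scores for the mass / non-mass classes

# Randomly initialised weights stand in for the trained network
W1 = rng.normal(size=(n_hidden, n_in)); b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_out, n_hidden)); b2 = np.zeros(n_out)

def mlp_scores(v):
    """Forward pass: hyperbolic-tangent hidden layer and logistic
    (sigmoid) outputs, the transfer functions retained in the text."""
    h = np.tanh(W1 @ v + b1)
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))

scores = mlp_scores(rng.normal(size=n_in))
print(scores.shape)  # (2,): one score per class for the window's central pixel
```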
Figure 4: Composition of the fusion vector


    Using a sliding window, the class of the window is assigned to its central pixel. However, this
central pixel also belongs to neighboring windows that may be classified into other classes.
Consequently, in order to achieve a more precise segmentation result, a Bayes fusion
method is applied to each pixel to combine the scores of the several windows that contain this central pixel:
   Consider S to be the segmented image comprising the scores S_{i,j} of each pixel (the output of the
neural network classifier),

            with i = 1, ..., n,  j = 1, ..., m,
   where n and m represent the sizes of the mammography image.

   We traverse the image with a sliding window of size w × w, so that every pixel is
surrounded by w² − 1 pixels. Each central pixel P_{i,j} of window W_{i,j} with score S_{i,j} belonged to the
w² − 1 surrounding windows before the classification process. However, each of these
windows produced a different score.

   For example, in the case of pixel P_{3,3} with score S_{3,3} and w = 3, the central pixel is surrounded
by eight pixels, which are the centers of the eight windows to which pixel P_{3,3} belonged (see Figure 4).

   From the above example, we join the scores produced by the current window and its eight
neighboring ones for the wavelet features: {S_{3,3}, S_{3,2}, S_{3,4}, S_{2,3}, S_{2,4}, S_{2,2}, S_{4,2}, S_{4,4},
S_{4,3}} (see Figure 5). In this work, a sliding window of size 9×9 is used, so the
central pixel is surrounded by 80 pixels, which are the centers of the 80 windows to which pixel P_{5,5}
belonged.
    Figure 5: Classification stages block diagram

   The essential steps of the fusion algorithm are summarized as follows:
 Step 1: Pre-processing of the mammography image.
 Step 2: Feature extraction via the DWT.
 Step 3: Neural classification of the estimated feature vectors.
         We obtain Scores1 for the DWT features.
 Step 4: while the recognition rate changes do
             for each pixel do
                 Bayesian fusion of Score1 and the neighboring scores.
             end
             Calculate the recognition rate.
         end
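One fusion pass over the score image can be sketched as follows; the per-pixel combination rule here is a simple normalized product of the neighboring score vectors, a Bayes-style stand-in for illustration rather than the paper's exact rule:

```python
import numpy as np

def fuse_neighbours(S, w=3):
    """One fusion pass: for each pixel, combine its score vector with the
    scores of the surrounding (w*w - 1) window centres by a normalized
    product, and return the fused score image."""
    n, m, _ = S.shape
    h = w // 2
    out = np.empty_like(S)
    for i in range(n):
        for j in range(m):
            block = S[max(i - h, 0):i + h + 1, max(j - h, 0):j + h + 1]
            prod = np.prod(block.reshape(-1, S.shape[2]), axis=0)
            out[i, j] = prod / prod.sum()     # renormalize the posteriors
    return out

# Toy score image: 2 classes, a noisy square of class 1 in the middle
S = np.full((8, 8, 2), [0.6, 0.4])
S[2:6, 2:6] = [0.3, 0.7]
fused = fuse_neighbours(S)
print(fused[4, 4].round(3))  # the interior pixel's class-1 posterior sharpens
```

Iterating this pass until the recognition rate stops changing mirrors the while-loop in the steps above.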

   The performance of the proposed algorithm for segmenting mammography images is assessed
using images from the MIAS (Mammographic Image Analysis Society) database [21],
which contains 322 mammograms of 1024 × 1024 pixels. The images are arranged in pairs: even-numbered
images correspond to left MLO (medio-lateral oblique) mammograms and odd-numbered images to right MLO.
   For the learning phase, we used image mdb028 of the MIAS database, and to test our
algorithm we took the following MIAS images at random:
   mdb025 and mdb132, for well-defined/circumscribed masses (CIRC) [32].
   mdb184, for spiculated masses (SPIC) [32].
   mdb134, mdb271 and mdb274, for other, ill-defined masses (MISC) [32].
   mdb136 and mdb310, for normal breasts (NORM) [32].


   Figure 6 illustrates the obtained results (detected masses displayed in cyan) compared to the
expert annotation (mass center coordinates and radii shown in blue). Table 1 reports
the Jaccard index achieved by this fusion algorithm.
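The Jaccard index reported in Table 1 is the intersection-over-union of the detected mask and the expert mask, expressed as a percentage; it can be computed as follows (the masks here are hypothetical):

```python
import numpy as np

def jaccard_index(pred, truth):
    """Jaccard index (intersection over union) between two binary masks,
    as a percentage. Two empty masks agree perfectly (100%)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return 100.0 * inter / union if union else 100.0

truth = np.zeros((10, 10)); truth[2:6, 2:6] = 1   # hypothetical expert mask
pred = np.zeros((10, 10)); pred[2:6, 2:7] = 1     # detection with one extra column
print(round(jaccard_index(pred, truth), 2))  # 16/20 -> 80.0
```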
   As shown in Figure 6 and Table 1, the results obtained on these images taken from the MIAS
database are promising and show the effectiveness of our fusion algorithm for mass
segmentation on mammography images.
   We can see clearly that our algorithm gives good results whatever the class of abnormality
present (CIRC, SPIC, etc.). Moreover, there are no false detections in the cases of normal
breasts. So, the proposed method has the potential to identify the presence of any masses in a
mammogram image.




[Figure 6 image panels, in order: mdb028, mdb134, mdb271, mdb025, mdb184, mdb132, mdb274, mdb136, mdb310; each pair shows (a) the original mammogram and (b) the segmentation result.]

Figure 6: Experimental results on the MIAS database. The first column (images a) contains the original
mammograms with the expert mass location. The second column (images b) shows the
classification results using our approach.



   Figure 7 illustrates a comparison of our proposed approach, on MIAS images mdb184 and
mdb028, with two unsupervised techniques: that of Kanta Maitra et al. [33], based on a divide-and-conquer
algorithm, and that of Boulehmi Hela et al. [2], based on a generalized Gaussian density.

   As seen in Figure 7, the proposed approach has the advantage of being simple and precise; it
detects the shape of the present masses exactly.

Table 1
Jaccard index of the fusion algorithm
                           Image                      Jaccard index (%)
                          mdb028                          99.87
                          mdb134                          99.77
                          mdb271                          99.86
                          mdb025                          99.41
                          mdb184                          99.74
                          mdb132                          99.83
                          mdb274                          99.48
                          mdb136                           100
                          mdb310                           100
[Figure 7 image panels: columns mdb028 and mdb184; rows (a), (b), (c) as listed in the caption.]

          Figure 7: Comparison between results from:
                    (a) Our approach, (b) [33], (c) [2].

   The outcomes of our contribution demonstrate that excellent fusion performance can be reached
by carefully selecting the fusion method. We also note that with our fusion method, the
segmentation results on the mammography images are much improved compared with other works.

4. Conclusion
   In this article, we have presented and discussed a new approach for the segmentation of
mammography images based on information fusion. We started by extracting features
using the wavelet transform. The estimated feature vector of every pixel was then sent to the
neural network classifier for primary labeling. Next, a new fusion model for improving decision-
making was used, consisting of combining the scores of each pixel within a sliding window. The
proposed fusion algorithm was tested on mammography images from the MIAS dataset.
   This research has shown that the method is very effective for the automatic detection of
abnormalities in digital mammograms.
   As a perspective, we will complete the mass detection system by classifying abnormal
mammographic images as benign or malignant.
5. References
[1] M. Wisudawati Lulu, M. Sarifuddin, P. Wibowo Eri, A. Abdullah Arman, Feature Extraction
     Optimization with Combination 2D-Discrete Wavelet Transform and Gray Level Co-Occurrence
     Matrix for Classifying Normal and Abnormal Breast Tumors, Modern Applied Science 14 (15):
     5 (2020).
[2] H. Boulehmi, H. Mahersia, K. Hamrouni, Unsupervised Masses Segmentation Technique Using
     Generalized Gaussian Density, International Journal of Image Processing and Graphics (IJIPG),
     1, (2) (2013).
[3] B. U. Fahnun, A. B. Mutiara, E. P. Wibowo, J. Arlan, A. Latief, Filtering techniques for noise
     reduction in liver ultrasound images, Phd thesis, Program Doktor Teknologi Informasi
     Universitas                Gunadarma,              (2018),            261–266.             URL:
     https://doi.org/10.1109/EIConCIT.2018.8878547
[4] M. M. Abdelsamea, M. H. Mohamed, M. Bamatraf, Automated classification of malignant and
     benign breast cancer lesions using neural networks on digitized mammograms, Cancer
     Informatics, 18, (2019), 1–3. Sage Journal. URL:
     https://doi.org/10.1177/1176935119857570. PMid:31244522, PMCid:PMC6580711
[5] D. Lestari, S. Madenda, J. Massich, A Segmentation Algorithm for Breast Lesion Based on
     Active Contour Model and Morphological Operations, Advanced Science, Engineering and
     Medicine, 7, (2015), pp. 920–924. URL:
     https://doi.org/10.1166/asem.2015.1786
[6] M. A. Al-antari, M. A. Al-masni, SU. Park et al, An Automatic Computer-Aided Diagnosis
     System for Breast Cancer in Digital Mammograms via Deep Belief Network, J. Med. Biol. Eng.
     38, (2018), 443–456. URL: https://doi.org/10.1007/s40846-017-0321-6
[7] M. Pratiwi, Alexander, J. Harefa, S. Nanda, Mammograms classification using gray-level co-
     occurrence matrix and radial basis function neural network, (2015).                        URL:
     https://doi.org/10.1016/j.procs.2015.07.340
[8] R. Biswas, A. Nath, S. Roy, Mammogram classification using gray-level co-occurrence matrix
     for diagnosis of breast cancer, (2016), 161– 166. URL: https://doi.org/10.1109/ICMETE.2016.85
[9] S. Ergin, I. Esener, T. Yuksel, A genuine glcm based feature extraction for breast tissue
     classification on mammograms, International Journal of Intelligent Systems and Applications in
     Engineering, (2016), 124– 124. URL: https://doi.org/10.18201/ijisae.269453
[10] K. Ucar, H. E. Kocer, Breast cancer classification with wavelet neural network, International
     Artificial Intelligence and Data Processing Symposium (IDAP), (2017), 1–5. URL:
     https://doi.org/10.1109/IDAP.2017.8090347
[11] A. J. Putra, Mammogram classification scheme using 2d discrete wavelet and local binary pattern
     for detection of breast cancer, Journal of Physics: Conference Series, (2018). URL:
     https://doi.org/10.1088/1742-6596/1008/1/012004
[12] M. Pawar, S. Talbar, Local entropy maximization-based image fusion for contrast enhancement
     of mammogram, Journal of King Saud University Computer and Information Sciences, (2018).
     URL: https://doi.org/10.1016/j.jksuci.2018.02.008
[13] M. Pawar, S. Talbar, A. Dudhane, Local binary patterns descriptor based on sparse curvelet
     coefficients for false-positive reduction in mammograms, Journal of Healthcare Engineering,
     (2018). URL: https://doi.org/10.1155/2018/5940436. PMid:30356422, PMCid:PMC6178513
[14] Y. Shachor, H. Greenspan, J. Goldberger, A mixture of views network with applications to the
     classification of breast microcalcifications, Computer Vision and Pattern Recognition, (2018).
[15] A. J. Bekker, M. Shalhon, H. Greenspan, J. Goldberger, Multi-view probabilistic classification of
     breast microcalcifications, in: Proceedings of the IEEE Transactions on Medical Imaging, 2016.
[16] N. Dhungel, G. Carneiro, A. P. Bradley, Fully automated classification of mammograms using
     deep residual neural networks, in: Proceedings of the 14th International Symposium on
     Biomedical Imaging (ISBI 2017), IEEE, 2017.
[17] Y. Li, H. Chen, L. Cao, J. Ma, A survey of computer-aided detection of breast cancer with
     mammography, J. Health Med. Inform. 7 (4) (2016).
[18] K. J. Geras, S. Wolfson, N. W. Y. Shen, S.G. Kim, E. Kim, L. Heacock, U. Parikh, L. Moy, K.
     Cho, High-resolution breast cancer screening with multi-view deep convolutional neural
     networks, in: Proceedings of the Computer Vision and Pattern Recognition, 2017.
[19] J. Hérault, Ch. Jutten, Réseaux neuronaux et traitement du signal, Hermès, Paris, 1994.
[20] J. F. Jodouin, Les réseaux de neurones. Principes et définitions, Hermès, Paris, 1994.
[21] J. Suckling, J. Parker, D. Dance, S. Astley, I. Hutt, C. Boggis, I. Ricketts et al, Mammographic
     Image      Analysis     Society    (MIAS)      database      v1.21    [Dataset],    2015.   URL:
     https://www.repository.cam.ac.uk/handle/1810/250394
[22] A. Grossman, J. Morlet, Decomposition of Hardy Functions into Square Integrable Wavelets of
     Constant Shape, SIAM Journal on Mathematical Analysis, 15, (4), 1984, pp. 723-736.
[23] M. H. Sahbani, K. Hamrouni, Segmentation d’images texturées par transformée en ondelettes et
     classification C-moyenne floue, Proc. 3rd Int. Conf. Sciences of Electronic, Technologies of
     Information and Telecommunications, Tunisia, March 27-31, 2005.
[24] T. Iftene, A. Safia, Comparaison Entre La Matrice De Cooccurrence Et La Transformation En
     Ondelettes Pour La Classification Texturale Des Images Hrv (Xs) De Spot, Télédétection, 4, (1),
     (2004), pp. 39–49.
[25] Ch. Anibou, M. N. Saidi, D. Aboutajdine, Classification of Textured Images Based on Discrete
     Wavelet Transform and Information Fusion, J. Inf. Process. Syst, 11, (3), (2015), pp. 421-437.
[26] S. Mallat, A theory of multiresolution signal decomposition: the wavelet representation, IEEE
     Trans. Pattern Analysis and Machine Intelligence, 11, (7), (1989), pp. 674-693.
[27] P. Scheunders, S. Livens, G. Van de Wouwer et al, Wavelet-based Texture Analysis, Int. J.
     Computer Science and Information Management, (1997).
[28] S. Arivazhagan, L. Ganesan, Texture segmentation using wavelet transform, Pattern Recognition
     Letters, 24, (16), (2003), pp. 3197–3203.
[29] A. Zitouni, F. Benkouider, F. Chouireb, M. Belkheiri, Classification of Textured Images Based
     on New Information Fusion Methods, IET Image Processing, vol.13, issue 9, (2019), pp. 1540 -
     1549.
[30] A. Dromigny-Badin, Fusion d’images par la théorie de l’évidence en vue d’applications
     médicales et industrielles, PhD. Dissertation, Institut National des Sciences Appliquées de Lyon,
     1998.
[31] A. Zitouni, Image Processing Methodology and Textures Analysis for their Segmentation, PhD.
     Dissertation, University Amar Telidji of Laghouat, Algeria, 2020.
[32] The mini-MIAS database of mammograms. URL: http://peipa.essex.ac.uk/info/mias.html.
     Accessed 20 Dec 2020.
[33] I. Kanta, S. Maitra, S. Nag, K. Bandyopadhyay, Detection of Abnormal Masses using Divide and
     Conquer Algorithm in Digital Mammogram, Int. J. Emerg. Sci., 1(4), (2011), 767-786.
[34] B. Wierenga, J. Kluytmans, Neural nets versus marketing models in time series analysis: a
     simulation studies, in: Proceedings of the 23 annual conference “European marketing
     association”, Maastricht, 1994, pp. 1139-1153.
[35] V. Venugopal, W. Baets, Neural networks and statistical techniques in marketing research: A
     conceptual comparison, Marketing Intelligence and Planning, vol. 12, no. 7, (1994), pp. 30-38.
[36] D. Shepard, The New Direct Marketing, Business One Irwin, Homewood, IL, Journal of
     Direct Marketing, vol. 6, (1992), pp. 52-53.