=Paper= {{Paper |id=Vol-2744/paper40 |storemode=property |title=Unsupervised Palm Vein Image Segmentation |pdfUrl=https://ceur-ws.org/Vol-2744/paper40.pdf |volume=Vol-2744 |authors=Ekaterina Safronova,Elena Pavelyeva }} ==Unsupervised Palm Vein Image Segmentation== https://ceur-ws.org/Vol-2744/paper40.pdf
        Unsupervised Palm Vein Image Segmentation*

     Ekaterina Safronova[0000-0001-7473-0178] and Elena Pavelyeva [0000-0002-3249-2156]

Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University,
                                     Moscow, Russia
               katyasafit@gmail.com, paveljeva@yandex.ru



       Abstract. In this article a new hybrid algorithm for palm vein image segmen-
       tation using a convolutional neural network and principal curvatures is proposed.
       After palm vein image preprocessing, the vein structure is detected using an un-
       supervised learning approach based on the W-Net architecture, which ties together
       into a single autoencoder two fully convolutional neural network architectures,
       each similar to the U-Net. Then the segmentation results are improved using the
       principal curvatures technique. Some vein points with the highest maximum prin-
       cipal curvature values are selected, and the other vein points are found by moving
       from the starting points along the direction of minimum principal curvature. To
       obtain the final vein image segmentation, the intersection of the principal curva-
       tures-based and neural network-based segmentations is taken. The proposed un-
       supervised image segmentation method is evaluated using palm vein recognition
       results obtained with multilobe differential filters. Test results on the CASIA
       multi-spectral palmprint image database show the effectiveness of the proposed
       segmentation approach.

       Keywords: Biometrics · Image Segmentation · Palm Vein Recognition · Un-
       supervised Learning · Principal Curvatures.


1      Introduction

Nowadays information security plays a crucial role in human life, and conventional
keys and passwords have turned out to be insufficiently reliable. Instead, biometric
characteristics, which uniquely identify a person within an entire population based on
intrinsic physical or behavioral traits [1], provide stable and safe data protection. Bio-
metrics recognizes individuals based on these characteristics.
   One of the most advanced and progressive personal identification technologies is
palm vein recognition. Veins are usually not visible to others, which provides a low risk
of forgery or theft. Among other important advantages, vein patterns are quite unique
to their owners, image acquisition does not require physical contact, and the system can
be made compact. Deoxygenated hemoglobin in venous blood absorbs near-infrared
light, so an infrared camera captures images containing veins.


Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License
Attribution 4.0 International (CC BY 4.0).


   A palm vein recognition algorithm consists of several steps. First, the region of in-
terest (ROI) is extracted and the ROI image is segmented into two classes: vein points
are marked in white, while the other points are marked in black. The second step, fea-
ture vector extraction, represents the main difference between existing approaches.
Since vein recognition is a relatively young field of study, some feature extraction
methods can be derived from other biometric recognition algorithms based on statisti-
cal information [2], image key points [3, 4, 5], subspace-based methods [6], phase-
based methods [7, 8], etc. Some approaches were developed specifically for vein recog-
nition [9]. The last step of the algorithm, image matching, depends on the feature vector
type. At this step the distance between palm vein images is calculated. Much recent
work has focused on employing deep convolutional neural networks (CNN) in biomet-
rics. Deep learning methods can be applied to any step of the palm vein recognition
algorithm [10, 11, 12, 13].
   In this paper we propose a hybrid approach based on unsupervised machine learning
and mathematical methods to obtain a good vein segmentation. Unsupervised image
segmentation is one of the major challenges in computer vision and has been deeply
researched. Well-known techniques for this problem include normalized cuts [14],
Markov random field-based methods [15], CNN-based approaches [16], etc. However,
the results of applying these methods may be inaccurate due to the specific features of
a technique and the lack of correct ground truth, so mathematical methods should con-
trol the CNN results. In this paper we propose a hybrid segmentation method combin-
ing two approaches: one based on a CNN and one based on principal curvatures
(Fig. 1).




            Fig. 1. The scheme of the proposed palm vein segmentation algorithm.

The rest of this paper is organized as follows. In Section 2 the palm vein image prepro-
cessing and ROI extraction algorithms are described. The vein structure extraction is
described in Section 3 where Subsections 3.1 and 3.2 present principal curvatures and
CNN-based segmentation algorithms, the hybrid approach is described in Subsection
3.3. The evaluation of the proposed unsupervised image segmentation method based on
multilobe differential filters for palm vein feature extraction is described in Section 4.
The experimental results for images from CASIA multi-spectral palmprint image data-
base [17] are given in Section 5. Finally, Section 6 concludes this paper.


2      Palm vein image preprocessing

The proposed palm vein region of interest (ROI) detection and enhancement scheme is
illustrated in Fig. 2.




                     (a)                       (b)                      (c)




                      (d)                                (e)                     (f)




                      (g)                      (h)                        (i)
Fig. 2. Illustration of palm vein ROI extraction and preprocessing: (a) original palm image, (b)
binary hand image, (c) points 𝑃1 and 𝑃2, (d) function that represents the distance between the
center of the palm and all points on the hand contour, (e) ROI, the square region of interest on
the rotated image, (f) ROI image with uniform illumination, (g) ROI after CLAHE, (h) ROI after
NLM, (i) final ROI image.

First, the hand boundary is detected by the Otsu binarization algorithm [18], and the
points between the fingers are found as the points where a local minimum of the Eu-
clidean distance between the center of the palm and the points on the hand contour is
reached. The points between the index and middle fingers, 𝑃1, and between the ring
and little fingers, 𝑃2, can be taken as landmarks for the extraction of a square ROI
(Fig. 2 d) [19]. To eliminate the influence of palm rotation, the image is rotated by the
angle θ between the line 𝑃1𝑃2 and the horizontal line. To reduce the non-uniform illu-
mination appearing in palm vein images, the background is subtracted and the histo-
gram is stretched (Fig. 2 f). To emphasize the vein structure, the contrast-limited adap-
tive histogram equalization (CLAHE) technique [20] is used (Fig. 2 g). After contrast
enhancement all image details, including noise and glares, are sharper. In order to
smooth the undesirable details, the non-local means (NLM) algorithm [21] is used to
reduce noise (Fig. 2 h). NLM also slightly smooths the veins, so CLAHE is applied
again to obtain distinguishable veins (Fig. 2 i). Fig. 2 shows the ROI of a palm vein
image and the results of the preprocessing algorithm. After preprocessing the veins
become sharper and more distinguishable [22].
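The rotation-alignment step above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the landmark coordinates p1 and p2 are hypothetical, and only the geometry (the angle θ between the line 𝑃1𝑃2 and the horizontal, and the rotation that makes it horizontal) is shown.

```python
import numpy as np

def rotation_angle(p1, p2):
    """Angle (degrees) between the line P1P2 and the horizontal axis."""
    return np.degrees(np.arctan2(p2[1] - p1[1], p2[0] - p1[0]))

def rotate_points(points, angle_deg, center):
    """Rotate 2D points by -angle about center, so the line P1P2 becomes horizontal."""
    a = np.radians(-angle_deg)
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    return (np.asarray(points) - center) @ R.T + center

# Hypothetical landmark points between the fingers
p1, p2 = np.array([40.0, 60.0]), np.array([120.0, 100.0])
theta = rotation_angle(p1, p2)
center = (p1 + p2) / 2
q1, q2 = rotate_points([p1, p2], theta, center)
# After alignment the landmark line is horizontal (q1 and q2 share a y-coordinate)
```

In practice the same rotation would be applied to the whole image (e.g. with an image-processing library) before cropping the square ROI between the landmarks.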


3      Vein structure extraction

3.1    Principal curvatures
The next step is the vein structure extraction. Consider an image as a surface in a three-
dimensional space, where the brightness value of the pixels is the z-coordinate. We are
going to extract vein structure using principal curvatures method [22].
      Let 𝐿(𝑥, 𝑦) denote the image intensity at the pixel position, 𝐺 (𝑥, 𝑦) be the image
gradient vector. Then the normalized gradient after a hard thresholding is defined as:
                       G_\gamma(x, y) = \begin{cases} \dfrac{G(x, y)}{\|G(x, y)\|}, & \|G(x, y)\| \ge \gamma, \\ 0, & \|G(x, y)\| < \gamma, \end{cases}        (1)

where γ is a threshold level; in the experiments we use γ = 4. The normalized gradient
field contains noisy components, so we smooth it with a Gaussian function H(x, y):

                           H_\gamma(x, y) = G_\gamma(x, y) * H(x, y).                          (2)

Let H_\gamma(x, y) = (h_x(x, y), h_y(x, y)). The local shape characteristics of an image at a
point (x, y) can be described by the Hessian matrix H_S(x, y):

                 H_S(x, y) = \begin{pmatrix} \partial h_x(x, y)/\partial x & \partial h_x(x, y)/\partial y \\ \partial h_y(x, y)/\partial x & \partial h_y(x, y)/\partial y \end{pmatrix}.                 (3)

Let λ1, λ2 be the eigenvalues and v1, v2 the corresponding eigenvectors of
H_S(x, y), |λ1| > |λ2|. Then the two principal directions, the directions of the maximum
and minimum curvatures, are determined by the eigenvectors v1 and v2. Consequently,
the two eigenvalues λ1, λ2 represent the principal curvatures (the curvatures along the
principal directions) [9]. Tubular-shaped regions have a maximum principal curvature
λ1 higher than other regions; the vector v1 is directed across the tubular direction, and
v2 along it [23].
   In order to catch veins of different widths, consider a set of parameters σ for the
Gaussian function: σ_0, …, σ_{n−1}, where n = 10, σ_i = σ_0 · 2^{i/4}, σ_0 = 2, i = 0, 1, …, 9. For
each value of σ the Hessian matrix is constructed; at each point the maximum positive
eigenvalue λ1 and the eigenvector v2 corresponding to λ2 are calculated. Then, at each
point of the image, the largest value of λ1 over all σ and the corresponding vector v2
are taken.
   We select the points with the highest maximum principal curvature values as points
that certainly belong to veins. The other vein points can be found [22] from the starting
points by moving along the direction of the vector v2 by |λ1|. The results of this ap-
proach are shown in Fig. 3.
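The per-pixel Hessian eigen-analysis of Eqs. (1)–(3) can be sketched as follows. This is a simplified NumPy illustration under stated assumptions: the Gaussian smoothing of Eq. (2) and the multi-scale loop over σ are omitted, finite differences stand in for the derivatives, and the input is a synthetic array with a dark vertical "vein" rather than a real palm ROI.

```python
import numpy as np

def principal_curvatures(image, gamma=4.0):
    """Maximum principal curvature per pixel from the thresholded,
    normalized gradient field (Eqs. 1 and 3, without smoothing)."""
    gy, gx = np.gradient(image.astype(float))   # np.gradient returns d/dy first
    norm = np.hypot(gx, gy)
    mask = norm >= gamma                        # hard threshold of Eq. (1)
    safe = np.maximum(norm, 1e-12)
    hx = np.where(mask, gx / safe, 0.0)
    hy = np.where(mask, gy / safe, 0.0)
    # Hessian entries: derivatives of the normalized gradient field (Eq. 3)
    dhx_dy, dhx_dx = np.gradient(hx)
    dhy_dy, dhy_dx = np.gradient(hy)
    H = np.stack([np.stack([dhx_dx, dhx_dy], axis=-1),
                  np.stack([dhy_dx, dhy_dy], axis=-1)], axis=-2)  # (..., 2, 2)
    w, _ = np.linalg.eig(H)                     # eigenvalues per pixel
    order = np.argsort(-np.abs(w), axis=-1)     # sort so |λ1| ≥ |λ2|
    lam1 = np.take_along_axis(w, order, axis=-1)[..., 0]
    return lam1.real

# Synthetic test image: a dark vertical "vein" on a bright background
xs = np.arange(64)
img = 200.0 - 120.0 * np.exp(-((xs[None, :] - 32) ** 2) / 18.0) * np.ones((64, 1))
lam1 = principal_curvatures(img)
```

In the full method, eigenvectors v2 would also be kept and the map of λ1 maximized over the scale set σ_0, …, σ_9 before vein tracking.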




Fig. 3. Vein structures extraction. First row shows ROI for different palm vein images of one
person, second row shows found vein structure.


3.2    Unsupervised convolutional neural network
As we do not have ground truth for the ROI segmentation task, an unsupervised method
is required, so an approach based on the W-Net architecture [16] is proposed. The au-
thors of W-Net present an architecture which ties two fully convolutional network
(FCN) architectures, each similar to the U-Net [24], together into a single autoencoder
(Fig. 4). The first FCN encodes an input image into a K-way soft segmentation:
U_Enc: ℝ^{H×W×3} → ℝ^{H×W×K}, where H×W denotes the size of the input image and
U_Enc(x)_{ijk} = p(x_{ij} = A_k) ∈ [0, 1] measures the probability of pixel x_{ij} belonging to
class k (A_k is the set of pixels in segment k). The second FCN, the decoder, reverses
this process, going from the segmentation layer back to a reconstructed image:
U_Dec: ℝ^{H×W×K} → ℝ^{H×W×3} (Fig. 4).
   Both the reconstruction loss of the autoencoder and a soft normalized cut loss on the
encoding layer are used during training. The reconstruction loss is standard for training
an encoder-decoder architecture and can be defined as: J_{reconstr} =
\|x - U_{Dec}(U_{Enc}(x))\|_2^2.




                                       Fig. 4. W-Net architecture.

The output of U_Enc is a normalized dense prediction. By taking the argmax, we can
obtain a K-class prediction for each pixel and compute the normalized cut loss as fol-
lows [14]:

        \mathrm{Ncut}_K(V) = \sum_{k=1}^{K} \frac{\mathrm{cut}(A_k, V - A_k)}{\mathrm{assoc}(A_k, V)} = \sum_{k=1}^{K} \frac{\sum_{u \in A_k,\, v \in V - A_k} w(u, v)}{\sum_{u \in A_k,\, t \in V} w(u, t)},          (4)


where A_k is the set of pixels in segment k, V is the set of all pixels, and w measures the
weight between two pixels.
   However, since the argmax function is non-differentiable, it is impossible to calcu-
late the corresponding gradient during backpropagation. Instead, it is proposed to use a
differentiable soft version of the Ncut loss [16]:
    J_{soft\text{-}Ncut}(V, K) = \sum_{k=1}^{K} \frac{\mathrm{cut}(A_k, V - A_k)}{\mathrm{assoc}(A_k, V)} = K - \sum_{k=1}^{K} \frac{\mathrm{assoc}(A_k, A_k)}{\mathrm{assoc}(A_k, V)}
        = K - \sum_{k=1}^{K} \frac{\sum_{u \in V,\, v \in V} w(u, v)\, p(u = A_k)\, p(v = A_k)}{\sum_{u \in V,\, t \in V} w(u, t)\, p(u = A_k)}
        = K - \sum_{k=1}^{K} \frac{\sum_{u \in V} p(u = A_k) \sum_{v \in V} w(u, v)\, p(v = A_k)}{\sum_{u \in V} p(u = A_k) \sum_{t \in V} w(u, t)},          (5)


where p(u = A_k) measures the probability of node u belonging to class k, which is di-
rectly computed by the encoder. The weight matrix W for J_{soft-Ncut} is defined as:

        w_{i,j} = e^{-\|F(i) - F(j)\|_2^2 / \sigma_I^2} \cdot \begin{cases} e^{-\|X(i) - X(j)\|_2^2 / \sigma_X^2}, & \text{if } \|X(i) - X(j)\|_2 < r, \\ 0, & \text{otherwise}, \end{cases}          (6)
where 𝑋(𝑖 ) and 𝐹 (𝑖 ) are the spatial location and pixel value of node i, respectively.
Since the size of our ROI images is 128×128, which is smaller than in the original work
[16], the depth of W-Net was decreased in our experiments, as shown in Fig. 5. We use
U_Enc: ℝ^{128×128×1} → ℝ^{128×128×K} and U_Dec: ℝ^{128×128×K} → ℝ^{128×128×1}.




                      Fig. 5. The modified FCN for W-Net architecture.

As vein images have several semantic classes, such as veins of different intensity, back-
ground, and skin wrinkles, the neural network was applied for overclustering with K =
16. The training dataset contains 120 images. Fig. 6 shows the results of this approach:
the first column shows input images (Fig. 6 a), the second one their reconstructions
(Fig. 6 b), and the third column the result of overclustering (Fig. 6 c). After unification
of the classes corresponding to veins, the obtained vein image binarization is shown in
the fourth column (Fig. 6 d).




            (a)                   (b)                   (c)                   (d)
Fig. 6. (a) Input ROI images; (b) The result of CNN reconstruction; (c) The result of overclus-
tering: each color corresponds to its own class; (d) The result of vein image segmentation using
W-Net.


3.3    Hybrid segmentation method
Both the principal curvatures-based and the CNN-based approaches produce overseg-
mentation. To find the final vein mask, the intersection of the principal curvatures-
based and CNN-based vein segmentations is taken (Fig. 7).
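The combination step above is simply a pixelwise logical AND of the two binary segmentations; a one-line sketch with toy masks (the real masks come from the two approaches of Sections 3.1 and 3.2):

```python
import numpy as np

# Toy binary masks (True = vein); real masks come from the CNN and the
# principal curvatures approach
cnn_mask = np.array([[1, 1, 0], [0, 1, 1]], dtype=bool)
pc_mask  = np.array([[1, 0, 0], [0, 1, 0]], dtype=bool)

# Keep only the pixels both methods mark as vein
final_mask = cnn_mask & pc_mask
```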




         (a)                      (b)                      (c)                             (d)
Fig. 7. (a) Input ROI images; (b) Segmentation results using CNN; (c) Segmentation results using
principal curvatures approach; (d) The hybrid palm vein image segmentation.


4      Evaluation of the image segmentation method

There are no ground truth masks for the CASIA palmprint image database [17]. To
evaluate the proposed unsupervised segmentation method, we use multilobe differen-
tial filters (MLDF) [25] that highlight vein branch points (Fig. 8) for palm vein feature
extraction, and the normalized root-mean-square error for feature map matching [22].
Mathematically the MLDFs are given as follows:

        MLDF = C_p \sum_{i=1}^{N_p} \frac{1}{\sqrt{2\pi}\,\sigma_{pi}} e^{-(X - \mu_{pi})^2 / (2\sigma_{pi}^2)} - C_n \sum_{i=1}^{N_n} \frac{1}{\sqrt{2\pi}\,\sigma_{ni}} e^{-(X - \mu_{ni})^2 / (2\sigma_{ni}^2)},          (7)


where the variables µ and σ denote the central positions and the scales of the 2D Gauss-
ian filters, respectively, N_p denotes the number of positive lobes, and N_n denotes the
number of negative lobes. The constant coefficients C_p and C_n are used to ensure a
zero sum of the MLDF. To obtain the feature maps of the vein images, we take the
convolution of the ROI images with the proposed MLDF kernels at the vein points
obtained after vein image segmentation. In order to provide matching with slight trans-
lation and rotation invariance, the normalized root-mean-square error (NRMSE) [26]
is used for feature map matching [22].
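The zero-sum construction of Eq. (7) can be sketched in one dimension. The lobe positions and scales below are illustrative choices, not the parameters used in the paper; the point is only how C_p and C_n are chosen so that the filter sums to zero.

```python
import numpy as np

def mldf_1d(x, mu_p, sig_p, mu_n, sig_n):
    """Multilobe differential filter (Eq. 7, 1D): positive minus negative
    Gaussian lobes, with C_p, C_n chosen so the kernel sums to zero."""
    gauss = lambda mu, s: np.exp(-(x - mu) ** 2 / (2 * s ** 2)) / (np.sqrt(2 * np.pi) * s)
    pos = sum(gauss(m, s) for m, s in zip(mu_p, sig_p))
    neg = sum(gauss(m, s) for m, s in zip(mu_n, sig_n))
    # Zero-sum constraint: C_p * sum(pos) == C_n * sum(neg)
    c_p, c_n = 1.0, pos.sum() / neg.sum()
    return c_p * pos - c_n * neg

x = np.linspace(-10, 10, 201)
# Two positive lobes flanking one negative lobe (hypothetical parameters)
kernel = mldf_1d(x, mu_p=[-4.0, 4.0], sig_p=[1.5, 1.5], mu_n=[0.0], sig_n=[2.0])
# kernel.sum() is (numerically) zero, so the filter ignores uniform regions
```

The 2D kernels in Fig. 8 follow the same idea with 2D Gaussians at several lobe positions.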




                Fig. 8. Multilobe differential filters for vein image analysis.

Given the intra- and inter-class vein matching results, the recognition performance is
measured by the following indicators: the distribution of genuine and impostor scores,
the False Acceptance Rate (FAR), the False Rejection Rate (FRR), and the Equal Error
Rate (EER), the cross-over error rate at which FAR equals FRR. A lower EER means
a higher accuracy of a biometric matcher.
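The EER cross-over can be computed from genuine and impostor distance scores by sweeping a decision threshold; a minimal sketch with synthetic score arrays (the real scores would be NRMSE distances between feature maps):

```python
import numpy as np

def eer(genuine, impostor):
    """Equal Error Rate: sweep the acceptance threshold and find where
    FAR (impostors accepted) crosses FRR (genuines rejected).
    Smaller distance = better match."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor <= t).mean() for t in thresholds])
    frr = np.array([(genuine > t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2

# Synthetic, well-separated score distributions (illustrative only)
rng = np.random.default_rng(0)
genuine = rng.normal(0.2, 0.05, 500)    # small distances: same palm
impostor = rng.normal(0.6, 0.05, 500)   # large distances: different palms
rate = eer(genuine, impostor)           # near zero for separated classes
```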


5      Experimental results

Experimental results using the CASIA Multi-Spectral Palmprint Image Database [17]
are presented. The database contains 7200 palm images captured from 100 different
people using a self-designed multiple spectral imaging device. Each sample contains
six palm images captured at the same time under six different electromagnetic spectra.
Each hand of each person in the database is represented by six images at one wave-
length. In our study the images from the CASIA database obtained at 850 nm are taken.
   The CNN model was implemented in the PyTorch [27] framework and trained for
100 epochs on Google Colaboratory with a batch size of 16, using the Adam optimizer
[28] with a learning rate of 0.001. In order to train W-Net we randomly selected 20
hands from the dataset and took all 6 corresponding images, so we obtained 120 images
in the training set.
   To test the proposed hybrid segmentation method, recognition results using a part of
the CASIA database are presented in Fig. 9. Recognition results after image segmenta-
tion with principal curvatures and without the W-Net based CNN (Fig. 3) are shown in
Fig. 9 a. Recognition results after image segmentation with the W-Net based CNN and
without principal curvatures (Fig. 6) are shown in Fig. 9 b. Recognition results after the
proposed hybrid segmentation (Fig. 7) are shown in Fig. 9 c.




(a)




(b)




(c)
Fig. 9. Illustrations of FAR and FRR curves with EER (the left column) and the distribution of
genuine and impostor scores (the right column) on validation set using different segmentation
methods: (a) the principal curvature approach; (b) the CNN-based approach; (c) the hybrid
method.


6      Conclusion

In this article a new palm vein image segmentation method based on principal curva-
tures and an unsupervised convolutional neural network is proposed. It is shown that
the principal curvatures-based method improves the segmentation results obtained by
the CNN. Experimental results using the CASIA multi-spectral palmprint image data-
base are presented.


References
 1. Jain, A. K., Bolle, R., Pankanti, S.: Biometrics: personal identification in networked society,
    Vol. 479. Springer Science & Business Media (2006).
 2. Rosdi, B. A., Shing, C. W., Suandi, S. A.: Finger vein recognition using local line binary
    pattern. Sensors 11(12), 11357-11371 (2011).
 3. Matsuda, Y., Miura, N., Nagasaka, A., Kiyomizu, H., Miyatake, T.: Finger-vein authentica-
    tion based on deformation-tolerant feature-point matching. Machine Vision and Applica-
    tions 27(2), 237-250 (2016).
 4. Protsenko, M. А., Pavelyeva, E. A.: Iris image key points descriptors based on phase con-
    gruency. ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spa-
    tial Information Sciences 42(2/W12), 167-171 (2019).
 5. Wang, L., Leedham, G., Cho, D. S. Y.: Minutiae feature analysis for infrared hand vein
    pattern biometrics. Pattern recognition 41(3), 920-929 (2008).
 6. Wu, J. D., Liu, C. T.: Finger-vein pattern identification using principal component analysis
    and the neural network technique. Expert Systems with Applications 38(5), 5423-5427
    (2011).
 7. Han, W. Y., Lee, J. C.: Palm vein recognition using adaptive Gabor filter. Expert Systems
    with Applications 39(18), 13225-13234 (2012).
 8. Pavelyeva, E. A.: Image processing and analysis based on the use of phase information.
    Computer Optics 42(6), 1022-1034 (2018).
 9. Choi, J. H., Song, W., Kim, T., Lee, S. R., Kim, H. T.: Finger vein extraction using gradient
    normalization and principal curvature. In: Image Processing: Machine Vision Applications
    II, pp. 725111. International Society for Optics and Photonics (2009).
10. Jha, R. R., Thapar, D., Patil, S. M., Nigam, A.: Ubsegnet: Unified biometric region of inter-
    est segmentation network. In: 2017 4th IAPR Asian Conference on Pattern Recognition
    (ACPR), pp. 923-928. IEEE (2017).
11. Lefkovits, S., Lefkovits, L., Szilágyi, L.: Applications of different CNN architectures for
    palm vein identification. In: International Conference on Modeling Decisions for Artificial
    Intelligence, pp. 295-306. Springer, Cham (2019).
12. Thapar, D., Jaswal, G., Nigam, A., Kanhangad, V.: PVSNet: Palm Vein Authentication Si-
    amese Network Trained using Triplet Loss and Adaptive Hard Mining by Learning Enforced
    Domain Specific Features. In: 2019 IEEE 5th International Conference on Identity, Security,
    and Behavior Analysis (ISBA), pp. 1-8. IEEE (2019).
13. Wang, J., Yang, K., Pan, Z., Wang, G., Li, M., Li, Y.: Minutiae-Based Weighting Aggrega-
    tion of Deep Convolutional Features for Vein Recognition. IEEE Access 6, 61640-61650
    (2018).
14. Shi, J., Malik, J.: Normalized cuts and image segmentation. IEEE Transactions on pattern
    analysis and machine intelligence 22(8), 888-905 (2000).
15. Zhang, Y., Brady, M., Smith, S.: Segmentation of brain MR images through a hidden Mar-
    kov random field model and the expectation-maximization algorithm. IEEE transactions on
    medical imaging 20(1), 45-57 (2001).
16. Xia, X., Kulis, B.: W-Net: A deep model for fully unsupervised image segmentation. arXiv
    preprint arXiv:1711.08506 (2017).
17. CASIA Multi-Spectral Palmprint Image Database, http://biometrics.idealtest.org/.
18. Otsu, N.: A threshold selection method from gray-level histograms. IEEE transactions on
    systems, man, and cybernetics 9(1), 62-66 (1979).
19. Lin, C. L., Chuang, T. C., Fan, K. C.: Palmprint verification using hierarchical decomposi-
    tion. Pattern Recognition 38(12), 2639-2652 (2005).
20. Zuiderveld, K.: Contrast limited adaptive histogram equalization. Graphics gems, 474-485
    (1994).
21. Buades, A., Coll, B., Morel, J. M.: A non-local algorithm for image denoising. In: 2005
    IEEE Computer Society Conference on Computer Vision and Pattern Recognition
    (CVPR'05), vol.2, pp. 60-65. IEEE (2005).
22. Safronova, E. I., Pavelyeva, E. A.: Palm Vein Recognition Algorithm using Multilobe Dif-
    ferential Filters. In: Proceedings of 29-th International Conference on Computer Graphics
    and Vision GraphiCon, vol.1, pp. 117-121 (2019).
23. Renault, C., Desvignes, M., Revenu, M.: 3D curves tracking and its application to cortical
    sulci detection. In: Proceedings 2000 International Conference on Image Processing (Cat.
    No. 00CH37101), vol.2, pp. 491-494. IEEE (2000).
24. Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image
    segmentation. In: International Conference on Medical image computing and computer-as-
    sisted intervention, pp. 234-241. Springer, Cham (2015).
25. Sun, Z., Tan, T.: Ordinal measures for iris recognition. IEEE Transactions on pattern analy-
    sis and machine intelligence 31(12), 2211-2226 (2008).
26. Fienup, J. R.: Invariant error metrics for image reconstruction. Applied optics 36(32), 8352-
    8357 (1997).
27. Paszke, A., et al.: Automatic differentiation in PyTorch (2017).
28. Kingma, D. P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint
    arXiv:1412.6980 (2014).