Robust Facial Feature Detection for Registration — CEUR-WS Vol-845, https://ceur-ws.org/Vol-845/paper-6.pdf
       Robust Facial Feature Detection for Registration
      Taher KHADHRAOUI                                      Fouazi BENZARTI                                    Hamid AMIRI
         LSTS Laboratory                                     LSTS Laboratory                                 LSTS Laboratory
  National School of Engineers of                     National School of Engineers of                 National School of Engineers of
      Tunis (ENIT), Tunisia                               Tunis (ENIT), Tunisia                            Tunis (ENIT), Tunisia
      khadhra.th@gmail.com                                  benzartif@yahoo.fr                           hamidlamiri@yahoo.com


Abstract—Face image analysis, in the context of computer vision, is, in general, about acquiring high-level knowledge about a facial image. Facial feature extraction is important in many face-related applications, such as face recognition, pose normalization, expression understanding and face tracking. Although there is not yet a general consensus, the fiduciary or first-tier facial features most often cited in the literature are the eyes or the eye corners, the tip of the nose and the mouth corners. Similarly, second-tier features are the eyebrows, the bridge of the nose, the tip of the chin, and so on. Registration based on feature point correspondence is one of the most popular methods for alignment. The importance of facial features stems from the fact that most face recognition algorithms in 2D and/or 3D rely on accurate feature localization. This step is critical not only directly for recognition techniques based on the features themselves, but indirectly for the global appearance-based techniques that necessitate prior image normalization. For example, in 2D face recognition, popular techniques such as eigenfaces or Fisherfaces are very sensitive to registration and scaling errors. For 3D face recognition, the widely used iterative closest point (ICP) registration technique requires scale-normalized faces and a fairly accurate initialization. In any case, both modalities require accurate and robust automatic landmarking. In this paper, we inspect shortcomings of existing approaches in the literature and propose a method for automatic landmarking of near-frontal faces. We show good detection results on several large image datasets under challenging imaging conditions.

    Keywords—face alignment; feature extraction; face registration

                        I.    INTRODUCTION

    Face detection and facial feature extraction are instrumental to the successful performance of subsequent tasks in related computer vision applications. Many high-level vision applications, such as facial feature tracking, facial modeling and animation, facial expression analysis, and face recognition, require reliable feature extraction. Facial feature points are referred to in the literature as "salient points", "anchor points", or "facial landmarks" [9]. The most frequently occurring fiduciary facial features are the four eye corners, the tip of the nose, and the two mouth corners [1].
Facial feature detection is a challenging computer vision problem due to high inter-personal changes (gender, race), intra-personal variability (pose, expression) and acquisition conditions (lighting, scale, facial accessories). To make valid, more accurate, quantitative measurements in diverse applications, automated methods for recognition need to be developed. Generally, these systems include automatic feature extractors and trackers of changes in the facial features in static images.
The paper is organized as follows. In Section 2 we review some related work in face image analysis. The proposed approach is given in Section 3. The experimental results are provided in Section 4. In Section 5, we discuss future work and draw our conclusions.

                        II.   RELATED WORK

    The analysis of faces has received substantial effort in recent years. In facial feature extraction, local features of the face, such as the nose and the eyes, are extracted and then used as input data; this has been the central step for several applications [8]. Various approaches have been proposed in the literature to extract these facial points from images or video sequences of faces, among them geometry-based, template-based, and appearance-based approaches.

A. Geometry-based approaches

    These methods extract features using geometric information such as the relative positions and sizes of the face components. Mauricio Hess and G. Martinez [2] used the SUSAN algorithm to extract facial features such as the eye corners and centers, mouth corners and center, chin and cheek border, and nose corners. Nevertheless, these techniques require a threshold which, given their sensitivity to it, may adversely affect the achieved performance.

                Figure 1. Geometry based approach

B. Template-based approaches

    This technique matches the facial components to previously designed templates using an appropriate energy functional. Genetic algorithms have been proposed for more efficient search times in template matching. The best match of a template in the facial image yields the minimum energy. Proposed by Yuille et al. [3], these algorithms require a priori template modeling, in addition to their computational costs, which clearly affects their performance.
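The minimum-energy matching described above can be illustrated with a brute-force sum-of-squared-differences (SSD) search; this is a simplified stand-in for the deformable-template energy of Yuille et al. (a rigid grayscale template, not a parameterized eye/mouth model), and the arrays below are synthetic:

```python
import numpy as np

def match_template(image, template):
    """Slide the template over the image and return the top-left position
    with minimum SSD energy (best match = minimum energy)."""
    ih, iw = image.shape
    th, tw = template.shape
    best_energy, best_pos = np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = image[y:y + th, x:x + tw]
            energy = float(np.sum((patch - template) ** 2))  # SSD energy
            if energy < best_energy:
                best_energy, best_pos = energy, (y, x)
    return best_pos, best_energy

# Synthetic example: a bright 3x3 blob embedded at row 5, column 2.
image = np.zeros((12, 12))
image[5:8, 2:5] = 1.0
template = np.ones((3, 3))
pos, energy = match_template(image, template)  # pos == (5, 2), energy == 0.0
```

In a real deformable-template system the search space is the template's shape parameters rather than just translation, and the energy combines edge, valley and peak terms.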




                Figure 2. Template based method

C. Colour segmentation techniques

    This approach makes use of skin color to isolate the face. Any non-skin-color region within the face is viewed as a candidate for the eyes and/or mouth. The performance of such techniques on facial image databases is rather limited, due to the diversity of ethnic backgrounds [4].

                Figure 3. Colour segmentation approach

D. Appearance-based approaches

    These approaches generally use texture (intensity) information only and learn the characteristics of the landmark neighborhoods projected into a suitable subspace. Methods such as principal component analysis [5], independent component analysis [6], and Gabor wavelets [7] are used to extract the feature vector. These approaches are commonly used for face recognition rather than person identification.

                   III.   PROPOSED METHOD

    The proposed method is summarized in Figure 4. It uses four main steps.

    Figure 4. Framework of facial features extraction (flowchart: Input Image → Face Detection → Facial Feature Points in 2D → Estimation of Face Pose → Generation of 3D Geometric Features)

Typical image analysis, and facial feature detection in particular, usually consists of several steps:

A. Face detection

    Face detection determines the location and size of a human face in a digital image. Only the face is detected; the rest of the image is treated as background and subtracted. A region of interest (ROI) is defined here as a facial feature candidate point or region [1]. Depending on the representation of the facial features, methods can be divided into region-based and point-based.
In some applications preprocessing may increase the accuracy of the localization. This applies mostly to cases where the acquisition conditions are insufficient, for example poor lighting, noise or inadequate camera properties. Point-based ROI detection can be performed in various ways; most facial features, for example eye corners, mouth corners and nostrils, lie on edges.
To build fully automated systems that analyze the information contained in face images, robust and efficient face detection algorithms are required. Given a single image, the goal of face detection is to identify all image regions which contain a face, regardless of its three-dimensional position, orientation, and lighting conditions. This problem is challenging because faces are nonrigid and have a high degree of variability in size, shape, color, and texture. These algorithms aim to find structural features that exist even when the pose, viewpoint, or lighting conditions vary, and then use these features to locate faces.
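The paper does not commit to a particular detector; classical cascade detectors (e.g. Viola-Jones) scan the image with rectangular features that an integral image lets them evaluate in constant time. A minimal sketch of that building block, on synthetic data:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row and left column:
    ii[y, x] = sum of img[0:y, 0:x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, y, x, h, w):
    """Sum of the h-by-w rectangle whose top-left pixel is (y, x), in O(1)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

img = np.arange(16, dtype=float).reshape(4, 4)
ii = integral_image(img)
total = rect_sum(ii, 0, 0, 4, 4)   # 120.0: sum of 0..15
inner = rect_sum(ii, 1, 1, 2, 2)   # 30.0: 5 + 6 + 9 + 10
```

A Haar-like feature is then simply the difference of two or three such rectangle sums, which is what makes exhaustive multi-scale scanning tractable.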
B. Features extraction

    In the 2D processing part of the proposed method, a number of points are detected across the facial area of the current input image.

    1) Nose holes
    Finding the nose holes in an area given by the face's geometry depends on the angle between the camera and the face. If there is no direct line of sight between the nose holes and the camera, it is obviously not possible to track them. The color of the nose holes has a significant saturation, owing to their near-black color. A threshold must be defined, and via geometry or clustering the two centers of saturation can be found.

    2) Mouth
    Detecting the middle of the mouth is not as simple as it may seem. There are many possibilities: horizontal and/or vertical gradient descent, hue, or saturation. At the moment it is implemented using the distinct hue of the lips. Light reflects on the lips, and this point is fetched by a defined hue value. In contrast to the other methods, this method is not light-independent, so the intensity and direction of the light can influence the results. A better method should be included in the future.

    3) Eyes and pupils
    Many methods can be devised to find the pupils in the area surrounding the eyes; here, a Gabor eye-corner filter is constructed to detect the corner points. It is more robust than projection methods or edge detection methods.

C. Estimation of Face Pose

    In order to correct those geometrically violated facial features that deviate far from their actual positions, a geometry constraint among the detected facial features is imposed. However, in practice, the geometry variations among all thirty facial features under changes in individuals, facial expressions and face orientations are too complicated to be modelled.
Estimation of face pose is a fundamental task in computer vision. We infer face pose from the geometric alignment of a face model and a coarse face mesh reconstruction. A homogeneous model transformation matrix T4×4 formulates the alignment (Eq. 1), mapping each model point mi (in homogeneous coordinates) to its destination:

    si = T4×4 · mi                                            (1)

The accurate triangular face mesh model is gained from a previous range scan of the person observed.
We apply a least squares error metric E (Eq. 2) that minimizes the sum of squared distances di from each point mi of the model to the plane containing the destination point Si, oriented perpendicular to its normal ni:

    E = Σi ((T4×4 · mi − Si) · ni)²                           (2)

T4×4: homogeneous model transformation matrix
mi, Si: points of the model and of the reconstruction

Geometric alignment of the mesh M3D (3) and the point set W3D (4) is realized with a variant of the Iterative Closest Point (ICP) algorithm. In the ICP procedure we determine the pose vector T (5), which represents the optimal model transformation parameters with respect to the error metric.

The ICP principle applied is as follows:
    Let the cluster W3D (4) be a set of n points pi, and M3D (3) a surface model with m vertices aj and normals bj.
    Let CP(pi, aj) be the closest vertex aj to a point pi.
        1. Let T[0] be an initial transformation estimate (5).
        2. Repeat for k = 1...kmax or until convergence:
            • Compute the set of corresponding pairs S.
            • Compute the new transformation T[k] that minimizes the error metric E (2) w.r.t. all pairs S.

Finally, we present a feature-preserving Delaunay refinement algorithm which can be used to generate the 3D geometric features.

D. Generation of 3D Geometric Features

    The 3D structure of a face is estimated using the face feature points. 3D measurements of any three points on a face can be computed based on the perspective projection of a triangle. These three feature points are derived from the eyes and the middle of the mouth.

    Figure 5. Illustration of the perspective projection of a 3D triangle (vertices P1(X1, Y1, Z1), P2(X2, Y2, Z2), P3(X3, Y3, Z3); edge lengths d1, d2, d3)
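The ICP loop can be sketched as follows. For brevity this uses the simpler point-to-point variant with the closed-form SVD (Kabsch) solution standing in for the minimization of the point-to-plane metric E of Eq. (2), and a synthetic grid of points instead of a scanned face mesh:

```python
import numpy as np

def closest_pairs(points, vertices):
    """CP step: for each point p_i, return the closest model vertex a_j."""
    d = np.linalg.norm(points[:, None, :] - vertices[None, :, :], axis=2)
    return vertices[np.argmin(d, axis=1)]

def best_rigid_transform(src, dst):
    """Closed-form least-squares rotation/translation (Kabsch, via SVD)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(src, model, k_max=20, tol=1e-10):
    """Alternate correspondence search and transform estimation until E stalls."""
    R_tot, t_tot = np.eye(3), np.zeros(3)
    cur, prev_err = src.copy(), np.inf
    for _ in range(k_max):
        dst = closest_pairs(cur, model)           # pairs S
        R, t = best_rigid_transform(cur, dst)     # new T[k]
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
        err = np.mean(np.sum((cur - dst) ** 2, axis=1))  # error metric E
        if prev_err - err < tol:
            break
        prev_err = err
    return R_tot, t_tot

# Synthetic test: rotate/translate a cloud slightly, then recover the alignment.
model = np.array([[x, y, z] for x in (-1.0, 0.0, 1.0)
                             for y in (-1.0, 0.0, 1.0)
                             for z in (-1.0, 0.0, 1.0)])
c, s = np.cos(0.05), np.sin(0.05)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
src = model @ R_true.T + np.array([0.02, -0.01, 0.03])
R_est, t_est = icp(src, model)
aligned = src @ R_est.T + t_est   # ≈ model
```

With a good initial estimate T[0], the point-to-plane metric of Eq. (2) typically converges in fewer iterations than this point-to-point simplification, which is why the paper adopts it.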
The 3D measurements of the lengths d1, d2 and d3 of all edges of the triangle are computed from Equation (7).

                    IV.   EXPERIMENTAL RESULTS

    In our preliminary experiments, we have obtained promising results for different facial expressions (see Figure 6). In order to determine a six-degrees-of-freedom pose vector, at least three points need to be found on the face which, firstly, remain visible over a range of perspectives and, secondly, do not change during expression. Moreover, these points need to be well distributed in space and must be robustly and accurately detectable in the image.
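Once the three feature points are available in 3D, the edge lengths d1, d2 and d3 of the feature triangle follow directly. A sketch under that assumption (the perspective-projection recovery of Equation (7) itself is not reproduced here, the edge labeling follows Figure 5, and the coordinates are made-up values):

```python
import numpy as np

def edge_lengths(p1, p2, p3):
    """Edge lengths of the triangle P1 P2 P3, labeled as in Figure 5:
    d1 = |P2 P3|, d2 = |P1 P3|, d3 = |P1 P2|."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    return (np.linalg.norm(p2 - p3),
            np.linalg.norm(p1 - p3),
            np.linalg.norm(p1 - p2))

# Hypothetical feature points: left eye, right eye, middle of the mouth (cm).
d1, d2, d3 = edge_lengths((-3.0, 0.0, 0.0), (3.0, 0.0, 0.0), (0.0, -5.0, 1.0))
```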


                          V.   CONCLUSION

    In this paper we have proposed an approach to facial feature detection which is efficient and fast to implement. It provides a practical solution to the recognition problem. We are currently investigating in more detail the issues of robustness to changes in head size and orientation. We are also trying to recognize the gender of a person using the same algorithm.

    Figure 6. Results of facial features extraction: (a) face image input, (b) face detection, (c) facial features extraction, (d) generation of 3D geometric features
                              REFERENCES
[1]   S. P. Khandait, P. D. Khandait, and R. C. Thool, “An Efficient
      Approach to Facial Feature Detection for Expression Recognition”,
      International Journal of Recent Trends in Engineering, Vol. 2, No. 1,
      November 2009.

[2]   Mauricio Hess and Geovanni Martinez, “Facial Feature Extraction based
      on Smallest Univalue Assimilating Nucleus (SUSAN) Algorithm”.

[3]   A. Yuille, D. Cohen, and P. Hallinan, “Facial feature extraction from
      faces using deformable templates”, Proc. IEEE Computer Soc. Conf. On
      Computer Vision and Pattern Recognition, pp. 104-109, 1989.

[4]   Yingjie Xia, Kuang Mao, and Wei Wang, “A Novel Facial Feature
      Localization and Extraction Method”, Journal of Computational
      Information Systems, 6:8 (2010), 2779-2786.
      Available at http://www.Jofcis.com

[5]   K. Susheel Kumar, Shitala Prasad, Vijay Bhaskar Semwal, and R. C.
      Tripathi, “Real Time Face Recognition Using Adaboost Improved Fast
      PCA Algorithm”, International Journal of Artificial Intelligence &
      Applications (IJAIA), Vol. 2, No. 3, July 2011.

[6]   G. Antonini, V. Popovici, and J. P. Thiran, “Independent Component
      Analysis and Support Vector Machine for Face Feature Extraction”, Int.
      Conf. on Audio- and Video-based Biometric Person Authentication, pp.
      111-118, Guildford, UK, 2003.

[7]   D. Vukadinovic and M. Pantic, “Fully Automatic Facial Feature Point
      Detection using Gabor Feature Based Boosted Classifiers”, IEEE Int.
      Conf. on Systems, Man and Cybernetics, Hawaii, October 2005.

[8]   Chai Tong Yuen, M. Rizon, Woo San San, and M. Sugisaka, “Automatic
      Detection of Face and Facial Features”, Proceedings of the 7th WSEAS
      International Conference on Signal Processing, Robotics and
      Automation (ISPRA '08).

[9]   Sanqiang Zhao and Yongsheng Gao, “Towards Robust and
      Automated Head Pose Estimation: Elastic Energy Model”, Biomedical
      Soft Computing and Human Sciences, Vol. 14, No. 1, pp. 21-26, 2009.