An Evaluation of Features Extracted from Facial Images in the Context of Accurate Age Estimation⋆

Malik Awais Khan1,∗,†, Aurelia Power2,†, Peter Corcoran3,† and Christina Thorpe4,†

1 School of Informatics and Cybersecurity, Technological University Dublin, Ireland
2 School of Informatics and Cybersecurity, Technological University Dublin, Ireland
3 College of Science and Engineering, University of Galway, Ireland
4 School of Informatics and Cybersecurity, Technological University Dublin, Ireland

Abstract

Age estimation from face images can be used with regression models in numerous ways to manage access control, improve security, and protect children online. The approaches used for predicting age—including data selection, cleaning techniques, feature extraction, algorithm choice, and hyperparameter tuning—often struggle with generalization. Furthermore, many methods neglect to explicitly assess how well extracted facial features predict age. To address the lack of racial diversity, we acquired a dataset consisting of different races from the literature. We also examined the ability of local, global and hybrid facial features to predict age. Two variants of Local Binary Pattern (LBP) were used to extract local features: one producing a full per-pixel LBP map of 16,384 features, and another binning uniform patterns into a fixed-length histogram of 10 features. We used 12, 25, 35 and 37 facial ratios and Euclidean distances between different facial landmarks for global feature extraction. All feature sets are evaluated using Pearson correlation, F-regression, and Information Gain to assess their predictive capability. Finally, the Random Forest Regressor is applied to the extracted features via different models to evaluate the Mean Absolute Error (MAE) and R-square (R2).
The results indicate that the model using the 37 geometric facial ratios and Euclidean distances outperforms all other models, achieving the lowest MAE of 1.99 years and an R2 of 0.90. Our results show that geometrical features are more effective in the context of age regression, yielding fewer features that are more relevant to accurate age estimation. This minimizes the need for feature selection and reduction techniques, which we assume can lead to increased computational costs.

Keywords: Age estimation, Geometric features, Feature evaluation, Regression

1. Introduction

The advancement of computers, global connectivity, and accessibility has opened the way for cybercrimes [1]. Increased internet access has made it easier for perpetrators to exploit children using Child Sexual Exploitation Materials (CSEM) [2]. The Online Safety and Media Regulation Bill, enacted in Ireland in December 2022, marks a significant step toward enhancing online safety, particularly in protecting children from harmful online content [3]. Consequently, age estimation is gaining importance as a means of access control to restrict entry to CSEM. One way to achieve this is by utilizing facial attributes [4]. Most of the current research on facial age estimation uses regression techniques that rely on publicly accessible datasets that are relatively biased towards a specific racial group, primarily White or Black. As a result, regression models trained on these datasets may yield biased results [5]. Additionally, existing approaches utilize various feature extraction techniques that generate large feature sets; however, they often fail to explicitly evaluate the predictive potential of these features, thereby missing the opportunity to eliminate noise and reduce computational overhead. In order to address the lack of racial diversity, we obtained a high-quality preprocessed dataset of neutral faces without background from a prior study [6].
AICS’24: 32nd Irish Conference on Artificial Intelligence and Cognitive Science, December 09–10, 2024, Dublin, Ireland
∗ Corresponding author.
† These authors contributed equally.
Malikawais.khan@tudublin.ie (M. A. Khan); aurelia.power@tudublin.ie (A. Power); peter.corcoran@universityofgalway.ie (P. Corcoran); christina.thorpe@tudublin.ie (C. Thorpe)
© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (ceur-ws.org), ISSN 1613-0073.

To examine feature predictiveness, we used two different forms of LBP to extract local features: one with full-length features and the other with fixed-length histogram bins. In addition, we used geometrical facial ratios and Euclidean distances derived from facial points to extract global features. For hybrid features, we combined the 10-bin LBP features with the 12, 25, 35, and 37 geometric ratio features, and also employed a hybrid partial AAM model in which facial point coordinates and texture features are utilized. For feature evaluation, we applied Information Gain, F-regression, and Pearson correlation to the feature sets. We applied the Random Forest Regressor as an age estimation model to our extracted features due to its robustness to overfitting, its ability to handle large non-linear data, and its speed. Regression metrics, namely Mean Absolute Error (MAE) and R-squared (R²), were utilized to evaluate the performance of the various feature sets. Our contributions are as follows:

• We extracted local, global and hybrid features and evaluated their ability to predict age using three univariate feature evaluation techniques: Pearson Correlation, F-Regression, and Information Gain.
• We further evaluated the contribution of those features to estimating age by conducting various regression experiments using a Random Forest Regressor.
This evaluation process is two-fold: first, we evaluated these feature subsets using the Gini Index based feature importances output by the Random Forest Regressor; secondly, we analysed the performance results of each model in terms of Mean Absolute Error (MAE) to understand in practical terms how accurate these feature subsets are for estimating age, as well as in terms of the coefficient of determination (R2) to understand the level of variance in age that each feature subset can explain. The rest of the paper is organized as follows: Section 2 provides an overview of related work; Section 3 details the methodology; Section 4 discusses the results; Section 5 concludes the paper, identifying contributions and limitations and providing directions for future work.

2. Related Work

The topic of estimating facial age has seen a significant amount of research. In the early days, handcrafted features from facial images were mostly used. The work presented in paper [7] is based on wrinkle features and craniofacial development. They divided the age range into three categories: babies, adults, and old. The challenge of differentiating between babies and adults based on wrinkles is a major drawback of this method, since both groups typically lack wrinkles, which could result in misclassification. Age estimation models are influenced by several factors. Image quality, lighting, expression, and posture are examples of extrinsic factors [8], whereas race, gender, lifestyle, and sickness are examples of intrinsic factors [9]. Most of the datasets are relatively homogeneous; for instance, paper [4] used and reviewed public datasets from UTKFace, Fg-Net, Morph, and All-Age-Faces. We have acquired a preprocessed dataset that has a variety of racial groups and age ranges [6]. Most of the studies preprocessed facial photos using Viola-Jones for face detection.
In paper [10], the author used Viola-Jones for face detection, a less complicated and more precise technique that is still employed in the literature. Most authors have utilized facial expression detection and blur detection to obtain good-quality, neutral face images [11, 12, 13]. All of the above methods are utilized in [6], and we acquired that dataset for our research. Different feature extraction techniques, i.e. local and global, have been utilized by many researchers. Deep learning-based local feature extraction algorithms are proposed in paper [14], but they come with greater computational costs. LBP is one of the most often used techniques for local feature extraction, obtaining local texture data [15], but it lacks strong feature predictiveness. Histogram of oriented gradients has been utilized by certain studies to extract features [16]. Gabor filters, an edge detection technique, have been utilized by some researchers, again with little analysis of feature predictiveness [17]. Also, insufficient work has been done on exploring the predictiveness of geometrical features for accurate age estimation [18]. The paper [18] highlights the use of facial ratios for age classification. It demonstrates that facial features around the mouth, nose, eyes, and eyebrows play a key role in accurately determining age. In paper [19], the author calculated six ratios based on distances between facial points, specifically aimed at distinguishing between babies and adults. The author in paper [20] worked on the importance of iris ratios for age estimation. AAM is also used by many researchers as a global feature extraction technique [5, 21]. For better performance, hybrid features (LBP features and facial ratios) were employed in the work in [22, 23], which was limited to the Fg-Net dataset. Most of the previously listed research lacks either a thorough feature predictiveness analysis or racial heterogeneity in the data.
Also, insufficient work has been done in the area of geometrical features and their predictiveness for accurate age estimation. Additionally, some studies employed deep learning, which lacks transparency and is more computationally expensive. In this paper we consider a dataset [6] from previous studies that consists of a variety of races and ages to support generalization. We have also explored the geometrical facial ratios/Euclidean distances that contribute most to the final output.

3. Methodology

3.1. Data Acquisition

A dataset from study [6], which was assembled from four distinct benchmark datasets—UTK-Face, FG-NET, MORPH, and All-Age-Faces—is used in this work. The dataset consists of 12,214 images of five different races distributed over ages from 0 to 116. The racial groups are Asian, Black, White, Indian and Others. The population is distributed as follows: 1–16 years (15.71%), 17–29 years (50.74%), 30–45 years (15.84%), and above 45 years (17.71%). In terms of ethnicity, the largest group is Asian (42.47%), followed by Black (34.58%) and White (17.24%), with smaller groups of Others (4.45%) and Indian (1.26%). Gender distribution shows a notable imbalance, with 70.16% male and 29.84% female. The dataset consists of neutral, high-quality images which were selected using filtering techniques such as face detection, blur checks, and emotion/expression detection [6].

3.2. Feature Extraction

We extracted various types of features, including texture-based local features using LBP to capture surface details of the face. Global features are extracted using geometric ratios and Euclidean distances between different facial points [18], as well as the coordinates of those facial points.

3.2.1. Local Binary Pattern (LBP)

Texture classification is performed using a visual descriptor called LBP. For LBP to function, a central pixel is chosen and compared to its eight surrounding pixels.
If a surrounding pixel is larger than or equal to the central pixel, it is given a value of 1; otherwise, it is given a value of 0. The binary values, read in a clockwise manner, are then transformed into a decimal number that represents the final value of the central pixel. A pattern with at most two transitions between 0 and 1 characterizes a uniform LBP, whereas more than two transitions indicate a non-uniform LBP. For instance, because the pattern 00011100 has only two transitions—one from 0 to 1 and another from 1 to 0—it is regarded as uniform. To extract local texture features, we employed two LBP variants, selected for their computational efficiency, robustness, and ease of implementation. The first variant is the full-features LBP, which is based on an image size of 128x128. As a result, the feature set produced has 16,384 LBP features, one for each of the 16,384 pixels. The second variant of LBP we employed utilizes a fixed-length histogram, where uniform LBP features are extracted and binned to create the feature vector. This approach significantly reduces the dimensionality of the feature set [24]. The number of bins in Local Binary Patterns (LBP) is determined by the parameters P (number of circularly symmetric neighboring points) and R (radius), both critical for LBP configuration. In our method, we used P=8 and R=1, a standard setup for texture analysis. With the uniform method and P=8, the total number of bins is P + 2: 9 bins for uniform patterns (grouped by their number of ones) and 1 bin for all non-uniform patterns, totaling 10 bins. The uniform method groups patterns with two or fewer transitions, efficiently capturing key texture features while maintaining computational simplicity [24].

3.2.2. Global Geometrical Features

We employed the facial landmarks model depicted in Figure 1 to extract global features. The eyes, nose, mouth, and eyebrows are among the facial features that contain more information and influence the final output [19].
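The LBP computation described above can be sketched in plain NumPy as follows. This is a minimal illustration of the 10-bin uniform variant (P=8, R=1) rather than the paper's exact implementation; the clockwise neighbour ordering chosen here is an assumption.

```python
import numpy as np

def uniform_lbp_histogram(image):
    """10-bin uniform LBP histogram (P=8, R=1), minimal sketch.
    Patterns with at most two 0/1 transitions are 'uniform' and are
    binned by their number of ones (bins 0..8); all other patterns
    share the single non-uniform bin 9."""
    img = np.asarray(image, dtype=np.int64)
    h, w = img.shape
    # Clockwise offsets of the 8 neighbours, starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    hist = np.zeros(10, dtype=np.float64)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            centre = img[y, x]
            # 1 if neighbour >= centre, else 0, read clockwise.
            bits = [1 if img[y + dy, x + dx] >= centre else 0
                    for dy, dx in offsets]
            # Circular count of 0/1 transitions between consecutive bits.
            transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
            if transitions <= 2:
                hist[sum(bits)] += 1   # uniform: bin by number of ones
            else:
                hist[9] += 1           # single non-uniform bin
    return hist / hist.sum()           # normalised 10-bin feature vector
```

An optimized implementation (e.g. scikit-image's `local_binary_pattern` with `method='uniform'`) yields the same P + 2 = 10 bin structure.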
The forehead region is excluded because we consider ratios and Euclidean distances rather than texture features. In our method, the following 6 basic parts and 33 points on the face were selected based on the findings from [19], as shown in Figure 1, while excluding some points, i.e. 1–3 and 13–15, which lie on the same line with negligible variation:

Figure 1: Facial Landmarks [25]

– Eyes: Points 36, 37, 40, 39, 42, 45, 37, 41, 43, 46, and 47.
– Nose: Points 27, 30, 31, 33, 35.
– Mouth: Points 48, 51, 54, 57.
– Chin: Point 8.
– Eyebrows: Points 17, 19, 21, 22, 24, 26.
– Jawline: Points 0, 4, 5, 11, 12, 16.

Based on the aforementioned considerations, we systematically computed various combinations of facial ratios and Euclidean distances as features to optimize the model’s ability to capture discriminative geometric information. Initially, we selected 12 geometric ratios and Euclidean distances, chosen for their established relevance in facial analysis [18]. To enhance the robustness of the feature set, we expanded this by adding 13 more ratios [26], bringing the total to 25. This expansion was guided by the goal of capturing additional geometric variations and improving feature diversity. Next, we incorporated 10 additional ratios [26], bringing the total to 35 features, which offered a more comprehensive representation of facial geometry. Finally, we included the left and right iris diameters, as iris features provide critical biometric information, culminating in a total of 37 features discussed in the literature [20]. Tables 1, 2, and 3 indicate the progression of feature inclusion. This structured approach ensures that each additional feature contributes meaningfully to the overall model.

3.2.3. Hybrid Partial Active Appearance Model (HPAAM)

For hybrid features, we employed a hybrid partial AAM, which has proven to be robust against variations in facial expressions and lighting conditions [27].
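To make the geometric features of Section 3.2.2 concrete, the sketch below computes a few of the Table 1 ratios from a set of 68 landmark coordinates. The specific landmark pairs chosen for each distance (e.g. points 0 and 16 for face width, 36 and 45 for interocular distance) are illustrative assumptions, not the paper's exact definitions.

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) landmark coordinates."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def basic_ratios(pts):
    """A few Table 1 style ratios from a dict mapping the 68 landmark
    indices to (x, y) coordinates. The point pairs below are
    illustrative assumptions."""
    face_width = dist(pts[0], pts[16])    # jaw corner to jaw corner
    face_height = dist(pts[27], pts[8])   # nose bridge to chin tip
    interocular = dist(pts[36], pts[45])  # outer eye corners
    return {
        "interocular/face_width": interocular / face_width,
        "nose_width/face_width": dist(pts[31], pts[35]) / face_width,
        "mouth_width/face_width": dist(pts[48], pts[54]) / face_width,
        "face_height/face_width": face_height / face_width,
    }
```

Because every feature is a ratio of two distances, the resulting values are scale-invariant, which is one reason so few of them suffice.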
This model integrates both texture and geometric features. Texture features were extracted using the full-feature LBP method. The original image size of 124x124 pixels was reduced to 100x100 pixels, as experiments conducted on both sizes showed no significant variation in the final output. This reduction was implemented to balance computational complexity while maintaining adequate resolution for accurate texture analysis [6]. The geometric features consist of the 68 facial landmarks depicted in Figure 1, represented by their x and y coordinates, resulting in a total of 136 geometric features. This process results in a total of 10,136 combined features generated by the hybrid partial AAM [6].

Table 1: The 12 initial ratios and Euclidean distances used for the first GRF model
1 Interocular Distance / Face Width
2 Nose Width / Face Width
3 Mouth Width / Face Width
4 Nose Length / Face Height
5 Eye Width Left / Interocular Distance
6 Eye Width Right / Interocular Distance
7 Chin Height / Face Height
8 Eye to Nose
9 Eye to Lip
10 Eye to Chin
11 Eye to Eyebrow
12 Face Height / Face Width

Table 2: The 13 additional ratios used for the second GRF model
1 Mouth_Height / Face_Height
2 Eye_Height_Left / Face_Height
3 Eye_Height_Right / Face_Height
4 Nose_Length / Face_Width
5 Nose_Length / Interocular_Distance
6 Nose_Base_Width / Mouth_Width
7 Left_Eyebrow_Length / Face_Width
8 Right_Eyebrow_Length / Face_Width
9 Mouth_Width / Nose_Length
10 Face_Width / Left_Eye_Width
11 Face_Width / Right_Eye_Width
12 Chin_Width / Face_Width
13 Inter-Pupillary_Distance / Face_Width

Table 3: The 10 further ratios used for the third GRF model
1 Jaw_Width / Face_Width
2 Mouth_Height / Nose_Length
3 Face_Height / Nose_Length
4 Eyebrow_Height / Face_Height
5 Jaw_Height / Face_Height
6 Nose_Base_Width / Face_Height
7 Nose_Length / Chin_Height
8 Mouth_Width / Chin_Width
9 Inter-Eyebrow_Distance / Face_Width
10 Lower_Lip_Height / Face_Height

3.2.4.
Combined 10-Bin LBP and Ratio Features Model

The second hybrid model we employed combines features from the 10-bin LBP with the 12, 25, 35 and 37 geometric ratios, with the aim of improving performance [6].

3.3. Feature Selection

We utilized three univariate feature evaluation methods—Pearson correlation, F-regression, and Information Gain—across all feature sets to assess the significance of the features. For Pearson correlation, thresholds were determined based on p-values and correlation values. In the case of F-regression, the threshold was set using p-values and F-values. For Information Gain, the information score was used to establish the threshold. A p-value threshold of 0.05 was used to determine statistical significance, meaning there is a 5% chance that the observed results could have occurred by random chance under the null hypothesis. For the 10-bin LBP, GRF-37 and their hybrid feature sets, the 0.05 p-value threshold retained all features, indicating the statistical significance of all features. For the full LBP feature set, Pearson correlation, F-regression, and Information Gain selected 262, 434, and 1,173 features, respectively, as the most significant. Similarly, for the hybrid partial AAM model, Pearson correlation, F-regression, and Information Gain selected 2,553, 278, and 118 features, respectively.

3.4. Age Estimation Models

For age estimation, we used the Random Forest Regressor due to its robustness to overfitting, its ability to handle large non-linear data, and its speed [28]. We also conducted experiments using Linear Regression, the Ordinary Least Squares model, and the Gradient Boosting model, achieving MAEs of 8 to 9.2 years. Based on these results, the Random Forest Regressor was selected as the optimal model for its superior performance. We set aside 80% of the data for training and 20% for testing. Model performance is evaluated based on Mean Absolute Error (MAE) and R-square (R2).
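The train/evaluate cycle of Section 3.4 can be sketched as follows on synthetic stand-in data; the feature matrix, target construction, and hyperparameters here are assumptions for illustration only, not the paper's actual configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Stand-in for an N x 37 matrix of geometric ratio features; age is the target.
X = rng.random((1000, 37))
age = 116 * X[:, 0] + rng.normal(0, 2, 1000)  # one informative feature plus noise

# 80% training / 20% testing split, as in the paper.
X_tr, X_te, y_tr, y_te = train_test_split(X, age, test_size=0.2, random_state=42)

model = RandomForestRegressor(n_estimators=100, random_state=42)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

mae = mean_absolute_error(y_te, pred)      # Mean Absolute Error, in years
r2 = r2_score(y_te, pred)                  # coefficient of determination
importances = model.feature_importances_   # Gini (impurity-based) importances
```

The `feature_importances_` attribute is the impurity-based (Gini) importance used later in Section 4; it is normalized to sum to 1 across features.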
The feature importances for the geometric and 10-bin LBP features are calculated on the basis of impurity-based feature importances (also known as Gini importance). The feature importance attribute of a Random Forest model provides insights into the relative significance of each feature in predicting the target variable [29]. Unlike simpler models such as linear regression, Random Forest captures complex, non-linear relationships, and its feature importance reflects these dependencies in a robust and interpretable way [30].

4. Results

We performed 17 different experiments on various feature sets and evaluated the results based on Mean Absolute Error (MAE) and R2.

4.1. Full LBP Features Regression Models

For the full LBP model, we conducted four experiments: one on the full set of 16,384 features, and three on the feature sets selected by Pearson correlation, F-regression, and Information Gain. The results of these experiments are presented in Table 4. The best results were achieved by the model built on the Information Gain feature set, which comprised 1,173 features and achieved an MAE of 6.75. The R² score suggests that the features extracted by LBP can explain only half of the variation in age, indicating that other features are needed.

Table 4: Performance Table of the FLBP Model
Model              MAE   R2
1 FLBP             6.93  0.50
2 FLBP / P.Corr    6.88  0.50
3 FLBP / F-reg     6.76  0.50
4 FLBP / Info Gain 6.75  0.50

4.2. 10-Bin LBP Features Regression Models

For feature evaluation, we applied Pearson correlation, F-regression, and Information Gain to the 10-bin LBP feature set. A p-value threshold was applied, which retained all features in the set, indicating that all features demonstrated statistical significance.
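The three univariate evaluation techniques used throughout can be sketched on synthetic data as follows. The stand-in feature matrix is an assumption, and mutual information is used here as the regression analogue of Information Gain.

```python
import numpy as np
from sklearn.feature_selection import f_regression, mutual_info_regression

rng = np.random.default_rng(0)
X = rng.random((500, 10))                 # stand-in for a 10-feature set
y = 50 * X[:, 3] + rng.normal(0, 1, 500)  # target driven by feature 3

# F-regression: one F-value and p-value per feature (univariate test).
f_vals, p_vals = f_regression(X, y)
kept = np.where(p_vals < 0.05)[0]         # the paper's 0.05 p-value threshold

# Pearson correlation per feature (f_regression tests its significance).
r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])

# Mutual information, the regression analogue of Information Gain.
mi = mutual_info_regression(X, y, random_state=0)
```

All three scores are computed one feature at a time, which is why they cannot capture the incremental value of feature subsets noted in the conclusion.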
With 10 features we obtained an MAE of 6.49, a modest improvement over the full LBP models; however, the decrease in R² suggests that the features extracted by the 10-bin LBP account for less of the variance between the independent and dependent variables, indicating that these features alone cannot sufficiently explain age variation and that additional features are required, as depicted in Table 5. Feature importance was also calculated using the Gini impurity from the Random Forest Regressor [29], as well as on the basis of the F-value from F-regression, revealing that LBP_0, the first uniform-pattern bin, is the most significant feature, contributing the greatest average reduction in impurity for age regression among all 10 features, as illustrated in Figure 2. Figure 3 illustrates the distribution of actual and predicted ages. A large number of data points lie above and below the perfect-prediction line, indicating that many predictions overestimate younger ages, while others underestimate older ages. For example, the first data point has an actual age of 0 years but a predicted age above 60 years.

Table 5: Performance Table of the 10-bin LBP Model
Model        MAE   R2
1 10-Bin LBP 6.49  0.48

Figure 2: Feature Importances of the 10-Bin LBP Model
Figure 3: Actual and Predicted Ages Distribution of the 10-Bin LBP Model

4.3. Global Geometrical Features Regression Models

For geometric features, we evaluated 12, 25, 35, and 37 ratios/Euclidean distances derived from key facial points, including the eyes, eyebrows, nose, mouth, and jawline. We also evaluated these features with the aforementioned feature evaluation techniques, which retained all the features. The results demonstrate that geometric features outperform the others, particularly with 37 ratios/Euclidean distances. The inclusion of left and right iris diameters resulted in the lowest Mean Absolute Error (MAE) of 1.99 years and an R² of 0.90, indicating that the independent variables explain 90% of the variance in age.
Table 6 presents the MAE and R² values for the various ratio models. Figure 4 illustrates the importance of the various features, with the eye-to-chin ratio/Euclidean distance emerging as the most significant, achieving the highest score of 0.223 based on Gini impurity. This result aligns with its high F-value in the F-regression analysis. Figure 5 illustrates the relationship between actual and predicted ages; although there is still overestimation for younger ages and underestimation for older ages, it is much reduced compared to the 10-bin LBP models.

Table 6: Performance Table of the Global Geometrical Models
Model          MAE   R2
1 GRF/Eucld-12 3.43  0.78
2 GRF/Eucld-25 3.24  0.80
3 GRF/Eucld-35 3.18  0.81
4 GRF/Eucld-37 1.99  0.90

Figure 4: Feature Importances of the GRF/Eucld-37 Model
Figure 5: Actual and Predicted Ages Distribution of the GRF/Eucld-37 Model

4.4. Hybrid Partial AAM Regression Models

We conducted the same experiments for the hybrid partial AAM as for the full LBP model. Table 7 shows the MAE and R² values for the various feature-selected AAM models. From the table, we can deduce that the hybrid partial AAM with all features achieved the best results, with an MAE of 5.79, compared to the feature selection techniques, and resulted in a higher R², indicating that a greater proportion of the variance in age was explained by using all features compared to the selected features. Although the feature selection methods reduced the number of features from 10,136 to 2,553, 278, and 118, they worsened the overall performance. Figure 6 shows the distribution of actual and predicted ages.

Table 7: Performance Table of the Hybrid Partial AAM Regression Models
Model             MAE   R2
1 HPAAM           5.79  0.60
2 HPAAM/P.Corr    6.70  0.49
3 HPAAM/F-reg     6.19  0.54
4 HPAAM/Info Gain 6.46  0.49

Figure 6: Actual and Predicted Ages Distribution of the HPAAM Models

4.5.
Combined 10-Bin LBP and Ratio Features Regression Models

To evaluate potential improvements, we combined 10-bin LBP features with the sets of 12, 25, 35, and 37 geometric ratio/Euclidean distance features. Based on the results in Table 8, we observe an improvement over the 10-bin LBP model. However, the performance falls short when compared to the models that use geometric features alone. The lowest MAE, 4.21, was achieved with the 10-bin LBP combined with the 37 geometric ratios/Euclidean distances, with an R2 of 0.77, indicating that 77% of the variance in age is explained. Figure 7 shows the distribution of actual and predicted ages.

Table 8: Performance Table of the 10-bin LBP Combined with Geometric Features
Model                 MAE   R2
1 10-Bin LBP + GRF-12 4.46  0.73
2 10-Bin LBP + GRF-25 4.27  0.76
3 10-Bin LBP + GRF-35 4.24  0.76
4 10-Bin LBP + GRF-37 4.21  0.77

Figure 7: Actual and Predicted Ages Distribution of the 10-bin LBP Combined with GRF

4.6. Overall Results

The overall results demonstrate that geometrical features outperform the other models in age estimation. Specifically, the full Uniform LBP and hybrid partial AAM models yield average MAEs of 6.83 and 6.29, respectively. The 10-bin histogram-based LBP model achieved an MAE of 6.49. Notably, the geometric facial ratios/Euclidean distances model produced an average MAE of 3.28 years. Figure 8 further illustrates that the geometrical features of facial components—such as the eyes, nose, mouth, eyebrows, and jawline—outperformed all other feature types. Additionally, when the two iris diameter features were added to the 35-ratio/Euclidean distance model, termed GRF-37, performance further improved, achieving an MAE of 1.99 years, as depicted in Figure 8. This indicates that the regression model may achieve accurate age estimation for certain ages using only global geometric features, eliminating the need to combine local and global models or apply feature selection techniques.
Figure 9 shows the actual and predicted ages of 20 subjects randomly selected from the test data with a minimum age difference.

Figure 8: Overall Performance

Figure 9: Actual and Predicted Ages of 20 Test Samples

5. Conclusion

This paper focuses on evaluating features extracted from facial images for accurate age estimation. We employed both local and global feature extraction techniques in our study and combined them to generate two additional hybrid feature sets. All feature sets were evaluated using three univariate feature selection methods. Our results show that the global, geometry-based features are more effective in the context of age regression, yielding fewer features that are more relevant to accurate age estimation. This minimizes the need for feature selection and reduction techniques, which we assume can lead to increased computational costs. While many state-of-the-art (SOTA) methods [31] utilize deep learning models for age estimation, they often function as black-box systems, offering limited interpretability of the features contributing to predictions. In contrast, our approach not only achieves a lower MAE compared to many SOTA methods, particularly those using machine learning for age regression, but also provides valuable insights into feature importance. This interpretability makes our method particularly suitable for applications requiring both accuracy and explainability. However, our approach evaluated features individually and did not consider the incremental value of feature subsets. Future research could incorporate subset feature selection techniques, such as forward or backward feature selection, alongside univariate methods. Furthermore, additional image enhancement techniques, such as face alignment, or more advanced regression algorithms like deep learning models, could also be applied to improve performance.
In future work, we can incorporate the forehead region when extracting wrinkle features, and explore alternative modalities beyond facial images, such as speech recognition or natural language processing techniques, for more accurate age estimation.

6. Acknowledgments

This work was conducted with the financial support of the Science Foundation Ireland Centre for Research Training in Digitally-Enhanced Reality (d-real) under Grant No. 18/CRT/6224. For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.

References

[1] R. Montasari, R. Hill, S. Parkinson, P. Peltola, A. Hosseinian-Far, A. Daneshkhah, Digital forensics: challenges and opportunities for future studies, International Journal of Organizational and Collective Intelligence (IJOCI) 10 (2020) 37–53.
[2] M. Roopak, S. Khan, S. Parkinson, R. Armitage, Comparison of deep learning classification models for facial image age estimation in digital forensic investigations, Forensic Science International: Digital Investigation 47 (2023) 301637.
[3] Government of Ireland, President Higgins signs crucial online safety and media legislation into law, 2022. URL: https://www.gov.ie/en/press-release/120ff-president-higgins-signs-crucial-online-safety-and-media-legislation-into-law/, accessed: 2024-11-27.
[4] S. K. Gupta, N. Nain, Single attribute and multi attribute facial gender and age estimation, Multimedia Tools and Applications 82 (2023) 1289–1311.
[5] T. Dhimar, K. Mistree, Feature extraction for facial age estimation: A survey, in: 2016 International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET), IEEE, 2016, pp. 2243–2248.
[6] M. Khan, et al., An evaluation of features extracted from facial images in the context of binary age classification, in: FCSIT 2024: 2024 3rd Eurasian Conference on Frontiers of Computer Science and Information Technology, 2024.
[7] Y. H. Kwon, N.
da Vitoria Lobo, Age classification from facial images, Computer Vision and Image Understanding 74 (1999) 1–21.
[8] H. Han, C. Otto, X. Liu, A. K. Jain, Demographic estimation from face images: Human vs. machine performance, IEEE Transactions on Pattern Analysis and Machine Intelligence 37 (2014) 1148–1161.
[9] G. Guo, G. Mu, Y. Fu, T. S. Huang, Human age estimation using bio-inspired features, in: 2009 IEEE Conference on Computer Vision and Pattern Recognition, IEEE, 2009, pp. 112–119.
[10] M. Wang, W. Chen, Age prediction based on a small number of facial landmarks and texture features, Technology and Health Care 29 (2021) 497–507.
[11] A. Jaiswal, A. K. Raju, S. Deb, Facial emotion detection using deep learning, in: 2020 International Conference for Emerging Technology (INCET), IEEE, 2020, pp. 1–5.
[12] R. K. Reghunathan, V. K. Ramankutty, A. Kallingal, V. Vinod, Facial expression recognition using pre-trained architectures, Engineering Proceedings 62 (2024) 22.
[13] R. Bansal, G. Raj, T. Choudhury, Blur image detection using Laplacian operator and OpenCV, in: 2016 International Conference System Modeling & Advancement in Research Trends (SMART), IEEE, 2016, pp. 63–67.
[14] Q. Kuang, Face image feature extraction based on deep learning algorithm, in: Journal of Physics: Conference Series, volume 1852, IOP Publishing, 2021, p. 032040.
[15] R. Nava, G. Cristóbal, B. Escalante-Ramírez, A comprehensive study of texture analysis based on local binary patterns, in: Optics, Photonics, and Digital Technologies for Multimedia Applications II, volume 8436, SPIE, 2012, pp. 125–136.
[16] C. Orrite, A. Ganán, G. Rogez, HOG-based decision tree for facial expression classification, in: Pattern Recognition and Image Analysis: 4th Iberian Conference, IbPRIA 2009, Póvoa de Varzim, Portugal, June 10–12, 2009, Proceedings 4, Springer, 2009, pp. 176–183.
[17] R. Rouhi, M. Amiri, B.
Irannejad, A review on feature extraction techniques in face recognition, Signal & Image Processing 3 (2012) 1.
[18] M. Z. Alom, M.-L. Piao, M. S. Islam, N. Kim, J.-H. Park, Optimized facial features-based age classification, International Journal of Computer and Information Engineering 6 (2012) 327–331.
[19] Y. H. Kwon, N. da Vitoria Lobo, Age classification from facial images, Computer Vision and Image Understanding 74 (1999) 1–21.
[20] C. E. P. Machado, M. R. P. Flores, L. N. C. Lima, R. L. R. Tinoco, A. Franco, A. C. B. Bezerra, M. P. Evison, M. A. Guimarães, A new approach for the analysis of facial growth and age estimation: Iris ratio, PLoS ONE 12 (2017) e0180330.
[21] C. N. Duong, K. G. Quach, K. Luu, H. B. Le, K. Ricanek, Fine tuning age estimation with global and local facial features, in: IEEE International Conference on Acoustics, Speech and Signal Processing, Prague, Czech Republic, 2011, pp. 2032–2035.
[22] N. Mehrabi, S. P. H. Boroujeni, Age estimation based on facial images using hybrid features and particle swarm optimization, in: 2021 11th International Conference on Computer Engineering and Knowledge (ICCKE), IEEE, 2021, pp. 412–418.
[23] V. Karimi, A. Tashk, Age and gender estimation by using hybrid facial features, in: 2012 20th Telecommunications Forum (TELFOR), IEEE, 2012, pp. 1725–1728.
[24] M. A. Rahim, M. N. Hossain, T. Wahid, M. S. Azam, Face recognition using local binary patterns (LBP), Global Journal of Computer Science and Technology 13 (2013) 1–8.
[25] G. Amato, F. Falchi, C. Gennaro, C. Vairo, A comparison of face verification with facial landmarks and deep features, in: 10th International Conference on Advances in Multimedia (MMEDIA), 2018, pp. 1–6.
[26] M. K. Alam, N. F. Mohd Noor, R. Basri, T. F. Yew, T. H. Wen, Multiracial facial golden ratio and evaluation of facial appearance, PLoS ONE 10 (2015) e0142914.
[27] T. F. Cootes, G. J. Edwards, C. J.
Taylor, Active appearance models, in: Computer Vision—ECCV’98: 5th European Conference on Computer Vision, Freiburg, Germany, June 2–6, 1998, Proceedings, Volume II 5, Springer, 1998, pp. 484–498.
[28] G. Fanelli, J. Gall, L. Van Gool, Real time head pose estimation with random regression forests, in: CVPR 2011, IEEE, 2011, pp. 617–624.
[29] S. Nembrini, I. R. König, M. N. Wright, The revival of the Gini importance?, Bioinformatics 34 (2018) 3711–3718.
[30] J. Lin, Random forest series 2.2 - model building, imbalanced dataset, feature importances & hyperparameter tuning, Machine Learning Projects, Tree Models, Pandas, 2023. Accessed: 2024-11-27.
[31] K. ELKarazle, V. Raman, P. Then, Facial age estimation using machine learning techniques: An overview, Big Data and Cognitive Computing 6 (2022) 128.