Machine Learning Methods and Tools for Facial Recognition Based on Multimodal Approach

Oleh Basystiuk, Nataliia Melnykova, Zoriana Rybchak
Lviv Polytechnic National University, Lviv, 79000, Ukraine

Abstract
The problem of computer vision, and its subproblem of face recognition, has been actively researched and developed over the past two decades, yet this work has not produced a single, unified method of face recognition. Machine learning methods have brought facial recognition systems to a new level and contributed to the further development of this field, as new methods and tools for building and training such systems became available. The purpose of the research is to study new methods and tools for recognition, evaluate their effectiveness in various systems, and improve them. The main focus of the research is to find a unified solution for face recognition systems that small companies and teams could use in their everyday work. The article concludes by discussing potential applications of the multimodal handling interface, including in the fields of healthcare, education, and entertainment. The emphasis on small companies and teams is due to the lack of intelligent facial recognition systems based on machine learning algorithms that are cost-effective for them.

Keywords
Computer vision, image recognition, multimodal data, machine learning.

1. Introduction

Today, artificial intelligence systems based on machine learning technology are developing actively, and computer vision is one of the best-known problem areas. With the help of this technology, the problems of determining the shapes and positions of objects are solved more effectively. Recognition, however, falls into a separate category: it is a complex problem in the field of artificial intelligence that combines all of the problems mentioned above [1-3].
Facial recognition is a growing area of research, with many applications in fields such as security, entertainment, and healthcare. In recent years, there has been growing interest in the use of multimodal approaches to improve the accuracy and efficiency of facial recognition systems. This article presents an overview of machine learning methods and tools for facial recognition based on a multimodal approach. We discuss the use of multiple modalities, such as images, videos, and audio, and the use of machine learning algorithms, such as deep learning, support vector machines, and decision trees, to improve the accuracy of facial recognition systems. We also discuss the use of tools such as OpenCV, TensorFlow, and Keras to develop facial recognition systems. During the last decade this field has been actively researched and many innovations have appeared, which has made the development process cheaper and increased the efficiency of artificial intelligence systems built for computer vision problems.
Developers of face recognition systems face the following problems:
• face search – finding faces in the input, regardless of whether the task is to recognize people in photographs, in a video sequence, or in any other source;
• face positioning – it is quite rare to see photographs in which a person faces the lens directly; most often the face is turned, so the task is to reposition it as if the photograph had been taken straight on;
• determination of facial features inherent only to a given person – this step can be called face recognition proper (the steps above are preparatory); here the image is analyzed to obtain unique digital parameters of the face;
• person identification – the obtained data are compared with the data already stored; if they are similar, the person's name is returned, otherwise the person is not yet known to the system.
A large number of libraries have been developed for these problems. The best way to obtain detailed information about a particular system, library, or API is to study its documentation [1, 2].
Facial recognition is a rapidly growing area of research and development, with many applications in fields such as security, entertainment, and healthcare. Traditional facial recognition systems rely on a single modality, such as images or videos, to identify individuals. However, these systems can be limited by factors such as image quality, lighting conditions, and occlusions. In recent years, there has been growing interest in the use of multimodal approaches to improve the accuracy and efficiency of facial recognition systems. A multimodal approach involves the use of multiple modalities, such as images, videos, and audio, to improve the accuracy of facial recognition. Machine learning algorithms, such as deep learning, support vector machines, and decision trees, have been used to analyze and interpret data from multiple modalities. In addition, several tools are available to develop and implement facial recognition systems, such as OpenCV, TensorFlow, and Keras.
Therefore, before starting the development of a recognition system, it is worth studying the scope of the development in detail and, on that basis, choosing a stack of technologies and tools for implementing the task. The purpose of this work is to research and improve the methods and tools used for facial recognition and to create a new-generation solution, on the basis of existing methods, that will unify recognition and greatly simplify the design and development of computer vision systems in the future [4, 5]. This article provides an overview of machine learning methods and tools for facial recognition based on a multimodal approach.
We discuss the advantages and limitations of multimodal approaches, as well as the specific machine learning methods and tools that can be used to improve the accuracy of facial recognition systems. Finally, we discuss the potential applications of multimodal facial recognition systems and future directions for research in this area.

2. Related Work

In this section, we first review current research that demonstrates the value of computer vision recognition in everyday life. Secondly, we discuss several studies in the area of computer vision. Lastly, we discuss recent research that suggests methods for computer vision in the context of face recognition supported by ontologies. We emphasize how the mentioned works relate to or differ from one another based on our research.
In works [6, 7] attention is focused on the theoretical aspects of building a stable face recognition system. Researchers [3, 5] described current methods and technologies for all stages of designing a face recognition system, since a large number of unique solutions have been developed in the field of recognition. In any system the question of performance is important, and this especially applies to face recognition systems, so the authors of [13] study the speed of recognition methods.
Works [9, 10] describe a method for identifying a person using the support vector machine (SVM), which makes it possible to significantly speed up the recognition process. The support vector machine works on the principle of dividing data into groups: homogeneous data sets are prepared (in our case, the faces of several people at different angles, under different lighting, and so on) and given to the SVM for training. Based on these images, the method builds separating boundaries with which it will recognize faces in the future (a minimal illustrative sketch is given at the end of this section).
Researchers [14-17] describe face landmark estimation algorithms used for face positioning. In addition, several tools and frameworks have been developed to support the development of facial recognition systems. For example, OpenCV is an open-source computer vision library that includes tools for face detection, recognition, and tracking. TensorFlow and Keras are machine learning frameworks that can be used to develop and implement deep learning algorithms for facial recognition.
The main tool for face positioning in face landmark estimation is a mask consisting of 68 basic landmark points. This mask is superimposed on the face, and simple transformations such as rotation, enlargement, and reduction are used to center the image. This method increases the efficiency of the system because recognition becomes easier [8] (a sketch of such alignment also follows at the end of this section).
On the basis of the conducted analysis, it was established that there are no single methods and technologies that would combine all stages of building a facial recognition system. So, when developing such a system, attention should be paid to aspects such as:
• the amount of data – the larger the set of real data, the better the algorithms can be trained;
• the types of images that will be submitted for recognition;
• the number of faces in the images (whether one or several faces must be recognized at once).
Therefore, before starting the development of a recognition system, it is worth studying the scope of the development in detail and, on that basis, choosing a stack of technologies and tools for implementing the task.
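As a minimal illustration of the SVM-based identification idea described above, the sketch below trains a scikit-learn classifier on flattened, aligned face crops. The arrays faces and labels, the linear kernel, and the 80/20 split are assumptions made only for this example and are not taken from works [9, 10]; a production system would typically use face embeddings rather than raw pixels.

```python
# Minimal sketch of SVM-based face identification (illustrative only).
# Assumes `faces` holds equally sized, aligned grayscale face crops and
# `labels` holds the corresponding person identifiers.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def train_identifier(faces: np.ndarray, labels: np.ndarray) -> SVC:
    # Flatten each H x W crop into a feature vector and scale to [0, 1].
    X = faces.reshape(len(faces), -1).astype("float32") / 255.0
    X_train, X_val, y_train, y_val = train_test_split(
        X, labels, test_size=0.2, stratify=labels, random_state=0
    )
    clf = SVC(kernel="linear", probability=True)  # linear separating boundaries
    clf.fit(X_train, y_train)
    print("validation accuracy:", clf.score(X_val, y_val))
    return clf
```

In use, clf.predict on a new flattened crop returns the identifier of the most similar known person; an unknown person would have to be handled by thresholding the predicted probabilities.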
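Similarly, the 68-point landmark mask used for positioning can be illustrated with dlib's frontal face detector and its pretrained shape predictor. The sketch assumes the model file shape_predictor_68_face_landmarks.dat has been downloaded separately, and it levels the eyes with a single rotation; this is only one possible alignment step, not the exact procedure of [14-17].

```python
# Illustrative 68-point landmark detection and eye-levelling alignment (dlib + OpenCV).
# Assumes shape_predictor_68_face_landmarks.dat is available in the working directory.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def align_face(gray):
    faces = detector(gray, 1)                     # upsample once to find smaller faces
    if len(faces) == 0:
        return None
    shape = predictor(gray, faces[0])             # 68 landmark points for the first face
    pts = np.array([(p.x, p.y) for p in shape.parts()], dtype=np.float64)
    eye_left = pts[36:42].mean(axis=0)            # eye on the left side of the image
    eye_right = pts[42:48].mean(axis=0)           # eye on the right side of the image
    angle = np.degrees(np.arctan2(eye_right[1] - eye_left[1],
                                  eye_right[0] - eye_left[0]))
    center = (float(pts[:, 0].mean()), float(pts[:, 1].mean()))
    M = cv2.getRotationMatrix2D(center, angle, 1.0)
    h, w = gray.shape[:2]
    return cv2.warpAffine(gray, M, (w, h))        # rotated so the eyes lie horizontally
```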
The purpose of this work is to research and improve the methods and tools used for facial recognition and to create a new-generation solution, on the basis of existing methods, that will unify recognition and greatly simplify the design and development of computer vision systems in the future [18, 19]. Based on the analysis performed, it was determined that there are no standard techniques or technologies that would combine all of the stages of building a recognition system. Thus, research that includes a thorough explanation of the relevant technologies and the tools used to apply them, an evaluation of each technology's benefits and drawbacks, and performance studies is pertinent.

3. Methods

Recognition is one of the issues that computer vision technologies, which are currently under active development, can help to handle more successfully. As a result of this active development, developers now have access to a huge number of libraries for computer vision problems, and in this article we have to figure out how well these libraries perform. The best way to learn in-depth details about a specific system, library, or API is to become familiar with its documentation, which is what we do for this study. The works cited above concentrate on the theoretical facets of developing a dependable face recognition system.
The methods used in this article include a review of the literature on facial recognition and multimodal approaches, as well as an overview of machine learning methods and tools that can be used to develop and implement facial recognition systems based on a multimodal approach. This article aims to provide a comprehensive overview of the current state of research and development in multimodal facial recognition, as well as the machine learning methods and tools that can be used to improve the accuracy and efficiency of these systems. Combining information from multiple modalities can enhance facial recognition performance, particularly in challenging scenarios such as low-light conditions or occlusions, and machine learning algorithms can be trained on multimodal data to learn robust representations and improve the overall system's performance.
For the development of artificial intelligence, a large number of auxiliary libraries have been created. Most of them were developed using Python, but for wider adoption they have been adapted for other popular programming languages such as Java, C#, and C++.
In general, image recognition appears to be a rather simple procedure. Only three steps are needed to obtain the desired result, as presented in Figure 1 (a minimal sketch follows the list):
1. Preprocessing: filters are applied to the image in an effort to make it better suited for recognition.
2. Feature extraction: important data are identified and extracted, and irrelevant data are discarded.
3. Classification: the data retrieved during feature extraction are analyzed and identified.
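A minimal sketch of these three steps is shown below, assuming OpenCV's bundled Haar cascade as the face detector and an already trained scikit-learn-style classifier passed in by the caller; it illustrates the pipeline only and is not the implementation evaluated later in the article.

```python
# Illustrative three-step pipeline: preprocessing -> feature (face) extraction -> classification.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def recognize(image_path, classifier):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)                     # 1. preprocessing: normalize contrast
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    labels = []
    for (x, y, w, h) in boxes:                        # 2. feature extraction: face crops
        crop = cv2.resize(gray[y:y + h, x:x + w], (64, 64))
        labels.append(classifier.predict([crop.flatten() / 255.0])[0])  # 3. classification
    return labels
```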
Figure 1: Recoface image recognition system features

The absolute majority of libraries for creating systems based on machine learning algorithms are developed in the context of particular problems, for better optimization and the most efficient solution of tasks in that area. In the field of computer vision and face recognition there are two main libraries:
• Dlib is a C++ toolkit containing machine learning algorithms and tools for developing complex C++ software that solves real-world problems. It supports development for robotics, mobile phones, embedded devices, and large, high-performance computing environments. Dlib is open source and can be used for free in any development [10].
• OpenCV (Open Source Computer Vision Library) is released under the BSD license, so it can be freely used for academic and commercial development. It has interfaces for C++, Python, and Java and supports all popular operating systems, namely Windows, Linux, macOS, iOS, and Android. OpenCV was created with high computational efficiency in mind, with a focus on the development of real-time systems [11].

4. Results

An image consists of colors and additional information, such as its dimensions, which help us understand what the image depicts. It makes sense to represent this information using a two-dimensional structure (a matrix), and that is where tensors come in handy. Tensors, which are a generalization of matrices, are often used to represent images in deep learning models. A tensor can have multiple dimensions, allowing us to capture additional information about the image beyond just color or intensity. For example, a color image is typically represented by a three-dimensional tensor whose dimensions correspond to height, width, and color channels (e.g., red, green, blue).
Pooling layers are often used after convolutional layers to reduce the dimensionality of the feature maps and extract the most salient features. This reduces computational complexity and focuses the network on the most relevant information in the image, starting from low-level features such as edges and gradually capturing higher-level features. This process is repeated across multiple layers, enabling the network to learn complex representations and extract meaningful information from the images.
Each image contains edges along the vertical and horizontal axes, and some filters use the convolution technique to detect them. Consider a grayscale image of size 6 × 6 and a filter of size 3 × 3. First, the initial 3 × 3 region of the grayscale image is multiplied element-wise by the filter matrix; the filter is then shifted one column at a time to the end of the row, and the procedure is repeated for every row. The general structure of the CNN approach to image processing is presented in Figure 2.

Figure 2: CNN flow of image processing

The CNN approach to image data processing consists of the following main steps:
• Input layer – three-dimensional matrices are used to represent the image data.
• Convolutional layer (Conv + ReLU) – the feature extractor layer; the image is represented as an output volume.
• Pooling layer – after convolution, the spatial volume of the input image is decreased.
• Fully connected layer – weights, biases, and neurons are all part of the neural network; neurons in one layer are linked to neurons in the next, and this part is trained to classify images into several categories.
• Softmax (logistic) layer – the final CNN layer, located after the fully connected layer; softmax is used for multi-class classification, while logistic (sigmoid) is used for binary classification.
• Output layer – the label is present in the output layer as one-hot encoded data.
(A compact Keras sketch of this layer stack is given at the end of this section.)
Because there are numerous possible combinations of training parameters, we began by examining performance behavior on pre-validated and generated datasets in order to tune the training parameters. To assess progress in parameter tuning, we employed performance indicators with a focus on time complexity and accuracy rate, because together they offer a global view of the performance of the proposed model. In order to accurately capture time complexity and evaluate the training process, we executed 30 rounds of the research process and performed several additional optimizations to reduce the time required. The suggested techniques were implemented with Dlib and OpenCV, and the software was run on a PC with an Intel Core i7 processor, 16 GB of memory, and an AMD Radeon R9 M370X graphics card. Initially, we split the training set into 5 nodes, each including samples from both classes, with 80% used to train the models and 20% used as the validation set.
This article considers computer vision, specifically face detection, using the OpenCV and dlib libraries. The benefits and drawbacks of each are examined, as well as whether employing them in projects to build recognition systems is practical. Before developing and comparing the libraries, the fundamental concepts and vocabulary of the field must be understood, so that the challenges facing developers of a successful software product can be better grasped. The productivity of both libraries was compared in terms of execution time and the number of iterations of the algorithms used. In addition, two straightforward face recognition applications were created on the basis of these libraries, and their performance was compared. The test set results that were obtained are provided in Table 1 (an illustrative timing sketch is also given at the end of this section).
The results of this article include an overview of the advantages and limitations of using a multimodal approach for facial recognition. By combining data from multiple modalities, such as images, videos, and audio, multimodal facial recognition systems can improve accuracy and efficiency, particularly in challenging conditions such as low light or occlusions. Finally, the article discusses potential applications of multimodal facial recognition systems, including security, healthcare, and entertainment, as well as future directions for research in this area.

Table 1
The result of the performance evaluation (recognition time, sec.)

Libs     Experiment №1   Experiment №2   Experiment №3   Experiment №4   Experiment №5
OpenCV   0.0634          0.0934          0.6121          1.2134          0.5652
dlib     0.4545          0.2163          1.6889          1.9843          1.6584

In the suggested methodology, the uploaded or received image is first searched for faces, because occasionally there may be more than one face in the picture. The face then has to be extracted and positioned correctly in order to facilitate subsequent processing. The third and final step is the identification of the person using already existing data sets, which involves the extraction of distinctive facial traits. These processing steps do not overlap with one another.
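To make the CNN layer list above concrete, a compact Keras sketch of such a stack is given below. The 64 × 64 grayscale input size, the filter counts, and the number of identity classes are illustrative assumptions, not the configuration used in the experiments.

```python
# Compact Keras sketch of the Input -> Conv+ReLU -> Pooling -> FC -> Softmax stack.
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn(num_classes=10):
    model = keras.Sequential([
        keras.Input(shape=(64, 64, 1)),                   # input layer: H x W x channels tensor
        layers.Conv2D(32, 3, activation="relu"),          # Conv + ReLU feature extractor
        layers.MaxPooling2D(2),                           # pooling: reduce spatial volume
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),             # fully connected layer
        layers.Dense(num_classes, activation="softmax"),  # softmax output over identities
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```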
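The recognition times reported in Table 1 can be obtained, in spirit, with a simple timing harness around the two libraries' face detectors. The sketch below is an assumption-laden illustration of how such measurements might be taken (a Haar cascade for OpenCV and the HOG detector for dlib), not the exact benchmark code used for the table.

```python
# Illustrative timing of face detection with OpenCV (Haar cascade) and dlib (HOG detector).
import time
import cv2
import dlib

haar = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
hog = dlib.get_frontal_face_detector()

def time_detectors(image_path):
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    timings = {}
    start = time.perf_counter()
    haar.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    timings["OpenCV, sec."] = time.perf_counter() - start
    start = time.perf_counter()
    hog(gray, 1)
    timings["dlib, sec."] = time.perf_counter() - start
    return timings
```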
Since these recognition operations (face search, extraction and positioning, and identification) require quite large computing power, an architecture with a remote API interface is proposed for further optimization and improvement of the system's speed; data are transferred to it from the front-end component or from the IoT platform, depending on the format in which the user interface operates. The proposed architecture of the face recognition system is presented in Figure 3.

Figure 3: Proposed architecture of face recognition system

In the end, it looks promising to use large data sets to create and train a machine learning model for automatic recognition. The ability to discern previously unknown patterns grows with more data, yet this can also muddle the concepts of causality and correlation. The reliability and quality of the collected data are among the main issues with big data, and this extends beyond images of the faces of the people we want to recognize and identify: big data can be biased and distorted by information about people, including their private photos, used during training. As a result, some data sets do not always reflect what occurs in reality. Consequently, the emphasis throughout system creation and subsequent research will be on safeguarding individual privacy and data, which is crucial and cannot be disregarded.
In the future, the researchers plan to focus on the ideas presented in the section above, aiming to create a stable infrastructure for developing third-party face recognition applications. They acknowledge the need to incorporate more interpretive elements into the system, such as face, pose, and context detection, to enhance the recognition of human facemarks. To improve the accuracy of human face recognition, the researchers intend to create a new categorization model that integrates these additional traits. This model will be tested against the frameworks discussed in Table 1, particularly the TensorFlow framework libraries, to evaluate the value and effectiveness of the new capabilities within the existing framework.
Overall, the results of this article highlight the potential of multimodal approaches and machine learning methods for improving the accuracy and efficiency of facial recognition systems, and provide guidance for researchers and developers looking to implement these systems.

5. Discussion

The first iteration of our research serves as a proof of concept that a computer vision system able to recognize faces from interactions with people is feasible and appropriate. This system may then be used to modify the computer vision behavior and to create a semantic repository for further study; it also makes it possible to gain experience and feedback on its current limits.
The problem of computer vision, and its subproblem of face recognition, has been actively researched and developed over the past two decades, yet this work has not produced a single, unified method of face recognition. Machine learning methods have brought facial recognition systems to a new level and contributed to the further development of this field, as new methods and tools for building and training systems became available.
One of the main advantages of multimodal facial recognition systems is their ability to improve accuracy and efficiency by combining data from multiple modalities. This can be particularly useful in challenging conditions such as low light or occlusions, where single-modality systems may struggle to accurately identify faces.
By integrating multiple modalities, multimodal systems can capture more information about a person's appearance, making it easier to identify them even in difficult conditions. However, the use of multiple modalities also presents some limitations and challenges. For example, integrating data from multiple modalities can be complex and can require significant computational resources. In addition, some modalities, such as audio or 3D facial data, may not be available in all settings, which limits the applicability of multimodal approaches.
The purpose of the research is to study new methods and tools for recognition, evaluate their effectiveness in various systems, and improve them. The main focus of the research is to find a unified solution for face recognition systems that small companies and teams could use in their everyday work. This emphasis is due to the lack of intelligent facial recognition systems based on machine learning algorithms that are cost-effective for such companies and teams.
The accuracy of the classification process is heavily influenced by the data within the computer vision training dataset. Consequently, the results obtained during production-mode recognition can vary significantly depending on the quality and relevance of the input photographs provided to the algorithms. To assess the value and effectiveness of the new capabilities integrated into the existing framework, the model will undergo testing alongside the frameworks outlined in Table 1, with a specific focus on the TensorFlow framework libraries. This evaluation will provide insight into the performance and potential advantages of the enhancements made to the framework.
Moreover, one of the results of the research could be the development of an additional hardware component for the recognition system, which would filter out attempts to deceive the system with 2D dummies and would also allow faces to be recognized in poor lighting, since the hardware reads marks on the face and compares them with the existing ones.
Overall, the discussion emphasizes the need for continued research and development in multimodal facial recognition, particularly in areas such as data integration, algorithm development, and system optimization. By addressing these challenges, researchers and developers can help to realize the full potential of multimodal facial recognition systems in a range of applications.

6. Conclusion

In the course of this research, sets of technologies for the development of an information system were compared in the form of comparative tables. The most popular systems and development tools, in which this information system of face recognition by identification will be implemented, have also been studied.
Today, computer vision is one of the most popular areas of machine learning, largely because of the wide range of its applications. Various frameworks are available that allow an application based on a recognition system to be constructed quickly. Applications based on computer vision technologies can tackle complex tasks such as scene reconstruction or motion analysis in addition to simple ones such as recognition. If a company has a staff of developers, it may use a combination of open data and open-source frameworks, or it may rely only on hosted APIs if computer vision is not its main source of revenue.
The structure of the system, the process of its interaction with third-party software, and its final appearance are presented. The following information about the information system was also presented:
• general information about the information system;
• the classes of tasks to be solved;
• a description of the main characteristics and features of the program;
• information about functional limitations of the application.
For the last point, a control example of the system's operation is presented, with images showing all stages of the program's operation and examples of error handling in case of incorrect data input by the user.
The capabilities created in this study are the focus of our future research, starting with the ideas in Section 3, to provide a stable infrastructure for creating third-party face recognition apps. We are aware of the necessity for our system to incorporate more interpretive elements that can support the recognition of other human facemarks, such as face, pose, and context detection. In order to improve the accuracy of human face recognition, a new categorization model will be created that incorporates all these traits. To determine the value these new capabilities contribute to our current framework, we will test it against the frameworks previously covered in Section 2, particularly the TensorFlow framework libraries.
The article concludes by emphasizing the advantages and limitations of using a multimodal approach for facial recognition. It highlights that multimodal facial recognition systems, by combining data from multiple modalities, have the potential to enhance accuracy and efficiency, especially in challenging conditions. The integration of machine learning methods and multimodal approaches can contribute to improving the performance of facial recognition systems and expanding their applications. As research and development in this field progress, it is expected that facial recognition will find even broader applications. The researchers foresee the transformation of the proposed architecture into a full-scale system that can be deployed in the market. Further research in this project could therefore focus on the following areas:
1. Exploring various feature selection methods and conducting a comprehensive evaluation to identify the most appropriate one for our dataset and specific conditions.
2. Developing machine learning models that can effectively support the implementation of a queue system.
3. Expanding our system to incorporate additional bio-based sensing submodules such as voice recognition, touch IDs, and more.
In conclusion, this article provides an overview of the advantages and limitations of using a multimodal approach for facial recognition, as well as the machine learning methods and tools that can be used to develop and implement such systems. By combining data from multiple modalities, multimodal facial recognition systems have the potential to improve accuracy and efficiency, particularly in challenging conditions. Overall, the use of machine learning methods and multimodal approaches can help to improve the accuracy and efficiency of facial recognition systems, while also expanding the range of applications for these systems. As developments in this area continue, we can expect to see further advances in the use of facial recognition for a wide range of purposes.

7. References

[1] K. Bahmani, S. Schuckers. Face recognition in children: A longitudinal study. In 10th IEEE International Workshop on Biometrics and Forensics (IWBF 2022), 2022, pp. 1-8.
[2] S. Singh Bhadauriya, S. Kushwaha, S. Meena. Real-Time Face Detection and Face Recognition: Study of Approaches, Proceedings of the 3rd International Conference on Recent Trends in Machine Learning, IoT, Smart Cities and Applications, ICMISC 2022, 2023. doi: 10.1007/978-981-19-6088-8_27.
[3] Y. Ivanov, D. Peleshko, et al. Adaptive moving object segmentation algorithms in cluttered environments, The Experience of Designing and Application of CAD Systems in Microelectronics, 2015, pp. 97-99.
[4] A. Rosebrock. Facial landmarks with dlib, OpenCV, and Python. URL: https://www.pyimagesearch.com/2017/04/03/facial-landmarks-dlib-opencv-python/.
[5] P. Zdebskyi, V. Lytvyn, Y. Burov, et al. Intelligent system for semantically similar sentences identification and generation based on machine learning methods, CEUR Workshop Proceedings, 2020, pp. 317-346.
[6] Y. Medvedev, F. Shadmand, N. Gonçalves. Young Labeled Faces in the Wild (YLFW): A Dataset for Children Faces Recognition. doi: 10.48550/arXiv.2301.05776.
[7] N. Shakhovska, N. Boyko, P. Pukach. The Information Model of Cloud Data Warehouses, International Conference on Computer Science and Information Technologies, CSIT 2018, September 11-14, Lviv, Ukraine, 2019, pp. 182-191.
[8] S. Chowdhury and J. Sil, "Face Recognition from Non-Frontal Images Using Deep Neural Network," in 2017 Ninth International Conference on Advances in Pattern Recognition (ICAPR), 2017, pp. 1-6.
[9] Z. Rybchak, O. Basystiuk. Analysis of computer vision and image analysis technics, ECONTECHMOD: an international quarterly journal on economics of technology and modelling processes, Lublin, Poland, 2017, pp. 79-84.
[10] OpenCV: OpenCV Tutorials, 2023. URL: https://docs.opencv.org/2.4/doc/tutorials/tutorials.html.
[11] Dlib Python API Tutorials, 2023. URL: http://dlib.net/python/index.html.
[12] M. Havryliuk, I. Dumyn, O. Vovk. Extraction of Structural Elements of the Text Using Pragmatic Features for the Nomenclature of Cases Verification. In: Hu, Z., Wang, Y., He, M. (eds) Advances in Intelligent Systems, Computer Science and Digital Economics IV. CSDEIS 2022. Lecture Notes on Data Engineering and Communications Technologies, 2023, vol. 158. Springer, Cham. https://doi.org/10.1007/978-3-031-24475-9_57.
[13] Z. Khan and Y. Fu. One label, one billion faces: Usage and consistency of racial categories in computer vision. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21, pp. 587-597, New York, NY, USA, 2021.
[14] A. Kovalchuk, N. Lotoshynska, et al. An approach towards an efficient encryption-decryption of grayscale and color images, Procedia Computer Science, Vol. 155, 2019, pp. 630-635.
[15] D. Peleshko, M. Peleshko, N. Kustra, I. Izonin. Analysis of invariant moments in tasks of image processing, 11th International Conference The Experience of Designing and Application of CAD Systems in Microelectronics (CADSM), 2011, pp. 263-264.
[16] N. Shakhovska. The method of Big data processing, XII International Conference on Computer Science and Information Technologies (CSIT), Lviv, Ukraine, 2017, pp. 122-125.
[17] S. Makara, L. Chyrun, Y. Burov, et al. An intelligent system for generating end-user symptom recommendations based on machine learning technology, CEUR Workshop Proceedings, 2020, 2604, pp. 844-883.
[18] V. Lytvyn, Z. Rybchak. Design of Airport Service Automation System, International Scientific and Technical Conference on Computer Sciences and Information Technologies, 2015, pp. 195-197.
[19] Y. Tolstyak, M. Havryliuk. An Assessment of the Transplant's Survival Level for Recipients after Kidney Transplantations using Cox Proportional-Hazards Model, Proceedings of the 5th International Conference on Informatics & Data-Driven Medicine, Lyon, France, November 18-20, CEUR-WS.org, 2022, pp. 260-265.
[20] Shah, K.; Bhandare, D.; Bhirud, S. Face recognition-based automated attendance system. In International Conference on Innovative Computing and Communications; Springer: Berlin/Heidelberg, Germany, 2021; pp. 945-952.
[21] Al-Fahsi, R.D.H.; Pardosi, A.P.J.; Winanta, K.A.; Kirana, T.; Suryani, O.F.; Ardiyanto, I. Laboratory attendance dashboard website based on face recognition system. In Proceedings of the 2019 International Electronics Symposium (IES), Surabaya, Indonesia, 27-28 September 2019; pp. 19-23.
[22] Ali, N.S.; Alhilali, A.H.; Rjeib, H.D.; Alsharqi, H.; Al-Sadawi, B. Automated attendance management systems: Systematic literature review. Int. J. Technol. Enhanc. Learn. 2022, 14, 37-65.
[23] Mustakim, N.; Hossain, N.; Rahman, M.M.; Islam, N.; Sayem, Z.H.; Mamun, M.A.Z. Face Recognition System Based on Raspberry Pi Platform. In Proceedings of the 2019 1st International Conference on Advances in Science, Engineering and Robotics Technology (ICASERT), Dhaka, Bangladesh, 3-5 May 2019; pp. 1-4.
[24] Boyko, N.; Sokil, N. Building computer vision systems using machine training algorithms. ECONTECHMOD 2017, vol. 6, no. 2, pp. 15-20.
[25] Veres, O.; Kis, Ya.; Kugivchak, V.; Rishniak, I. Development of a Reverse-search System of Similar or Identical Images. ECONTECHMOD 2018, vol. 7, no. 2, pp. 23-30.
[26] Shestakevych, T.; Pasichnyk, V.; Kunanets, N.; Medykovskyy, M.; Antonyuk, N. The content web-accessibility of information and technology support in a complex system of educational and social inclusion. In 2018 IEEE 13th International Scientific and Technical Conference on Computer Sciences and Information Technologies (CSIT 2018) - Proceedings, 1, art. no. 8526691, 2018, pp. 27-31.
[27] Trivedi, A.; Tripathi, C.M.; Perwej, Y.; Srivastava, A.K.; Kulshrestha, N. Face Recognition Based Automated Attendance Management System. Int. J. Sci. Res. Sci. Technol. 2022, 9, 261-268.
[28] Ambre, S.; Masurekar, M.; Gaikwad, S. Face recognition using raspberry pi. In Modern Approaches in Machine Learning and Cognitive Science: A Walkthrough; Springer: Berlin/Heidelberg, Germany, 2020; pp. 1-11.
[29] Zhang, Z.; Lai, C.; Liu, H.; Li, Y.-F. Infrared facial expression recognition via Gaussian-based label distribution learning in the dark illumination environment for human emotion detection. Neurocomputing 2020, 409, 341-350.
[30] Pattnaik, P.; Mohanty, K.K. AI-based techniques for real-time face recognition-based attendance system - A comparative study. In Proceedings of the 2020 4th International Conference on Electronics, Communication and Aerospace Technology (ICECA), Coimbatore, India, 5-7 November 2020; pp. 1034-1039.
[31] Venugopal, A.; Rahul R Krishna; Rahul Varma U. Facial Recognition System for Automatic Attendance Tracking Using an Ensemble of Deep-Learning Techniques. In 2021 12th International Conference on Computing Communication and Networking Technologies (ICCCNT), 2021; pp. 1-6.
[32] Suwarno, S.; Kevin, K. Analysis of face recognition algorithm: Dlib and OpenCV. J. Inform. Telecommun. Eng. 2020, 4, 173-184.
[33] Liu, T.; Wang, J.; Yang, B.; Wang, X. Facial expression recognition method with multi-label distribution learning for non-verbal behavior understanding in the classroom. Infrared Phys. Technol. 2021, 112, 103594.
[34] Pandey, S.; Chouhan, V.; Mahapatra, R.P.; Chhettri, D.; Sharma, H. Real-Time Safety and Surveillance System Using Facial Recognition Mechanism. In Intelligent Computing and Applications; Springer: Berlin/Heidelberg, Germany, 2021; pp. 497-506.
[35] Tammisetti, A.K.; Nalamalapu, K.S.; Nagella, S.; Shaik, K.; Shaik, K.A. Deep Residual Learning based Attendance Monitoring System. In Proceedings of the 2022 8th International Conference on Advanced Computing and Communication Systems (ICACCS), Coimbatore, India, 25-26 March 2022; pp. 1089-1093.
[36] Shrestha, A. Face Recognition Student Attendance System. 2021. URL: https://www.theseus.fi/handle/10024/503517.