Exploring Convolutional Neural Network in Computer Vision-based Image Classification

Parnit Kaur 1, Sunil K. Singh 2, Inderpreet Singh 3* and Sudhakar Kumar 4
1,2,3,4 Chandigarh College of Engineering and Technology, Chandigarh, India

Abstract
AI comprises a large number of domains, of which machine learning is the most prominent. Deep learning, a subset of machine learning, is now overtaking the earlier prediction techniques and algorithms. Computer vision is one of the domains that has benefited most from deep learning techniques, and one of its most widely used industrial applications is image classification and recognition. Deep learning methods have been shown to give state-of-the-art results in image classification tasks, and Convolutional Neural Networks (CNNs) are the most commonly implemented deep learning algorithms for such problems. This paper gives a detailed survey of deep learning-based CNNs in image classification problems under the domain of computer vision. The paper also introduces the underlying concepts of AI, machine learning, deep learning, CNNs, and image classification.

Keywords
Deep Learning, Convolutional Neural Networks, Image Classification, Computer Vision

1. Introduction
Artificial intelligence (AI) is a broad area of computer science concerned with building machines capable of accomplishing tasks that normally require human insight [1]. AI connects with different branches of science through a wide range of methodologies [2-5]; advances in AI and deep learning are producing a paradigm shift in almost every sector of the technology industry. AI builds upon the idea that human intelligence can be characterized in such a way that a machine can mimic it and perform tasks, from the simplest to those that are considerably more complex [6]. Image recognition is one of the tasks in which deep neural networks (DNNs) excel [6]. Neural networks are computing systems designed to recognize relational patterns in image data. The main architecture used for image classification is the Convolutional Neural Network (CNN) [7]. CNNs comprise several layers of small neuron collections, each of which sees small patches of an image. The outputs of the collections in a layer partially overlap so as to build a representation of the entire image. The next layer then repeats this process on the new image representation, allowing the network to learn about the composition of the picture.
In this paper, a detailed discussion of the application of deep learning-based convolutional neural networks in the domain of image classification is presented. The paper describes various image classification applications and the state-of-the-art CNNs utilized in these applications. A study of the introductory AI components, i.e., machine learning and deep learning, along with computer vision, is also presented to establish the background of CNNs in image classification.

International Conference on Smart Systems and Advanced Computing (Syscom-2021), December 25–26, 2021
EMAIL: parnitkaur.pk@gmail.com (A. 1); sksingh@ccet.ac.in (A. 2); inderpreet221099@gmail.com (A. 3); sudhakar@ccet.ac.in (A. 4)
ORCID: 0000-0001-6034-5316 (A. 1); 0000-0003-4876-7190 (A. 2); 0000-0002-3834-025X (A. 3); 0000-0001-7928-4234 (A. 4)
© 2020 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org)

The rest of the paper is structured as follows. Section 2 outlines the background of the AI components (machine learning and deep learning) and the domain of computer vision. A detailed introduction to image classification is given in Section 3. Section 4 describes Convolutional Neural Networks and their well-known architecture. Section 5 demonstrates the usage of CNNs in well-known image classification problems, including traffic sign classification and plant species recognition. Finally, the challenges and future scope of image classification and CNNs are described in Section 6.

2. Background
Specific applications of AI include natural language processing, expert systems, speech recognition, and computer vision. ML is one of the most prominent subsets of AI; it refers to an AI system that can learn on its own on the basis of an algorithm. DL is an ML subset that is usually applied to massive data collections.

2.1. Deep Learning
Deep learning (DL), also known as deep neural learning or deep neural networks, is a modern and rapidly growing field of ML. It is a subfield of machine learning whose algorithms are inspired by artificial neural networks. DL is an AI function that tries to imitate the workings of the human brain in processing data and creating patterns for use in decision making. Deep learning models have several layers, which include the input layer, hidden dense layers, and finally the output layer. A basic deep learning model is shown in Fig. 1.

Figure 1: Deep learning model

DL is an efficient, supervised, time- and cost-effective machine learning strategy. DL is not a single restricted learning approach; it follows various procedures and topologies that may or may not be applicable to an immense spectrum of complex problems. The method learns representative and discriminative features in a hierarchical manner [9]. DL methodologies have made important advances, with notable accomplishments in a wide variety of applications, including useful security tools. DL is considered the best choice for discovering and analyzing complex structures in high-dimensional data by employing the backpropagation algorithm [10].

2.2. Computer vision
Computer vision is an interdisciplinary scientific field that deals with how computers can gain high-level understanding from digital images or videos. It seeks to understand and automate tasks that the human visual system can do [11]. Computer vision tasks include methods for processing, analyzing, and understanding digital images, and for extracting high-dimensional data from the real world in order to produce numerical or symbolic information, for example in the form of decisions [12]. Understanding in this context means the transformation of visual images into descriptions of the world that make sense to thought processes and can elicit appropriate action [13]. Computer vision allows us to check, piece by piece, that each part is in its place or, at the end of the process, that the final assembly is correct. This application is valuable for the assembly of machinery, hardware, electronic boards, or sub-assemblies with a great deal of complexity.
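As a concrete illustration of the layered model described in Section 2.1 (Fig. 1), the following minimal sketch builds a small fully connected network with an input layer, hidden dense layers, and an output layer. It is an illustrative example only, not code from any of the surveyed works; the tf.keras API, the layer sizes, the ten-class output, and the random toy data are all assumptions made for demonstration.

```python
# Minimal sketch (assumed tf.keras API) of the layered model in Fig. 1:
# input layer -> hidden dense layers -> output layer.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(64,)),               # input layer: 64 features (assumed size)
    layers.Dense(128, activation="relu"),    # hidden dense layer
    layers.Dense(64, activation="relu"),     # second hidden dense layer
    layers.Dense(10, activation="softmax"),  # output layer: 10 classes (assumed)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy data only to show the training call; real applications use domain data.
x = np.random.rand(256, 64).astype("float32")
y = np.random.randint(0, 10, size=(256,))
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
```

In such a model every dense layer is fully connected to the previous one; the convolutional architectures discussed in Section 4 replace the early dense layers with convolution and pooling operations that exploit the spatial structure of images.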
There are numerous applications of computer vision; some of the major applications under consideration are as follows:
● Object Detection: Object detection is a computer vision technique that allows us to identify and locate objects in an image or video. With this kind of identification and localization, object detection can be used to count objects in a scene and to determine and track their exact locations, all while accurately labeling them [14].
● Gesture Recognition: Gesture recognition is a type of perceptual computing user interface that allows computers to capture and interpret human gestures as commands [15]. In general terms, gesture recognition is the ability of a computer to understand gestures and execute commands based on those gestures.
● Image Classification: Image classification is one of the computer vision applications with very high industrial importance [16]. It is the essential area in which deep neural networks play the main part in image analysis. An image classification system accepts the given input images and produces an output classification, for example to identify whether a disease is present [17]. The classification process aims to categorize every pixel in a digital image into one of several land cover classes, or "themes". This categorized data can then be used to produce thematic maps of the land cover present in an image. Typically, multispectral data are used to perform the classification, and the spectral pattern present within the data for each pixel is used as the numerical basis for the categorization.

3. Image Classification
Image classification is the task in which a computer examines a picture and identifies the 'class' the picture falls under; in other words, it is the process of the computer analyzing the image and telling you what is in it. Early image classification relied on raw pixel data, meaning that computers would break images down into individual pixels. The problem is that two photos of the same object can look entirely different: they can have different backgrounds, angles, poses, and so on. This made it a considerable challenge for computers to correctly 'see' and categorize images. Image classification has many uses and tremendous potential as it grows in reliability. Self-driving vehicles use image classification to recognize what is around them, for example trees, people, and traffic signals, and image classification can likewise help in healthcare. The objective of image classification is to identify and portray, as a unique gray level (or color), the features occurring in an image in terms of the objects these features represent on the ground. Image classification is one of the major components of digital image analysis [18]. Deep learning combined with IoT devices is now the most widely used practical implementation of classification models [19].

4. Convolutional Neural Networks (CNN)
A CNN consists of several engineered layers of artificial neurons. The first layer of a CNN normally detects elementary features such as horizontal, vertical, and diagonal edges. The output of the first layer is fed as input to the next layer, which extracts more complex attributes such as corners and combinations of edges [20].
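To make this layer stacking concrete, the following minimal sketch shows the typical convolution, pooling, and fully connected building blocks of a CNN as they are discussed in Section 4. It is an illustrative example only, not one of the architectures cited in this survey; the tf.keras API, the 32x32 RGB input, and the filter counts and kernel sizes are assumptions chosen for demonstration.

```python
# Illustrative CNN sketch: convolution layers extract local features such as
# edges, pooling (subsampling) reduces spatial resolution, and fully connected
# layers produce the final classification.
from tensorflow import keras
from tensorflow.keras import layers

cnn = keras.Sequential([
    layers.Input(shape=(32, 32, 3)),               # 32x32 RGB image (assumed)
    layers.Conv2D(16, (3, 3), activation="relu"),  # early layer: edge-like features
    layers.MaxPooling2D((2, 2)),                   # subsampling / smoothing
    layers.Conv2D(32, (3, 3), activation="relu"),  # deeper layer: corner-like features
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),           # fully connected layer
    layers.Dense(10, activation="softmax"),        # e.g., 10 output classes (assumed)
])
cnn.compile(optimizer="adam",
            loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
cnn.summary()
```

Stacking two convolution/pooling stages mirrors the hierarchy described above: the first stage responds to simple edges, while the second combines them into more complex patterns before the fully connected layers perform the classification.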
The convolutional neural network is a class of deep learning methods that has become dominant in various computer vision tasks and is attracting interest across a variety of domains, including radiology [21]. A CNN is composed of several building blocks, such as convolution layers, pooling layers, and fully connected layers, and is designed to automatically learn spatial hierarchies of features through a backpropagation algorithm. Familiarity with the concepts, advantages, and limitations of convolutional neural networks is essential to leverage their potential, for example to improve radiologist performance [22]. Fig. 2 demonstrates the working of a general CNN model.

Figure 2: Convolutional Neural Networks model

Convolution is the step in which the network tries to label the input signal by applying what it has learned in the past; the resulting output signal is passed on to the next layer [23]. The next step is subsampling: inputs from the convolution layer can be "smoothened" to reduce the sensitivity of the filters to noise and small variations [24]. This smoothing process is called subsampling and can be achieved by taking averages or taking the maximum over a sample of the signal. The activation layer controls how the signal flows from one layer to the next, emulating how neurons are fired in the brain [25]. Output signals that are strongly associated with past references activate further neurons, enabling signals to be propagated more efficiently for recognition [26-27].

5. Image classification applications

5.1. Traffic sign classification
Traffic sign recognition technology is a basic component of driver assistance systems, as it perceives the current driving environment. With the help of a traffic sign recognition system, drivers do not need to constantly focus on road signs, and significant pressure and fatigue can be reduced. Since the convolutional neural network (CNN) has shown exceptional classification performance and can automatically learn features from large datasets, it has been widely used in traffic sign recognition systems.

Table 1
Traffic sign classification performance metric analysis for different methods and models

Authors | Methods | Performance metric
Lu, Y., Lu, J., Zhang, S., & Hall, P. (2018). Traffic signal detection and classification in street views using an attention model [28]. | CNN, attention model | 90%
Chen, L., Zhao, G., Zhou, J., & Kuang, L. (2017). Real-Time Traffic Sign Classification Using Combined Convolutional Neural Networks [29]. | CNN, MLP | 98.26%
Saini, S., Nikhil, S., Konda, K. R., Bharadwaj, H. S., & Ganeshan, N. (2017). An efficient vision-based traffic light detection for vehicles [30]. | CNN, SVM | 99.03%
Singh, I., Singh, S. K., Kumar, S., & Aggarwal, K. (2021). Dropout-VGG based Convolutional Neural Network for Traffic Sign Categorization [31]. | Dropout-VGG based Convolutional Neural Network | 99.53%

5.2. Plant species recognition
Machine vision based on traditional image processing techniques can be a useful tool for plant detection and identification. Plant identification is required for weed detection, herbicide application, or other efficient chemical spot-spraying tasks. The key to successful detection and identification of plants as species types is the segmentation of plants from background pixel regions. In particular, it is also helpful to segment individual leaves from the tops of canopies.
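As a small, hedged illustration of the plant/background segmentation step mentioned above (it is not a method taken from the works compared in Table 2 below), the sketch computes the widely used Excess Green vegetation index on an RGB image and thresholds it to obtain a plant mask. The fixed threshold value, the NumPy-only pipeline, and the synthetic demonstration image are assumptions made for illustration.

```python
# Illustrative plant/background segmentation using the Excess Green index
# (ExG = 2*g - r - b on chromatic coordinates); the threshold is chosen ad hoc.
import numpy as np

def plant_mask(rgb: np.ndarray, threshold: float = 0.1) -> np.ndarray:
    """Return a boolean mask of likely plant pixels for an HxWx3 RGB image."""
    img = rgb.astype(np.float32)
    total = img.sum(axis=2) + 1e-6                     # avoid division by zero
    r, g, b = (img[..., i] / total for i in range(3))  # chromatic coordinates
    exg = 2.0 * g - r - b                              # Excess Green index
    return exg > threshold                             # green-dominant pixels -> plant

# Synthetic example: a green patch on a brown, soil-like background.
demo = np.zeros((64, 64, 3), dtype=np.uint8)
demo[:, :] = (120, 90, 60)          # background color
demo[16:48, 16:48] = (40, 160, 50)  # plant-like patch
mask = plant_mask(demo)
print("plant pixels:", int(mask.sum()))  # 1024 pixels of the 32x32 patch
```

The CNN-based approaches summarized in Table 2 learn such discriminative cues directly from data, but a simple color-index mask of this kind remains a common preprocessing step for separating plants from background pixel regions.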
Table 2
Plant species recognition performance metric analysis for different methods and models

Authors | Methods | Performance metric
Mehdipour Ghazi, M., Yanikoglu, B., & Aptoula, E. (2017). Plant identification using deep neural networks via optimization of transfer learning parameters [32]. | CNN, AlexNet, Transfer Learning, VGGNet, GoogleNet | 78.44%
Grinblat, G. L., Uzal, L. C., Larese, M. G., & Granitto, P. M. (2016). Deep learning for plant identification using vein morphological patterns [33]. | CNN, Central Patch Extraction, Vein Segmentation | 96.9%
Dyrmann, M., Karstoft, H., & Midtiby, H. S. (2016). Plant species classification using deep convolutional neural network [34]. | CNN, Batch Normalization, Max Pooling | 98%

6. Open issues and future scope
There are several problems or open issues in image classification that stand in the way of achieving complete accuracy in a model [35]. Image classification raises many challenges, some of which arise in video processing, optical character recognition, ad hoc image classification, and other settings. Several different views of one individual on camera can confuse image classification solutions, which may then report one individual as several people [36]. Solutions must therefore combine all media into single-individual media profiles. Once redundant profiles are reduced, digital investigators can rely on the generated shortlist of profiles to find suspects, leads, and victims faster [37-39]. Mobile phones frequently contain screenshots of conversations from messaging applications that people use to document those conversations; image-to-text recognition is essential to make sense of such pictures so that the depicted text becomes searchable [40]. Further areas that rely on image classification include Internet of Things devices for smart cities, which involve various machine learning applications [41].
CNNs also have some open-ended problems; one of the main difficulties in the field is handling variation in the input data. The human visual system can recognize images from different angles, against different backgrounds, and under several different lighting conditions. When objects are partially hidden by other objects or differently colored, the human visual system finds cues and other pieces of information to recognize what is being seen [42]. Building a ConvNet that can recognize objects at the same level as humans has proven to be difficult. A well-trained ConvNet can identify an object regardless of where it appears in the picture [43]. However, if the object in the image undergoes rotation and scaling, the ConvNet will struggle to identify it. This can be tackled by adding various variations of the images during the training process, a technique known as data augmentation. Fig. 3 depicts the projected global usage of image classification in the coming years [44].

Figure 3: Global image classification usage in the coming years

Image classification technology has changed online representation with its applications in facial recognition, driverless vehicles, clinical disease identification, and even in the field of education. The scope of future image recognition applications is wide [45]. Augmented reality has given an entirely new perspective on 'fantasizing with your eyes open'.
The gaming field has started using image recognition technology combined with augmented reality to its advantage, as it helps provide gamers with a realistic experience [46]. Developers benefit as well, since they can utilize image recognition when building realistic gaming environments and characters.

7. Conclusion
An extensive study is performed on image classification, CNNs, and their uses, and a number of open issues are discussed. Deep learning-based ConvNets are the focus of the paper. These factors are kept in mind while discussing computer vision and its uses in the current scenario, including emerging areas such as IoT, AI, and smart cities. As discussed in the paper, image classification is the process of putting images into categories and labeling groups of vectors, or even pixels within an image, based on particular aspects. CNNs are utilized for image classification and recognition because of their high precision. A CNN follows a hierarchical model that builds up the network, like a funnel, and finally produces a fully connected layer in which all the neurons are connected to each other and the output is generated. In this paper, analysis is carried out for various image classification applications using deep learning methods; the applications include traffic sign classification and plant species recognition.

8. Acknowledgment
We wish to acknowledge the help provided by the technical and support staff of the CCET ACM Student Chapter and the Department of Computer Science & Engineering, CCET (Degree Wing), Chandigarh. We would also like to show our deep appreciation to our supervisors, who helped us in the finalization of our research paper.

9. References
[1] Harith Al-Sahaf, Ying Bi, Qi Chen, Andrew Lensen, Yi Mei, Yanan Sun, Binh Tran, Bing Xue & Mengjie Zhang (2019) A survey on evolutionary machine learning, Journal of the Royal Society of New Zealand, 49:2, 205-228, DOI: 10.1080/03036758.2019.1609052
[2] Sunil Sharma, Sunil Singh, and Subhash Panja, “Human Factors of Vehicle Automation”, In: Autonomous Driving and Advanced Driver-Assistance Systems (ADAS), Taylor & Francis Group (CRC Press), Chapter 15, 2021.
[3] SK Singh, K Kaur, A Aggarwal, D Verma, “Achieving High Performance Distributed System: Using Grid Cluster and Cloud Computing”, Int. Journal of Engineering Research and Applications (IJERA), 5(2), pp 59-67, 2015.
[4] SK Singh, RK Singh, MPS Bhatia, “Performance evaluation of hybrid reconfigurable computing architecture over symmetrical FPGAs”, International Journal of Embedded Systems and Applications, 2(3), 107-116, 2012.
[5] S Gupta, SK Singh, R Jain, “Analysis and optimisation of various transmission issues in video streaming over Bluetooth”, International Journal of Computer Applications, 11(7), 44-48, 2010.
[6] Ramesh, A., Kambhampati, C., Monson, J., & Drew, P. (2004). Artificial intelligence in medicine. Annals of The Royal College of Surgeons of England, 86(5), 334–338. doi:10.1308/147870804290
[7] G. Biswas, T. Linden, A. Rao Padala and P. Bose, "Globe-Trotter: An Intelligent Flight Itinerary Planner", IEEE Intelligent Systems, vol. 11, no. 02, pp. 56-64, 1989. doi: 10.1109/64.24922
[8] Feigenbaum, E. “The Art of Artificial Intelligence: Themes and Case Studies of Knowledge Engineering.” IJCAI (1977).
[9] LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).
https://doi.org/10.1038/nature14539 [10] Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural Networks, 61, 85–117. doi:10.1016/j.neunet.2014.09.003 [11] Singh, K. S. (2021). Linux Yourself (1st ed.). Routledge. [12] Zheng, N., Loizou, G., Jiang, X., Lan, X., & Li, X. (2007). Computer vision and pattern recognition. International Journal of Computer Mathematics, 84(9), 1265–1266. doi:10.1080/00207160701303912 [13] R. A. Jarvis, "A Perspective on Range Finding Techniques for Computer Vision," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-5, no. 2, pp. 122-139, March 1983, doi: 10.1109/TPAMI.1983.4767365. [14] C. P. Papageorgiou, M. Oren and T. Poggio, "A general framework for object detection," Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271), 1998, pp. 555-562, doi: 10.1109/ICCV.1998.710772. [15] S. Mitra and T. Acharya, "Gesture Recognition: A Survey," in IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 37, no. 3, pp. 311-324, May 2007, doi: 10.1109/TSMCC.2007.893280. [16] Lu, D., & Weng, Q. (2007). A survey of image classification methods and techniques for improving classification performance. International Journal of Remote Sensing, 28(5), 823–870. doi:10.1080/01431160600746456 [17] R. M. Haralick, K. Shanmugam and I. Dinstein, "Textural Features for Image Classification," in IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-3, no. 6, pp. 610-621, Nov. 1973, doi: 10.1109/TSMC.1973.4309314. [18] Sitaula, C., Hossain, M.B. Attention-based VGG-16 model for COVID-19 chest X-ray image classification. Appl Intell 51, 2850–2863 (2021). https://doi.org/10.1007/s10489-020-02055-x [19] A. Mishra, B. B. Gupta, D. Peraković, F. J. G. Peñalvo and C. -H. Hsu, "Classification Based Machine Learning for Detection of DDoS attack in Cloud Computing," 2021 IEEE International Conference on Consumer Electronics (ICCE), 2021, pp. 1-4, doi: 10.1109/ICCE50685.2021.9427665. [20] Yin, W., Kann, K., Yu, M., & Schütze, H. (2017). Comparative study of CNN and RNN for natural language processing. arXiv preprint arXiv:1702.01923. [21] Yamashita, R., Nishio, M., Do, R.K.G. et al. Convolutional neural networks: an overview and application in radiology. Insights Imaging 9, 611–629 (2018). https://doi.org/10.1007/s13244- 018-0639-9 [22] Kaiming He, Georgia Gkioxari, Piotr Dollar, Ross Girshick; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2961-2969 [23] Ren, S., He, K., Girshick, R., & Sun, J. (2016). Faster R-CNN: towards real-time object detection with region proposal networks. IEEE transactions on pattern analysis and machine intelligence, 39(6), 1137-1149. [24] Wang, L., Li, L., Li, J., Li, J., et al (2018). Compressive sensing of medical images with confidentially homomorphic aggregations. IEEE Internet of Things Journal, 6(2), 1402-1409. [25] Masud, M., Hossain, M. S., Alhumyani, H., Alshamrani, S. S., Cheikhrouhou, O., Ibrahim, S., ... & Gupta, B. B. (2021). Pre-trained convolutional neural networks for breast cancer detection using ultrasound images. ACM Transactions on Internet Technology (TOIT), 21(4), 1-17. [26] Adil, K., Jiang, F., Liu, S., Grigoriev, A., et al. (2017). Training an agent for fps doom game using visual reinforcement learning and vizdoom. International Journal of Advanced Computer Science and Applications, 8(12). [27] Zou, L., Sun, J., Gao, M., Wan, W., et al (2019). 
A novel coverless information hiding method based on the average pixel value of the sub-images. Multimedia tools and applications, 78(7), 7965-7980. [28] Lu, Y., Lu, J., Zhang, S., & Hall, P. (2018). Traffic signal detection and classification in street views using an attention model. Computational Visual Media, 4(3), 253-266. https://doi.org/10.1007/s41095-018-0116-x [29] Chen, L., Zhao, G., Zhou, J., & Kuang, L. (2017). Real-Time Traffic Sign Classification Using Combined Convolutional Neural Networks. 2017 4th IAPR Asian Conference on Pattern Recognition (ACPR). doi:10.1109/acpr.2017.12 [30] S. Saini, S. Nikhil, K. R. Konda, H. S. Bharadwaj and N. Ganeshan, "An efficient vision-based traffic light detection and state recognition for autonomous vehicles," 2017 IEEE Intelligent Vehicles Symposium (IV), 2017, pp. 606-611, doi: 10.1109/IVS.2017.7995785. [31] Inderpreet Singh, Sunil Kr. Singh, Sudhakar Kumar, Kriti Aggarwal. (2021). Dropout-VGG based Convolutional Neural Network for Traffic Sign Categorization. In: 2nd Congress on Intelligent Systems. CIS 2021. Lecture Notes on Data Engineering And Communication Technologies. Springer, Singapore. [32] Mehdipour Ghazi, M., Yanikoglu, B., & Aptoula, E. (2017). Plant identification using deep neural networks via optimization of transfer learning parameters. Neurocomputing, 235, 228– 235. doi:10.1016/j.neucom.2017.01.018 [33] Grinblat, G. L., Uzal, L. C., Larese, M. G., & Granitto, P. M. (2016). Deep learning for plant identification using vein morphological patterns. Computers and Electronics in Agriculture, 127, 418–424. doi:10.1016/j.compag.2016.07.003 [34] Dyrmann, M., Karstoft, H., & Midtiby, H. S. (2016). Plant species classification using deep convolutional neural network. Biosystems engineering, 151, 72-80. [35] AlZu’bi, S., Hawashin, B., Mujahed, M., Jararweh, Y., et al. (2019). An efficient employment of internet of multimedia things in smart and future agriculture. Multimedia Tools and Applications, 78(20), 29581-29605. [36] Aggarwal, K., Singh, S. K., Chopra, M., & Kumar, S. (2022). Role of Social Media in the COVID-19 Pandemic: A Literature Review. In B. Gupta, D. Peraković, A. Abd El-Latif, & D. Gupta (Ed.), Data Mining Approaches for Big Data and Sentiment Analysis in Social Media (pp. 91-115). IGI Global. http://doi:10.4018/978-1-7998-8413-2.ch004 [37] Al-Ayyoub, M., AlZu’bi, S., Jararweh, Y., Shehab, M. A., et al. (2018). Accelerating 3D medical volume segmentation using GPUs. Multimedia Tools and Applications, 77(4), 4939-4958. [38] Jain, A. K., & Gupta, B. B. (2019). Feature based approach for detection of smishing messages in the mobile environment. Journal of Information Technology Research (JITR), 12(2), 17-35 [39] Manepalli Ratna Sri, Surya Prakash and T Karuna (2021) Classification of Fungi Microscopic Images – Leveraging the use of AI, Insights2Techinfo, pp.1 [40] Y. Sun, B. Xue, M. Zhang and G. G. Yen, "Completely Automated CNN Architecture Design Based on Blocks," in IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 4, pp. 1242-1254, April 2020, doi: 10.1109/TNNLS.2019.2919608. [41] Cvitić, I., Peraković, D., Periša, M. et al. Ensemble machine learning approach for classification of IoT devices in smart home. Int. J. Mach. Learn. & Cyber. 12, 3179–3202 (2021). https://doi.org/10.1007/s13042-020-01241-0 [42] H. 
Shin et al., "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning," in IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1285-1298, May 2016, doi: 10.1109/TMI.2016.2528162. [43] Gupta, S., & Gupta, B. B. (2018). A robust server-side javascript feature injection-based design for JSP web applications against XSS vulnerabilities. In Cyber Security (pp. 459-465). Springer, Singapore. [44] Sudhakar Kumar, Sunil Kr Singh, Naveen Aggarwal, Kriti Aggarwal, “Evaluation of automatic parallelization algorithms to minimize speculative parallelism overheads: An experiment”, pp 1517-1528, 2021 Journal of Discrete Mathematical Sciences and Cryptography, Volume 24, Issue 5 , Taylor & Francis, (2021) [45] Sahoo, Somya Ranjan & Gupta, B B. (2021). Real-Time Detection of Fake Account in Twitter Using Machine-Learning Approach. 10.1007/978-981-15-1275-9_13. [46] Gupta, B.B., Prajapati, V., Nedjah, N. Machine learning and smart card based two-factor authentication scheme for preserving anonymity in telecare medical information system (TMIS). Neural Comput & Applic (2021). https://doi.org/10.1007/s00521-021-06152-x