Convolutional Neural Network for Parking Slots Detection
Pavlo Radiuka, Olga Pavlovaa, Houda El Bouhissib, Volodymyr Avsiyevycha and Volodymyr
Kovalenkoa
a
    Khmelnytskyi National University, Instytuts’ka str., 11, Khmelnytskyi, 29016, Ukraine
b
    LIMED Laboratory, Faculty of Exact Sciences, University of Bejaia, Bejaia, 06000, Algeria


                 Abstract
                 With the rapid growth of the number of vehicles on our streets, finding a vacant parking
                 spot is already problematic and will become even more so in the near future. Smart parking
                 solutions have proved their usefulness for localizing unoccupied parking spots. Nowadays,
                 surveillance cameras can provide more advanced solutions for smart cities by finding
                 vacant parking spots and improving vehicle safety in public parking areas. Based on the
                 conducted analysis, the Google Cloud Vision technology was selected as the parking slots
                 detector, and a pre-trained convolutional neural network as the feature extractor and
                 classifier, to develop a cyber-physical system for smart parking based on computer vision.
                 Moreover, a new model based on a fine-tuned convolutional neural network was developed
                 to detect empty and occupied slots in the parking lot images collected in the KhNUParking
                 dataset. The achieved results show that parking slot detection can be simplified and its
                 accuracy improved. As a result of the computational investigation, the proposed fine-tuned
                 CNN processed 66 parking slots in roughly 0.14 seconds on a single GPU with an accuracy
                 of 85.4%, demonstrating decent performance and practical value. Overall, all considered
                 approaches have strengths and weaknesses and might be applied to the task of parking
                 slot detection depending on the number of images, the CCTV angle, and weather
                 conditions.

                 Keywords
                 Video-image processing, smart parking, deep learning, convolutional neural network,
                 OpenCV, Google Cloud Vision

1. Introduction
    In recent years, the issue of creating smart parking has become highly essential, especially in large
cities. As the number of cars has rapidly increased over the last few years (see Fig. 1), so has the
need for parking spaces and search facilities. Assuming that the average driver spends 20 minutes
searching for a parking spot every day, about 120 hours a year could instead be spent on something
more useful.
    As shown in Fig. 1, most vehicles entering Ukraine have been newly imported automobiles (red
part of the column) and used cars from Europe and the USA (green part of the column). At the same
time, it is noticeable that from 2016 to 2020 the import of new vehicles remained at about the same
level, while the share of imported used cars gradually increased. This outcome is due to a change in
the legislation on customs clearance of vehicles imported from abroad: on Nov. 25, 2018, a
law was passed to simplify the procedure for customs clearance of used cars imported from abroad

IntelITSIS’2022: 3rd International Workshop on Intelligent Information Technologies & Systems of Information Security, March 23–25,
2022, Khmelnytskyi, Ukraine
EMAIL: radiukpavlo@gmail.com (P. Radiuk); olya1607pavlova@gmail.com (O. Pavlova); houda.elbouhissi@gmail.com (H. El Bouhissi)
kovalleonid4@gmail.com (V. Avsiyevych); vovakm1996@gmail.com (V. Kovalenko)
ORCID: 0000-0003-3609-112X (P. Radiuk); 0000-0001-7019-0354 (O. Pavlova); 0000-0003-3239-8255 (H. El Bouhissi); 0000-0002-
4394-6467 (V. Avsiyevych); 0000-0002-1859-5378 (V. Kovalenko)
            ©️ 2022 Copyright for this paper by its authors.
            Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
            CEUR Workshop Proceedings (CEUR-WS.org)
[1]. Nowadays, there are many smart parking projects, but ready-for-use examples can be counted on
the fingers of one hand, and information about the cost-effectiveness of their implementation is
generally minimal. It should be noted that when designing such tools, the most significant financial
part of the development is borne by the software, not the hardware. After considering and comparing
different parking detection techniques [2], we concluded that smart parking using visual surveillance
based on an external closed-circuit television (CCTV) camera is more effective than the alternatives
with respect to most of the considered factors.




Figure 1: Increase in vehicle numbers in Ukraine
    Information technologies have been widely used to visualize civilian infrastructure using simple
and affordable video surveillance cameras, i.e., CCTVs. The same CCTVs can be used to detect the
occupied and empty slots in automobile parking lots. Videos and images obtained from such cameras
are processed and analyzed with computer vision (CV) techniques and tools. However, two
challenges prevent the widespread use of CV means for occupancy detection.
    The first issue is the low accuracy of detecting occupied and vacant parking slots by CV means
compared with sensor-based systems or ordinary manual counting [3]. Generally, the low accuracy of
CV-based approaches is caused by various factors, such as the varying appearance of vehicles, the
impact of the environment on images (shadows, bright sunlight, or fog), occlusion by other vehicles
(or stationary objects), and visual distortion due to cameras viewing the scene at an acute angle.
    The delimitation of parking slots in videos or images is another challenge for vision-based methods.
A parking area can be covered by numerous CCTVs, yet the boundaries of the parking lot may change
over time for legal or municipal reasons. In addition, manually marking every parking space in videos
and images is a time-consuming task and may cause many technical mistakes. Accordingly, automatic
means and methods for delineating the boundaries of parking spaces are highly relevant for parking
solutions based on intelligent information technologies [4]. Consequently, to achieve the goal of the
study, the following tasks must be completed:
    1. To search and analyze up-to-date technologies for image and video processing based on
    modern CV methods and means.
    2. To select the most appropriate technology to create a cyber-physical system for smart parking
    based on the outdoor surveillance camera of the university parking lot.
    3. To develop an information model for parking slots detection and vehicle identification.
    4. To validate the developed model in terms of its practical value.

2. Related works
   The scientific community has actively investigated and proposed novel CV methods and
approaches for identifying and demarcating parking areas. Traditional vision-based techniques for
detecting parking slots are divided into line-based [5] and marking-point-based [6] ones. Line-based
approaches first construct visible lines in an image around a region of interest (ROI) using various CV
features, such as the Canny edge detector [7], the Laplacian operator [8], or Haar cascades [9]. Next, the
parameters of the detected lines are estimated with a line-fitting algorithm to draw boundaries around
the ROI. Similarly, marking-point-based approaches search for marking points in an
image around an ROI using, for example, the Harris corner detector [10] or a boosting decision tree [11],
and then use a template matching technique [12] or combine it with line detection [13] to locate a
targeted parking space. Even though such traditional techniques for detecting parking spaces provide
decent results, they are susceptible to changes in the environment and, therefore, not applicable to
complicated cases of delimitation of parking areas.
   Overall, Table 1 summarizes notable studies conducted over the past years to find the best
approach for parking slot detection and vehicle recognition.

Table 1
Analysis of existing computer vision approaches for smart parking

 [12] (2018) Deep convolutional neural network, OpenCV.
      Advantages: utilizes the coordinates of every parking slot, requiring relatively little
      computational power.
      Disadvantages: the features obtained from the benchmark dataset may not be practical for
      recognizing a real outdoor parking lot.
 [9] (2019) Haar cascade, XGBoost.
      Advantages: this ensemble approach identifies a vehicle or a parking slot from any angle of
      view; the use of imposed features ensures a detection accuracy of roughly 100% for an
      individual vehicle.
      Disadvantages: in multiple-vehicle detection, the superimposed features sometimes do not
      distinguish between similar edges of objects, leading to two vehicles being detected as one.
 [13] (2020) Faster R-CNN.
      Advantages: the hyperparameters of the neural network are fine-tuned to the characteristics
      of the parking spaces, leading to a high precision rate of 99.63%.
      Disadvantages: the network may miss some parking spaces when the entry-point markings are
      faint or the parking space is smaller than the threshold.
 [14] (2020) Hough transform, OpenCV.
      Advantages: provides a maximum recognition accuracy of about 100% given a fixed CCTV
      position and constant light intensity.
      Disadvantages: even minor changes in light and shadows might considerably worsen the
      classification results.
 [15] (2021) Long short-term memory.
      Advantages: high-quality prediction of empty parking spaces using CV technology and
      real-time car parking data.
      Disadvantages: the detection algorithm is adapted to a specific parking space; even minor
      changes within the parking lot can adversely affect classification accuracy.
 [16] (2022) Mask R-CNN, OpenCV.
      Advantages: a scalable and relatively inexpensive system that can detect empty parking
      spaces from video and image data.
      Disadvantages: the performance highly depends on the surveillance camera and the computing
      device.
    As can be seen from Table 1, the deep learning (DL) approach, particularly deep convolutional
neural networks (CNNs), has been used most frequently over the past five years and has shown the
most robust recognition of parking lots among the considered approaches. For example, DeepPS [12]
is the first multi-module information system based on DL algorithms for identifying parking spaces.
This system is based on two vision-based technologies: the well-known OpenCV library to describe
the marking points in an image around the ROI, and a CNN to identify the target features of vehicles
in an image and match the paired marking points with the identified features. Overall, ensemble
approaches based on deep neural networks demonstrate the best performance in detecting parking
slots and vehicles in different environmental conditions.
    Therefore, considering the abovementioned analysis, two vision-based technologies were defined
as the most effective for parking lot detection – OpenCV and CNN.

2.1.    OpenCV Computer Vision Library + CNN
   Over the past decades, the OpenCV computer vision library [18] has become the leading
technology in the image processing domain. This toolset serves as an infrastructure for applying CV
techniques in information systems. OpenCV is used, among other things, to resize input images,
convert them to vector form, and detect the features of target objects in the image. At the same time,
one of the most popular approaches to detecting features in an image today is DL, in particular,
CNNs [19]-[21].
   The CNN model combines many functional operations that transform the input image into feature
vectors and then into output scores estimating whether the identified objects belong to predefined
classes. The CNN architecture utilized in this study is taken from the authors’ previous work [22] and
is depicted in Fig. 2.




Figure 2: The scheme of convolutional neural network used in this work

   According to the classification results in [23], we conclude that combining the GCV API system
and OpenCV + CNN tools may achieve more robust performance and higher classification accuracy.

2.2.    Google Cloud Vision API (GCV API)
    Another equally well-known image recognition technology is the Google Cloud Vision API (GCV
API) [24]. The GCV API is, in essence, a set of ready-made machine learning models and algorithms
that service users can quickly apply to their business needs. The GCV API operates in two steps:
1) assigning labels to the original image; 2) automatic recognition of objects in the image according to
predefined classes. The GCV API is a universal classifier that identifies various moving and still
objects in an image.
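The two-step principle maps onto a single `images:annotate` REST request carrying two feature types. The helper below only builds the JSON body, so it runs offline; `build_annotate_request` is a hypothetical name, and actually sending the request requires a real API key or OAuth token.

```python
import base64

# Public endpoint of the GCV REST API; credentials must be attached to the call.
VISION_ENDPOINT = "https://vision.googleapis.com/v1/images:annotate"

def build_annotate_request(image_bytes: bytes, max_results: int = 50) -> dict:
    """Build the JSON body asking GCV for labels (step 1) and objects (step 2)."""
    return {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [
                {"type": "LABEL_DETECTION", "maxResults": max_results},
                {"type": "OBJECT_LOCALIZATION", "maxResults": max_results},
            ],
        }]
    }

body = build_annotate_request(b"raw JPEG bytes of a parking-lot frame")
```

The response would then list localized objects (e.g., "Car") with normalized bounding polygons, which can be intersected with known slot boundaries.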
    In [23], we conducted a preliminary experiment: ten images were used from the video surveillance
camera of one of the parking lots of Khmelnytskyi National University. The images were prepared in
advance: the contours were cropped to bring the focus as close as possible to the location of the cars.
In addition, the objects in the image were magnified to increase the likelihood of finding each object.
    The experiment consisted of testing the same image with the two most popular image recognition
technologies. The object identification results on the target image, obtained using the OpenCV + CNN
and GCV API technologies, are shown in Fig. 3.
    Fig. 3 shows that the GCV API technology coped much better with the task of identifying cars in
the image (Fig. 3b) than the OpenCV + CNN technology (Fig. 3a).




                      (a)                                                 (b)
Figure 3: Identified objects on the target image that correspond to the searched cars, found by: (a) –
OpenCV + CNN, (b) – GCV API [23]

   Hence, the GCV API system as the parking slots detector and a pre-trained CNN as the feature
extractor and classifier were chosen to develop a cyber-physical system for smart parking.

3. The model
   The authors compiled the KhNUParking dataset from images extracted from an external closed-
circuit television (CCTV) camera. The CCTV was installed on Campus 3 of Khmelnytskyi National
University, Ukraine. The images show parking spaces of the outdoor parking lot between campuses 3
and 4 of the university (Fig. 4).




                        (a)                                                  (b)
Figure 4: The samples of the KhNUParking dataset presenting targeted parking spaces: (a) – almost
all parking lots are empty, (b) – nearly all are fully occupied

   The initial KhNUParking dataset consisted of 100 images extracted from the CCTV, each of
853 × 480 pixels, and was split into training (70%) and validation (30%) subsets. An additional subset
of 100 images was created to test the classification models. Furthermore, ground-truth annotations of
the parking slots, namely bounding boxes (33 slots) and occupancy labels (3,300), were employed to
assess the proposed approach’s accuracy.
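The 70/30 split described above can be reproduced with a simple shuffled partition; the file names and the seed below are illustrative assumptions, not the actual KhNUParking file layout.

```python
import random

# Hypothetical file names for the 100 CCTV frames of KhNUParking.
images = [f"frame_{i:03d}.png" for i in range(100)]

rng = random.Random(42)  # fixed seed for a reproducible split
rng.shuffle(images)

train, val = images[:70], images[70:]  # 70% training, 30% validation

assert len(train) == 70 and len(val) == 30
```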
   PKLot: a subset of 390 randomly sampled images of 1280 × 720 pixels was collected from the
PKLot dataset [25]. It must be noted that in the original PKLot dataset, parked vehicles are displayed
from a top-down view.
   Experimental setup: all computational experiments were performed on the Python v3.8 stack
with the Keras framework. The calculations were executed on an 8-core Ryzen 2700 CPU and a
single GeForce GTX 1080 GPU with 8 GB of memory.
   Methodology: the proposed approach for CV technology is depicted in Fig. 5.




Figure 5: The proposed approach for smart parking cyber-physical system

    In this work, we utilized a neural network model based on a pre-trained CNN as a feature extractor
and a two-layer perceptron as the classification module. The CNN was pre-trained on the ImageNet
dataset with 1000 classes. To prepare the model for detecting occupied and empty parking spaces, the
last fully connected layers of the network were replaced so that the output contains two classes
corresponding to “Empty” and “Occupied.” The tested models were evaluated by several statistical
indicators and by run-time, the average time in seconds needed to read images from the hard disk and
crop them. The statistical measurements used in this study are defined as:
                         Accuracy = (TP + TN) / (TP + TN + FP + FN),                             (1)
                         Precision = TP / (TP + FP),                                             (2)
                         Recall = TP / (TP + FN),                                                (3)
                         F1 = 2 · Precision · Recall / (Precision + Recall),                     (4)
    where TP represents true positive cases in the testing dataset, TN stands for true negative cases,
FP denotes false positive cases, and FN represents false negative cases.
    The data augmentation technique was also applied to the fine-tuning dataset to reduce overfitting.
    Two transformations were applied: 1) reflection along the X and Y axes and 2) change of the X
and Y scales of the images. Furthermore, the input images were resized to 128 × 128 pixels to fit the
input of the fine-tuned CNN.
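A minimal NumPy sketch of the two transformations, assuming a single-channel crop; the scale range and the nearest-neighbour resize are illustrative choices, not the exact implementation used in the paper.

```python
import numpy as np

def nn_resize(img: np.ndarray, h: int, w: int) -> np.ndarray:
    """Nearest-neighbour resize, sufficient for a sketch."""
    rows = (np.arange(h) * img.shape[0] / h).astype(int)
    cols = (np.arange(w) * img.shape[1] / w).astype(int)
    return img[rows][:, cols]

def augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    # 1) random reflection along the X and Y axes
    if rng.random() < 0.5:
        img = np.flip(img, axis=0)
    if rng.random() < 0.5:
        img = np.flip(img, axis=1)
    # 2) random change of the X and Y scales (the 0.8-1.2 range is an assumption)
    sy, sx = rng.uniform(0.8, 1.2, size=2)
    img = nn_resize(img, max(1, int(img.shape[0] * sy)),
                    max(1, int(img.shape[1] * sx)))
    # finally, resize to the fixed 128 x 128 CNN input
    return nn_resize(img, 128, 128)

crop = np.random.rand(97, 140)                  # stand-in for one slot crop
out = augment(crop, np.random.default_rng(0))
assert out.shape == (128, 128)
```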

4. Experiments and Results
   The network was pre-trained with a stochastic gradient descent with a momentum of 0.8, a
learning rate of 0.005, and a batch size of 64; training epochs were set to 20. The pre-training process
took roughly 50 minutes on a single GPU. Fig. 6 shows the training and validation accuracy and loss
curves.
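The training configuration above can be sketched in Keras. This is a hedged reconstruction: the paper does not publish its exact backbone, so MobileNetV2 is used here merely as a plausible pre-trained CNN, `weights=None` keeps the sketch offline (the paper would load ImageNet weights), and the 128-unit hidden layer of the perceptron head is an assumption.

```python
import tensorflow as tf

# Backbone: a stand-in pre-trained CNN (pass weights="imagenet" to load
# the actual ImageNet weights, as done in the paper).
base = tf.keras.applications.MobileNetV2(weights=None, include_top=False,
                                         input_shape=(128, 128, 3))

# Two-layer perceptron head replacing the original 1000-class layers
# with the two target classes, "Empty" and "Occupied".
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),  # hidden size is an assumption
    tf.keras.layers.Dense(2, activation="softmax"),
])

# SGD settings from the paper: momentum 0.8, learning rate 0.005.
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.005, momentum=0.8),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Training would then run for the paper's 20 epochs with batch size 64, e.g.:
# model.fit(train_ds, validation_data=val_ds, epochs=20, batch_size=64)
```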
   The prepared fine-tuned CNN was tested on the set of 100 KhNUParking images in the following
manner. First, in each of the 100 images, the 33 individual parking slots were cropped and then passed
to the fine-tuned CNN to perform the classification task.




                        (a)                                                 (b)
Figure 6: Training and validation curves of the pre-training procedure: (a) – accuracy, (b) – loss
function

    The ground-truth annotations of the KhNUParking test set contained the status of 3,300
occupied/empty spaces and 33 bounding boxes of the parking slots.
    Here, the delineations of parking slots are represented as bounding boxes that are also used to crop
the individual parking slots.
    A bounding box is defined by [x, y, w, h], where [x, y] are the coordinates of the center of the box,
and [w, h] are its width and height. Fig. 7 presents the classification results obtained on the
testing dataset.
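Cropping an individual slot from a centre-format [x, y, w, h] box can be written directly; the frame and the box values below are illustrative.

```python
import numpy as np

def crop_slot(image: np.ndarray, box) -> np.ndarray:
    """Crop one parking slot given a centre-format bounding box [x, y, w, h]."""
    x, y, w, h = box
    x0, y0 = x - w // 2, y - h // 2  # top-left corner from the centre point
    return image[y0:y0 + h, x0:x0 + w]

frame = np.zeros((480, 853), dtype=np.uint8)  # stand-in CCTV frame
slot = crop_slot(frame, (100, 200, 40, 30))   # illustrative box
assert slot.shape == (30, 40)                 # rows = h, columns = w
```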




Figure 7: The confusion matrix of the prediction results

   As can be seen from Fig. 7, 748 empty parking spaces and 2069 occupied parking slots were
correctly identified; meanwhile, 321 vacant slots were classified as occupied, and 162 occupied spaces
were recognized as vacant. Thus, the overall classification accuracy was 85.34%. According to the
obtained classification results, the proposed fine-tuned CNN makes more mistakes, and is thus less
accurate, in identifying empty parking spaces.
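Treating "Occupied" as the positive class, the counts in Fig. 7 can be plugged into the standard metric definitions to check the reported figures:

```python
# Counts read off the confusion matrix in Fig. 7 ("Occupied" = positive class).
TP, TN = 2069, 748  # correctly identified occupied / empty slots
FP, FN = 321, 162   # vacant slots called occupied / occupied slots called vacant

accuracy = (TP + TN) / (TP + TN + FP + FN)          # 2817 / 3300 ≈ 0.8534
precision = TP / (TP + FP)                          # ≈ 0.866
recall = TP / (TP + FN)                             # ≈ 0.927
f1 = 2 * precision * recall / (precision + recall)  # ≈ 0.895 (Table 2 reports 0.896)

assert round(accuracy, 4) == 0.8536
```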
   Upon visualizing a few of the wrongly identified parking spaces in Fig. 8, it was observed that
those parking slots mostly contained parts of vehicles, people, or other objects inside the image crop.
Finally, Fig. 9 shows a visual representation of the classified parking slots.
   Several models, namely AlexNet [19], VGG-16 [20], and MobileNetV2 [21], were compared with
the fine-tuned CNN in terms of their efficiency and accuracy.
   The classification results obtained from all models are shown in Table 2.
   Occupied score: 0.866                Occupied score: 0.651           Occupied score: 0.901




   Occupied score: 0.589              Occupied score: 0.731               Occupied score: 0.488




Figure 8: A few falsely classified parking spaces with their occupied scores




Figure 9: The visualization sample of the parking lot: red color represents the occupied slots, green
color represents empty slots

Table 2
The comparison of well-known neural network architectures with our proposed fine-tuned
convolutional neural network based on the KhNUParking dataset
          Approach             Accuracy     Precision     Recall      F1     Time, seconds
         AlexNet [19]           0.777         0.820       0.858     0.839        0.49
         VGG-16 [20]            0.843         0.878       0.891     0.885        0.71
       MobileNetV2 [21]         0.852         0.863       0.928     0.895        0.52
         GCV API [24]           0.673         0.767       0.741     0.754        0.22
      Our fine-tuned CNN        0.854         0.866       0.927     0.896        0.14
    Table 2 presents the values of the statistical measurements (1)-(4) (validity) and the run-time on
the GPU (efficiency) obtained for the compared approaches. As seen from the table, the generalizing
ability of all models is high enough for this kind of parking space, yet there are some differences in
the indicators among the models. The proposed fine-tuned CNN performed best in classification
accuracy (85.4%) and 𝐹1 -score (89.6%), surpassing the analogs by at least 0.15% and 0.09%,
respectively. At the same time, the VGG-16 model achieved the highest precision (87.8%), surpassing
our model by 1.24%, while MobileNetV2 scored the highest recall (92.8%), surpassing our model by
0.09%. As for run-time, our fine-tuned CNN required the least computational time, needing only 0.14
seconds to process the images read from the hard disk.
    Google Cloud Vision showed worse performance than the analogs in these experiments yet
retained appropriate generalizing ability over diverse parking spaces. In conclusion, all considered
approaches have strengths and weaknesses and might be applied to the task of parking slot detection
depending on the number of images, the CCTV angle, and weather conditions.
    Overall, the proposed fine-tuned CNN could process 66 parking slots in roughly 0.14 seconds on a
single GPU with an accuracy of 85.4%, demonstrating decent performance and practical value.

5. Conclusions
   During the study, an analysis of information technologies for image recognition based on computer
vision was conducted. Based on this analysis, the Google Cloud Vision technology was selected as the
parking slots detector, and a pre-trained convolutional neural network as the feature extractor and
classifier, to develop a cyber-physical system for smart parking. A new model based on a fine-tuned
convolutional neural network was developed to detect empty and occupied slots in the parking lot
images collected in the KhNUParking dataset. Based on the achieved results, parking slot detection
can be simplified and its accuracy improved. As a result of the computational investigation, the
proposed fine-tuned CNN managed to process 66 parking slots in roughly 0.14 seconds on a single
GPU with an accuracy of 85.4%, demonstrating decent performance and practical value.
   Further investigation will be devoted to developing the server- and client-based parts as a mobile
app that tracks the availability of vacant places at the university’s parking lot.

6. References
[1] O. Onischuk, S. Buchaskyi, O. Novitsky, and V. Malyschuk, Analytical study of the secondary
     car market of Ukraine: Current state and prospects, p. 51, 2021. URL:
     https://eauto.org.ua/news/13-analitichne-doslidzhennya-vtorinnogo-avtorinku-ukrajini
[2] O. Pavlova, V. Kovalenko, T. Hovorushchenko, and V. Avsiyevych, Neural network based
     image recognition method for smart parking, Comput. Syst. Inf. Technol., 3 1 (2021) 49-55.
     doi:10.31891/CSIT-2021-3-7.
[3] S. D. Khan and H. Ullah, A survey of advances in vision-based vehicle re-identification,
     Comput. Vis. Image Underst., 182 (2019) 50-63. doi:10.1016/j.cviu.2019.03.001.
[4] M. Dixit, C. Srimathi, R. Doss, S. Loke, and M. A. Saleemdurai, Smart parking with computer
     vision and IoT technology, in 2020 43rd International Conference on Telecommunications and
     Signal Processing (TSP-2020), (2020) 170-174. doi:10.1109/TSP49548.2020.9163467.
[5] H. Do and J. Y. Choi, Context-based parking slot detection with a realistic dataset, IEEE
     Access, 8 (2020) 171551-171559. doi:10.1109/ACCESS.2020.3024668.
[6] W. Li, H. Cao, J. Liao, J. Xia, L. Cao, and A. Knoll, Parking slot detection on around-view
     images using DCNN, Front. Neurorobot., 14 (2020) 46. doi:10.3389/fnbot.2020.00046.
[7] J. Trivedi, M. S. Devi, and D. Dhara, Canny edge detection based real-time intelligent parking
     management system, Sci. J. Silesian Univ. Technol. Transp., 106 (2020) 197-208.
     doi:10.20858/SJSUTST.2020.106.17.
[8] M. Noor and A. Shrivastava, Automatic parking slot occupancy detection using Laplacian
     operator and morphological kernel dilation, in 2021 10th IEEE International Conference on
     Communication Systems and Network Technologies (CSNT-2021), (2021) 825-831.
     doi:10.1109/CSNT51715.2021.9509620.
[9] I. M. Hakim, D. Christover, and A. M. Jaya Marindra, Implementation of an image processing
     based smart parking system using Haar-Cascade method, in 2019 IEEE 9th Symposium on
     Computer Applications & Industrial Electronics (ISCAIE-2019), (2019) 222-227.
     doi:10.1109/ISCAIE.2019.8743906.
[10] J. S. L. Tang and S. Manickam, Parking lot occupancy detection using image overlay and
     intersection technique with Harris corner detector, J. Eng. Technol., 11 1 (2020) 37-52. URL:
     https://jet.utem.edu.my/jet/article/view/5876 (accessed Oct. 25, 2021).
[11] R. Sun, G. Wang, W. Zhang, L.-T. Hsu, and W. Y. Ochieng, A gradient boosting decision tree
     based GPS signal reception classification algorithm, Appl. Soft Comput., 86 (2020) 105942.
     doi:10.1016/j.asoc.2019.105942.
[12] L. Zhang, J. Huang, X. Li, and L. Xiong, Vision-based parking-slot detection: A DCNN-based
     approach and a large-scale benchmark dataset, IEEE Trans. Image Process., 27 11 (2018)
     5350-5364. doi:10.1109/TIP.2018.2857407.
[13] W. Li, L. Cao, L. Yan, C. Li, X. Feng, and P. Zhao, Vacant parking slot detection in the around
     view image based on deep learning, Sensors, 20 7 (2020) 2138. doi:10.3390/s20072138.
[14] J. D. Trivedi, M. Sarada Devi, and D. H. Dave, Different modules for car parking system
     demonstrated using Hough transform for smart city development, in Intelligent Manufacturing
     and Energy Sustainability, 169 (2020) 109-121. doi:10.1007/978-981-15-1616-0_11.
[15] H. Canli and S. Toklu, Deep learning-based mobile application design for smart parking, IEEE
     Access, 9 (2021) 61171-61183. doi:10.1109/ACCESS.2021.3074887.
[16] G. Manjula, G. Govinda Rajulu, R. Anand, and J. T. Thirukrishna, Implementation of smart
     parking application using IoT and machine learning algorithms, in Computer Networks and
     Inventive Communication Technologies, (2022) 247-257. doi:10.1007/978-981-16-3728-5_18.
[17] C. Min, J. Xu, L. Xiao, D. Zhao, Y. Nie, and B. Dai, Attentional graph neural network for
     parking-slot detection, IEEE Robot. Autom. Lett., 6 2 (2021) 3445-3450.
     doi:10.1109/LRA.2021.3064270.
[18] S. Gollapudi, OpenCV with Python, in Learn Computer Vision Using OpenCV, Apress,
     Berkeley, CA, (2019) 31-50. doi:10.1007/978-1-4842-4261-2_2.
[19] A. Krizhevsky, I. Sutskever, and G. E. Hinton, ImageNet classification with deep convolutional
     neural networks, Commun. ACM, 60 6 (2017) 84-90. doi:10.1145/3065386.
[20] K. Simonyan and A. Zisserman, Very deep convolutional networks for large-scale image
     recognition, in 3rd International Conference on Learning Representations (ICLR-2015), (2015)
     1-14. doi:10.48550/arxiv.1409.1556.
[21] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L. Chen, MobileNetV2: Inverted residuals
     and linear bottlenecks, in 2018 IEEE/CVF Conference on Computer Vision and Pattern
     Recognition (CVPR-2018), (2018) 4510-4520. doi:10.1109/CVPR.2018.00474.
[22] O. Barmak and P. Radiuk, Web-based information technology for classifying and interpreting
     early pneumonia based on fine-tuned convolutional neural network, Comput. Syst. Inf.
     Technol., 3 1 (2021) 12-18. doi:10.31891/CSIT-2021-3-2.
[23] V. Avsiyevych and V. Kovalenko, Analysis of information technology for smart parking based
     on artificial neural networks, in XIII All-Ukrainian Scientific and Practical Conference “Actual
     Problems of Computer Science” (APCS-2021), (2021) 12-14.
[24] Vision AI, Google Cloud, Google, Inc., 2021. URL: https://cloud.google.com/vision (accessed
     Dec. 17, 2021).
[25] P. R. L. De Almeida, L. S. Oliveira, A. S. Britto, E. J. Silva, and A. L. Koerich, PKLot - A
     robust dataset for parking lot classification, Expert Syst. Appl., 42 11 (2015) 4937-4949.
     doi:10.1016/J.ESWA.2015.02.009.