<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>surveillance systems</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Bahniuk</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Terletskyi</string-name>
          <email>t.terletskyi@lntu.edu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kaidyk</string-name>
          <email>o.kaidyk@lntu.edu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Serhii Kostiuchko</string-name>
          <email>s.kostiuchko@lutsk-ntu.com.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Inna Kondius</string-name>
          <email>i.kondius@lntu.edu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Lutsk National Technical University</institution>
          ,
          <addr-line>Lvivska Street 75, 43016 Lutsk</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2026</year>
      </pub-date>
      <abstract>
        <p>The article addresses a persistent problem in CCTV design: the recommended criteria for equipment selection change constantly because the underlying technology evolves continuously and rapidly. The results of an analysis of existing standards are presented and their discrepancies identified. This trend destabilizes clear quality criteria and creates a risk of unjustified CCTV design. The paper describes the methods and results of research into how the spatial resolution of images varies with a range of technical characteristics of video cameras; the study is based on the relevant theory, implemented analytically, and confirmed by computer modeling in specialized software. The discrepancy between the calculated and modeled results does not exceed the permissible limits. The presented data will help designers make an informed choice of video cameras, reducing the risk of overdesign.</p>
      </abstract>
      <kwd-group>
        <kwd>Design</kwd>
        <kwd>criteria</kwd>
        <kwd>operational task</kwd>
        <kwd>pixel density</kwd>
        <kwd>matrix</kwd>
        <kwd>image quality</kwd>
        <kwd>scene</kwd>
        <kwd>resolution</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
    </sec>
    <sec id="sec-2">
      <title>2. Problem statement</title>
      <p>The central problem of CCTV design today is the lack of stable, universal criteria for selecting
equipment, caused by the continuous and rapid evolution of technology. This was particularly
noticeable during the transition from analog to digital systems. This dynamic creates two
dilemmas: the destabilization of quality criteria (throughout the history of these systems, quality
assessment criteria have constantly changed) and the risk of unjustified overengineering (in
particular, the appropriate choice of video camera resolution).</p>
      <p>Designers face these questions when selecting equipment for a new system in accordance with
modern requirements, which need to be addressed.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Formulation of the purpose of the article</title>
      <p>The purpose of this work is to establish the influence of technical parameters of video cameras on
the image quality of an object when solving existing types of operational CCTV tasks in
accordance with the recommendations of existing standards. This will allow for a reasonable
approach to the selection of video cameras and reduce the risk of overdesign. The question of
determining the limits of the use of these criteria is also of interest.</p>
      <p>The results of this analysis can be applied in scientific research and practical tasks related to
CCTV design.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Justification of the analysis of scientific research sources</title>
      <p>Historically, the first CCTV systems, which appeared in the 1950s, were analog in terms of image
transmission and were not subject to industry quality standards. The process of selecting video
cameras was largely reduced to a simple comparison of technical characteristics and specifications
offered by manufacturers as key selling points. This practice led CCTV developers to make a
critical mistake – they did not take into account operational requirements, i.e., the real goals and
tasks that the system had to perform.</p>
      <p>The level of image detail is one of the key factors when choosing a CCTV camera. This
indicator, which reflects the clarity of the object, directly depends on the operational tasks that the
system has to perform. It was the need to formalize these tasks that led to the emergence of the
concepts of “operational task” and “operational standards.” This concept, formulated in the UK in
the late 1980s, formed the basis of one of the first global security standards – BS 8418:1987. These
standards are of a recommendatory nature.</p>
      <p>The BS 8418:1987 standard specified that in the era of analog CCTV, the main criterion for
selecting a camera for specific operational needs was the lens. Since the main purpose was to
monitor behavior, a test mannequin served as the benchmark for configuring these cameras.</p>
      <p>This standard established the first official criteria for CCTV required to perform specific
operational tasks. These criteria were based on detail categories determined by the percentage of
the frame height occupied by a full-length image of a person (Table 1).</p>
      <p>Due to improvements in analog CCTV resolution, BS 8418:1987 has been replaced by BS
8418:2009. The new edition of the standard has significantly expanded the list of operational tasks
to include monitoring, detection, surveillance, recognition, identification, and inspection.</p>
      <p>The monitoring task in this classification is basic and involves only assessing the general
situation at the facility (e.g., the situation at the entrance or in the lobby). It allows you to identify
general changes (e.g., crowds of people) but does not provide the detail needed to distinguish faces
and license plate numbers.</p>
      <p>The task of detection involves the guaranteed identification of subjects or objects of observation
in a controlled area. Observation consists of determining the rough characteristics of the object or
subject of observation, followed by monitoring their movements in order to identify unauthorized
or potentially dangerous actions.</p>
      <p>The goal of recognition is to determine whether a subject or object of observation, or elements
thereof, belong to a specific group of objects of observation.</p>
      <p>Solving the identification task requires the ability to distinguish fairly small, characteristic
details, such as the face of a person entering a facility or the license plate number of a vehicle.
Thus, identification is the process of recognizing a subject or object by its inherent or assigned
identifying features.</p>
      <p>Since, in analog video surveillance systems, resolution is determined by the number of
television lines (TVL), the requirements for solving the tasks set in BS 8418:2009 and 2015 for
security systems remained based on the share of the frame height that should be occupied by the
object of surveillance (Table 2).</p>
      <table-wrap id="tab2">
        <label>Table 2</label>
        <caption>
          <p>Criteria for operational tasks in BS 8418: minimum share of the frame height occupied by the object of surveillance, %</p>
        </caption>
        <table>
          <thead>
            <tr>
              <th>No</th>
              <th>Operational task</th>
              <th>Tasks and opportunities</th>
              <th>BS 8418:2009, %</th>
              <th>BS 8418:2015, %</th>
            </tr>
          </thead>
          <tbody>
            <tr><td>1</td><td>Monitoring</td><td>Monitoring and controlling the crowd</td><td>5</td><td>5</td></tr>
            <tr><td>2</td><td>Detection</td><td>Guaranteed detection of a person in the frame</td><td>10</td><td>10</td></tr>
            <tr><td>3</td><td>Observation</td><td>Determining a person’s distinctive features, such as clothing</td><td>25</td><td>30</td></tr>
            <tr><td>4</td><td>Recognition</td><td>Recognition of people familiar to the operator</td><td>50</td><td>60</td></tr>
            <tr><td>5</td><td>Identification</td><td>Quality sufficient for human identification</td><td>100</td><td>120</td></tr>
            <tr><td>6</td><td>Inspection</td><td>100 % identification capability, which eliminates doubt</td><td>400</td><td>400</td></tr>
          </tbody>
        </table>
      </table-wrap>
      <sec id="sec-4-6">
        <title>From frame-height criteria to pixel density</title>
        <p>In turn, the Irish I.S. 199:1987, the Australian and New Zealand AS 4806.2:2006, and other CCTV
standards of that time also used similar criteria for various operational tasks.</p>
        <p>In 2013, the European Committee for Electrotechnical Standardization (CENELEC), seeking to
standardize the design of CCTV systems in the European Union, issued standard EN 50132-7 “Video
surveillance systems for security television”.</p>
        <p>The EN 50132-7 standard defined the criteria for selecting cameras and lenses, methods for
assessing scenes and lighting, as well as requirements for the number and placement of cameras. At
the same time, the criteria for solving operational tasks (detection, recognition, identification) fully
corresponded to the categories established in the British standard BS 8418:2009.</p>
        <p>The main distinguishing feature of the European standard was the introduction of an alternative
key parameter – “pixel density” per unit width of the object being observed. The introduction of this
parameter was prompted by the emergence on the market of new types of cameras that functioned
differently from traditional analog cameras. The relevance of these pixel-density-based criteria
increased with the spread of megapixel digital CCTV systems and persisted throughout the
transition period, when digital systems coexisted with analog ones, until the former began to dominate.</p>
        <p>With the dominance of IP and HD cameras on the market, the old methods of determining
operational requirements based on the percentage ratio of a person’s height to the frame have
become obsolete. Modern digital systems have moved from measuring resolution in television lines
(TVL) to pixels. Requirements are now specified as the number of image pixels per 1 meter of the
object at a given viewing distance.</p>
        <p>Thus, the criteria that existed in global standards for analog systems were gradually improved
until CCTV completely transitioned to new digital systems, where the key metric became the
“pixel” rather than the “TV line”.</p>
        <p>Until recently, the operational tasks assigned to CCTV were described by relevant international
standards (Table 3). The requirements for monitoring and detection are the most unified among
these standards, while the criteria for performing more complex operational tasks – identification
and recognition – vary significantly.</p>
      </sec>
      <sec id="sec-4-13">
        <title>Current operational criteria and related research</title>
        <p>
          Changes in the global political situation, a significant increase in terrorist activity, and the
development of artificial intelligence technologies have led to a review of operational tasks and,
accordingly, to the emergence of IEC/EN 62676-4:2024 [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ].
        </p>
        <p>The characteristic features of the IEC/EN 62676-4:2024 operational standard are presented in
Table 4.</p>
        <p>The data presented make it clear that the types of operational tasks have changed and,
accordingly, so have the recommendations for CCTV design.</p>
        <p>Seven levels of operational task are distinguished, with pixel-density criteria of 20, 40, 80, 125,
250, 500 and 1500 px/m. The more demanding levels characterize the type of person, gait and
behavior, as well as the type and category of vehicle; allow familiar faces to be confirmed, actions
to be tracked and vehicle license plates to be recognized; and provide identification of people and
recognition of vehicles by model and year of manufacture, with license plates clearly legible.</p>
        <p>The Overview task should enable the operator to identify an object that has just appeared
among other elements of the image on the monitor. Based on the definition of the Overview
concept, this task is solved taking into account the actual lighting conditions and is evaluated by
the degree of isolation of the control object from the general background of the scene.</p>
        <p>
          The influence of technical parameters of video cameras on the image quality of an object was
studied by L. Wastupranata [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ], H. Gururaj [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ], M. Shumeiko [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ], J. Vijaya [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ], Vlado Damjanovski
[
          <xref ref-type="bibr" rid="ref6">6</xref>
          ], John Bigelow [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], and others.
        </p>
        <p>
          Modeling an image of a group of people [
          <xref ref-type="bibr" rid="ref8 ref9">8, 9</xref>
          ] using specialized software for the task of
“detection”, obtained by cameras with different resolutions, indicated that there was no
point in installing a camera of arbitrarily high resolution for a simple task, since this would not
provide additional operational value but would lead to unnecessary costs. In other words, there is a
certain limit to the resolution that is sufficient for detecting an object, and exceeding it for this
particular task is impractical.
        </p>
        <p>The author notes that there is a limit to the practicality of choosing camera resolution for each
size of object on the screen. Thus, using a 4 MP camera does not offer any advantages in terms of
“detection” compared to a 2 MP camera.</p>
        <p>A study of the impact of camera resolution on subject recognition showed that even at the same
focal length (the size of a person on the screen does not change visually), the quality and detail of
images vary significantly depending on the resolution. In particular, a camera with a resolution of 2
megapixels provides sufficient image quality for successful detailed “recognition” of a subject,
while images from a 4-megapixel camera can be used to solve the task of “identification”
(establishing identity). Thus, if the subject occupies the same space on the screen, increasing the
camera’s resolution increases the level of the operational task that can be solved.</p>
        <p>When solving Perceive tasks, you first need to understand the conditions in which the objects of
observation are located. If the objects are static (immobile), one approach applies: to solve such a
task, you need to determine the focal length of the lens and the resolution of the camera. If the
objects are in motion, the number of required parameters increases: you also need to determine the
exposure time of the electronic shutter. The exposure time must be set so as to obtain a clear,
non-blurred image of the moving object in each frame.</p>
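        <p>The exposure-time requirement just described can be made concrete with a short sketch. This is an illustration only: the one-pixel blur tolerance and all names are our assumptions, not values from any cited standard.</p>
        <preformat>
```python
# Longest electronic-shutter exposure that keeps motion blur within an
# assumed one-pixel tolerance. All names are illustrative.

def max_exposure_s(pixel_density_px_per_m, speed_m_per_s, blur_px=1.0):
    """Longest exposure (s) for which the object travels at most blur_px pixels.

    pixel_density_px_per_m: pixels per metre of scene at the object plane.
    speed_m_per_s: object speed across the field of view.
    """
    allowed_travel_m = blur_px / pixel_density_px_per_m  # metres per exposure
    return allowed_travel_m / speed_m_per_s

# A pedestrian (1.5 m/s) imaged at 250 px/m tolerates at most 1/375 s.
t_max = max_exposure_s(250.0, 1.5)
```
        </preformat>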
        <p>
          The issue of transitioning from traditional image quality measurement in television lines (TVL)
to pixels, as well as the concept of changing spatial resolution density, have been highlighted and
analyzed in scientific research, in particular in works [
          <xref ref-type="bibr" rid="ref10 ref6 ref7">10, 6, 7</xref>
          ] and [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ].
        </p>
        <p>The theory of spatial resolution density (or pixel density) variation is a key tool for designing
modern CCTV systems. Its essence lies in determining the distance from the camera to the object at
which the number of pixels per 1 meter of the width of the monitored area (linear field of view)
reaches a predetermined value.</p>
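        <p>The design procedure described above can be sketched in a few lines of Python. The pinhole relation for the linear field of view and all function names are our assumptions for illustration, not part of any cited standard.</p>
        <preformat>
```python
# Pixel density falls off as 1/L, so for a given target density there is a
# greatest usable distance. Parameter names are illustrative.

def pixels_per_metre(n_w, f_mm, sensor_width_mm, distance_m):
    """Pixels per metre of scene width at the object plane."""
    fov_width_m = distance_m * sensor_width_mm / f_mm  # linear field of view
    return n_w / fov_width_m

def max_distance_m(n_w, f_mm, sensor_width_mm, target_px_per_m):
    """Greatest distance at which the target pixel density is still met."""
    return n_w * f_mm / (sensor_width_mm * target_px_per_m)

# 2581 px across a 6.4 mm wide sensor behind a 12 mm lens: a 250 px/m
# requirement is met out to about 19.4 m.
d_max = max_distance_m(2581, 12.0, 6.4, 250.0)
```
        </preformat>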
        <p>When solving any operational tasks, it is necessary that the resolution of the monitor screen is
not worse than the resolution of the video camera. Changes in spatial resolution values obtained
experimentally for cameras with HD image quality on the monitor are illustrated in Figure 1.</p>
        <p>The range of distances at which an object must be recognized is quite extensive and lies
between the “identification” and “detection” zones. In this regard, it has been proposed to divide the
“recognition” distances into three sections:
1. Recognition – the object is located closer to the edge of the “detection” zone (the object is
small on the monitor and has poor detail).
2. Medium recognition – the size and quality of the object’s display allow its main details to
be described.
3. High recognition – the object is located closer to the “identification” zone (allows for a clear
description of the object).</p>
        <p>Thus, analysis of information sources has shown that spatial pixel density is used as a modern
criterion for solving various types of operational CCTV tasks, and their values vary in existing
standards. The “pixel density” parameter of the camera takes into account the characteristics of the
matrix, the lens, and the distance to the object of observation.</p>
        <p>It has been established that increasing the resolution of video cameras leads to an increase in
pixel concentration per 1 meter of the linear field of the observation scene, which allows for better
quality display of observation objects at a significantly greater distance from the camera.</p>
        <p>Based on this, it can be assumed that further development of technologies and requirements for
CCTV systems may lead to the emergence of matrices with such an extremely high number of
pixels that it will change the current classification of CCTV operational tasks. As a result, another
question will arise about the further use of existing recommendations for evaluating video
surveillance systems.</p>
        <p>
          An analysis of existing studies [
          <xref ref-type="bibr" rid="ref12 ref13 ref3">3, 12, 13</xref>
          ] on the impact of technical parameters of video
cameras on the image quality of an object when solving various types of operational CCTV tasks
has shown that the issue of excessive design needs to be addressed.
        </p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Method of implementation</title>
      <p>To solve the problem, it was decided to use the theory of spatial resolution change as a basis, which
was implemented analytically and through computer modeling in specialized software IP Video
System Design Tool 2025.</p>
      <p>If the linear field of view H<sub>pv</sub> and the number of pixels n<sub>w</sub> across the width of the matrix are
known, then the number of pixels n<sub>pw</sub> per unit width of this field of view can be determined as</p>
      <disp-formula id="eq1">
        <tex-math><![CDATA[n_{pw} = \frac{n_w}{H_{pv}} = \frac{n_w f}{L h_m}, \qquad (1)]]></tex-math>
      </disp-formula>
      <p>where L is the distance from the camera to the object being observed, m; h<sub>m</sub> is the width of
the video camera matrix, mm; f is the focal length of the lens, mm.</p>
      <p>To determine the number of pixels across the width of the matrix, its aspect ratio must be taken
into account. If n<sub>m</sub> is the total number of pixels in the matrix and the aspect ratio is 4:3 (width to
height), then the number of pixels across the width is determined as</p>
      <disp-formula id="eq2">
        <tex-math><![CDATA[n_w = \sqrt{n_m / 0.75}. \qquad (2)]]></tex-math>
      </disp-formula>
      <p>Hence, substituting (2) into (1), the change in spatial resolution can be defined as</p>
      <disp-formula id="eq3">
        <tex-math><![CDATA[n_{sv} = \sqrt{n_m / 0.75} \cdot \frac{f}{L h_m}. \qquad (3)]]></tex-math>
      </disp-formula>
      <p>The focal length, depending on the selected type of task and the distance to the object of
observation, is determined as</p>
      <disp-formula id="eq4">
        <tex-math><![CDATA[f = \frac{L h_m n_{sv}}{\sqrt{n_m / 0.75}}. \qquad (4)]]></tex-math>
      </disp-formula>
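      <p>Relations (1)–(4) can be cross-checked with a short Python sketch. The helper names are ours; the 6.4 mm sensor width corresponds to the 1/2-inch format used in the study.</p>
      <preformat>
```python
import math

# Sketch of relations (1)-(4): pixel count across a 4:3 matrix, spatial
# resolution at distance L, and the focal length for a target density.
# Symbols follow the text: n_m total pixels, h_m sensor width (mm), f (mm).

def n_w(n_m):
    """Eq. (2): pixels across the width of a 4:3 matrix."""
    return math.sqrt(n_m / 0.75)

def n_sv(n_m, f_mm, h_m_mm, L_m):
    """Eq. (3): spatial resolution, pixels per metre of the field of view."""
    return n_w(n_m) * f_mm / (L_m * h_m_mm)

def focal_length(n_m, h_m_mm, L_m, n_sv_target):
    """Eq. (4): focal length giving n_sv_target px/m at distance L_m."""
    return L_m * h_m_mm * n_sv_target / n_w(n_m)

# 5 Mpx matrix, 12 mm lens, 6.4 mm wide sensor, object at 5 m:
density = n_sv(5_000_000, 12.0, 6.4, 5.0)   # about 968 px/m
```
      </preformat>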
      <p>All studies were based on modeling changes in spatial resolution for a video camera with a 12
mm lens and a 1/2-inch sensor. The following range of parameters was selected for analysis: the
sensor resolution was varied from 5 to 30 megapixels in increments of 5 megapixels, while the
distance to the object ranged from 5 to 100 meters in increments of 5 meters.</p>
      <p>The results of the calculations of spatial resolution changes are summarized in Table 5.</p>
      <table-wrap id="tab5">
        <label>Table 5</label>
        <caption>
          <p>Calculated spatial resolution n<sub>sv</sub>, px/m, for a camera with a 12 mm lens and a 1/2-inch (6.4 mm wide) sensor</p>
        </caption>
        <table>
          <thead>
            <tr>
              <th rowspan="2">Distance to the object L, m</th>
              <th colspan="6">Matrix resolution n<sub>m</sub>, Mpx (number of pixels n<sub>w</sub>, pcs)</th>
            </tr>
            <tr>
              <th>5 (2581)</th><th>10 (3651)</th><th>15 (4472)</th><th>20 (5163)</th><th>25 (5773)</th><th>30 (6324)</th>
            </tr>
          </thead>
          <tbody>
            <tr><td>5</td><td>968</td><td>1369</td><td>1677</td><td>1936</td><td>2165</td><td>2371</td></tr>
            <tr><td>10</td><td>484</td><td>684</td><td>838</td><td>968</td><td>1082</td><td>1185</td></tr>
            <tr><td>15</td><td>322</td><td>456</td><td>559</td><td>645</td><td>721</td><td>790</td></tr>
            <tr><td>20</td><td>242</td><td>342</td><td>419</td><td>484</td><td>541</td><td>592</td></tr>
            <tr><td>25</td><td>193</td><td>273</td><td>335</td><td>387</td><td>433</td><td>474</td></tr>
            <tr><td>30</td><td>161</td><td>228</td><td>279</td><td>322</td><td>360</td><td>395</td></tr>
            <tr><td>35</td><td>138</td><td>195</td><td>239</td><td>276</td><td>309</td><td>338</td></tr>
            <tr><td>40</td><td>121</td><td>171</td><td>209</td><td>242</td><td>270</td><td>296</td></tr>
            <tr><td>45</td><td>107</td><td>152</td><td>186</td><td>215</td><td>240</td><td>263</td></tr>
            <tr><td>50</td><td>96</td><td>136</td><td>167</td><td>193</td><td>216</td><td>237</td></tr>
            <tr><td>55</td><td>88</td><td>124</td><td>152</td><td>176</td><td>196</td><td>215</td></tr>
            <tr><td>60</td><td>80</td><td>114</td><td>139</td><td>161</td><td>180</td><td>197</td></tr>
            <tr><td>65</td><td>74</td><td>105</td><td>129</td><td>148</td><td>166</td><td>182</td></tr>
            <tr><td>70</td><td>69</td><td>97</td><td>119</td><td>138</td><td>154</td><td>169</td></tr>
            <tr><td>75</td><td>64</td><td>91</td><td>111</td><td>129</td><td>144</td><td>158</td></tr>
            <tr><td>80</td><td>60</td><td>85</td><td>104</td><td>121</td><td>135</td><td>148</td></tr>
            <tr><td>85</td><td>56</td><td>80</td><td>98</td><td>113</td><td>127</td><td>139</td></tr>
            <tr><td>90</td><td>53</td><td>76</td><td>93</td><td>107</td><td>120</td><td>131</td></tr>
            <tr><td>95</td><td>50</td><td>72</td><td>88</td><td>101</td><td>113</td><td>124</td></tr>
            <tr><td>100</td><td>48</td><td>68</td><td>83</td><td>96</td><td>108</td><td>118</td></tr>
          </tbody>
        </table>
      </table-wrap>
      <sec id="sec-5-1">
        <title>Modeling of spatial pixel density changes</title>
        <p>Modeling of changes in spatial pixel density in IP Video System Design Tool 2025, carried out to
confirm the calculated data, used the following parameters: focal length of the
video camera lens 12 mm; distance from the video camera to the human figure model from 20 to 60
m in steps of 5 m; video camera matrix form factor 1/2"; matrix format 4:3; video camera
resolution 5 Mpx, 10 Mpx and 19 Mpx. Since there is no 20 Mpx video camera in the program database, a
video camera with a resolution of 19 Mpx was used as the closest to the required value.</p>
        <p>The human figure model was placed at a distance of 20 m from the video surveillance camera
and, after recording the spatial pixel density readings, was moved 5 m further. The spatial pixel
density values were obtained together with the formed image of the human model. The simulation
results in the selected range are presented in Figure 2.</p>
        <p>The corresponding graphical dependencies are presented in Figure 3.</p>
        <p>Having determined the type of operational task to be solved for a specific video surveillance
area, the designer can use graphical dependencies (Fig. 3) to select the required video camera
resolution. This will help avoid excessive design.</p>
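        <p>The calculated sweep behind Table 5 can be reproduced with a brief script; the truncation to whole px/m and the 6.4 mm sensor width are our assumptions, chosen to match the tabulated values.</p>
        <preformat>
```python
import math

# Reproduce the calculation sweep: resolutions 5-30 Mpx in 5 Mpx steps,
# distances 5-100 m in 5 m steps, 12 mm lens, 1/2" (6.4 mm wide) 4:3 sensor.

F_MM, H_M_MM = 12.0, 6.4

def spatial_resolution(n_m, L_m):
    n_w = math.sqrt(n_m / 0.75)         # pixels across the matrix width
    return n_w * F_MM / (L_m * H_M_MM)  # px per metre at distance L_m

table = {
    mpx: [int(spatial_resolution(mpx * 1_000_000, L)) for L in range(5, 105, 5)]
    for mpx in range(5, 35, 5)
}
# e.g. table[5][0] -> 968 (5 Mpx at 5 m), table[30][19] -> 118 (30 Mpx at 100 m)
```
        </preformat>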
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusions</title>
      <p>Comparative analysis of the data obtained by calculation and modeling revealed a discrepancy of
from 5 to 2 pix/m, depending on the distance from the camera. In general, this indicates that the
total deviation of the obtained results is no more than 3.</p>
      <p>A person, given his physiological capabilities, can recognize a familiar person only at a limited
distance, mainly 30-35 m. Thus, the spatial density values recommended in the
existing CCTV standards for a certain type of operational task can be fully provided at different
distances by video cameras available on the market. The presented research results will
help designers take a reasoned approach to the choice of video cameras, which will reduce the risk of
overdesign. A 25 Mpx video camera already overlaps the capabilities of the human visual
apparatus. Therefore, once 25 Mpx cameras are freely available on the market, it will be possible to
move away from the currently applicable criteria for operational tasks.</p>
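      <p>The claim about 25 Mpx cameras can be checked against relation (3). The sketch below assumes a 250 px/m identification criterion and the 12 mm lens with a 1/2-inch (6.4 mm wide) sensor used earlier; helper names are illustrative.</p>
      <preformat>
```python
import math

# Rough check of the conclusion that a 25 Mpx camera covers the distance at
# which a person can recognize a familiar face (about 30-35 m). The 250 px/m
# identification threshold is an assumption for this illustration.

F_MM, H_M_MM = 12.0, 6.4

def density_at(n_m, L_m):
    """Spatial resolution, px/m, of an n_m-pixel 4:3 matrix at distance L_m."""
    return math.sqrt(n_m / 0.75) * F_MM / (L_m * H_M_MM)

# At 35 m a 25 Mpx camera still delivers about 309 px/m, above the assumed
# 250 px/m identification threshold, so it spans the human recognition limit.
d35 = density_at(25_000_000, 35.0)
```
      </preformat>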
      <p>It is advisable to rely on the recommended criteria for operational tasks of the CCTV standard
IEC/EN 62676-4:2024 when designing video surveillance systems with integrated video analytics
and artificial intelligence technologies that perform a key preventive function. These systems
require high scene resolution for analysis and are capable of detecting abnormal behavior in real
time that may indicate the preparation of a terrorist act. This early detection capability allows law
enforcement agencies to gain situational awareness and respond quickly to potential threats before
they escalate into a real incident. Thus, in critical sectors, it will be possible to control their
perimeter and perform video analytics of intrusions into the area, monitor safety by checking for
the absence of helmets, protective eyewear, etc. At transport hubs, it will be possible to ensure
passenger safety, control flows, and counter terrorism by detecting abandoned objects and
aggressive behavior, analyzing crowds, and detecting queues. The implementation of these criteria
in smart city monitoring systems will improve quality of life, public safety, and the efficiency of
city services. This applies to the detection of unauthorized dumping of garbage or the start of a fire,
improper parking, the detection of outbreaks of panic or mass riots, etc.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgements</title>
      <p>We sincerely thank the organizers of the 1st International Scientific Workshop “Applied
Information Technologies and Artificial Intelligence Systems” for the invitation to participate. We
have reviewed the topics and program of the Workshop with great interest and consider it highly
relevant to the development of applied information technologies and artificial intelligence systems.
We are confident that the Workshop will be an excellent platform for exchanging knowledge and
experience and for establishing new scientific contacts.</p>
    </sec>
    <sec id="sec-8">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used Gemini and Grammarly to check grammar
and spelling and as a smart search engine to find related works based on the context of the
conversation. After using these tools/services, the authors reviewed and edited the content as
needed and take full responsibility for the publication’s content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <article-title>IEC 62676-4, Video surveillance systems for use in security applications – Part 4: Application guidelines</article-title>
          ,
          <year>2025</year>
          . URL: https://webstore.iec.ch/en/publication/83425.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>L.</given-names>
            <surname>Wastupranata</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Deep learning for abnormal human behavior detection in surveillance videos - a survey</article-title>
          ,
          <source>Electronics</source>
          <volume>13</volume>
          (
          <year>2024</year>
          ). doi:10.3390/electronics13132579.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>H. L.</given-names>
            <surname>Gururaj</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Priya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Shreyas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Flammini</surname>
          </string-name>
          ,
          <article-title>A comprehensive review of face recognition techniques, trends and challenges</article-title>
          ,
          <source>IEEE Access 12</source>
          (
          <year>2024</year>
          )
          <fpage>107903</fpage>
          -
          <lpage>107926</lpage>
          . doi:10.1109/ACCESS.2024.3424933.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Shumeiko</surname>
          </string-name>
          ,
          <article-title>Identification, recognition and detection of people according to the European standard EN 50132-7</article-title>
          ,
          <source>Security Systems</source>
          ,
          <year>2015</year>
          . URL: http://library.tsu.tula.ru/files/elect_periodical/system_securiti3.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>J.</given-names>
            <surname>Vijaya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chandrakar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Shrivastava</surname>
          </string-name>
          ,
          <article-title>A comprehensive review concerning the involvement of artificial intelligence techniques in face recognition system</article-title>
          , in: S. Tikadar,
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Puga</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Rodrigues</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Shaw</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hassan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. K.</given-names>
            <surname>Patra</surname>
          </string-name>
          (Eds.),
          <article-title>Practical Applications of Smart Human-Computer Interaction</article-title>
          , IGI Global, Hershey, PA,
          <year>2025</year>
          , pp.
          <fpage>69</fpage>
          -
          <lpage>110</lpage>
          . doi:10.4018/9798337363851.ch003.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>V.</given-names>
            <surname>Damjanovski</surname>
          </string-name>
          ,
          <article-title>Case study: evaluating face identification of IP cameras with the Vidi Labs test chart v.5.2</article-title>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J.</given-names>
            <surname>Bigelow</surname>
          </string-name>
          ,
          <article-title>About pixel densities and what they mean</article-title>
          ,
          <year>2017</year>
          . URL: https://www.securitysolutionsmedia.com/2017/08/01/aboutpixeldensitiesandwhattheymean/.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Myagmar-Ochir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <article-title>A survey of video surveillance systems in smart city</article-title>
          ,
          <source>Electronics</source>
          <volume>12</volume>
          (
          <year>2023</year>
          )
          <fpage>3567</fpage>
          . doi:10.3390/electronics12173567.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>J.</given-names>
            <surname>Indhumathi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Balasubramanian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Balasaigayathri</surname>
          </string-name>
          ,
          <article-title>Real-time video-based human suspicious activity recognition with transfer learning for deep learning</article-title>
          ,
          <source>Int. J. Image, Graphics Signal Process</source>
          .
          <volume>15</volume>
          (
          <year>2023</year>
          )
          <fpage>47</fpage>
          -
          <lpage>62</lpage>
          . doi:10.5815/ijigsp.2023.01.05.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>V.</given-names>
            <surname>Tsakanikas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Dagiuklas</surname>
          </string-name>
          ,
          <article-title>Video surveillance systems - current status and future trends</article-title>
          ,
          <source>Comput. Electr. Eng</source>
          .
          <volume>70</volume>
          (
          <year>2017</year>
          ). doi:10.1016/j.compeleceng.2017.11.011.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>D.</given-names>
            <surname>Uhryn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Karachevtsev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Terletskyi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kaidyk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Talakh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Ilin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Bogachuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kaduk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Suranchiyeva</surname>
          </string-name>
          , et al.,
          <article-title>Modern programming technologies in the tasks of identification and classification of military aircraft using machine learning algorithms</article-title>
          ,
          <source>in: Proceedings of the Photonics Applications in Astronomy, Communications, Industry, and High Energy Physics Experiments 2024</source>
          , Proceedings of SPIE, Bellingham, USA,
          <year>2024</year>
          . doi:10.1117/12.3054877.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>B. S.</given-names>
            <surname>Reddy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Jha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Naware</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Vattem</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hussain</surname>
          </string-name>
          ,
          <article-title>Design and implementation of hybrid low-power wide area network architecture for IoT applications</article-title>
          ,
          <source>J. Ambient Intell. Smart Environ</source>
          .
          <volume>16</volume>
          (
          <year>2023</year>
          )
          <fpage>1</fpage>
          -
          <lpage>13</lpage>
          . doi:10.3233/AIS230146.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A.</given-names>
            <surname>Muhammed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Medvedev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Gonçalves</surname>
          </string-name>
          ,
          <article-title>VoidFace: a privacy-preserving multi-network face recognition with enhanced security</article-title>
          ,
          <source>arXiv preprint arXiv:2508.07960</source>
          (
          <year>2025</year>
          ). doi:10.48550/arXiv.2508.07960.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>