<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>People Tracking in a Smart Campus context using Multiple Cameras</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Henrique Matos</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Henrique Santos</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>ALGORITMI R&amp;D Centre, University of Minho</institution>
          ,
          <addr-line>Guimarães</addr-line>
          ,
          <country country="PT">Portugal</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <abstract>
        <p>Multiple-object tracking has been a relevant topic for different applications, such as surveillance, mobility, and ambient intelligence. It is particularly challenging in open spaces, like Smart Cities, which demand multi-camera solutions and raise issues such as re-identification. In this paper, we describe a framework aiming to provide multi-tracking of people throughout a university campus as part of a larger project (Lab4USpaces) to develop a Smart Campus initiative. Several object detection models and real-time tracking open-source algorithms were compared. The project contemplates a set of low-cost video cameras covering most of the campus, with or without overlapping views. After researching different alternatives, the proposed framework uses the YOLOv7-tiny model for object detection, BoT-SORT for multiple object tracking, and Deep Person ReID for re-identification. We also faced challenges concerning the privacy and security of campus users. The multi-tracking system complies with current regulations since no personal identification is ever performed, and no images are stored for longer than necessary for object detection and re-identification. Besides describing the first prototype, this paper discusses some validation tests and describes some potential uses.</p>
      </abstract>
      <kwd-group>
        <kwd>Smart Campus</kwd>
        <kwd>Object Detection</kwd>
        <kwd>Multiple Object Tracking</kwd>
        <kwd>Re-Identification</kwd>
        <kwd>People Tracking</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        In the context of an ongoing research project named Lab4U&amp;Spaces – which aims to explore
innovative technologies to raise the quality of life on the university campus – the work described
here focuses on the management and mobility of campus users. Using this platform,
students can, for example, avoid a place with a heavier flow of users when scheduling a
joint activity. Campus managers, in turn, can quickly locate areas of significant
influx, better understand these dynamics, and prepare appropriate responses to avoid them, if
recommended. The need to prevent excessive exposure to UV rays caused by users' carelessness,
or to reduce contact and limit viral dissemination (as happened in the recent pandemic
caused by COVID-19), are other examples of important campus management objectives
that would benefit from this platform. Using video-based techniques for this purpose, indoors
and outdoors, is not usually identified as a possible solution for economic reasons [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Even
so, the rise in processing power and the widespread availability of low-cost video-capable devices
make it feasible. Security and privacy are the main concerns in this type of environment, and
regulatory documents like the GDPR, in particular its requirements for privacy by design and
by default, must be attended to, imposing specific constraints that limit the solution space
concerning detection and identification. The paper is organised as follows: Section 2 compares
related projects and methods, and Section 3 presents the proposed solution and a comparison of
object detection, tracking, and re-identification techniques. Section 4 describes the testing and
validation methods, and the final section presents conclusions and possible project evolution.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Related projects</title>
      <p>
        Most of the techniques used in this paper are more frequently detailed in video surveillance
or Computer Vision applications. Concerning university campuses and the project's context,
those techniques should be applied and framed by specific requirements. When researching
related projects, we searched within both domains, but emphasised applications rather than
the development of the algorithms themselves. In [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], the authors present an IoT-based system
designed to track vehicles and pedestrians on a Smart Campus. The system combines various
sensors, including GPS, RFID, and LiDAR, with cameras to collect tracking data. This data is
then processed to create real-time location information, which is communicated and stored
in a central database. The system has potential applications in traffic management, safety
monitoring, and environmental monitoring, and the authors argue that it is both reliable and
cost-effective. The work described in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] presents a real-time algorithm that can track
multiple targets using multiple cameras. The algorithm employs a Kalman filter and a
spatial-temporal model. The authors demonstrate its applicability in several surveillance applications,
including security, transportation, and sports.
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], the authors proposed a smart city and traffic analysis system similar to the one planned
for our project. They used Cascade R-CNN with ResNet-101 for vehicle detection, TPM for
multiple object tracking in a single camera, and HRNet and Res2Net for vehicle re-identification.
The system was effective but has performance limitations, indicating the need for improvements.
Moreover, those limitations negatively impact people tracking. Another similar solution is
proposed in [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. However, there is one significant distinction, since it was developed for a wide
range of applications. The authors introduce two techniques: DeepCC for Multi-Target
Multi-Camera Tracking (MTMCT), and Adaptive Weighted Triplet Loss (AWTL) for re-identification.
The results are auspicious, but since the publication of this work, new technologies have
emerged that allow for optimised techniques within this research context. They are referred
to in the next section, along with the description of the proposed solution.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Proposed solution</title>
      <p>The general system architecture of the Lab4USpaces platform is divided into four layers, as
shown in Figure 1. The physical layer includes the tracking component and is located where
all sensors and actuators are placed. IP cameras capture video and send it to an edge server for
processing, including configuration management and object geolocation. The network layer
enables wireless communication between the sensor subsystems and the middleware. The
integration layer includes an Identity and Access Manager module for device authentication, a
Message Broker for organising communication, a Temporal Database for data storage, and the
Home Assistant platform as the Hub. The application layer uses the collected data for analysis,
visualisation, and decision-support applications. The data tracking subsystem will be explained
in detail later.</p>
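      <p>As an illustration of the kind of record the physical layer could hand to the integration layer, the minimal sketch below defines a tracking event and serialises it to JSON before it is forwarded to the Message Broker and stored in the Temporal Database. The field names and values are assumptions made for illustration only and are not the platform's actual schema.</p>
      <preformat>
# Hypothetical tracking-event record, as the edge server might publish it to the Message Broker.
# Field names and values are illustrative assumptions, not the platform's actual schema.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class TrackingEvent:
    camera_id: str     # which camera produced the observation
    track_id: int      # identity assigned by the tracking module
    x: float           # position in campus coordinates
    y: float
    timestamp: float   # Unix time of the observation

event = TrackingEvent(camera_id="cam1", track_id=22, x=41.2, y=133.7, timestamp=time.time())
payload = json.dumps(asdict(event))
print(payload)  # e.g. forwarded to the broker and stored in the Temporal Database
      </preformat>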
      <sec id="sec-3-1">
        <title>3.1. Object Detection</title>
        <p>
          The object detection module must correctly identify all people in crowded scenarios
using low-cost video cameras. Such scenarios pose challenges like occlusion and clustering,
hindering precision and recognition [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. Open-source YOLO-based techniques were compared
for this purpose using a machine with an Intel Core i7-8550U @1.80GHz CPU and 8GB RAM.
All models were trained on the COCO dataset, with 91 object types and 2.5 million labelled
instances in 328k images. Table 1 shows the mean average precision (mAP) and average processing
time with and without GPU – the results obtained with YOLOv3 and some YOLOv5 variants
were suppressed since they are not influential. YOLOv7 was recently introduced and reportedly
outperforms other detectors in both speed and accuracy [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], which aligns with our results.
However, for higher precision with low processing time, YOLOR is also an alternative. When
using a GPU, YOLOR or YOLOv7 would be good choices. Overall, YOLOv7 is the best choice.
        </p>
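        <p>To illustrate how the chosen detector can be used in this module, the minimal sketch below runs person detection on a single frame with ONNX Runtime, assuming a yolov7-tiny model exported end-to-end (non-maximum suppression fused into the graph, so each output row is [batch, x0, y0, x1, y1, class, score]). The file names, the 640x640 input size, and the 0.4 confidence threshold are assumptions for illustration, not settings of the deployed prototype.</p>
        <preformat>
# Hypothetical detection sketch: yolov7-tiny.onnx is assumed to be an end-to-end ONNX export with NMS.
import cv2
import numpy as np
import onnxruntime as ort

PERSON_CLASS_ID = 0  # "person" in the COCO label set

session = ort.InferenceSession("yolov7-tiny.onnx")
input_name = session.get_inputs()[0].name

frame = cv2.imread("frame.jpg")                      # BGR frame from an IP camera
resized = cv2.resize(frame, (640, 640))              # assumed network input size
blob = resized[:, :, ::-1].transpose(2, 0, 1)        # BGR to RGB, HWC to CHW
blob = blob[np.newaxis].astype(np.float32) / 255.0   # add batch dimension, scale to [0, 1]

detections = session.run(None, {input_name: blob})[0]

# Keep only confident "person" boxes; coordinates refer to the resized 640x640 frame.
people = []
for _, x0, y0, x1, y1, cls_id, score in detections:
    if int(cls_id) == PERSON_CLASS_ID and score > 0.4:
        people.append((float(x0), float(y0), float(x1), float(y1), float(score)))

print(len(people), "people detected")
        </preformat>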
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Single Camera Multiple Object Tracking</title>
        <p>There are two types of trackers: offline (previous and subsequent frames are available to
create more accurate predictions) and online (working on the fly). In this project, we need a
real-time online tracking system. Multiple problems can occur, such as occlusions, initialisation
and termination of tracks, people with similar appearances, and interactions between multiple
objects. Occlusion can cause identity switches and fragmentation of trajectories, which should
be avoided in our project. A common way to benchmark object tracking algorithms is to use
the MOTChallenge, a standardised evaluation framework for multiple object tracking (MOT). It
contains two datasets with indoor and outdoor videos.</p>
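        <p>To make the online setting concrete, the minimal sketch below shows a greatly simplified online association step: detections of the current frame are greedily matched to existing tracks by IoU, and unmatched detections start new identities. It illustrates the general principle only and omits the Kalman-filter motion model and appearance cues used by SORT-style trackers such as BoT-SORT; the 0.3 IoU threshold is an illustrative assumption.</p>
        <preformat>
# Simplified online tracker sketch: greedy IoU matching, no motion or appearance model.
from itertools import count

def iou(a, b):
    """Intersection over union of two boxes given as (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

class SimpleTracker:
    def __init__(self, iou_threshold=0.3):
        self.iou_threshold = iou_threshold
        self.tracks = {}            # track_id -> last known box
        self._next_id = count(1)

    def update(self, detections):
        """Assign a track ID to each detected box of the current frame."""
        assigned = {}
        unmatched = dict(self.tracks)
        for box in detections:
            best_id, best_iou = None, self.iou_threshold
            for tid, prev in unmatched.items():
                overlap = iou(box, prev)
                if overlap > best_iou:
                    best_id, best_iou = tid, overlap
            if best_id is None:
                best_id = next(self._next_id)   # unmatched detection starts a new identity
            else:
                unmatched.pop(best_id)
            assigned[best_id] = box
        self.tracks = assigned
        return assigned
        </preformat>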
        <p>
          We evaluated four MOT algorithms: SORT [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], DeepSORT [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], StrongSORT [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ], and
BoT-SORT [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. Table 2 shows the results obtained for the two datasets, using the metrics relevant to
our case: HOTA (Higher Order Tracking Accuracy), IDF1 (ratio of correctly identified detections
over the average number of ground-truth and computed detections), MOTA (Multiple Object
Tracking Accuracy), and processing time per frame. SORT has the lowest processing time, but
its accuracy is too low. Both BoT-SORT and BoT-SORT-ReID have better accuracy, but the
ReID version has a higher processing time, making BoT-SORT the best option.
        </p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Re-identification</title>
        <p>
          This operation involves using query images of the person to be labelled and gallery images
(a dedicated, common storage) that contain previously detected IDs from neighbouring cameras
sharing a common path. There are well-known re-identification algorithms such as
Centroids-ReID [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ] and LUPerson [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. However, they were developed for specific datasets with highly
predictable flows and shapes that do not match our project's needs. The Deep Person ReID
[
          <xref ref-type="bibr" rid="ref14">14</xref>
          ] algorithm is similar and was chosen since it was trained and used cross-domain with
datasets similar to what we expect to have. Re-identification is performed using line intersection
zones that delineate boundaries between camera views. When a subject crosses these lines, the
re-identification operation is triggered, either querying a neighbouring camera or storing a
group of images with the corresponding ID. The querying operation returns a similarity value
between the gallery images and the input one, along with the associated IDs. If the value obtained
is acceptable, the ID is assumed.
        </p>
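        <p>As an illustration of the querying step only, the minimal sketch below computes an appearance embedding for a query crop with the Deep Person ReID (torchreid) feature extractor and compares it against an in-memory gallery by cosine similarity. The backbone choice (osnet_x0_25), the 0.7 acceptance threshold, the file names, and the gallery layout are assumptions for illustration, not the project's actual configuration.</p>
        <preformat>
# Hypothetical re-identification query sketch using the torchreid FeatureExtractor.
# Model choice, threshold, file names, and gallery structure are illustrative assumptions.
import torch
from torchreid.utils import FeatureExtractor

extractor = FeatureExtractor(model_name="osnet_x0_25", device="cpu")

# Gallery: previously stored crops from neighbouring cameras, keyed by their assigned ID.
gallery = {"cam2_1": ["cam2_1_a.jpg", "cam2_1_b.jpg"]}

def query_identity(crop_path, threshold=0.7):
    """Return the gallery ID whose images are most similar to the query crop, or None."""
    query_feat = extractor([crop_path])            # shape: (1, feature_dim)
    best_id, best_sim = None, threshold
    for track_id, image_paths in gallery.items():
        gallery_feats = extractor(image_paths)     # shape: (n, feature_dim)
        sim = torch.nn.functional.cosine_similarity(query_feat, gallery_feats).max().item()
        if sim > best_sim:
            best_id, best_sim = track_id, sim
    return best_id

# Example: a crop captured by camera 1 after a line-intersection crossing.
print(query_identity("cam1_22_crop.jpg"))
        </preformat>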
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results and Configurations</title>
      <p>This section presents the results of object detection, tracking, and re-identification when applied
to images captured in the Lab4U&amp;Spaces project, using two video cameras, one inside and the
other outside a building, in connected spaces without scene overlapping. We also describe the
configuration settings of the global system and demonstrate the final results on campus.</p>
      <p>Concerning object detection, Figures 3a and 3b display the outcomes obtained from both
cameras. The inside camera covers a wider area with extreme resolution variation due to near
and far objects, with no constraints imposed on the minimum object size. It is noticeable that
some people in the most distant zone, on the right side of the figure, are not detected. Even
so, the selected technique performs better than all other solutions in terms of response
time and computing resources required. The rate of false negatives in Figure 3b is approximately
64% – despite not being optimal, it is acceptable.</p>
      <p>[Figure 3: (a) outside the building; (b) inside the building; (c) outside the building; (d) inside the building.]</p>
      <p>
        Concerning the adopted Single Camera Multiple Object Tracking technique, Figures 3c and
3d illustrate the result of the BoT-SORT algorithm in sequential scenes captured from both
cameras. The average time required for detecting and tracking people was between 142.8 ms
(7 FPS) and 263 ms (3.8 FPS). These values are consistent with the recommended frame rate for
this application class, as suggested in [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] – the authors propose a working point of 6 FPS with
0% accuracy loss and consider it acceptable to reduce the frame rate by 80% (to 1.2 FPS) while
maintaining precision rates above 60%.
      </p>
      <p>Concerning the re-identification operation, Figure 4a shows a person crossing the delimiter
zone line in the outside scene, using the label "cam2_1" (meaning camera 2 and person ID 1);
Figure 4b shows the same person entering the inside camera's view area a few seconds later and
being labelled "cam1_22", meaning it was not yet re-identified; finally, in Figure 4c, it is visible
that the ID was redefined to the one previously assigned by camera 2 (about two seconds after
being initially detected). In the small dataset used, out of four possible re-identifications, three
were correctly performed, suggesting an efficacy of 75% – but, particularly in this case,
more experiments are required to validate this result.</p>
      <p>[Figure 4: (a) subject ID at the outside camera; (b) subject ID at the inside camera; (c) subject ID re-assigned.]</p>
      <p>After completing the main loop shown in Figure 2, the detection and tracking data is stored
in the Hub. The collected data can be utilised in several applications, such as the one depicted
in Figure 5, which shows, in real time or using recorded data, the density of people in different
campus spaces through colour and bubble-size codes – this example uses only one camera, for
illustration purposes. The left image displays the complete campus map, whereas the right
image focuses on a particular corridor where the indoor camera was installed.</p>
      <p>The tracking module needs additional configuration details to function properly throughout
the entire campus, allowing the system to characterise the space and optimise computational and storage
resources. A web application was created to manage configuration data. The main items include
Zone, defined by multiple polygons in each camera's field of view, allowing for the definition of
zones of interest where specific views or details should be highlighted; Line Intersection Zone,
which delimits the boundary between zones in a camera's scene where particular operations like
re-identification or people counting should be applied; Black Area, used to remove unwanted
areas from a camera's view where no person can be found, or where two cameras overlap, to
prevent resource wastage; and Global Coordinates, necessary to track and re-identify individuals
throughout the campus (this involves mapping each camera's field of view to the global campus
map and defining a scale, angle, and offset to transform the tracking data into campus coordinates).</p>
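      <p>To make the Global Coordinates item concrete, the minimal sketch below applies one camera's scale, angle, and offset to convert an image-plane point into campus map coordinates. The calibration values shown are placeholders for illustration, not measured data from the prototype.</p>
      <preformat>
# Hypothetical camera-to-campus coordinate transform sketch (scale, rotation, offset are placeholders).
import math
from dataclasses import dataclass

@dataclass
class CameraCalibration:
    scale: float      # campus metres per image pixel
    angle: float      # rotation between image axes and campus map, in radians
    offset_x: float   # campus coordinates of the camera's image origin
    offset_y: float

def to_campus_coordinates(calib: CameraCalibration, px: float, py: float):
    """Map an image-plane point (px, py) to campus map coordinates."""
    cos_a, sin_a = math.cos(calib.angle), math.sin(calib.angle)
    x = calib.scale * (px * cos_a - py * sin_a) + calib.offset_x
    y = calib.scale * (px * sin_a + py * cos_a) + calib.offset_y
    return x, y

# Example with placeholder calibration values for one camera.
cam1 = CameraCalibration(scale=0.05, angle=math.radians(30), offset_x=120.0, offset_y=340.0)
print(to_campus_coordinates(cam1, 320, 480))
      </preformat>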
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>This paper describes a framework for tracking people on a university campus as part of the
Lab4U&amp;Spaces project, which aims to develop a platform for exploring smart campus
technologies. We evaluated various technologies and selected the ones that best suit the project
requirements, which included low computational resources, energy constraints, and
open-source solutions. Privacy is another fundamental requirement, which we guarantee by not storing any
image for consultation or beyond the time strictly required by the re-identification function.
We conducted experiments at the prototype level to validate all operations and found that the
framework is viable. We also discussed the potential use of this technology at the campus level.
However, to determine the framework's actual usefulness, it needs to be tested with more than two
cameras and evaluated against the behaviour of thousands of daily campus visitors. As future work,
we will start designing applications that exploit all available data to make campus management
and life more intelligent.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>Lab4U&amp;Spaces – Living Lab of Interactive Urban Space Solution, Ref.
NORTE-01-0145-FEDER-000072, financed by community funds (FEDER), through Norte 2020.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Musa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. N.</given-names>
            <surname>Ismail</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. F. M.</given-names>
            <surname>Fudzee</surname>
          </string-name>
          ,
          <article-title>A survey on smart campus implementation in malaysia</article-title>
          , JOIV :
          <source>International Journal on Informatics Visualization</source>
          <volume>5</volume>
          (
          <year>2021</year>
          )
          <fpage>51</fpage>
          -
          <lpage>56</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Toutouh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Alba</surname>
          </string-name>
          ,
          <article-title>A low cost iot cyber-physical system for vehicle and pedestrian tracking in a smart campus</article-title>
          ,
          <source>Sensors</source>
          <volume>22</volume>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Izquierdo</surname>
          </string-name>
          ,
          <article-title>Real-time multi-target multi-camera tracking with spatial-temporal information</article-title>
          ,
          <source>in: 2019 IEEE Visual Communications and Image Processing (VCIP)</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>4</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J.</given-names>
            <surname>Ye</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Shi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Xia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Tan</surname>
          </string-name>
          ,
          <article-title>A robust mtmc tracking system for ai-city challenge 2021</article-title>
          , in: 2021
          <source>IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>4039</fpage>
          -
          <lpage>4048</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>E.</given-names>
            <surname>Ristani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Tomasi</surname>
          </string-name>
          ,
          <article-title>Features for multi-target multi-camera tracking and re-identification</article-title>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>D.</given-names>
            <surname>Prasad</surname>
          </string-name>
          ,
          <article-title>Survey of the problem of object detection in real images</article-title>
          ,
          <source>International Journal of Image Processing (IJIP)</source>
          <volume>6</volume>
          (
          <year>2012</year>
          )
          <fpage>441</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>C.-Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bochkovskiy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.-Y. M.</given-names>
            <surname>Liao</surname>
          </string-name>
          ,
          <article-title>Yolov7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors</article-title>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Bewley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Ge</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Ott</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Ramos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Upcroft</surname>
          </string-name>
          ,
          <article-title>Simple online and realtime tracking</article-title>
          ,
          <source>in: 2016 IEEE International Conference on Image Processing (ICIP)</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>3464</fpage>
          -
          <lpage>3468</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>N.</given-names>
            <surname>Wojke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bewley</surname>
          </string-name>
          ,
          <article-title>Deep cosine metric learning for person re-identification</article-title>
          ,
          <source>in: 2018 IEEE Winter Conference on Applications of Computer Vision</source>
          (WACV), IEEE,
          <year>2018</year>
          , pp.
          <fpage>748</fpage>
          -
          <lpage>756</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Du</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Song</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <article-title>Strongsort: Make deepsort great again</article-title>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>N.</given-names>
            <surname>Aharon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Orfaig</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.-Z.</given-names>
            <surname>Bobrovsky</surname>
          </string-name>
          ,
          <article-title>Bot-sort: Robust associations multi-pedestrian tracking</article-title>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Wieczorek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Rychalska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Dabrowski</surname>
          </string-name>
          ,
          <article-title>On the unreasonable effectiveness of centroids in image retrieval</article-title>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>D.</given-names>
            <surname>Fu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Yuan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <article-title>Unsupervised pre-training for person re-identification</article-title>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>K.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Cavallaro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Xiang</surname>
          </string-name>
          ,
          <article-title>Omni-scale feature learning for person re-identification</article-title>
          , in: ICCV,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>A.</given-names>
            <surname>Mohan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Kaseb</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. W.</given-names>
            <surname>Gauen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.-H.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. R.</given-names>
            <surname>Reibman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. J.</given-names>
            <surname>Hacker</surname>
          </string-name>
          ,
          <article-title>Determining the necessary frame rate of video data for object tracking under accuracy constraints</article-title>
          , in:
          <source>2018 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR)</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>368</fpage>
          -
          <lpage>371</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>