<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Ball and Player Detection in Futsal Videos Using YOLOv8 Model</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Shohruh Begmatov</string-name>
          <email>bek.shohruh@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mukhriddin Arabboev</string-name>
          <email>mukhriddin.9207@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mokhirjon Rikhsivoev</string-name>
          <email>mrikhsivoev@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Saidakmal Saydiakbarov</string-name>
          <email>saidakmalflash@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Zukhriddin Khamidjonov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sardor Vakhkhobov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Khurshid Aliyarov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oybek Karimov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Tashkent University of Information Technologies named after Muhammad al-Khwarizmi</institution>
          ,
          <addr-line>108 Amir Temur St., Tashkent, 100084</addr-line>
          ,
          <country country="UZ">Uzbekistan</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>There has been a significant increase in people's interest and enthusiasm for sports in recent years. This has resulted in an increased emphasis on high-quality video recording of various sports to capture even the smallest details. Recording and analysis have become extremely crucial in sports such as futsal, which involve several complex and fast events. Ball detection and tracking, along with player analysis, have emerged as areas of interest among many analysts and researchers. Coaches rely on video analysis to assess their team's performance and make informed decisions to achieve better results. Furthermore, coaches and sports scouts can use this tool to scout for talented players by reviewing their past games. Ball detection is vital in aiding referees to make correct decisions during critical moments of a game. However, due to the continuous movement of the ball, its shape and appearance change over time, and it often gets blocked by players, making it challenging to track its position throughout the game. This paper proposes a deep learning-based YOLOv8 model for detecting balls and players in broadcast futsal videos.</p>
      </abstract>
      <kwd-group>
        <kwd>YOLOv8</kwd>
        <kwd>Roboflow</kwd>
        <kwd>ball detection</kwd>
        <kwd>player detection</kwd>
        <kwd>futsal</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Over the last decade, numerous studies have been conducted worldwide on the application of
computer vision and artificial intelligence technologies to sports. In [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], a detailed overview
of sports video analysis is presented, covering various applications. These include high-level analyses
such as player detection and classification, player or ball tracking, prediction of player or ball
trajectories, recognition of team strategies, and classification of various events in sports. In [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], the use of
artificial intelligence techniques in athlete monitoring applications is examined, including
machine learning, deep learning, and natural language processing use cases. In [3], the detection of
players and sports balls in real-world handball images is considered as a building block for
action recognition. In [4], a deep learning-based player tracking system is proposed to automatically
track players and index their participation per play in American football games. In [5], image and
video content analysis of handball scenes is addressed, applying deep learning methods to detect
and track the players and recognize their activities; for the task of player and ball detection, the
YOLOv7 model is used. In [6], object tracking techniques are investigated for the Paralympic team
sport goalball; in that study, different tracking methods are implemented and compared, evaluating
prediction accuracy and speed in player and ball tracking. In [7], a machine learning-based analysis
of badminton videos is proposed, utilizing two deep learning models, TrackNet and YOLOv5, to
predict shuttlecock trajectories, track players, and detect different shot types.
      </p>
      <p>The study involved the following steps: first, custom-collected datasets from both
smartphone-recorded videos and online YouTube videos of badminton matches were labelled. These labelled
datasets were then processed and used to train the machine-learning models. Finally, a separate testing
dataset was used to evaluate the performance of the TrackNet and YOLOv5 models. In [8], the
application of deep learning methods in sports scenes to detect and track athletes and recognize their
activities is presented, using scenes recorded during handball games and training activities as an
example. Another interesting study is found in [9], which is devoted to basketball action recognition
based on the combination of YOLO and a deep fuzzy LSTM network; the proposed model was validated on
the SpaceJam and Basketball-51 datasets. In [10], the YOLOv7 and YOLOv7_tiny models are presented for
soccer-ball multi-detection, with DeepSORT for tracking in a semi-supervised system. In [11], the
potential of artificial intelligence in football is explored. In [12], football player performance
analysis using particle swarm optimization and player value calculation using regression are
presented. In [13], a novel machine learning approach is developed to predict the likelihood of a team
attempting to score during a segment of a match. In [14], a model for automated detection and
classification of soccer field objects using YOLOv7 and computer vision techniques is proposed. In
[15], an efficient deep convolutional neural network-based method is proposed to automatically detect
football players directly from video of matches. In [16], deep learning-based automated sports video
summarization using YOLO is proposed; in that study, a database consisting of 1300 images was used
to train (using transfer learning) a supervised-learning-based object detection algorithm. Deep
learning has emerged as a key area for sports analysis, particularly in the field of multi-object
detection. A study [17] has shown that YOLOv7 models can be adapted for the multi-detection of soccer
balls, demonstrating their effectiveness for ball tracking. This finding suggests that YOLOv8 has the
potential to achieve similar success in identifying both balls and players in futsal videos.</p>
    </sec>
    <sec id="sec-2">
      <title>1.1. The Rise of Futsal and the Need for Advanced Video Analysis</title>
      <p>
        Futsal, a fast-paced and skillful variant of soccer played on a smaller court, has witnessed a
remarkable surge in popularity worldwide in recent years [
        <xref ref-type="bibr" rid="ref1">1, 18</xref>
        ]. This growth has created a rising
demand for advanced video analysis tools specifically designed for futsal videos. Unlike
traditional analysis methods used in other sports, video analysis tailored for futsal offers unique
advantages for various stakeholders:
      </p>
      <p>Improved Coaching Strategies: By analyzing game footage, coaches can gain deeper insights into
their team's performance. This includes understanding team formations, player positioning, and
passing patterns. By identifying strengths and weaknesses, coaches can develop more effective tactics
and training strategies to optimize team performance [19].</p>
      <p>Objective Performance Evaluation: Coaches can objectively assess individual and team performance
by analyzing player movements and interactions with the ball through video analysis. This allows for
targeted feedback for players, highlighting areas for improvement and tracking their skill
development over time.</p>
      <p>Targeted Training Drills: Video analysis can be a powerful tool for designing targeted training drills.
By identifying specific weaknesses in-game footage, coaches can create drills that address those areas,
leading to more efficient skill development for individual players and the entire team.</p>
      <p>Future of Officiating: While still under development, real-time video analysis has the potential to
assist referees in making close calls during matches. This could lead to improved officiating accuracy
and fairer outcomes in the future [19].</p>
      <p>These benefits highlight the potential of video analysis specifically designed for futsal videos. By
leveraging this technology, coaches, players, and referees can gain valuable insights that traditional
methods cannot offer, ultimately leading to a more strategic, data-driven approach to the sport.
</p>
    </sec>
    <sec id="sec-3">
      <title>1.2. Unique Challenges of Futsal Videos for Ball and Player Detection</title>
      <p>It is important to note that traditional computer vision techniques used in other sports video
analysis may not be sufficient for futsal due to the unique characteristics of the game. These
characteristics present distinct challenges for object detection and tracking algorithms. The challenges
include:</p>
      <p>Increased Player Density:  The smaller court size in futsal leads to frequent player-ball occlusions,
where players obstruct the view of the ball. This demands object detection models that can effectively
identify and differentiate between players and the ball, even when partially hidden behind each other
[19].</p>
      <p>Rapid Ball Movement:  Futsal is played at a faster pace compared to soccer. The ball experiences
swift and unpredictable changes in direction and speed, requiring object-tracking algorithms that can
accurately follow the ball's trajectory despite these dynamic movements.</p>
      <p>Emphasis on Ball Control: Unlike soccer, where the ball often spends significant time in the air,
futsal emphasizes close control and passing. By analyzing ball movement patterns in futsal videos,
valuable insights can be gleaned into player performance and game tactics, such as dribbling skills and
passing accuracy [20].</p>
      <p>In the following sections, we delve deeper into these challenges, explore existing video analysis
methods, and introduce our proposed approach utilizing YOLOv8, a deep learning model well-suited
to addressing the unique demands of futsal video analysis.</p>
    </sec>
    <sec id="sec-4">
      <title>1.3. YOLOv8: Addressing the Challenges of Futsal Video Analysis</title>
      <p>As per our earlier discussion, traditional methods of analyzing futsal videos struggle to overcome
the unique challenges presented by this sport. However, YOLOv8, a state-of-the-art deep learning
model, has shown promising results in tackling these challenges. YOLOv8 has specific strengths that
make it well-suited for futsal video analysis, such as real-time processing, robust object detection, and
bounding boxes for detailed analysis. Real-time processing allows for near-instantaneous analysis of
game footage, providing coaches with immediate feedback and potentially enabling them to make
tactical adjustments during matches [21]. Additionally, real-time analysis could assist referees in
making close calls, leading to fairer outcomes. YOLOv8’s advanced convolutional neural network
architecture helps it handle frequent player and ball occlusions commonly encountered in futsal
videos. This robust object detection capability is crucial for accurately identifying and differentiating
between players and the ball, even when partially hidden behind each other [21]. YOLOv8 outputs
bounding boxes around detected objects, players, and the ball. These bounding boxes help track the
ball’s position and movement throughout the game, allowing for detailed analysis of player control
and passing accuracy techniques. Coaches can assess a player’s ability to maintain close control
during dribbling or tight spaces by analyzing the size and movement of the bounding boxes around the
ball. Similarly, coaches can evaluate passing accuracy and effectiveness by tracking the trajectory of
the ball and its relationship to the receiving player’s bounding box. Leveraging these strengths of
YOLOv8, our research aims to develop a robust ball and player detection system specifically tailored
for the unique challenges of futsal video analysis. This system can potentially contribute to
advancements in various aspects of the sport for different stakeholders, namely player training,
tactical analysis, and officiating support.</p>
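      <p>As a rough illustration of how bounding boxes enable this kind of analysis, the sketch below derives a ball trajectory and per-frame speed from box centers. The flattened (x1, y1, x2, y2) detection format and the numbers are hypothetical examples, not output of the actual pipeline.</p>

```python
# Sketch: estimating the ball's trajectory and speed from per-frame
# bounding boxes. The box format (x1, y1, x2, y2) is a hypothetical
# example of detector output, not the paper's exact pipeline.

def box_center(box):
    """Return the (x, y) center of an (x1, y1, x2, y2) bounding box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def ball_trajectory(ball_boxes):
    """Convert a list of per-frame ball boxes into a list of centers."""
    return [box_center(b) for b in ball_boxes]

def ball_speeds(trajectory, fps=30.0):
    """Pixels-per-second speed between consecutive ball positions."""
    speeds = []
    for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:]):
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        speeds.append(dist * fps)
    return speeds

# Three illustrative frames of a ball accelerating to the right.
boxes = [(100, 50, 120, 70), (130, 50, 150, 70), (190, 50, 210, 70)]
traj = ball_trajectory(boxes)       # [(110.0, 60.0), (140.0, 60.0), (200.0, 60.0)]
print(ball_speeds(traj, fps=30.0))  # [900.0, 1800.0]
```

      <p>The same box-center bookkeeping extends naturally to passing analysis: comparing the ball trajectory against the receiving player's box over time.</p>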
      <p>In player training, coaches can design targeted training drills using insights gained from analyzing
player movement and ball control with YOLOv8. Players can also receive personalized
feedback on their performance by analyzing their movement patterns and interactions with the ball.
This allows players to identify areas for improvement and focus their training efforts more effectively.
In tactical analysis, coaches can develop more effective tactics by analyzing team formations, player
positioning, and passing patterns revealed by YOLOv8. YOLOv8 can also be used to analyze opponent
tactics by studying their formations and player movements through video analysis. This allows
coaches to develop counter-strategies and gain a competitive edge. In officiating support, real-time
video analysis using YOLOv8 could potentially assist referees in making close calls, leading to
improved officiating accuracy. For example, YOLOv8 could be used to analyze close calls involving
potential fouls or out-of-bounds situations, providing referees with additional information to make
informed decisions. In the field of futsal video analysis, using YOLOv8 technology provides a
significant advantage over traditional methods. This technology has the potential to revolutionize the
way coaches, players, and referees approach the game by enabling a more data-driven and strategic
approach. As deep learning models like YOLOv8 continue to evolve, the possibilities for enhancing
futsal through video analysis are truly exciting. This study aims to develop a YOLOv8-based model for
player and ball detection in futsal videos, including the segmentation of players by different body
parts such as the knee, neck, and elbow.</p>
    </sec>
    <sec id="sec-5">
      <title>1.4. Segmentation Methods for Futsal Video Analysis</title>
      <p>Analyzing futsal videos can be challenging due to factors such as the high density of players, rapid
ball movement, and the emphasis on close ball control. To overcome these challenges, segmentation
methods play a crucial role in separating players, the ball, and other relevant objects from the
background. In this context, we will explore various segmentation approaches suitable for futsal video
analysis, including Deep Learning-based and Traditional Segmentation methods.</p>
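      <p>To make the contrast concrete, the following is a minimal pure-Python sketch of one traditional segmentation method, Otsu's threshold selection [25]: it picks the gray level that maximizes between-class variance, separating bright foreground (players) from a darker court. The toy pixel values are illustrative only.</p>

```python
def otsu_threshold(pixels):
    """Otsu's method [25]: choose the 8-bit threshold that maximizes
    between-class variance on a flat list of gray values."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        n0 = sum(hist[:t])          # background pixel count
        n1 = total - n0             # foreground pixel count
        if n0 == 0 or n1 == 0:
            continue
        mu0 = sum(i * hist[i] for i in range(t)) / n0
        mu1 = sum(i * hist[i] for i in range(t, 256)) / n1
        var = (n0 / total) * (n1 / total) * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Toy "frame": dark court pixels versus bright player pixels.
frame = [10, 10, 11, 12, 13, 199, 200, 201]
t = otsu_threshold(frame)
mask = [p >= t for p in frame]      # True marks foreground pixels
```

      <p>Deep learning-based methods such as YOLOv8 replace this handcrafted thresholding with learned features, which is what makes them robust to the lighting changes and occlusions typical of futsal footage.</p>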
      <p>The remainder of this paper is organized as follows: Section 2 provides an overview of existing
research; Section 3 proposes an artificial intelligence model for whole-body segmentation of players in
futsal videos; Section 4 proposes an artificial intelligence model for the segmentation of players in
futsal videos by different body parts; finally, Section 5 concludes the paper.</p>
    </sec>
    <sec id="sec-6">
      <title>2. Related Work</title>
      <p>In this section, an overview of existing research on ball and player detection in futsal videos using
AI models is given. In recent years, various studies have been conducted on ball and player detection
and game analysis in futsal, driven by recent advances in the field. In [28], a multiple-camera
methodology for automatic localization and tracking of futsal players is proposed. The study presents
an automated method for estimating the positions of futsal players as probability distributions
through the use of multiple cameras and particle filters, thereby reducing the need for human
intervention. In their framework, each player position is defined as a non-parametric distribution,
which is tracked using particle filters. The authors used information from multiple cameras to create
an observation model, a probability distribution function that describes the likely positions of
players in the court plane at each frame. To further reduce human intervention, the method addresses
player confusion during tracking by using an appearance model to update the observation function.
The experiments carried out revealed tracking errors below 70 cm, demonstrating the method's
potential for aiding sports teams in various technical areas.</p>
      <p>In [29], the use of computer vision techniques for visually tracking futsal players is explored. The
study utilizes adaptive background subtraction and blob analysis to detect players, along with particle
filters to predict their positions and track them using data from a single stationary camera. Based on
the results of their experiments, it has been shown that the proposed method is capable of accurately
tracking players and calculating their movements during futsal matches. Their approach has been
found to have an error rate of less than 20 cm, which demonstrates its high potential for use in a
variety of futsal match analyses.</p>
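      <p>The particle-filter tracking used in [28] and [29] can be sketched in a few lines. The predict-weight-resample loop below is a generic bootstrap filter for a 2D player position; the noise parameters and particle count are illustrative assumptions, not values from the cited studies.</p>

```python
import math
import random

def particle_filter_step(particles, observation, rng, noise=5.0, sigma=10.0):
    """One predict-weight-resample cycle; particles are (x, y) tuples."""
    # Predict: diffuse particles with a random-walk motion model.
    moved = [(x + rng.gauss(0, noise), y + rng.gauss(0, noise))
             for x, y in particles]
    # Weight: Gaussian likelihood of each particle given the detection.
    ox, oy = observation
    weights = [math.exp(-((x - ox) ** 2 + (y - oy) ** 2) / (2 * sigma ** 2))
               for x, y in moved]
    # Resample particles in proportion to their weights.
    return rng.choices(moved, weights=weights, k=len(moved))

def estimate(particles):
    """Position estimate: the mean of the particle cloud."""
    n = len(particles)
    return (sum(x for x, _ in particles) / n, sum(y for _, y in particles) / n)

rng = random.Random(0)
particles = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(500)]
for obs in [(50.0, 50.0)] * 10:     # ten frames with the player detected at (50, 50)
    particles = particle_filter_step(particles, obs, rng)
x, y = estimate(particles)          # the cloud contracts around (50, 50)
```

      <p>The appearance model of [28] refines exactly this weighting step, so that particles near a visually similar player are not mistaken for the tracked one.</p>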
      <p>In [30], a vision-based system was introduced to aid in the tactical and physical analysis of futsal
teams. This system is a simple, yet efficient solution that uses image sequences captured by a single
stationary camera to obtain top-view images of the entire court. This enables a comprehensive
analysis of the game and player performance. The results of experiments conducted with image
sequences of an official match and a training match show that the proposed system provides accurate
tracking data with global mean tracking errors below 40 cm. The system takes only 25 ms to process
each frame, which demonstrates its high potential for practical application.</p>
      <p>In [31], the applicability and reliability of using a single wide-angle lens GoPro camera for tracking
and kinematics analysis of futsal players were assessed. Four digital video cameras were used to record
an official game of a Brazilian professional team during the quarter-final round of the 2013 São Paulo
futsal league. The cameras were placed at the highest points of the court (40 x 20 m; FIFA standard) and
recorded at 30 Hz with a resolution of 720 x 480.</p>
      <p>Finally, a method for analyzing futsal matches using computer vision was proposed in [32]. Videos
were recorded using a single camera with a wide-angle lens, which facilitated the installation and
calibration process in different matches and arenas. This approach is demonstrated using video
recordings of the Pato Futsal team. The recordings were used to identify the players, project their
positions from pixels to real-world coordinates, and estimate their trajectories. The resulting data
visualization is intended to assist coaches in their physical and tactical analysis.</p>
      <p>To summarize, the previous contributions on futsal player detection reviewed above are based on
camera-related approaches; however, none of them uses a model built on YOLOv8 as presented in this
work. In this study, a model for ball and player detection in futsal videos is developed using the
YOLOv8 algorithm and the Roboflow platform.</p>
    </sec>
    <sec id="sec-7">
      <title>3. Development of an artificial intelligence model for whole-body segmentation of players in futsal videos</title>
      <p>This section is devoted to the creation of an artificial intelligence model and dataset [33] that can
perform whole-body segmentation of players in futsal videos. The Roboflow platform was used to create a
dataset of whole-body segmentation of players in futsal videos. Roboflow is one of the most popular
platforms that provides tools for managing and deploying computer vision models. The YOLOv8
algorithm was used to develop an artificial intelligence model for whole-body segmentation of players
in futsal videos. YOLOv8 is an advanced computer vision model created by Ultralytics, representing
the most up-to-date technology in this field. YOLOv8 is suitable for a wide range of object detection
and tracking, instance segmentation, image classification, and pose estimation tasks. Figure 1 shows a
graphical representation of the whole-body segmentation of players in futsal videos.</p>
      <p>It can be seen from Table 2 that the dataset developed for the artificial intelligence model for
whole-body segmentation of players in futsal videos consists of a total of 391 images, of which 342
form the training set, 33 the validation set, and 16 the test set. Each input image has a size of
640x360. The model distinguishes 4 classes: ball, player1, player2, and referee.</p>
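      <p>For reference, a dataset with this split and these four classes could be described with an Ultralytics-style data.yaml such as the hypothetical sketch below; the directory paths are examples, not the actual Roboflow export layout.</p>

```yaml
# Hypothetical data.yaml for the whole-body segmentation dataset
# (342 train / 33 validation / 16 test images, each 640x360).
path: futsal-whole-body   # dataset root (example path)
train: images/train
val: images/valid
test: images/test

names:
  0: ball
  1: player1
  2: player2
  3: referee

# Training could then be launched with the Ultralytics CLI, e.g.:
#   yolo segment train data=data.yaml model=yolov8l-seg.pt imgsz=640
```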
    </sec>
    <sec id="sec-8">
      <title>4. Development of an artificial intelligence model for segmentation of players in futsal videos by knee, neck, and elbow parts of the body</title>
      <p>This section is devoted to the development of an artificial intelligence model and dataset that can
detect players in futsal videos by knee, neck, and elbow parts of the body. The Roboflow platform was
used to create a dataset of segmentation of players in futsal videos by knee, neck, and elbow parts of
the body. The YOLOv8 Large algorithm was used to develop an artificial intelligence model for
segmentation of players in futsal videos by knee, neck, and elbow parts of the body. YOLOv8 Large
(YOLOv8l) is one of the largest pre-trained models in the YOLOv8 family. Figure 6 shows a graphical
representation of the segmentation of players in futsal videos by knee, neck, and elbow parts of the
body.</p>
      <p>It can be seen from Table 3 that the dataset developed for the artificial intelligence model for
segmentation of players in futsal videos by knee, neck, and elbow parts of the body consists of a
total of 388 images, of which 271 form the training set, 78 the validation set, and 39 the test set.
Each input image has a resolution of 1280x1280. The model distinguishes 2 classes: ball and player.</p>
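      <p>At inference time, the raw detections for these two classes must be filtered and labeled before any analysis. The pure-Python sketch below illustrates that step; the flattened tuple format (class_id, confidence, x1, y1, x2, y2) is a hypothetical stand-in for the detector's actual output structure, not the exact Ultralytics API.</p>

```python
# Sketch: turning raw two-class detector output into labeled results.
# The input tuple layout is a hypothetical example, not the real API.

CLASS_NAMES = {0: "ball", 1: "player"}

def filter_detections(raw, conf_threshold=0.5):
    """Keep confident detections and attach human-readable class names."""
    results = []
    for class_id, conf, x1, y1, x2, y2 in raw:
        if conf < conf_threshold:
            continue                      # drop low-confidence boxes
        results.append({"label": CLASS_NAMES[class_id],
                        "conf": conf,
                        "box": (x1, y1, x2, y2)})
    return results

raw = [(0, 0.91, 300, 180, 318, 198),     # confident ball detection
       (1, 0.84, 100, 60, 140, 170),      # confident player detection
       (1, 0.22, 500, 90, 540, 200)]      # too uncertain, discarded
dets = filter_detections(raw)
print([d["label"] for d in dets])         # prints ['ball', 'player']
```

      <p>The confidence threshold trades missed detections against false positives; a lower value may help recover the small, frequently occluded ball at the cost of more spurious boxes.</p>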
    </sec>
    <sec id="sec-9">
      <title>5. Conclusion</title>
      <p>In conclusion, our study on ball and player detection in futsal videos using the YOLOv8 model has
made significant progress in enhancing the capabilities of sports video analytics. The successful
implementation and fine-tuning of YOLOv8 for the nuanced dynamics of futsal have demonstrated its
effectiveness in real-time and accurate detection of both the ball and players, even in challenging
scenarios such as rapid player movements and occlusions.</p>
      <p>In our future work, we plan to create a larger dataset for ball and player detection in futsal videos.
Furthermore, we will conduct new research on improving the accuracy of the model developed in this
study.</p>
      <p>[3] M. Burić, M. Pobar, and M. Ivašić-Kos, Adapting YOLO Network for Ball and Player Detection, Int. Conf. Pattern Recognit. Appl. Methods, vol. 1 (2019) 845–851. doi: 10.5220/0007582008450851.</p>
      <p>[4] H. Liu, C. Adreon, N. Wagnon, A. L. Bamba, X. Li, H. Liu, S. MacCall, Yu Gan, Automated player identification and indexing using two-stage deep learning network, Sci. Rep., vol. 13, no. 1 (2023) 1–11. doi: 10.1038/s41598-023-36657-5.</p>
      <p>[5] K. Host, M. Pobar, and M. Ivasic-Kos, Analysis of Movement and Activities of Handball Players Using Deep Neural Networks, J. Imaging, vol. 9, no. 4 (2023). doi: 10.3390/jimaging9040080.</p>
      <p>[6] J. Gudauskas and Ž. Matusevicius, Multiple object tracking for video-based sports analysis, CEUR Workshop Proc., vol. 2915 (2021) 1–10.</p>
      <p>[7] A. Mohamed, Towards Machine Learning Framework for Badminton Game Analysis Using TrackNet and YOLO Models, Iowa State University (2023).</p>
      <p>[8] M. Ivasic-Kos, K. Host, and M. Pobar, Application of Deep Learning Methods for Detection and Tracking of Players, in Deep Learning Applications, P. Luigi Mazzeo and P. Spagnolo, Eds., IntechOpen (2021). doi: 10.5772/intechopen.96308.</p>
      <p>[9] S. B. Khobdeh, M. R. Yamaghani, and S. K. Sareshkeh, Basketball action recognition based on the combination of YOLO and a deep fuzzy LSTM network, J. Supercomput., vol. 80, no. 3 (2023) 3528–3553. doi: 10.1007/s11227-023-05611-7.</p>
      <p>[10] J. A. Vicente-Martínez, M. Márquez-Olivera, A. García-Aliaga, and V. Hernández-Herrera, Adaptation of YOLOv7 and YOLOv7_tiny for Soccer-Ball Multi-Detection with DeepSORT for Tracking by Semi-Supervised System, Sensors (Basel), vol. 23, no. 21 (2023). doi: 10.3390/s23218693.</p>
      <p>[11] K. Aliyarov, M. Rikhsivoev, M. Arabboev, S. Begmatov, S. Saydiakbarov, K. Nosirov, Z. Khamidjonov, S. Vakhkhobov, Artificial Intelligence in Performance Analysis of Football, Bull. TUIT Manag. Commun. Technol., vol. 3, no. 19 (2023).</p>
      <p>[12] A. Jana and S. Hemalatha, Football Player Performance Analysis using Particle Swarm Optimization and Player Value Calculation using Regression, J. Phys. Conf. Ser., vol. 1911, no. 1 (2021). doi: 10.1088/1742-6596/1911/1/012011.</p>
      <p>[13] S. Kusmakar, S. Shelyag, Y. Zhu, D. Dwyer, P. Gastin, and M. Angelova, Machine Learning Enabled Team Performance Analysis in the Dynamical Environment of Soccer, IEEE Access, vol. 8 (2020) 90266–90279. doi: 10.1109/ACCESS.2020.2992025.</p>
      <p>[14] J. AbuKhait, M. Alaqtash, A. Aljaafreh, and W. Othman, Automated Detection and Classification of Soccer Field Objects using YOLOv7 and Computer Vision Techniques, Int. J. Adv. Comput. Sci. Appl., vol. 14, no. 11 (2023) 894–902. doi: 10.14569/IJACSA.2023.0141191.</p>
      <p>[15] T. Wang and T. Li, Deep Learning-Based Football Player Detection in Videos, Comput. Intell. Neurosci., vol. 2022 (2022). doi: 10.1155/2022/3540642.</p>
      <p>[16] C. Guntuboina, A. Porwal, P. Jain, and H. Shingrakhia, Video Summarization for Multiple Sports Using Deep Learning, ELCVIA Electron. Lett. Comput. Vis. Image Anal., vol. 20, no. 1 (2021) 99–116. doi: 10.5565/rev/elcvia.1286.</p>
      <p>[17] J. A. Vicente-Martínez, M. Márquez-Olivera, A. García-Aliaga, and V. Hernández-Herrera, Adaptation of YOLOv7 and YOLOv7_tiny for Soccer-Ball Multi-Detection with DeepSORT for Tracking by Semi-Supervised System, Sensors, vol. 23, no. 21 (2023) 8693. doi: 10.3390/s23218693.</p>
      <p>[18] J. C. Duarte, I. Carmona-Lobo, J. C. Burgos, S. Iglesias-Soler, and J. M. Bravo, A Literature Review on Performance Analysis in Futsal Using Information and Communication Technologies, Sensors, vol. 19, no. 7 (2019) 1615. doi: 10.3390/s19071615.</p>
      <p>[19] M. Kohler, D. Rios, P. Toledo, and R. Guevara, A Review of Automatic Video Analysis Techniques for Sport Applications, Pattern Recognition Letters, vol. 140 (2021) 145–158. doi: 10.1016/j.patrec.2020.12.021.</p>
      <p>[20] L. J. Fernandes, D. S. Santos, and A. L. Teixeira, Futsal Performance Analysis through Player and Ball Tracking using Deep Learning, Sensors, vol. 23, no. 4 (2023) 2224. doi: 10.3390/s23042224.</p>
      <p>[21] A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, YOLOv8: Detecting Objects in Real-Time, arXiv preprint arXiv:2007.11883 (2020).</p>
      <p>[22] J. Redmon, A. Bochkovskiy, YOLOv8: Detecting objects in real time, arXiv preprint arXiv:2008.04110 (2020).</p>
      <p>[23] M. Buric, M. Pobar, and M. Ivasic-Kos, Object detection in sports videos, Int. Conv. Inf. Commun. Technol. Electron. Microelectron. MIPRO 2018 (2018) 1034–1039. doi: 10.23919/MIPRO.2018.8400189.</p>
      <p>[24] H. Song, I. K. Choi, M. S. Ko, and J. Yoo, Deep Learning Image Analysis System on Embedded Platform, Int. Conf. Ubiquitous Futur. Networks, ICUFN (2023) 911–913. doi: 10.1109/ICUFN57995.2023.10199407.</p>
      <p>[25] N. Otsu, A threshold selection method from gray-level histograms, IEEE Trans. Syst. Man Cybern., vol. 9, no. 1 (1979) 62–66.</p>
      <p>[26] A. K. Jain, Data clustering: 50 years beyond K-means, Pattern Recognition Letters, vol. 31, no. 8 (2010) 651–666.</p>
      <p>[27] P. Soille, Morphological Image Analysis: Principles and Applications, Springer Science &amp; Business Media (2003).</p>
      <p>[28] E. Morais, A. Ferreira, S. A. Cunha, R. M. L. Barros, A. Rocha, and S. Goldenstein, A multiple camera methodology for automatic localization and tracking of futsal players, Pattern Recognit. Lett., vol. 39, no. 1 (2014) 21–30. doi: 10.1016/j.patrec.2013.09.007.</p>
      <p>[29] P. H. C. De Padua, F. L. C. Padua, M. T. D. Sousa, and M. D. A. Pereira, Particle Filter-Based Predictive Tracking of Futsal Players from a Single Stationary Camera, in Brazilian Symposium on Computer Graphics and Image Processing (2015) 134–141. doi: 10.1109/SIBGRAPI.2015.10.</p>
      <p>[30] P. H. C. de Pádua, F. L. C. Pádua, M. de A. Pereira, M. T. D. Sousa, M. B. de Oliveira, and E. F. Wanner, A vision-based system to support tactical and physical analyses in futsal, Mach. Vis. Appl., vol. 28, no. 5–6 (2017) 475–496. doi: 10.1007/s00138-017-0849-z.</p>
      <p>[31] L. H. P. Vieira, E. A. Pagnoca, F. Milioni, R. A. Barbieri, R. P. Menezes, L. Alvarez, L. G. Déniz, D. Santana-Cedrés, P. R. P. Santiago, Tracking futsal players with a wide-angle lens camera: accuracy analysis of the radial distortion correction based on an improved Hough transform algorithm, Comput. Methods Biomech. Biomed. Eng. Imaging Vis., vol. 5, no. 3 (2017) 221–231. doi: 10.1080/21681163.2015.1072055.</p>
      <p>[32] H. Paulichen, K. Zielinski, D. Casanova, and P. Cavalcanti, Analysis of futsal matches using a single-camera computer vision system, in Anais do XVI Workshop de Visão Computacional (2020) 134–139. doi: 10.5753/wvc.2020.13494.</p>
      <p>[33] ITMADE, Player detection, Roboflow (2023). https://app.roboflow.com/itmade/playerdetect/deploy/12</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name><given-names>B. T.</given-names> <surname>Naik</surname></string-name>,
          <string-name><given-names>M. F.</given-names> <surname>Hashmi</surname></string-name>, and
          <string-name><given-names>N. D.</given-names> <surname>Bokde</surname></string-name>,
          <article-title>A Comprehensive Review of Computer Vision in Sports: Open Issues, Future Trends and Research Directions</article-title>,
          <source>Appl. Sci.</source>, vol.
          <volume>12</volume>, no.
          <issue>9</issue>
          (<year>2022</year>). doi: 10.3390/app12094429.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Rikhsivoev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Arabboev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Begmatov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Saydiakbarov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Aliyarov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Nosirov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Khamidjonov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Vakhkhobov</surname>
          </string-name>
          ,
          <article-title>Comparative analysis of AI methods for athletes training</article-title>
          ,
          <source>Bull. TUIT Manag. Commun. Technol.</source>
          , vol.
          <volume>4</volume>
          , no.
          <issue>21</issue>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>