<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>BISEC'</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Real-Time Error Analysis of Exercise Posture for Musculoskeletal Disorder - A Machine Vision Approach</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Dilliraj Ekambaram</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vijayakumar Ponnusamy</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Electronics and Communication Engineering, SRM Institute of Science and Technology</institution>
          ,
          <addr-line>Kattankulathur, Chennai</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>15</volume>
      <fpage>28</fpage>
      <lpage>29</lpage>
      <abstract>
<p>Occupational diseases have been a significant cause of organ harm in the developed world in recent decades. Loss of motion is a major symptom and may also affect people's elbows, wrists, neck, knees, and other joints. Exercise therapy is the preferred treatment for work-related illnesses. Recent advances in computer vision and machine learning have led to the suggestion of computationally more affordable alternatives. However, the use of artificial intelligence by health professionals to monitor a patient engaging in physical rehabilitation exercises is still difficult and insufficiently researched in the scientific-technical literature. The contribution of this study is a visual system that analyzes exercise posture and provides feedback on how to improve it. A human's stance is evaluated with an AI-based pose estimation method. The system identifies what kind of exercise the user is doing and provides information such as the "Degree of Motion Required" for each move they make. To perform the posture analysis, this system uses OpenCV written in Python. Posture analysis is performed on the live video feed: as the user exercises, the system analyzes their upper extremity posture in real-time and delivers feedback.</p>
      </abstract>
      <kwd-group>
<kwd>Artificial Intelligence (AI)</kwd>
        <kwd>Computer vision</kwd>
        <kwd>Work-related musculoskeletal disorders</kwd>
        <kwd>home-based recuperation training</kwd>
        <kwd>upper limb exercise</kwd>
        <kwd>Pose estimation technique</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
In 2020, Work-related Musculoskeletal Disorders (WMSDs) accounted for 21% of all injuries and
illnesses requiring days away from work, with a median of 14 days away from work compared with 12 days for all
other nonfatal injury and illness cases. In Malaysia, 61% of workers rely on a computer-based working
environment. Musculoskeletal symptoms have multifactorial risk factors such as decision
latitude, mental demand, social support, job insecurity, and so forth. Musculoskeletal disorders arise
in offices, agriculture, classrooms, business settings, and so on [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
]. As the saying goes, "Exercise
not only changes our body; it changes our mind, attitude, and mood." Fitness is a trend
today. Everybody wants to be fit, beautiful, and healthy [
        <xref ref-type="bibr" rid="ref3">3</xref>
]. The reasons behind musculoskeletal
disorders include lack of body movement, prolonged static sitting/standing postures,
uncomfortable workplaces, shift patterns of working, overtime obligations, and so on;
exercise therapy is the most recommended method for recovering from musculoskeletal disorders.
      </p>
      <p>
Professionals such as architects, engineers, designers, and even analysts must spend long
hours at computers. Some workers must hold the same posture for more than 10 hours at a stretch. This is
because of high work pressure, because the workplace design does not meet the requirements of the
nature of the work, and because, in recent times, more workers are required to work in shift patterns for
higher efficiency. These conditions cause problems in body parts such as muscles, tendons, joints,
bones, ligaments, and nerves [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
Vision-based sensors have recently been deployed in the field of activity monitoring. They
can collect reliable skeletal data. There have also been major developments in the fields of Computer
Vision (CV) and Deep Learning (DL). Because of these advances [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
], there has been a rise in interest in
developing models for autonomous patient activity monitoring. Typically, patients receive help from
trained therapists who keep tabs on their progress and assess the efficacy of the treatment plan.
A lack of trained therapists has made rehabilitation centers both costly and insufficient. In addition,
assessments are prone to subjectivity and mistakes [
        <xref ref-type="bibr" rid="ref7">7</xref>
]. The scientific community is very interested in
finding ways to enhance the performance of such systems to aid patients and physiotherapists.
      </p>
      <p>
Musculoskeletal disorders primarily affect two regions of the body: the upper body
parts, such as the shoulder, neck, wrist, fingers, and hand joints, and the lower body
parts, such as the hip, leg, knee joints, ankle, and foot. To address the difficulty of human
supervision and evaluation of physical tasks, we focus on the core problem of human pose estimation,
which locates the key joints of the human body (shoulder, elbow, wrists, etc.) from a single
image, a sequence of images from a single camera, or multiple images from different cameras [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
The main contribution of our work is predicting the exercise therapy pose that a
physiotherapist prescribes for musculoskeletal disorders and giving feedback based on the performer's
actions in real time. The exercise pose is predicted through 33 key landmark points on
the human body. Feedback on correcting the posture is continuously provided by our framework.
      </p>
<p>Our contributions in this article are summarized as follows:
• Predicting the upper limb exercise pose without any pre-trained model, generating the exercise
pose from a real-time video feed with the help of the MediaPipe pose estimation library and computer
vision methods.
• Generating real-time feedback on the degree of angle required for the particular body part.</p>
      <p>This ensures that the users do the exercise correctly.</p>
<p>An overview of the paper's structure is provided below. Section 2 reviews the published research
on AI-powered exercise therapy systems. Section 3 details the proposed methodology,
covering pre-processing, the pose estimation concept, angle determination from participants,
and corrective feedback during exercise. The analytical results and
discussion of various upper extremity rehabilitation exercises are presented in the fourth section, and
our conclusion and suggestions for future research appear in the final section.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Works</title>
      <p>
Computer vision methods offer promising solutions for human posture assessment, even with the help of
Android handsets [
        <xref ref-type="bibr" rid="ref9">9</xref>
]. Nowadays, many exercise videos are accessible on the web. Samsung
Health provides a dedicated section called Programs containing short workout videos for different
activities. The objective is to help individuals perform these exercises independently. A common
observation is that even people who visit gyms regularly find it hard to perform all steps
(body pose alignments) in an exercise precisely. Repeatedly doing an exercise incorrectly may
ultimately cause severe long-term injuries [
        <xref ref-type="bibr" rid="ref10">10</xref>
]. With new AI-based techniques, cutting-edge
computer vision has made it possible to measure body joints in 2D using
a single camera and to assess the angular deflection of human body parts. Enhanced computer vision
methods enable the automated measurement of head and body posture. A single camera can be
used to measure head repositioning accuracy to determine whether individuals have neck problems
[
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
      <p>
Motion parameter measurement is fundamental for understanding animal behavior, investigating the laws
of object motion, and studying control techniques. Nowadays, advanced computer vision
based on AI technology supports markerless object tracking in 2D videos [
        <xref ref-type="bibr" rid="ref12">12</xref>
]. AI is a popular
approach for determining the position and orientation of the human body. This approach produces key
points on the human body and, based on them, constructs a virtual skeleton in 2D. The input
is live video taken from an individual's webcam, and the output is the captured landmarks
or key points on the human body. An AI trainer can determine the count and duration of the sets
the individual needs to perform; it also points out mistakes and gives feedback, if any [
        <xref ref-type="bibr" rid="ref3">3</xref>
]. Human
pose estimation or detection in computer vision/graphics is the study of algorithms, systems,
and pre-trained models that recover the pose of an articulated body, which consists of joints
and rigid parts, using image-based observations [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. it’s one of the longest-enduring pervasive
issues in PC vision the explanation being the intricacy of the models that relate perception with the
posture, and since of the inconstancy of circumstances during which it’d be helpful [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. Table 1 shows
the existing methodologies, inferences, parameters measured, and research findings.
      </p>
<p>This framework is simple and easy to use. Feedback on the exercises is given to users in
real time, so they can correct their pose instantly and perform the exercise correctly. The model can also be
deployed in gyms, which run membership plans that clients pay for; the model can then
be offered to the participating clients.</p>
<p>The model gives live visual feedback during the whole exercise, which enables an independent
workout and thereby reduces overall cost. In this work, we focus on solutions to the following important
challenges in AI-based exercise pose analysis found in existing systems.</p>
      <p>
• Gathering and annotating large amounts of data is necessary to train and test AI models, but
it can be time-consuming and expensive [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
• People move in various ways, and there can be a lot of variation in how an exercise is carried out.
      </p>
      <p>
        Because of this, creating models that can precisely analyze a variety of exercises is challenging
[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
• Real-time exercise analysis can be difficult since it calls for high-performance computers and
low-latency processing [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
• Deep learning models can be ineffective in real-time situations and might not be appropriate for use
cases requiring low-latency processing [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ].
• Deep learning models require much computing, so they might be prohibitively expensive for
some applications and require high-end hardware and substantial computational resources [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <p>Upper extremity exercise pose datasets were collected as more than 200 samples from 12 different
healthy subjects. We gathered the exercise pose images from humans, concentrating on the Wrist,
Neck, and Elbow joints. This section describes the methodology of our proposed system. Figure 2 shows
the process flow of our system to estimate the pose for the various upper extremity body exercises.</p>
      <p>[Table 1, whose layout was lost in extraction, compared existing approaches: 3D Automated Joint
Assessment (3D AJA) versus the Kinect Software Development Kit (SDK); an OpenCV-based AI trainer
(cv2, MediaPipe, NumPy) for bicep-curl counting in fitness exercise; hybrid deep learning comparing
CNN, CNN-LSTM, and CNN-GRU; Random Forest with an open-source computer vision algorithm for
shoulder flexion, shoulder abduction, and elbow flexion upper limb exercise poses; and Kinect/camera-based
3D pose estimation through OpenPose, including yoga poses. One framework reports a prediction
accuracy of 91%, but its exercise poses target fitness applications only and are not applicable to
rehabilitation training.]</p>
      <p>No standard procedure was followed to collect the datasets; the subjects participated of their own accord.
Figure 3 shows sample datasets collected from the subjects.</p>
      <p>In the following subsections, we describe the operations involved in pre-processing, the pose
estimation concept, the determination of angles from the subjects, and the feedback provided.</p>
      <sec id="sec-3-1">
        <title>3.1. Data Pre-processing</title>
        <p>In our work, we use the Spyder (Python 3.9) IDE to assess the pose. First, frames are read into the
processing pipeline with the help of the "cv2" module. The "mediapipe" module is used to find
the landmarks on the human body. The input frames are in BGR (Blue, Green, Red) channel order and
need to be converted to RGB using the "cv2.cvtColor" function before the pose is assessed in real time.
The frame size is fixed at 1280x720 using the "cv2.resize" function so that the pose angles can be
assessed clearly.</p>
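        <p>A minimal NumPy-only sketch of this pre-processing step follows. The channel reversal and nearest-neighbour resize stand in for OpenCV's cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) and cv2.resize(frame, (1280, 720)); the function name preprocess and the synthetic frame are illustrative only.</p>

```python
import numpy as np

def preprocess(frame_bgr, width=1280, height=720):
    """Sketch of the pre-processing step: BGR -> RGB, then resize.

    NumPy-only stand-ins are used so the sketch runs without OpenCV:
    channel reversal replaces cv2.cvtColor, and nearest-neighbour index
    maps replace cv2.resize.
    """
    rgb = frame_bgr[..., ::-1]                 # BGR -> RGB channel reversal
    h, w = rgb.shape[:2]
    rows = np.arange(height) * h // height     # nearest-neighbour row indices
    cols = np.arange(width) * w // width       # nearest-neighbour column indices
    return rgb[rows[:, None], cols[None, :]]

# Tiny synthetic 2x2 "frame", pure blue in BGR order
frame = np.zeros((2, 2, 3), dtype=np.uint8)
frame[..., 0] = 255
out = preprocess(frame)
print(out.shape)            # (720, 1280, 3)
print(out[0, 0].tolist())   # [0, 0, 255] -- blue is now the last (B) channel of RGB
```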
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Pose Module</title>
        <p>The pose module detects the pose using the 33 landmarks shown in Fig. 1. Here, we define a class
called "poseDetector" to find the pose, the landmark positions, and the angles. The class is initialized
with the static image mode, model complexity, smooth landmarks, enable segmentation, detection
confidence, and tracking confidence parameters of the pose module so that the pose is detected
accurately. The pose is found by fixing the landmarks in the frame and connecting them with the
"POSE_CONNECTIONS" attribute. The positions of the landmarks are stored in a "list" variable in the
form of an array, and in real time the position of each landmark is looked up from this list.</p>
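        <p>The position list described above can be sketched as follows. The Landmark dataclass is a stand-in for MediaPipe's normalized landmark objects, and find_position is a hypothetical name for the lookup step of the poseDetector class; both are assumptions for illustration.</p>

```python
from dataclasses import dataclass

@dataclass
class Landmark:
    """Stand-in for a MediaPipe landmark: x and y are normalized to [0, 1]."""
    x: float
    y: float

def find_position(landmarks, frame_width, frame_height):
    """Build the position list: one [id, pixel_x, pixel_y] entry per landmark.

    MediaPipe reports landmarks in normalized image coordinates, so each
    coordinate is scaled by the frame size to obtain pixel positions.
    """
    lm_list = []
    for idx, lm in enumerate(landmarks):
        lm_list.append([idx, int(lm.x * frame_width), int(lm.y * frame_height)])
    return lm_list

# Two fake landmarks on a 1280x720 frame
lms = [Landmark(0.5, 0.5), Landmark(0.25, 0.1)]
positions = find_position(lms, 1280, 720)
print(positions)   # [[0, 640, 360], [1, 320, 72]]
```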
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Angle Determination</title>
        <p>To find the angle for a pose, we use three points, namely points 1, 2, and 3, taken from the "list"
for the landmarks of that particular pose. The angle formed by these three points is calculated in
the Python programming language using the following eq. 1.</p>
        <p>θ = math.degrees(math.atan2(y3 − y2, x3 − x2) − math.atan2(y1 − y2, x1 − x2))
(1)
where (x1, y1), (x2, y2), and (x3, y3) are the body joint points for the different exercise poses.</p>
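        <p>Eq. 1 can be implemented directly as a small Python function. The normalization of negative results to [0, 360) degrees is an added assumption (a common convention in pose-analysis code), since eq. 1 alone can return negative values.</p>

```python
import math

def find_angle(x1, y1, x2, y2, x3, y3):
    """Angle at the middle joint (x2, y2) formed by points 1-2-3, per eq. 1."""
    angle = math.degrees(math.atan2(y3 - y2, x3 - x2)
                         - math.atan2(y1 - y2, x1 - x2))
    if angle < 0:
        angle += 360   # normalize to [0, 360) -- an added convention, not part of eq. 1
    return angle

print(find_angle(1, 0, 0, 0, 0, 1))    # 90.0: perpendicular limb segments
print(find_angle(-1, 0, 0, 0, 1, 0))   # 180.0: fully extended (straight line)
```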
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Real-time Pose Assessment</title>
        <p>Finally, real-time pose assessment is achieved by comparing the measured angles in the pose
detector class, displaying the type of exercise, and providing the degree of motion still required to
perform the particular exercise properly, using a heuristic method. We fix the allowed angle variation
based on reference values gathered from system execution: reference angle values are obtained by
running the system on the dataset and reading the measured angle from the display.</p>
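        <p>A sketch of this heuristic comparison is shown below: the feedback reports the degree of motion still required to bring a measured joint angle into a reference range. The function name and the elbow-flexion range of 40-160 degrees are illustrative assumptions, not values taken from the system.</p>

```python
def exercise_feedback(angle, ref_min, ref_max):
    """Heuristic feedback: compare a measured joint angle against a
    reference range and report the remaining degree of motion.

    In the actual framework the reference ranges are gathered from
    system execution; here they are simply passed in as parameters.
    """
    if angle < ref_min:
        return f"move further: {ref_min - angle:.0f} degrees to go"
    if angle > ref_max:
        return f"move back: {angle - ref_max:.0f} degrees beyond range"
    return "correct pose"

# Hypothetical elbow-flexion reference range of 40-160 degrees
print(exercise_feedback(25, 40, 160))   # move further: 15 degrees to go
print(exercise_feedback(90, 40, 160))   # correct pose
```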
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results</title>
      <p>This work uses body landmarks obtained with the help of the "mediapipe" module, as shown in Fig.
1. The pose is analyzed by finding the angle using eq. 1. Table 2 shows the poses and parameters used
to analyze the exercise pose in real-time.</p>
      <p>Figure 4 shows a sample execution of our system's output with the corresponding exercise pose
label when the exercise is performed correctly by the user.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion and Future Work</title>
      <p>The main focus of this work is to provide real-time visual feedback on the correctness of exercise
poses performed by the participants for various upper limb exercises. The angular deflection of the
body parts is also displayed, clarifying the angle deviation the user still has to achieve. The system
does not require a specialized capture device such as a Microsoft Kinect sensor or an RGB-depth
camera to analyze the correctness of the exercise pose; human action is captured with the webcam of
a desktop or laptop. Even though the system provides good results and guidance, there are still
some limitations to consider for future work.</p>
      <p>1. The accuracy of the system needs to be analyzed to improve the quality of its exercise pose
classification outcome.
2. We plan an in-home lower-body rehabilitation system that allows patients to complete recuperation
at home on their own through an Android mobile application.
3. Occlusion of body parts during real-time analysis is still a big challenge for the system.
4. Analysis of musculoskeletal patients' exercise poses based on the American Academy of
Orthopaedic Surgeons (AAOS) standard, using goniometer angle measurement, for the exercises
prescribed by the physiotherapist.</p>
      <p>The proposed system is simple to use and provides good results to the performer. The framework
gives precise output for analyzing complex exercise poses, concentrating on musculoskeletal disorder
patients. Using this system gives users good guidance about their exercise pose.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>P. B.</given-names>
            <surname>Rodrigues</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Xiao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y. E.</given-names>
            <surname>Fukumura</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Awada</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Aryal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Becerik-Gerber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Lucas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. C.</given-names>
            <surname>Roll</surname>
          </string-name>
          ,
          <article-title>Ergonomic assessment of office worker postures using 3d automated joint angle assessment</article-title>
          ,
          <source>Advanced Engineering Informatics</source>
          <volume>52</volume>
          (
          <year>2022</year>
          )
          <fpage>101596</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>P.</given-names>
            <surname>Vinothini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Halim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. R.</given-names>
            <surname>Umar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Too</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Halim</surname>
          </string-name>
          ,
          <article-title>A future framework for musculoskeletal disorders symptoms among computer office workers</article-title>
          ,
          <source>International Journal of Physiotherapy</source>
          <volume>5</volume>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>G.</given-names>
            <surname>Samhitha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. S.</given-names>
            <surname>Rao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Rupa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ekshitha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Jaswanthi</surname>
          </string-name>
          , Vyayam:
          <article-title>Artificial intelligence based bicep curl workout tracking system</article-title>
          ,
          <source>in: 2021 International Conference on Innovative Computing, Intelligent Communication and Smart Electrical Systems (ICSES)</source>
          , IEEE,
          <year>2021</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>V. C.</given-names>
            <surname>Chan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. B.</given-names>
            <surname>Ross</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. L.</given-names>
            <surname>Clouthier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. L.</given-names>
            <surname>Fischer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. B.</given-names>
            <surname>Graham</surname>
          </string-name>
          ,
          <article-title>The role of machine learning in the primary prevention of work-related musculoskeletal disorders: A scoping review</article-title>
          ,
          <source>Applied Ergonomics</source>
          <volume>98</volume>
          (
          <year>2022</year>
          )
          <fpage>103574</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>D.</given-names>
            <surname>Ekambaram</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Ponnusamy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. T.</given-names>
            <surname>Natarajan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. F. S. F.</given-names>
            <surname>Khan</surname>
          </string-name>
          ,
          <article-title>Artificial intelligence (ai) powered precise classification of recuperation exercises for musculoskeletal disorders</article-title>
          ,
          <source>Traitement du Signal</source>
          <volume>40</volume>
          (
          <year>2023</year>
          )
          <fpage>767</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S.</given-names>
            <surname>Sardari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Sharifzadeh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Daneshkhah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Nakisa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. W.</given-names>
            <surname>Loke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Palade</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. J.</given-names>
            <surname>Duncan</surname>
          </string-name>
          ,
          <article-title>Artificial intelligence for skeleton-based physical rehabilitation action evaluation: A systematic review</article-title>
          ,
          <source>Computers in Biology and Medicine</source>
          <volume>158</volume>
          (
          <year>2023</year>
          )
          <fpage>106835</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Rahman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Sarker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. N.</given-names>
            <surname>Haque</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Uttsha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. F.</given-names>
            <surname>Islam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Deb</surname>
          </string-name>
          ,
          <article-title>Ai-driven stroke rehabilitation systems and assessment: A systematic review</article-title>
          ,
          <source>IEEE Transactions on Neural Systems and Rehabilitation Engineering</source>
          <volume>31</volume>
          (
          <year>2022</year>
          )
          <fpage>192</fpage>
          -
          <lpage>207</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>V.</given-names>
            <surname>Bijalwan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. B.</given-names>
            <surname>Semwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. K.</given-names>
            <surname>Mandal</surname>
          </string-name>
          ,
          <article-title>Hdl-psr: Modelling spatio-temporal features using hybrid deep learning approach for post-stroke rehabilitation</article-title>
          ,
          <source>Neural Processing Letters</source>
          <volume>55</volume>
          (
          <year>2023</year>
          )
          <fpage>279</fpage>
          -
          <lpage>298</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>G. G.</given-names>
            <surname>Chiddarwar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ranjane</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Chindhe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Deodhar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Gangamwar</surname>
          </string-name>
          ,
          <article-title>Ai-based yoga pose estimation for android application</article-title>
          ,
          <source>Int J Inn Scien Res Tech</source>
          <volume>5</volume>
          (
          <year>2020</year>
          )
          <fpage>1070</fpage>
          -
          <lpage>1073</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A.</given-names>
            <surname>Nagarkoti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Teotia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Mahale</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. K.</given-names>
            <surname>Das</surname>
          </string-name>
          ,
          <article-title>Realtime indoor workout analysis using machine learning &amp; computer vision</article-title>
          ,
          <source>in: 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)</source>
          , IEEE,
          <year>2019</year>
          , pp.
          <fpage>1440</fpage>
          -
          <lpage>1443</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>O. D. A.</given-names>
            <surname>Prima</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Imabuchi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ono</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Murata</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Ito</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Nishimura</surname>
          </string-name>
          ,
          <article-title>Single camera 3D human pose estimation for tele-rehabilitation</article-title>
          ,
          <source>Proceedings of the eTELEMED 2</source>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>X.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Motion parameters measurement of user-defined key points using 3D pose estimation</article-title>
          ,
          <source>Engineering Applications of Artificial Intelligence</source>
          <volume>110</volume>
          (
          <year>2022</year>
          )
          <fpage>104667</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>G.</given-names>
            <surname>Dsouza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Maurya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Patel</surname>
          </string-name>
          ,
          <article-title>Smart gym trainer using human pose estimation</article-title>
          , in:
          <source>2020 IEEE International Conference for Innovation in Technology (INOCON)</source>
          , IEEE,
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>4</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>B.</given-names>
            <surname>Debnath</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>O'Brien</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Yamaguchi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Behera</surname>
          </string-name>
          ,
          <article-title>A review of computer vision-based approaches for physical rehabilitation and assessment</article-title>
          ,
          <source>Multimedia Systems</source>
          <volume>28</volume>
          (
          <year>2022</year>
          )
          <fpage>209</fpage>
          -
          <lpage>239</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>L.</given-names>
            <surname>Alzubaidi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. J.</given-names>
            <surname>Humaidi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Al-Dujaili</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Duan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Al-Shamma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Santamaría</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Fadhel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Al-Amidie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Farhan</surname>
          </string-name>
          ,
          <article-title>Review of deep learning: concepts, CNN architectures, challenges, applications, future directions</article-title>
          ,
          <source>Journal of Big Data</source>
          <volume>8</volume>
          (
          <year>2021</year>
          )
          <fpage>1</fpage>
          -
          <lpage>74</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>