<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Research on Rotational Object Recognition Based on HSV Color Space and Gamma Correction</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Zhujun Nie</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Teng Long</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Zhangbing Zhou</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>School of Information Engineering, China University of Geosciences (Beijing)</institution>
          ,
          <addr-line>Beijing 100083</addr-line>
          ,
          <country country="CN">China</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Telecom SudParis, Institut polytechnique de Paris</institution>
          ,
          <addr-line>Paris</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
      </contrib-group>
      <fpage>178</fpage>
      <lpage>189</lpage>
      <abstract>
<p>In the current digital era, image processing and pattern recognition play a critical role in fields such as environmental monitoring in the Internet of Things and smart city planning. However, traditional target detection algorithms face performance challenges when dealing with rotated objects, such as low recognition accuracy. To address this issue, this study investigated a deep learning-based method for detecting rotated objects. Firstly, by applying hue, saturation, value (HSV) processing and gamma correction to the images, the image quality was optimized to enhance object recognition capability. Secondly, this research introduced the MMRotate framework dedicated to detecting rotated objects, which, compared to traditional target detection frameworks, better meets the detection needs for rotated objects, thereby improving detection accuracy and robustness. Finally, from the perspective of the Internet of Things, this study classified and experimentally validated relevant datasets in an IoT environment, showing the performance of different targets on different datasets. Overall, this study provides new ideas and methods for addressing the shortcomings of rotated object detection in image processing and pattern recognition in the IoT environment, offering valuable insights and guidance for the development of smart cities and environmental monitoring.</p>
      </abstract>
      <kwd-group>
        <kwd>HSV color space</kwd>
        <kwd>Gamma correction</kwd>
        <kwd>Internet of Things</kwd>
        <kwd>image processing</kwd>
        <kwd>pattern recognition</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The Internet of Things (IoT) technology represents a pivotal direction in modern information
technology development, enabling various devices and objects to connect and exchange data
through embedded sensors, software, and other technologies. The widespread adoption of
this technology has profoundly transformed multiple sectors, including industry, agriculture,
healthcare, urban management, and domestic life. With the rapid development of the IoT
industry, a wide array of IoT devices has become ubiquitous in everyday life, particularly image
acquisition devices, which play a crucial role in numerous application scenarios. However,
the precision of these devices in recognizing rotated targets is often limited. Concurrently, as
the IoT scales up rapidly, the influx of massive volumes of image data into IoT environments
poses a challenge due to slow recognition speeds, which is an urgent issue that needs to be
addressed[
        <xref ref-type="bibr" rid="ref1 ref2 ref3">1, 2, 3</xref>
        ].
      </p>
      <p>In recent years, as detection tasks have continuously evolved, traditional horizontal bounding
boxes have become inadequate for meeting the demands of specific fields. Consequently,
researchers have begun to reconsider the representation of objects. To address this challenge,
methods such as increasing the degrees of freedom in regression have been adopted to achieve
more flexible object detection. This novel detection approach is known as rotated object
detection. Ensuring high precision while effectively conducting rotated object detection has
become a current research hotspot. Numerous factors can influence the performance of deep
learning-based detectors.</p>
      <p>
        In today’s digital age, image processing and pattern recognition technologies have become
core components of Internet of Things (IoT) applications. With the widespread adoption of
IoT technologies, the importance of high-resolution images in urban planning, environmental
monitoring, and intelligent transportation is increasingly highlighted. Accurately detecting and
recognizing targets at various angles and postures within images, however, remains challenging,
and rotated object detection addresses this need with more flexible bounding box
representations. In this study, the dataset was
enhanced using Gamma correction[
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] in the HSV[
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] color space and by conducting experiments
with rotated bounding box definitions under the MMRotate[
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] framework. These enhancement
processes significantly improved image visual quality, increasing color contrast and saturation,
and enhancing aesthetics and recognizability. Additionally, the impact of various conditions on
rotated object recognition was clearly highlighted.
      </p>
      <p>The remainder of this paper is organized as follows. Section 2 reviews recent related
work on object detection in the Internet of Things. Section 3 introduces the HSV-based Gamma
image processing method and discusses the application of the MMRotate framework
built upon it. Section 4 describes the experimental setup and presents the
results. Section 5 concludes with a brief summary and directions for future work.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        The application of Internet of Things (IoT) object recognition refers to the use of IoT technology
and image processing techniques to automatically detect and identify target objects in various
scenarios. This technology demonstrates significant potential in enhancing security, optimizing
resource utilization, improving efficiency, and enhancing the quality of life across different fields.
Mohaimenul et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] developed a system based on the ESP32-CAM platform and the YOLOv8
object detection model, which efficiently provides real-time alerts by recognizing endangered
species and harmful animals in agricultural environments. Maithili et al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] proposed an
IoT-based automated object recognition system that offers high-accuracy object detection
and recognition for visually impaired individuals in both indoor and outdoor environments,
simplifying their mobility challenges. Swapna et al. [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] introduced a method for implementing
object detection in embedded IoT devices by integrating deep learning algorithms, achieving
real-time object detection widely applicable in security, healthcare, and workplace environments.
      </p>
      <p>
        Rotational object detection is used for detecting rotated, tilted, and deformed objects in images
or videos, typically described by rotated boxes or polygons, and employing deep learning models
with geometric transformations for precise detection. Cai et al. [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] introduced the RealSR
dataset with low-resolution and high-resolution image pairs captured by digital zoom and
post-processing. Feng et al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] developed RINet, a weakly-supervised, end-to-end rotation-invariant aerial object
detection network utilizing multi-branch detectors to refine instances
with varying rotational awareness, generating rotational consistency supervision and coupling
predictions across branches to explore potential instances from different angles, achieving
rotation-invariant learning and multi-instance mining. Concurrently, Wang et al. [11] proposed
a method based on prediction-aware one-to-one label assignment and 3D Max Filtering to
bridge the gap between fully convolutional networks and end-to-end object detection, which
demonstrated superior performance on COCO and CrowdHuman datasets, especially with
auxiliary losses.
      </p>
      <p>Current rotated object detection faces challenges such as high model complexity, high
computational demands, limited labeled datasets, and difficulty in handling various angles
and shapes. This paper uses HSV-based gamma correction to enhance image quality and
mAP, compares different rotated bounding box definitions within the MMRotate framework, and
integrates the HRSC and DOTA datasets to explore image processing applications in the Internet of
Things.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <sec id="sec-3-1">
        <title>3.1. HSV-based Gamma Correction</title>
        <p>
          The HSV [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] (Hue, Saturation, Value) color model is a three-dimensional model used to describe
colors by dividing their attributes into hue, saturation, and value. In the HSV model, hue
corresponds to the type of color, saturation to the purity of the color, and value to the brightness.
This model facilitates intuitive color adjustments, such as changing the hue to alter the color type,
adjusting saturation to control the vividness, and modifying value to adjust brightness. HSV is
particularly suitable for certain image processing algorithms due to its intuitive representation
of color properties.
        </p>
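<p>As a concrete illustration of the HSV model, the sketch below uses Python's standard colorsys module to convert an RGB triple into hue, saturation, and value. This is our own illustrative example, assuming channel values normalized to [0, 1].</p>

```python
import colorsys

# colorsys works on floats in [0, 1]; hue is returned as a fraction of a
# full turn (multiply by 360 for degrees).
r, g, b = 1.0, 0.5, 0.0                 # an orange tone
h, s, v = colorsys.rgb_to_hsv(r, g, b)  # hue is 1/12 of a turn, i.e. 30 degrees;
                                        # fully saturated, full value
```

<p>Adjusting any one of the three attributes independently (as the text describes) and converting back with colorsys.hsv_to_rgb is the basis of intuitive color manipulation in this space.</p>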
        <p>
          Due to the nonlinear nature of human visual perception of brightness, directly displayed
images may exhibit distorted contrast in dark and bright areas. Gamma correction[
          <xref ref-type="bibr" rid="ref4">4</xref>
          ] is typically
used to address this issue by adjusting the image’s brightness and contrast. The primary purpose
of gamma correction is to compensate for the nonlinear response of display devices, ensuring
more accurate image rendering. In simple terms, gamma correction is a nonlinear process that
modifies the image's grayscale values, making the output grayscale values follow a power-law
relationship with the input values. In Equation (1), V_out represents the output luminance,
V_in represents the input luminance, A is a constant, and γ (gamma) is the gamma value. When
γ &lt; 1, brightness increases and details in dark (low grayscale) regions are enhanced; when
γ &gt; 1, brightness decreases and details in bright (high grayscale) regions are emphasized.
This relationship indicates that the output luminance
is a power function of the input luminance, adjusted by the gamma value. Gamma
correction is crucial in image processing to ensure that images are displayed correctly on
diferent devices, compensating for the nonlinear way human eyes perceive brightness and the
nonlinear response of display systems.
        </p>
        <p>V_out = A · V_in^γ (1)</p>
        <p>Figure.1 illustrates gamma correction: the horizontal axis represents input luminance,
and the vertical axis represents output luminance. The blue curve shows the mapping for Gamma
values &lt; 1, which increases image brightness and enhances contrast in low luminance areas for
better detail recognition in darker regions. The red curve shows the mapping for Gamma
values &gt; 1.</p>
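<p>A minimal NumPy sketch of Equation (1); the function and variable names here are our own, chosen for illustration:</p>

```python
import numpy as np

def gamma_correct(v_in, gamma, a=1.0):
    """Equation (1): V_out = A * V_in ** gamma, for luminance in [0, 1]."""
    return a * np.power(v_in, gamma)

v = np.array([0.1, 0.5, 0.9])
brightened = gamma_correct(v, 0.5)  # gamma below 1: values move toward 1
darkened = gamma_correct(v, 2.0)    # gamma above 1: values move toward 0
```

<p>In practice the input image is first normalized to [0, 1], corrected, and rescaled back to the display range.</p>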
        <p>In experimental data preprocessing, adjusting image saturation is crucial for color
performance and visual quality. Existing methods have limitations: inconsistent results across different
image types and inadequate handling of lighting variations. This study combines a novel
HSV-based Gamma correction method to enhance saturation adjustment, improving visual quality and
color performance. Compared to traditional methods, this approach offers enhanced robustness
and applicability, achieving stable, accurate adjustments across scenarios.</p>
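<p>One way the HSV-based correction can be sketched (an assumption of ours, not necessarily the authors' exact pipeline) is to gamma-correct only the V channel. Since V is the per-pixel channel maximum, multiplying all three RGB channels of a pixel by the same factor changes brightness while leaving hue and saturation, which depend on channel ratios, unchanged:</p>

```python
import numpy as np

def hsv_gamma(rgb, gamma):
    """Gamma-correct the V (value) channel of an RGB image in [0, 1].

    V is the per-pixel channel maximum. Scaling all three channels by
    corrected_V / V changes brightness only: hue and saturation depend
    on channel ratios, which a uniform per-pixel scale preserves.
    """
    v = rgb.max(axis=-1, keepdims=True)        # V channel
    v_corr = np.power(v, gamma)                # Equation (1) with A = 1
    scale = np.where(v > 0, v_corr / np.maximum(v, 1e-12), 0.0)
    return np.clip(rgb * scale, 0.0, 1.0)
```

<p>This sketch corresponds to Figure.2 (c): the corrected image keeps its original colors while brightness and contrast are adjusted.</p>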
        <p>The following are images of the dataset processed in three ways: Figure.2 (a) shows the
original image, Figure.2 (b) shows the image after gamma correction, and Figure.2 (c) shows the
image after HSV-based gamma correction.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Analysis of the MMRotate Framework</title>
        <p>
          MMRotate[
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] is a free, open-source toolkit based on PyTorch, focused on rotated bounding
box detection, and is part of the OpenMMLab project. The main branch of the current version
is compatible with PyTorch 1.6 and later versions. The toolkit's features include providing
multiple angle representation methods to accommodate different model configurations; a
modular design that decomposes the task of rotated bounding box detection into multiple
modules, allowing users to easily build customized detectors; and industry-leading
performance with state-of-the-art algorithms and benchmark models, delivering robust support
for rotated bounding box detection tasks.
        </p>
        <p>Figure.3 shows that the MMRotate framework primarily consists of four components:
datasets, models, core, and API. The dataset component handles data loading and
preprocessing, including the datasets required for training, rotation frame data augmentation pipelines,
and samplers for data loading. The model component is the core of the framework,
encompassing rotation detection models and loss functions. The API component provides a user-friendly
interface for model training, testing, and inference. Additionally, evaluation tools and custom
hooks are integral parts of the model training core.</p>
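<p>The modular design shows up directly in MMRotate's Python-dict configs. The fragment below is a sketch in that style; the exact keys and module names vary by model and toolkit version, so treat them as assumptions rather than a verbatim config:</p>

```python
# Sketch of an MMRotate-style modular config. Names follow MMRotate's
# conventions (e.g. angle_version), but exact keys are assumptions that
# vary by model and toolkit version.
angle_version = 'le90'  # rotated-box definition: 'oc', 'le90', or 'le135'

model = dict(
    type='RotatedRetinaNet',          # detector assembled from swappable modules
    bbox_head=dict(
        type='RotatedRetinaHead',
        angle_version=angle_version,  # the head interprets angles in this range
    ),
)
```

<p>Swapping the angle_version string is how the comparative experiments in Section 4 exchange one rotated bounding box definition for another within the same model.</p>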
        <p>Ensuring high precision while effectively conducting rotated object detection has become a
current research hotspot. This section will explore this topic from the perspectives of research
approaches and definition methods.</p>
        <p>OC[12] is a rotation angle representation method in a Cartesian coordinate system, typically
using the coordinates of the upper-left and lower-right corners of a rectangular bounding box
to represent the position and orientation of the target. OC is simple, intuitive, and easy to
implement. It performs well for horizontal or vertical bounding boxes but may be less accurate
for rotated bounding boxes as it struggles to describe rotation angles and aspect ratio changes.
It is suitable for simple horizontal or vertical bounding boxes or tasks with low rotation angle
requirements.</p>
        <p>LE135[13] is a length encoding method based on the direction of the target’s principal axis,
with the angle between the principal axis and the x-axis ranging from -45 degrees to 135 degrees.
LE135 can more accurately represent large-angle rotated targets and performs better for such
targets compared to OC. While effective for large-angle rotations, it may not perform well for
certain angle ranges or occluded targets. It is suitable for tasks requiring precise description of
large-angle rotated targets, enhancing detection accuracy and robustness.</p>
        <p>LE90[14] is a simplified form of LE135, using a length encoding range of -90 degrees to 90
degrees, making it a special case of LE135. LE90 is simpler and more intuitive in calculation
and representation, suitable for scenarios with lower angle precision requirements. Due to its
limited range, it may be less accurate for large-angle rotated targets compared to LE135. It is
suitable for scenarios with low angle precision requirements, thanks to its simple calculation
and representation, especially in resource-constrained situations.</p>
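<p>To make the LE135 and LE90 ranges concrete, the helper below (our own illustrative code, not part of any of the cited toolkits) reduces an arbitrary box angle, modulo the 180-degree symmetry of a rectangle, into either definition's range:</p>

```python
def norm_angle(theta_deg, version):
    """Map an angle (degrees) into a rotated-box definition's range.

    A rectangle is unchanged by a 180-degree rotation, so angles are
    reduced modulo 180. 'le90' covers [-90, 90); 'le135' covers [-45, 135).
    (OC is omitted: its 90-degree range also swaps width and height.)
    """
    lower = {"le90": -90.0, "le135": -45.0}[version]
    return (theta_deg - lower) % 180.0 + lower

# the same physical box angle, expressed in each definition
print(norm_angle(100.0, "le90"))   # -80.0
print(norm_angle(-60.0, "le135"))  # 120.0
```

<p>The difference in where the range is anchored is exactly why LE135 can represent some large-angle targets without wrapping, while LE90 trades that for a simpler, symmetric range.</p>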
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Experiment Setup and Results</title>
      <sec id="sec-4-1">
        <title>4.1. Dataset Introduction</title>
        <p>The DOTA dataset [15] is a comprehensive resource for remote sensing image object detection,
featuring a large collection of high-resolution images from various sensors. It includes diverse
object categories, scales, and complex occlusions, making it vital for advancing object detection
algorithms in high-resolution imagery. This study analyzes targets like vehicles and ships across
various scales in the DOTA dataset, important for military and civilian uses, to thoroughly
validate detection model accuracy; Figure.4 shows the details of the dataset.</p>
        <p>Additionally, this study utilizes the HRSC dataset [16] as a supplementary resource. HRSC
is a remote sensing image dataset used for ship detection and classification, comprising
high-resolution aerial images from various angles and resolutions,
along with detailed annotation information for the ships in these images. The goal of
this dataset is to provide a standard benchmark for the research and evaluation of algorithms
for ship detection, classification, and recognition.</p>
        <p>The dataset features aerial images of varying resolutions, taken under diverse temporal
and weather conditions from platforms like satellites and drones. Additionally, the ships in
the dataset are categorized into different types, including various kinds of vessels like cargo
ships, fishing boats, and yachts. Each ship is annotated with detailed information, including its
position, orientation, and dimensions. Every vessel is accurately marked with bounding boxes
and potential directions. These annotations are crucial for training and evaluating ship detection
algorithms. The dataset provides a wealth of aerial images and ship annotations, equipping
researchers with ample data for studying ship detection, classification, and recognition tasks.</p>
      <p>The HRSC dataset is a vital resource for ship detection and classification research, offering
extensive image data and annotations to aid development in this field.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Experimental Design and Result Analysis</title>
        <p>To evaluate the effects of gamma correction with HSV on object recognition, a comparative
experiment was conducted, analyzing color space characteristics on original images and applying
gamma correction to enhance visual quality. Specifically, the HSV color space was chosen for
its effectiveness in preserving color information. The results are presented as follows.</p>
        <p>The analysis showed that images processed with Gamma correction had higher contrast and
richer color saturation than the originals. Moreover, using Gamma correction in the HSV color
space preserved original colors while improving visual quality. Table.1 presents the results
of the relevant experiments, and Figure.5 and Figure.6 visually demonstrate the regression rate
and accuracy.</p>
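<p>The contrast and saturation comparison above can be quantified with simple proxies; the metrics below are our own illustrative choices, not the paper's evaluation code:</p>

```python
import numpy as np

def contrast_and_saturation(rgb):
    """Return (contrast, mean saturation) proxies for an RGB image in [0, 1].

    Contrast is approximated by the standard deviation of mean luminance;
    saturation uses the HSV formula S = (max - min) / max per pixel.
    """
    lum = rgb.mean(axis=-1)
    maxc, minc = rgb.max(axis=-1), rgb.min(axis=-1)
    sat = np.where(maxc > 0, (maxc - minc) / np.maximum(maxc, 1e-12), 0.0)
    return float(lum.std()), float(sat.mean())
```

<p>Computing these before and after HSV-based gamma correction gives a quick numerical check that the processed images indeed gain contrast and saturation.</p>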
        <p>In object detection tasks, choosing an appropriate rotated bounding box definition is crucial
for accurately identifying targets. Traditional rectangular bounding boxes may not effectively
describe rotated or tilted objects, hence adopting more flexible rotated bounding box definitions
could enhance recognition performance. This study utilized the MMRotate framework and
replaced three different rotated bounding box definitions within the same model, including
varying angle ranges, aspect ratios, and even non-rectangular shapes, to thoroughly investigate
their impact on the accuracy of object recognition. The results show that the LE90 rotated
bounding box definition method demonstrates certain advantages. This may be because the
LE90 definition's rotated bounding box is better suited for capturing large, horizontal or nearly
horizontal targets, thereby improving the recognition performance for these types of objects. In
terms of average precision, LE90 might slightly lag behind LE135, but its advantages in specific
scenarios are also noteworthy.</p>
        <p>By experimentally comparing different rotated bounding box definition methods, we can
gain a more comprehensive understanding of their performance in object detection tasks. This
provides valuable reference for selecting the most suitable rotated bounding box definition for
specific scenarios.</p>
        <p>In the same model based on the MMRotate framework, studies on different datasets were
conducted to explore the impact of various datasets on object recognition accuracy. Specifically,
multiple datasets from different sources and with different characteristics were used, covering
target images in various scenarios and environments. This diverse dataset selection aims
to comprehensively evaluate the model's performance in different contexts, thereby better
understanding its robustness and applicability. This experiment, in addition to the DOTA
dataset, also included the HRSC dataset. The results are as follows.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>In this paper, we used the HSV-based Gamma correction method to process images, which
effectively improved visual quality and enhanced both color contrast and saturation. According
to the experimental results, the mean average precision (mAP) on the original dataset is 0.74,
while the mAP on the dataset processed with Gamma correction in the HSV color space is
0.82. Using the MMRotate framework, we conducted comparative experiments under three
different rotated bounding box definitions to explore the effects of different conditions on
rotated object recognition. Moreover, by integrating the HRSC dataset with the DOTA dataset,
this study not only enriches the theoretical and practical aspects of the field but also explores
the potential of these image processing techniques for object recognition and processing in the
Internet of Things (IoT) environment. This provides new methods and insights for enhancing
the performance of devices and services.</p>
      <p>In the future, due to the imbalance in the number of samples across different categories
in remote sensing image datasets, we aim to augment the samples for categories with fewer
instances to achieve a more balanced distribution. Additionally, due to the limitations of the
experimental environment, we were unable to conduct experiments with more models. We will
strive to incorporate additional models to enrich the experimental data.</p>
      <p>[11] J. Wang, L. Song, Z. Li, et al., End-to-end object detection with fully convolutional network,
in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,
2021, pp. 15849–15858.
[12] W. Li, R. Shang, Z. Ju, J. Feng, S. Xu, W. Zhang, Ellipse IoU loss: Better learning for rotated
bounding box regression, IEEE Geoscience and Remote Sensing Letters 21 (2024) 1–5.
doi:10.1109/LGRS.2023.3345881.
[13] Y. Pu, Y. Wang, Z. Xia, Y. Han, Y. Wang, W. Gan, Z. Wang, S. Song, G. Huang,
Adaptive rotated convolution for rotated object detection, in: Proceedings of the IEEE/CVF
International Conference on Computer Vision, 2023, pp. 6589–6600.
[14] X. Yang, X. Yang, J. Yang, Q. Ming, W. Wang, Q. Tian, J. Yan, Learning high-precision
bounding box for rotated object detection via Kullback-Leibler divergence, Advances in
Neural Information Processing Systems 34 (2021) 18381–18394.
[15] G.-S. Xia, et al., DOTA: A large-scale dataset for object detection in aerial images, in: 2018
IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT,
USA, 2018, pp. 3974–3983. doi:10.1109/CVPR.2018.00418.
[16] Z. Liu, L. Yuan, L. Weng, Y. Yang, A high resolution optical satellite image dataset for ship
recognition and some new baselines, in: Proceedings of the 6th International Conference
on Pattern Recognition Applications and Methods (ICPRAM), Porto, Portugal, 2017.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Borde</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Patil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Sonawane</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <article-title>Object recognition based on deep learning algorithms using embedded iot with interactive interface</article-title>
          ,
          <source>in: 2023 7th International Conference on Intelligent Computing and Control Systems (ICICCS)</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>1581</fpage>
          -
          <lpage>1586</lpage>
          . doi:10.1109/ICICCS56967.2023.10142821.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>N.</given-names>
            <surname>Shahid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Aneja</surname>
          </string-name>
          ,
          <article-title>Internet of things: Vision, application areas and research challenges</article-title>
          , in: 2017 International Conference on
          <string-name>
            <surname>I-SMAC</surname>
          </string-name>
          (
          <article-title>IoT in Social, Mobile, Analytics and Cloud) (I-SMAC</article-title>
          ),
          <year>2017</year>
          , pp.
          <fpage>583</fpage>
          -
          <lpage>587</lpage>
          . doi:10.1109/I-SMAC.2017.8058246.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>K.</given-names>
            <surname>Shafique</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. A.</given-names>
            <surname>Khawaja</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Sabir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Qazi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mustaqim</surname>
          </string-name>
          ,
          <article-title>Internet of things (iot) for next-generation smart systems: A review of current challenges, future trends and prospects for emerging 5g-iot scenarios</article-title>
          ,
          <source>IEEE Access 8</source>
          (
          <year>2020</year>
          )
          <fpage>23022</fpage>
          -
          <lpage>23040</lpage>
          . doi:10.1109/ACCESS.2020.2970118.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Tarawneh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hassanat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Elkhadiri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Chetverikov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Almohammadi</surname>
          </string-name>
          ,
          <article-title>Automatic gamma correction based on root-mean-square-error maximization</article-title>
          ,
          <source>2020 International Conference on Computing and Information Technology (ICCIT-1441)</source>
          (
          <year>2020</year>
          )
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          . URL: https://api.semanticscholar.org/CorpusID:227220297.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>C.</given-names>
            <surname>Junhua</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Jing</surname>
          </string-name>
          ,
          <article-title>Research on color image classification based on hsv color space</article-title>
          , in: 2012 Second International Conference on Instrumentation, Measurement, Computer, Communication and Control,
          <year>2012</year>
          , pp.
          <fpage>944</fpage>
          -
          <lpage>947</lpage>
          . doi:10.1109/IMCCC.2012.226.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , et al.,
          <article-title>Mmrotate: A rotated object detection benchmark using pytorch</article-title>
          ,
          <source>in: Proceedings of the 30th ACM International Conference on Multimedia</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>7331</fpage>
          -
          <lpage>7334</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M. A. K.</given-names>
            <surname>Raiaan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. M.</given-names>
            <surname>Fahad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chowdhury</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Sutradhar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. S.</given-names>
            <surname>Mihad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Islam</surname>
          </string-name>
          ,
          <article-title>Iotbased object-detection system to safeguard endangered animals and bolster agricultural farm security</article-title>
          ,
          <source>Future Internet</source>
          <volume>15</volume>
          (
          <year>2023</year>
          ). URL: https://doi.org/10.3390/fi15120372. doi:10.3390/fi15120372.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M. K.</given-names>
            <surname>Hebbar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. K.</given-names>
            <surname>Pullela</surname>
          </string-name>
          ,
          <article-title>Object recognition system for the visually impaired: Leveraging iot and remote server integration</article-title>
          ,
          <source>in: 2023 International Conference on the Confluence of Advancements in Robotics, Vision and Interdisciplinary Technology Management (ICRVITM)</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>7</lpage>
          . doi:10.1109/IC-RVITM60032.2023.10434991.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>J.</given-names>
            <surname>Cai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zeng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Yong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Cao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <article-title>Toward real-world single image super-resolution: A new benchmark and a new model</article-title>
          , in: 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea (South),
          <year>2019</year>
          , pp.
          <fpage>3086</fpage>
          -
          <lpage>3095</lpage>
          . doi:10.1109/ICCV.2019.00318.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>X.</given-names>
            <surname>Feng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Yao</surname>
          </string-name>
          , G. Cheng, J. Han,
          <article-title>Weakly supervised rotation-invariant aerial object detection network</article-title>
          ,
          <source>in: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)</source>
          , New Orleans, LA, USA, 2022, pp. 14126–14135. doi:10.1109/CVPR52688.2022.01375.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>