<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>CEUR Workshop Proceedings</journal-title>
      </journal-title-group>
      <issn>1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Underwater Image Segmentation and Image Quality Enhancement Using Deep Learning Models</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Veeresh</string-name>
          <email>vb1500@srmist.edu.in</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Thillaigovindan Senthil Kumar</string-name>
        </contrib>
        <aff>Dept. of Computing Technologies, SRMIST, Kattankulathur</aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>2</volume>
      <fpage>9</fpage>
      <lpage>30</lpage>
      <abstract>
        <p>Due to their crucial role in ensuring the continuity of life on Earth, marine ecosystems and their habitats are growing in significance. Because they are remote and difficult to access, marine ecosystems are frequently observed using underwater cameras that capture videos and photographs for ecosystem analysis and environmental preservation. Unfortunately, it is challenging to train deep models because of the lack of underwater photos with undistorted reference images. It is therefore essential to establish more efficient learning techniques that harvest better supervised information from limited training samples. Deep Learning (DL), a modern subset of Artificial Intelligence (AI), has achieved better outcomes in the analysis of visual data. Despite its wide range of applications, its usage in underwater image segmentation is still being researched. In this survey, several works of literature related to underwater image segmentation based on machine learning and deep learning models are taken from the IEEE Xplore, ResearchGate, PubMed, Google Scholar, Scopus, and Web of Science search engines. Most of the literature was published between 2016 and 2023. This literature survey identifies some important shortcomings of state-of-the-art underwater image segmentation models considering various methodologies and datasets. Finally, a precise underwater image segmentation model based on deep learning algorithms can become a genuine option for underwater photo quality enhancement.</p>
      </abstract>
      <kwd-group>
        <kwd>Image processing</kwd>
        <kwd>Underwater</kwd>
        <kwd>Deep learning</kwd>
        <kwd>Marine ecosystem</kwd>
        <kwd>Machine learning</kwd>
        <kwd>Computer vision</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Underwater image segmentation has several applications in fields such as biological
research and underwater inspection, and it plays a significant role in more difficult tasks such as
image restoration, animal counting, and robot obstacle avoidance. The segmentation approach
therefore needs to be able to separate underwater photos captured in
extreme scenarios [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Without appropriate instruments to investigate our
planet's largest ecosystem, the marine environment, a complete understanding of
the planet and its ecosystems cannot be achieved. Through the use of underwater cameras,
Computer Vision (CV) techniques can assist us in better understanding and managing remote
marine ecosystems [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. The use of unmanned underwater robots with high-performance
surveillance [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] modules as a primary instrument for ocean exploration is on the rise.
Unfortunately, the environment for underwater photography is highly complex and is constantly altered
by factors including plankton, underwater drifting sand, illumination variations, and local
disturbances. In addition, underwater light attenuation frequently results in low contrast and fuzzy
detail information, which presents significant difficulties for vision-based underwater activities.
Underwater image enhancement has therefore recently attracted a lot of attention
and rigorous research [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Because of the lack
of survival endurance, marine environment investigation has always been more challenging
than terrestrial ecosystem exploration. Oceanography, maritime defence, navigation,
and marine life analysis are all areas that require exploration, and underwater exploration
has drawn a lot of research interest in recent years. In contrast to outdoor
photography, underwater photography involves complex lighting, atmosphere, and color casts,
which makes the restoration process a difficult operation. The wavelength-dependent non-uniform
attenuation of light is one of the primary causes of such visual distortions. Moreover, marine
snow, which enhances the effect of light scattering, has a significant impact on how visible
the undersea biosphere is through the lens [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Semantic segmentation is now being handled by
Deep Neural Networks (DNNs) that have been trained via supervised learning. The so-called
label-space used for this training consists of datasets of images that have been labelled
with a set of specified labels. These datasets are particularly expensive to produce since they
must be manually labelled at the pixel level, unlike datasets for object detection, which
merely need bounding-box labels. Because of the magnitude of their visual data,
human processing is time- and cost-inefficient, necessitating a fundamental change in data
analysis through cutting-edge technologies like Deep Learning (DL). This paper is organized
as follows. Section 2 discusses several available techniques for underwater image
segmentation. Section 3 discusses the details of available datasets. Section 4 discusses the
related work regarding the underwater image segmentation and image enhancement. Section
5 describes the generalized methodology for the underwater image segmentation. Section 6
discusses the simulation metrics and tools, and section 7 concludes the paper.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Underwater image segmentation techniques</title>
      <p>Several techniques have been applied for underwater image segmentation and image
enhancement. A few are discussed here:</p>
      <sec id="sec-2-1">
        <title>2.1. Machine Learning</title>
        <p>Machine Learning is a kind of automation process built on machine intelligence to link with
the physical world. Artificial Intelligence (AI) has two important subsets, namely
Machine Learning (ML) and Deep Learning (DL) algorithms. These algorithms are mainly focused on
building models for prediction and data analysis in order to generate important insights. Thus,
both ML and DL algorithms are strong and praiseworthy methods for generating meaningful
information from raw data, especially in the case of image data analytics. In other words, ML
can be defined as the scientific discipline with the objective of learning from data using
an automated process through computers. The main basis of machine learning is statistical
analysis to discover insights from data using computing methods. This technique is capable of
handling billions or trillions of data points by designing a computational statistical model.</p>
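As a minimal, hypothetical sketch of this statistical view of ML (the sample pixels and tolerance below are invented for illustration, not taken from any real dataset), a "model" can be fitted by estimating the mean colour of known water pixels and classifying new pixels by their distance to that mean:

```python
# A tiny statistical "model": estimate the mean colour of known water pixels,
# then label new pixels as background when they lie close to that mean.
# The sample values and tolerance are illustrative placeholders.
water_samples = [(20, 60, 90), (25, 70, 100), (18, 65, 95)]  # (R, G, B) water pixels

n = len(water_samples)
mean_water = tuple(sum(p[c] for p in water_samples) / n for c in range(3))

def is_background(pixel, tolerance=30.0):
    """True if the pixel's Euclidean distance to the mean water colour is small."""
    dist = sum((pixel[c] - mean_water[c]) ** 2 for c in range(3)) ** 0.5
    return dist < tolerance
```

Real systems replace this hand-rolled statistic with learned classifiers, but the principle is the same: a computational statistical model fitted to data.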
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Deep Learning</title>
        <p>
          Computer vision is one of the areas where deep learning excels. DL has been effectively
implemented for a variety of difficult computer vision problems [
          <xref ref-type="bibr" rid="ref6 ref7 ref8">6, 7, 8</xref>
          ], including semantic image
segmentation, since its deep neural network topologies can learn complicated mappings from
high-dimensional data when performing feature extraction. The primary benefit of DL is
its capacity to learn features from various data formats, including images of underwater
objects [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. Deep Learning algorithms have gained high praise in the last few years for
designing complex, deeper architectures and networks and have shown their worth in numerous
applications in research, medicine, science, and technology. Different machine and deep learning
techniques, such as ANN, CNN, KNN, and GAN, can be adopted for learning the behaviour and
patterns of underwater images so that underwater image regions can be identified easily.
        </p>
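The core feature-extraction operation inside a CNN can be illustrated with a minimal sketch: a single 2D "valid" convolution (implemented as cross-correlation, as deep learning frameworks do). The toy patch and kernel below are invented for illustration:

```python
import numpy as np

# Minimal sketch of CNN feature extraction: one 2D "valid" convolution
# (cross-correlation). Frameworks like PyTorch do this with learned kernels;
# here a fixed vertical-edge kernel is applied to a toy intensity patch.
def conv2d(image, kernel):
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            # Elementwise product of the window with the kernel, then sum.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

patch = np.array([[0, 0, 9, 9],
                  [0, 0, 9, 9],
                  [0, 0, 9, 9],
                  [0, 0, 9, 9]], dtype=float)
edge_kernel = np.array([[1.0, -1.0],
                        [1.0, -1.0]])
feature_map = conv2d(patch, edge_kernel)  # responds strongly at the vertical edge
```

A deep network stacks many such learned kernels with nonlinearities, which is how it builds the complicated mappings from high-dimensional data described above.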
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Datasets involved in underwater Segmentation</title>
      <p>A dataset is a collection of related, discrete items of data that can
be accessed individually or in combination, or managed as a single unit. An image dataset consists of
digitised images that have been carefully selected for use in training, testing, and assessing the
performance of computer vision and machine learning algorithms. A few
popular datasets involved in underwater image segmentation and image enhancement are discussed below.</p>
      <p>
        NAUTEC UWI Real: 700 real underwater photos from the internet were used to create this
dataset. Foreground and background pixels were manually separated
in the photos. For training and testing, 300 photos each were chosen at random. In Fig. 1,
three samples from the dataset are shown together with the corresponding ground truth. The
collection includes photos that were taken in a variety of locations, with varying amounts of
light and water, and without distinguishing between benthic and pelagic zones. Both naturally
and artificially illuminated photos exist. These photographs were captured in the wild, therefore
divers, marine life, and several other underwater items are visible [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
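The random split described above (700 photos, 300 drawn for training and 300 for testing) can be sketched as follows; the file names are placeholders, not the dataset's real layout:

```python
import random

# Sketch of a random train/test split in the style described for
# NAUTEC UWI Real: 700 photos, 300 for training and 300 for testing.
random.seed(42)  # fixed seed so the split is reproducible
photos = ["img_%03d.png" % i for i in range(700)]  # placeholder file names
shuffled = random.sample(photos, len(photos))      # random permutation
train_set = shuffled[:300]
test_set = shuffled[300:600]  # the remaining 100 photos are left unused
```

Shuffling before slicing guarantees the two sets are disjoint, which is essential for a fair evaluation.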
      <p>
        UIEB: This dataset includes 950 real underwater photographs, 890 of which have
corresponding reference photographs. The remaining 60 underwater photographs, for which no
appropriate reference images could be found, are viewed as challenging data. In-depth
studies can be conducted using this dataset [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>NYU Depth V2: This dataset offers segmentation-labelled pictures and excellent depth maps.
The original segmentation labels were altered by classifying pixels labelled as floor, wall,
roof, etc. as background and pixels labelled as objects as foreground.</p>
      <p>NAUTEC UWI Sim1000: In comparison to Sim200, this dataset has four extra stages of
increasing simulated underwater turbidity, totalling 1000 simulated underwater photos.</p>
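The label remapping described for NYU Depth V2 (structural classes become background, object classes become foreground) can be sketched as below; the class names and ids are illustrative, not the dataset's real taxonomy:

```python
# Sketch of remapping semantic labels to a binary foreground/background mask,
# as described for NYU Depth V2. Class names/ids here are placeholders.
BACKGROUND_CLASSES = {"floor", "wall", "roof"}

def to_binary_mask(label_grid, id_to_name):
    """0 = background (floor/wall/roof), 1 = foreground (objects)."""
    return [[0 if id_to_name[c] in BACKGROUND_CLASSES else 1 for c in row]
            for row in label_grid]

id_to_name = {0: "floor", 1: "wall", 2: "fish", 3: "roof"}
labels = [[0, 2],
          [3, 1]]
mask = to_binary_mask(labels, id_to_name)
```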
    </sec>
    <sec id="sec-4">
      <title>4. Literature Survey</title>
      <p>As deep learning has made significant progress in a variety of low-level vision tasks,
researchers are beginning to apply it to the
improvement of underwater images. Moreover, learning techniques using paired samples from
the real world have drawn a lot of interest. Researchers have worked hard in recent years
to develop novel methods for creating samples, improved learning techniques, and network
structures. Here, a few recent studies are highlighted.</p>
      <p>
        In order to determine whether two photographs were taken in the same location, one study
proposes a cross-domain, cross-view image matching method employing a colour aerial
image and an underwater acoustic image. The technique is designed to compare photos taken in
partially structured environments that have common features, such as harbours and marinas.
The processing pipeline combines deep neural networks and conventional image processing
methods [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>
        For low-energy, real-time image analysis at the undersea edge, an
optimised deep learning approach has been suggested. This allows transmitting only the low-volume
outcomes that can be delivered through wireless sensor networks, instead of the large image data that
would otherwise be required. The authors segment fish in underwater videos and make comparisons
with traditional methods to show the advantages of their ideas in practical applications. They demonstrate that
processing underwater-captured photos at the collecting edge can be done 4 times faster than
using a land-based server [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
      <p>
        In this research, the authors first construct an underwater image synthesis algorithm (UISA),
which allows them to create a synthetic underwater image from an outdoor ground-truth
image based on a real-world underwater image. Using this approach, they create
the Synthetic Underwater Image Dataset (SUID), a newly designed benchmark that includes both
real-world and artificial underwater photos of the same scene. The SUID, which has
strong reliability and viability, is built using the underwater image formation model (IFM) and
features of underwater optical propagation [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
      <p>
        An effective and reliable technique called MLLE has been suggested for enhancing
underwater images. To be more precise, it begins by locally modifying an input image's colour
and details in accordance with a minimum colour loss principle and a maximum attenuation
map-guided fusion technique. The mean and variance of local image blocks are then
computed using integral and squared-integral maps, which are utilised to adaptively
adjust the contrast of the input image. Meanwhile, a colour balance approach
is presented to balance the colour discrepancies between the a and b channels of the CIELAB
colour space [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
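The integral-map idea mentioned above can be sketched briefly: with a summed-area table, the sum (and hence the mean; with a squared map, also the variance) of any local block is obtained in constant time. The toy image below is invented for illustration:

```python
import numpy as np

# Sketch of an integral image (summed-area table) for constant-time local
# block statistics, as used for local mean/variance in MLLE-style methods.
def integral_image(img):
    # ii[r, c] = sum of img[0:r+1, 0:c+1]
    return np.cumsum(np.cumsum(img, axis=0), axis=1)

def block_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1+1, c0:c1+1], recovered from the integral image."""
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]  # add back the doubly subtracted corner
    return total

img = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 intensity image
ii = integral_image(img)
mean_2x2 = block_sum(ii, 1, 1, 2, 2) / 4.0      # mean of the central 2x2 block
```

A squared-integral map built from `img ** 2` in the same way yields the local variance via E[x²] − E[x]².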
      <p>
        A local spatial mixture (LSM) technique has been suggested to segment
images from any kind of deployed side-scan sonar system. This technique improves
segmentation by taking into account the potential spatial connection between nearby pixels
while estimating pixel labels within sonar pictures. By including an additional step (I-step)
between the expectation (E-step) and maximisation (M-step) steps, LSM alters the
expectation-maximisation algorithm. A new initialisation approach, whose thresholds
are automatically determined, is used to attain and sustain robustness in varied underwater
conditions and to combat intensity inhomogeneity [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
      </p>
      <p>
        To address the issue of image segmentation, this research suggests a whale
optimisation algorithm (WOA) built on Kapur's entropy approach. Exploration and exploitation are
balanced well by the WOA in order to avoid premature convergence and reach the global best
solution. A number of studies on underwater photos from the Harbin Engineering University
experimental pool were carried out [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] to confirm the segmentation accuracy of the WOA.
      </p>
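The objective behind this approach, Kapur's entropy, can be sketched independently of the WOA search itself: a candidate threshold splits the intensity histogram into two classes, and the threshold maximising the sum of the class entropies is chosen. For brevity the sketch below uses exhaustive search in place of the WOA, and the 7-bin histogram is a toy example:

```python
import math

# Sketch of Kapur's entropy thresholding. The WOA in the cited work searches
# for the threshold that maximises this score; here exhaustive search is used.
def kapur_entropy(hist, t):
    """Sum of the entropies of the two classes split at threshold bin t."""
    total = sum(hist)
    p = [h / total for h in hist]
    w0 = sum(p[:t]) or 1e-12   # class probabilities (guard against zero)
    w1 = sum(p[t:]) or 1e-12
    h0 = -sum((q / w0) * math.log(q / w0) for q in p[:t] if q)
    h1 = -sum((q / w1) * math.log(q / w1) for q in p[t:] if q)
    return h0 + h1

hist = [10, 40, 5, 0, 5, 35, 10]  # toy bimodal 7-bin intensity histogram
best_t = max(range(1, len(hist)), key=lambda t: kapur_entropy(hist, t))
```

For a bimodal histogram like this one, the maximising threshold falls in the valley between the two modes.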
      <p>
        A multiscale feature fusion network (MFFN) has been suggested as a technique for improving
underwater sensing scene images. The network, combining feature extraction, feature fusion,
and attention reconstruction modules, is created to extract multi-scale features. This
design can improve the scene's flexibility and aesthetic impact. To fit the nonlinear mapping,
a number of objective functions are also proposed for supervised training [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ].
      </p>
      <p>
        A powerful method is presented to improve underwater photographs that have suffered
from medium scattering and absorption degradation. The solution uses a single image
and does not call for any specialist equipment or knowledge of the underwater conditions or scene
structure. It is based on the fusion of two images which are produced by taking the
original degraded image and applying colour correction and white balancing. The two
images being fused, along with their associated weight maps, are designed to encourage smooth
transfer of edges and colour contrast to the final image. They also adopt a multiscale
fusion technique [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] to prevent artefacts from being produced by the sharp map transitions in
the low frequency components of the reconstructed image.
      </p>
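One common way to obtain the white-balanced input used in fusion approaches like the one above is the gray-world assumption: each channel is scaled so that its mean matches the overall mean. This is only a sketch of that single step (the cited work additionally applies colour correction and multiscale fusion, omitted here), on an invented greenish "underwater" image:

```python
import numpy as np

# Sketch of gray-world white balancing: scale each colour channel so its mean
# matches the global mean, counteracting the green/blue cast of water.
def gray_world(img):
    """img: H x W x 3 float array in [0, 1]."""
    channel_means = img.reshape(-1, 3).mean(axis=0)  # per-channel means
    gray = channel_means.mean()                      # target neutral level
    balanced = img * (gray / channel_means)          # per-channel gain
    return np.clip(balanced, 0.0, 1.0)

# Toy greenish underwater image: the green channel dominates.
img = np.zeros((2, 2, 3))
img[..., 0] = 0.2   # R
img[..., 1] = 0.6   # G
img[..., 2] = 0.4   # B
out = gray_world(img)  # all channels pulled to the common mean of 0.4
```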
      <p>
        To provide visual-friendly and task-oriented enhancement, an object-guided twin
adversarial contrastive learning-based strategy has been suggested for underwater image
enhancement. The authors first create a bilaterally constrained closed-loop adversarial
enhancement mechanism, which reduces the need for paired data through an unsupervised
approach and maintains more informative features by linking with a twin inverse mapping.
They also use contrastive cues throughout
the training phase to give the reconstructed images a more appropriate appearance [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ].
      </p>
      <p>
        In this paper, a deep residual model is proposed as a technique for underwater image
improvement. First, convolutional neural network models are trained using synthetic underwater
images produced by cycle-consistent adversarial networks (CycleGAN). Second, the
very-deep super-resolution reconstruction model (VDSR) is introduced to underwater
resolution applications, along with the Underwater ResNet model, a residual learning model for
underwater image enhancement tasks [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ].
      </p>
      <p>
        To quickly extract complete and clean regions of interest (ROIs) from images with significantly
varying content and quality, a dynamic down-scaling algorithm was developed in one
study. To guarantee the integrity of weak targets based on local two-dimensional (2D) entropy
parameters, the original image was downscaled and dynamic segmentation was carried out
in a scale pyramid space. Then, a number of local thresholds and
clustering gradients were examined iteratively for ROI selection [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ].
      </p>
      <p>
        For the Polaris, a non-governmental Taiwanese oceanic research vessel, the researchers in
this project created holographic image software. The Polaris is a survey vessel that was jointly
built by Dragon Prince Hydro-Survey Enterprise Co. and the National Kaohsiung University of
Science and Technology. The ship displaces 260 tonnes, is 36.98 metres long and
6.80 metres wide, and its top speed is 11 knots. It has experience with such missions
because it has participated in underwater rescue and exploration operations. Survey vessels
frequently encounter interference during underwater exploration operations brought
on by elements such as current velocity, water temperature, spectral conditions, refraction,
climate, ocean currents, the presence of algae, and light reflection by schools of fish [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ].
      </p>
      <p>
        This study focuses on classifying sonar images into multiple categories, including drowning
victim, wreck, aeroplane, mine, and seafloor. Initially, the authors created a real side-scan sonar
image dataset called Seabed Objects-KLSG over an extended period of time; at present it contains
385 wrecks, 36 drowning victims, 62 aeroplanes, 129 mines, and 578 seabed photographs.
Second, taking into account that the real dataset is unbalanced, they proposed a semi-synthetic
data generation technique that uses optical images as input and combines image segmentation
with intensity distribution modelling of different regions to generate sonar images of
aeroplanes and drowning victims [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ].
      </p>
      <p>
        To handle the segmentation of underwater images in wild ecosystems, the authors create a
dataset of genuine underwater photographs as well as various combinations utilising simulated
data, and use them to train two of the top deep learning segmentation models. In addition to
models developed using these datasets, methodologies for image restoration and fine-tuning
are also investigated. All the segmentation models are compared on a testing set of actual
underwater photographs in order to conduct a more thorough evaluation [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ].
      </p>
      <p>
        During the past few years, academics from all around the world have been studying
underwater photography and the capacity to take clear pictures. The entire process of restoring
the collected photos is time-consuming. Due to the physical processes of absorption and
scattering, several defects can be seen in the produced underwater photographs. The main
problems with these photographs are colour distortions, blurriness, and poor contrast effects.
To overcome this, the proposed study utilises a deep learning algorithm to enhance
underwater photographs [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ].
      </p>
      <sec id="sec-4-1">
        <title>4.1. Summary of selected existing research</title>
        <p>
          In 2023, G. Chen, Z. Mao, K. Wang, and J. Shen [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ] proposed a new hybrid transformer network-based architecture for underwater object
          recognition that compares favourably to the most recent advanced detectors. Other recent
          work aims to give sensible guidance for the advancement of outdoor diving instruction.
        </p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Methodology</title>
      <p>The goal of the systematic literature study in the preceding section was to identify and classify
the top methods for applying deep learning to underwater images. A systematic review
of the literature compiles and evaluates previously published studies using predetermined
assessment criteria. Such analyses assist in determining the current state of knowledge in
the relevant field of research. The identification of underwater images benefits greatly from
the use of deep learning models. These models are made up of collections of connected nodes;
their interconnected neural structure is comparable to that of the human brain, and the nodes
collaborate to find solutions to problems. Deep learning algorithms are trained for specific
tasks, after which the networks function as subject-matter experts in those fields. In this study,
deep learning models are trained to segment underwater images and to recreate those images
with better quality. Efficient segmentation and feature extraction can be achieved using a deep
learning model by obtaining regions of interest (ROIs) in the image and by extracting shape,
colour, and texture features. Finally, image quality enhancement of the recreated images is an
additional step to evaluate the performance of the proposed deep learning model.</p>
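The feature-extraction step described above can be sketched for the colour part: given the pixels inside a segmented ROI, per-channel means and standard deviations form a simple feature vector (shape and texture descriptors would be appended in the same way). The ROI values below are invented for illustration:

```python
import numpy as np

# Sketch of colour-feature extraction from a segmented region of interest:
# per-channel mean and standard deviation concatenated into one vector.
def color_features(roi):
    """roi: N x 3 array of the RGB pixels inside the segmented region."""
    means = roi.mean(axis=0)
    stds = roi.std(axis=0)
    return np.concatenate([means, stds])  # 6-dimensional feature vector

roi = np.array([[0.1, 0.5, 0.3],
                [0.3, 0.5, 0.5]])  # two toy ROI pixels
features = color_features(roi)
```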
    </sec>
    <sec id="sec-6">
      <title>6. Results &amp; Discussion</title>
      <p>The tools, programming languages, and performance metrics required to obtain
high-performance results when using deep learning algorithms to design an underwater image
segmentation model are discussed in this section. Tools that are widely used in deep learning
model implementation with Python and MATLAB programming are:</p>
      <p>• Python
• MATLAB
• PyTorch
• OpenCV
• TensorFlow</p>
      <p>The performance of machine learning models designed using these tools
is evaluated statistically. For example, the performance of an underwater image
segmentation model is evaluated in terms of classification accuracy, precision, recall (also
called sensitivity), and F1-measure based on the obtained confusion matrix (true positive, true
negative, false positive, and false negative values).</p>
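These confusion-matrix metrics follow directly from the four counts; the numbers below are invented for illustration:

```python
# Standard confusion-matrix metrics for a binary foreground/background
# segmentation, computed from true/false positive/negative counts.
def segmentation_metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0  # a.k.a. sensitivity
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Toy counts for illustration only.
m = segmentation_metrics(tp=80, tn=90, fp=10, fn=20)
```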
    </sec>
    <sec id="sec-7">
      <title>7. Limitations</title>
      <p>DL, which has been the subject of numerous research efforts, faces a number of difficulties
when it comes to underwater image monitoring. The main difficulties encountered when
creating models for segmenting underwater images are discussed in this section.</p>
      <p>1. Monitoring models need to be capable of identifying items and scenarios against intricate,
challenging backdrops in order to function in aquatic environments. This is a problem for
both the development and training of these algorithms as well as their thorough
testing.
2. Underwater scenes are incredibly dynamic, meaning that the scene’s objects and content
are always changing. The background can switch between being entirely obscured and
being viewable.
3. Refraction can lead to inaccurate depth and distance perception. This is more severe at
shorter distances.
4. There is a lot of ambient noise, including a wide range of illumination. A faraway object
appears significantly less bright than one that is near. When the background is uneven,
these issues are exacerbated.</p>
    </sec>
    <sec id="sec-8">
      <title>8. Conclusion &amp; Future Work</title>
      <p>In the current survey paper, a comprehensive discussion and review are conducted on
state-of-the-art underwater image segmentation models. In this survey, different
underwater image segmentation datasets, different deep learning algorithms, their working
processes, implementation tools, performance metrics, several prediction methods, and various
research limitations are discussed. Numerous authors have emphasised that understanding
research limitations and benefits is useful for analysing any prediction model. The most used
machine learning implementation interfaces are Python, MATLAB, OpenCV, and TensorFlow. The
goal of this survey is to provide details of current underwater image segmentation
techniques and their working models and to highlight their limitations so that an effective
underwater image segmentation model can be built in the near future. This survey can help
researchers design a reliable and accurate underwater image segmentation model in the early
stages. It can be concluded that to design an enhanced underwater image segmentation model,
proper pre-processing of data, exploratory data analysis, data cleansing, proper model selection,
feature selection, and efficient classification model selection are the mandatory requirements.
In future work, designing an effective predictive mechanism for underwater image segmentation
is of cardinal interest in minimising most of the limitations of existing works.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>P.</given-names>
            <surname>Drews-Jr</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. d.</given-names>
            <surname>Souza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. P.</given-names>
            <surname>Maurell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. V.</given-names>
            <surname>Protas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. S. C.</given-names>
            <surname>Botelho</surname>
          </string-name>
          ,
          <article-title>Underwater image segmentation in the wild using deep learning</article-title>
          ,
          <source>J. Braz. Comput. Soc</source>
          .
          <volume>27</volume>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Saleh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sheaves</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Jerry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. R.</given-names>
            <surname>Azghadi</surname>
          </string-name>
          ,
          <article-title>Applications of deep learning in fish habitat monitoring: A tutorial and survey</article-title>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>N.</given-names>
            <surname>Mohod</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Agrawal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Madaan</surname>
          </string-name>
          ,
          <article-title>Yolov4 vs yolov5: Object detection on surveillance videos</article-title>
          ,
          <source>in: International Conference on Advanced Network Technologies and Intelligent Computing</source>
          , Springer,
          <year>2022</year>
          , pp.
          <fpage>654</fpage>
          -
          <lpage>665</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Qi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Hou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <article-title>SGUIE-net: Semantic attention guided underwater image enhancement with multi-scale perception</article-title>
          ,
          <source>IEEE Trans. Image Process</source>
          .
          <volume>31</volume>
          (
          <year>2022</year>
          )
          <fpage>6816</fpage>
          -
          <lpage>6830</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>P. K.</given-names>
            <surname>Sharma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Bisht</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sur</surname>
          </string-name>
          ,
          <article-title>Wavelength-based attributed deep neural network for underwater image restoration</article-title>
          ,
          <source>ACM Trans. Multimed. Comput. Commun. Appl</source>
          . (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>V.</given-names>
            <surname>Madaan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Roy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Gupta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Agrawal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sharma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Bologa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Prodan</surname>
          </string-name>
          ,
          <article-title>Xcovnet: chest x-ray image classification for covid-19 early detection using convolutional neural networks</article-title>
          ,
          <source>New Generation Computing</source>
          <volume>39</volume>
          (
          <year>2021</year>
          )
          <fpage>583</fpage>
          -
          <lpage>597</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>D.</given-names>
            <surname>Chaudhary</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Agrawal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Madaan</surname>
          </string-name>
          ,
          <article-title>Bank cheque validation using image processing</article-title>
          ,
          <source>in: Advanced Informatics for Computing Research: Third International Conference, ICAICR 2019, Shimla, India, June 15-16, 2019, Revised Selected Papers, Part I 3</source>
          , Springer,
          <year>2019</year>
          , pp.
          <fpage>148</fpage>
          -
          <lpage>159</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Chauhan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Agrawal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Madaan</surname>
          </string-name>
          ,
          <article-title>E-gardener: Building a plant caretaker robot using computer vision</article-title>
          ,
          <source>in: 2018 4th International Conference on Computing Sciences (ICCS)</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>137</fpage>
          -
          <lpage>142</lpage>
          . doi:10.1109/ICCS.2018.00031.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>C.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Ren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Cong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kwong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Tao</surname>
          </string-name>
          ,
          <article-title>An underwater image enhancement benchmark dataset and beyond</article-title>
          ,
          <source>IEEE Trans. Image Process</source>
          .
          <volume>29</volume>
          (
          <year>2020</year>
          )
          <fpage>4376</fpage>
          -
          <lpage>4389</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M.</given-names>
            <surname>Machado Dos Santos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. G.</given-names>
            <surname>De Giacomo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. L. J.</given-names>
            <surname>Drews</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. S. C.</given-names>
            <surname>Botelho</surname>
          </string-name>
          ,
          <article-title>Matching color aerial images and underwater sonar images using deep learning for underwater localization</article-title>
          ,
          <source>IEEE Robot. Autom. Lett</source>
          .
          <volume>5</volume>
          (
          <year>2020</year>
          )
          <fpage>6365</fpage>
          -
          <lpage>6370</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Jahanbakht</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Xiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. J.</given-names>
            <surname>Waltham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. R.</given-names>
            <surname>Azghadi</surname>
          </string-name>
          ,
          <article-title>Distributed deep learning and energy-efficient real-time image processing at the edge for fish segmentation in underwater videos</article-title>
          ,
          <source>IEEE Access</source>
          <volume>10</volume>
          (
          <year>2022</year>
          )
          <fpage>117796</fpage>
          -
          <lpage>117807</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>G.</given-names>
            <surname>Hou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Pan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Tan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>Benchmarking underwater image enhancement and restoration, and beyond</article-title>
          ,
          <source>IEEE Access</source>
          <volume>8</volume>
          (
          <year>2020</year>
          )
          <fpage>122078</fpage>
          -
          <lpage>122091</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>W.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Zhuang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.-H.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kwong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>Underwater image enhancement via minimal color loss and locally adaptive contrast enhancement</article-title>
          ,
          <source>IEEE Trans. Image Process</source>
          .
          <volume>31</volume>
          (
          <year>2022</year>
          )
          <fpage>3997</fpage>
          -
          <lpage>4010</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>A.</given-names>
            <surname>Abu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Diamant</surname>
          </string-name>
          ,
          <article-title>Unsupervised local spatial mixture segmentation of underwater objects in sonar images</article-title>
          ,
          <source>IEEE J. Ocean. Eng</source>
          .
          <volume>44</volume>
          (
          <year>2019</year>
          )
          <fpage>1179</fpage>
          -
          <lpage>1197</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Tang</surname>
          </string-name>
          ,
          <article-title>Kapur's entropy for underwater multilevel thresholding image segmentation based on whale optimization algorithm</article-title>
          ,
          <source>IEEE Access</source>
          <volume>9</volume>
          (
          <year>2021</year>
          )
          <fpage>41294</fpage>
          -
          <lpage>41319</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>R.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Cai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Cao</surname>
          </string-name>
          ,
          <article-title>MFFN: An underwater sensing scene image enhancement method based on multiscale feature fusion network</article-title>
          ,
          <source>IEEE Trans. Geosci. Remote Sens</source>
          .
          <volume>60</volume>
          (
          <year>2022</year>
          )
          <fpage>1</fpage>
          -
          <lpage>12</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>C. O.</given-names>
            <surname>Ancuti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Ancuti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>De Vleeschouwer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Bekaert</surname>
          </string-name>
          ,
          <article-title>Color balance and fusion for underwater image enhancement</article-title>
          ,
          <source>IEEE Trans. Image Process</source>
          .
          <volume>27</volume>
          (
          <year>2018</year>
          )
          <fpage>379</fpage>
          -
          <lpage>393</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>R.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Fan</surname>
          </string-name>
          ,
          <article-title>Twin adversarial contrastive learning for underwater image enhancement and beyond</article-title>
          ,
          <source>IEEE Trans. Image Process</source>
          .
          <volume>31</volume>
          (
          <year>2022</year>
          )
          <fpage>4922</fpage>
          -
          <lpage>4936</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>P.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Qi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <article-title>Underwater image enhancement with a deep residual framework</article-title>
          ,
          <source>IEEE Access</source>
          <volume>7</volume>
          (
          <year>2019</year>
          )
          <fpage>94614</fpage>
          -
          <lpage>94629</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>X.</given-names>
            <surname>Cheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Cheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Bi</surname>
          </string-name>
          ,
          <article-title>Dynamic downscaling segmentation for noisy, low-contrast in situ underwater plankton images</article-title>
          ,
          <source>IEEE Access</source>
          <volume>8</volume>
          (
          <year>2020</year>
          )
          <fpage>111012</fpage>
          -
          <lpage>111026</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>B.-W.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.-C.</given-names>
            <surname>Fang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.-C.</given-names>
            <surname>Wen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.-H.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.-Y.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-H.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <article-title>A study of artificial neural network technology applied to image recognition for underwater images</article-title>
          ,
          <source>IEEE Access</source>
          <volume>10</volume>
          (
          <year>2022</year>
          )
          <fpage>13844</fpage>
          -
          <lpage>13851</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>G.</given-names>
            <surname>Huo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>Underwater object classification in sidescan sonar images using deep transfer learning and semisynthetic training data</article-title>
          ,
          <source>IEEE Access</source>
          <volume>8</volume>
          (
          <year>2020</year>
          )
          <fpage>47407</fpage>
          -
          <lpage>47418</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>P.</given-names>
            <surname>Drews-Jr</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. d.</given-names>
            <surname>Souza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. P.</given-names>
            <surname>Maurell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. V.</given-names>
            <surname>Protas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. S. C.</given-names>
            <surname>Botelho</surname>
          </string-name>
          ,
          <article-title>Underwater image segmentation in the wild using deep learning</article-title>
          ,
          <source>J. Braz. Comput. Soc</source>
          .
          <volume>27</volume>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Gupta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <article-title>Numerical simulation and design of hybrid underwater image restoration and enhancement with deep learning</article-title>
          ,
          <source>International Journal of Intelligent Systems and Applications in Engineering</source>
          <volume>10</volume>
          (
          <year>2022</year>
          )
          <fpage>95</fpage>
          -
          <lpage>101</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>K.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Tian</surname>
          </string-name>
          ,
          <article-title>DBFNet: A dual-branch fusion network for underwater image enhancement</article-title>
          ,
          <source>Remote Sens. (Basel)</source>
          <volume>15</volume>
          (
          <year>2023</year>
          )
          <fpage>1195</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>B.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Mei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Yan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <article-title>UMGAN: Underwater image enhancement network for unpaired image-to-image translation</article-title>
          ,
          <source>J. Mar. Sci. Eng</source>
          .
          <volume>11</volume>
          (
          <year>2023</year>
          )
          <fpage>447</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>G.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Mao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Shen</surname>
          </string-name>
          ,
          <article-title>HTDet: A hybrid transformer-based approach for underwater small object detection</article-title>
          ,
          <source>Remote Sens. (Basel)</source>
          <volume>15</volume>
          (
          <year>2023</year>
          )
          <fpage>1076</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>J.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Huo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wei</surname>
          </string-name>
          ,
          <article-title>Multi-modal multi-stage underwater side-scan sonar target recognition based on synthetic images</article-title>
          ,
          <source>Remote Sens. (Basel)</source>
          <volume>15</volume>
          (
          <year>2023</year>
          )
          <fpage>1303</fpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>