<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A New Matrix Decomposition Framework for Specular Reflection Removal from Endoscopic Images</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jithin Joseph</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sudhish N. George</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kiran Raja</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>National Institute of Technology Calicut</institution>
          ,
          <addr-line>Kerala</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Norwegian University of Science and Technology</institution>
          ,
          <country country="NO">Norway</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Endoscopic images typically suffer from various degradations, such as specular reflection and motion blur caused by camera motion. In this paper, we propose a new matrix decomposition method to eliminate specular reflections (highlights) from endoscopic images. The proposal combines the characteristics of specular reflections with dimensionality reduction through singular value thresholding (SVT) to obtain a highlight-free image. Adding specular reflections to an image alters the singular values of the image matrix. The algorithm iteratively eliminates the changes to the singular values until all highlight pixels are removed. To avoid losing significant information, a slicing operation is performed on the residue matrix that remains after the initial process. The useful information extracted from this residue matrix is reinserted into the semi-low-rank component obtained from the SVT operation. Experiments reveal that the new matrix decomposition method effectively eliminates specular reflections from endoscopic images.</p>
      </abstract>
      <kwd-group>
<kwd>Specular Reflections</kwd>
        <kwd>Singular Value Thresholding</kwd>
        <kwd>Intensity and Saturation Slicing</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Over the years, endoscopy has evolved into one of the essential tools of the medical
diagnostic domain. It has branched into colonoscopy, bronchoscopy, esophagoscopy,
etc. [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], which are used to view various parts of the internal human body. However, the small
intestine is the one essential region of the gastrointestinal (GI) tract that remains inaccessible to
the conventional tube-like endoscope. As the small intestine is a very long tubular structure
arranged in the lower abdominal cavity with numerous bends and folds, it is challenging to use
a tube-like instrument to view its internals.
      </p>
      <p>
        The images obtained from endoscopy may not be ideal, as they contain numerous artefacts
such as specular reflections (also called highlights), blurred-out regions,
over-exposed and under-exposed pixels, bubbles and debris, chromatic aberrations, etc.
[
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. It is therefore necessary to remove these artefacts before further analysis.
      </p>
      <p>
        Highlight removal has been a widely researched topic over the last few years, with several
different approaches proposed by many researchers [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ][
        <xref ref-type="bibr" rid="ref3">3</xref>
        ][
        <xref ref-type="bibr" rid="ref4">4</xref>
        ][
        <xref ref-type="bibr" rid="ref5">5</xref>
        ][
        <xref ref-type="bibr" rid="ref6">6</xref>
        ][
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. These include both
supervised and unsupervised approaches achieved in single stage or multiple stages of operation(s).
In the section below, we present a set of related works in this regard.
      </p>
      <sec id="sec-1-1">
        <title>1.0.1. Highlight Detection:</title>
        <p>
          Many of the early algorithms treated highlight removal as a two-stage process: the
first stage identifies the highlight pixels, and the second stage removes the identified pixels and
replaces them with new information derived from the image itself. In [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], the chromatic
difference between normal and highlight pixels is used to identify the corresponding
pixels. Since the reflection from any surface can be considered as two components, viz. the diffuse
and reflection components [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], detecting these components leads to the detection of
the highlight pixels. The diffuse component follows the body colour, and the reflection component
follows the illuminant colour [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]. So, by colour-channel thresholding or Y-channel
thresholding [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ], highlight pixels can be identified. In another approach, proposed by Kim et al.
[
          <xref ref-type="bibr" rid="ref10">10</xref>
          ], the difference in the polarisation of diffuse and specular reflections is used to identify the
highlight pixels, but the database required for extracting polarisation characteristics limits
this method. In [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ], Oh et al. characterised highlight pixels by low saturation and
high intensity values, and in [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] this information is used to identify highlight pixels.
        </p>
      </sec>
      <sec id="sec-1-2">
        <title>1.0.2. Highlight Removal:</title>
        <p>Highlight removal can be accomplished either by first performing highlight detection
followed by removal, or by performing removal without prior detection.</p>
        <p>
          (i) Filtering-based approach: The whole image can be split into reflection and diffusion
components. The diffusion component varies very slowly, whereas the reflection component
varies rapidly. In [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ], Yang et al. suggested a filter-based approach to eliminate the
high-frequency, noisy reflection component with the help of an edge-preserving, low-pass filter.
Unfortunately, the filtering approach works well only when the frequency of the reflection
component is very high, which means the method fails whenever a large
highlight region is present. Also, the underlying information lost to highlight pixels is not
recovered.
        </p>
        <p>
          (ii) Reflection-component separation approach: In this approach [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ][
          <xref ref-type="bibr" rid="ref9">9</xref>
          ][
          <xref ref-type="bibr" rid="ref12">12</xref>
          ], each image pixel
is assumed to consist of two components, rather than assigning each pixel to a
single component. The illuminant colour and tissue colour are extracted and used to obtain
a more reliable colour for each pixel. By properly decomposing the original image, the diffuse
component becomes the reflection-free component. As this method relies heavily on the object material
and motion, it is not suitable for dynamic images.
        </p>
        <p>
          (iii) Inpainting-based approach: Generally, these techniques are used after highlight
detection. Once the highlight pixels are identified, they are inpainted with proper values [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ],
typically using window-based approaches. In [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ], a
structural similarity-based inpainting method is proposed.
        </p>
        <p>
          (iv) Learning-based approach: Supervised learning approaches show higher
efficiency in specularity removal [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ][
          <xref ref-type="bibr" rid="ref15">15</xref>
          ][
          <xref ref-type="bibr" rid="ref16">16</xref>
          ]. However, the unavailability of labelled data restricts
the training of such networks. In [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ], Bobrow et al. used pairs of coherent and
incoherent images for training; their Conditional Generative Adversarial Network (cGAN) treats
specularity removal as an image-to-image translation. Funke et al. [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ] proposed the
use of two GANs for self-training and self-regularization. The SpecGAN proposed in [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ] trains
the network from weakly labelled training data.
        </p>
        <p>
          (v) Matrix-decomposition approach: Robust Principal Component Analysis (RPCA) is a powerful
tool for use with unlabelled data. The highlight regions are considered a sparse component,
and the highlight-free image is the low-rank component embedded in the endoscopic image
[
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. RPCA has been used in many applications where low-rankness can be exploited to
extract features of the original data, as in [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ][18][
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. However, the need to set hyperparameters for
individual scenes makes RPCA time-consuming. In [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], Li et al. proposed
a multistage, RPCA-based algorithm to remove highlights from endoscopic images; an
adaptive iterative method identifies the parameters that best reduce the specularity [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. This extra step leads to increased time complexity. Moreover, the sparse component may
contain some useful features. A further problem with the RPCA approach to specular reflection on
endoscopic images is the distribution of highlight pixels: if the highlight pixels are smooth and
continuous, they can no longer be considered a sparse component. Further, in high-resolution
images, the highlight regions contain more pixels, which implies a large correlation between
highlight pixels; here also, the algorithm fails.
        </p>
        <p>This paper proposes a matrix decomposition method to remove the specular reflections from
endoscopic images and obtain high-quality images. The proposed method overcomes many of the
demerits of conventional highlight removal algorithms. Its merits are listed below.</p>
        <p>
          • The proposed approach eliminates the need for a large labelled dataset, as it is not training-driven.
• The proposed approach does not require pre-detection of highlight pixels.
• The approach does not demand scene-wise tuning of parameters, unlike other works
[19][
          <xref ref-type="bibr" rid="ref14">14</xref>
          ][20][21].
• The sparse information vital for endoscopic images is not lost within the proposed
approach.
        </p>
        <p>In addition to these merits, the algorithm provides a fast rate of convergence by applying the
characteristics of highlight pixels in the update equations.</p>
        <p>The remainder of the paper is structured as follows. Section 2 discusses the mathematical
preliminaries required to better understand the proposed method. The proposed new matrix
decomposition method for specular reflection removal is outlined in Section 3. The experimental
results are presented in Section 4. Finally, the conclusions and future scope are drawn in Section
5.</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. Mathematical Preliminaries</title>
      <p>In this section, we briefly explain some of the mathematical ingredients which are necessary
for the development of the proposed method.</p>
      <sec id="sec-2-1">
        <title>2.1. Singular value decomposition (SVD)</title>
        <p>The SVD is an important mathematical tool used to factorize an M × N matrix into three
sub-matrices, as shown in Fig. 1 [22]. The SVD of an M × N matrix X is given as,</p>
        <p>X_{M×N} = [U]_{M×r} [Σ]_{r×r} [V^T]_{r×N},  r = min{M, N},
(1)</p>
        <p>where [U]_{M×r} is the matrix of orthonormal eigenvectors of XX^T, [V]_{N×r} is the matrix of
orthonormal eigenvectors of X^T X, and [Σ]_{r×r} is the diagonal matrix of the singular values, which
are the square roots of the eigenvalues of X^T X.</p>
        <p>[Fig. 1: Schematic of the SVD factorisation of an M × N matrix.]</p>
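<p>The factorisation described above can be checked numerically. The following is a minimal sketch (not from the paper) using NumPy; the matrix and its size are arbitrary illustrative choices.</p>
<preformat>
```python
# Verify the SVD factorisation X = U S V^T with NumPy. Shapes follow the
# definition above: U is M x r, S is r x r, V^T is r x N, r = min(M, N).
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4))              # an example 6 x 4 matrix

U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = min(X.shape)
assert U.shape == (6, r) and Vt.shape == (r, 4)

# The singular values equal the square roots of the eigenvalues of X^T X.
eigvals = np.linalg.eigvalsh(X.T @ X)[::-1]  # sort into descending order
assert np.allclose(np.sqrt(np.clip(eigvals, 0.0, None)), s)

# Reconstruction: X = U diag(s) V^T
assert np.allclose(U @ np.diag(s) @ Vt, X)
```
</preformat>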
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Soft thresholding operator</title>
        <p>Soft thresholding of a matrix is used to discard insignificant elements of the matrix with reference
to some threshold. Along with discarding these elements, it also reduces the strength of the
remaining elements. The operator is defined as [23],</p>
        <p>S_τ(X) = [ s_τ(x), ∀ x ∈ X ],  s_τ(x) = sign(x) × max{ |x| − τ, 0 }.
(2)</p>
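<p>A minimal NumPy sketch of the element-wise operator defined above (the function name is our own, not the paper's):</p>
<preformat>
```python
import numpy as np

def soft_threshold(X, tau):
    """Shrink every element of X towards zero by tau; small-magnitude
    elements (at most tau in absolute value) become exactly zero."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

X = np.array([[3.0, -0.5], [-2.0, 0.2]])
print(soft_threshold(X, 1.0))   # the two small-magnitude entries are zeroed
```
</preformat>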
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Singular value thresholding operator</title>
        <p>The SVT operator works in two stages. First, it performs the SVD of
the image matrix; then, soft thresholding is applied to the singular value matrix.
The matrix reconstructed using the modified singular values is the result of the operation.
Mathematically, the operation can be represented as follows [23]:</p>
        <p>D_τ(X) = U S_τ(Σ) V^T,
(3)</p>
        <p>where X = U Σ V^T is the singular value decomposition of the matrix X, with Σ being the singular
value matrix.</p>
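<p>A minimal sketch of the SVT operator, composing the SVD of Section 2.1 with the soft thresholding of Section 2.2 (function names are our own):</p>
<preformat>
```python
import numpy as np

def soft_threshold(x, tau):
    # Element-wise shrinkage of the singular values.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: soft-threshold the singular values
    of X, then reconstruct the matrix from the modified spectrum."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(soft_threshold(s, tau)) @ Vt

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 4))
L = svt(X, tau=1.0)
# Shrinking singular values can only lower (or keep) the numerical rank.
assert np.linalg.matrix_rank(X) >= np.linalg.matrix_rank(L)
```
</preformat>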
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Proposed Method</title>
      <p>In the proposed method, the specular reflection components embedded in the endoscopic image
are removed by applying a new matrix decomposition technique. The original endoscopic image
(M) is decomposed into a semi-low-rank component (L) and a highlight component (H) as
follows:</p>
      <p>M = L + H.
(4)</p>
      <p>If we attempt to decompose the matrix into a truly low-rank component and a sparse
component as in RPCA [24], many of the highlight pixels will be removed from the original image M.
But the highlight component of the image cannot be perfectly sparse in nature: it contributes
significantly to the low-rank component of the image. In addition, there can be some sparse
non-highlight information, making the component L a semi-low-rank component.</p>
      <sec id="sec-3-1">
        <title>3.1. Obtaining the semi-low-rank component</title>
        <p>The key idea in obtaining the semi-low-rank component is to extract details from the original
endoscopic image iteratively until all the specular components are removed completely.
Removing all specular components in a single step would require subtracting the exact contribution of the
specularity from each of the singular values. Extracting this information from the original image
is practically impossible; further, the contribution differs from image to image. So the solution
becomes an iterative procedure that converges when all the specularities are removed. Thus,
the algorithm can be summarized as below:</p>
        <p>Algorithm 1: Basic steps followed in the proposed method
1 Perform the singular value decomposition of the original image.
2 Apply the ‘soft thresholding operator’ on the singular values to reduce the contribution of specularity on
them.
3 Reconstruct the image to obtain the semi-low-rank component.
4 Check for convergence. If not converged, repeat from step 1.</p>
        <p>Steps 1, 2 and 3 can be performed using the singular value thresholding operator
defined in Eq. (3), with threshold τ = 1/μ. Thus, the extraction operation is given by,</p>
        <p>L = D_{1/μ}(M),
(5)</p>
        <p>H = M − L.
(6)</p>
        <p>The part of M remaining after the formation of L is the residue matrix H,
which is expected to contain only the highlight component.</p>
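<p>A minimal sketch of one extraction step, Eq. (5) and (6), on a synthetic matrix; svt() is assumed to implement the SVT operator of Eq. (3), and the threshold value is an illustrative choice, not one from the paper:</p>
<preformat>
```python
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def svt(X, tau):
    # SVT operator: shrink the singular values, then reconstruct.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(soft_threshold(s, tau)) @ Vt

rng = np.random.default_rng(2)
M = rng.random((64, 64))      # stand-in for one endoscopic image channel
L = svt(M, tau=5.0)           # Eq. (5): semi-low-rank component
H = M - L                     # Eq. (6): residue expected to hold highlights
assert np.allclose(L + H, M)  # the decomposition is exact by construction
```
</preformat>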
        <p>While the algorithm outlined above appears to yield the highlight component alone, we note that
this does not always hold true, for the reasons listed below:
i. There is useful information in an endoscopic image that may be considered sparse. These
components contribute to the least significant singular values. Applying the
soft thresholding technique removes this information, which then accumulates in H.
ii. In order to remove the specularity completely, proper selection of the parameter μ is essential.</p>
        <p>If μ is selected high, the threshold 1/μ becomes small, so only purely sparse components are
removed in each iteration and the process takes more time. So the value of μ must be very low. But in that case, there is a
possibility that relevant information is also removed, due to the resulting 'higher step size'.</p>
        <p>This implies that in each iteration we remove both highlight pixels and some relevant
information when using a very small value of μ. The decomposition is given in Fig. 3.</p>
        <p>[Fig. 3: Decomposition of the original image via Eq. (5) and (6) into the semi-low-rank component L and the semi-sparse component H, which comprises the specular component and useful information.]</p>
        <p>Now the H component contains both highlight pixels and useful information. We proceed
to extract this useful information from H and reinsert the same into L, so that the highlight
component will be available in H and the remaining in L.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Separating the specular component from the useful information</title>
        <p>The component remaining after the initial step is H = M − L. Our method of extracting the
specular components from H is as follows: every pixel in H that does not follow the
characteristics of specular reflections is set to zero, so that the remaining component
represents only the specular reflection. That is, ∀ h ∈ H,</p>
        <p>S′(h) = h, whenever h is a highlight pixel; 0, otherwise.
(7)</p>
        <p>The slicing operation is defined as H = Slicing(M − L), where Slicing(H) = [ S′(h), ∀ h ∈ H ]
and S′(h) = ishighlight(h) × h; that is, S′(h) is a thresholding operation.
(8)</p>
        <p>
          Each pixel h is decided to be a highlight pixel by applying the characteristics of highlight
pixels [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. In HSV colour space, a pixel is assigned as a highlight pixel according to the function
defined as,
</p>
        <p>ishighlight(h = (h_h, h_s, h_v)) = +1, if h_s &lt; T_s and h_v &gt; T_v; 0, otherwise,
(9)</p>
        <p>where T_s is the threshold for the saturation channel and T_v is the threshold for the intensity
channel. For the RGB colour space, the decision rule is stated per channel, ∀ c ∈ [R, G, B], as,</p>
        <p>ishighlight(h = (h_r, h_g, h_b)) = +1, if h_c &gt; T_c; 0, otherwise.
(10)</p>
        <p>The threshold for each channel is obtained from the image under processing using the
following equation:</p>
        <p>T_c = μ_c + k × σ_c, ∀ c ∈ [R, G, B],
(11)</p>
        <p>where μ_c and σ_c are the statistical mean and standard deviation of channel c of image
H, respectively, and k is a constant. Applying Eq. (9) and (10) in Eq. (7) yields the operation required to extract the
highlight pixels from the image H. Clearly, the operation obtained is simply an intensity-level
slicing operation, with the thresholds obtained from Eq. (11).</p>
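<p>The two decision rules and the per-channel thresholds above can be sketched as follows (a toy illustration; the threshold values t_s, t_v and the multiplier k are our own assumptions, not values from the paper):</p>
<preformat>
```python
import numpy as np

def ishighlight_hsv(h_s, h_v, t_s=0.25, t_v=0.85):
    # HSV rule: a highlight pixel has low saturation and high intensity.
    return np.logical_and(t_s > h_s, h_v > t_v).astype(int)

def channel_thresholds(img, k=1.5):
    # Per-channel threshold: mean of the channel plus k times its std.
    return img.mean(axis=(0, 1)) + k * img.std(axis=(0, 1))

def ishighlight_rgb(img, k=1.5):
    # RGB rule: a pixel is a highlight if every channel exceeds its threshold.
    T = channel_thresholds(img, k)
    return np.all(img > T, axis=2).astype(int)

# Toy 2 x 2 RGB "image": one bright pixel, three dark ones.
img = np.array([[[0.95, 0.95, 0.95], [0.10, 0.10, 0.10]],
                [[0.20, 0.10, 0.10], [0.10, 0.20, 0.10]]])
mask = ishighlight_rgb(img)
assert mask.sum() == 1 and mask[0, 0] == 1   # only the bright pixel is flagged
```
</preformat>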
        <p>Now, the operation performed for obtaining only the highlight pixels from M − L is,</p>
        <p>H = Slicing(M − L).
(12)</p>
        <p>Once the highlight-only image H is obtained, the remaining component is given by Y = M − L −
H. This is the useful information that must reside in the semi-low-rank component.
This term is added back to L, so that in the next iteration the new value of L is obtained by
the singular value thresholding operation on (L + Y) = (M − H). The update equations for the
semi-low-rank component and the highlight component can be summarised as,</p>
        <p>L_{k+1} = D_{1/μ}(M − H_k),  H_{k+1} = Slicing(M − L_{k+1}).
(13)</p>
        <p>The above iterative procedure is repeated until (M − L − H) becomes insignificantly
small, that is, until ‖M − L − H‖_F ≤ ε, where ‖·‖_F is the Frobenius norm. The resulting L contains
all the features other than the specular reflections, and H contains all the highlight regions. The
entire procedure is outlined in Fig. 4, and the steps are enumerated in Algorithm 2.</p>
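<p>The full iteration can be sketched end-to-end as follows. This is a simplified stand-in, not the authors' implementation: the slicing step here uses a single global intensity threshold, and tau, k and eps are illustrative values.</p>
<preformat>
```python
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def svt(X, tau):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(soft_threshold(s, tau)) @ Vt

def slicing(R, k=1.5):
    # Keep only pixels that look like highlights (above mean + k * std).
    T = R.mean() + k * R.std()
    return np.where(R > T, R, 0.0)

def remove_highlights(M, tau=2.0, eps=1e-4, max_iter=50):
    H = np.zeros_like(M)
    for _ in range(max_iter):
        L = svt(M - H, tau)         # update of the semi-low-rank part
        H = slicing(M - L)          # update of the highlight part
        if eps > np.linalg.norm(M - L - H):   # Frobenius-norm stopping rule
            break
    return L, H

rng = np.random.default_rng(3)
M = rng.random((32, 32))
M[4:8, 4:8] = 1.0                   # synthetic bright "highlight" patch
L, H = remove_highlights(M)
assert L.shape == M.shape and H.shape == M.shape
```
</preformat>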
        <p>[Fig. 4: Flow of the proposed method: the SVT and slicing steps are repeated until convergence, yielding the highlight-free output image.]</p>
        <p>Algorithm 2: Removing Highlight components from WCE images</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Experimental Results and Discussions</title>
      <p>To evaluate the results obtained from the proposed algorithm, we performed experiments
on two different publicly available datasets: the Kvasir-colonoscopy dataset and the
CVC-ClinicSpec dataset.</p>
      <p>The results obtained are validated objectively using the percentage of remaining highlight.
We design two experiments to validate the proposed method. In the first experiment, images
from the two datasets are processed using the proposed method, and the percentage of highlight
remaining in each image after processing is calculated as given by Eq. (14):</p>
      <p>Percentage of highlight = (No. of pixels satisfying the conditions for highlight / Total no. of pixels in the original image) × 100.
(14)</p>
      <p>A pixel satisfying the conditions for highlight is decided by Eq. (9). A low percentage
indicates a high efficiency of the algorithm.</p>
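<p>As a toy illustration, the metric can be computed as below; the simple bright-pixel decision rule and its threshold are placeholders for the actual highlight conditions:</p>
<preformat>
```python
import numpy as np

def percentage_of_highlight(img, t=0.9):
    """Share (in percent) of pixels still classified as highlights,
    here approximated as pixels whose normalised intensity exceeds t."""
    n_highlight = np.count_nonzero(img > t)
    return 100.0 * n_highlight / img.size

img = np.zeros((10, 10))
img[0, :5] = 1.0                          # 5 saturated pixels out of 100
print(percentage_of_highlight(img))       # -> 5.0
```
</preformat>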
      <p>From the Kvasir-colonoscopy dataset, 500 images are selected for testing the algorithm, with
resolutions ranging from 720 × 576 to 1920 × 1072. All of the images contain significant
specular reflections. Fig. 5 shows the result of the first experiment; only two images
from each dataset are shown for the sake of illustration. The first two rows are images
from the Kvasir-colonoscopy dataset, and the remaining two rows are images from the
CVC-ClinicSpec dataset. Although the images come from different datasets, the output of the proposed
approach is observed to be consistent. However, the boundaries of the
highlight pixels in the original image are observed to remain. These boundaries are defined by the shadow regions
created by the illumination, and the dark regions are clearly visible in the enlarged view of
the original image.</p>
      <p>As part of the first experiment, we calculated the percentage of highlights remaining in
different images from the two datasets using Eq. (14). The reduction in highlight is shown in Fig. 6
for 5 sample images.</p>
      <p>In the second experiment, the proposed method is compared with state-of-the-art matrix
decomposition methods based on singular values. The methods compared are (a) l1/2-regularized
RPCA [25], (b) AS-RPCA [26], (c) PCP [24] and (d) FPCP [27]. The comparison is shown in Fig.
7. The corresponding statistical distribution of the percentage of highlights remaining in the
final image is presented in Table 1 for the 500 images tested from the Kvasir-colonoscopy dataset.
As seen from Table 1, the proposed approach leaves 0.0052% of highlights remaining (standard deviation
9.7714e-06), compared to 0.0276% for the next best performing approach. The advantage of the
proposed approach over the PCA-based approaches is evident from Table 1.
The CVC-ClinicSpec dataset is available at http://www.cvc.uab.es/CVC-Colon/index.php/cvc-clinicspec/.
The experiments are conducted on a computer with an Intel(R) Xeon(R) CPU E5-1620 v2 at 3.7 GHz and 8 GB RAM,
using Python 3.9.</p>
      <p>In addition to Table 1, we also conduct a statistical significance analysis using the box plots
shown in Fig. 8. The obtained results indicate the superiority of the proposed method over the other approaches.</p>
      <sec id="sec-4-1">
        <title>4.1. Time complexity</title>
        <p>In addition to the quantitative results, we also analyse the time complexity of the
proposed approach against the other state-of-the-art approaches. As seen from the results
presented in Table 2, the average time complexity of the proposed approach is significantly lower
than that of the next best performing approach (i.e., FPCP). The results indicate the superiority of
the proposed approach both in terms of highlight removal and time complexity.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Limitations of proposed approach</title>
        <p>The employed Kvasir-colonoscopy dataset consists of images with padded boundaries due
to the nature of imaging. These boundary regions result in pseudo-artefacts and cannot be
avoided if the whole image is considered. Better strategies for handling such boundaries are
not investigated in this work and will be studied in future works.</p>
        <p>Representation 56 (2018) 188–200.
[18] M. S. Subodh Raj, S. N. George, l1/2 regularized RPCA technique for 3D human action recovery, in: 2020 IEEE 17th India Council International Conference (INDICON), 2020, pp. 1–5.
[19] F. d. S. Queiroz, I. R. Tsang, Automatic segmentation of specular reflections for endoscopic images based on sparse and low-rank decomposition, in: 2014 27th SIBGRAPI Conference on Graphics, Patterns and Images, 2014, pp. 282–289.
[20] G. Fu, Q. Zhang, C. Song, Q. Lin, C. Xiao, Specular highlight removal for real-world images, Computer Graphics Forum 38 (????) 253–263.
[21] J. Guo, Z. Zhou, L. Wang, Single image highlight removal with a sparse and low-rank reflection model, in: Computer Vision – ECCV 2018, 2018, pp. 282–298.
[22] V. Klema, A. Laub, The singular value decomposition: Its computation and some applications, IEEE Transactions on Automatic Control 25 (1980) 164–176.
[23] J.-F. Cai, E. J. Candès, Z. Shen, A singular value thresholding algorithm for matrix completion, SIAM Journal on Optimization 20 (2010) 1956–1982.
[24] E. J. Candès, X. Li, Y. Ma, J. Wright, Robust principal component analysis?, J. ACM 58 (2011).
[25] M. S. Subodh Raj, S. N. George, l1/2 regularized RPCA technique for 3D human action recovery, 2020, pp. 1–5.
[26] G. Liu, S. Yan, Active subspace: Toward scalable low-rank learning, Neural Computation 24 (2012).
[27] P. Rodríguez, B. Wohlberg, Fast principal component pursuit via alternating minimization, in: 2013 IEEE International Conference on Image Processing, 2013, pp. 69–73.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>N.</given-names>
            <surname>VX</surname>
          </string-name>
          ,
          <string-name>
            <surname>S. R.</surname>
          </string-name>
          ,
          <string-name>
            <surname>M. M. E.</surname>
          </string-name>
          ,
          <article-title>Appropriate use of endoscopy in the diagnosis and treatment of gastrointestinal diseases: up-to-date indications for primary care providers</article-title>
          .,
          <source>International journal of general medicine 3</source>
          (
          <year>2010</year>
          )
          <fpage>345</fpage>
          -
          <lpage>57</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S.</given-names>
            <surname>Ali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bailey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Braden</surname>
          </string-name>
          ,
          <article-title>A deep learning framework for quality assessment and restoration in video endoscopy</article-title>
          ,
          <source>Medical Image Analysis</source>
          <volume>68</volume>
          (
          <year>2021</year>
          )
          <fpage>101900</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>W.</given-names>
            <surname>Lim</surname>
          </string-name>
          ,
          <article-title>Robust specular reflection removal and visibility enhancement of endoscopic images using 3-channel thresholding technique and image inpainting</article-title>
          ,
          <source>Technium Romanian Journal of Applied Sciences and Technology</source>
          <volume>2</volume>
          (
          <year>2020</year>
          )
          <fpage>336</fpage>
          -
          <lpage>343</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>H.-L.</given-names>
            <surname>Shen</surname>
          </string-name>
          , H.-G. Zhang, S.
          <article-title>-</article-title>
          <string-name>
            <surname>J. Shao</surname>
            ,
            <given-names>J. H.</given-names>
          </string-name>
          <string-name>
            <surname>Xin</surname>
          </string-name>
          ,
          <article-title>Chromaticity-based separation of reflection components in a single image</article-title>
          ,
          <source>Pattern Recognition</source>
          <volume>41</volume>
          (
          <year>2008</year>
          )
          <fpage>2461</fpage>
          -
          <lpage>2469</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ahuja</surname>
          </string-name>
          ,
          <article-title>Real-time specular highlight removal using bilateral filtering</article-title>
          ,
          <source>in: ECCV</source>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Arnold</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ghosh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ameling</surname>
          </string-name>
          , G. Lacey,
          <article-title>Automatic segmentation and inpainting of specular highlights for endoscopic imaging</article-title>
          ,
          <source>EURASIP Journal on Image and Video Processing</source>
          <volume>2010</volume>
          (
          <year>2010</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J.</given-names>
            <surname>Jiao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Fan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Satoshi</surname>
          </string-name>
          ,
          <article-title>Highlight removal for camera captured documents based on image stitching</article-title>
          ,
          <source>in: 2016 IEEE 13th International Conference on Signal Processing (ICSP)</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>849</fpage>
          -
          <lpage>853</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>R.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Pan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Si</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Yan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Qin</surname>
          </string-name>
          ,
          <article-title>Specular reflections removal for endoscopic image sequences with adaptive-rpca decomposition</article-title>
          ,
          <source>IEEE Transactions on Medical Imaging</source>
          <volume>39</volume>
          (
          <year>2020</year>
          )
          <fpage>328</fpage>
          -
          <lpage>340</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>R.</given-names>
            <surname>Tan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Nishino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Ikeuchi</surname>
          </string-name>
          ,
          <article-title>Separating reflection components based on chromaticity and noise analysis</article-title>
          ,
          <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
          <volume>26</volume>
          (
          <year>2004</year>
          )
          <fpage>1373</fpage>
          -
          <lpage>1379</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>D.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.-S.</given-names>
            <surname>Hong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.-Y.</given-names>
            <surname>Shum</surname>
          </string-name>
          ,
          <article-title>Variational specular separation using color and polarization</article-title>
          ,
          <year>2002</year>
          , pp.
          <fpage>176</fpage>
          -
          <lpage>179</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>J.</given-names>
            <surname>Oh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hwang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Tavanapong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. C.</given-names>
            <surname>de Groen</surname>
          </string-name>
          ,
          <article-title>Informative frame classification for endoscopy video</article-title>
          ,
          <source>Medical Image Analysis</source>
          <volume>11</volume>
          (
          <year>2007</year>
          )
          <fpage>110</fpage>
          -
          <lpage>127</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>T.</given-names>
            <surname>Tan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Nishino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Ikeuchi</surname>
          </string-name>
          ,
          <article-title>Illumination chromaticity estimation using inverse-intensity chromaticity space</article-title>
          ,
          <source>in: CVPR</source>
          , volume
          <volume>1</volume>
          ,
          <year>2003</year>
          , pp.
          <fpage>I-673</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Tang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Dynamic searching and classification for highlight removal on endoscopic image</article-title>
          ,
          <source>Procedia Computer Science</source>
          <volume>107</volume>
          (
          <year>2017</year>
          )
          <fpage>762</fpage>
          -
          <lpage>767</lpage>
          .
          <source>Advances in Information and Communication Technology: Proceedings of 7th International Congress of Information and Communication Technology (ICICT2017).</source>
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Jung</surname>
          </string-name>
          ,
          <article-title>Single image reflection removal using convolutional neural networks</article-title>
          ,
          <source>IEEE Transactions on Image Processing</source>
          <volume>28</volume>
          (
          <year>2019</year>
          )
          <fpage>1954</fpage>
          -
          <lpage>1966</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>T. L.</given-names>
            <surname>Bobrow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Mahmood</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Inserni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. J.</given-names>
            <surname>Durr</surname>
          </string-name>
          ,
          <article-title>Deeplsr: a deep learning approach for laser speckle reduction</article-title>
          ,
          <source>Biomedical Optics Express</source>
          <volume>10</volume>
          (
          <year>2019</year>
          )
          <fpage>2869</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>I.</given-names>
            <surname>Funke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bodenstedt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Riediger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Weitz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Speidel</surname>
          </string-name>
          ,
          <article-title>Generative adversarial networks for specular highlight removal in endoscopic images</article-title>
          ,
          <year>2018</year>
          , p.
          <fpage>3</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>B.</given-names>
            <surname>Shijila</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. J.</given-names>
            <surname>Tom</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. N.</given-names>
            <surname>George</surname>
          </string-name>
          ,
          <article-title>Moving object detection by low rank approximation and l1-tv regularization on rpca framework</article-title>
          ,
          <source>Journal of Visual Communication and Image Representation</source>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>