<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>On Fuzzification of Color Spaces for Medical Decision Support in Video Capsule Endoscopy</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>V. B. Surya Prasath</string-name>
          <xref ref-type="aff" rid="aff0" />
        </contrib>
        <aff id="aff0">
          <institution>Computational Imaging and Visualization Analysis Lab, Department of Computer Science, University of Missouri-Columbia, Columbia, MO 65211</institution>
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>1982</year>
      </pub-date>
      <abstract>
        <p>Advances in image and video processing algorithms and the availability of computational resources have paved the way for real-time medical decision support systems to become a reality. Video capsule endoscopy is a direct imaging method for gastrointestinal regions and produces large scale color video data. Fuzzification of color spaces can improve the contextual description based tasks that are required in medical decision support. We consider abnormality detection in video capsule endoscopy using fuzzy sets and logic theory on different color spaces. Applications to bleeding detection and polyp vascularization retrieval are given as examples of the methodology considered here, and preliminary results indicate that we obtain promising retrieval results.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Video capsule endoscopy (VCE) is a revolutionary
imaging technique which paved the way for unprecedented
direct visualization of the gastrointestinal tract without much
discomfort to patients. A typical colon VCE exam produces
around 8 hours of color (RGB) video data. For example, the
Pillcam Colon capsule endoscope (Given Imaging, Yoqneam,
Israel) produces approximately 30,000 frames per patient,
and more than 1.6 million patients worldwide have used
capsule endoscopy over the past 10 years. Automatic
algorithms can help augment computer aided diagnosis (CAD) in
VCE medical decision support systems and can help reduce
the burden on gastroenterologists
        <xref ref-type="bibr" rid="ref4">(Rey 2008; Niwa et al.
2008)</xref>
        . For example, polyp detection
        <xref ref-type="bibr" rid="ref1">(Figueiredo et al. 2010;
2011)</xref>
        , mucosa surface identification
        <xref ref-type="bibr" rid="ref2">(Prasath, Figueiredo,
and Figueiredo 2011; Prasath et al. 2012; Prasath and
Delhibabu 2015b)</xref>
        , and contrast enhancement (Prasath and
Delhibabu 2015a) have been studied. Nevertheless, designing
automatic methods for analyzing VCE imagery via image
processing and computer vision techniques poses significant
challenges, as we are dealing with big data.
      </p>
      <p>
        Image processing involves uncertainty quantification
and fuzzy techniques are effective in handling various
tasks (
        <xref ref-type="bibr" rid="ref3">Kerre and Nachtegael 2000</xref>
        ; Vlachos and
Sergiadis 2006; Shamoi, Inoue, and Kawanaka 2014b; 2014a).
Recently, Shamoi et al. (Shamoi, Kawanaka, and Inoue 2014)
used fuzzification of the HSI color space for apparel
coordination. In (Shamoi, Kawanaka, and Inoue 2014), a case
example in the HSI (Hue, Saturation, Intensity) space is provided
for obtaining a correspondence between colors and human
impressions. In this work, we adapt the groundwork done
in (Shamoi, Kawanaka, and Inoue 2014) to VCE color
space fuzzification for abnormality classification. We
provide an overview of different color spaces (RGB, CMYK,
HSV, La*b*) and their fuzzifications to organize all
possible human operator color perceptions of abnormalities in
VCE images. Using operator-defined impressions expressed
as linguistic terms, we provide retrieval examples.
We note the overall framework is general in the sense that it
can be expanded with domain knowledge for various related
tasks.
      </p>
      <p>
        The rest of the paper is organized as follows. The next section
introduces different color spaces useful for VCE imagery and
fuzzification techniques for representing the color perceptions
of human operators using fuzzy sets. Next, we provide some
example classification results on VCE images for bleeding
regions.
The appearance of different abnormal regions such as polyps,
adenomas, and bleeding in VCE videos under different color
spaces provides different linguistic terms for description.
This can be utilized in the fuzzy logic framework advocated
in (Shamoi, Kawanaka, and Inoue 2014); see Figure 2. All
three components of a medical decision system, namely
different color spaces, impressions based on linguistic terms,
and the mapping between them, are interpretable using fuzzy
logic. In particular, we consider an example of bleeding
detection in VCE; see Figure 3. Note that various color spaces
can be utilized in the fuzzification framework; we
consider standard color spaces such as RGB, CMYK, HSV, and
La*b*, and refer to (Wyszecki and Stiles 1982) for the
corresponding definitions and formulae. Each gives a different
perspective of an abnormality; see Figure 3 for an example of
a bleeding region in the RGB, CMYK, HSV, and La*b* spaces.
Advantages of color (spectral) information can be exploited
for different diagnostic decision purposes
        <xref ref-type="bibr" rid="ref2">(Figueiredo et
al. 2011; Prasath and Delhibabu 2015a)</xref>
        . In this
particular case of bleeding detection, gastroenterologists tend to
mark bleeding regions using linguistic terms such as dark
red, medium red, or pale red. For example, in the RGB color
space (see Figure 3(a)) the bleeding region is darker in the green
and blue channels, and the overall appearance can be
characterized as dark red in the RGB spectrum. Thus, a fuzzy logic
and mass assignment theory based mapping between different
colors and human (operator) oriented impressions can be
utilized in building a medical decision support system.
      </p>
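To make the mapping from pixel colors to such linguistic terms concrete, the following minimal sketch labels an RGB pixel as dark, medium, or pale red via the HSV Value channel. The helper `describe_red`, its hue window, and the Value thresholds are illustrative assumptions of ours, not membership functions or values taken from this work.

```python
import colorsys

def describe_red(r, g, b):
    """Label an RGB pixel (floats in [0, 1]) with a coarse linguistic term.

    The hue window and Value thresholds below are illustrative
    assumptions, not the membership functions used in this paper.
    """
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    # Bleeding regions are reddish: hue near 0 (or wrapping near 1).
    if not (h < 0.05 or h > 0.95):
        return "not red"
    # The HSV Value channel separates dark / medium / pale appearance.
    if v < 0.4:
        return "dark red"
    elif v < 0.7:
        return "medium red"
    return "pale red"

print(describe_red(0.3, 0.05, 0.05))   # dark red
print(describe_red(0.9, 0.6, 0.6))     # pale red
```

A fuzzy version would replace the hard thresholds with overlapping membership functions, so a pixel can be, say, 0.6 dark and 0.4 medium at once.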
      <p>Figure 4 shows the RGB and HSV fuzzy sets which are
used to fuzzify different bleeding regions. In contrast to
the apparel coordination application considered in (Shamoi,
Kawanaka, and Inoue 2014), here we use only the RGB color
space and the Value (from HSV) fuzzy membership
functions. Hence, {Red, Green, Blue} and {Dark, Medium,
Pale} are the spectral and linguistic terms, respectively. We
utilized ground truth histograms marked by two
experienced gastroenterologists for various bleeding regions, and
Figure 6 shows some examples with Red and the three linguistic
terms. Context dependent color impressions in the bleeding
scenario are light and strong, which indicate a lighter or
stronger appearance.
Following (Shamoi, Kawanaka, and Inoue 2014) we utilized
a taxonomy of color impressions adapted for VCE imagery
based medical decision support systems. Here we describe
it for bleeding regions: Table 1 provides the taxonomy
of color impressions in the RGB - Value case, and a similar
table is generated for polyp vascularization with RGB -
Density. Using these taxonomies we follow the basic formulae of
fuzzy logic, namely the intersection (minimum) and union
(maximum) of two membership functions,</p>
      <p>(A ∩ B)(x) = min{μ_A(x), μ_B(x)},</p>
      <p>(A ∪ B)(x) = max{μ_A(x), μ_B(x)},</p>
      <p>and the α-cut, f_α = {x : f(x) ≥ α}.
These basic fuzzy formulae are used to fuzzify color spaces
and to interpret linguistic impressions of colors for composite
cases. We used these formulae, along with a map between
color impressions and colors in RGB and Value, in ranking
similar images for bleeding region identification.</p>
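The basic fuzzy operations above can be sketched directly. The following minimal example, with a toy discrete domain and toy membership values of our own choosing, implements the pointwise minimum intersection, maximum union, and α-cut:

```python
# Pointwise fuzzy intersection (min), union (max), and alpha-cut,
# for membership functions given as dicts over a shared discrete domain.

def fuzzy_intersection(mu_a, mu_b):
    # (A ∩ B)(x) = min{μ_A(x), μ_B(x)}
    return {x: min(mu_a[x], mu_b[x]) for x in mu_a}

def fuzzy_union(mu_a, mu_b):
    # (A ∪ B)(x) = max{μ_A(x), μ_B(x)}
    return {x: max(mu_a[x], mu_b[x]) for x in mu_a}

def alpha_cut(mu, alpha):
    # f_alpha = {x : f(x) >= alpha}
    return {x for x, m in mu.items() if m >= alpha}

# Toy membership values for "Red" and "Dark" over three pixels.
red  = {"p1": 0.9, "p2": 0.4, "p3": 0.1}
dark = {"p1": 0.7, "p2": 0.8, "p3": 0.2}

dark_red = fuzzy_intersection(red, dark)
print(dark_red)                  # {'p1': 0.7, 'p2': 0.4, 'p3': 0.1}
print(alpha_cut(dark_red, 0.5))  # {'p1'}
```

The composite impression "dark red" is thus the minimum of the two memberships, and the α-cut recovers the crisp set of pixels that satisfy the impression at least to degree α.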
      <p>Similar interpretations are done for polyp
vascularization, where density and tortuosity are used as context
dependent impressions. Figure 5 provides example results for
bleeding and polyp vascularization using linguistic queries alone.
Figure 6 shows the corresponding ranking mechanism based
on linguistic impressions for the bleeding and
vascularization cases. As can be seen, histograms are utilized to
identify the top three ranked nearest images matching the Dark
red interpretation, and the retrieval results are accurate as per
the gastroenterologist ground truth markings. All the retrieved
bleeding regions are from the jejunum area of the
gastrointestinal tract, and we show only the top three results according to
color histogram matching; see Figure 6(a). A similar
analysis with the RGB space and polyp vascularization density is
undertaken: the query image (Figure 5(d)) is described
as Pale Red Dense, and the ranking given in Figure 6(b) ranks
the resultant images according to the density of
vascularization. All the retrieved vascular regions contain dense
vessels and are malignant. We utilized 400 Pillcam® Colon
capsule images for bleeding detection; these were obtained
from 5 different patients and marked by two
gastroenterologists who provided ground truth regions along with
boundaries separating bleeding from normal mucosa tissue. For
polyp vascularization based retrieval we used 100 images,
which are benchmarked against an automatic segmentation
method (Prasath, Pelapur, and Palaniappan 2014) for
calculating the density of vascularization in polyps.</p>
      <p>[Figure panels: (a) Input; (b) Ground truth; (c) Bleeding; (d) Input; (e) Ground truth; (f) Polyp vascularization]</p>
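The histogram-based ranking step described above can be sketched as follows. The histogram intersection similarity, the 4-bin histograms, and the image identifiers are illustrative assumptions of ours, not the exact matching procedure or data used in this work.

```python
# Hypothetical sketch of histogram-based ranking: each candidate image
# is reduced to a normalized histogram, and candidates are ranked by
# histogram intersection with the query profile.

def histogram_intersection(h1, h2):
    # Similarity in [0, 1] for two normalized histograms of equal length.
    return sum(min(a, b) for a, b in zip(h1, h2))

def rank_by_histogram(query_hist, candidates, top_k=3):
    # candidates: list of (image_id, histogram) pairs; returns top_k ids.
    scored = sorted(candidates,
                    key=lambda c: histogram_intersection(query_hist, c[1]),
                    reverse=True)
    return [image_id for image_id, _ in scored[:top_k]]

# Toy 4-bin histograms (bins: dark, medium, pale, other red levels).
query = [0.7, 0.2, 0.1, 0.0]          # "dark red" query profile
candidates = [
    ("img_a", [0.5, 0.3, 0.2, 0.0]),  # partly dark red
    ("img_b", [0.1, 0.2, 0.6, 0.1]),  # mostly pale red
    ("img_c", [0.8, 0.1, 0.1, 0.0]),  # strongly dark red
]
print(rank_by_histogram(query, candidates, top_k=2))  # ['img_c', 'img_a']
```

A linguistic query such as Dark Red is first turned into a histogram profile via the fuzzy memberships, after which ranking reduces to this nearest-histogram search.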
    </sec>
    <sec id="sec-2">
      <title>Conclusion</title>
      <p>In this paper, we considered fuzzification of different color
spaces for medical decision support systems in
gastrointestinal diagnosis using video capsule endoscopy.
Following (Shamoi, Kawanaka, and Inoue 2014) we utilized
fuzzy sets and logic, together with color space theory, for VCE
imagery interpretation and used them for retrieval tasks. Our
preliminary results in bleeding region detection and polyp
vascularization in various VCE images indicate promise for using
fuzzification techniques in a medical decision support system.
Future work includes introducing shape (e.g. polyp appearance)
and texture (e.g. pit patterns) features along with the
fuzzification framework studied here for different VCE videos.
Moreover, increasing the number of experts (in our case study,
gastroenterologists) and quantifying/enlarging the linguistic
impressions is an important avenue to be explored. We also believe
the framework considered here will help in identifying trash and
bubbles for uninformative frame classification.</p>
    </sec>
    <sec id="sec-3">
      <title>Acknowledgment</title>
      <p>The author thanks the Gastroenterologists Dr. R. Shankar,
Dr. A. Sebastian from Vellore Christian Medical College
Hospital, India for their help in interpreting VCE imagery
and Radhakrishnan Delhibabu (Kazan Federal University
&amp; Innopolis, Kazan, Russia) in helping with data
collection/organization. This work was done while the author was
visiting the Center for Scientific Computation and
Mathematical Modeling (CSCAMM) at the University of
Maryland, MD, USA.</p>
      <p>[Figure panels: (a) Bleeding; (b) Polyp vascularization]</p>
    </sec>
    <sec id="sec-4">
      <title>References</title>
      <p>Prasath, V. B. S., and Delhibabu, R. 2015a. Automatic
contrast enhancement for wireless capsule endoscopy videos
with spectral optimal contrast-tone mapping. In Computational
Intelligence in Data Mining - Volume 1, 243–250. Springer
SIST (eds.: L. Jain, H. S. Behera, J. K. Mandal, D. P. Mohapatra).</p>
      <p>Prasath, V. B. S., and Delhibabu, R. 2015b. Automatic
image segmentation for video capsule endoscopy. In
Computational Intelligence in Medical Informatics, SpringerBriefs
in Applied Sciences and Technology, 73–80. Springer CIMI
(eds.: N. B. Muppalaneni, V. K. Gunjan).</p>
      <p>Prasath, V. B. S.; Figueiredo, I. N.; Figueiredo, P. N.; and
Palaniappan, K. 2012. Mucosal region detection and 3D
reconstruction in wireless capsule endoscopy videos using
active contours. In 34th IEEE/EMBS International
Conference, 4014–4017.</p>
      <p>Prasath, V. B. S.; Figueiredo, I. N.; and Figueiredo, P. N.
2011. Colonic mucosa detection in wireless capsule
endoscopic images and videos. In Congress on Numerical
Methods in Engineering (CMNE 2011).</p>
      <p>Prasath, V. B. S.; Pelapur, R.; and Palaniappan, K. 2014.
Multi-scale directional vesselness stamping based
segmentation for polyps from wireless capsule endoscopy. In
Figshare.</p>
      <p>Rey, J.-F. 2008. Future perspectives for esophageal and
colorectal capsule endoscopy: Dreams or reality? In New
Challenges in Gastrointestinal Endoscopy. Springer. 55–64.</p>
      <p>Shamoi, P.; Inoue, A.; and Kawanaka, H. 2014a. Fuzzy color
space for apparel coordination. Open Journal of Information
Systems 1(2):20–28.</p>
      <p>Shamoi, P.; Inoue, A.; and Kawanaka, H. 2014b.
Perceptual color space: Motivations, methodology, applications. In
Joint 7th International Conference on Soft Computing and
Intelligent Systems (SCIS) and 15th International
Symposium on Advanced Intelligent Systems (ISIS), 1354–1359.</p>
      <p>Shamoi, P.; Kawanaka, H.; and Inoue, A. 2014.
Fuzzification of HSI color space and its use in apparel coordination.
In Proc. of 25th Modern Artificial Intelligence and Cognitive
Science Conference, 11–17. Spokane, WA, USA:
CEUR-WS.org. Online: CEUR-WS.org/Vol-1144/paper2.pdf.</p>
      <p>Vlachos, I. K., and Sergiadis, G. D. 2006. A heuristic
approach to intuitionistic fuzzification of color images. World
Scientific. Chapter 108, 767–774.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <surname>Figueiredo</surname>
            ,
            <given-names>I. N.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Prasath</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Tsai</surname>
            ,
            <given-names>Y.-H. R.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Figueiredo</surname>
            ,
            <given-names>P. N.</given-names>
          </string-name>
          <year>2010</year>
          .
          <article-title>Automatic detection and segmentation of colonic polyps in wireless capsule images</article-title>
          .
          <source>Technical Report 10-37</source>
          , University of Coimbra, Portugal.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <surname>Figueiredo</surname>
            ,
            <given-names>P. N.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Figueiredo</surname>
            ,
            <given-names>I. N.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Prasath</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Tsai</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <year>2011</year>
          .
          <article-title>Automatic polyp detection in pillcam colon 2 capsule images and videos: Preliminary feasibility report</article-title>
          .
          <source>Diagnostic and Therapeutic Endoscopy</source>
          <year>2011</year>
          :
          <fpage>7pp</fpage>
          .
          <source>Article ID 182435.</source>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <surname>Kerre</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Nachtegael</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , eds.
          <year>2000</year>
          .
          <article-title>Fuzzy techniques in image processing</article-title>
          . Heidelberg: Physica-Verlag.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <surname>Niwa</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Tajiri</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Nakajima</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Yasuda</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>