<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Skin Detection Technique Based on HSV Color Model and SLIC Segmentation Method</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Kseniia Nikolskaia</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nadezhda Ezhova</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Anton Sinkov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Maksim Medvedev</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>South Ural State University</institution>
          ,
          <addr-line>Chelyabinsk</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
      </contrib-group>
      <fpage>123</fpage>
      <lpage>135</lpage>
      <abstract>
        <p>The paper is devoted to a new skin detection technique based on the HSV color model and the SLIC segmentation method. The skin detection algorithm is described, and experimental results are presented. The influence of the training images on skin detection is shown. The new skin detection algorithm, implemented in the Python language using the OpenCV library, is described.</p>
      </abstract>
      <kwd-group>
        <kwd>Skin detection</kwd>
        <kwd>HSV</kwd>
        <kwd>SLIC Superpixels</kwd>
        <kwd>Computer vision</kwd>
        <kwd>Pattern recognition</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Skin detection is the process of finding skin-colored pixels and regions in an
image or a video. Skin detection is used in applications such as person recognition,
body-part tracking, gesture analysis, adult content filtering, etc. [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        When the standard RGB color space is used, skin detection can be very
difficult under conditions of variable lighting and contrast. Therefore, the input
image must be converted to another color space [2-4] that is invariant, or at least
insensitive, to lighting changes, such as HSV [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>
        The implemented skin detector converts the image into the required color space
and then uses the image histogram to mark each pixel as skin or non-skin.
Image pixels are grouped into superpixels using the SLIC clustering method [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
Thus, the advantages of our skin detector are high processing speed and invariance
under rotation and lighting changes.
      </p>
      <p>The main steps of skin detection in the image are:
1. load the input image;
2. convert the image to the HSV color space;
3. generate the image histogram;
4. apply the classifier to determine the probability of a given pixel being
skin-colored;
5. divide the image into superpixels;
6. paint out superpixels where the sum of probabilities is less than the threshold.</p>
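      <p>The six steps above can be sketched end-to-end on a tiny synthetic image. The sketch below is illustrative only: it uses NumPy and the standard library instead of OpenCV, a regular grid of blocks stands in for the SLIC superpixels, and the image itself stands in for the labeled training pixels.</p>
      <preformat>
```python
# Minimal end-to-end sketch of the six steps (NumPy + stdlib only).
# Grid blocks stand in for SLIC superpixels; the image itself stands in
# for the labeled training pixels, so the demo is self-contained.
import colorsys
import numpy as np

rng = np.random.default_rng(0)

# 1. "Load" an input image: a tiny synthetic 8x8 RGB array in [0, 1].
img = rng.random((8, 8, 3))

# 2. Convert every pixel to HSV (colorsys returns H, S, V in [0, 1]).
hsv = np.array([[colorsys.rgb_to_hsv(*px) for px in row] for row in img])

# 3. Build a coarse H-S histogram from the "training" pixels.
bins = 4
h_idx = np.minimum((hsv[..., 0] * bins).astype(int), bins - 1)
s_idx = np.minimum((hsv[..., 1] * bins).astype(int), bins - 1)
hist = np.zeros((bins, bins))
np.add.at(hist, (h_idx, s_idx), 1)
hist /= hist.max()  # normalize lookups into [0, 1]

# 4. Per-pixel skin probability = histogram lookup (back projection).
pmap = hist[h_idx, s_idx]

# 5. "Superpixels": non-overlapping 4x4 grid blocks.
# 6. Paint out blocks whose summed probability falls below a threshold.
out = img.copy()
threshold = 0.5 * 16  # half of a block's maximum possible sum
for y in range(0, 8, 4):
    for x in range(0, 8, 4):
        if threshold > pmap[y:y + 4, x:x + 4].sum():
            out[y:y + 4, x:x + 4] = 0.0
```
      </preformat>
      <p>A real detector replaces step 3 with a histogram built from labeled skin images and step 5 with SLIC, as described in the following sections.</p>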
      <p>
        As a classifier, the Naive Bayes algorithm was chosen. The choice is due to
the facts that this algorithm is among the most accurate [7-9] and that a small amount of
data is required for its training [
        <xref ref-type="bibr" rid="ref10 ref11">10, 11</xref>
        ].
      </p>
      <p>
        The SLIC algorithm was chosen for image segmentation. It had the fewest errors
among those considered in [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], and it is also the fastest one [
        <xref ref-type="bibr" rid="ref12 ref13">12, 13</xref>
        ].
      </p>
      <p>The novelty of this research is the unique combination of technologies for
solving the problem of skin detection in an image.</p>
      <p>The paper has the following structure. In Section 2, the color model conversion
technique is described. In Section 3, the algorithm of image histogram generation
is given. In Section 4, the application of the Naive Bayes classifier is shown. In Section 5,
superpixel segmentation is conducted. In Section 6, the effects of the training
example on skin detection are considered and an analysis of the results is given. In Section 7,
the skin detection algorithm implemented in the Python language using the OpenCV library
is described. In Section 8, the results are summarized and directions for further
research are outlined.</p>
    </sec>
    <sec id="sec-2">
      <title>HSV Color Model</title>
      <p>
        The first step of skin detection is to convert the input image to the HSV color
space, as the most suitable for our research [14-16]. The HSV color model is
a cylindrical representation of the standard RGB model [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. HSV stands for
Hue, Saturation, and Value. The Hue is measured in degrees and varies from 0
to 360. It forms the base color. Saturation and Value (brightness) determine the
proximity to white and black, respectively. In the basic model they vary from 0
to 100, but in the OpenCV library used in the detector they vary from 0 to 255.
      </p>
      <p>
        In order to convert the image from RGB to HSV, each pixel in the image
is subjected to the following transformation [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
Preliminarily, the maximum and minimum of the R, G, B values, Cmax and
Cmin, should be found, and their difference M = Cmax - Cmin is calculated.
      </p>
      <p>The Hue is calculated:
H = 0, if M = 0;
H = 60 (G - B) / M, if Cmax = R;
H = 60 (B - R) / M + 120, if Cmax = G;
H = 60 (R - G) / M + 240, if Cmax = B.</p>
      <p>The Saturation is calculated:
S = M / Cmax, if Cmax ≠ 0; S = 0, if Cmax = 0. (1)</p>
      <p>The Value is calculated:
V = Cmax. (2)</p>
      <p>Calculating the above values for each pixel, we get an image in the HSV
color space.
</p>
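      <p>The transformation can be written down directly. The minimal per-pixel sketch below assumes inputs scaled to [0, 1] rather than OpenCV's 8-bit ranges, takes H = 0 when M = 0, and wraps the R branch modulo 360 so negative hues land in [0, 360).</p>
      <preformat>
```python
# Per-pixel RGB to HSV conversion following the formulas above.
# Inputs r, g, b are floats in [0, 1]; H is returned in degrees [0, 360),
# S and V in [0, 1]. OpenCV instead stores H in [0, 180) and S, V in
# [0, 255] for 8-bit images.
def rgb_to_hsv(r, g, b):
    cmax = max(r, g, b)
    cmin = min(r, g, b)
    m = cmax - cmin            # the difference M
    if m == 0:
        h = 0.0                # Hue is undefined for gray pixels
    elif cmax == r:
        h = (60 * (g - b) / m) % 360
    elif cmax == g:
        h = 60 * (b - r) / m + 120
    else:
        h = 60 * (r - g) / m + 240
    s = 0.0 if cmax == 0 else m / cmax   # equation (1)
    v = cmax                             # equation (2)
    return h, s, v
```
      </preformat>
      <p>For pure red, rgb_to_hsv(1, 0, 0) gives (0.0, 1.0, 1.0); the standard library function colorsys.rgb_to_hsv agrees with this sketch up to the scaling of H by 360.</p>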
    </sec>
    <sec id="sec-3">
      <title>Histogram</title>
      <p>
        The second step is to generate the input image histogram [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. The histogram [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]
is a graph of the distribution of digital image elements with different saturation.
The horizontal axis represents pixel property values, and the vertical axis
represents the number of pixels. The image representation in the form of a histogram
is just another way of displaying the image. It is worth noting that the color
histogram itself is not exhaustive in representing an image's features but,
nevertheless, it is widely used. In image processing it is used either in combination with
other parameters, such as color moments [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ], or in its modified version [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ], or
in its original form with a slightly different approach to data compilation [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ].
      </p>
      <p>Let us take an image of people of different nationalities and build a histogram
(see Fig. 1). Later, the histogram for the Value will not be taken into account,
in order to remove the effects of lighting changes.</p>
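      <p>A two-dimensional Hue-Saturation histogram of this kind can be sketched with NumPy; in OpenCV the analogous call is cv2.calcHist over channels [0, 1] of the HSV image. The pixel values and bin counts below are illustrative only.</p>
      <preformat>
```python
# Build a 2D Hue-Saturation histogram with NumPy; cv2.calcHist over
# channels [0, 1] computes the same thing for an HSV image.
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical training pixels already in OpenCV 8-bit HSV ranges:
# H in [0, 180), S in [0, 256). The Value channel is deliberately
# ignored to reduce sensitivity to lighting.
h = rng.integers(0, 180, size=1000)
s = rng.integers(0, 256, size=1000)

hist, h_edges, s_edges = np.histogram2d(
    h, s, bins=(30, 32), range=((0, 180), (0, 256))
)
hist = hist / hist.sum()  # normalize so entries can be read as p(H, S)
```
      </preformat>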
    </sec>
    <sec id="sec-4">
      <title>Classifier</title>
      <p>
        The next step is to select a classification function [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ] that will determine the
probability of a given pixel being skin-colored. Such a function can be a Bayesian
network [
        <xref ref-type="bibr" rid="ref23 ref24">23, 24</xref>
        ], a Multilayer Perceptron [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ], an SVM [26-28], AdaBoost [
        <xref ref-type="bibr" rid="ref29 ref30">29, 30</xref>
        ],
Naive Bayes, an RBF network [
        <xref ref-type="bibr" rid="ref31">31</xref>
        ], etc.
      </p>
      <p>In our detector the Naive Bayes algorithm is used. This method is based on
the Bayes theorem with the assumption that the features are independent. In
other words, the presence of any attribute in the class does not imply the
existence of another. For example, if you consider an apple of green color, round in
shape, and with a diameter of 5 cm, then all these parameters can be considered
independent, because it is possible to change one or more of them, but the
object will still be an apple.</p>
      <p>
        The Naive Bayes algorithm is easy to implement. It is faster than many other
algorithms [
        <xref ref-type="bibr" rid="ref32">32</xref>
        ]. Besides, despite the very simplified assumptions, Naive Bayes classifiers
often work well even in many difficult situations [
        <xref ref-type="bibr" rid="ref33 ref6">6, 33</xref>
        ]. Another advantage
of this algorithm is that it does not require many training examples [
        <xref ref-type="bibr" rid="ref34">34</xref>
        ].
      </p>
      <p>By the Bayes theorem, taking into account the class variable y and the dependent
feature vector (x1, ..., xn), we obtain the following relation:
p(y | x1, ..., xn) = p(y) p(x1, ..., xn | y) / p(x1, ..., xn). (4)</p>
      <p>Using the assumption that the features are independent, i.e.
p(xi | xi+1, ..., xn, y) = p(xi | y) for any i,
the expression can be simplified:
p(y | x1, ..., xn) = p(y) Π(i=1..n) p(xi | y) / p(x1, ..., xn). (5)</p>
      <p>Notice that the denominator p(x1, ..., xn) can be eliminated from equation (4),
because it is a constant:
p(y | x1, ..., xn) ∝ p(y) Π(i=1..n) p(xi | y). (6)</p>
      <p>
        Using the transformations described above, the classification rule can be
obtained [
        <xref ref-type="bibr" rid="ref35">35</xref>
        ]:
ŷ = argmax over y of p(y) Π(i=1..n) p(xi | y). (7)
      </p>
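      <p>The decision rule is short to write in code. In the sketch below, the two classes and the per-feature likelihood tables are made-up toy numbers, not trained values.</p>
      <preformat>
```python
# Sketch of the Naive Bayes rule: pick the class y maximizing
# p(y) * prod_i p(x_i | y). The priors and likelihood tables are
# illustrative toy values, not trained ones.
from math import prod

priors = {"skin": 0.3, "nonskin": 0.7}
# likelihood[y][i][value] = p(x_i = value | y) for two binary
# features, e.g. coarse Hue and Saturation bins.
likelihood = {
    "skin": [{0: 0.7, 1: 0.3}, {0: 0.6, 1: 0.4}],
    "nonskin": [{0: 0.2, 1: 0.8}, {0: 0.5, 1: 0.5}],
}

def classify(x):
    scores = {
        y: priors[y] * prod(likelihood[y][i][v] for i, v in enumerate(x))
        for y in priors
    }
    return max(scores, key=scores.get)
```
      </preformat>
      <p>With these numbers, classify((0, 0)) returns "skin", since 0.3 * 0.7 * 0.6 = 0.126 exceeds 0.7 * 0.2 * 0.5 = 0.07.</p>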
      <p>
        This rule will be applied to each pixel. Thus, we obtain the probability map
(PMap) [
        <xref ref-type="bibr" rid="ref36">36</xref>
        ] of the image (see Fig. 2).
      </p>
    </sec>
    <sec id="sec-4a">
      <title>Superpixel Segmentation</title>
      <p>
        Superpixel segmentation algorithms combine the pixels into atomic regions,
which will later be checked for being skin-colored. There are various algorithms
for segmentation [
        <xref ref-type="bibr" rid="ref13 ref37">13, 37</xref>
        ]. One of them is the SLIC Superpixels algorithm. It is an
adaptation of the k-means method to superpixels [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. The difference is that the
algorithm is more efficient, because the search area is reduced to the size of a
superpixel.
      </p>
      <p>
        The algorithm is simple to understand and implement [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Initially,
only the number of regions k is specified. The image is divided into k segments of
size S, and each center is located at the point with the lowest gradient. Then, for each
pixel, a 2S x 2S area is considered, and the pixel is attached to the segment with the
smallest distance Ds, calculated from equation (8) [
        <xref ref-type="bibr" rid="ref38">38</xref>
        ]. After
that, in each cluster, the center moves to the point with the lowest gradient, and
again all the pixels are rearranged [
        <xref ref-type="bibr" rid="ref12 ref4">12, 4</xref>
        ]. This is repeated until a certain threshold
of the minimum distance is reached.
      </p>
      <p>Ds = dlab + (m / S) dxy, (8)
dlab = sqrt((lk - li)^2 + (ak - ai)^2 + (bk - bi)^2), (9)
dxy = sqrt((xk - xi)^2 + (yk - yi)^2), (10)
where dlab is the distance in the CIELAB color space between pixels k and i (color
distance); dxy is the distance between the coordinates of pixels k and i (spatial
distance); and m is the parameter that affects the size of a superpixel: the greater m,
the more the spatial proximity affects the overall distance.</p>
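      <p>Equations (8)-(10) translate directly into code; in the sketch below, the two sample pixels are made-up (l, a, b, x, y) tuples and the defaults m = 10, S = 20 are illustrative.</p>
      <preformat>
```python
# Combined SLIC distance from equations (8)-(10).
from math import sqrt

def slic_distance(pk, pi, m=10, S=20):
    # pk, pi: (l, a, b, x, y) tuples; m weights spatial against color
    # proximity, S is the superpixel grid interval.
    lk, ak, bk, xk, yk = pk
    li, ai, bi, xi, yi = pi
    d_lab = sqrt((lk - li) ** 2 + (ak - ai) ** 2 + (bk - bi) ** 2)  # (9)
    d_xy = sqrt((xk - xi) ** 2 + (yk - yi) ** 2)                    # (10)
    return d_lab + (m / S) * d_xy                                   # (8)
```
      </preformat>
      <p>Two pixels of identical color at spatial distance 5 get Ds = (10 / 20) * 5 = 2.5 with the defaults; increasing m scales up the spatial term, which is why larger m produces more compact segments.</p>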
      <p>Applying all the above transformations to the image used in the previous section,
we get Figure 3 (for m = 10).</p>
      <p>The dependence of the segmentation on the coefficient m is shown in
Figure 4. It is clear that the greater m, the smaller the segments. The optimal
value is m = 10.</p>
      <p>During the segmentation, unnecessary regions were removed and possible
image noise was excluded. Thus, the skin will be selected entirely, and the other
pixels, which falsely have a high probability, will not be colored out. For
comparison, Figure 5 shows the results of the two methods. It is noticeable that the left
image is less informative. Segments were selected by a threshold function, i.e. if
the sum of the probabilities of the whole segment was greater than a certain
value, then the segment was not colored out and was recognized as skin. The right
image shows that not only the skin was colored out. This is because the people in
the training example have hair, so the hair in Figure 5 was also recognized,
as well as parts of the clothing, because their color is similar to the dark skin color
(see Fig. 5).</p>
    </sec>
    <sec id="sec-4b">
      <title>The Effect of the Training Example on the Result</title>
      <p>In this section, let us consider the dependence of the detection result on the variety
of skin colors in the training example. Let us take the original image with people
of different races (see Fig. 6) and test different training examples on it.
Comparative figures are presented below (see Fig. 7, Fig. 8, Fig. 9, Fig. 10).</p>
      <p>Experiments show that the image on which the histogram is generated
significantly affects the skin detection. For training, images of people of different
races should be chosen. The first and second tests are examples of incorrectly
selected training images. The most accurate result is obtained in the fourth test.
However, there are also false positives, for example, when the clothes' color is
similar to the skin color.</p>
    </sec>
    <sec id="sec-5">
      <title>Skin Detector Implementation</title>
      <p>
        The program is implemented in the Python (v3.6.1) programming language. This
language was chosen because it is commonly used for solving pattern
recognition problems and a large number of libraries are freely available, which greatly simplifies
the development. The following libraries are used in the program:
– OpenCV (v3.2.0) is applied for the basic calculations;
– Numpy (v1.12.1) accelerates and simplifies operations with arrays [
        <xref ref-type="bibr" rid="ref39">39</xref>
        ], which OpenCV operates with [
        <xref ref-type="bibr" rid="ref40">40</xref>
        ];
– Matplotlib (v2.0.2) is used for plotting Numpy arrays as images [
        <xref ref-type="bibr" rid="ref41">41</xref>
        ].
      </p>
      <p>
        To load the input image, the function cv2.imread is used. It is worth noting
that the loaded image is in the BGR color space [
        <xref ref-type="bibr" rid="ref42">42</xref>
        ] by default, so it should
be converted from the BGR color space to HSV (see Fig. 11). To do this, we
used the function cv2.cvtColor. Next, a mask of the training image is
generated using the function cv2.inRange, which sets the lower and upper bounds of the
colors involved in generating the histogram. This step is needed to remove the
background from the image. Then, the training image histogram is generated over
two channels, Hue and Saturation, via the function cv2.calcHist. The next step
is the probability map calculation. This operation is performed by the function
cv2.calcBackProject, to which the input image and the histogram are given as
parameters. It is indicated through which channels of the color space the back
projection should be made. Back projection is a way to determine how
well the pixels correspond to the histogram [
        <xref ref-type="bibr" rid="ref43">43</xref>
        ]. At the end of the
preparatory stage, the function cv2.ximgproc.createSuperpixelSLIC initializes the object
SSllic, and the SSllic.iterate method refines the superpixel boundaries.
      </p>
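      <p>What cv2.calcBackProject does at this step can be mimicked with plain NumPy indexing, which makes the probability-map construction explicit. The image and histogram below are random stand-ins, not real training data.</p>
      <preformat>
```python
# NumPy mimic of cv2.calcBackProject: each pixel is replaced by the
# value of its (H, S) histogram bin, yielding the probability map.
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical HSV channels in OpenCV 8-bit ranges:
# H in [0, 180), S in [0, 256).
h = rng.integers(0, 180, (6, 6))
s = rng.integers(0, 256, (6, 6))

bins_h, bins_s = 30, 32
hist = rng.random((bins_h, bins_s))  # stands in for the training histogram

h_idx = h * bins_h // 180  # map each pixel to its histogram bin
s_idx = s * bins_s // 256
pmap = hist[h_idx, s_idx]  # same height and width as the input image
```
      </preformat>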
      <p>After the above steps, we get the PMap and the segmented image. Using them,
we check the sum in each superpixel and color it out if the sum does not exceed the
threshold, which depends on the image size and the number of superpixels.</p>
      <p>The source code of the program is freely available on GitHub
at https://github.com/AntonSinkov/skin-recognition.
</p>
    </sec>
    <sec id="sec-6">
      <title>Conclusion</title>
      <p>During the research, a skin detection algorithm was developed. The influence of
the training images on the result is shown. The more people of different races the
training set contains (in approximately equal proportions), the more accurate the
results of skin detection. The usage of segmentation allowed selecting skin segments
without dividing them into individual pixels, which improves perception.</p>
      <p>However, this algorithm is not absolutely accurate. Segments with a color
similar to the skin color will be recognized as skin. Nevertheless, this
algorithm is suitable for preprocessing, because it is fast and accurate enough if a
correct training example is provided.</p>
      <p>In the future, it is planned to use artificial neural networks in the algorithm
to speed up the information processing.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Wang</surname>
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hu</surname>
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yao</surname>
            <given-names>S.</given-names>
          </string-name>
          <article-title>An adult image recognizing algorithm based on naked body detection</article-title>
          .
          <source>In: 2009 ISECS International Colloquium on Computing, Communication, Control, and Management (ISECS '09)</source>
          ,
          <fpage>8</fpage>
          -9
          <source>August</source>
          <year>2009</year>
          , Sanya, China;
          <year>2009</year>
          . p.
          <volume>197</volume>
          {
          <fpage>200</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <article-title>2. A Comparison of Color Models for Color Face Segmentation</article-title>
          .
          <source>Procedia Technology</source>
          <year>2013</year>
          ;
          <volume>7</volume>
          :
          <fpage>134</fpage>
          {
          <fpage>141</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Albiol</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Torres</surname>
          </string-name>
          .L,
          <string-name>
            <surname>Delp</surname>
            <given-names>E.J.</given-names>
          </string-name>
          <article-title>Optimum color spaces for skin detection</article-title>
          .
          <source>In: Proceedings 2001 International Conference on Image Processing, Thessaloniki, Greece, October</source>
          <volume>7</volume>
          -
          <issue>10</issue>
          ,
          <year>2001</year>
          ;
          <year>2001</year>
          . p.
          <volume>122</volume>
          {
          <fpage>124</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Zhang</surname>
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chew</surname>
            <given-names>S.E.</given-names>
          </string-name>
          , Xu
          <string-name>
            <given-names>Z.</given-names>
            ,
            <surname>Cahill</surname>
          </string-name>
          <string-name>
            <surname>ND</surname>
          </string-name>
          ,
          <article-title>SLIC superpixels for efficient graph-based dimensionality reduction of hyperspectral imagery;</article-title>
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>James</surname>
            <given-names>I.S.P.</given-names>
          </string-name>
          <string-name>
            <surname>Face</surname>
          </string-name>
          <article-title>Image Retrieval with HSV Color Space using Clustering Techniques</article-title>
          .
          <source>The SIJ Transactions on Computer Science Engineering &amp; its Applications</source>
          <year>2013</year>
          ;
          <volume>1</volume>
          (
          <issue>1</issue>
          ):
          <volume>17</volume>
          {
          <fpage>20</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Caruana</surname>
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Niculescu-Mizil</surname>
            <given-names>A</given-names>
          </string-name>
          .
          <article-title>An Empirical Comparison of Supervised Learning Algorithms</article-title>
          .
          <source>In: Proceedings of the 23rd International Conference on Machine Learning ICML '06; 2006</source>
          . p.
          <volume>161</volume>
          {
          <fpage>168</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Ashari</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Paryudi</surname>
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tjoa</surname>
            <given-names>A.M.</given-names>
          </string-name>
          <article-title>Performance Comparison between Nave Bayes, Decision Tree and k-Nearest Neighbor in Searching Alternative Design in an Energy Simulation Tool</article-title>
          .
          <source>International Journal of Advanced Computer Science and Applications (IJACSA)</source>
          <year>2013</year>
          ;
          <volume>4</volume>
          (
          <issue>11</issue>
          ):
          <volume>33</volume>
          {
          <fpage>39</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Domingos</surname>
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pazzani</surname>
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Beyond</surname>
          </string-name>
          <article-title>Independence: Conditions for the Optimality of the Simple Bayesian Classifier</article-title>
          . In: Machine Learning Morgan Kaufmann;
          <year>1996</year>
          . p.
          <volume>105</volume>
          {
          <fpage>112</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Miasnikof</surname>
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Giannakeas</surname>
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gomes</surname>
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Aleksandrowicza</surname>
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shestopalo</surname>
            <given-names>A.Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Alam</surname>
            <given-names>D.</given-names>
          </string-name>
          , et al.
          <article-title>Naive Bayes classifiers for verbal autopsies: comparison to physician-based classification for 21,000 child and adult deaths</article-title>
          .
          <source>BMC Medicine</source>
          <year>2015</year>
          ;
          <volume>13</volume>
          (
          <issue>1</issue>
          ):1{
          <fpage>9</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Ruparel</surname>
            <given-names>N.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shahane</surname>
            <given-names>N.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bhamare</surname>
            <given-names>D.P..</given-names>
          </string-name>
          <article-title>Learning from Small Data Set to Build Classification Model: A Survey</article-title>
          . In: International Conference on Recent Trends in Engineering &amp; Technology - 2013, ICRTET '
          <year>2013</year>
          ),
          <fpage>15</fpage>
          -
          <issue>16</issue>
          <year>March</year>
          ,
          <year>2013</year>
          , Kodaikanal, India;
          <year>2013</year>
          . p.
          <volume>23</volume>
          {
          <fpage>26</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Contributors</surname>
            <given-names>W.</given-names>
          </string-name>
          , Naive Bayes classifier;
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Achanta</surname>
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shaji</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Smith</surname>
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lucchi</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fua</surname>
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Susstrunk</surname>
            <given-names>S. SLIC</given-names>
          </string-name>
          <article-title>Superpixels Compared to State-of-the-Art Superpixel Methods</article-title>
          .
          <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
          <year>2012</year>
          Nov;
          <volume>34</volume>
          (
          <issue>11</issue>
          ):
          <volume>2274</volume>
          {
          <fpage>2282</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Wang</surname>
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gao</surname>
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ma</surname>
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Soomro</surname>
            <given-names>N.Q.</given-names>
          </string-name>
          <article-title>Superpixel segmentation: A benchmark</article-title>
          .
          <source>Signal Processing: Image Communication</source>
          <year>2017</year>
          ;
          <volume>56</volume>
          :
          <fpage>28</fpage>
          {
          <fpage>39</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Manjare</surname>
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chougule</surname>
            <given-names>S.R.</given-names>
          </string-name>
          <string-name>
            <surname>Skin</surname>
          </string-name>
          <article-title>Detection for Face Recognition Based on HSV Color Space</article-title>
          .
          <source>International Journal of Engineering Sciences &amp; Research Technology</source>
          <year>2013</year>
          ;
          <volume>2</volume>
          (
          <issue>7</issue>
          ):
          <year>1883</year>
          {
          <year>1887</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Ong P.M.B.</surname>
          </string-name>
          ,
          <string-name>
            <surname>Punzalan</surname>
            <given-names>E.R.</given-names>
          </string-name>
          <string-name>
            <surname>Comparative</surname>
          </string-name>
          <article-title>Analysis of RGB and HSV Color Models in Extracting Color Features of Green Dye Solutions</article-title>
          .
          <source>In: DLSU Research Congress</source>
          <year>2014</year>
          ,
          <fpage>6</fpage>
          -
          <issue>8</issue>
          <year>March</year>
          ,
          <year>2014</year>
          , Manila, Philippines;
          <year>2014</year>
          . p.
          <volume>1</volume>
          {
          <fpage>7</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <article-title>Comparative Study of Skin Color Detection and Segmentation in HSV and YCbCr Color Space</article-title>
          .
          <source>Procedia Computer Science</source>
          <year>2015</year>
          ;
          <volume>57</volume>
          :
          <fpage>41</fpage>
          {
          <fpage>48</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Midha</surname>
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vijay</surname>
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kumari</surname>
            <given-names>S.</given-names>
          </string-name>
          <article-title>Analysis of RGB and YCbCr color spaces using wavelet transform</article-title>
          .
          <source>In: 2014 IEEE International Advance Computing Conference (IACC)</source>
          <year>2014</year>
          , Gurgaon, India, February 21-22,
          <year>2014</year>
          . p.
          <fpage>1004</fpage>
          -
          <lpage>1007</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Sural</surname>
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Qian</surname>
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pramanik</surname>
            <given-names>S.</given-names>
          </string-name>
          <article-title>Segmentation and histogram generation using the HSV color space for image retrieval</article-title>
          .
          <source>In: Proceedings of the 2002 International Conference on Image Processing, ICIP</source>
          <year>2002</year>
          , Rochester, New York, USA, September 22-25,
          <year>2002</year>
          . p.
          <fpage>589</fpage>
          -
          <lpage>592</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Novak</surname>
            <given-names>C.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shafer</surname>
            <given-names>S.A.</given-names>
          </string-name>
          <article-title>Anatomy of a color histogram</article-title>
          .
          <source>In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR</source>
          <year>1992</year>
          , Proceedings, June 15-18,
          <year>1992</year>
          , Champaign, Illinois, USA;
          <year>1992</year>
          . p.
          <fpage>599</fpage>
          -
          <lpage>605</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <article-title>Image Retrieval based on the Combination of Color Histogram and Color Moment</article-title>
          .
          <source>International Journal of Computer Applications</source>
          <year>2012</year>
          ;
          <volume>58</volume>
          (
          <issue>3</issue>
          ):
          <fpage>27</fpage>
          -
          <lpage>34</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Han</surname>
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ma</surname>
            <given-names>K.</given-names>
          </string-name>
          <article-title>Fuzzy color histogram and its use in color image retrieval</article-title>
          .
          <source>IEEE Transactions on Image Processing</source>
          <year>2002</year>
          ;
          <volume>11</volume>
          (
          <issue>8</issue>
          ):
          <fpage>944</fpage>
          -
          <lpage>952</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Garg</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Roth</surname>
            <given-names>D</given-names>
          </string-name>
          .
          <article-title>Understanding Probabilistic Classifiers</article-title>
          .
          <source>In: Machine Learning: EMCL 2001, 12th European Conference on Machine Learning</source>
          , Freiburg, Germany, September 5-7,
          ,
          <year>2001</year>
          , Proceedings;
          <year>2001</year>
          . p.
          <fpage>179</fpage>
          -
          <lpage>191</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Lowd</surname>
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Domingos</surname>
            <given-names>P.M.</given-names>
          </string-name>
          <article-title>Naive Bayes models for probability estimation</article-title>
          .
          <source>In: Machine Learning, Proceedings of the TwentySecond International Conference (ICML</source>
          <year>2005</year>
          ), Bonn, Germany,
          August 7-11,
          <year>2005</year>
          . p.
          <fpage>529</fpage>
          -
          <lpage>536</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Friedman</surname>
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Geiger</surname>
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Goldszmidt</surname>
            <given-names>M.</given-names>
          </string-name>
          <article-title>Bayesian Network Classifiers</article-title>
          .
          <source>Machine Learning</source>
          <year>1997</year>
          Nov;
          <volume>29</volume>
          (
          <issue>2-3</issue>
          ):
          <fpage>131</fpage>
          -
          <lpage>163</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Pal</surname>
            <given-names>S.K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mitra</surname>
            <given-names>S.</given-names>
          </string-name>
          <article-title>Multilayer perceptron, fuzzy sets, and classification</article-title>
          .
          <source>IEEE Transactions Neural Networks</source>
          <year>1992</year>
          ;
          <volume>3</volume>
          (
          <issue>5</issue>
          ):
          <fpage>683</fpage>
          -
          <lpage>697</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Chao</surname>
            <given-names>C.F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Horng</surname>
            <given-names>M.H.</given-names>
          </string-name>
          <article-title>The Construction of Support Vector Machine Classifier Using the Firefly Algorithm</article-title>
          .
          <source>Computational Intelligence and Neuroscience</source>
          <year>2015</year>
          Jan;
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <surname>Fradkina</surname>
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Muchnik</surname>
            <given-names>I</given-names>
          </string-name>
          .
          <article-title>Support Vector Machines for Classification</article-title>
          .
          <source>DIMACS Series in Discrete Mathematics and Theoretical Computer Science</source>
          <year>2006</year>
          ;
          <volume>70</volume>
          :
          <fpage>13</fpage>
          -
          <lpage>20</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <surname>Hsu</surname>
            <given-names>C.W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chang</surname>
            <given-names>C.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lin</surname>
            <given-names>C.J.</given-names>
          </string-name>
          <article-title>A Practical Guide to Support Vector Classification</article-title>
          ;
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          29.
          <string-name>
            <surname>Kim</surname>
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Park</surname>
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Woo</surname>
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jeong</surname>
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Min</surname>
            <given-names>S.</given-names>
          </string-name>
          <article-title>Multi-class Classifier-Based Adaboost Algorithm</article-title>
          .
          <source>In: Intelligent Science and Intelligent Data Engineering - Second Sino-foreign-interchange Workshop, IScIDE 2011</source>
          , Xi'an, China, October 23-25,
          <year>2011</year>
          , Revised Selected Papers;
          <year>2011</year>
          . p.
          <fpage>122</fpage>
          -
          <lpage>127</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          30.
          <string-name>
            <surname>Schapire</surname>
            <given-names>R.E.</given-names>
          </string-name>
          In:
          <string-name>
            <surname>Scholkopf</surname>
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Luo</surname>
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vovk</surname>
            <given-names>V.</given-names>
          </string-name>
          , editors.
          <source>Explaining AdaBoost</source>
          . Berlin
          , Heidelberg: Springer Berlin Heidelberg;
          <year>2013</year>
          . p.
          <fpage>37</fpage>
          -
          <lpage>52</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          31.
          <string-name>
            <surname>Thomaz</surname>
            <given-names>C.E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Feitosa</surname>
            <given-names>R.Q.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Veiga</surname>
            <given-names>A</given-names>
          </string-name>
          .
          <article-title>Design of Radial Basis Function Network as Classifier in Face Recognition Using Eigenfaces</article-title>
          .
          <source>In: 5th Brazilian Symposium on Neural Networks (SBRN '98)</source>
          , December 9-11, 1998, Belo Horizonte, Brazil;
          <year>1998</year>
          . p.
          <fpage>118</fpage>
          -
          <lpage>123</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          32.
          <string-name>
            <surname>Narayanan</surname>
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Arora</surname>
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bhatia</surname>
            <given-names>A.</given-names>
          </string-name>
          <article-title>Fast and Accurate Sentiment Classification Using an Enhanced Naive Bayes Model</article-title>
          .
          <source>In: Intelligent Data Engineering and Automated Learning - IDEAL 2013 - 14th International Conference, IDEAL</source>
          <year>2013</year>
          , Hefei, China,
          <source>October 20-23</source>
          ,
          <year>2013</year>
          . Proceedings;
          <year>2013</year>
          . p.
          <fpage>194</fpage>
          -
          <lpage>201</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          33.
          <string-name>
            <surname>Phung</surname>
            <given-names>S.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bouzerdoum</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chai</surname>
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Watson</surname>
            <given-names>A</given-names>
          </string-name>
          .
          <article-title>Naive Bayes face/nonface classifier: a study of preprocessing and feature extraction techniques</article-title>
          .
          <source>In: Proceedings of the 2004 International Conference on Image Processing, ICIP</source>
          <year>2004</year>
          , Singapore,
          <source>October 24-27</source>
          ,
          <year>2004</year>
          ;
          <year>2004</year>
          . p.
          <fpage>1385</fpage>
          -
          <lpage>1388</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          34.
          <string-name>
            <surname>Park</surname>
            <given-names>D.C.</given-names>
          </string-name>
          <article-title>Image Classification Using Naive Bayes Classifier</article-title>
          .
          <source>International Journal of Computer Science and Electronics Engineering</source>
          (IJCSEE)
          <year>2016</year>
          ;
          <volume>4</volume>
          (
          <issue>3</issue>
          ):
          <fpage>135</fpage>
          -
          <lpage>139</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          35.
          <string-name>
            <surname>Sebe</surname>
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cohen</surname>
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Huang</surname>
            <given-names>T.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gevers</surname>
            <given-names>T.</given-names>
          </string-name>
          <article-title>Skin Detection: A Bayesian Network Approach</article-title>
          .
          <source>In: 17th International Conference on Pattern Recognition, ICPR</source>
          <year>2004</year>
          , Cambridge, UK, August 23-26,
          <year>2004</year>
          ;
          <year>2004</year>
          . p.
          <fpage>903</fpage>
          -
          <lpage>906</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          36.
          <string-name>
            <surname>Azzopardi</surname>
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Petkov</surname>
            <given-names>N.</given-names>
          </string-name>
          , editors.
          <source>The 16th International Conference, CAIP</source>
          <year>2015</year>
          , Valletta, Malta, September 2-4, 2015. Image Processing,
          <source>Computer Vision</source>
          , Pattern Recognition, and Graphics, Springer International Publishing;
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          37.
          <string-name>
            <surname>Neubert</surname>
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Protzel</surname>
            <given-names>P</given-names>
          </string-name>
          .
          <article-title>Superpixel benchmark and comparison</article-title>
          .
          <source>In: Proceedings of the Forum Bildverarbeitung</source>
          ,
          <year>January 2012</year>
          , Regensburg, Germany;
          <year>2012</year>
          . p.
          <fpage>502</fpage>
          -
          <lpage>513</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          38.
          <string-name>
            <surname>Achanta</surname>
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shaji</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Smith</surname>
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lucchi</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fua</surname>
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Susstrunk</surname>
            <given-names>S.</given-names>
          </string-name>
          <article-title>SLIC Superpixels</article-title>
          .
          <source>EPFL Technical Report 149300</source>
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          39.
          <string-name>
            <surname>Idris</surname>
            <given-names>I.</given-names>
          </string-name>
          , editor.
          <source>NumPy: Beginner's Guide - Third Edition</source>
          . Packt Publishing;
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          40.
          <string-name>
            <surname>Mordvintsev</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>AR K.</surname>
          </string-name>
          .
          <source>Introduction to OpenCV-Python Tutorials</source>
          ;
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          41.
          <string-name>
            <surname>Droettboom</surname>
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Caswell</surname>
            <given-names>T.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hunter</surname>
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Firing</surname>
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nielsen</surname>
            <given-names>J.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Varoquaux</surname>
            <given-names>N.</given-names>
          </string-name>
          , et al.,
          <source>matplotlib/matplotlib v2.0.2</source>
          ;
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          42.
          <string-name>
            <surname>Mordvintsev</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>AR K.</surname>
          </string-name>
          ,
          <article-title>Image file reading and writing</article-title>
          ;
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          43.
          <string-name>
            <surname>Mordvintsev</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>AR K.</surname>
          </string-name>
          ,
          <article-title>Back Projection</article-title>
          ;
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>