<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Development of an Automatic Pollen Classification System Using Shape, Texture and Aperture Features</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Celeste Chudyk</string-name>
          <email>celeste.chudyk@hs-mainz.de</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Hugo Castaneda</string-name>
          <email>hugo.castaneda@iut-dijon.u-bourgogne.fr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Romain Leger</string-name>
          <email>romain.leger@iut-dijon.u-bourgogne.fr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Islem Yahiaoui</string-name>
          <email>islem.yahiaoui@iut-dijon.u-bourgogne.fr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Frank Boochs</string-name>
          <email>boochs@hs-mainz.de</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Dijon Institute of Technology, Burgundy University</institution>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>i3mainz, University of Applied Sciences Mainz</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2015</year>
      </pub-date>
      <fpage>65</fpage>
      <lpage>74</lpage>
      <abstract>
        <p>Automatic detection and classification of pollen species is valuable for palynological allergen studies. Traditional labeling of different pollen species requires an expert biologist to classify particles by sight, and is therefore time-consuming and expensive. Here, an automatic process is developed which segments the particle contour and uses the extracted features for the classification process. We consider shape features, texture features, and aperture features and analyze which are useful. The texture features analyzed include: Gabor Filters, the Fast Fourier Transform, Local Binary Patterns, the Histogram of Oriented Gradients, and Haralick features. We have streamlined the process into one code base and developed multithreading functionality to decrease the processing time for large datasets.</p>
      </abstract>
      <kwd-group>
        <kwd>Image processing</kwd>
        <kwd>Machine learning</kwd>
        <kwd>Pollen</kwd>
        <kwd>Texture classification</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Currently, pollen count information is usually limited to generalizing all pollen
types with no access to information regarding particular species. In order to
differentiate species, typically a trained palynologist would have to manually
count samples using a microscope. Advances in image processing and machine
learning enable the development of an automatic system that, given a digital
image from a bright-field microscope, can automatically detect and describe the
species of pollen particles present.</p>
      <p>
        We build upon previous work from our lab, which outlined the
structure for a complete personal pollen tracker [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. For image classification,
preliminary results have shown that extraction of both shape features and
aperture features leads to useful results [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. To expand on this research, we have built
a software process that considers not only shape and aperture features but also
multiple texture features. The range of tested image types has also been
greatly expanded in order to build a model capable of classifying a highly variable
dataset.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Overview</title>
      <p>The steps of our process are as follows: 1. Image acquisition and particle
segmentation, 2. Feature extraction, and 3. Classification.</p>
      <p>Our process begins with scanning glass slides of the various pollen species
with a digital microscope, then segmenting these images to gather samples of
individual pollen particles. These images are then further segmented to
identify the pollen boundary, and the area within this boundary is used for feature
extraction. 18 shape features, texture features including the Fast Fourier
Transform, Local Binary Patterns, Histogram of Oriented Gradients, and Haralick
features, as well as aperture features are used. These features are then trained
using supervised learning to build a model for the 5 pollen species sampled. The
model is then tested with ten-fold cross validation. The process is illustrated in
Figure 1.</p>
    </sec>
    <sec id="sec-3">
      <title>Image Acquisition and Particle Segmentation</title>
      <p>Five different species (Alder, Birch, Hazel, Mugwort, and Sweet Grass) have
been stained and prepared on glass slides for use with a common digital
bright-field microscope. In order to build a robust model, all species had sample images
derived from three distinct laboratory slides (a total of 600 sample images
obtained from 15 different slides).</p>
      <p>For particle segmentation, each digital image is processed in order to locate
and segment out a confining square surrounding a pollen particle. First, a
median blur and a Gaussian blur are applied to a negative of the image in order to
remove smaller particles that are background noise (often dirt or imperfections
on the background). Next, a threshold is applied to the image, using the Otsu
algorithm to automatically detect the histogram peak. The returned image is an
optimized binary image. A second set of filters is then applied using
morphological operators (iterations of erosions and dilations) to fill in the particle area.
Finally, the image is converted to have a white background in preparation for
further processing steps.</p>
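The thresholding step above relies on Otsu's criterion; here is a minimal NumPy sketch of that criterion (the actual pipeline applies it through OpenCV's threshold function, after the median and Gaussian blurs):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximizing between-class variance,
    the criterion behind Otsu's method (cv2.THRESH_OTSU)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                  # probability of class 0 per threshold
    mu = np.cumsum(prob * np.arange(256))    # cumulative mean
    mu_t = mu[-1]                            # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b)))
```

For a bimodal histogram the returned value falls between the two modes; binarizing at that value separates particles from background.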
      <p>A blob detection algorithm is now applied in order to extract a small image
surrounding each particle. This algorithm is based on four attributes: Area,
Circularity, Convexity, and Inertia Ratio, with parameters for "minimum" and
"maximum" values of each. By setting the parameters to the expected
characteristics of pollen grains, the smaller images are then found and extracted.</p>
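As an illustration, two of the four attributes (Area and Inertia Ratio) can be approximated over connected components; the thresholds below are placeholders, not the values used in the actual system:

```python
import numpy as np
from scipy import ndimage

def find_blobs(binary, min_area=20, min_inertia=0.2):
    """Keep connected components whose area and inertia ratio fall
    inside the expected range for pollen grains."""
    labels, n = ndimage.label(binary)
    blobs = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        area = len(xs)
        if area < min_area:
            continue
        # inertia ratio: smaller over larger eigenvalue of the point covariance;
        # near 1 for round blobs, near 0 for elongated ones
        evals = np.linalg.eigvalsh(np.cov(np.vstack([xs, ys])))
        ratio = evals[0] / evals[1] if evals[1] > 0 else 1.0
        if ratio >= min_inertia:
            blobs.append((xs.mean(), ys.mean(), area))
    return blobs
```

The same four attributes, each with minimum/maximum bounds, are what OpenCV's SimpleBlobDetector parameters expose.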
      <p>The last filter used on the resulting images of particles is depicted in Figure
3. Because the pollen grains settle into the slide adhesive at different depths,
some particles will be out of focus. These blurry images provide insufficient
data, especially concerning texture features, so we remove them from our
analysis. A blur detection algorithm was developed and applied to each image: a
Laplacian filter with a manually determined threshold value determines which
images are too blurry and removes them from further processing steps.</p>
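The blur check can be sketched as the variance of the Laplacian response; the threshold of 100.0 below is a hypothetical stand-in for the manually determined value:

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of the 4-neighbour Laplacian; low values indicate blur."""
    g = gray.astype(float)
    lap = (g[:-2, 1:-1] + g[2:, 1:-1] +
           g[1:-1, :-2] + g[1:-1, 2:] - 4.0 * g[1:-1, 1:-1])
    return lap.var()

def is_too_blurry(gray, threshold=100.0):
    """Flag images whose Laplacian variance falls below the threshold."""
    return laplacian_variance(gray) < threshold
```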
      <p>Lastly, the contour surrounding each pollen particle is identified using OpenCV's
findContours() method.</p>
    </sec>
    <sec id="sec-4">
      <title>Feature Extraction</title>
      <p>
        Shape features
We have used 18 shape features already identified to be useful through previous
iterations of our research [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. The 18 selected were based on research into developing
an identification process for the Urticaceae family of pollen [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], as
well as research into developing universal shape descriptors [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>Shape features used:
Perimeter (P) Length of the contour given by OpenCV's arcLength() function
Area (A) Number of pixels contained inside the contour
Compactness (C) Ratio of the squared perimeter to the area, P^2/A
Roundness (R) 4*pi*A/P^2</p>
      <p>
        Roundness/Circularity Ratio (RC) Another measure of roundness, see [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]: RC = (P - sqrt(P^2 - 4*pi*A)) / (P + sqrt(P^2 - 4*pi*A))
      </p>
      <p>
Mean Distance (S) Average of the distance between the center of gravity and
the contour
Minimum Distance (Smin) Smallest distance between the center of gravity
and the contour
Maximum Distance (Smax) Longest distance between the center of gravity
and the contour
Ratio1 (R1) Ratio of maximum distance to minimum distance, Smax/Smin
Ratio2 (R2) Ratio of maximum distance to mean distance, Smax/S
Ratio3 (R3) Ratio of minimum distance to mean distance, Smin/S
Diameter (D) Longest distance between any two points along the contour
Radius Dispersion (RD) Standard deviation of the distances between the
center of gravity and the contour
Holes (H) Sum of differences between the Maximum Distance and the distance
between the center of gravity and the contour
Euclidean Norm (EN2) Second Euclidean Norm
RMS Mean RMS mean size
Mean Distance to Boundary Average distance between every point within
the area and the contour
Complexity (F) Shape complexity measure based on the ratio of the area and
the mean distance to boundary</p>
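Several of these descriptors can be sketched for a contour given as an (N, 2) array of points (a NumPy illustration; the system itself works on contours returned by OpenCV):

```python
import numpy as np

def shape_features(contour):
    """Perimeter, area, roundness and distance statistics for a
    closed contour given as an (N, 2) array of (x, y) points."""
    closed = np.vstack([contour, contour[:1]])
    seg = np.diff(closed, axis=0)
    perimeter = np.hypot(seg[:, 0], seg[:, 1]).sum()
    x, y = contour[:, 0], contour[:, 1]
    # shoelace formula for the enclosed area
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    center = contour.mean(axis=0)
    d = np.hypot(x - center[0], y - center[1])
    return {"P": perimeter, "A": area,
            "R": 4 * np.pi * area / perimeter ** 2,   # roundness
            "S": d.mean(), "Smin": d.min(), "Smax": d.max(),
            "R1": d.max() / d.min(), "RD": d.std()}
```

A perfect circle gives roundness near 1 and Smax/Smin near 1; elongated or irregular grains score lower.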
      <p>
        Texture feature extraction
A variety of texture features were selected due to their performance in prior
research [
        <xref ref-type="bibr" rid="ref10 ref11 ref7 ref8">11,10,7,8</xref>
        ]. The texture features extracted included: Gabor Filters (GF),
the Fast Fourier Transform (FFT), the Local Binary Pattern (LBP), the
Histogram of Oriented Gradients (HOG), and Haralick features.
      </p>
      <p>
        Gabor Filters Gabor filters have been proven useful in image segmentation
and texture analysis [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. The Gabor filter function consists of the application
of 5 different mask sizes and 8 mask orientations (see Figure 4) in order to
produce output images. For each of the 40 resulting images, we calculate the
local energy over the entire image (the sum of the squares of the gray-level pixel
intensities) and the mean amplitude (the sum of the amplitudes divided by the
total number of pixels). In addition to these 80 values, we also store the total
local energy for each of the 8 directions as well as the direction in which the local
energy is at its maximum.
Fourier Transform Fourier Transforms translate an image from the spatial
domain into the frequency domain, and are useful because lower frequencies
represent an area of an image with consistent intensity (relatively featureless areas)
and higher frequencies represent areas of change [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Just as in spatial analysis,
we cannot compare images directly, but first need to extract features. In the
frequency domain, we likewise extract useful information through analysis of
frequency peaks. Here, we apply a Fast Fourier Transform to the image, apply
a logarithmic transformation, and create a graph of the resulting frequency
domain. After taking the highest 10 frequency peaks, we compute the differences
between the peaks and store these values, as well as the mean and the variance
of the differences.
      </p>
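The FFT feature extraction can be sketched as follows (a minimal NumPy illustration; selecting peaks directly from the sorted log-magnitude values is a simplification of the peak analysis described above):

```python
import numpy as np

def fft_features(gray, n_peaks=10):
    """Log-magnitude spectrum -> highest peaks -> successive
    differences, plus their mean and variance."""
    spectrum = np.fft.fftshift(np.fft.fft2(gray.astype(float)))
    log_mag = np.log1p(np.abs(spectrum))
    peaks = np.sort(log_mag.ravel())[-n_peaks:]   # highest peaks, ascending
    diffs = np.diff(peaks)
    return np.concatenate([diffs, [diffs.mean(), diffs.var()]])
```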
      <p>
        Haralick Features Haralick features [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] are determined by computations over
the GLCM (Grey-Level Co-Occurrence Matrix). Here, we use: the angular
second moment, contrast, correlation, sum of squares: variance, inverse difference
moment, sum average, sum variance, sum entropy, entropy, difference variance,
difference entropy, measure of correlation 1, and measure of correlation 2. These
are 13 of the 14 original features developed by Haralick; the 14th is typically
left out of computations due to uncertainty in the metric's stability.
Histogram of Oriented Gradients (HOG) The Histogram of Oriented Gradients
is calculated by first determining gradient values over a 3 by 3 Sobel mask. Next,
bins are created for the cell histograms; here, 10 bins were used. The
gradient angles are divided into these bins, and the gradient magnitudes of the
pixels are used to weight the orientation votes. After normalization, the values
are flattened into one feature vector.
      </p>
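A minimal sketch of the GLCM and three of the thirteen Haralick statistics, for a single horizontal pixel offset (the full computation uses all thirteen statistics and typically several offsets):

```python
import numpy as np

def glcm(gray, levels=8):
    """Normalized grey-level co-occurrence matrix for the
    horizontal (0, 1) offset after quantizing to `levels` greys."""
    q = gray.astype(int) * levels // 256
    m = np.zeros((levels, levels))
    np.add.at(m, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    return m / m.sum()

def haralick_subset(p):
    """Angular second moment, contrast and entropy of a GLCM."""
    i, j = np.indices(p.shape)
    asm = (p ** 2).sum()
    contrast = ((i - j) ** 2 * p).sum()
    nz = p[p > 0]
    entropy = -(nz * np.log2(nz)).sum()
    return asm, contrast, entropy
```

A perfectly uniform image co-occurs with itself only, giving maximal ASM and zero contrast and entropy; textured images spread mass across the matrix.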
      <p>
        Local Binary Pattern (LBP) To obtain local binary patterns, a 3 by 3 pixel
window is moved over the image, and the value of the central pixel is compared
to the value of its neighbors. In the case that the neighbor is of lower value,
it is assigned a zero, and in the case of a higher value, a one. This string of
eight numbers ("00011101" for instance) is the determined local pattern. The
frequency of the occurrence of each pattern is used as the texture description.
Aperture Detection The number and type of apertures present on the pollen
surface is a typical feature used by palynologists in order to determine the pollen
species. Therefore, it seems useful to also build an automatic aperture detection
function in order to identify and count apertures as an additional feature set.
Preliminary work identifying apertures [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] has shown potential for this analysis.
First, a moving window segments the pollen image into smaller areas. Each
smaller image is manually labeled as an aperture or not an aperture. Texture
features are extracted from these smaller images, including those from a Fast
Fourier Transform (FFT), Gabor Filters (GF), the Local Binary Pattern (LBP),
the Histogram of Oriented Gradients (HOG), and Haralick features. A supervised
learning process (using support vector machines) then creates a
model for each of the four species expected to include apertures on the surface.
Once an unlabeled pollen image is given to be classified, the system again uses a
moving window to break the image into subsections. These smaller sections
are then fed into the generated model, and four values are returned for each
detected aperture, corresponding to the probability that the aperture is of type
Alder, Birch, Hazel, or Mugwort.
      </p>
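The LBP step can be sketched with a vectorized 3-by-3 comparison (equal-valued neighbours are counted as 1 here, a convention the description above leaves open):

```python
import numpy as np

def lbp_histogram(gray):
    """8-neighbour local binary patterns over a 3x3 window;
    returns the 256-bin pattern frequency histogram."""
    g = gray.astype(int)
    c = g[1:-1, 1:-1]
    # neighbours read clockwise from the top-left corner
    neighbours = [g[:-2, :-2], g[:-2, 1:-1], g[:-2, 2:], g[1:-1, 2:],
                  g[2:, 2:], g[2:, 1:-1], g[2:, :-2], g[1:-1, :-2]]
    codes = np.zeros_like(c)
    for n in neighbours:
        codes = (codes << 1) | (n >= c)   # 1 if neighbour is not lower
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```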
    </sec>
    <sec id="sec-5">
      <title>Classification</title>
      <p>Once the shape, texture, and aperture features have been calculated, they are
combined into a CSV file. A data set of 5 species with 40 sample pollen
images from 3 separate sample slides led to a total of 600 samples, each with
252 extracted features. A supervised learning process used this data for model
creation, which was then tested using ten-fold cross validation. Both support
vector machines and a random forest classifier showed promising (and very
similar) results; for the results reported here a random forest classifier was used
due to faster processing on the larger data sets. The n_estimators parameter for
this method was set to a typical size of 100 (increasing this number did lead to
slightly improved results but also dramatically increased processing times).</p>
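This step can be sketched with scikit-learn; the feature matrix below is a synthetic stand-in (5 well-separated clusters rather than the real 600-by-252 feature table):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# 5 "species", 120 samples each, drawn from well-separated clusters
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(120, 20)) for c in range(5)])
y = np.repeat(np.arange(5), 120)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)   # ten-fold cross validation
print(f"accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```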
    </sec>
    <sec id="sec-6">
      <title>Results</title>
      <p>Using a random forest classifier on a total of 600 samples (120 for each
species) and 252 features, a model was generated with an accuracy of 87% ± 2%.
Considering that the samples were intentionally selected for variability in their
appearance and background (see Figure 2), this indicates a robust,
reliable model that shows promise for future expansion to include
datasets collected from an outdoor environment.</p>
      <p>The dataset was further modified into different versions in order to test the
results using only subsets of the features available.</p>
      <p>The table below shows the accuracies of the trained models. Using only the 18
shape features, an accuracy of 64% ± 3% was achieved, and adding texture
information either through Gabor Filters or Haralick features substantially improved
the results.</p>
      <p>Features / Accuracy
Shape features: 64% ± 3%
Shape and Gabor: 76% ± 2%
Shape and FFT: 65% ± 2%
Shape and LBP: 65% ± 3%
Shape and HOG: 67% ± 2%
Shape and Haralick: 87% ± 3%
Shape and Aperture: 67% ± 2%</p>
    </sec>
    <sec id="sec-7">
      <title>Conclusion</title>
      <p>Through this research, we have tested an expanded sample set of 5 species of
pollen particles and used shape, texture, and aperture features for
classification. Use of all features led to an accuracy of 87% ± 2%. Through testing of
individual texture features in combination with shape features, it was found that
using only the shape and Haralick features resulted in an accuracy of 87% ± 3%.
Gabor Filters also proved to be a useful feature, as seen through the improved
accuracy compared to using the shape features alone. Surprisingly, the other
texture features as well as the aperture features did not result in significant
accuracy gains. One next step would be to investigate under which exact
conditions certain texture features prove useful. In the case of the aperture
features, one known limitation is that the aperture types were trained on a more
limited dataset. Because the aperture detection technique developed did
have positive results in determining correct aperture positions, it would be
interesting to retrain the aperture types on a wider dataset and see whether this results in
a more useful set of extracted features. Furthermore, extending the dataset not
only beyond 600 images but especially to include more than three microscope
slides per species would guard against possible overfitting to particular slide
conditions. Future research would also include application of this process to data
collected outside of a laboratory environment, as well as expansion to include
more pollen species.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1. da Fontoura Costa,
          <string-name>
            <given-names>L.</given-names>
            ,
            <surname>Cesar</surname>
          </string-name>
          <string-name>
            <surname>Jr.</surname>
          </string-name>
          , R.M.:
          <article-title>Shape Classification and Analysis: Theory and Practice</article-title>
          . CRC Press, Inc.,
          <string-name>
            <surname>Boca</surname>
            <given-names>Raton</given-names>
          </string-name>
          , FL, USA, 2nd edn. (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Haas</surname>
            ,
            <given-names>N.Q.</given-names>
          </string-name>
          :
          <source>Automated Pollen Image Classification. Master's thesis</source>
          , University of Tennessee (
          <year>2011</year>
          ), http://trace.tennessee.edu/utk_gradthes/1113
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Haralick</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shanmugam</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dinstein</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          :
          <article-title>Textural features for image classification</article-title>
          .
          <source>Systems, Man and Cybernetics</source>
          , IEEE Transactions on SMC-
          <volume>3</volume>
          (
          <issue>6</issue>
          ),
          <volume>610</volume>
          –621 (Nov
          <year>1973</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Lozano-Vega</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Benezeth</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Marzani</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boochs</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Classification of pollen apertures using bag of words</article-title>
          . In: Petrosino,
          <string-name>
            <surname>A</surname>
          </string-name>
          . (ed.)
          <source>Image Analysis and Processing – ICIAP 2013, Lecture Notes in Computer Science</source>
          , vol.
          <volume>8156</volume>
          , pp.
          <volume>712</volume>
          –
          <fpage>721</fpage>
          . Springer Berlin Heidelberg (
          <year>2013</year>
          ), http://dx.doi.org/10.1007/978-3-
          <fpage>642</fpage>
          -41181-6_
          <fpage>72</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Lozano-Vega</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Benezeth</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Marzani</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boochs</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Analysis of relevant features for pollen classification</article-title>
          . In: Iliadis,
          <string-name>
            <given-names>L.</given-names>
            ,
            <surname>Maglogiannis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            ,
            <surname>Papadopoulos</surname>
          </string-name>
          , H. (eds.)
          <article-title>Artificial Intelligence Applications and Innovations</article-title>
          ,
          <source>IFIP Advances in Information and Communication Technology</source>
          , vol.
          <volume>436</volume>
          , pp.
          <volume>395</volume>
          –
          <fpage>404</fpage>
          . Springer Berlin Heidelberg (
          <year>2014</year>
          ), http://dx.doi.org/10.1007/978-3-
          <fpage>662</fpage>
          -44654-6_
          <fpage>39</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>Lozano</given-names>
            <surname>Vega</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            ,
            <surname>Benezeth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            ,
            <surname>Uhler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            ,
            <surname>Boochs</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            ,
            <surname>Marzani</surname>
          </string-name>
          ,
          <string-name>
            <surname>F.</surname>
          </string-name>
          :
          <article-title>Sketch of an automatic image based pollen detection system</article-title>
          .
          <source>In: 32. Wissenschaftlich-Technische Jahrestagung der DGPF</source>
          . vol.
          <volume>21</volume>
          , pp.
          <volume>202</volume>
          –
          <fpage>209</fpage>
          . Potsdam, Germany (Mar
          <year>2012</year>
          ), https://hal.archives-ouvertes.fr/hal-00824014
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Maillard</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Comparing texture analysis methods through classification</article-title>
          .
          <source>Photogrammetric Engineering &amp; Remote Sensing</source>
          <volume>69</volume>
          (
          <issue>4</issue>
          ),
          <volume>357</volume>
          –
          <fpage>367</fpage>
          (
          <year>2003</year>
          ), http://www. ingentaconnect.com/content/asprs/pers/2003/00000069/00000004/art00003
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Marcos</surname>
            ,
            <given-names>J.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nava</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cristobal</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Redondo</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Escalante-Ramírez</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bueno</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Deniz</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gonzalez-Porto</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pardo</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chung</surname>
            ,
            <given-names>F.e.a.</given-names>
          </string-name>
          :
          <article-title>Automated pollen identification using microscopic imaging and texture analysis</article-title>
          .
          <source>Micron</source>
          <volume>68</volume>
          ,
          <issue>36</issue>
          –
          <fpage>46</fpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>O</given-names>
            <surname>'Higgins</surname>
          </string-name>
          ,
          <string-name>
            <surname>P.</surname>
          </string-name>
          :
          <article-title>Methodological issues in the description of forms</article-title>
          . In: Lestrel, P.E. (ed.)
          <source>Fourier Descriptors and their Applications in Biology</source>
          , pp.
          <volume>74</volume>
          –
          <fpage>105</fpage>
          . Cambridge University Press (
          <year>1997</year>
          ), http://dx.doi.org/10.1017/CBO9780511529870.005, Cambridge Books Online
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Redondo</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bueno</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chung</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nava</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Marcos</surname>
            ,
            <given-names>J.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cristobal</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , Rodríguez, T.,
          <string-name>
            <surname>Gonzalez-Porto</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pardo</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Deniz</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Escalante-Ramírez</surname>
          </string-name>
          , B.:
          <article-title>Pollen segmentation and feature evaluation for automatic classification in bright-field microscopy</article-title>
          .
          <source>Computers and Electronics in Agriculture</source>
          <volume>110</volume>
          (
          <issue>0</issue>
          ),
          <volume>56</volume>
          –
          <fpage>69</fpage>
          (
          <year>2015</year>
          ), http://www.sciencedirect.com/science/article/pii/S0168169914002348
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Rodriguez-Damian</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cernadas</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Formella</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fernandez-Delgado</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>SaOtero</surname>
          </string-name>
          , P.D.:
          <article-title>Automatic detection and classification of grains of pollen based on shape and texture</article-title>
          .
          <source>IEEE Trans. Syst</source>
          .,
          <string-name>
            <surname>Man</surname>
          </string-name>
          , Cybern. C
          <volume>36</volume>
          (
          <issue>4</issue>
          ),
          <volume>531</volume>
          –542 (Jul
          <year>2006</year>
          ), http://dx.doi.org/10.1109/TSMCC.
          <year>2005</year>
          .855426
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Zheng</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhao</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Features extraction using a Gabor filter family</article-title>
          . In: Hamza,
          <string-name>
            <surname>M.H</surname>
          </string-name>
          . (ed.)
          <source>Proceedings of the 6th IASTED International Conference</source>
          . pp.
          <volume>139</volume>
          –
          <fpage>144</fpage>
          . Signal and
          <string-name>
            <given-names>Image</given-names>
            <surname>Processing</surname>
          </string-name>
          , Acta Press (
          <year>2004</year>
          ), www.paper.edu.cn/scholar/downpaper/wangjiaxin-13
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>