<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Sabanci-Okan System at ImageClef 2012: Combining Features and Classifiers for Plant Identification</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Berrin Yanikoglu</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Erchan Aptoula</string-name>
          <email>erchan.aptoula@okan.edu.tr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Caglar Tirkaz</string-name>
          <email>caglartg@sabanciuniv.edu</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Okan University</institution>
          ,
          <addr-line>Istanbul, Turkey, 34959</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Sabanci University</institution>
          ,
          <addr-line>Istanbul, Turkey 34956</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2012</year>
      </pub-date>
      <abstract>
        <p>We describe our participation in the plant identification task of ImageClef 2012. We submitted two runs, one fully automatic and another where human assistance was provided for the images in the photo category. We did not use the meta-data in either system, in order to explore the extent of image analysis for the plant identification problem. Our approach in both runs employs a variety of shape, texture and color descriptors (117 in total). We found shape to be very discriminative for isolated leaves (scan and pseudo-scan categories), followed by texture. While we experimented with color, we could not make use of the color information. We employed the watershed algorithm for segmentation, in slightly different forms for the automatic and human-assisted systems. Our systems obtained the best overall results in both the automatic and manual categories, with 43% and 45% identification accuracies respectively. We also obtained the best results on the scanned image category, with 58% accuracy.</p>
      </abstract>
      <kwd-group>
        <kwd>Plant identification</kwd>
        <kwd>mathematical morphology</kwd>
        <kwd>classifier combination</kwd>
        <kwd>support vector machines</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        A content-based image retrieval (CBIR) system for plants would be very useful
for plant enthusiasts or botanists who would like to learn more about a plant they
encounter. Until the first ImageCLEF Plant Identification Competition,
organized in 2011 [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], existing research concentrated on isolated leaf identification [
        <xref ref-type="bibr" rid="ref12 ref17 ref20 ref22 ref4 ref7">7,
4, 12, 17, 22, 20</xref>
        ], while few systems attempted the identification of unconstrained
whole or partial plant images [
        <xref ref-type="bibr" rid="ref11 ref19">11, 19</xref>
        ].
      </p>
      <p>
        As with the first competition, the plant identification task in ImageCLEF
2012 consisted of identifying images of plants captured by different
means: scans, scan-like photos (called pseudo-scans) and unrestricted photos, as
shown in Fig. 1. In this way, the competition aimed to benchmark the state of the art
in both isolated leaf shape recognition and unrestricted plant image recognition.
The details of this competition are described in [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>
        The content-based plant identification problem faces many challenges, such as
color, illumination and size variations, that are common to other CBIR problems,
as well as some specific ones, such as variations in the composition of the
leaves that change the plant shape. In addition, color is less
discriminative in the plant retrieval problem than in many other retrieval
problems, since most plants have green tones as their main color, with subtle
differences. In the rare cases where color is discriminative for a certain plant,
the leaves of that plant may still change color
due to seasonal variations. Shape features are useful in identifying isolated leaves,
but not very useful in identifying full or partial plant images [
        <xref ref-type="bibr" rid="ref11 ref21">11, 21</xref>
        ]. In that
regard, isolated leaf identification appears to be a significantly simpler problem
than the identification of partial or full plants.
      </p>
      <p>As a collaboration between Sabancı and Okan Universities, we submitted two
runs, one fully automatic and another where human assistance was provided
for the images in the photo category. The only distinction between our two runs
is that the images in the photo category undergo a human-assisted segmentation
process, as explained in Section 3.2. Hence, in the remainder of this paper, we
describe a single system, since everything besides the segmentation of photos
is the same in the two systems corresponding to the two submitted runs.</p>
      <p>
        The system is designed as two separate sub-systems, one for scan and
scan-like images and another for photos. Since the meta-data included the
acquisition type, an input image is automatically sent to the correct sub-system,
but that was the extent of our use of the meta-data. Our system shares many
common parts with the system we submitted to ImageCLEF2011, described in [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ].
In this paper, we give an overview of the system, with detailed descriptions of
the changes made this year. The main changes are as follows:
      </p>
      <p>In ImageCLEF2011, we had concentrated our efforts almost exclusively
on scan and pseudo-scan images, while photos were neglected due to lack of
time. This year, we developed automatic and human-assisted segmentation
algorithms for the photo category. In both segmentation algorithms, we aimed
to segment the image so as to leave only a single leaf, allowing us to use our isolated leaf
recognition engine. The segmentation was reasonably successful, in that our
system obtained 5.3% accuracy on photos last year, while the automatic
photo identification accuracy was 16% this year. The segmentation steps are
explained in Section 3.</p>
      <p>For recognizing photographs, we found that many of our other feature
descriptors, especially global shape descriptors, would not be useful if the photo
showed a partial plant. On the other hand, for the scan and scan-like categories
containing isolated leaves, all three main feature categories are expected to be
useful: shape, texture and color. After experimenting with a large number of
descriptors, we selected a 117-dimensional feature vector for the scan/scan-like
sub-system and a subset of these features for the photo category. The features used in
our system are explained in Section 5.</p>
      <p>Another major problem we encountered last year was over-fitting
in classifier training: our results in the official tests were seriously
lower than the results obtained with cross-validation experiments
on the training set (around a 40% difference in accuracy). This year, our feature
selection and classifier optimization approaches were run on a separate test set,
partitioned from the original training data such that the two did not
include images of the same individual plant (e.g. the same exact tree). For the
same reason, we have excluded our powerful new morphological texture features,
in order to study them further. This issue is explained in Section 6.</p>
    </sec>
    <sec id="sec-2">
      <title>Segmentation</title>
      <p>The ultimate goal of segmentation is the separation of the leaf from its
background. If no a priori knowledge is available about the image, the
segmentation problem becomes equivalent to unconstrained color image segmentation,
where the various leaves can be located within an equally immense variety of
backgrounds, such as those shown in Fig. 2. One might argue at this stage
that the background can provide information facilitating the recognition of
the leaf/plant in the foreground. For instance, if the background consists of
forest ground versus the sky, we can eliminate some alternatives as implausible.
However, this would additionally require the description and recognition of the
background, which, given its limitless diversity, increases the complexity
of an already challenging problem.</p>
      <p>
        The accurate separation of the background from the plant/leaf under
consideration is of crucial importance for the subsequent stage of feature
extraction, since a poor result would affect many of the features. Given the
ill-posed nature of segmentation, in addition to the lack of any useful a priori
knowledge, it becomes evident that some form of human intervention or feedback
is necessary for an accurate and reliable segmentation. However, we also have
to take into account practical considerations, since no user/expert would want to
spend a prolonged amount of time on this stage, especially when dealing with
voluminous amounts of data. That is why we have explored both automatic and
human-assisted segmentation strategies.
Although the ImageClef dataset is divided into three categories, namely scan,
pseudo-scan and photos, from a segmentation viewpoint scan and pseudo-scan
images are of similar quality; in other words, they both possess a mostly
noise-free, spectrally homogeneous background, occasionally containing some
amount of shadow in the pseudo-scans. Consequently, as far as scan and
pseudo-scan images are concerned, automatic segmentation has been trivially
resolved through Otsu's method [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], as it is both efficient and effective (Fig. 3).
      </p>
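Otsu's method selects the threshold that maximizes the between-class variance of the intensity histogram. A minimal numpy sketch of the idea (our own illustrative implementation, not the paper's code):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximizing between-class variance (Otsu, 1979).

    gray: 2-D array of integer intensities in [0, 255].
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                  # class-0 probability up to t
    mu = np.cumsum(prob * np.arange(256))    # cumulative mean intensity
    mu_t = mu[-1]
    # Between-class variance for every candidate threshold t.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return int(np.argmax(sigma_b))

# Segment a synthetic "dark leaf on a bright scan background" image.
img = np.full((64, 64), 230, dtype=np.uint8)   # bright background
img[16:48, 16:48] = 60                         # dark leaf region
t = otsu_threshold(img)
mask = img <= t                                # foreground = darker pixels
```

With two well-separated intensity modes, as in scans, any threshold between the modes separates the classes; the implementation returns the lowest such threshold.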
      <p>Photos, on the other hand, consist of unconstrained acquisitions of leaves, as
shown in Fig. 2, and thus their automatic segmentation is a far greater challenge.
To counter this problem, we adopted a combination of spectral and spatial
techniques.</p>
      <p>
        More precisely, the only assumption concerning the input has been that
the object of interest, i.e. the leaf, is located roughly at the center of the
image and possesses a single dominant color. The image was first simplified
by means of marginal color quasi-flat zones [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], a morphology-based image
partitioning method based on constrained connectivity, only recently extended
to color data [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], that creates flat zones based on both local and global spectral
variational criteria (Fig. 4a). Next, we computed its morphological color gradient
in the LSH color space [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], taking into account both chromatic and achromatic
variations (Fig. 4b), followed by the application of the watershed transform.
Hence we obtained a first partition with spectrally homogeneous regions and
spatially consistent borders, albeit with serious over-segmentation, which
was compensated for by merging basins below a certain area threshold (Fig. 4c).
      </p>
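The morphological (Beucher) gradient used as the watershed relief is the pixelwise difference between a dilation and an erosion: it is zero inside flat zones and high on region borders. A small grayscale sketch with scipy.ndimage (illustrative only; the paper computes a color gradient in the LSH space):

```python
import numpy as np
from scipy import ndimage as ndi

def morphological_gradient(gray, size=3):
    """Beucher gradient: grey dilation minus grey erosion.
    High values mark region borders, the 'peaks' of the watershed relief."""
    dil = ndi.grey_dilation(gray, size=(size, size))
    ero = ndi.grey_erosion(gray, size=(size, size))
    return dil - ero

img = np.zeros((32, 32), dtype=np.int32)
img[8:24, 8:24] = 100                      # a flat square "leaf"
grad = morphological_gradient(img)
# grad is zero inside the flat zones and positive along the border.
```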
      <p>
        At this point, our initial assumption about the object of interest's central
location was used, as we employed the central 2/3 area of the image in order
to determine its dominant color, which was obtained by means of histogram
clustering in the LSH color space. Assuming that the mean color of the most
significant cluster (i.e. the reference color) belongs to the leaf/plant, we then
switched to spectral techniques, so as to determine its watershed basins. Since
camera reflections can be problematic due to their low saturation, we computed
both the achromatic, i.e. grayscale, distance image from the reference gray
(Fig. 4d) and the angular hue distance image (Fig. 4f) from the reference hue
h_ref [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]:
For h, h_ref ∈ [0, 2π]: d∠(h, h_ref) = |h − h_ref| if |h − h_ref| &lt; π, and 2π − |h − h_ref| otherwise.  (1)
      </p>
      <p>
We then applied Otsu's method to both distance images, which provided us with
two masks (Figs. 4e &amp; 4g) representing spectrally interesting areas w.r.t. the
reference color. The intersection of the two masks was used as the final object
mask (Fig. 4h). As shown in Fig. 4i, spectral and spatial techniques indeed
complement each other well, while the use of both chromatic and achromatic
distances increases the method's robustness. However, the main difficulty is the
accurate determination of the reference or dominant color; this can easily corrupt
the entire process if computed incorrectly or if the leaf under consideration has
more than one dominant color.</p>
      <sec id="sec-2-1">
        <title>Human assisted segmentation</title>
        <p>As the automatic segmentation module relies strongly on its initial assumption, it
does not always work well, resulting in over-segmentation or incorrect segmentation. We therefore investigated
human-assisted segmentation for the semi-manual category of the competition.</p>
        <p>
          Ideally, given an input image, we would like the user/expert to spend
at most a few tens of seconds providing some amount of high-level
knowledge. Assuming that the provided knowledge is valid, we chose to use the
marker-based watershed transform [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. Specifically, the marker-based watershed
transform is a powerful, robust and fast segmentation tool that, given a number
of seed areas or markers, produces watershed lines representing the skeleton of
their influence zones.
        </p>
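A toy sketch of the idea on a hypothetical 4-connected grid: labeled seeds are flooded over the relief in order of increasing altitude, so region borders settle on the gradient peaks. A production system would use an optimized implementation (e.g. skimage.segmentation.watershed) rather than this sketch:

```python
import heapq
import numpy as np

def marker_watershed(relief, markers):
    """Toy marker-based watershed: flood labeled seeds over a topographic
    relief, always expanding the lowest-altitude frontier pixel first.
    relief: 2-D float array (e.g. a morphological gradient).
    markers: 2-D int array, 0 = unlabeled, >0 = seed labels.
    Returns a label image partitioning the whole domain."""
    labels = markers.copy()
    heap, counter = [], 0
    for y, x in zip(*np.nonzero(markers)):
        heapq.heappush(heap, (relief[y, x], counter, y, x)); counter += 1
    h, w = relief.shape
    while heap:
        _, _, y, x = heapq.heappop(heap)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == 0:
                labels[ny, nx] = labels[y, x]
                heapq.heappush(heap, (relief[ny, nx], counter, ny, nx))
                counter += 1
    return labels

# Two markers on a relief with a ridge (gradient peak) down the middle.
relief = np.zeros((8, 8)); relief[:, 4] = 10.0   # border "peak"
markers = np.zeros((8, 8), dtype=int)
markers[4, 1] = 1   # foreground seed (the leaf)
markers[4, 6] = 2   # background seed
seg = marker_watershed(relief, markers)
```

The low-relief flat zones on each side of the ridge are claimed entirely by their own seed before either flood climbs the ridge.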
        <p>
          To accomplish this, we need a suitable topographic relief as input, where
object borders are denoted as peaks and flat zones as valleys; this is why we
employ the morphological color gradient of the input image [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ] (Fig. 5b). We
then require at least two markers provided by the user/expert, one denoting
the background and another representing the foreground, both of which could
easily be provided, for instance, through the touchscreen of a smartphone by
indicating once the leaf and once its background (Fig. 5c). Next, having
superposed the markers on the gradient, the marker-based watershed transform
provides the binary image partition (Fig. 5d). Although both efficient and
effective, this method depends utterly on the quality of the provided markers:
if they are too small, they can lead to partial leaf detection, and conversely,
if they are excessively large, the leaf will be confounded with its background.</p>
        <p>Fig. 4: (a) Color quasi-flat zones, (b) color gradient, (c) watershed transform and removal of small basins, (d) grayscale distance, (e) grayscale mask, (f) hue distance, (g) hue mask, (h) mask intersection, (i) mask superposition on the original.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Preprocessing</title>
      <p>After segmentation, we have a single leaf separated from its
background, regardless of the type of the original image (scan or photo). For
photos, this isolated leaf is well segmented in the case of
human-assisted segmentation, except possibly for some noise along the contour; in the case
of automatic segmentation, the leaf may be over- or under-segmented. Note that
in all photos, the segmented leaf may also be rotated in an arbitrary direction,
while most of the scanned leaves are oriented with their axes aligned with the
vertical, stem part down.</p>
      <p>Our preprocessing consists of orientation normalization for photos,
followed by height normalization (keeping the height/width ratio unchanged).
We have also experimented with stem location and orientation normalization for
scans and pseudo-scans; however, these steps are not yet mature, causing errors
in as many images as the ones they correct.</p>
      <p>Fig. 5: Stages of human-assisted photo segmentation on image #4835: (a) original image, (b) morphological gradient, (c) the markers, (d) result.</p>
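Height normalization with a preserved aspect ratio can be sketched as follows (nearest-neighbour resampling in plain numpy; the target height of 256 is our arbitrary choice, not a value from the paper):

```python
import numpy as np

def normalize_height(mask, target_h=256):
    """Resize a binary leaf mask to a fixed height, preserving the
    height/width ratio (nearest-neighbour sampling, numpy only)."""
    h, w = mask.shape
    target_w = max(1, round(w * target_h / h))
    ys = (np.arange(target_h) * h / target_h).astype(int)
    xs = (np.arange(target_w) * w / target_w).astype(int)
    return mask[np.ix_(ys, xs)]

# A 100x50 mask scaled to height 256 keeps its 2:1 aspect ratio.
leaf = np.ones((100, 50), dtype=np.uint8)
out = normalize_height(leaf, target_h=256)
```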
    </sec>
    <sec id="sec-4">
      <title>Features</title>
      <p>In a difficult problem such as this, it was clear that we needed to use all main
feature categories (shape, texture and color), even though some of them were
expected to be more useful than others. We kept most of the features that
were used in last year's competition, and we added some new ones that we
thought would be useful.</p>
      <p>
        Considering the levels of intra-class color variation, it became evident at
an early stage that our approach would have to be mainly texture- and shape-based,
even though the discriminatory potential of color was not ignored
completely. Moreover, we chose not to employ more complicated
descriptors, such as the scale-invariant feature transform (SIFT) or maximally
stable extremal regions (MSER), for two reasons. First, our past experience
with plant retrieval [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] has led to results indicating that they are not very suitable
for this content type, and second, time limitations hindered efforts
to test more sophisticated descriptors.
      </p>
      <p>
        In total, we evaluated about 20 different shape, texture and color
features. The evaluated features consist of region-based (e.g. regional moments)
and contour-based (e.g. Fourier descriptors, border covariance) shape features;
texture features (morphological, Gabor, orientation histograms); and color features (color
moments and color histograms of different lengths and quantizations in LSH
space). Here we summarize the new features, while the others are described in detail
in [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ].
1. Convex Perimeter Ratio returns the ratio of the perimeter of the convex hull
to the perimeter of the original binary mask.
2. Regional Moments of Inertia is computed on grayscale data. We first divide
the input into n horizontal slices, as in the area width factor, and then compute
independently for each slice the mean distance to the image centroid.
3. Angle Code Histogram is computed on binary data. We first compute the
object contour, which is then subsampled. We then calculate the
angle of every point triplet and return their normalized histogram as the feature
[
        <xref ref-type="bibr" rid="ref17">17</xref>
        ].
4. Contour Point Distribution Histogram is applied on the binarized internal
morphological gradient of the input image. Given n concentric disks centered
at the image centroid, we calculate for each disk the percentage of gradient
pixels in it [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
5. Orientation Histogram is computed on grayscale data. We first compute
the orientation map using an 11×11 mask to determine the dominant
orientation of each pixel. The feature vector consists of the normalized
histogram of dominant orientations.
6. Lobe Descriptor is calculated by over-segmenting the scanned images using
morphological tools and extracting the median values of convexity, elongation
and similar parameters of the segmented lobes.
      </p>
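As an illustration of feature 1 above, the convex perimeter ratio can be sketched with scipy's convex hull; the toy contours are hand-made polygons, not extracted leaf boundaries:

```python
import numpy as np
from scipy.spatial import ConvexHull

def perimeter(points):
    """Length of the closed polygon through `points` in order."""
    diffs = np.diff(np.vstack([points, points[:1]]), axis=0)
    return float(np.hypot(diffs[:, 0], diffs[:, 1]).sum())

def convex_perimeter_ratio(contour):
    """Feature 1: perimeter of the convex hull over the perimeter of the
    original contour; 1.0 for convex shapes, smaller for lobed leaves."""
    hull = ConvexHull(contour)
    hull_pts = contour[hull.vertices]   # hull vertices in boundary order
    return perimeter(hull_pts) / perimeter(contour)

# A square contour is convex, so the ratio is exactly 1.0.
square = np.array([[0, 0], [0, 10], [10, 10], [10, 0]], dtype=float)
# A notch lengthens the contour but leaves its convex hull unchanged.
notched = np.array([[0, 0], [0, 10], [5, 5], [10, 10], [10, 0]], dtype=float)
```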
      <p>Feature Selection. We evaluated individual features using the global classifier
described in Section 6.2, to observe their relative merits, but more importantly,
to see the errors made by the system and to add more features as a remedy. Some
of the above-mentioned features (e.g. the lobe descriptor) were added as a result of
this process. In this process, we also excluded the morphological texture
features that we used in ImageCLEF2011.</p>
      <p>The full set of our final features (117-dimensional), along with their
effectiveness on cross-validation and test data, can be found in the Results
section, in Table 1 and Table 2 for shape and texture features respectively. Color
features were evaluated, but we did not find them useful, so they do not appear
in the final feature list.</p>
    </sec>
    <sec id="sec-5">
      <title>Classi er Training</title>
      <sec id="sec-5-1">
        <title>Datasets</title>
        <p>
          The ImageClef2012 plant database contains 126 tree species, and the images for
each species are contributed by various people, some of the images being taken
from the same individual plant [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. A straightforward cross-validation done over
the whole training set thus results in splitting similar images of the same plant
into training and test sets. We believe this is the main reason for the
over-fitting that we experienced in last year's system. Consider that the images of an
individual plant, often captured with the same lighting and camera parameters,
are split into training and test sets. In this case, the color and texture descriptors
remain quite similar between the two sets, leading to over-fitting.
        </p>
        <p>This year, we took a different approach and divided the database into
training and testing sets ourselves, rather than using cross-validation, which may result in a
random split of very similar images. In short, we put all of the images of an
individual plant either in training or in testing. Specifically, for each species,
if the species' images contained two or more individual plants, one of them (the
one with the fewest pictures) was separated for testing while all others
were selected for training. In total, 5163 scan and scan-like images were used for
training, and 1526 scan and pseudo-scan images and all photos (1733 images) were
used for testing during our experiments.</p>
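The split described above can be sketched as follows, assuming hypothetical (species, individual_id, filename) records; the actual ImageCLEF meta-data fields differ:

```python
from collections import defaultdict

def split_by_individual(images):
    """Put all images of one individual plant entirely in train or test.
    images: list of (species, individual_id, filename) tuples.
    For each species with >= 2 individuals, the individual with the
    fewest images goes to test; everything else goes to train."""
    per_species = defaultdict(lambda: defaultdict(list))
    for species, indiv, name in images:
        per_species[species][indiv].append(name)
    train, test = [], []
    for species, indivs in per_species.items():
        if len(indivs) >= 2:
            smallest = min(indivs, key=lambda i: len(indivs[i]))
        else:
            smallest = None  # single-individual species: train only
        for indiv, names in indivs.items():
            (test if indiv == smallest else train).extend(names)
    return train, test

imgs = [("acer", "t1", "a.jpg"), ("acer", "t1", "b.jpg"),
        ("acer", "t2", "c.jpg"),
        ("quercus", "q1", "d.jpg")]
train, test = split_by_individual(imgs)
```

Because an individual plant never straddles the split, near-duplicate images cannot leak from training into testing.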
        <p>Note that this approach to dividing the training set has the disadvantage
that the few species having images from only a single individual plant did not have
any representative images in the test data. However, in terms of feature selection,
we thought that this should not have a significant effect, as features can be said
to be species-independent.</p>
        <p>As seen in Tables 1 and 2, cross-validation accuracies are almost always
higher than test set accuracies, especially so for texture features. This
supports the association between cross-validation and over-fitting
in this particular problem.</p>
      </sec>
      <sec id="sec-5-2">
        <title>Classi ers</title>
        <p>We trained three classifiers in total for the automatic run: one Support Vector
Machine (SVM) classifier for scans, pseudo-scans and manually segmented
photos, using all the features for isolated leaf recognition; one SVM classifier
for automatically segmented photos, using a subset of the features; and finally a
single local classifier. The details of these classifiers are explained below. Finally,
a score-level combination of the two classifiers is performed for each category of
images.</p>
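Score-level combination can be sketched as a weighted sum of normalized per-class score vectors; the equal weighting below is our assumption, as the paper does not specify its combination rule:

```python
import numpy as np

def combine_scores(global_scores, local_scores, w=0.5):
    """Score-level combination of two classifiers: a weighted sum of
    per-class score vectors (rows = images, columns = species).
    `w` is a hypothetical weight, not a value from the paper."""
    g = global_scores / global_scores.sum(axis=1, keepdims=True)
    l = local_scores / local_scores.sum(axis=1, keepdims=True)
    return w * g + (1 - w) * l

# One image, three candidate species: the classifiers disagree on the
# top choice, and the combination arbitrates between them.
g = np.array([[0.7, 0.2, 0.1]])
l = np.array([[0.3, 0.6, 0.1]])
combined = combine_scores(g, l)
pred = int(np.argmax(combined))
```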
        <p>
          This year, the ambiguity resolution step, in which we had used a third
classifier trained to distinguish between the top-5 choices [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], was skipped, as it
was observed to considerably extend the test stage without significant
improvement.
        </p>
        <p>Global Classifier. Using the selected features described in Section 5, we
trained two SVM classifiers, one for scans (scan and pseudo-scan categories)
and manually segmented photos, and another for automatically segmented
photos. When photos are manually segmented, the results are similar to those of
scanned images, hence we used the same classifier as for scans. For automatically
segmented photos, the noise on the contours is more apparent, so we discarded
the Fourier descriptors obtained via the Fast Fourier Transform (FFT), as they
may be affected by contour noise. For these classifiers, we used an SVM with
the radial basis function kernel, whose parameters were optimized using
cross-validation and grid search on the training data.</p>
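Parameter optimization by grid search with cross-validation can be sketched with scikit-learn; the grid values and the toy two-class data are our stand-ins, as the paper does not list its search ranges:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Toy stand-in for the 117-D feature vectors: two separable classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (40, 5)), rng.normal(4, 1, (40, 5))])
y = np.array([0] * 40 + [1] * 40)

# Grid search over C and gamma with cross-validation, as in the paper.
grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0]},
    cv=3,
)
grid.fit(X, y)
```

`grid.best_params_` then holds the (C, gamma) pair with the highest cross-validated accuracy, and `grid.best_estimator_` is refit on the full training data.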
        <p>The training samples for these two classifiers were obtained from scan and
pseudo-scan images only, so that the feature extraction was not affected by
segmentation noise.</p>
        <p>
          Local Classifier. Another classifier was built to compare local features obtained
from stable points on the boundary of the leaves. The local features used here
were shape context [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ], Histogram of Oriented Gradients (HOG) [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], and local
curvature. The stable points were obtained using an unsupervised clustering
of boundary points, based on location and feature similarity. All leaf images
from each plant type were processed to find the stable points of that plant. This
classifier was expanded from our work in sketch recognition [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ].
Throughout the experiments, we considered the scans and pseudo-scans as one
group and treated them with the same methodology. We evaluated the efficiency
of our features and the performance of the classifiers on this group, as shown
in Table 1 and Table 2. Here, the cross-validation results are obtained using
cross-validation on the training data, while the test results are obtained using the
separate test data that was split from the original training set. Color features
were evaluated, but we did not find them useful, so they do not appear in the
final feature list. As shown in these tables, all shape features together achieve 67.89%
accuracy, while texture features achieve 33.42%.
        </p>
        <p>Finally, using all the features (the 117-long feature vector), we obtained
85.9% accuracy in cross-validation tests and 66.1% accuracy over the test data,
while the result reaches almost 71% with classifier combination. Note that the
test accuracy with all features shows a slight decrease compared to using all
shape features, but there is a significant increase in cross-validation accuracy
when using all features. We believe that these issues will be less pronounced in
the future when training and test data sizes increase.</p>
        <p>Also note that our accuracies are measured as average accuracy over the
test images, while the official scoring function computes an average across users
(the people who collected the images being seen as users of such a plant
identification system). Hence, a significant amount of the difference between the test results reported
here and the official results may be due to this.</p>
        <p>Classifier | Feature Length | Cross Val Acc. (%) | Test Acc. (%)
Global classifier (SVM) | 117 | 85.94 | 66.12
Local classifier | N/A | N/A | 63.17
Classifier Comb. | N/A | N/A | 70.97</p>
        <p>Table 3: Classifier combination accuracies.</p>
        <p>
          Accuracies for automatically processed photos from the test set are given
in Table 4. The photos were divided into four categories by the organizers: i)
picked leaf, ii) leaf, iii) branch, iv) leafage, containing roughly a picked single leaf
on a variety of backgrounds, a leaf which may be hanging from a branch, and
plant foliage with or without branches [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. As expected, photos perform much
worse than the scan categories. Furthermore, we observe that the different photo
categories differ in difficulty, picked leaf images being the easiest, pointing
to the difficulty of segmentation.
        </p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>ImageCLEF2012 Results and Discussion</title>
      <p>This year, the organizers categorized automatic and manual (human-assisted)
runs separately, while this information was not clear in last year's results.
According to the official results given in Table ??, our runs achieved 1st place
in overall classification, in both the automatic and human-assisted categories. We
also obtained significantly better results on the scan category than the
next best system. Note that we give only a partial results table, including only
the best automatic systems in addition to our two systems.</p>
      <p>
        Although our results are satisfactory compared to those of other participants,
we believe there is still much room for improvement. Currently, plant
identification systems cannot be of practical use, except perhaps to narrow down the
alternatives. That is why we have established our future work plan in two
directions, both converging on the common goal of accuracy improvement.
For one, having resolved the issues of over-fitting, we now possess a sound feature
selection and optimization setup, which we intend to exploit in order to explore
the latest and especially morphology-related content descriptors [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        Moreover, we are well aware of our comparatively poor results in the photo
category, as well as of their strong ties to segmentation quality. Hence, we will focus
particularly on improving the performance of leaf isolation, in both the automatic
and human-assisted approaches. To this end, we plan to further incorporate the
latest results from our ongoing work on color quasi-flat zones [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ].
      </p>
      <p>Finally, we will consider ways to benefit from color and other new
descriptors.</p>
      <p>Run | Auto/Manual | Scan
Sabanci Okan-run2 | Manual | 0.58
Sabanci Okan-run1 | Automatic | 0.58
INRIA Imedia plantnet run1 | Automatic | 0.49
INRIA Imedia plantnet run2 | Automatic | 0.39
LSYS DYNI run 3 | Automatic | 0.41
ARTELAB run 1 | Automatic | 0.40</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>E.</given-names>
            <surname>Aptoula</surname>
          </string-name>
          .
          <article-title>Extending morphological covariance</article-title>
          .
          <source>Pattern Recognition</source>
          ,
          <volume>45</volume>
          (
          <issue>12</issue>
          ):
          <fpage>4524</fpage>
          –
          <lpage>4535</lpage>
          ,
          <year>December 2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>E.</given-names>
            <surname>Aptoula</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Lefevre</surname>
          </string-name>
          .
          <article-title>A basin morphology approach to colour image segmentation by region merging</article-title>
          .
          <source>In Proceedings of the Asian Conference in Computer Vision</source>
          , volume
          <volume>4843</volume>
          , pages
          <fpage>935</fpage>
          –
          <lpage>944</lpage>
          , Tokyo, Japan,
          <year>November 2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>E.</given-names>
            <surname>Aptoula</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Lefevre</surname>
          </string-name>
          .
          <article-title>On the morphological processing of hue</article-title>
          .
          <source>Image and Vision Computing</source>
          ,
          <volume>27</volume>
          (
          <issue>9</issue>
          ):
          <fpage>1394</fpage>
          –
          <lpage>1401</lpage>
          ,
          <year>August 2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>A. R.</given-names>
            <surname>Backes</surname>
          </string-name>
          and
          <string-name>
            <given-names>O. M.</given-names>
            <surname>Bruno</surname>
          </string-name>
          .
          <article-title>Shape classification using complex network and multi-scale fractal dimension</article-title>
          .
          <source>Pattern Recognition Letters</source>
          ,
          <volume>31</volume>
          (
          <issue>1</issue>
          ):
          <fpage>44</fpage>
          –
          <lpage>51</lpage>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>Serge</given-names>
            <surname>Belongie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Jitendra</given-names>
            <surname>Malik</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Jan</given-names>
            <surname>Puzicha</surname>
          </string-name>
          .
          <article-title>Shape matching and object recognition using shape contexts</article-title>
          .
          <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
          ,
          <volume>24</volume>
          :
          <fpage>509</fpage>
          –
          <lpage>522</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>S.</given-names>
            <surname>Beucher</surname>
          </string-name>
          and
          <string-name>
            <given-names>F.</given-names>
            <surname>Meyer</surname>
          </string-name>
          .
          <article-title>The morphological approach to segmentation: the watershed transformation</article-title>
          . In E. R. Dougherty, editor,
          <source>Mathematical Morphology in Image Processing</source>
          , pages
          <fpage>433</fpage>
          –
          <lpage>482</lpage>
          .
          <publisher-name>Dekker</publisher-name>
          , New York,
          <year>1993</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>O. M.</given-names>
            <surname>Bruno</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. O.</given-names>
            <surname>Plotze</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Falvo</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Castro</surname>
          </string-name>
          .
          <article-title>Fractal dimension applied to plant identification</article-title>
          .
          <source>Information Sciences</source>
          ,
          <volume>178</volume>
          (
          <issue>12</issue>
          ):
          <fpage>2722</fpage>
          –
          <lpage>2733</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>Navneet</given-names>
            <surname>Dalal</surname>
          </string-name>
          and
          <string-name>
            <given-names>Bill</given-names>
            <surname>Triggs</surname>
          </string-name>
          .
          <article-title>Histograms of oriented gradients for human detection</article-title>
          .
          <source>In CVPR</source>
          , pages
          <fpage>886</fpage>
          –
          <lpage>893</lpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9. H. Goeau,
          <string-name>
            <given-names>P.</given-names>
            <surname>Bonnet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Joly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Boujemaa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Barthelemy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Molino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Birnbaum</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Mouysset</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Picard</surname>
          </string-name>
          .
          <article-title>The CLEF 2011 plant image classification task</article-title>
          .
          <source>In CLEF 2011 working notes</source>
          , Amsterdam, The Netherlands,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10. H. Goeau,
          <string-name>
            <given-names>P.</given-names>
            <surname>Bonnet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Joly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Yahiaoui</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Barthelemy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Boujemaa</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Molino</surname>
          </string-name>
          .
          <article-title>The ImageCLEF 2012 plant identification task</article-title>
          .
          <source>In CLEF 2012 working notes</source>
          , Rome, Italy,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <given-names>H.</given-names>
            <surname>Kebapci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Yanikoglu</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.</given-names>
            <surname>Unal</surname>
          </string-name>
          .
          <article-title>Plant image retrieval using color, shape and texture features</article-title>
          .
          <source>The Computer Journal</source>
          ,
          <volume>53</volume>
          (
          <issue>1</issue>
          ):
          <fpage>1</fpage>
          –
          <lpage>16</lpage>
          , April
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <given-names>F.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Q.</given-names>
            <surname>Man</surname>
          </string-name>
          .
          <article-title>Multiple classification of plant leaves based on Gabor transform and LBP operator</article-title>
          .
          <source>In International Conference on Intelligent Computing</source>
          , pages
          <fpage>432</fpage>
          –
          <lpage>439</lpage>
          , Shanghai, China,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <given-names>N.</given-names>
            <surname>Otsu</surname>
          </string-name>
          .
          <article-title>A threshold selection method from gray-level histograms</article-title>
          .
          <source>IEEE Transactions on Systems, Man and Cybernetics</source>
          ,
          <volume>9</volume>
          (
          <issue>1</issue>
          ):
          <fpage>62</fpage>
          –
          <lpage>66</lpage>
          ,
          <year>1979</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <given-names>X.</given-names>
            <surname>Shu</surname>
          </string-name>
          and
          <string-name>
            <given-names>X.-J.</given-names>
            <surname>Wu</surname>
          </string-name>
          .
          <article-title>A novel contour descriptor for 2d shape matching and its application to image retrieval</article-title>
          .
          <source>Image and Vision Computing</source>
          ,
          <volume>29</volume>
          (
          <issue>4</issue>
          ):
          <fpage>286</fpage>
          –
          <lpage>294</lpage>
          ,
          March
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <given-names>P.</given-names>
            <surname>Soille</surname>
          </string-name>
          .
          <article-title>Constrained connectivity for hierarchical image partitioning and simplification</article-title>
          .
          <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
          ,
          <volume>30</volume>
          (
          <issue>7</issue>
          ):
          <fpage>1132</fpage>
          –
          <lpage>1145</lpage>
          ,
          <year>July 2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <given-names>Caglar</given-names>
            <surname>Tirkaz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Berrin</given-names>
            <surname>Yanikoglu</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T. Metin</given-names>
            <surname>Sezgin</surname>
          </string-name>
          .
          <article-title>Memory conscious sketched symbol recognition</article-title>
          .
          <source>In 21st International Conference on Pattern Recognition</source>
          , Tsukuba Science City, Japan,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <given-names>Z.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Chi</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Feng</surname>
          </string-name>
          .
          <article-title>Shape based leaf image retrieval</article-title>
          .
          <source>IEE Proceedings in Vision, Image and Signal Processing</source>
          ,
          <volume>150</volume>
          (
          <issue>1</issue>
          ):
          <fpage>34</fpage>
          –
          <lpage>43</lpage>
          ,
          <year>February 2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <given-names>J.</given-names>
            <surname>Weber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Aptoula</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Lefevre</surname>
          </string-name>
          .
          <article-title>Extension of quasi-flat zones to color images</article-title>
          .
          <source>Computer Vision and Image Understanding</source>
          ,
          <year>2012</year>
          . Under review.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>D. M. Woebbecke</surname>
            , G. E. Meyer,
            <given-names>K.</given-names>
          </string-name>
          <string-name>
            <surname>Von Bargen</surname>
            , and
            <given-names>D. A.</given-names>
          </string-name>
          <string-name>
            <surname>Mortensen</surname>
          </string-name>
          .
          <article-title>Plant species identification, size, and enumeration using machine vision techniques on near-binary images</article-title>
          .
          <source>In Optics in Agriculture and Forestry</source>
          , volume
          <volume>1836</volume>
          , pages
          <fpage>208</fpage>
          –
          <lpage>219</lpage>
          , Boston, USA,
          <year>1993</year>
          . SPIE.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <given-names>Itheri</given-names>
            <surname>Yahiaoui</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Nicolas</given-names>
            <surname>Herve</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Nozha</given-names>
            <surname>Boujemaa</surname>
          </string-name>
          .
          <article-title>Shape-based image retrieval in botanical collections</article-title>
          .
          <source>In PCM</source>
          , pages
          <fpage>357</fpage>
          –
          <lpage>364</lpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <given-names>Berrin</given-names>
            <surname>Yanikoglu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Erchan</given-names>
            <surname>Aptoula</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Caglar</given-names>
            <surname>Tirkaz</surname>
          </string-name>
          .
          <article-title>Sabanci-Okan system at ImageCLEF 2011: Plant identification task</article-title>
          .
          <source>In CLEF (Notebook Papers/Labs/Workshop)</source>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <given-names>S.</given-names>
            <surname>Yonekawa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Sakai</surname>
          </string-name>
          , and
          <string-name>
            <given-names>O.</given-names>
            <surname>Kitani</surname>
          </string-name>
          .
          <article-title>Identification of idealized leaf types using simple dimensionless shape factors by image analysis</article-title>
          .
          <source>Transactions of the ASAE</source>
          ,
          <volume>39</volume>
          :
          <fpage>1525</fpage>
          –
          <lpage>1533</lpage>
          ,
          <year>1996</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>