<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
<article-title>Shape Extraction Framework for Similarity Search in Image Databases</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
<string-name>Jan Klíma</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
<string-name>Tomáš Skopal</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
<institution>Charles University in Prague, FMP, Department of Software Engineering</institution>
        </aff>
      </contrib-group>
      <fpage>89</fpage>
      <lpage>102</lpage>
      <abstract>
<p>The task of similarity search in image databases has been studied for decades, and many feature extraction techniques have been proposed. Among the mass of low-level techniques dealing with color, texture, layout, etc., the extraction of shapes provides a better semantic description of the content of a raster image. However, even such a specific task as shape extraction is very complex, so the mere knowledge of particular raster-transformation and shape-extraction techniques does not tell us which methods should be preferred and how to combine them in order to achieve the desired effect in similarity search. In this paper we propose a framework consisting of low-level interconnectable components, which allows the user to easily configure the flow of transformations leading to shape extraction. Based on experiments, we also propose typical scenarios of transformation flow, with respect to the best shape-based description of the image content.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Similarity search in image databases [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] is becoming increasingly important due
to the rapidly growing volumes of available image data. At the same time,
text-based image retrieval systems are becoming useless, since the requirements on manual
annotation exceed human possibilities and resources. Metadata-based search
systems are of a similar kind; we need additional explicit information to
effectively describe multimedia objects (e.g. structured semantic descriptions, such as class
hierarchies or ontologies), which is not available in most cases. The only
practicable way to search the vast volumes of raw image data is content-based
similarity search, i.e. we consider the real content of each particular raster image
(e.g. a photograph), where the images are ranked according to their similarity to a
query image (the example). Only those images are retrieved which have been
ranked as sufficiently similar to the query image. The similarity measure returns
a real-valued similarity score for any two models of multimedia objects.
      </p>
      <p>
        Unlike other similarity search applications, the task of highly semantic
content-based search in image/video databases is extremely difficult. Because of the
generally unrestricted origin of a particular raster image, its visual content is
not structured and, on the other hand, hides rich semantics (as perceived by
humans). (The image search at images.google.com is a successful example of a metadata-based
engine, where the metadata are extracted from web pages referencing the images.)
The most general techniques providing extraction of distinguished features
from an image are based on processing of low-level characteristics, like color
histograms, texture correlograms, color moments, or color layout (possibly considered
under spatial segmentation). Unfortunately, the low-level features emphasize just
local/global relationships between pixels (their colors, respectively); hence, they
do not capture high-level (semantic) features. In turn, usage of the low-level
features in similarity search tasks leads to poor retrieval results, which is often
referred to as the "semantic gap" [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>In real-world applications, a design of high-level feature extraction is
restricted to domain-specific image databases. For instance, images of human
faces can be processed so that biometric features (like eyes, nose, chin, etc.) are
identified and properly represented. Although domain-specific image retrieval
systems reach high effectiveness in precision/recall, they cannot be used to
manage heterogeneous collections, e.g. images on the web.</p>
      <sec id="sec-1-1">
        <title>Shape Extraction</title>
        <p>
          The shapes (contours, bounded surfaces) in an image can be understood as
a medium-level feature, since a shape is an entity recognized also by human
perception (unlike low-level features). Moreover, shape is a practical feature for
query-by-sketch support (i.e. a query-by-example where the "example" consists
of user-drawn strokes), where extraction of colors or textures is meaningless.
        </p>
        <p>
          Shape Reconstruction. The most common technique is to vectorize the
contiguous lines or areas in the raster image. Prior to this, the image has to be
preprocessed on the raster basis (edge detection [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ], smoothing, morphologic
operations, skeletonization, etc.). The subsequent raster-to-vector
transformation step follows (e.g. a binary mask is vectorized into a set of (poly)lines).
        </p>
        <p>
          Naturally, we are not done at this moment; the hardest task is to filter and
combine the "tangle" of short lines (as typically produced) into a few (or even a
single) distinguished major shapes (polylines/polygons). This involves polyline
simplification [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ], removal of artifacts, line connection, etc. The most complex
but invaluable part of shape reconstruction should derive the prototypical shape
which is approximated by the vectorized information obtained so far.
        </p>
      </sec>
      <sec id="sec-1-2">
        <title>Shape Representation &amp; Similarity Measuring</title>
        <p>
          Once we have sufficiently simplified shapes found in an image, we have to represent them in order to
support measuring of similar shapes. The polygonal representation itself is not
very suitable for similarity measuring, because of its high sensitivity to translation,
scale, rotation, orientation, noise, distortion, skew, vertex spacing/offset, etc.
Instead, the raw shape is often transformed into a single vector or time series
[
          <xref ref-type="bibr" rid="ref5 ref7 ref9">5, 7, 9</xref>
          ], where the shape characteristics are preserved but the transformation
non-invariant characteristics are removed. The time series representations are
usually measured by the Euclidean distance, Dynamic Time Warping [
          <xref ref-type="bibr" rid="ref4 ref7">4, 7</xref>
          ], or the Longest
common subsequence [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ].
        </p>
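<p>As an illustration of how such time series are compared, the following is a minimal Dynamic Time Warping sketch in plain Python; the function name and style are ours, not the framework's code:</p>

```python
def dtw(a, b):
    """Dynamic Time Warping distance between two 1-D sequences.

    d[i][j] holds the DTW distance between the prefixes a[:i] and b[:j];
    the warping allows one sequence to 'stretch' against the other.
    """
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # a stretches
                                 d[i][j - 1],      # b stretches
                                 d[i - 1][j - 1])  # one-to-one match
    return d[n][m]
```

Unlike the Euclidean distance, DTW tolerates sequences of different lengths and local shifts, e.g. `dtw([1, 2, 3], [1, 1, 2, 3])` is zero.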
      </sec>
      <sec id="sec-1-3">
        <title>Motivation &amp; Paper Contributions</title>
        <p>As overviewed above, the process of shape extraction (starting from the raster
image and producing a vector or time series) is a very complex task, for which there
exist no general recommendations about particular transformation/extraction
steps. In this paper, we propose a component-based framework that allows
encapsulating and connecting various image/vector-processing algorithms. Hence, a
network consisting of many components can be easily created, through which a
particular shape extraction scenario is configured. Following this framework, we
have also implemented a catalogue of basic components which, when properly
connected, provide an effective environment for shape extraction
experimentation. Finally, we present several domain scenarios for shape extraction based
on experimental results.</p>
      </sec>
      <sec id="sec-1-4">
        <title>Related Work</title>
        <p>Although we are aware of many existing image processing libraries, most of them
lack support for end-user dataflow configuration or vector processing.</p>
        <p>
          "Filters" [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] is a library of image, video and vector processing functions, based
on the idea of configurable filters that perform various tasks. Dataflow configuration
is achieved by hardcoding or via Python scripts.
        </p>
        <p>
          "JaGrLib" [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ] is a 2D/3D graphics library primarily aimed at educational
purposes. JaGrLib offers both XML and GUI oriented dataflow configuration,
but currently has limited shape extraction capabilities.
        </p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>IVP Framework</title>
      <p>
        The idea of Image and Vector Processing Framework [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] is to separate objects
that usually figure in image processing (color bitmaps, grayscale bitmaps, binary
bitmaps, gradient maps, vectors, polylines, etc.) and algorithms which work with
these objects on an input → output basis. Each algorithm can be considered as a
black box that expects certain input data and produces a defined kind of output
data. With this point of view the whole shape extraction application reduces to
a network of algorithms that send data to each other. An example of the idea is
depicted in Figure 1.
      </p>
      <p>This gives a view into the IVPF design: it is advantageous to code and store
algorithms separately, and the role of the client application is to allow the user to specify
which algorithms should be used, as well as their output → input dependencies. In
the final effect, many specialized applications can be implemented (configured,
respectively) at a high application level, just by specifying the algorithms, their
settings and mutual dependencies.</p>
      <sec id="sec-2-1">
        <title>Interface &amp; Components</title>
        <p>Each particular component class encapsulating an algorithm (as mentioned
above) must implement the IIVPFComponent interface in such a fashion that all
public component features are accessible in a simple and transparent way (the
framework has been implemented in .NET Framework 2.0). In
particular, using the IIVPFComponent interface the components are interconnected
(port registration) and checked for compatibility.</p>
        <p>Higher Level Functionality. At a higher application level, the components
are just configured and their ports connected together to create a network. This
can be done either via a GUI or by loading the configuration/connections from an
XML file. During the whole network execution, all components are run starting
from pure output components (no input ports) and propagating the intermediate
data through the entire network to pure input components (no output ports).
The whole approach enables changing the behavior of the network in two ways:
– by changing component(s) configuration
– by changing the structure of the entire component network</p>
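<p>The execution order described above (from pure sources toward pure sinks) can be sketched in a few lines; this is a hypothetical Python stand-in for the IIVPFComponent idea, not the framework's .NET code:</p>

```python
class Component:
    """Minimal stand-in for a component node: a name, a function, and
    the components whose outputs feed its inputs (hypothetical API)."""
    def __init__(self, name, fn, inputs=()):
        self.name, self.fn, self.inputs = name, fn, list(inputs)

    def run(self, results):
        results[self.name] = self.fn(*(results[c.name] for c in self.inputs))

def execute(components):
    """Run every component whose inputs are ready, until all have run
    (i.e. propagate data from pure sources to pure sinks)."""
    results, done = {}, set()
    while len(done) < len(components):
        for c in components:
            if c.name not in done and all(i.name in done for i in c.inputs):
                c.run(results)
                done.add(c.name)
    return results

# Toy network: load a 2x2 image -> pass through -> threshold at 128.
load = Component("load", lambda: [[10, 200], [30, 250]])
gray = Component("gray", lambda img: img, inputs=[load])
binary = Component("bin",
                   lambda img: [[1 if p > 128 else 0 for p in row] for row in img],
                   inputs=[gray])
out = execute([binary, gray, load])  # order of the list does not matter
```

Reconfiguring the network then means either changing a component's parameters or rewiring the `inputs` lists, mirroring the two ways listed above.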
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Component Catalogue</title>
      <p>In this section we present several components already implemented in the IVP
framework. In order to ease understanding, we use the following formal
declaration of a component:</p>
      <p>type of input → component name (parameters) → type of output
where as the type of input/output we distinguish either bitmap (any 2D bitmap of
color, intensity, gradient, etc.) or vectors (collection of polygons/polylines). The
single arrow '→' means there is just a single type of connection to input/output
supported, while the double arrow '⇒' supports several types of input/output
(naturally, an output port of one component can be connected to input ports of
multiple components, providing we obey port compatibility).
The parameters declared in parentheses are component-specific, thus they have
to be configured by the user.</p>
      <sec id="sec-3-1">
        <title>Image Processing Components</title>
        <p>The first class provides components serving as basic processing tools, which
eliminate resolution dependencies, filter the noise, and separate pixels within
a specified range of colors.</p>
      </sec>
      <sec id="sec-3-2">
        <title>Image Loader Component</title>
        <p>ImageFromFile (FileName) ⇒ bitmap (of colors) + bitmap (of intensities)
To process an image, it must first be loaded from a file. During this phase a
grayscale image representation is computed and offered simultaneously.</p>
      </sec>
      <sec id="sec-3-3">
        <title>Image Resample Component</title>
        <p>bitmap (colors) → ImageResample (ResamplingType, DesiredSize) ⇒ bitmap
(colors) + bitmap (intensities)
The input image might be either too small or very large for further
processing. With this component it is easy to resize it suitably using Nearest
Point, Bilinear or Biquadratic resampling.</p>
      </sec>
      <sec id="sec-3-4">
        <title>Thresholding Component</title>
        <p>bitmap (colors) → ImageThresholding (RGBUpper, RGBLower) → bitmap (binary)
There exist special types of images, like maps or drawings, where it makes little
sense to do full edge detection. Instead, certain parts of interest can be extracted
by simple interval thresholding.</p>
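<p>Interval thresholding amounts to a per-channel range test; a minimal sketch (our own plain-Python illustration, with the image as nested lists of RGB tuples):</p>

```python
def threshold_rgb(img, lower, upper):
    """Binary mask: 1 where each RGB channel lies inside [lower, upper]."""
    lo_r, lo_g, lo_b = lower
    hi_r, hi_g, hi_b = upper
    return [[1 if (lo_r <= r <= hi_r and
                   lo_g <= g <= hi_g and
                   lo_b <= b <= hi_b) else 0
             for (r, g, b) in row]
            for row in img]
```

For a map or cartoon, the interval is simply chosen around the stroke or area color of interest.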
      </sec>
      <sec id="sec-3-5">
        <title>Gaussian Smoothing Component</title>
        <p>bitmap(intensity)→GaussianIntensityFilter(WindowSize,Sigma)→bitmap(intensity)
In noisy images where one would expect problematic edge detection, smoothing
step is required. Gaussian smoothing is a well-established method and the
implementation allows to configure both the Sigma parameter of the Gaussian
function and the window size.
3 Naturally, an output port of one component can be connected to input ports of
multiple components (providing we obey port compatibility).
3.2</p>
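<p>The two parameters map directly onto the construction of the convolution kernel; a sketch of a separable Gaussian filter (our own illustration, applying a 1-D kernel along rows; a full 2-D smoothing applies it along rows and then columns):</p>

```python
import math

def gaussian_kernel(window_size, sigma):
    """Normalized 1-D Gaussian kernel of the given (odd) window size."""
    half = window_size // 2
    k = [math.exp(-(x * x) / (2.0 * sigma * sigma)) for x in range(-half, half + 1)]
    s = sum(k)
    return [v / s for v in k]  # weights sum to 1, so intensity is preserved

def smooth_rows(img, kernel):
    """Convolve each image row with the kernel (edge pixels clamped)."""
    half = len(kernel) // 2
    out = []
    for row in img:
        n = len(row)
        out.append([sum(kernel[j + half] * row[min(max(i + j, 0), n - 1)]
                        for j in range(-half, half + 1))
                    for i in range(n)])
    return out
```

A larger Sigma (relative to the window size) gives stronger smoothing at the cost of blurring weak edges.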
      </sec>
      <sec id="sec-3-6">
        <title>Edge Detection Components</title>
        <p>
          For edge detection, the Canny operator [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] was chosen as the main approach, as it is
acceptably stable and configurable. The edge detection is performed in multiple
stages, starting on an intensity (grayscale) bitmap, which usually involve:
1. Source image smoothing (typically by Gaussian convolution)
2. Gradient approximation using a first derivative operator
3. Non-maximum suppression
4. Hysteresis thresholding to identify edge pixels
        </p>
        <p>The Canny operator is formed by a chain of components (each described
below): GaussianIntensityFilter → GradientOperator → NonMaximaSuppression
→ HysteresisThresholding. For an example of the dataflow see Figure 2.</p>
      </sec>
      <sec id="sec-3-7">
        <title>Gradient Operator Component</title>
        <p>bitmap (intensity) → GradientOperator (OperatorType) → bitmap (gradients)
Uses simple first derivative operators (Sobel, Prewitt, Roberts-Cross) to obtain
local gradient magnitudes and directions (in 45 degree steps) for each pixel.</p>
      </sec>
      <sec id="sec-3-8">
        <title>Non Maximum Suppression Component</title>
        <p>bitmap (gradients) → NonMaximaSuppression → bitmap (gradients)
The gradient map obtained by a first derivative operator often contains thick regions
with high gradient magnitude, but to extract edges one would like areas with high
gradient magnitude to be as thin as possible. Non-maximum suppression
achieves this by ignoring pixels where the gradient magnitude is not maximal in
the gradient direction.</p>
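<p>The rule can be sketched as follows (our own illustration; the gradient direction is assumed here to be pre-quantized into one of the four 45-degree step offsets, matching the GradientOperator description above):</p>

```python
def non_maximum_suppression(mag, direction):
    """Keep a pixel's magnitude only if it is a local maximum along its
    gradient direction; 'direction' holds a per-pixel (dy, dx) offset,
    one of (0,1), (1,0), (1,1), (1,-1) for the four 45-degree steps."""
    h, w = len(mag), len(mag[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dy, dx = direction[y][x]
            # Neighbours along the gradient direction (0 outside the image).
            before = mag[y - dy][x - dx] if 0 <= y - dy < h and 0 <= x - dx < w else 0.0
            after = mag[y + dy][x + dx] if 0 <= y + dy < h and 0 <= x + dx < w else 0.0
            if mag[y][x] >= before and mag[y][x] >= after:
                out[y][x] = mag[y][x]
    return out
```

The thick high-magnitude ridge is thereby reduced to its one-pixel-wide crest, which is what the subsequent hysteresis stage expects.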
      </sec>
      <sec id="sec-3-9">
        <title>Hysteresis Thresholding Component</title>
        <p>bitmap (gradients) → HysteresisThresholding (Lower, Upper) → bitmap (binary)
Based on two thresholds, the gradient map is traced and edge pixels are extracted.
Each pixel with gradient magnitude greater than the Upper threshold is marked
as an edge immediately. Remaining pixels are marked as edges only if their
gradient magnitude is greater than the Lower threshold and they are connected
to some existing edge pixel chain.</p>
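<p>This two-threshold tracing is a flood fill seeded at the strong pixels; a minimal sketch (our own plain-Python illustration, using 8-connectivity):</p>

```python
def hysteresis_threshold(mag, lower, upper):
    """Pixels above 'upper' are seed edges; pixels above 'lower' become
    edges only if 8-connected to an already accepted edge pixel."""
    h, w = len(mag), len(mag[0])
    edge = [[False] * w for _ in range(h)]
    stack = [(y, x) for y in range(h) for x in range(w) if mag[y][x] > upper]
    for y, x in stack:
        edge[y][x] = True
    while stack:  # grow edge chains into the weak (> lower) pixels
        y, x = stack.pop()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (0 <= ny < h and 0 <= nx < w and not edge[ny][nx]
                        and mag[ny][nx] > lower):
                    edge[ny][nx] = True
                    stack.append((ny, nx))
    return edge
```

Weak responses that are not attached to any strong chain (typically noise) are thus discarded, while genuine edges with locally weak segments stay connected.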
      </sec>
      <sec id="sec-3-10">
        <title>Binary Image Processing Components</title>
        <p>The following components process binary bitmap inputs (e.g. obtained from the
edge detection). These components can be used for simplification of contours.</p>
      </sec>
      <sec id="sec-3-11">
        <title>Erosion And Dilatation Component</title>
        <p>bitmap(binary)→ErosionDilatation (OperationType, MaskType)→bitmap(binary)
Refinement of binary images is often needed (to close gaps, round curved
objects, etc.). The operators of erosion, dilatation, opening and closing (combined
with a properly chosen mask) are usually a good choice to handle some input
image defects.</p>
      </sec>
      <sec id="sec-3-12">
        <title>Thinning Component</title>
        <p>
          bitmap (binary) → Thinning (ThinningType) → bitmap (binary)
Thinning components that implement Stendiford [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ] and Zhang-Suen [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ]
thinning algorithms are handy when it comes to polish results given by edge
detection, or when there is a need to turn thick objects into one pixel thin lines.
A staircase removal to renfie lines contained within the binary image also fits
in this category of algorithms. An example of the thinning process is given in
Figure 3 (the rfist two images).
        </p>
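<p>For concreteness, a compact sketch of the Zhang-Suen algorithm cited above (our own plain-Python rendering; the image is assumed to have a background border so the 3x3 neighbourhood is always defined):</p>

```python
def zhang_suen_thin(img):
    """Zhang-Suen thinning of a binary image (1 = foreground)."""
    img = [row[:] for row in img]
    h, w = len(img), len(img[0])

    def neighbours(y, x):
        # P2..P9, clockwise starting from the pixel above.
        return [img[y-1][x], img[y-1][x+1], img[y][x+1], img[y+1][x+1],
                img[y+1][x], img[y+1][x-1], img[y][x-1], img[y-1][x-1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):  # the two alternating sub-passes
            to_clear = []
            for y in range(1, h - 1):
                for x in range(1, w - 1):
                    if img[y][x] != 1:
                        continue
                    p = neighbours(y, x)
                    b = sum(p)  # number of foreground neighbours
                    # number of 0 -> 1 transitions around the pixel
                    a = sum(p[i] == 0 and p[(i + 1) % 8] == 1 for i in range(8))
                    if step == 0:
                        cond = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:
                        cond = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if 2 <= b <= 6 and a == 1 and cond:
                        to_clear.append((y, x))
            for y, x in to_clear:  # clear all at once, then repeat
                img[y][x] = 0
            changed = changed or bool(to_clear)
    return img
```

The transition count `a == 1` is what preserves connectivity: a pixel whose removal would split its neighbourhood into separate chains is never deleted.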
      </sec>
      <sec id="sec-3-13">
        <title>Contouring Component</title>
        <p>
          bitmap (binary) → Contouring → bitmap (binary)
In some cases (when the binary image contains thick objects), an information about
contour is more valuable than the object’s thinned form. Implemented method is based
on approach found in [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ] although it uses complete image matrix instead of run length
encoding as the binary image representation.
        </p>
        <sec id="sec-3-13-1">
          <title>Vectorization Component</title>
          <p>
            bitmap (binary) → Vectorization → vectors
The vectorization component is responsible for turning a binary image with marked edge
pixels into a set of vectors (polylines). It is based on an approach mentioned in [
            <xref ref-type="bibr" rid="ref3">3</xref>
            ] and [
            <xref ref-type="bibr" rid="ref6">6</xref>
            ].
          </p>
          <p>First, the input binary mask is scanned by shifting a 3x3 window over every
pixel in order to mark critical points. These are either endpoints (pixels with only one
neighbor within the 3x3 window) or intersections (pixels with more than 2 neighbors).
In the second phase, all polylines between critical points are traced. Another pass through
the image is then needed to identify closed polylines that were left untouched by the
previous step. The resulting pixel chains are turned immediately into polylines along
with junction and connectivity information (i.e. with the topology).</p>
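<p>The first phase above (marking critical points with a 3x3 window) can be sketched directly; a plain-Python illustration of our own, assuming a thin (one-pixel-wide) binary image:</p>

```python
def critical_points(img):
    """Classify foreground pixels of a thin binary image: endpoints have
    exactly one 8-neighbour, intersections (junctions) have more than two."""
    h, w = len(img), len(img[0])
    endpoints, junctions = [], []
    for y in range(h):
        for x in range(w):
            if not img[y][x]:
                continue
            # Count foreground neighbours inside the 3x3 window.
            n = sum(img[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy or dx) and 0 <= y + dy < h and 0 <= x + dx < w)
            if n == 1:
                endpoints.append((y, x))
            elif n > 2:
                junctions.append((y, x))
    return endpoints, junctions
```

Pixels with exactly two neighbours are ordinary line interior points; tracing then walks from each critical point until the next one is reached.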
        </sec>
      </sec>
      <sec id="sec-3-14">
        <title>Vector Processing Components</title>
        <p>Once we get a vectorized form of a shape, we move into the waters of geometry and graph
algorithms. Although we got rid of the raster information, we now face a tangle of
(poly)lines to be meaningfully managed.</p>
        <sec id="sec-3-14-1">
          <title>Polyline Simplification Component</title>
          <p>
            vectors → PolylineSimplification(Type, Error) → vectors
The polylines obtained from vectorization usually carry much more information than
required. It also happens that there are undesired irregularities in polylines caused
by straightforward vectorization. Hence, a polyline simplification is needed. Multiple
approaches are implemented, including those of Douglas-Peucker [
            <xref ref-type="bibr" rid="ref11">11</xref>
            ] and Rosin-West
[
            <xref ref-type="bibr" rid="ref13">13</xref>
            ].
          </p>
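<p>To make the Douglas-Peucker approach cited above concrete, a recursive sketch in plain Python (our own illustration, not the component's code):</p>

```python
def point_segment_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    return abs(dy * (px - ax) - dx * (py - ay)) / (dx * dx + dy * dy) ** 0.5

def douglas_peucker(points, eps):
    """Simplify a polyline: if the farthest interior point deviates more
    than eps from the end-to-end chord, keep it and recurse on both halves;
    otherwise collapse the whole run to its two endpoints."""
    if len(points) < 3:
        return list(points)
    dists = [point_segment_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= eps:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:i + 1], eps)
    right = douglas_peucker(points[i:], eps)
    return left[:-1] + right  # drop the duplicated split point
```

The single Error parameter of the component corresponds to `eps`: small values preserve detail, larger values aggressively straighten the vectorization's jagged output.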
        </sec>
        <sec id="sec-3-14-2">
          <title>Artifact Removal Component</title>
          <p>vectors → ShortVectorRemoval (ArtifactsType, ArtifactCharacteristics) → vectors
Aside from polyline simplification, the resulting vector image contains artifacts at a
higher logical level. For a list of typical artifacts see Figure 4. The artifact removal
component handles these artifacts based on its configuration and produces a result with
reduced noise and longer (more meaningful) vectors, as shown in Figure 4.</p>
          <p>d
One of the goals of vector processing is to find a given number of the most significant
vectors to represent the original image. With this component an experiment was made
to find if a metric based on vector length is a good criterion for selection the most
significant vectors.</p>
          <p>The algorithm (our original design) works as follows: an upper bound is selected
by the user that represents the maximum number of vectors to obtain. First, all vectors
are sorted with respect to their lengths. The algorithm works in iterations and converges
toward the desired number of vectors. In order to guarantee convergence, half of the
vectors are expected to be thrown away in each iteration (the number of vectors to
throw away can be easily configured as well, giving slower or faster convergence).</p>
          <p>Which vectors should be thrown away is decided upon their lengths (short vectors
are considered first) and upon knowledge similar to that used in the Artifact Removal
Component (most probable artifacts are thrown away first). A typical output is
depicted in Figure 5.</p>
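<p>A simplified sketch of this iterative pruning, using vector length alone as the significance criterion (our own illustration; the artifact-based heuristics of the real component are omitted):</p>

```python
def prune_vectors(polylines, max_count, keep_ratio=0.5):
    """Iteratively discard the shortest polylines until at most max_count
    remain; keep_ratio controls how many survive each iteration (the
    configurable convergence speed from the component description)."""
    def length(poly):
        return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
                   for (x1, y1), (x2, y2) in zip(poly, poly[1:]))

    current = sorted(polylines, key=length, reverse=True)
    while len(current) > max_count:
        keep = max(max_count, int(len(current) * keep_ratio))
        current = current[:keep]  # shortest vectors are dropped first
    return current
```

Sorting once up front makes each iteration a simple truncation of the tail of the list.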
        </sec>
        <sec id="sec-3-14-3">
          <title>Polyline Connection Component</title>
          <p>vectors→PolylineConnection(Scenarios,ScenariosCharacteristics)→vectors
It happens (especially in edge detection) that a significant polyline gets divided into
multiple parts as a result of inaccurate thresholding, noise or overlap.</p>
          <p>
            A three-phase algorithm is employed to join parts together into bigger entities. The
first phase operates on polylines that have their endpoints very close to each other
and can be joined together without additional heuristics. The second phase takes care of
larger gaps between endpoints with respect to multiple criteria (vector length, distance,
orientation, etc.). The third phase tries to handle disconnected corners. Figure 6 features
typical scenarios of these three stages. All phases are configurable and optionally offer
many important concepts like double-sided checks, identical angle orientation checks, etc.
Many of these concepts are mentioned in [
            <xref ref-type="bibr" rid="ref6">6</xref>
            ].
          </p>
        </sec>
      </sec>
      <sec id="sec-3-15">
        <title>Vector Output Components</title>
        <p>The last group of components provides an output to external storage (currently a file). Besides
a one-to-one export to a well-known format (like WMF, DXF), components in this
class also cover secondary feature extraction techniques needed for a particular task –
typically representations for subsequent similarity measuring/search.</p>
        <sec id="sec-3-15-1">
          <title>Vectors Output Component</title>
          <p>vectors → VectorToFile(FileName, Format)
This component saves its vector input into a specified format (DXF, textfile, etc.).</p>
        </sec>
        <sec id="sec-3-15-2">
          <title>Angles Extraction Component</title>
          <p>vectors → AnglesToFile(FileName, AngleType, NumberOfAngles)
A specialized feature component transforming polylines to (time) series. Each polyline
is normalized first by splitting it to a number of parts of the same length. A set of
angles is then saved to the output. The angles can be measured as the smaller angles
between each pair of successive line segments, as clockwise (or counter-clockwise) angles
between them, or as angles between each line segment and the X-axis.
4</p>
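<p>The normalization and the X-axis variant of the angle measurement can be sketched as follows (our own plain-Python illustration of the idea, not the component's code):</p>

```python
import math

def resample(poly, parts):
    """Split a polyline into 'parts' segments of equal arc length,
    returning the parts + 1 points that bound them."""
    seglens = [math.dist(a, b) for a, b in zip(poly, poly[1:])]
    total = sum(seglens)
    pts, step = [poly[0]], total / parts
    target, walked, i = step, 0.0, 0
    while len(pts) < parts + 1:
        if walked + seglens[i] < target - 1e-12:
            walked += seglens[i]  # the target lies beyond this segment
            i += 1
            continue
        t = (target - walked) / seglens[i]  # interpolate within segment i
        (x1, y1), (x2, y2) = poly[i], poly[i + 1]
        pts.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
        target += step
    return pts

def segment_angles_to_x_axis(poly, parts):
    """Angle of each normalized segment against the X-axis (radians)."""
    pts = resample(poly, parts)
    return [math.atan2(y2 - y1, x2 - x1)
            for (x1, y1), (x2, y2) in zip(pts, pts[1:])]
```

The equal-length resampling is what makes the resulting series comparable across polylines of different vertex spacing, which is exactly the invariance argued for in Section 1.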
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Domain Scenarios</title>
      <p>The components of the previously introduced catalogue have been designed in order
to easily create scenarios suitable for extraction of simple shapes. Hence, we are
primarily interested in simplified representation of shapes inside an image, rather than
in a comprehensive image-to-shape transformation preserving as much information as
possible.</p>
      <p>A simplified shape could serve as a descriptor involved in measuring the similarity of
two images (based on the shapes found inside). The requirement of a small-sized descriptor is
justified by the way the similarity measure handles the shape information. Since the similarity
measure is supposed to be evaluated many times on entities of a huge image database,
the similarity measuring should be as fast (yet precise) as possible. However, this goal
can be achieved by a smart shape extraction providing prototypical descriptors. An
image represented by a single polyline/polygon (or very few) limited to several tens
or hundreds of vertices (say up to 1 kilobyte) – this is our desired descriptor.</p>
      <p>On the other hand, we are aware of the difficulties in trying to establish such
a simple descriptor. Therefore, we have performed experiments on various images
(photographs, drawings, etc.) and tried to assemble several configurations of component
networks (called domain scenarios) which gave us the best results for a particular
image type (with respect to the desired descriptor properties). In the following sections
we present three such scenarios.</p>
      <sec id="sec-4-1">
        <title>Drawing</title>
        <p>As for drawings, the vectorization task is slightly simpler thanks to the fact that the
source image contains the desired information in an easily identifiable form
(monochromatic strokes describing shapes, colored areas in the case of cartoons, etc.). The extracted
layer or layers described by binary images can be further processed by means of
thinning (in the case of strokes) or contour extraction (in the case of thick areas). The result often
embodies only a low level of noise and can be used directly as is, or further simplified.
Figure 7 shows the scheme of component configuration and interconnection under the Drawing
scenario. Note that the two branches lead to two different types of shape extraction
(skeleton and contour). For an example of data flow regarding the Drawing scenario,
see Figure 8.</p>
        <p>Fig. 7. Scenario 1 – Drawing.</p>
      </sec>
      <sec id="sec-4-2">
        <title>Simple Object</title>
        <p>For high-contrast images containing unsophisticated shapes, edge detection alone
is a reliable way to extract the required feature information. When this is known,
artifact removal is a relatively safe operation without the risk of removing important
features. A reconnection of disconnected lines (which follows) then almost completely
reconstructs the full shape information. Figure 9 shows the scheme of component
configuration and interconnection under the Simple object scenario. Finally, polyline
simplification should be done to straighten jagged lines and minimize the produced
number of line segments. For an example of data flow, see Figure 10.</p>
      </sec>
      <sec id="sec-4-3">
        <title>Complex Scene</title>
        <p>In real-world images (photos), edge detection cannot guarantee clean shapes;
on the contrary, there are usually huge amounts of falsely detected or unwanted edges.
The iterative pruning is supposed to take care of most of the "trash" in the vector
output, and even then, maximum effort must be directed into connecting disrupted
polylines, corner detection and polygonal approximation. Figure 11 shows the scheme
of component configuration and interconnection under the Complex scene scenario.
For an example of data flow, see Figure 12.</p>
        <p>Fig. 11. Scenario 3 – Complex scene.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Conclusions &amp; Future Work</title>
      <p>In this paper we have presented a highly configurable framework for shape extraction
from raster images. Based on the framework, we have proposed a catalogue of
components, which have been designed to be easily configured into a network. Based on
experiments, we have recommended three domain scenarios for extraction of simple
shapes, in order to create useful descriptors for similarity search applications.</p>
      <p>In the future we would like to investigate similarity measures suitable for
shape-based similarity search. An extraction of simple prototypical shapes from images (as
proposed in this paper) is crucial for similarity measuring, so it is an unavoidable
step when trying to "bridge the semantic gap" in image retrieval. Furthermore, we
would like to automate the scenario recommendation process (where each component in a
scenario evaluates the goodness of what it produces), resulting in a kind of unsupervised
extraction technique.</p>
      <sec id="sec-5-1">
        <title>Acknowledgments</title>
        <p>This research has been partially supported by GAČR grant 201/05/P036, provided by
the Czech Science Foundation.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>1. Filters library (available at http://sourceforge.net/projects/filters/).</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>2. Image and vector processing framework (available at siret.ms.mff.cuni.cz).</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>S. V.</given-names>
            <surname>Ablameyko</surname>
          </string-name>
          .
          <article-title>Introduction to Interpretation of Graphic Images</article-title>
          . SPIE,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>T.</given-names>
            <surname>Adamek</surname>
          </string-name>
          and
          <string-name>
            <given-names>N. E.</given-names>
            <surname>O'Connor</surname>
          </string-name>
          .
          <article-title>Efficient contour-based shape representation and matching</article-title>
          .
          <source>In MIR '03: Proceedings of the 5th ACM SIGMM international workshop on Multimedia information retrieval</source>
          , pages
          <fpage>138</fpage>
          -
          <lpage>143</lpage>
          , New York, NY, USA,
          <year>2003</year>
          . ACM Press.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>T.</given-names>
            <surname>Adamek</surname>
          </string-name>
          and
          <string-name>
            <given-names>N. E. O'</given-names>
            <surname>Connor</surname>
          </string-name>
          .
          <article-title>A multiscale representation method for nonrigid shapes with a single closed contour</article-title>
          .
          <source>IEEE Trans. Circuits Syst. Video Techn.</source>
          ,
          <volume>14</volume>
          (
          <issue>5</issue>
          ):
          <fpage>742</fpage>
          -
          <lpage>753</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>P.</given-names>
            <surname>Altman</surname>
          </string-name>
          .
          <article-title>Digitalization of map</article-title>
          .
          <source>Master's thesis</source>
          , Charles University, Department of Software and Computer Science Education,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>I.</given-names>
            <surname>Bartolini</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Patella</surname>
          </string-name>
          .
          <article-title>Warp: Accurate retrieval of shapes using phase of Fourier descriptors and time warping distance</article-title>
          .
          <source>IEEE Trans. Pattern Anal. Mach. Intell.</source>
          ,
          <volume>27</volume>
          (
          <issue>1</issue>
          ):
          <fpage>142</fpage>
          -
          <lpage>147</lpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>J.</given-names>
            <surname>Canny</surname>
          </string-name>
          .
          <article-title>A computational approach to edge detection</article-title>
          .
          <source>IEEE Trans. Pattern Anal. Mach. Intell.</source>
          ,
          <volume>8</volume>
          (
          <issue>6</issue>
          ):
          <fpage>679</fpage>
          -
          <lpage>698</lpage>
          ,
          <year>1986</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>A.</given-names>
            <surname>Cardone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. K.</given-names>
            <surname>Gupta</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Karnik</surname>
          </string-name>
          .
          <article-title>A survey of shape similarity assessment algorithms for product design and manufacturing applications</article-title>
          .
          <source>Journal of Computing and Information Science in Engineering</source>
          ,
          <volume>3</volume>
          (
          <issue>2</issue>
          ):
          <fpage>109</fpage>
          -
          <lpage>118</lpage>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>V.</given-names>
            <surname>Castelli</surname>
          </string-name>
          and
          <string-name>
            <given-names>L. D.</given-names>
            <surname>Bergman</surname>
          </string-name>
          , editors.
          <source>Image Databases : Search and Retrieval of Digital Imagery</source>
          . Wiley-Inter.,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <given-names>D. H.</given-names>
            <surname>Douglas</surname>
          </string-name>
          and
          <string-name>
            <given-names>T. K.</given-names>
            <surname>Peucker</surname>
          </string-name>
          .
          <article-title>Algorithms for the reduction of the number of points required to represent a line or its caricature</article-title>
          .
          <source>Canadian Cartographer</source>
          ,
          <volume>10</volume>
          (
          <issue>2</issue>
          ):
          <fpage>112</fpage>
          -
          <lpage>122</lpage>
          ,
          <year>1973</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <given-names>J.</given-names>
            <surname>Pelikán</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Kostlivý</surname>
          </string-name>
          .
          <article-title>JaGrLib: Library for computer graphics education</article-title>
          .
          <source>In WSCG (Posters)</source>
          , pages
          <fpage>125</fpage>
          -
          <lpage>128</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <given-names>P. L.</given-names>
            <surname>Rosin</surname>
          </string-name>
          and
          <string-name>
            <given-names>G. A. W.</given-names>
            <surname>West</surname>
          </string-name>
          .
          <article-title>Segmentation of edges into lines and arcs</article-title>
          .
          <source>Image Vision Comput.</source>
          ,
          <volume>7</volume>
          (
          <issue>2</issue>
          ):
          <fpage>109</fpage>
          -
          <lpage>114</lpage>
          ,
          <year>1989</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <given-names>F.</given-names>
            <surname>Stentiford</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Mortimer</surname>
          </string-name>
          .
          <article-title>Some new heuristics for thinning binary handprinted characters for OCR</article-title>
          .
          <source>IEEE Transactions on Systems Man and Cybernetics</source>
          ,
          <volume>13</volume>
          (
          <issue>1</issue>
          ):
          <fpage>81</fpage>
          -
          <lpage>84</lpage>
          ,
          <year>1983</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <given-names>M.</given-names>
            <surname>Vlachos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hadjieleftheriou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Gunopulos</surname>
          </string-name>
          , and
          <string-name>
            <given-names>E.</given-names>
            <surname>Keogh</surname>
          </string-name>
          .
          <article-title>Indexing multidimensional time-series with support for multiple distance measures</article-title>
          .
          <source>In KDD '03: Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining</source>
          , pages
          <fpage>216</fpage>
          -
          <lpage>225</lpage>
          , New York, NY, USA,
          <year>2003</year>
          . ACM Press.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <given-names>T. Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          and
          <string-name>
            <given-names>C. Y.</given-names>
            <surname>Suen</surname>
          </string-name>
          .
          <article-title>A fast parallel algorithm for thinning digital patterns</article-title>
          .
          <source>Commun. ACM</source>
          ,
          <volume>27</volume>
          (
          <issue>3</issue>
          ):
          <fpage>236</fpage>
          -
          <lpage>239</lpage>
          ,
          <year>1984</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <given-names>D.</given-names>
            <surname>Ziou</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Tabbone</surname>
          </string-name>
          .
          <article-title>Edge detection techniques - an overview</article-title>
          .
          <source>International Journal of Pattern Recognition and Image Analysis</source>
          ,
          <volume>8</volume>
          :
          <fpage>537</fpage>
          -
          <lpage>559</lpage>
          ,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>