<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Character Segmentation in Collector's Seal Images: An Attempt on Retrieval Based on Ancient Character Typeface</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Kangying Li</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Biligsaikhan Batjargal</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Akira Maeda</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>College of Information Science and Engineering, Ritsumeikan University</institution>
          ,
          <country country="JP">Japan</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Graduate School of Information Science and Engineering, Ritsumeikan University</institution>
          ,
          <country country="JP">Japan</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Kinugasa Research Organization, Ritsumeikan University</institution>
          ,
          <country country="JP">Japan</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The collector's seal is a stamp indicating the ownership of a book. Collector's seals contain much important information and are essential elements of ancient materials. They also show possession, the relation to the book, the identity of the collectors, an expression of dignity, and so on. In many Asian countries, people usually use artistic ancient characters rather than modern characters to make their own seals. A system that automatically recognizes these characters can help enthusiasts and professionals understand the background information of these seals more efficiently. However, there is a lack of labeled training images, and many images are noisy and difficult to recognize. We propose a retrieval-based recognition system that focuses on single characters to assist seal retrieval and matching.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1 Background</title>
      <p>The use of the collector’s seals in Asian ancient books is a topic worthy of detailed discussion. The
collector’s seal has the functions of expressing the sense of belonging, demonstrating the inheritance,
showing the identity, representing the dignity, displaying aspiration and interest, identifying the version,
expressing speech, etc., in different forms. For individual collectors, in addition to their own names,
there are also words that can reveal their personal or family circumstances, including ancestral origin,
residence, family background, family status, rank and so on. For example, Natsume Soseki, a famous
Japanese writer, had many kinds of collector’s seals. The foreign books he collected during
his life were stamped with the various collector’s seals he used in different periods. We can gain broader
insight into his book-collecting habits by analyzing these collector’s seals. One of the most
important functions of the collector’s seals is to record where a book has been kept and the footsteps of
history of a book being handed over. For book collecting institutions, the migration process of the books
and their purchase histories can be also reflected in the contents of the collector’s seals. The names of
many book collecting institutions have been gradually changed with the historical changes. Through
the collector’s seals, we can understand the books themselves, their collectors, or more background
information of the collecting institutions. Through collector’s seals in libraries, we can find out the
collective experience and the source of inheritance of a book. In Asian countries where kanji characters
are used, ancient characters are usually used to make collector’s seals. Also, as time passes, the shape
of kanji may change over time, and multiple variants of a character might be created. There are also
characters derived from kanji, and characters that merely look like kanji. For
example, Vietnam’s “Chữ Nôm” is a character system that originated from the original shapes of kanji.
Therefore, it is hard for non-professionals to understand all the contents of a collector’s seal or a single
character in it. Meanwhile, once a single ancient character is recognized, we can utilize the data compiled by
scholars studying kanji characters to gain a broader understanding of ancient character
culture. Especially for individual collectors, exploring the information in every single character of their
names is an important way to gain a comprehensive understanding of their family affairs.</p>
      <p>We aim to construct a retrieval-based ancient character recognition system that can match the
user’s query character even if only one labeled typeface image is available, and that can add new character
types at any time without retraining a model. To this end, we propose a text extraction system for image
data of collector’s seals. This system is also intended to support future research on automatic text feature
generation for finding background information on Asian ancient books in external databases using
the text contents extracted in the character recognition task. In this research, we perform character
segmentation using Mean-shift clustering and retrieve single ancient characters from ancient character
typeface images by using the extracted features.</p>
    </sec>
    <sec id="sec-2">
      <title>2 Related Work</title>
      <p>
        Character segmentation for off-line recognition. Nguyen et al. (2016)[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] proposed a text-line
segmentation method for handwritten Japanese text: a morphological method with zone projection, followed by character
separation for each segmented text-line using vertical projection, the Stroke Width Transform (SWT), bridge
finding and Voronoi diagrams. This method handles the segmentation of Japanese
handwritten characters very well. However, seal characters usually have irregular positions or
distributions and large differences in character size, which may degrade the
segmentation results of that method. Zahan et al. (2018) [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] proposed a segmentation
process for printed Bangla script that performs well on characters
with topologically connected structures. However, the target images of that method are quite different from
ours. Hence, we propose a method that deals with irregular character
distributions.
      </p>
      <p>
        Seals image retrieval. Fujitsu R&amp;D Center (2016) [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] announced a seal retrieval technique for
Chinese ancient document images. They apply a color separation technique to split the seal from the
background image, together with a two-level hierarchical matching method based on global feature
matching. Unfortunately, we have not been able to find any literature that describes the details
of this technique. That system targets the whole content of a seal, and its retrieval scope depends on the
seal data already in the database. Because most seals are personal assets and usually consist of
meaningful, separate and independent characters, we propose a method that searches and matches
seal images by splitting them into character units.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3 Methodology</title>
      <p>We describe our approach in the following sections. Since we extract features only from typeface
images and store them in a database, we perform some pre-processing on user-provided seal images.
We describe this data pre-processing in Section 3.1. At the beginning of Section 3.2, we describe the
process of extracting essential features from images. These features include deep features and geometric
features; Section 3.2 also describes the deep features that we use. Section 3.3
describes the extraction of geometric features. Feature matching and ranking
calculation are introduced in Sections 3.4 and 3.5.</p>
      <p>3.1 Data pre-processing and character segmentation</p>
      <p>As shown in Figure 1, in classical materials, seals often overlap with handwritten words.</p>
      <p>
        Therefore, splitting the seal pattern from the image is an important task. We use k-means clustering
(Hartigan, et al.,1979) [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] to cluster image color information. As shown in Figure 2, we can project
the three RGB channels of an image into three-dimensional space.
Hence, the Euclidean distance is used to represent the color relationship between two pixels, which is
defined by (1).
      </p>
      <p>d = √((R1 − R2)² + (G1 − G2)² + (B1 − B2)²)
(1)</p>
      <p>where R1, G1, B1 and R2, G2, B2 are the RGB values of pixels 1 and 2 respectively, and we use the distance d
to cluster pixels with similar colors. Following the principle of the k-means algorithm, we regard this
task as extracting K groups of pixels with similar colors from an image. Our system automatically extracts
the areas with more red components. As shown in Figure 3, Pixel group 2 is extracted as the analysis
target of our system.</p>
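      <p>The color clustering step can be sketched as follows. This is an illustrative minimal implementation, not the authors' released code: the function names are ours, and selecting the cluster whose center is most red-dominant is one reading of "areas with more red components".</p>
      <preformat>
```python
import numpy as np

def kmeans_rgb(pixels, k=3, iters=20):
    """Minimal k-means over RGB pixel rows (illustrative sketch)."""
    # deterministic init: spread initial centers over the distinct colors
    uniq = np.unique(pixels, axis=0)
    centers = uniq[np.linspace(0, len(uniq) - 1, k).astype(int)].astype(float)
    for _ in range(iters):
        # Euclidean distance in RGB space, as in equation (1)
        d = np.linalg.norm(pixels[:, None, :].astype(float) - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

def extract_red_group(pixels, k=3):
    """Return a mask of the cluster whose center is the most red-dominant."""
    labels, centers = kmeans_rgb(pixels, k)
    redness = centers[:, 0] - centers[:, 1:].mean(axis=1)  # R minus mean(G, B)
    return labels == int(redness.argmax())
```
      </preformat>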
      <sec id="sec-3-2">
        <title>Character segmentation by Mean-shift clustering</title>
        <p>
          For the determination of single characters, we use Mean-shift clustering to segment each
character. Because kanji characters are independent and structurally balanced, we regard every character
as a module, and each module has a centroid. Using these centroids, we can apply clustering to
extract character regions. Since we already know the coordinates of each pixel in the seal areas, the
density information of each pixel can be obtained by kernel density estimation. We use Mean-shift
clustering (Comaniciu, et al.,1999) [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] to cluster the pixels of an image. The clustering results can be
optimized by adjusting the bandwidth parameter.
Figure 4 shows the results of clustering under different bandwidth settings.
        </p>
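        <p>As a rough sketch of this step, a flat-kernel Mean-shift can be written from scratch as below. This is illustrative only; the mode-merging threshold of half the bandwidth is our choice, not a detail from the paper.</p>
        <preformat>
```python
import numpy as np

def mean_shift(points, bandwidth, iters=30):
    """Flat-kernel Mean-shift: move each mode to the mean of the points
    within `bandwidth`, then merge modes that converge together."""
    points = points.astype(float)
    modes = points.copy()
    for _ in range(iters):
        for i in range(len(modes)):
            near = points[np.linalg.norm(points - modes[i], axis=1) < bandwidth]
            modes[i] = near.mean(axis=0)
    labels = np.zeros(len(points), dtype=int)
    centers = []
    for i, m in enumerate(modes):
        for j, c in enumerate(centers):
            if np.linalg.norm(m - c) < bandwidth / 2:
                labels[i] = j
                break
        else:
            centers.append(m)
            labels[i] = len(centers) - 1
    return labels, np.array(centers)
```
        </preformat>
        <p>A larger bandwidth pulls more pixels into one mode, which is why the number of clusters in Figure 4 falls as the bandwidth grows.</p>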
        <p>[Figure 4: clustering results at bandwidths 60, 80, 90 and 100]</p>
        <p>In each unit, the graph on the left is the clustering result for each pixel of the seal, where each color
represents a cluster; in the graph on the right, the X-axis is the bandwidth value and the
Y-axis is the number of clusters. The results we need can be selected at the point where the rate of
change of the total number of clusters becomes steady; for example, when the bandwidth equals 90, each
cluster in the result is treated as a candidate result of character segmentation. Hence, we
calculate a bandwidth interval and take bandwidth values equidistantly within it. By
using these bandwidth values, segmentation candidates are obtained. The algorithm flow is shown in
Algorithm 1.</p>
      </sec>
      <sec id="sec-3-3">
        <title>Algorithm 1: Image segmentation</title>
        <p>Input: coordinate set {(x1, y1), (x2, y2), …, (xn, yn)} of the non-background area obtained by
k-means clustering
Output: set of object location hypotheses U
1: Initialize the bandwidth interval [Bandwidth_min, Bandwidth_max] with Bandwidth_min &lt; Bandwidth_max; take values at a moderate distance within the interval to
get the bandwidth set {b1, b2, b3, …, bn} ⊂ [Bandwidth_min, Bandwidth_max]
2: Get the number of clusters {N_clusters_b1, N_clusters_b2, …, N_clusters_bn} produced by Mean-shift clustering for each bandwidth
3: Find the bandwidth at which N_clusters_bi reaches its minimum and take it as
Endbandwidth
4: Fit a polynomial Q(b) to {(b1, N_clusters_b1), (b2, N_clusters_b2), …, (bn, N_clusters_bn)} using a least-squares polynomial fit
5: Get the second derivative Q″(b) = d²N_clusters/db² of Q(b)
6: For each bi in {b1, b2, b3, …, bn}:
    if Q″(bi) = min{(Q″(bi+1) − Q″(bi)) / Q″(bi)} then Initbandwidth ← bi
    else continue
7: Obtain regions U = {u1, u2, u3, u4, …} using Mean-shift clustering with bandwidths
in [Initbandwidth, Endbandwidth]</p>
        <p>
          The algorithm implementation is available on GitHub (Li, K et al., 2019)[
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] for your reference.
        </p>
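        <p>Steps 3–6 of Algorithm 1 can be sketched as below. The polynomial degree and the exact form of the relative-change criterion on Q″ are our assumptions where the paper is ambiguous.</p>
        <preformat>
```python
import numpy as np

def pick_bandwidth_interval(bandwidths, cluster_counts, deg=3):
    """Choose [Initbandwidth, Endbandwidth] from a bandwidth sweep."""
    b = np.asarray(bandwidths, dtype=float)
    n = np.asarray(cluster_counts, dtype=float)
    end_bw = b[n.argmin()]                      # step 3: minimum cluster count
    q = np.polyfit(b, n, deg)                   # step 4: least-squares Q(b)
    q2 = np.polyval(np.polyder(q, 2), b)        # step 5: Q''(b) at each b
    rel = (q2[1:] - q2[:-1]) / q2[:-1]          # step 6: relative change of Q''
    init_bw = b[rel.argmin()]
    return init_bw, end_bw
```
        </preformat>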
        <sec id="sec-3-3-1">
          <title>Extracting CNN features from images</title>
          <p>
            The typeface images converted from the font file (Shirakawa Font project, 2016) [
            <xref ref-type="bibr" rid="ref7">7</xref>
            ] are shown in
Figure 5.
          </p>
          <p>[Figure 5: the characters ‘人’, ‘文’, ‘科’ and ‘学’ rendered in modern commonly used fonts and in the “Shirakawa font” typeface]</p>
          <p>First, we normalize the typeface image: we crop it according to the maximum
and minimum coordinates of its black pixels and then standardize it to a size of 64×64. As there
are many variations of ancient characters, even a slight change in structure will affect the extraction
of geometric features. We therefore use a pre-trained model to extract deep features (convolutional neural
network (CNN) features) of the fonts, trying to smooth out minor changes in character structure so
that characters of the same category become closer to each other in the feature space.</p>
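          <p>A minimal version of this normalization step might look as follows; nearest-neighbour resampling stands in for whatever interpolation the authors actually used.</p>
          <preformat>
```python
import numpy as np

def crop_and_normalize(img, size=64, threshold=128):
    """Crop a grayscale glyph to the bounding box of its black pixels and
    rescale it to size x size with nearest-neighbour sampling."""
    ys, xs = np.where(img < threshold)          # coordinates of black pixels
    crop = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    rows = np.arange(size) * crop.shape[0] // size
    cols = np.arange(size) * crop.shape[1] // size
    return crop[np.ix_(rows, cols)]
```
          </preformat>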
          <p>
            As shown in Figure 6, due to the lack of an ancient character dataset, we selected the CASIA Online and
Offline Chinese Handwriting Databases (Liu et al.,2011)[
            <xref ref-type="bibr" rid="ref8">8</xref>
            ], whose characters share some shape
features with ancient characters, as the training data for a VGG16 model (Simonyan
et al.,2014)[
            <xref ref-type="bibr" rid="ref9">9</xref>
            ]. A visualization of the feature maps in the max-pooling layer of the pre-trained model
is shown in Figure 7. We use kernel PCA (Mika et al.,1999)[
            <xref ref-type="bibr" rid="ref10">10</xref>
            ] to reduce the dimensionality of the output
from the intermediate layer.
          </p>
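          <p>The kernel PCA step can be sketched from scratch as below. An RBF kernel is assumed for illustration; the paper does not state which kernel or how many components were used.</p>
          <preformat>
```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=0.1):
    """Project rows of X onto the top kernel-PCA components (RBF kernel)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-gamma * sq)                     # RBF kernel matrix
    n = len(X)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one  # center in feature space
    vals, vecs = np.linalg.eigh(Kc)             # eigenvalues, ascending
    idx = np.argsort(vals)[::-1][:n_components]
    # training-point projections: sqrt(lambda_k) * v_k
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
```
          </preformat>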
          <p>3.3 Extracting geometric features from images</p>
          <p>To capture some details of the character’s structure, we attempted to extract the geometric features
of these characters.</p>
          <p>
            As shown in Figure 8, using the method proposed by Zhang’s research (Zhang et al.,1984)[
            <xref ref-type="bibr" rid="ref11">11</xref>
            ], a
character’s skeleton map is obtained. Unlike general thinning methods, Zhang’s method ignores
stroke width and obtains a noise-free skeleton map, so each stroke can be
represented by a unique, continuous single-pixel line. We then use the Harris corner detector (Harris et
al.,1988)[
            <xref ref-type="bibr" rid="ref12">12</xref>
            ] to obtain the coordinates of the intersections of the strokes. The coordinate points of the
skeleton map and of the stroke intersections are stored in the database as
representations of geometric features.
          </p>
          <p>3.4 Image matching using multiple features</p>
          <p>We use the following similarity calculation to match the typeface image and the query
image. The matching process is shown in Figure 9. The segmented user query image and the typeface
image go through the same feature extraction process, which yields the corresponding CNN
features and geometric features.</p>
          <p>
            The cosine similarity is used to compare the similarity of CNN features, and the Hausdorff distance
(Huttenlocher et al.,1992) [
            <xref ref-type="bibr" rid="ref13">13</xref>
            ] is used to compare the similarity of geometric features between two
images. The Hausdorff distances are converted to values in the range [0, 1] by using Min-Max normalization, defined by (2):
Sim_hausdorff_i = 1 − (Hausdorff_i − min{Hausdorff_1, …, Hausdorff_N}) / (max{Hausdorff_1, …, Hausdorff_N} − min{Hausdorff_1, …, Hausdorff_N})
(2)
where i = 1, ..., N, N is the number of images in the database, Sim_hausdorff_i is the similarity score
between image i and the query image, and Hausdorff_i is the Hausdorff distance between
image i and the query image. The multi-feature similarity
score is calculated by (3).
          </p>
          <p>Score_i = w_cnn · Sim_cnn_i + w_geo · Sim_geo_i
(3)
where Score_i is the total similarity score, Sim_cnn_i is the similarity of CNN features, Sim_geo_i is the sum of
the similarities of geometric features calculated using Sim_hausdorff_i, and w_cnn and w_geo are the weights of the CNN
feature and geometric feature similarity scores. Finally, we use the total similarity
score to predict the input image’s category.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4 Experiments and Results</title>
      <p>In this section, we describe our experiments in three parts. Section 4.1 introduces the
database used in the experiments. The results of image segmentation and image retrieval are
explained in Sections 4.2 and 4.3.</p>
      <sec id="sec-4-1">
        <title>Datasets</title>
        <p>
          We select the test data from Collectors’ Seal Database (National Institute of Japanese Literature,
available from 2011)[
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]. This database contains 39,686 images of collector’s seals, including
pictorial seals and seals carved in various styles of calligraphy. The main color of the seals is red.
Because the original data does not include coordinate information for single characters, in this
experiment we counted the characters with high frequency in this database and annotated their position
information to build a small-scale experimental dataset.
        </p>
      </sec>
      <sec id="sec-4-2">
        <title>Result of segmentation</title>
        <p>Different types of seals were used to test our proposed segmentation algorithm, and the results are
shown in Table 2. From the experimental results, it can be seen that our proposed method achieves good
results on images with irregular character distributions. Nevertheless, as Result 3 shows,
larger characters are segmented into other characters’ candidate areas, so more effort is still needed in
further research on dealing with images with varying character sizes and close character
spacing.</p>
        <p>[Table: segmentation results (Results 1–4) for four seal types: 1) a regular seal; 2) characters with irregular glyphs; 3) irregular character distribution; 4) a noisy background overlapping with handwritten words]</p>
      </sec>
      <sec id="sec-4-3">
        <title>Retrieval results</title>
        <p>Here we show the Mean Reciprocal Rank (MRR) results (4) for the ten characters with the highest frequency in
the printed text. For each character, we tested with twenty images. The MRR is defined as
MRR = (1/Q) Σ_{i=1}^{Q} (1/rank_i)
(4)
where Q is the total number of images retrieved, i is the image number, and rank_i is the rank of the first correct result.</p>
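        <p>The metric is a direct average of reciprocal ranks; in the sketch below, `ranks` (a name of ours) holds the rank of the first correct match for each query.</p>
        <preformat>
```python
def mean_reciprocal_rank(ranks):
    """MRR = (1/Q) * sum over queries of 1/rank_i (equation 4)."""
    return sum(1.0 / r for r in ranks) / len(ranks)
```
        </preformat>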
        <p>When all the weights are set to the initial value of 1, the results are shown in Table 2. We found that
characters with simple shapes showed better results.</p>
        <p>[Table 2: MRR score for each character]</p>
        <p>In this study, we used two clustering algorithms to preprocess seal images and obtained
good results. Unlike training a neural network, clustering can extract part
of the required information from an image without consuming large computational resources. We then
use a combination of deep features and geometric features to retrieve ancient kanji characters
through similarity calculation. This reduces the time spent re-training a model when adding new
categories of characters and also enables flexible use of the intermediate output of neural networks. Using
the recognition results, we can determine which seal belongs to which famous person, and then learn more
about that person’s hobbies from information about his or her collection; this will be the next target of
our research. However, the image retrieval performance needs to be further improved. How to make
better use of only one typeface image will be a focus of our future research.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Nguyen</surname>
            ,
            <given-names>K. C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nakagawa</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , (
          <year>2016</year>
          )
          <article-title>Text-line and character segmentation for offline recognition of handwritten Japanese text</article-title>
          .
          <source>IEICE technical report.</source>
          (pp.
          <fpage>53</fpage>
          -
          <lpage>58</lpage>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Zahan</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Iqbal</surname>
            ,
            <given-names>M. Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Selim</surname>
            ,
            <given-names>M. R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rahman</surname>
            ,
            <given-names>M. S.</given-names>
          </string-name>
          , (
          <year>2018</year>
          ).
          <article-title>Connected Component Analysis Based Two Zone Approach for Bangla Character Segmentation</article-title>
          .
          <source>In 2018 International Conference on Bangla Speech and Language Processing (ICBSLP)</source>
          (pp.
          <fpage>1</fpage>
          -
          <lpage>4</lpage>
          ), IEEE.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Fujitsu</given-names>
            <surname>Research</surname>
          </string-name>
          &amp; Development Center Co. Ltd.
          <article-title>Seal Retrieval Technique for Chinese Ancient Document Images</article-title>
          . Retrieved from: https://www.fujitsu.com/cn/en/about/resources/news/pressreleases/2016/frdc-0330.html
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Hartigan</surname>
            ,
            <given-names>J. A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wong</surname>
            ,
            <given-names>M. A.</given-names>
          </string-name>
          , (
          <year>1979</year>
          ).
          <article-title>Algorithm AS 136: A k-means clustering algorithm</article-title>
          .
          <source>Journal of the Royal Statistical Society</source>
          . Series C (pp.
          <fpage>100</fpage>
          -
          <lpage>108</lpage>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Comaniciu</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Meer</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          (
          <year>1999</year>
          ).
          <article-title>Mean shift analysis and applications</article-title>
          .
          <source>In Proceedings of the Seventh IEEE International Conference on Computer Vision</source>
          (pp.
          <fpage>1197</fpage>
          -
          <lpage>1203</lpage>
          ), IEEE.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Batjargal</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Maeda</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , (
          <year>2019</year>
          )
          <article-title>Seal Character segmentation</article-title>
          . Retrieved from https://github.com/timcanby/collector-s_seal-ImageProcessing
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <article-title>[7] The Shirakawa Shizuka Institute of East Asian Characters and Culture, Shirakawa Font project</article-title>
          . Retrieved from: http://www.ritsumei.ac.jp/acd/re/k-rsc/sio/shirakawa/index.html
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8] Liu, C. L.,
          <article-title>CASIA online and offline Chinese handwriting databases</article-title>
          .
          <source>Document Analysis and Recognition (ICDAR2011)</source>
          , IEEE.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Simonyan</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zisserman</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , (
          <year>2014</year>
          )
          <article-title>Very deep convolutional networks for large-scale image recognition</article-title>
          .
          <source>arXiv preprint arXiv:1409</source>
          .
          <fpage>1556</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Mika</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schölkopf</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Smola</surname>
            ,
            <given-names>A. J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Müller</surname>
            ,
            <given-names>K. R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Scholz</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rätsch</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          (
          <year>1999</year>
          ).
          <article-title>Kernel PCA and de-noising in feature spaces</article-title>
          .
          <source>In Advances in neural information processing systems</source>
          (pp.
          <fpage>536</fpage>
          -
          <lpage>542</lpage>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Zhang</surname>
          </string-name>
          , T. Y.,
          <string-name>
            <surname>Suen</surname>
            ,
            <given-names>C. Y.</given-names>
          </string-name>
          (
          <year>1984</year>
          ).
          <article-title>A fast parallel algorithm for thinning digital patterns</article-title>
          . (
          <volume>27</volume>
          (
          <issue>3</issue>
          ), pp.
          <fpage>236</fpage>
          -
          <lpage>239</lpage>
          ), ACM.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Harris</surname>
            ,
            <given-names>C. G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stephens</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>1988</year>
          ).
          <article-title>A combined corner and edge detector</article-title>
          .
          <source>In Alvey vision conference</source>
          (Vol.
          <volume>15</volume>
          , No.
          <volume>50</volume>
          , pp.
          <fpage>10</fpage>
          -
          <lpage>5244</lpage>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Huttenlocher</surname>
            ,
            <given-names>D. P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rucklidge</surname>
            ,
            <given-names>W. J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Klanderman</surname>
            ,
            <given-names>G. A.</given-names>
          </string-name>
          (
          <year>1992</year>
          ).
          <article-title>Comparing images using the Hausdorff distance under translation</article-title>
          .
          <source>In Proceedings 1992 IEEE Computer Society Conference on Computer Vision and Pattern Recognition</source>
          (pp.
          <fpage>654</fpage>
          -
          <lpage>656</lpage>
          ), IEEE.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] National Institute of Japanese Literature, Collectors' Seal Database, Retrieved from http://base1.nijl.ac.jp/~collectors_seal/</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>