<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Overview of Morpho Challenge in CLEF 2007</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mikko Kurimo</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mathias Creutz</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ville Turunen</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Adaptive Informatics Research Centre, Helsinki University of Technology</institution>
          <addr-line>P.O. Box 5400, FIN-02015 TKK</addr-line>
          ,
          <country country="FI">Finland</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Morpho Challenge 2007 contained an evaluation of unsupervised morpheme analysis algorithms using information retrieval experiments utilizing data available in CLEF. The objective of the challenge was to design statistical machine learning algorithms that discover which morphemes (smallest individually meaningful units of language) words consist of. Ideally, these are basic vocabulary units suitable for different tasks, such as text understanding, machine translation, information retrieval, and statistical language modeling. The evaluation of the submitted morpheme analyses was performed in two complementary ways. Competition 1: the proposed morpheme analyses were compared to a linguistic morpheme analysis gold standard by matching the morpheme-sharing word pairs. Competition 2: information retrieval (IR) experiments were performed, where the words in the documents and queries were replaced by their proposed morpheme representations and the search was based on morphemes instead of words. This paper provides an overview of the IR evaluation. The IR evaluations were provided for Finnish, German, and English, and participants were encouraged to apply their algorithms to all of them. The organizers performed the IR experiments using the queries, texts, and relevance judgments available in the CLEF forum and the morpheme analysis methods submitted by the challenge participants. The results show that the morpheme analysis has a significant effect on IR performance in all languages, and that the performance of the best unsupervised methods can be superior to the supervised reference methods. The challenge was part of the EU Network of Excellence PASCAL Challenge Program and was organized in collaboration with CLEF.</p>
      </abstract>
      <kwd-group>
        <kwd>H.3 [Information Storage and Retrieval]</kwd>
        <kwd>H.3.1 Content Analysis and Indexing</kwd>
        <kwd>H.3.3 Information Search and Retrieval</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Morpheme analysis is becoming increasingly important, because language technology methods
need to be extended as quickly and as automatically as possible to new languages that have limited
prior resources. That is why learning the morpheme analysis directly from large text corpora using
unsupervised machine learning algorithms is such an attractive approach and a very relevant
research topic today.</p>
      <p>
        The problem of learning morphemes directly from large text corpora using an
unsupervised machine learning algorithm is clearly a difficult one. First, the words should be
segmented into meaningful parts, and then these parts should be clustered into abstract classes
of morphemes that would be useful for modeling. It is also challenging to learn to generalize
the analysis to rare words, because even the largest text corpora are very sparse: a significant
portion of the words may occur only once. Many important words, for example proper names
and their inflections or some forms of long compound words, may not appear in the training
material at all, and their analysis is often even more challenging. However, the benefits of successful
morpheme analysis, in addition to obtaining a set of basic vocabulary units for modeling, can be
seen in many important tasks in language technology. The additional information included in
the units can provide support for building more sophisticated language models, for example, in
speech recognition [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], machine translation [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], and information retrieval [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
      <p>
        The evaluation of the unsupervised morpheme analysis was addressed in this challenge by
developing two complementary evaluations: one comparing the analyses to a linguistic morpheme
analysis gold standard, and another embedding them in a practical real-world application where
morpheme analysis might be useful. This paper presents an overview of how the application-oriented
evaluation, called Competition 2, was performed in the domain of finding useful index terms for
information retrieval tasks in multiple languages, using the queries, texts, and relevance judgments
available in the CLEF forum and the morpheme analysis methods submitted by the challenge
participants. The linguistic evaluation, called Competition 1, is described in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and Competition 2 in more detail in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>Traditionally, and especially in processing English texts, stemming algorithms have been used
to reduce the different inflected word forms to common roots or stems for indexing. However,
to achieve the best results when ported to new languages, the development of stemming algorithms
requires a considerable amount of language-specific work. In many highly inflecting,
compounding, and agglutinative European languages the number of distinct word forms is huge, and
the task of extracting useful index terms becomes both more complex and more important.</p>
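To make the stemming idea concrete, here is a toy suffix-stripper in Python. It is not the actual Porter algorithm (which applies a much larger set of context-sensitive rules in several phases); the suffix list below is a hypothetical simplification for illustration only.

```python
# A toy illustration of suffix stripping in the spirit of the Porter stemmer.
# NOT the real Porter algorithm; the suffix list is a made-up simplification.

SUFFIXES = ["ations", "ation", "ings", "ing", "edly", "ed", "es", "s", "ly"]

def toy_stem(word: str, min_stem: int = 3) -> str:
    """Strip the longest matching suffix, keeping at least `min_stem` chars."""
    for suffix in SUFFIXES:  # ordered longest-first so "ing" beats "s"
        if word.endswith(suffix) and len(word) - len(suffix) >= min_stem:
            return word[: -len(suffix)]
    return word

# Inflected variants collapse to a shared root, so they match at retrieval time.
print(toy_stem("boots"))    # boot
print(toy_stem("booting"))  # boot
print(toy_stem("booted"))   # boot
```

Even this toy version shows why porting a stemmer is language-specific work: the suffix inventory and the minimum-stem constraint must be redesigned for each new language.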
      <p>
        The same IR tasks that were attempted using the Morpho Challenge participants’ morpheme
analyses were also run with a number of reference methods to see how the unsupervised
morpheme analyses performed in comparison to them. These references included the organizers’ public
Morfessor Categories-Map [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] and Morfessor Baseline [
        <xref ref-type="bibr" rid="ref2 ref4">2, 4</xref>
        ], the Morfessor analysis improved by a
hybrid method [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], grammatical morpheme analysis based on the linguistic gold standards [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], the
traditional Porter stemming [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] of words, and the words as such without any processing.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Task</title>
      <p>
        Morpho Challenge 2007 is a follow-up to our previous Morpho Challenge 2005 (Unsupervised
Segmentation of Words into Morphemes) [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. In Morpho Challenge 2005 the focus was on the
segmentation of data into units that are useful for statistical modeling. The specific task of the
competition was to design an unsupervised statistical machine learning algorithm that segments
words into the smallest meaning-bearing units of language, morphemes. In addition to comparing
the obtained morphemes to a linguistic “gold standard”, their usefulness was evaluated by using
them for training statistical language models for speech recognition.
      </p>
      <p>In Morpho Challenge 2007 a more general focus was chosen: not only to segment words into
smaller units, but also to perform morpheme analysis of the word forms in the data. For instance,
the English words “boot, boots, foot, feet” might obtain the analyses “boot, boot + plural, foot,
foot + plural”, respectively. In linguistics, the concept of a morpheme does not necessarily
correspond directly to a particular word segment but to an abstract class. For some languages, though
not for many, there exist carefully constructed linguistic tools for this kind of analysis, but using
statistical machine learning methods we may still discover interesting alternatives that may rival
even the most carefully designed linguistic morphologies.</p>
    </sec>
    <sec id="sec-3">
      <title>Training data</title>
      <p>
        The Morpho Challenge 2007 task, in practice, was to return the unsupervised morpheme analysis
of every word form contained in a long word list supplied by the organizers for each test language
[
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. The participants were pointed to corpora [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] in which the words occur, so that the algorithms
may utilize information about word context. The text corpora from which the word lists were
collected were obtained from the Wortschatz collection (http://corpora.informatik.uni-leipzig.de/)
at the University of Leipzig (Germany). We used the plain text files (sentences.txt for each
language); the corpus sizes are 3 million sentences for English, Finnish, and German, and 1 million
sentences for Turkish. For English, Finnish, and Turkish we used preliminary corpora that had not
yet been released publicly at the Wortschatz site. The corpora were specially preprocessed for the
Morpho Challenge (tokenized, lower-cased, with some conversion of character encodings).
      </p>
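A minimal sketch of the kind of preprocessing described above (tokenization, lower-casing, and word-list collection); the exact tokenizer used for the Challenge corpora is not specified here, so the regular expression is an assumption:

```python
import re
from collections import Counter

def preprocess(sentences):
    """Tokenize, lower-case, and collect the word list with frequencies,
    mimicking (in a simplified way) the corpus preprocessing described above.
    The alphabetic-only regex is an illustrative assumption."""
    counts = Counter()
    for sentence in sentences:
        tokens = re.findall(r"[^\W\d_]+", sentence.lower())
        counts.update(tokens)
    return counts

corpus = ["The boots were muddy.", "New boots, old feet."]
word_counts = preprocess(corpus)
print(word_counts["boots"])  # 2
```

The resulting frequency list is what a word list like the one supplied to participants would be drawn from.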
      <p>To achieve the goal of designing language independent methods, the participants were
encouraged to submit results in all test languages. The information retrieval (IR) experiments were
performed by the organizers based on the morpheme analyses submitted by the participants.
</p>
    </sec>
    <sec id="sec-4">
      <title>IR evaluation data</title>
      <p>The data sets for testing the IR performance in each test language consisted of newspaper
articles as the source documents, test queries, and binary relevance judgments for the queries.
The organizers performed the IR experiments based on the morpheme analyses submitted by the
participants, so it was not necessary for the participants to obtain these data sets. However, all the
data was available to registered participants in the Cross-Language Evaluation Forum (CLEF,
http://www.clef-campaign.org/).</p>
      <p>The source documents were news articles collected from different newspapers selected as follows:
• In Finnish: 55K documents from short articles in Aamulehti 1994-95, 50 test queries on
specific news topics and 23K binary relevance assessments (CLEF 2004)
• In English: 170K documents from short articles in Los Angeles Times 1994 and Glasgow
Herald 1995, 50 test queries on specific news topics and 20K binary relevance assessments
(CLEF 2005).
• In German: 300K documents from short articles in Frankfurter Rundschau 1994, Der Spiegel
1994-95 and SDA German 1994-95, 60 test queries with 23K binary relevance assessments
(CLEF 2003).</p>
      <p>
        When performing the indexing and retrieval experiments for Competition 2, it turned out
that the test data contained quite a few new words in addition to those that were provided as
training data for Competition 1 [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Thus, the participants were offered a chance to improve
the retrieval results of their morpheme analyses by providing them with a list of the new words
found in all test languages. The participants then had the choice either to run their algorithms to
analyze as many of the new words as they could or liked, or to provide no extra analyses. No text
data resources to find context for the new words were provided, but it was possible to register to
CLEF and use the text data available there, or any other data the participants could obtain.
      </p>
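The new-word list offered to participants can be thought of as a simple set difference between the vocabulary of the IR test collection and the training word list; a minimal sketch with made-up words:

```python
def find_new_words(ir_vocabulary, training_wordlist):
    """Words appearing in the IR test collection but absent from the training
    word list; these are the words offered for extra analysis."""
    return sorted(set(ir_vocabulary) - set(training_wordlist))

# Illustrative vocabularies, not actual Challenge data.
training = {"boot", "boots", "foot"}
ir_vocab = {"boot", "feet", "aamulehti", "foot"}
print(find_new_words(ir_vocab, training))  # ['aamulehti', 'feet']
```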
    </sec>
    <sec id="sec-5">
      <title>Participants and their submissions</title>
      <p>
        By the deadline in May 2007, six research groups had submitted the analyses produced
by their algorithms. A total of 12 different algorithms were submitted, and 8 of them were run
in all test languages. In addition to the participants’ submissions, the following reference
methods were evaluated:
1. The public baseline methods called “Morfessor Baseline” and “Morfessor Categories-MAP” (or
here just “Morfessor MAP”) developed by the organizers [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
2. “dummy”: no words were split and no morpheme analysis was provided.
3. The words were analyzed using the gold standard of each language, which was utilized as the
“ground truth” in Competition 1 [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Besides stems and suffixes, the gold standard
analyses typically contain various grammatical tags, which we decided to simply include
as index terms as well. “grammatical first” uses only the first interpretation of each word,
whereas “grammatical all” uses all of them.
4. Porter: no real morpheme analysis was performed; instead, the words were stemmed by
Porter stemming, an option provided by the Lemur toolkit.
5. Tepper: a hybrid method developed by Michael Tepper [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] was utilized to improve the
morpheme analysis reference obtained by our Morfessor Categories-MAP.
      </p>
      <p>
        The outputs of the submitted algorithms are analyzed more closely in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. From the IR point of view it
is interesting to note that only Monson and Zeman decided to provide several alternative analyses
for most words instead of just the most likely one. McNamee’s algorithms did not attempt to
provide a real morpheme analysis, but focused directly on finding a representative substring for
each word type that would be likely to perform well in the IR evaluation.
      </p>
    </sec>
    <sec id="sec-6">
      <title>Evaluation</title>
      <p>In this evaluation, the organizers applied the analyses provided by the participants in information
retrieval experiments. The words in the queries and source documents were replaced by the
corresponding morpheme analyses provided by the participants, and the search was then based on
morphemes instead of words.</p>
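The substitution described above can be sketched as follows; the analyses dictionary and the decision to keep unanalyzed words unchanged are illustrative assumptions:

```python
# Each word in a document or query is replaced by its proposed morpheme
# analysis, so retrieval operates on morphemes. Unanalyzed words are kept
# as-is here (one possible choice, assumed for illustration).
analyses = {"boots": ["boot", "+PL"], "muddy": ["mud", "+ADJ"]}

def to_morpheme_terms(tokens, analyses):
    """Expand word tokens into morpheme index terms."""
    terms = []
    for tok in tokens:
        terms.extend(analyses.get(tok, [tok]))
    return terms

print(to_morpheme_terms(["the", "boots", "muddy"], analyses))
# ['the', 'boot', '+PL', 'mud', '+ADJ']
```

After this substitution, a query containing “boot” matches a document containing “boots” through the shared morpheme term.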
      <p>
        The evaluation was performed using a state-of-the-art retrieval method (the latest version
of the freely available LEMUR toolkit). We utilized two standard retrieval methods: Tfidf and
Okapi term weighting. The Tfidf implementation in LEMUR applies term frequency weights to both
query and document based on the BM25 weighting, with the Euclidean dot-product as the similarity
measure. Okapi in LEMUR is an implementation of the BM25 retrieval function as described in
[
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
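For reference, a single-term BM25 contribution in its common textbook form; LEMUR's Okapi implementation follows the cited description and may differ in constants and details, so this is only a sketch:

```python
import math

def bm25_score(tf, df, doc_len, avg_len, n_docs, k1=1.2, b=0.75):
    """One term's BM25 contribution (textbook form; parameter values k1 and b
    are conventional defaults, assumed here for illustration)."""
    # Inverse document frequency of the term.
    idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
    # Term-frequency component with document-length normalization.
    norm = tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_len))
    return idf * norm

# A term occurring twice in an average-length document, in 10 of 1000 docs:
score = bm25_score(tf=2, df=10, doc_len=100, avg_len=100, n_docs=1000)
print(round(score, 3))
```

Length normalization via the `b` parameter is what makes BM25 sensitive to very common terms, which motivates the stoplist described below.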
      <p>The evaluation criterion was uninterpolated average precision. There were several different
categories, and the winner with the highest average precision was selected separately for each
language and each category:
1. All morpheme analyses from the training data are used as index terms (“withoutnew”) vs.
additionally using also the morpheme analyses for the new words that existed in the IR data
but not in the training data (“withnew”).
2. Tfidf term weighting utilized for all index terms without any stoplist vs. Okapi term
weighting for all index terms excluding an automatic stoplist consisting of the most common
terms (the frequency threshold was 75,000 for Finnish and 150,000 for German and English).
The stoplist was developed for the Okapi weighting, because otherwise the Okapi weights were
not suitable for indexes that contained many very common terms.</p>
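Uninterpolated average precision can be computed per query as the mean of the precision values at the ranks where relevant documents are retrieved; a minimal sketch:

```python
def average_precision(ranked_docs, relevant):
    """Uninterpolated average precision for one query: sum of precision at
    each rank where a relevant document appears, divided by |relevant|."""
    hits, precision_sum = 0, 0.0
    for rank, doc in enumerate(ranked_docs, start=1):
        if doc in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant) if relevant else 0.0

# Relevant docs retrieved at ranks 1 and 3, out of 2 relevant docs in total:
print(average_precision(["d1", "d9", "d2"], {"d1", "d2"}))  # ≈ 0.833
```

The per-language figures reported in the Results section are means of this quantity over the test queries.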
    </sec>
    <sec id="sec-7">
      <title>Results</title>
      <p>
        The results of the information retrieval evaluations are shown in Table 2. Here we have selected
only the best runs from each participant (in bold) and reference method. For the full results see
[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Indexing is performed using Tfidf weighting for all morphemes (left) and Okapi weighting
for all morphemes except the most common ones (stoplist) with frequency higher than 150,000
(right).
      </p>
      <p>
        In the Finnish task, the highest average precision was obtained by the “Bernhard 2” algorithm,
which also won Competition 1 [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. The highest average precision 0.49 was obtained
using the Okapi weighting and stoplist for both the originally submitted morpheme analysis (for
Competition 1) and the morpheme analysis for the new words added for Competition 2. The
“Bernhard 1” algorithm obtained the highest average precision 0.47 for the German task using
the new words, Okapi and stoplist. For English, the highest average precision was obtained by the
“Bernhard 2” algorithm, which also won Competition 1 [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. As in Finnish and German,
the highest average precision 0.39 was obtained with the new words and using the Okapi weighting
and stoplist.
      </p>
      <p>
        As expected, the “grammatical” reference method based on linguistic Gold Standard morpheme
analysis [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] did not perform very well. However, with stoplist and Okapi term weighting it did
achieve better results than the “dummy” method in all languages. In Finnish and English the
performance was better than average, but quite poor in German. The “grammatical first” method,
which utilized only the first of the alternative analyses in indexing, was at least as good as or
better than “grammatical all”, which seems to indicate that the alternative analyses are not very
useful here.
      </p>
      <p>
        For the “Morfessor” references it is interesting to note that they always performed better than
the “grammatical”, which seems to suggest that the coverage of the analysis (“Morfessor” does not
have any out-of-vocabulary words) is more important for IR than grammatical correctness. In
general, the old “Morfessor Baseline” seems to provide a very good baseline in all tested languages
for the IR tasks as well, as it did for language modeling and speech recognition in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>Table 2 (excerpt): methods and word lists under Tfidf weighting for all morphemes; see [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] for the average precision values.
Finnish: Morfessor baseline (withnew), Bernhard 1 (withoutnew), grammatical first (withoutnew),
Bordag 5 (withnew), McNamee 5 (withoutnew), Porter (withnew), dummy (withnew), Zeman (withoutnew).
German: Morfessor baseline (withnew), Bernhard 1 (withoutnew), Porter (withnew), Monson Morfessor
(withnew), dummy (withnew), Bordag 5a (withnew), McNamee 5 (withoutnew), grammatical first
(withoutnew), Zeman (withoutnew).
English: Porter (withnew), McNamee 5 (withoutnew), Morfessor baseline (withnew), Tepper
(withoutnew), dummy (withnew), Bernhard 1 (withoutnew), Monson Morfessor (withoutnew), Pitler
(withoutnew), grammatical all (withoutnew), Zeman (withoutnew).</p>
    </sec>
    <sec id="sec-8">
      <title>Discussion</title>
      <p>The comparison of the results in the Tfidf and Okapi categories shows that Okapi with a stoplist
performed significantly better for all languages. We also ran Tfidf with a stoplist (results not
included here), which achieved results that were better than the plain Tfidf and only slightly inferior
to Okapi with a stoplist. However, we decided to report the original Tfidf instead, since we wanted
to show the performance and the relative ranking of the methods without the stoplist.</p>
      <p>Porter stemming, which is a standard word preprocessing tool in IR, remained unbeaten (by
a narrow margin) in our evaluations in English, but in German and especially in Finnish, the
unsupervised morpheme analysis methods clearly dominated the evaluation. Better stemming
algorithms might exist for those languages, but because of their more complex morphology,
their development might not be an easy task.</p>
      <p>As future work in this field, it should be relatively straightforward to evaluate unsupervised
morpheme analysis in several other interesting languages, because it is not limited to those
languages where rule-based grammatical analysis can be performed. It would also be interesting
to try to combine the rival analyses to produce something better.</p>
    </sec>
    <sec id="sec-9">
      <title>Conclusions</title>
      <p>The objective of Morpho Challenge 2007 was to design a statistical machine learning algorithm
that discovers which morphemes (smallest individually meaningful units of language) words consist
of. Ideally, these are basic vocabulary units suitable for different tasks, such as text
understanding, machine translation, information retrieval, and statistical language modeling. The current
challenge was a successful follow-up to our previous Morpho Challenge 2005 (Unsupervised
Segmentation of Words into Morphemes). This time the task was more general: instead of
looking for an explicit segmentation of words, the focus was on the morpheme analysis of the word
forms in the data.</p>
      <p>The scientific goals of this challenge were to learn about the phenomena underlying word
construction in natural languages, to discover approaches suitable for a wide range of languages, and to
advance machine learning methodology. The analysis and evaluation of the submitted machine
learning algorithms for unsupervised morpheme analysis showed that these goals were quite nicely
met. There were several novel unsupervised methods that achieved good results in several test
languages, both with respect to finding meaningful morphemes and to finding useful units for
information retrieval. The IR results also revealed that the morpheme analysis has a significant
effect on IR performance in all languages, and that the performance of the best unsupervised
methods can be superior to the supervised reference methods.</p>
    </sec>
    <sec id="sec-10">
      <title>Acknowledgments</title>
      <p>We thank all the participants for their submissions and enthusiasm. We owe great thanks as
well to the organizers of the PASCAL Challenge Program and CLEF who helped us organize this
challenge and the challenge workshop. In particular, we would like to thank Carol Peters from CLEF
for helping us bring Morpho Challenge into CLEF 2007 and organize a great workshop there. Our
work was supported by the Academy of Finland in the projects Adaptive Informatics and New
adaptive and learning methods in speech recognition. This work was supported in part by the IST
Programme of the European Community, under the PASCAL Network of Excellence,
IST-2002506778. This publication only reflects the authors’ views. We acknowledge that access rights to
data and other materials are restricted due to other commitments.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Jeff A.</given-names>
            <surname>Bilmes</surname>
          </string-name>
          and
          <string-name>
            <given-names>Katrin</given-names>
            <surname>Kirchhoff</surname>
          </string-name>
          .
          <article-title>Factored language models and generalized parallel backoff</article-title>
          .
          <source>In Proceedings of the Human Language Technology</source>
          ,
          <article-title>Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL)</article-title>
          , pages
          <fpage>4</fpage>
          -
          <lpage>6</lpage>
          , Edmonton, Canada,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Mathias</given-names>
            <surname>Creutz</surname>
          </string-name>
          and
          <string-name>
            <given-names>Krista</given-names>
            <surname>Lagus</surname>
          </string-name>
          .
          <article-title>Unsupervised discovery of morphemes</article-title>
          .
          <source>In Proceedings of the Workshop on Morphological and Phonological Learning of ACL-02</source>
          , pages
          <fpage>21</fpage>
          -
          <lpage>30</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Mathias</given-names>
            <surname>Creutz</surname>
          </string-name>
          and
          <string-name>
            <given-names>Krista</given-names>
            <surname>Lagus</surname>
          </string-name>
          .
          <article-title>Inducing the morphological lexicon of a natural language from unannotated text</article-title>
          .
          <source>In Proceedings of the International and Interdisciplinary Conference on Adaptive Knowledge Representation and Reasoning (AKRR'05)</source>
          , pages
          <fpage>106</fpage>
          -
          <lpage>113</lpage>
          , Espoo, Finland,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Mathias</given-names>
            <surname>Creutz</surname>
          </string-name>
          and
          <string-name>
            <given-names>Krista</given-names>
            <surname>Lagus</surname>
          </string-name>
          .
          <article-title>Unsupervised morpheme segmentation and morphology induction from text corpora using Morfessor</article-title>
          .
          <source>Technical Report A81</source>
          , Publications in Computer and Information Science, Helsinki University of Technology,
          <year>2005</year>
          . URL: http://www.cis.hut.fi/projects/morpho/.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Mathias</given-names>
            <surname>Creutz</surname>
          </string-name>
          and
          <string-name>
            <given-names>Krister</given-names>
            <surname>Linden</surname>
          </string-name>
          .
          <article-title>Morpheme segmentation gold standards for Finnish and English</article-title>
          .
          <source>Technical Report A77</source>
          , Publications in Computer and Information Science, Helsinki University of Technology,
          <year>2004</year>
          . URL: http://www.cis.hut.fi/projects/morpho/.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S.</given-names>
            <surname>Robertson</surname>
          </string-name>
          et al.
          <source>Okapi at TREC-3. In Proceedings of the Third Text Retrieval Conference (TREC-3)</source>
          , pages
          <fpage>109</fpage>
          -
          <lpage>126</lpage>
          ,
          <year>1994</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Mikko</given-names>
            <surname>Kurimo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Mathias</given-names>
            <surname>Creutz</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Ville</given-names>
            <surname>Turunen</surname>
          </string-name>
          .
          <article-title>Unsupervised morpheme analysis evaluation by IR experiments - Morpho Challenge 2007</article-title>
          .
          <source>In Working Notes for the CLEF 2007 Workshop</source>
          , Budapest, Hungary,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Mikko</given-names>
            <surname>Kurimo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Mathias</given-names>
            <surname>Creutz</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Matti</given-names>
            <surname>Varjokallio</surname>
          </string-name>
          .
          <article-title>Unsupervised morpheme analysis evaluation by a comparison to a linguistic Gold Standard - Morpho Challenge 2007</article-title>
          .
          <source>In Working Notes for the CLEF 2007 Workshop</source>
          , Budapest, Hungary,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Mikko</given-names>
            <surname>Kurimo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Mathias</given-names>
            <surname>Creutz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Matti</given-names>
            <surname>Varjokallio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Ebru</given-names>
            <surname>Arisoy</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Murat</given-names>
            <surname>Saraclar</surname>
          </string-name>
          .
          <source>Unsupervised segmentation of words into morphemes - Challenge</source>
          <year>2005</year>
          ,
          <article-title>an introduction and evaluation report</article-title>
          .
          <source>In PASCAL Challenge Workshop on Unsupervised segmentation of words into morphemes</source>
          , Venice, Italy,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Y.-S.</given-names>
            <surname>Lee</surname>
          </string-name>
          .
          <article-title>Morphological analysis for statistical machine translation</article-title>
          .
          <source>In Proceedings of the Human Language Technology</source>
          ,
          <article-title>Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL)</article-title>
          , Boston, MA, USA,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Porter</surname>
          </string-name>
          .
          <article-title>An algorithm for suffix stripping</article-title>
          .
          <source>Program</source>
          ,
          <volume>14</volume>
          (
          <issue>3</issue>
          ):
          <fpage>130</fpage>
          -
          <lpage>137</lpage>
          ,
          <year>July 1980</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Michael</given-names>
            <surname>Tepper</surname>
          </string-name>
          .
          <article-title>A Hybrid Approach to the Induction of Underlying Morphology</article-title>
          .
          <source>PhD thesis</source>
          , University of Washington,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Y.L.</given-names>
            <surname>Zieman</surname>
          </string-name>
          and
          <string-name>
            <given-names>H.L.</given-names>
            <surname>Bleich</surname>
          </string-name>
          .
          <article-title>Conceptual mapping of user's queries to medical subject headings</article-title>
          .
          <source>In Proceedings of the 1997 American Medical Informatics Association (AMIA) Annual Fall Symposium</source>
          ,
          <year>October 1997</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>