<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>An approach to unsupervised ontology term tagging of dependency-parsed text using a Self-Organizing Map (SOM)</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Seppo Nyrkkö</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Digital Humanities, University of Helsinki</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>I describe here a machine-learning estimation method for term tagging that can learn semantic disambiguation. The model is trained with a Semantic Web ontology and a set of sample text documents in which concepts referring to that ontology have been tagged. The method builds numeric representations, or embeddings, from a dependency analysis of the syntactic environment of the word being analyzed. In contrast to many modern neural data-driven models, this model uses a less data-hungry unsupervised clustering method, the Self-Organizing Map (SOM). Based on the observations made with the experimental model, I suggest the method can be used to populate ontologies with new concepts and terms, and to guess the best-matching ontology concepts for the terms found.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Large amounts of written information flow through news, article databases and
knowledge forums, and finding the required information often depends on using the proper
keywords. Semantic Web ontologies describe a vocabulary of concepts and terms
that are useful for Information Retrieval in their specified domain.</p>
      <p>
        Ontologies can provide enhanced search results when
multiple taxonomies of terms and keywords are used in composing a large document
database. Such databases may cover, for instance, a multilingual, cultural or
biological domain [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], where problems may be caused by diverse term variants,
historical synonyms, misspellings and foreign terms.
      </p>
      <p>
        Automated content analysis based on machine learning can reduce the amount
of manual work in concept annotation and keyword tagging.
Automatic concept tagging makes it possible to apply ontology-based retrieval
methods that combine keyword search with concept-based search [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. This leads
to better coverage and quality compared to standard information retrieval.
      </p>
      <p>I suggest here a method in which a machine learning model is trained for
semantic tagging. For demonstration purposes, a model is trained with a small
annotated text containing a set of examples of the terms described in the
annotation ontology. In Figure 1, a sample ontology used for the experiment is shown
as a Venn diagram in which separate and nested concept clusters appear as
graphical regions.
The method is intended to assist the process of adding semantic tags to
individual sentences and paragraphs in new documents added to the database.
The input text in a new document is analyzed one sentence at a time with a
dependency parser. The semantic similarities between terms in the new input and the
reference text (training data) are estimated from the similarities of their syntactic
dependencies in the new document.</p>
      <p>A high syntactic similarity is considered a possible semantic match.
Furthermore, the method can be extended to detect a new term without a proper
match. The approach also finds the closest match for an out-of-vocabulary term
that is not yet introduced in the current ontology.</p>
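      <p>The matching of syntactic contexts described above can be sketched as a comparison of bi-directional dependency features. The arc triples, the cosine measure and the example words below are illustrative assumptions for the sketch, not part of the OntoR implementation:</p>

```python
from collections import Counter
from math import sqrt

def dependency_features(arcs, target):
    """Collect bi-directional dependency features for a target word.

    Each arc is (head, relation, dependent); a feature records the
    relation together with the word at the other end of the arc.
    """
    feats = Counter()
    for head, rel, dep in arcs:
        if head == target:
            feats[(rel, "dep:" + dep)] += 1
        if dep == target:
            feats[(rel, "head:" + head)] += 1
    return feats

def cosine(a, b):
    """Cosine similarity between two sparse feature counters."""
    dot = sum(a[f] * b[f] for f in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical parses: a known term "aspirin" in the training data
# versus the out-of-vocabulary "ibuprofen" in a new document.
train_arcs = [("prescribed", "obj", "aspirin"), ("aspirin", "amod", "oral")]
new_arcs = [("prescribed", "obj", "ibuprofen"), ("ibuprofen", "amod", "oral")]

known = dependency_features(train_arcs, "aspirin")
unknown = dependency_features(new_arcs, "ibuprofen")
print(cosine(known, unknown))  # shared syntactic context -> high similarity
```

      <p>A high score marks the out-of-vocabulary word as a candidate semantic match for the known term, which is the basis for suggesting its closest ontology concept.</p>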
      <p>
        By using an unsupervised machine learning method such as the Self-Organizing
Map (SOM), we can even give a comprehensive, visual impression of the
collection of articles available in the text database. A SOM is a neural network model
that is different from most modern neural network architectures. It is less
data-hungry and it is tolerant of noise in the training data [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. This way, it can also
classify rare term occurrences that have no exact match in the training data set,
by guessing the best partial match based on the syntactic features of the term.
This makes it an interesting alternative model for learning term features
associated with a set of ontology concepts.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Sample experiment</title>
      <p>
        In the experiment, the sentences of the text corpora are processed with the Stanford
Parser (Penn PCFG dependency model for English). The sentences are tokenized
as part of the dependency parsing process. Each token (in its actual word form)
in the sentences is indexed in the training sentence bank. The dependency
arcs and bi-arcs are extracted from the parse output, and each arc forms a
feature descriptor on the tagged word. The arcs are bi-directional, so that one
dependency is tagged on both the head and the dependent word. The semantic features for
individual word tokens are random-projected indexes of the features produced by
the Stanford Parser model. The syntactic context representation is very similar
to that of Dependency-Based Word Embeddings as in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
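      <p>As a rough sketch of the random-projected feature indexes, each dependency feature string can be hashed to a deterministic sparse ±1 index vector, and a token embedding formed as the sum of the vectors of its features. The dimensionality, sparsity and feature naming below are illustrative assumptions, not the parameters used in the experiment:</p>

```python
import hashlib
import numpy as np

DIM = 64  # dimensionality of the projected term vector (illustrative)

def feature_vector(feature, dim=DIM, nonzeros=4):
    """Map a feature string to a fixed sparse random +/-1 index vector.

    The vector is derived deterministically from a hash of the feature,
    so the same feature always projects to the same directions.
    """
    seed = int.from_bytes(hashlib.sha256(feature.encode()).digest()[:4], "big")
    rng = np.random.default_rng(seed)
    vec = np.zeros(dim)
    idx = rng.choice(dim, size=nonzeros, replace=False)
    vec[idx] = rng.choice([-1.0, 1.0], size=nonzeros)
    return vec

def embed(features, dim=DIM):
    """Sum the random index vectors of all dependency features of a token."""
    out = np.zeros(dim)
    for f in features:
        out += feature_vector(f, dim)
    return out

# Hypothetical dependency-arc features for one token of a parsed sentence.
emb = embed(["obj:prescribed", "amod:oral"])
print(emb.shape)  # (64,)
```

      <p>Because the projection is deterministic, tokens sharing dependency features receive correlated vectors without storing an explicit feature vocabulary.</p>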
      <p>
        The experimental OntoR tool was developed in the R statistical
programming environment, using the CRAN library som, based on SOM-PAK, the
Self-Organizing Map Program Package (version 3.1) [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. A screen shot of the
OntoR user interface (Figure 2) demonstrates how the ontology-based term structure is
reflected in a SOM map containing the keywords. A modified plot of the SOM
map has been developed to explore the mapping of ontology term classes and
super-classes over the term model trained with the sample corpus.
      </p>
      <p>The areas in the resulting SOM grid show the taxonomical hierarchy that can
be seen in the mapping of ontology terms in the unsupervised model representing
the training corpus. Multiple clusters were seen in which both subterms and
terms were categorized in the same map cell and its neighborhood. This supports
the hypothesis from earlier work that a data point cluster with an internal topology,
or structure, has a strong tendency to distribute over multiple adjacent cells
of the SOM lattice.</p>
    </sec>
    <sec id="sec-3">
      <title>Related Work and Discussion</title>
      <p>
        The WebSOM project [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] inspired work towards unsupervised term learning and
classification with the Self-Organizing Map; it works and learns on
Internet-sourced text articles and extracts topics based on the tokens found in
them. The work by Tanev et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] also describes the main paradigms
of weakly supervised ontology population, one being the term-pattern-related
method and the other being context-sensitive triggering. The approach
described here is a contextual extension of the WebSOM model, since it adds
syntactic dependencies as additional information over the tokens found in the
text. In this work, the suggested method for mapping concepts occurring in text
onto the SOM grid will analogously support automatic tagging of new term
candidates in document databases. This seems applicable especially to hyponyms
(terms for subclasses) and synonyms of previously categorized terms. In the
following phase of the experiment, the internal weighting parameters for building
numeric embeddings from syntactic analysis will be evaluated and analyzed in
contrast to using plain word-based embeddings.
      </p>
      <p>
        This method also appears applicable to weakly supervised ontology
concept population for adding new term candidates, since some rare
term occurrences were found in distinct areas of the SOM map in the
experiment. This aim of using the SOM for concept mining is also supported by the work of
Honkela and Pöllä [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. The set of ontologies used with OntoR is not restricted to
a medical domain, as seen with the sample experiment. The ontologies used can
even cover multiple topics, for instance history, politics, science and culture.
      </p>
      <p>Acknowledgments: Research and development of the method and the
OntoR tool have been supported by the MOLTO EU project and Whitelake
Software Point. The suggested model and the inspection of the methods described
here have been developed with feedback from Professor Timo
Honkela and the Research Seminar in Language Technology held at the University
of Helsinki.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>Jouni</given-names>
            <surname>Tuominen</surname>
          </string-name>
          , Nina Laurenne, and Eero Hyvönen.
          <article-title>Biological names and taxonomies on the semantic web – managing the change in scientific conception</article-title>
          .
          <source>The Semantic Web: Research and Applications</source>
          , pages
          <volume>255</volume>
          –
          <fpage>269</fpage>
          ,
          <year>2011</year>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>Minna</given-names>
            <surname>Tamper</surname>
          </string-name>
          , Petri Leskinen, Esko Ikkala, Arttu Oksanen, Eetu Mäkelä,
          <string-name>
            <given-names>Erkki</given-names>
            <surname>Heino</surname>
          </string-name>
          , Jouni Tuominen, Mikko Koho, and Eero Hyvönen.
          <article-title>AATOS – a configurable tool for automatic annotation</article-title>
          .
          <source>In International Conference on Language, Data and Knowledge</source>
          , pages
          <volume>276</volume>
          –
          <fpage>289</fpage>
          . Springer,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>Juha</given-names>
            <surname>Vesanto</surname>
          </string-name>
          and
          <string-name>
            <given-names>Esa</given-names>
            <surname>Alhoniemi</surname>
          </string-name>
          .
          <article-title>Clustering of the self-organizing map</article-title>
          .
          <source>IEEE Transactions on neural networks</source>
          ,
          <volume>11</volume>
          (
          <issue>3</issue>
          ):
          <volume>586</volume>
          –
          <fpage>600</fpage>
          ,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>Omer</given-names>
            <surname>Levy</surname>
          </string-name>
          and
          <string-name>
            <given-names>Yoav</given-names>
            <surname>Goldberg</surname>
          </string-name>
          .
          <article-title>Dependency-based word embeddings</article-title>
          .
          <source>In ACL (2)</source>
          , pages
          <fpage>302</fpage>
          –
          <fpage>308</fpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>Teuvo</given-names>
            <surname>Kohonen</surname>
          </string-name>
          , Jussi Hynninen, Jari Kangas, and
          <string-name>
            <given-names>Jorma</given-names>
            <surname>Laaksonen</surname>
          </string-name>
          .
          <article-title>SOM_PAK: The self-organizing map program package</article-title>
          .
          <source>Report A31</source>
          , Helsinki University of Technology,
          <source>Laboratory of Computer and Information Science</source>
          ,
          <year>1996</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>T</given-names>
            <surname>Honkela</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S</given-names>
            <surname>Kaski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T</given-names>
            <surname>Kohonen</surname>
          </string-name>
          ,
          and K. Lagus.
          <article-title>Self-organizing maps of very large document collections: Justification for the WEBSOM method</article-title>
          .
          <source>In Classification, Data Analysis, and Data Highways</source>
          , pages
          <volume>245</volume>
          –
          <fpage>252</fpage>
          . Springer,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>Hristo</given-names>
            <surname>Tanev</surname>
          </string-name>
          and
          <string-name>
            <given-names>Bernardo</given-names>
            <surname>Magnini</surname>
          </string-name>
          .
          <article-title>Weakly supervised approaches for ontology population</article-title>
          .
          <source>In 11th Conference of the European Chapter of the Association for Computational Linguistics</source>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>Timo</given-names>
            <surname>Honkela</surname>
          </string-name>
          and
          <article-title>Matti Polla. Concept mining with self-organizing maps for the semantic web</article-title>
          .
          <source>In WSOM</source>
          , pages
          <volume>98</volume>
          –
          <fpage>106</fpage>
          . Springer,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>