<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Learning to Map Ontologies with Neural Network</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ming Mao</string-name>
          <xref ref-type="aff" rid="aff0" />
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yefei Peng</string-name>
          <xref ref-type="aff" rid="aff1" />
        </contrib>
        <contrib contrib-type="author">
          <string-name>Paul Munro</string-name>
          <xref ref-type="aff" rid="aff1" />
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>SAP Labs</institution>, Palo Alto,
          <addr-line>CA 94304</addr-line>,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>School of Information Sciences, University of Pittsburgh</institution>, Pittsburgh,
          <addr-line>PA 15206</addr-line>,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this paper we apply the idea of training multiple tasks simultaneously on a partially shared feedforward network to the domain of ontology mapping. A “cross-training” mechanism is used to specify corresponding nodes between the two ontologies. By examining the output of one network in response to a stimulus from the other, we test whether the network can learn correspondences that were not cross-trained. Two studies on ontology mapping were conducted. The results show that, given sufficient training data, the network can fill in the missing mappings between ontologies.</p>
      </abstract>
      <kwd-group>
        <kwd>neural network</kwd>
        <kwd>shared weights</kwd>
        <kwd>transfer</kwd>
        <kwd>analogy</kwd>
        <kwd>ontology mapping</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
      <p>An early implementation of IENN appeared in Munro’s work [2], which used a
feedforward network with two hidden layers trained on three simple analogous
tasks: three squares with different orientations. In this study, we use a partially
shared network architecture [3][4]. It should be noted that the partially shared
architecture used here is virtually identical to the network in Hinton’s classic
“family trees” example [1]. That network also had independent inputs and shared
hidden units, but the paper only briefly addressed the notion of generalization.</p>
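      <p>As a minimal sketch of this partially shared design (layer sizes, activations, and weight initialization here are illustrative assumptions, not the paper’s exact values), a forward pass can route either ontology’s input bank through a common hidden core and out through either ontology’s output bank:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes are illustrative assumptions, not the paper's exact values.
N_NODES, N_REL, N_HID = 6, 4, 8

# Independent input and output banks per ontology, with shared hidden
# weights, echoing the partially shared architecture described in the text.
W_in = {o: rng.normal(0.0, 0.1, (N_NODES + N_REL, N_HID)) for o in "AB"}
W_shared = rng.normal(0.0, 0.1, (N_HID, N_HID))
W_out = {o: rng.normal(0.0, 0.1, (N_HID, N_NODES)) for o in "AB"}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(src, dst, node, rel):
    """Encode a (node, relation) stimulus through src's input bank and the
    shared hidden layer, then decode through dst's output bank.
    src == dst is ordinary (vertical) training; src != dst corresponds to
    cross-training or cross-testing."""
    x = np.zeros(N_NODES + N_REL)
    x[node] = 1.0            # one-hot node unit
    x[N_NODES + rel] = 1.0   # one-hot relation unit
    h = sigmoid(x @ W_in[src])
    h = sigmoid(h @ W_shared)
    return sigmoid(h @ W_out[dst])

# Stimulate NetA's input, read NetB's output (the cross-testing direction).
y = forward("A", "B", node=0, rel=0)
```

<p>The shared weights W_shared are the only path between the two tasks, which is what lets structure learned on one ontology transfer to the other.</p>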
      <p>The ontologies used in our experiment were Ontologies A and B, shown in Figure 1.
There are four types of relationship: identity, parent, child, and sibling, so there
are four nodes in S<sub>in</sub>. The training set for NetA includes all possible
training records in Ontology A, i.e. all valid combinations of its 6 nodes and 4
relationships; the same holds for NetB.</p>
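      <p>Since Figure 1 is not reproduced here, the toy 6-node tree below is only an assumed stand-in; it is meant to show how such a training set is enumerated from a tree and the four relationship types:</p>

```python
from itertools import product

# Hypothetical 6-node tree standing in for an ontology; the actual
# structure of Ontology A in Figure 1 may differ.
parent = {"a": "r", "b": "r", "c": "a", "d": "a", "e": "b"}
nodes = ["r", "a", "b", "c", "d", "e"]

def relation(x, y):
    """Relationship of y to x, or None if the pair is not related."""
    if x == y:
        return "identity"
    if parent.get(x) == y:
        return "parent"   # y is x's parent
    if parent.get(y) == x:
        return "child"    # y is x's child
    if parent.get(x) is not None and parent.get(x) == parent.get(y):
        return "sibling"  # x and y share a parent
    return None

# One training record per valid (stimulus node, relation, response node)
# triple -- every usable combination of nodes and relationships.
records = [(x, r, y) for x, y in product(nodes, nodes)
           if (r := relation(x, y)) is not None]
```
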
      <p>The network is cross-trained on the following pairs: (r, R), (a, A), (b, B), (c, C),
and (d, D).</p>
      <p>A total of 100 trials were performed. In each trial, the networks were initialized
by setting the weights to small random values drawn from a uniform distribution. The
network was then trained on two vertical training tasks (NetA and NetB) and two
cross-training tasks (NetAB and NetBA).</p>
      <p>One training cycle of the network is:
1) randomly select and train on a record for NetA;
2) randomly select and train on a record for NetB;
3) with some probability, train on a record for NetAB and the same record for NetBA.
The probability of cross-training is 0.6.</p>
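      <p>The cycle above can be sketched as follows; the training sets are placeholders and the train() call stands in for one backprop step, so only the scheduling logic is shown:</p>

```python
import random

random.seed(0)
P_CROSS = 0.6  # cross-training probability stated in the text

# Stand-in training sets; in the real setup these are the enumerated
# ontology records and the five cross-trained node pairs.
records_A = [("r", "identity", "r"), ("a", "parent", "r")]
records_B = [("R", "identity", "R"), ("A", "parent", "R")]
cross_pairs = [("r", "R"), ("a", "A"), ("b", "B"), ("c", "C"), ("d", "D")]

log = []

def train(task, record):
    log.append(task)  # placeholder for one backprop step on that sub-network

def training_cycle():
    train("NetA", random.choice(records_A))  # 1) vertical task A
    train("NetB", random.choice(records_B))  # 2) vertical task B
    if random.random() < P_CROSS:            # 3) paired cross-training
        pair = random.choice(cross_pairs)
        train("NetAB", pair)                 #    same record, both directions
        train("NetBA", pair)

for _ in range(10000):
    training_cycle()
```

<p>Over many cycles, roughly 60% of them include the paired NetAB/NetBA step, while every cycle trains both vertical tasks.</p>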
      <p>After each trial, cross-testing was performed for A:1, B:2, B:3, and B:4. The
“self” (identity) relationship was used during cross-testing.</p>
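      <p>A simple readout suffices for this cross-test; in the sketch below, the node names for Ontology B follow the text, while the ordering of the output units is an assumption for illustration:</p>

```python
import numpy as np

# Ontology B's node names as given in the text; the unit ordering on the
# output bank is an assumption made for this illustration.
nodes_B = ["R", "A", "B", "C", "D", "2", "3", "4"]

def mapped_node(output_activations):
    """Cross-test readout: after presenting (node, self) to one network's
    input bank, the most active unit on the other network's output bank is
    taken as the predicted mapping."""
    return nodes_B[int(np.argmax(output_activations))]

# e.g. an output vector peaking on unit 5 predicts the mapping B:2
act = np.array([0.1, 0.05, 0.2, 0.1, 0.1, 0.9, 0.3, 0.2])
print(mapped_node(act))
```
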
      <p>Of the 100 trials, 93 yielded the correct mapping of A:1 to B:2, an accuracy of
93%. There is no doubt that B:2’s correct mapping is A:1, since both are (Car,
Car). But B:3 (Luxury Car) and B:4 (Family Car) have no exact correspondents in
Ontology A, because they lie on the additional layer that Ontology B has relative
to Ontology A. They can either go up one layer and map to A:1, or go down one
layer and map to A:C and A:D respectively. The correct mappings are therefore
(A:1, B:3) or (A:C, B:3), and (A:1, B:4) or (A:D, B:4). In total, these four correct
cases account for 75 of the 100 trials, an accuracy of 75%.</p>
      <p>In our approach, only structural information is used for ontology mapping.
In typical ontology mapping methods, textual information plays an important role.
Future work will include textual information in our neural network; for example,
training pairs could come from high-confidence mappings derived from textual
information.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name><surname>Hinton</surname>, <given-names>G.</given-names></string-name> (<year>1986</year>). <article-title>Learning distributed representations of concepts</article-title>. In <source>Proceedings of the Eighth Annual Conference of the Cognitive Science Society</source>, pages <fpage>1</fpage>-<lpage>12</lpage>, Amherst. Hillsdale, NJ: Lawrence Erlbaum.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name><surname>Munro</surname>, <given-names>P.</given-names></string-name> (<year>1996</year>). <article-title>Shared network resources and shared task properties</article-title>. In <source>Proceedings of the Eighteenth Annual Conference of the Cognitive Science Society</source>. Mahwah, NJ: Erlbaum.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name><surname>Munro</surname>, <given-names>P.</given-names></string-name> (<year>2008</year>). <article-title>Learning Structurally Analogous Tasks</article-title>. In <source>Proceedings of the Eighteenth Conference of Artificial Neural Networks</source>. Prague, Czech Republic.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name><surname>Peng</surname>, <given-names>Y.</given-names></string-name> and <string-name><surname>Munro</surname>, <given-names>P.</given-names></string-name>. <article-title>Learning Mappings with Neural Network</article-title>. In <source>Proceedings of the 2009 International Conference on Artificial Intelligence</source>.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>