<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Ontology Mapping Neural Network: An Approach to Learning and Inferring Correspondences among Ontologies</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Yefei Peng</string-name>
          <email>yefeip@google.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Paul Munro</string-name>
          <email>pmunro@pitt.edu</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ming Mao</string-name>
          <email>ming.mao@sap.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>SAP Labs</institution>
          ,
          <addr-line>Palo Alto CA 94304</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Pittsburgh</institution>
          ,
          <addr-line>Pittsburgh PA 15206</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>An ontology mapping neural network (OMNN) is proposed to learn and infer correspondences among ontologies. It extends the Identical Elements Neural Network (IENN)'s ability to represent and map complex relationships. The learning dynamics of simultaneous (interlaced) training of similar tasks interact at the shared connections of the networks. The output of one network in response to a stimulus to another network can be interpreted as an analogical mapping. In a similar fashion, the networks can be explicitly trained to map specific items in one domain to specific items in another domain. A shared representation layer helps the network learn relationship mappings with a direct training method. OMNN is applied to several OAEI benchmark test cases to evaluate its ontology-mapping performance. Results show that the OMNN approach is competitive with the top-performing systems that participated in OAEI 2009.</p>
      </abstract>
      <kwd-group>
        <kwd>neural network</kwd>
        <kwd>ontology mapping</kwd>
        <kwd>analogy</kwd>
        <kwd>learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Ontology mapping is important to the emerging Semantic Web, a
characteristic of which is the pervasive use of agents and web services.
However, agents might use different, independently designed protocols, which
means that when agents meet they have little chance of understanding each other
without an “interpreter”. Ontology mapping is “a necessary precondition to establish
interoperability between agents or services using different ontologies.” [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]
      </p>
      <p>
        The Ontology Mapping Neural Network (OMNN) extends the Identical
Elements Neural Network (IENN)’s [
        <xref ref-type="bibr" rid="ref1 ref3 ref4 ref5">3, 4, 1, 5</xref>
        ] ability to represent and map complex
relationships. The network can learn high-level features common to different
tasks and use them to infer correspondences between the tasks. The learning
dynamics of simultaneous (interlaced) training of similar tasks interact at the
shared connections of the networks. The output of one network in response to a
stimulus to another network can be interpreted as an analogical mapping. In a
similar fashion, the networks can be explicitly trained to map specific items in
one domain to specific items in another domain.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Network Architecture</title>
      <p>The network architecture is shown in Figure 1. Ain and Bin are input
subvectors for nodes from ontology A and ontology B, respectively; they share
one representation layer, ABr. Similarly, RAin represents relationships from
graph A and RBin represents relationships from graph B; they share one
representation layer, Rr.</p>
      <p>In this network, each to-be-mapped node in a graph is represented by a
single active unit in the input layers (Ain, Bin) and output layers (Aout, Bout).
Likewise, in the relationship input layers (RAin, RBin), each relationship is
represented by a single active unit.</p>
      <p>The network shown in Figure 1 comprises the following sub-networks:
1. NetAAA: {Ain-ABr-XAB; RAin-RRA-XR}-H1-W-H2-VA-Aout;
2. NetBBB: {Bin-ABr-XAB; RBin-RRB-XR}-H1-W-H2-VB-Bout;
3. NetAAB: {Ain-ABr-XAB; RAin-RRA-XR}-H1-W-H2-VB-Bout;
4. NetBBA: {Bin-ABr-XAB; RBin-RRB-XR}-H1-W-H2-VA-Aout;</p>
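      <p>As an illustrative sketch only (not the authors' implementation), the shared-weight structure of these sub-networks can be expressed in NumPy. The layer names (ABr, Rr, H1, W, H2, VA, VB) follow Figure 1; the input weight names UA and UB, the layer sizes, and the sigmoid activation are assumptions made for the sketch.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    # small random uniform weights, as described in the text
    return rng.uniform(-0.1, 0.1, size=(n_in, n_out))

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

n_nodes, n_rel, n_repr, n_hid = 33, 24, 10, 20  # assumed sizes

# Weights shared across the four sub-networks
UA, UB = layer(n_nodes, n_repr), layer(n_nodes, n_repr)  # Ain/Bin -> ABr (names assumed)
RUA, RUB = layer(n_rel, n_repr), layer(n_rel, n_repr)    # RAin/RBin -> Rr
XAB, XR = layer(n_repr, n_hid), layer(n_repr, n_hid)     # ABr/Rr -> H1
W = layer(n_hid, n_hid)                                  # H1 -> H2
VA, VB = layer(n_hid, n_nodes), layer(n_hid, n_nodes)    # H2 -> Aout/Bout

def net_AAB(a_onehot, ra_onehot):
    """NetAAB: inputs from ontology A, output read at ontology B's units."""
    abr = sigmoid(a_onehot @ UA)       # shared item representation ABr
    rr = sigmoid(ra_onehot @ RUA)      # shared relationship representation Rr
    h1 = sigmoid(abr @ XAB + rr @ XR)  # item and relationship paths merge at H1
    h2 = sigmoid(h1 @ W)
    return sigmoid(h2 @ VB)            # analogical mapping onto B

a = np.eye(n_nodes)[0]   # one active unit per to-be-mapped node
ra = np.eye(n_rel)[0]    # one active unit per relationship
print(net_AAB(a, ra).shape)
```

Swapping VB for VA (and vice versa) in the last step yields the other sub-networks of the list above.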
      <p>An explicit cross-training method is proposed to train the correspondence
of two relationships by directly making their representations more similar. Only
a portion of the neural network is involved in this method: the input subvectors
and the representation layer. For example, suppose we want to train the
relationship correspondence &lt;R1, R2&gt;, where R1 belongs to ontology A and
R2 belongs to ontology B. R1 is presented at RAin and the resulting output at Rr
is recorded; call it RR1. Then R2 is presented at RBin, with RR1 treated as the
target value at Rr. The weights RUB are modified by standard back-propagation so
that R1 and R2 have more similar representations at Rr. Then &lt;R1, R2&gt; is
trained in the reverse direction, modifying the weights RUA to make R1's
representation at Rr more similar to that of R2. The sub-networks involved
in this training method are named RNetAB and RNetBA.</p>
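      <p>The cross-training step above can be sketched as follows, again purely as an illustration: only the relationship input weights RUA, RUB and the shared layer Rr take part. The learning rate, layer sizes, and sigmoid activation are assumptions.</p>

```python
import numpy as np

rng = np.random.default_rng(1)
n_rel, n_repr, lr = 24, 10, 0.5  # assumed sizes and learning rate

# weights from the relationship input layers to the shared layer Rr
RUA = rng.uniform(-0.1, 0.1, (n_rel, n_repr))  # RAin -> Rr
RUB = rng.uniform(-0.1, 0.1, (n_rel, n_repr))  # RBin -> Rr

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def cross_train_pair(r1, r2, steps=200):
    """Pull the Rr representations of <R1, R2> toward each other."""
    for _ in range(steps):
        rep1 = sigmoid(RUA[r1])  # R1 presented at RAin; output at Rr (RR1)
        rep2 = sigmoid(RUB[r2])  # R2 presented at RBin; output at Rr
        # RNetAB: RR1 is the target at Rr for R2; back-propagate into RUB
        RUB[r2] += lr * (rep1 - rep2) * rep2 * (1.0 - rep2)
        # RNetBA: R2's representation is the target for R1; adjust RUA
        rep2 = sigmoid(RUB[r2])
        RUA[r1] += lr * (rep2 - rep1) * rep1 * (1.0 - rep1)

before = np.abs(sigmoid(RUA[0]) - sigmoid(RUB[0])).mean()
cross_train_pair(0, 0)
after = np.abs(sigmoid(RUA[0]) - sigmoid(RUB[0])).mean()
print(after < before)  # the two representations at Rr have moved closer
```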
      <p>The network is initialized by setting the weights to small random values
drawn from a uniform distribution. It is then trained with two vertical training
tasks (NetAAA and NetBBB), two cross-training tasks (NetAAB and NetBBA), and
two explicit training tasks (RNetAB and RNetBA).</p>
    </sec>
    <sec id="sec-3">
      <title>Results</title>
      <p>Selected OAEI benchmark tests (http://oaei.ontologymatching.org/) are used
to evaluate the OMNN approach. All test cases share the same reference ontology,
while the test ontology differs from case to case. The reference ontology
contains 33 named classes, 24 object properties, 40 data properties, 56 named
individuals and 20 anonymous individuals. In the OMNN approach, classes are
treated as items; object properties and data properties are treated as
relationships; individuals are not used.</p>
      <p>Textual information is used to generate high-confidence mappings, which
are then used as cross-training data in OMNN. However, OMNN does not focus on
how well textual information is used.</p>
      <p>In order to compare with other approaches that rely heavily on textual
information, 19 test cases with limited textual information are selected for our
experiments: test cases 249, 257, 258, 259, 265, 266 and their sub-cases.</p>
      <p>To obtain a meaningful comparison, the Wilcoxon test is performed to
compare OMNN with the other 12 systems that participated in OAEI 2009 on
precision, recall and F-measure. The results are shown in Table 1: OMNN has a
better F-measure than 9 of the 12 systems, and OMNN's recall is significantly
better than 10 of the systems. Note that a p-value &lt; 0.05 indicates a
significant difference between the two systems compared; the detailed data is
then examined to determine which of the two is the better one.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Bao</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Munro</surname>
            ,
            <given-names>P.W.</given-names>
          </string-name>
          :
          <article-title>Structural mapping with identical elements neural network</article-title>
          .
          <source>In: Proceedings of the International Joint Conference on Neural Networks - IJCNN 2006</source>
          . pp.
          <fpage>870</fpage>
          -
          <lpage>874</lpage>
          . IEEE (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Ehrig</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Ontology Alignment: Bridging the Semantic Gap (Semantic Web and Beyond)</article-title>
          . Springer-Verlag New York, Inc., Secaucus, NJ, USA (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Munro</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Shared network resources and shared task properties</article-title>
          .
          <source>In: Proceedings of the Eighteenth Annual Conference of the Cognitive Science Society</source>
          (
          <year>1996</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Munro</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bao</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>A connectionist implementation of identical elements</article-title>
          .
          <source>In: Proceedings of the Twenty-Seventh Annual Conference of the Cognitive Science Society. Lawrence Erlbaum: Mahwah, NJ</source>
          (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Munro</surname>
            ,
            <given-names>P.W.</given-names>
          </string-name>
          :
          <article-title>Learning structurally analogous tasks</article-title>
          . In: Kurková, V.,
          <string-name>
            <surname>Neruda</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <source>Koutník, J. (eds.) Artificial Neural Networks - ICANN</source>
          <year>2008</year>
          ,
          <source>18th International Conference. Lecture Notes in Computer Science</source>
          , vol.
          <volume>5164</volume>
          , pp.
          <fpage>406</fpage>
          -
          <lpage>412</lpage>
          . Springer: Berlin/Heidelberg (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>