<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Ontology Mapping Neural Network: An Approach to Learning and Inferring Correspondences among Ontologies</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Yefei Peng</string-name>
          <email>yefeip@google.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Paul Munro</string-name>
          <email>pmunro@pitt.edu</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ming Mao</string-name>
          <email>ming.mao@sap.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>SAP Labs</institution>
          ,
          <addr-line>Palo Alto CA 94304</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Pittsburgh</institution>
          ,
          <addr-line>Pittsburgh PA 15206</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The Ontology Mapping Neural Network (OMNN) extends the ability of the Identical Elements Neural Network (IENN) and its variants [1-4] to represent and map complex relationships. The network can learn high-level features common to different tasks and use them to infer correspondences between the tasks. The learning dynamics of simultaneous (interlaced) training of similar tasks interact at the shared connections of the networks. The output of one network in response to a stimulus to another network can be interpreted as an analogical mapping. In a similar fashion, the networks can be explicitly trained to map specific items in one domain to specific items in another domain. A more detailed version is published in the main conference proceedings [5].</p>
        <p>The network architecture is shown in Figure 1. Ain and Bin are input subvectors for nodes from ontology A and ontology B, respectively; they share one representation layer ABr. RAin represents relationships from graph A and RBin represents relationships from graph B; they share one representation layer Rr. In this network, each to-be-mapped node in a graph is represented by a single active unit in the input layers (Ain, Bin) and output layers (Aout, Bout). In the relationship input layers (RAin, RBin), each relationship is likewise represented by a single active unit. The network in Figure 1 comprises the following sub-networks:</p>
        <list list-type="order">
          <list-item><p>NetAAA: {Ain-ABr-XAB; RAin-RRA-XR}-H1-W-H2-VA-Aout</p></list-item>
          <list-item><p>NetBBB: {Bin-ABr-XAB; RBin-RRB-XR}-H1-W-H2-VB-Bout</p></list-item>
          <list-item><p>NetAAB: {Ain-ABr-XAB; RAin-RRA-XR}-H1-W-H2-VB-Bout</p></list-item>
          <list-item><p>NetBBA: {Bin-ABr-XAB; RBin-RRB-XR}-H1-W-H2-VA-Aout</p></list-item>
        </list>
        <p>Selected OAEI benchmark tests are used to evaluate the OMNN approach. A Wilcoxon test is performed to compare OMNN with the 12 other systems that participated in OAEI 2009 on precision, recall and F-measure. The result is shown in Figure 1: green means OMNN is significantly better than the system; red means the system is significantly better than OMNN; yellow means there is no significant difference. Significance is defined as p-value &lt; 0.05.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Results</title>
      <p>OMNN has a better F-measure than 9 of the 12 systems, and its recall is significantly
better than 10 of the systems. Note that p-value &lt; 0.05 indicates only that
there is a significant difference between the two systems compared; the detailed data
are then examined to reveal which one is better than the other.</p>
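      The weight-sharing scheme behind the sub-networks can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: the layer sizes, the tanh activation, and the collapsing of the intermediate layers (XAB, XR, H1, H2) into a single shared trunk are all assumptions made for brevity. The point is that the two ontologies share the representation layer (ABr) and the trunk (W), while only the readout weights (VA, VB) are output-specific, so routing an ontology-A input through the B readout (NetAAB) yields the analogical mapping.

      ```python
      # Hedged sketch of OMNN-style weight sharing; sizes and activation are
      # assumptions, and intermediate layers are collapsed into one trunk.
      import numpy as np

      rng = np.random.default_rng(0)

      def one_hot(i, n):
          v = np.zeros(n)
          v[i] = 1.0
          return v

      n_nodes, n_repr, n_hidden = 4, 3, 5

      # Shared weights: both ontologies project into one representation layer
      # (ABr) and one hidden trunk (W); only VA/VB are task-specific readouts.
      ABr = rng.normal(scale=0.1, size=(n_nodes, n_repr))    # Ain/Bin -> ABr
      W   = rng.normal(scale=0.1, size=(n_repr, n_hidden))   # shared trunk
      VA  = rng.normal(scale=0.1, size=(n_hidden, n_nodes))  # trunk -> Aout
      VB  = rng.normal(scale=0.1, size=(n_hidden, n_nodes))  # trunk -> Bout

      def forward(x, V):
          h = np.tanh(x @ ABr @ W)  # shared representation and trunk
          return h @ V              # task-specific readout

      # NetAAA: A input with A readout; NetAAB: the same A input with B readout.
      x = one_hot(0, n_nodes)       # one active unit per to-be-mapped node
      a_out = forward(x, VA)        # auto-association within ontology A
      b_out = forward(x, VB)        # analogical mapping A -> B via shared layers
      ```

      After interlaced training of NetAAA and NetBBB, the shared layers carry the common structure, so the untrained route NetAAB produces the inferred correspondence.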
    </sec>
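    The evaluation procedure, a pairwise Wilcoxon signed-rank comparison of OMNN against each other system over the selected benchmark tests, can be illustrated with a small stand-alone sketch. The F-measure values below are invented for illustration and are not the OAEI 2009 results; a real analysis would also convert the rank sums to a p-value (e.g. via scipy.stats.wilcoxon) before applying the 0.05 threshold.

    ```python
    # Hedged illustration of the Wilcoxon signed-rank statistic; the scores
    # below are hypothetical, not the reported OAEI 2009 measurements.

    def wilcoxon_signed_rank(xs, ys):
        """Return (W+, W-): rank sums of positive and negative differences."""
        diffs = [x - y for x, y in zip(xs, ys) if x != y]  # drop zero diffs
        diffs.sort(key=abs)
        # Assign average ranks to ties in |diff| (ranks start at 1).
        ranks = []
        i = 0
        while i < len(diffs):
            j = i
            while j < len(diffs) and abs(diffs[j]) == abs(diffs[i]):
                j += 1
            avg = (i + 1 + j) / 2.0          # mean of ranks i+1 .. j
            ranks.extend([avg] * (j - i))
            i = j
        w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
        w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
        return w_plus, w_minus

    omnn  = [0.90, 0.85, 0.92, 0.78, 0.88]   # hypothetical per-test F-measures
    other = [0.80, 0.83, 0.85, 0.80, 0.70]   # hypothetical competing system
    wp, wm = wilcoxon_signed_rank(omnn, other)
    # A strongly one-sided split of the rank sums (here wp=13.5, wm=1.5)
    # is what drives a small p-value in the paired comparison.
    ```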
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
          <mixed-citation>1. <string-name><surname>Munro</surname>, <given-names>P.</given-names></string-name>: <article-title>Shared network resources and shared task properties</article-title>. <source>In: Proceedings of the Eighteenth Annual Conference of the Cognitive Science Society</source> (<year>1996</year>)</mixed-citation>
      </ref>
      <ref id="ref2">
          <mixed-citation>2. <string-name><surname>Munro</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Bao</surname>, <given-names>J.</given-names></string-name>: <article-title>A connectionist implementation of identical elements</article-title>. <source>In: Proceedings of the Twenty-Seventh Annual Conference of the Cognitive Science Society. Lawrence Erlbaum: Mahwah NJ</source> (<year>2005</year>)</mixed-citation>
      </ref>
      <ref id="ref3">
          <mixed-citation>3. <string-name><surname>Munro</surname>, <given-names>P.W.</given-names></string-name>: <article-title>Learning structurally analogous tasks</article-title>. In: Kůrková, V., <string-name><surname>Neruda</surname>, <given-names>R.</given-names></string-name>, <source>Koutník, J. (eds.) Artificial Neural Networks - ICANN 2008, 18th International Conference. Lecture Notes in Computer Science</source>, vol. <volume>5164</volume>, pp. <fpage>406</fpage>-<lpage>412</lpage>. Springer: Berlin/Heidelberg (<year>2008</year>)</mixed-citation>
      </ref>
      <ref id="ref4">
          <mixed-citation>4. <string-name><surname>Munro</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Peng</surname>, <given-names>Y.</given-names></string-name>: <article-title>Analogical learning and inference in overlapping networks</article-title>. In: <string-name><surname>Kokinov</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Holyoak</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Gentner</surname>, <given-names>D.</given-names></string-name> (eds.) New Frontiers in Analogy Research. New Bulgarian University Press (<year>2009</year>)</mixed-citation>
      </ref>
      <ref id="ref5">
          <mixed-citation>5. <string-name><surname>Peng</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Munro</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Mao</surname>, <given-names>M.</given-names></string-name>: <article-title>Ontology mapping neural network: An approach to learning and inferring correspondences among ontologies</article-title>. <source>In: Proceedings of the 9th International Semantic Web Conference</source> (<year>2010</year>)</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>