<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jun Wang</string-name>
          <email>junwang4@uiuc.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Les Gasser</string-name>
          <email>gasser@uiuc.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Graduate School of Library and Information Science University of Illinois at Urbana-Champaign 501 E. Daniel St.</institution>
          ,
          <addr-line>Champaign, IL 61820</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>One critical foundation for reliable collective coordinated action in multi-agent systems is the ability to exchange representational objects that can be locally interpreted in ways that make collective sense. Coherent local interpretation of shared representations is needed to create global semantic coherence for the distributed actions of individual agents. The process of autonomously establishing a collective semantics and organization of the concepts in an open domain is not well understood. This paper discusses our motivations and approach to this problem. It presents a specific formalization and algorithm that we call mutual online ontology alignment with its rationale, and illustrates the performance of the approach with some simulation experiments.</p>
      </abstract>
      <kwd-group>
        <kwd>ontology evolution</kwd>
        <kwd>ontology alignment</kwd>
        <kwd>colearning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. INTRODUCTION</title>
      <p>“We must be systematic, but we should keep our
systems open.” (Alfred North Whitehead, Modes
of Thought).</p>
      <p>In multi-agent systems research and practice, reliable
collective coordinated action rests on two foundations:
F1. Distributed reasoning algorithms whose critical
property is their ability to manage the control uncertainty
inherent in distributed settings.</p>
      <p>F2. Representational objects that can be exchanged and
still be locally interpreted in ways that make collective sense,
yielding semantic coherence to the actions of distributed
algorithms.</p>
      <p>
        This presents a kind of chicken-and-egg problem: distributed reasoning algorithms depend on collectively sensible objects for coherent behavior, yet the collective sensibility of objects arises through distributed reasoning algorithms. So our central questions are these:
Q1. How can we break into this cycle and create algorithms to bootstrap and adapt new collective object semantics over time?
Q2. How can we stabilize collective semantics in dynamic settings so that collective behavior is possible?
In short, we need distributed methods that manage both control uncertainty and semantic uncertainty in dynamic settings, and that reduce, or at least stabilize, both uncertainties over time. Two points of interactional and temporal space interest us: the long-term, large-scale dynamics of collective concepts and their overall structure, and the short-term adjustment, adaptation, and innovation processes in which situation-specific concepts and interpretations emerge quickly within a context of the more stable, long-term conceptual architecture. We are led to view collective concept management as a multi-tiered dynamic system built from constrained, regionalized (i.e., not globally random) interactions that may have local dynamism while exhibiting a degree of global stability. In our research as a whole, we approach these two levels with two different methodologies: algorithms for rapid multi-agent mutual concept design and alignment for the local, short-term adaptation, and a &#8220;population dynamics of concepts&#8221; for understanding, predicting, and stabilizing the longer-term structural properties of conceptual spaces. In this paper we treat only the design of specific algorithms for the short-term, context-sensitive mutual adaptation of ontologies by a collection of autonomous agents. Our work on the issues of stability and long-term structure of collective ontologies is treated elsewhere.
Agents cooperating in a multi-agent setting need a shared ontology [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. In practice, however, there is often more than one established ontology in a given domain. Since these existing ontologies are designed by different people or organizations, ontology alignment (by which we mean a process often approached through ontology mapping and/or merging) is needed for agents to cooperate in multi-agent systems. Recently there has been increasing interest in ontology alignment [
        <xref ref-type="bibr" rid="ref1 ref2 ref4 ref5 ref6">1, 2, 4, 5, 6</xref>
        ].
However, existing work on ontology alignment does not fully accommodate the requirements of open multi-agent systems. Open multi-agent systems must adapt rapidly to new categories, conceptualizations, and specifications of domains; they cannot be defined or aligned once and for all. Moreover, autonomous multi-agent systems must refine and align their own ontologies collectively, not through the external human intervention of standards committees and designers. To create multi-agent systems that are both adaptive and open, agents must collectively design common ontologies actively in an online fashion.
      </p>
      <p>To this end, we propose a mutual ontology adaptation framework. The framework assumes that each agent has its own ontology, which is not open to other agents or to the outside. An agent learns about the ontologies of other agents through the exchange of information. In this paper, we define an ontology as an extensional categorization of the domain of interest, and we define the information that agents exchange as an instance or object of an ontological category. We describe this approach formally in the next section. The intuition behind this assumption is that each person in human society has their own ontology or conceptualization of the world, and one way we learn about the ontologies of others is through the exchange of (or reference to) instances, such as concrete examples, rather than the exchange of abstract concepts.</p>
      <p>Based on the above framework, we design an ontology
alignment game in which agents with different ontologies
mutually adapt to each other’s ontology through interaction. We
also conduct an experimental simulation of the game which
shows how agents can converge to a shared ontology, using
a mutual ontology update algorithm proposed in this paper.</p>
    </sec>
    <sec id="sec-2">
      <title>2. ONTOLOGY ALIGNMENT PROBLEM: FORMAL DESCRIPTION</title>
      <p>
        An ontology is a system of categories for a domain [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Let there be n agents {A1, · · · , An} and a domain O with m instances or objects, O = {o1, · · · , om}. Each agent Ai has a categorization Ci of the domain O. A categorization C is a partition of the domain O: C = c1 ∪ c2 ∪ · · · ∪ ck, where each ci ⊆ O is called a category. For example, let a domain O have 10 instances: O = {0, 1, ..., 9}. A categorization can be C = {0, 1, 2, 3} ∪ {4, 5, 6, 7} ∪ {8, 9}, where c1 = {0, 1, 2, 3}, c2 = {4, 5, 6, 7}, and c3 = {8, 9} are three categories.
      </p>
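      <p>To make the extensional definition concrete, the categorization-as-partition idea can be sketched in Python (the function name is_partition and the representation of categories as Python sets are illustrative conventions, not from the paper):</p>

```python
# A categorization C is modeled as a list of categories (sets) that
# must be pairwise disjoint and jointly cover the domain O.
def is_partition(categorization, domain):
    """Return True if the categories form a partition of the domain."""
    seen = set()
    for category in categorization:
        if seen & category:   # categories overlap: not a partition
            return False
        seen |= category
    return seen == domain

O = set(range(10))                        # O = {0, 1, ..., 9}
C = [{0, 1, 2, 3}, {4, 5, 6, 7}, {8, 9}]  # the example categorization
print(is_partition(C, O))                 # True
print(is_partition([{0, 1}, {1, 2}], {0, 1, 2}))  # False: 1 appears twice
```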
      <p>A single category is a concept in an agent&#8217;s ontology. Note that in this definition, a category is an extensional representation of a concept; most existing work on ontologies assumes that a concept is represented by an intensional description.</p>
      <p>Given the above definition, we can identify two different
ontology alignment problems: off-line ontology alignment
and online ontology alignment. Most existing work belongs
to off-line alignment, in which the entire specification for the
problem exists before the alignment is done.</p>
      <p>The task of off-line alignment can be defined as follows: find a categorization C∗ such that the total mis-alignment, the sum of M(C∗, Ci) over all agents i = 1, ..., n, is minimal. The mis-alignment function M can be defined according to the needs of the application. Off-line alignment requires that an agent know the ontologies of the other agents.</p>
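      <p>Since the paper leaves M open to the application, here is a sketch of off-line alignment under one illustrative choice of M: the number of object pairs that two categorizations treat differently (grouped together in one, apart in the other). A brute-force search over all partitions of a tiny domain then finds a C∗ minimizing the total mis-alignment; the function names and this specific M are assumptions for illustration only:</p>

```python
from itertools import combinations

def same_cat(C, a, b):
    """True if objects a and b fall in the same category of C."""
    return any(a in c and b in c for c in C)

def M(C1, C2, domain):
    """Illustrative mis-alignment: count of object pairs that the two
    categorizations treat differently (together vs. apart)."""
    return sum(1 for a, b in combinations(sorted(domain), 2)
               if same_cat(C1, a, b) != same_cat(C2, a, b))

def partitions(items):
    """Enumerate all set partitions of a small list of items."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [part[i] | {first}] + part[i + 1:]
        yield part + [{first}]

domain = {0, 1, 2, 3}
agents = [[{0, 1}, {2, 3}], [{0, 1, 2}, {3}]]   # two agents' categorizations
best = min(partitions(sorted(domain)),
           key=lambda C: sum(M(C, Ci, domain) for Ci in agents))
print(best)
```

Here the agents disagree on three object pairs, so no C∗ can achieve a total mis-alignment below 3; the search returns one of the partitions attaining that bound.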
      <p>In this paper, we are concerned with the online ontology alignment problem, which is defined as follows: design a mechanism or find a solution such that ∑i,j M(Ci(t+1), Cj(t+1)) &lt; ∑i,j M(Ci(t), Cj(t)), where the sums run over all agent pairs i, j = 1, ..., n and t denotes the time step. In online mode, we do not care about finding a globally optimal categorization such as C∗ in off-line mode: because the alignment process in this setting is dynamic, the result adapts as the situation changes. If we can work out a mechanism that satisfies the above inequality, all agents will eventually converge to a shared ontology.</p>
    </sec>
    <sec id="sec-4">
      <title>3. MUTUAL ONTOLOGY ADAPTATION FRAMEWORK</title>
      <p>[Figure 1: The mutual online ontology adaptation framework. Each agent produces instances from its ontology, exchanges them with other agents over a communication channel, interprets received instances, and adapts its ontology.]</p>
      <p>To approach the online ontology alignment problem, we introduce a mutual online ontology adaptation framework (see figure 1). A mutual online ontology adaptation system consists of a set of agents and a communication channel used by the agents to exchange information. In this paper we suppose the communication channel is perfect, without noise. For each agent, the framework includes five elements: ontology, instance, instance production, instance interpretation, and ontology adaptation (or online ontology alignment).</p>
      <sec id="sec-4-1">
        <title>Ontology</title>
        <p>A partition of the domain O (see the previous section).</p>
      </sec>
      <sec id="sec-4-2">
        <title>Instance</title>
        <p>An element o in the domain O: o ∈ O.</p>
      </sec>
      <sec id="sec-4-3">
        <title>Production</title>
        <p>When an agent plans to express some concept or category to other agents, it can use an instance belonging to that category to represent the concept. (Indeed, unless a set of agents already has a compatible and verified shared ontology, it is difficult to see how they could specify categories to each other in any other way.) There may exist many instances that can be used to represent the concept, and different instances may have different priorities for representing the same concept.</p>
      </sec>
      <sec id="sec-4-4">
        <title>Interpretation</title>
        <p>Given an instance, find the category of this instance.</p>
      </sec>
      <sec id="sec-4-5">
        <title>Adaptation</title>
        <p>When an agent receives an instance from another agent, it may adapt its ontology so that its ontology becomes more compatible with the ontology of the sender. We give a specification of some adaptation rules in the next section.</p>
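        <p>The production and interpretation elements can be sketched as follows; this is a minimal illustration that assumes a uniform random choice among candidate instances (the paper notes that instances could instead carry different priorities):</p>

```python
import random

def produce(C, category_index, rng=random):
    """Production: pick an instance to stand for a category.
    Instances are chosen uniformly here; a priority scheme could
    replace rng.choice."""
    return rng.choice(sorted(C[category_index]))

def interpret(C, instance):
    """Interpretation: locate the category containing the instance."""
    for i, category in enumerate(C):
        if instance in category:
            return i
    raise ValueError(f"{instance!r} is not in the domain")

C = [{0, 1, 2, 3}, {4, 5, 6, 7}, {8, 9}]
o = produce(C, 1)
print(o in C[1])            # True: the produced instance belongs to the category
print(interpret(C, o))      # 1
```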
      </sec>
    </sec>
    <sec id="sec-5">
      <title>4. ONTOLOGY ALIGNMENT GAME</title>
      <p>We model multi-agent mutual ontology alignment as an ontology alignment game. The game is played repeatedly in a (finite or infinite) sequence of rounds. In each round, the following events happen:
1. Two agents are randomly chosen from the agent population. One is called the learner AL and the other the teacher AT. In fact, this terminology is not completely accurate, since both agents adapt their ontologies in the same round, learning from each other; however, there is a slight asymmetry because the teacher initiates the process. Denote by CT the teacher&#8217;s categorization or ontology, and by CL the learner&#8217;s.
2. The teacher randomly chooses a category c from its ontology CT, and then chooses an instance o ∈ c.
3. The learner receives the instance o and performs an interpretation, locating a category c′ ∈ CL such that o ∈ c′. From c′, the learner chooses a new object o′ different from the received one, and sends o′ back to the teacher to indicate that, to the learner&#8217;s knowledge, o and o′ belong to the same concept or category c′.
4. The teacher tells the learner whether the instance o′ actually does match the teacher&#8217;s category c or not. If it matches, we say that the learner has made a correct interpretation in this round.
5. After getting the feedback from the teacher, the learner takes certain actions (e.g., updating its ontology or doing nothing). The teacher also takes certain actions (e.g., updating its ontology or doing nothing) after sending the feedback to the learner. Below we detail these actions and the decisions that lead to them.
The goal of an agent in the learner&#8217;s role is to make as few mistakes as possible during the game. If possible, we would like all agents to converge to a shared ontology so that no interpretation mistakes are made in the population. One good measure of success is the probability that any agent makes no interpretation mistakes.</p>
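      <p>A single round of the game (steps 2-4 above) might be sketched like this; the function and variable names are hypothetical, and the edge case of a singleton learner category, which leaves no different o′ to send, is handled arbitrarily by echoing o:</p>

```python
import random

def play_round(CT, CL, rng=random):
    """One round between a teacher (ontology CT) and a learner (CL).
    Returns (o, o2, matched)."""
    # 2. Teacher picks a category c and an instance o from it.
    c = rng.choice(CT)
    o = rng.choice(sorted(c))
    # 3. Learner interprets o and answers with a different instance o2
    #    from the same learner-side category (if one exists).
    c_L = next(cat for cat in CL if o in cat)
    candidates = sorted(c_L - {o})
    o2 = rng.choice(candidates) if candidates else o
    # 4. Teacher checks whether o2 falls in its own category c.
    matched = o2 in c
    return o, o2, matched

CT = [{0, 1, 2, 3, 4}, {5, 6, 7, 8, 9}]
CL = [{0, 2, 4, 6, 8}, {1, 3, 5, 7, 9}]
o, o2, matched = play_round(CT, CL, random.Random(0))
print(o, o2, matched)
```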
    </sec>
    <sec id="sec-6">
      <title>4.1 Ontology update</title>
      <p>
        In our model, we assume that each agent keeps an
association matrix, which is used to capture relationships among
objects in the domain of interest. After each round of the
alignment game, the association matrix might be updated to
reflect the new relationships among objects. Given a current
version of an agent’s ontology and a newly updated
association matrix, a new version of ontology can be computed
using a variant of K-means — a classical partition-based
clustering method[
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <sec id="sec-6-1">
        <title>Association matrix update</title>
        <p>Denote by sij or s(oi, oj) the association strength of the instance pair (oi, oj). Let sT(o, o′) be the association strength between the two objects o and o′ on the teacher&#8217;s side, and sL(o, o′) the association strength between o and o′ on the learner&#8217;s side. If the learner&#8217;s response matches the teacher&#8217;s, the update is:</p>
        <p>sT(o, o′) ← sT(o, o′) + α, sL(o, o′) ← sL(o, o′) + α (1)</p>
        <p>where α (0 ≤ α &lt; 1) is called the match update rate. When α = 0, the update does nothing. The teacher and learner can take different α values; for simplicity, we assume they take the same value.</p>
        <p>If the learner&#8217;s response does not match the teacher&#8217;s, the update is:</p>
        <p>sT(o, o′) ← sT(o, o′) + β, sL(o, o′) ← sL(o, o′) − β (2)</p>
        <p>where β (0 ≤ β &lt; 1) is the mismatch update rate. Similarly, when β = 0, the update does nothing. The teacher and learner can take different β values; for simplicity, we assume they take the same value.</p>
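        <p>The two update rules can be sketched as a single function over dictionary-based association matrices; the symmetric pair key and the default strength of 0 are implementation assumptions, not from the paper:</p>

```python
def update(sT, sL, o, o2, matched, alpha=0.1, beta=0.1):
    """Joint update of the teacher's (sT) and learner's (sL)
    association matrices for the pair (o, o2).
    Match:    both strengths increase by alpha.
    Mismatch: teacher's increases by beta, learner's decreases by beta."""
    key = (min(o, o2), max(o, o2))     # symmetric pair key
    if matched:
        sT[key] = sT.get(key, 0.0) + alpha
        sL[key] = sL.get(key, 0.0) + alpha
    else:
        sT[key] = sT.get(key, 0.0) + beta
        sL[key] = sL.get(key, 0.0) - beta

sT, sL = {}, {}
update(sT, sL, 3, 5, matched=True)
update(sT, sL, 3, 5, matched=False)
print(sT[(3, 5)], sL[(3, 5)])         # 0.2 0.0
```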
        <p>Note that both learner and teacher will update their
matrices. The intuition behind the joint update is that neither
agent’s existing ontology is treated as the ideal reference. In
a sense, the learner has made a mistake because the teacher,
too, has made a mistake: if the teacher’s ontology had been
corrected with respect to the learner (i.e. identical to the
learner’s ontology), then the learner would not have made a
mistake. The collection of agents is collaboratively
designing an ontology that fits their mutual circumstances, rather
than moving toward a pre-ordained one.</p>
      </sec>
      <sec id="sec-6-2">
        <title>Ontology update</title>
        <p>Let an agent have a set of categories as its ontology, C = c1 ∪ · · · ∪ ck, and an association matrix S = (sij). Initially, sij is set to 1 if objects oi and oj belong to the same category, and to 0 otherwise. The categorization C is updated using a variant of K-means clustering. The basic idea is as follows. For an object o, compute the association strength between o and each category ci, defined as:</p>
        <p>AS(o, ci) = (1 / |ci|) ∑ s(o, o′), where the sum runs over o′ ∈ ci.</p>
        <p>Each object is then assigned to the category with which it has the maximal association strength. The (re)assignment of objects to categories may be repeated until some criterion is satisfied; in this paper, we perform the assignment only once.</p>
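        <p>This update step can be sketched as follows, with the initial matrix from the definition above and a single reassignment pass; the function names are illustrative, and empty categories are not handled in this sketch:</p>

```python
def init_matrix(C, domain):
    """Initial matrix: s_ij = 1 if oi and oj share a category, else 0."""
    return {(a, b): (1.0 if any(a in c and b in c for c in C) else 0.0)
            for a in sorted(domain) for b in sorted(domain) if a <= b}

def AS(S, o, category):
    """AS(o, c) = (1/|c|) * sum of s(o, o') over o' in c."""
    return sum(S.get((min(o, p), max(o, p)), 0.0) for p in category) / len(category)

def reassign(C, S, domain):
    """One K-means-style pass: move each object to the category with
    which it has maximal association strength."""
    new_C = [set() for _ in C]
    for o in sorted(domain):
        best = max(range(len(C)), key=lambda i: AS(S, o, C[i]))
        new_C[best].add(o)
    return new_C

domain = {0, 1, 2, 3, 4}
C = [{0, 1, 2}, {3, 4}]
S = init_matrix(C, domain)
print(reassign(C, S, domain) == C)   # True: the initial matrix is a fixed point
```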
      </sec>
    </sec>
    <sec id="sec-7">
      <title>5. SIMULATION</title>
      <p>We have constructed a simple simulation of this collective ontology alignment. To keep things simple, in this experiment we consider only categorizations with a fixed number of categories (in fact, K-means clustering requires the number of clusters to be fixed in advance). Figure 2 shows such a simulation of 2 agents in a domain containing 10 instances {0, 1, 2, · · · , 9} with 2 fixed categories. We set the two update rates to α = β = 0.1. Each generation contains 10 rounds of the ontology alignment game, one for each agent as teacher with a random learner (though the selection of agents does not make a large difference). The average accuracy is the ratio of the number of correct interpretations in a generation to the number of rounds in that generation (i.e., 10 in this case). As the figure shows, the collection of agents in this simulation converges to a perfect collective interpretation at about 31 generations.</p>
      <p>Table 1. Initial ontologies of the agents:
agent 1: {0, 1, 2, 3, 4}, {5, 6, 7, 8, 9}
agent 2: {0, 2, 4, 6, 8}, {1, 3, 5, 7, 9}
Shared ontology after evolution:
agent 1: {0, 2, 4}, {1, 3, 5, 6, 7, 8, 9}
agent 2: {0, 2, 4}, {1, 3, 5, 6, 7, 8, 9}</p>
      <p>Table 1 illustrates what the result of the evolution looks like: after convergence, the two agents share the same ontology. From the initial ontologies of the two agents, we can work out that there exist two optimal shared ontologies, i.e., ontologies with minimal mismatch against the initial ontologies of all agents: C = {0, 2, 4} ∪ {1, 3, 5, 6, 7, 8, 9} and C = {5, 7, 9} ∪ {0, 1, 2, 3, 4, 6, 8}. This simulation produced the first solution; with a different random seed, the second solution was also obtained in our experiments.</p>
    </sec>
    <sec id="sec-8">
      <title>6. DISCUSSION AND CONCLUSION</title>
      <p>We have given a sketch of a formalized approach to a dynamic process for mutual, continuous co-alignment (actually a kind of collective design) of a group-wide ontology. At present we have no formal characterization of the specific conditions for collective convergence of this procedure, nor a proof of convergence in all cases. However, our simulation work indicates that there is a wide range of conditions under which convergence does occur, and we believe a more rigorous result can be established. Further development of this idea will include improving our understanding of these convergence conditions, as well as expanding our treatment to cover more forms of ontological representation with greater structural complexity.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>R.</given-names>
            <surname>Agrawal</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Srikant</surname>
          </string-name>
          .
          <article-title>On integrating catalogs</article-title>
          .
          <source>In WWW2001</source>
          , Hong Kong, May
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Doan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Madhavan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Domingos</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Halevy</surname>
          </string-name>
          .
          <article-title>Learning to map between ontologies on the semantic web</article-title>
          .
          <source>In WWW2002</source>
          , Honolulu, Hawaii, May
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Jain</surname>
          </string-name>
          and
          <string-name>
            <given-names>R. C.</given-names>
            <surname>Dubes</surname>
          </string-name>
          .
          <article-title>Algorithms for Clustering Data</article-title>
          .
          <source>Prentice Hall</source>
          ,
          <year>1988</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M. S.</given-names>
            <surname>Lacher</surname>
          </string-name>
          and
          <string-name>
            <given-names>G.</given-names>
            <surname>Groh</surname>
          </string-name>
          .
          <article-title>Facilitating the exchange of explicit knowledge through ontology mappings</article-title>
          .
          <source>In The 14th International FLAIRS Conference</source>
          , Key West, FL,
          <year>2001</year>
          . AAAI Press.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Maedche</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Staab</surname>
          </string-name>
          .
          <article-title>Ontology learning for the semantic web</article-title>
          .
          <source>IEEE Intelligent Systems</source>
          ,
          <volume>16</volume>
          (
          <issue>2</issue>
          ):
          <fpage>72</fpage>
          -
          <lpage>79</lpage>
          , March/April 2001.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>N. F.</given-names>
            <surname>Noy</surname>
          </string-name>
          and
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Musen</surname>
          </string-name>
          .
          <article-title>Prompt: Algorithm and tool for automated ontology merging and alignment</article-title>
          .
          <source>In AAAI/IAAI</source>
          <year>2000</year>
          , pages
          <fpage>450</fpage>
          -
          <lpage>455</lpage>
          , Austin, TX,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J.</given-names>
            <surname>Sowa</surname>
          </string-name>
          .
          <source>Knowledge Representation: Logical, Philosophical, and Computational Foundations</source>
          . Brooks/Cole, Pacific Grove, CA,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>L.</given-names>
            <surname>Steels</surname>
          </string-name>
          .
          <article-title>The origins of ontologies and communication conventions in multi-agent systems</article-title>
          .
          <source>Autonomous Agents and Multi-Agent Systems</source>
          ,
          <volume>1</volume>
          (
          <issue>2</issue>
          ):
          <fpage>169</fpage>
          -
          <lpage>194</lpage>
          ,
          <year>October 1998</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>