<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Catalog Search Engine: Semantics applied to products search</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jacques-Albert De Blasio</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Takahiro Kawamura</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tetsuo Hasegawa</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Research and Development Center</institution>
          ,
          <addr-line>Toshiba Corp</addr-line>
        </aff>
      </contrib-group>
      <fpage>11</fpage>
      <lpage>20</lpage>
      <abstract>
        <p>The Semantic Web introduces the need for semantic search engines. In this paper, we explain our vision of a catalog search engine for semantically defined products. With our prototype, we address the problem of retrieving product information over the Internet and of enriching it semantically through the combined use of thesauruses and ontologies. We show how we automatically build a repository of instances of ontology classes, and how we dynamically prioritize the search variables of our engine. We then introduce our prototype, which, through the use of all these concepts, improves the user experience.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        The Semantic Web introduces the need for semantic search engines. Although
semantic search is already available in a variety of forms such as SHOE[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] or Ask
Jeeves[
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], semantic search for products sold on the Internet is rarely available.
With the system we developed, we strove to fill this gap.
      </p>
      <p>Product catalogs available on the Internet all have limitations of several
types. They either provide a wide range of products but have a search engine
with poor precision, or offer a limited range of products with a powerful but
overly specialized (in terms of search variables) search engine. Whichever the
catalog, users can easily get frustrated by its inability to supply precise
results and an extensive selection of products at the same time.</p>
      <p>One of the challenges of the Semantic Web is to transform the already
available information into more meaningful, more usable data. Much of the
available literature tackles this problem and agrees on a fundamental difficulty:
most of the problems come from the ambiguity of human language, and from the
fact that nearly all this information has been created by humans for human
consumption. Product information, on the other hand, has the advantage of being,
in most cases, based on an agreed vocabulary. However, the problem with products
sold on the Internet is that the quality of their descriptions (in terms of
availability), as well as the presentation of those descriptions (in structured
tables, simple paragraphs, etc.), varies greatly from one web site to another.
Nonetheless, considering that the Internet is the biggest product database
available, it should clearly be the source of any search engine aspiring to be
as complete and accurate as possible.</p>
      <p>Our vision of a catalog search engine includes three distinct goals. The
catalog must contain as many products as possible, its search engine must be as
accurate as possible, and the user must be given enough tools to search
efficiently through the catalog. In this paper, we show how we address those
three needs with the following strategies. The breadth of the catalog is ensured
by gathering existing product information all over the Internet through dedicated
parsers. The accuracy of the search engine is reached by a combination of
semantic enrichment of the previously fetched information and the automatic
conversion of every product characteristic into logic facts. Finally, the user's
search efficiency is enhanced by algorithms that dynamically compute the
usefulness of each product characteristic. Moreover, the use of a thesaurus
during the query phase frees the user from worrying about the exact wording of
his/her query.</p>
      <p>In the following sections, we first introduce the architecture of our system.
Then, we focus on its usage and explain the details of its features. We continue
with a short demonstration of our prototype and follow with a discussion of
decisions we took during the design phase. Finally, we take a brief look at
existing work tackling the problem of catalog search engines and conclude.</p>
    </sec>
    <sec id="sec-2">
      <title>Architecture of the Catalog Search Engine</title>
      <p>Overall Architecture</p>
      <p>The architecture of the catalog search engine is shown in Fig. 1. The system
we suggest is a complete solution, from the server fetching data from the
Internet to the client that lets the end-user carry out his/her requests. The
server is made of four main components: the web page fetcher, the parser, the
facts creator and the profiles creator. The client is separated into two main
components: a GUI with which the end-user communicates, and a proxy in charge
of the client-Matchmaker communication. In between the server and the client
lies the Matchmaker server, which provides the search capabilities.</p>
      <p>The main idea of this system consists of fetching information about products
sold on the Internet, and publishing this information on the Matchmaker server.
This information is enriched with semantics using ontologies. On the other side,
the end-user is able to express requests to the Matchmaker. The latter browses
through all the available advertisements, tries to find those most closely
related to the request, and eventually returns them to the user. We describe the
usage in the next section.</p>
      <p>
        At the heart of our prototype lies the Semantic Service Matchmaker, a service
search engine based on the LARKS[
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] algorithm. It adopts a filtering approach
which uses sophisticated information retrieval mechanisms and ontology-based
subsumption mechanisms to match requests against advertisements. This engine
has already proven to be efficient with regard to web service matchmaking[
        <xref ref-type="bibr" rid="ref4">4</xref>
        ][
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
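      <p>As an illustration of the type filter's subsumption test, the following toy sketch (our own illustration, not the Matchmaker's code; the class hierarchy is invented) decides whether one ontology class subsumes another by walking parent links upward.</p>

```python
class Ontology:
    """Toy class hierarchy; child -> parent links only."""
    def __init__(self):
        self.parent = {}

    def add_subclass(self, child, parent):
        self.parent[child] = parent

    def subsumes(self, general, specific):
        # 'general' subsumes 'specific' if it is 'specific' or an ancestor of it
        node = specific
        while node is not None:
            if node == general:
                return True
            node = self.parent.get(node)
        return False

onto = Ontology()
onto.add_subclass("hdd", "storage")
onto.add_subclass("hdd ide", "hdd")
onto.add_subclass("hdd scsi", "hdd")

# a request typed "hdd" matches advertisements typed "hdd ide" or "hdd scsi"
print(onto.subsumes("hdd", "hdd ide"))   # True
print(onto.subsumes("hdd ide", "hdd"))   # False
```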
      <p>Ideally, when the requester looks for a product, the Matchmaker will retrieve
a product that matches exactly the expected one. In practice, if the exact product
is not available, the Matchmaker will retrieve one whose capabilities are similar
to those expected by the requester. Ultimately, the matching process is the result
of the interaction between the products available and the requirements of the
requester.</p>
      <p>Server client
Web browser</p>
      <p>CPU
memory
hdd
…
Results</p>
      <p>Parameters
End-user</p>
      <p>User client Avail. items/charact.</p>
      <p>query
Web browser
s
u
r
u
a
s
e
h
T
/
y
g
o
lt
o
n
O</p>
      <sec id="sec-2-1">
        <title>Filters and Usage</title>
        <p>
          Although the Matchmaker originally provides a set of five filters, our prototype
uses only two of them. The type filter applies a set of subtype inference rules,
mainly based on a structural algorithm, to determine whether an ontology class
is a subsumption of another. The constraint filter has the responsibility of
verifying whether the subsumption relationship for each of the constraints is
logically valid. The Matchmaker computes the logical implication among
constraints by using polynomial subsumption checking for Horn clauses. More
details about the Matchmaker's filters are provided in [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ].
        </p>
        <p>Usage scenario</p>
        <p>
          Server side:
1. The server is initialized with a file containing information about the web
pages to be fetched and parsed. This file connects each type of product with a
list of web pages (e.g. http://somewhere/hddidetosell.html is associated with
the product type "HDD IDE").
2. Next, web pages are fetched from the selected web sites. Once done, a parser
detects relevant information in those web pages. If a new product is detected,
the server automatically creates a new instance of the ontology class which
describes the type of the product. If a new characteristic is detected, the
server updates the list of properties associated with each class of product.
3. Then, the server automatically creates a file containing a list of facts
written in RDF-RuleML[
          <xref ref-type="bibr" rid="ref8">8</xref>
          ][
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]. Each fact corresponds to a characteristic of a product (see the
section on product characteristics below).
4. Finally, the server creates an "advertisement" profile for each product. A
profile is the semantic description of a product (see the section on profiles
below). Once all the profiles for all the products have been created, they are
registered in the Matchmaker server.
        </p>
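      <p>Step 2 of the server side can be sketched as follows; this is an illustrative model of the repository update, with invented class and product names, not our actual server code.</p>

```python
class ProductClass:
    """One ontology class with its instances slot and property list."""
    def __init__(self, name):
        self.name = name
        self.instances = {}      # product name -> characteristics
        self.properties = set()  # every characteristic seen so far

    def record(self, product_name, characteristics):
        # new product -> new instance; new characteristic -> property list grows
        self.instances[product_name] = characteristics
        self.properties.update(characteristics)

hdd_ide = ProductClass("hdd ide")
hdd_ide.record("DiskA", {"capacity": 160, "cost": 20990})
hdd_ide.record("DiskB", {"capacity": 250, "seek time": 8.5})
print(sorted(hdd_ide.properties))  # ['capacity', 'cost', 'seek time']
```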
        <p>Client side:
1. First, the user inputs a query. This query is parsed and its content is
compared with the thesaurus' words, as well as with the names of the instances
of the product types (created at step 2 on the server side).
2. The answer to the query is either a list of product types best matching the
terms of the query, or a list of instances, or both. If the answer is a list of
instances, the user can click on one of them to display the details of the
chosen product. If the answer is a list of product types, the user can click on
one of them to display a list of characteristics of the chosen type. If the
user wants to carry out a finer search, he/she must input values for the
characteristics with which he/she wants the result to comply. When the user
eventually clicks on the "finer search" button, an RDF-RuleML file containing
those characteristics translated into facts is automatically created.
3. The system automatically creates a "request" profile. This profile is then
submitted to the Matchmaker, which tries to match this "request" to the
"advertisements" contained in its database. If one or more matching profiles
are found, they are sent back to the user, who sees them as individual
products. He/she can then click on one of the available links to the different
shops selling the product.</p>
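        <p>Client step 2 can be sketched as a simple dispatch from a query term to matching product types and instances; the substring matching below is a naive stand-in used purely for illustration, and the data is invented.</p>

```python
def answer_query(term, types, instances):
    # naive substring matching, for illustration only
    matched_types = [t for t in types if term in t]
    matched_instances = [i for i in instances if term.lower() in i.lower()]
    return matched_types, matched_instances

types = ["hdd", "hdd ide", "hdd scsi", "scanner"]
instances = ["ATHLON 64 2800 Socket754", "HDD Deskstar 160GB"]

matched_types, matched_instances = answer_query("hdd", types, instances)
print(matched_types)      # ['hdd', 'hdd ide', 'hdd scsi']
print(matched_instances)  # ['HDD Deskstar 160GB']
```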
        <p>Usage of thesauruses and ontologies</p>
        <p>
          Thesauruses are needed for any search engine whose search mechanism is not
exclusively based on the query's keywords. For the sake of simplicity, we built
our own thesaurus for our prototype. While the goal is, of course, to use a
rather complete thesaurus such as WordNet[
          <xref ref-type="bibr" rid="ref6">6</xref>
          ], we wanted our thesaurus to be
multilingual, which WordNet is not. In our thesaurus, n terms can be synonyms
of n other terms, and each term is translated into m languages. The language in
which the user chooses to interact with our prototype also sets the language of
the thesaurus. In the future, we want to improve this by letting the user type
the request in any language and letting the search engine browse through the
entire thesaurus, without any preference for the language.
        </p>
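        <p>The n-to-n, multilingual structure described above can be sketched as follows; the synonym data is invented for illustration, and the real thesaurus is, of course, richer.</p>

```python
class Thesaurus:
    """Synonym groups; each group maps a language code to a set of terms."""
    def __init__(self):
        self.groups = []

    def add_group(self, group):
        self.groups.append(group)

    def canonical(self, term, lang):
        # map a term in the user's language to the group's English terms
        for group in self.groups:
            if term in group.get(lang, set()):
                return sorted(group.get("en", set()))
        return []

th = Thesaurus()
th.add_group({"en": {"hdd", "hard disk"}, "ja": {"ハードディスク"}})
print(th.canonical("hard disk", "en"))       # ['hard disk', 'hdd']
print(th.canonical("ハードディスク", "ja"))  # ['hard disk', 'hdd']
```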
        <p>
          Our ontologies are written in OWL[
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]. An OWL class corresponds to a
product type whose characteristics are described using OWL properties. Each
property's range is either an object (e.g. the type of interface of a hard
disk) or a value (e.g. the capacity of a hard disk, in gigabytes). Fig. 2 shows
our internal representation of an ontology. The "datatype" attribute is needed
to make sure that the value of a product characteristic taken from a web site
(during the parsing) has the same type as the one expected.
        </p>
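        <p>The role of the "datatype" attribute can be illustrated with a small type-coercion check; the property-to-type table below is an invented example, not our ontology's actual content.</p>

```python
# invented property-to-datatype table, standing in for the "datatype" attribute
EXPECTED = {"capacity": float, "interface": str}

def accept(prop, raw_value):
    """Return the coerced value, or None if it has the wrong type."""
    try:
        return EXPECTED[prop](raw_value)
    except (TypeError, ValueError):
        return None

print(accept("capacity", "160"))   # 160.0
print(accept("capacity", "N/A"))   # None
```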
        <p>[Fig. 2. Internal representation of an ontology. Class (ex: HDD): name, URL, set of instances, set of properties. Property (ex: capacity): name, property (URL), datatype, object/value.]</p>
        <p>In our system, we have a general ontology composed of classes corresponding
to various types of products (e.g. "hdd ide", "scanner", etc.). We assume that
this ontology has been created prior to the launch of the system. When the
server of the system browses and parses the information found on various web
sites, it automatically fills in the instances slot of each class. For example,
all the products found to be of type "hdd ide" will automatically be added to
the instances slot of the product type (the class) "hdd ide". The growth of the
instances is non-monotonic. This makes sense in that products may disappear
when the sellers' catalogs are updated. In our prototype, we track each
product's availability, so that if none of the web sites from which we get
information offers a product anymore, its instance is automatically deleted.</p>
        <p>A problem we encountered was related to the names of the products. A
product has, in general, the same name wherever it is sold. Even so, some web
sites on the Internet mix the name and the characteristics of a product in the
same string. Because the instances we record are based on the name, we had to
ensure that, even if the name differs for the same product, no new instance is
created. To solve this problem, we first check the manufacturers of the
products. If they match, we use a parser which is able to correct suspicious
product names through string comparisons. Also, as some products may have
different names in different countries, we use a thesaurus of product names to
make sure that the products are the same.</p>
        <p>Characteristics of the products</p>
        <p>A product is distinguished by its characteristics. A hard disk has a certain
seek time, capacity, interface, etc. Translating product characteristics into
facts is quite natural, as each characteristic can be thought of as a truth
about the product it describes. The facts our system creates are all of the
form P(x, y) where P is a predicate, and x and y are two terms. The facts
written by the server use two kinds of predicates: "equal" when the term y is a
numerical value, and "is" when the term y is a string of characters. The facts
written by the client use the predicates "is less than or equal to" and "is
more than or equal to" when y is a numerical value, and "is" when y is a string
of characters. Each property of the product classes of our main ontology is
converted into an OWL class and used as the term x of the facts (for a
discussion about this conversion, refer to section 4). The facts concerning the
manufacturer, the seller and the price of a product are considered the minimum
information required for a product to be taken into account.</p>
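        <p>The choice of predicates can be sketched as follows; the tuple representation is a toy stand-in for the actual RDF-RuleML encoding, and the property names are illustrative.</p>

```python
def server_fact(prop, value):
    # server side: "equal" for numbers, "is" for strings
    pred = "equal" if isinstance(value, (int, float)) else "is"
    return (pred, prop, value)

def client_fact(prop, value, bound="min"):
    # client side: range predicates for numbers, "is" for strings
    if isinstance(value, (int, float)):
        pred = "is less than or equal to" if bound == "max" else "is more than or equal to"
    else:
        pred = "is"
    return (pred, prop, value)

print(server_fact("COST", 20990))                # ('equal', 'COST', 20990)
print(client_fact("COST", 25000, bound="max"))
print(client_fact("MANUFACTURER", "SomeMaker"))  # ('is', 'MANUFACTURER', 'SomeMaker')
```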
        <p>Table 1 shows an example of a product's characteristics converted into
facts on the server and the client side. On the server side, the characteristics
of a hard disk fetched from the Internet are converted into facts. On the client
side, characteristics whose values have been input by the user are also
converted into facts. "MANUFACTURER", "COST", "SELLER" and "CAPACITY" are OWL
properties converted into classes. The predicates are classes of an ontology
describing predicates. We use our own thesaurus to make the connections between
the words used in the description of a characteristic (e.g. "sold at") and the
corresponding ontology class (e.g. "COST"). See the profile code below for an
example of a fact.</p>
        <p>Once the client has transmitted the request profile (which contains links
to the facts created by the client) to the Matchmaker, the latter uses its
inference engine (called the constraint filter) to match the facts of the
advertisements against the facts of the request.</p>
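        <p>The decision the constraint filter must make can be illustrated by checking an advertisement's "equal" facts against a request's range facts; this toy checker ignores the Horn-clause subsumption machinery the Matchmaker actually uses, and the data is invented.</p>

```python
def satisfies(ad_facts, req_facts):
    """True if every request fact is met by the advertisement's facts."""
    ad = {x: y for (_pred, x, y) in ad_facts}
    for pred, x, y in req_facts:
        if x not in ad:
            return False
        if pred == "is less than or equal to" and ad[x] > y:
            return False
        if pred == "is more than or equal to" and y > ad[x]:
            return False
        if pred == "is" and ad[x] != y:
            return False
    return True

ad = [("equal", "COST", 20990), ("equal", "CAPACITY", 160)]
req = [("is less than or equal to", "COST", 25000),
       ("is more than or equal to", "CAPACITY", 120)]
print(satisfies(ad, req))  # True
```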
        <p>Profiles</p>
        <p>A profile is an OWL file containing a semantic description of a product, as
well as a list of links to each fact present in the facts files related to this
same product. For each shop selling the product, a profile is created (i.e. a
product sold by 10 shops will have 10 different profiles). The information
stored in those profiles is the ontology class of the product's type, the name
of the product (only if the profile is an advertisement), the list of facts and
the URL of the shop selling the product. Once advertisement profiles have been
registered with the Matchmaker, when a request profile is submitted the
Matchmaker performs matching using its type filter on the ontology class of the
product's type and its constraint filter on all the facts. The following code
shows an example of a profile and a fact.
&lt;product:description rdf:ID="Kakaku_CPU_Athlon_64_2800_Socket754_5"&gt;
&lt;product:name&gt;ATHLON 64 2800 Socket754_5&lt;/product:name&gt;
&lt;product:restrictedTo rdf:resource="http://somewhere/onto.owl#cpu" /&gt;
&lt;product:constraint rdf:resource="http://somewhere/facts.rdf#clockspeed" /&gt;
&lt;product:constraint rdf:resource="http://somewhere/facts.rdf#cost" /&gt;
&lt;product:constraint rdf:resource="http://somewhere/facts.rdf#manufacturer" /&gt;
...</p>
        <p>&lt;product:shopURL&gt; http://www.aShopURL.com/&lt;/product:shopURL&gt;
&lt;/product:description&gt;
&lt;ruleml:Fact ruleml:label="cost"&gt;
&lt;ruleml:head&gt;
&lt;ruleml:Atom ruleml:rel="http://somewhere/predicates.owl#numericallyEqual"&gt;
&lt;ruleml:args&gt;
&lt;rdf:Seq&gt;
&lt;rdf:li&gt;</p>
        <p>&lt;ruleml:Var ruleml:name="http://somewhere/store.owl#COST" /&gt;
&lt;/rdf:li&gt;
&lt;rdf:li&gt;</p>
        <p>&lt;ruleml:Ind ruleml:name="20990" /&gt;
&lt;/rdf:li&gt;
&lt;/rdf:Seq&gt;
&lt;/ruleml:args&gt;
&lt;/ruleml:Atom&gt;
&lt;/ruleml:head&gt;
&lt;/ruleml:Fact&gt;
</p>
        <p>Dynamic update and prioritization of the characteristics</p>
        <p>When the user searches for a given type of product, its related
characteristics are displayed. If the user wants to carry out a finer search,
he/she can enter values for the characteristics he/she wants to be respected.
The list of available characteristics is updated on the server side, when
fetching and parsing information from various web sites. However, the priority
in which those characteristics are shown to the user depends on each
characteristic's associated weight. The weights are updated as follows:
- the more often a characteristic is available for a given type of product, the
greater its weight will be,
- the more possible values a characteristic has, the greater its weight will be
(e.g. the size of a screen can be 15", 17", 19", 21", etc.),
- the more often a characteristic is chosen by the user, the greater its weight
will be.</p>
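        <p>The three heuristics above can be sketched with a simple additive score; the unweighted sum and the counts below are assumptions made purely for illustration, as the exact weighting formula is not given here.</p>

```python
def weight(availability, n_values, n_user_picks):
    # all three heuristics raise the weight; the additive form is an assumption
    return availability + n_values + n_user_picks

# (availability count, number of distinct values, times chosen by users)
chars = {"capacity": (90, 20, 15), "cache": (40, 5, 2), "seek time": (70, 12, 4)}
ranked = sorted(chars, key=lambda c: weight(*chars[c]), reverse=True)
print(ranked)  # ['capacity', 'seek time', 'cache']
```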
        <p>Other conditions also come into play to determine the position of a
characteristic. As product types are ontologically defined, we rely on the
parent-child relationships to tell whether a product characteristic should be
shown before another. For instance, as "computer" is the parent class of
"notebook computer", if the user searches for "notebook computers" its
characteristics should be displayed before those of "computer".</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>The Prototype</title>
      <p>
        We introduce here the prototype of the client application. Data from the Internet
has already been fetched from Kakaku[
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], a Japanese catalog web site, and
parsed on the server side.
      </p>
      <p>In Fig. 3(a), the user entered "hard disk" as a query. With the thesaurus,
the client found out that "hard disk" is equivalent to "hdd". Moreover, using
the ontology, the client proposes not only "hdd" but also "hdd ide" and "hdd
scsi", two subclasses of "hdd". In Fig. 3(b), the user selected "hdd ide". The
client now proposes a list of characteristics of the "hdd ide". The first three
are always present for each product. The next ones are the characteristics
available for "hdd ide", as well as for "hdd" and any other parent class of
"hdd", up to the root of the ontology. The user decided to input values for two
characteristics, the cost and the capacity. Once done, he/she gets the result
shown in Fig. 3(c). The values corresponding to the characteristics chosen in
the previous step are shown in bold. A link is provided to shops selling the
products.</p>
    </sec>
    <sec id="sec-4">
      <title>Discussion</title>
      <p>Our intention was not to produce a catalog search engine that is very
efficient in terms of speed, but rather in terms of relevance of the results.
In this regard, we reached the three goals cited in the introduction of this
paper. As our system gathers data from various web sites, we get more details
about the products than if we simply relied on specific vendors, and thus
ensure the breadth of the catalog. Accuracy of the search engine is provided by
the combined use of thesauruses and ontologies, allowing the system to return
very precise results even if the user's query is relatively vague. Finally,
search efficiency is attained by handling the user's feedback regarding the
characteristics of the products. As a consequence of all this, we observed that
a user needs about half as many clicks as usually needed when accessing the
same information about products on other web sites such as Kakaku.</p>
      <p>However, as our system is still at the prototype stage, it is not without
flaws. In fact, the parser approach to the problem of fetching information from
various web sites can prove to be quite weak in the long term. Much more
advanced techniques are required to fetch facts or rules from web pages such as
Amazon or Yahoo Shopping, as those web sites do not always display information
about products in a very formal way.</p>
      <p>The reader may wonder why we chose to create facts using RDF-RuleML to
describe the characteristics of each product, instead of directly using the
properties of each product class of our ontology. The reason is that we intend
to create a much more powerful search engine, which does more than offer the
possibility to enter a value for each characteristic. The goal is to use a
better Natural Language Processing tool during the parsing phase on the server
side, so that the system becomes able to create rules such as "if the credit
card is Visa or American Express, the customer can have a 5% discount". This
kind of rule cannot be expressed with OWL's classes and properties.</p>
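      <p>Such a discount rule could, for instance, be represented as a condition-conclusion pair over facts; the encoding below is a toy stand-in for the intended RDF-RuleML rules, with invented property names.</p>

```python
# hypothetical rule: any of the "if_any" facts triggers the conclusion
RULE = {
    "if_any": [("is", "CREDIT_CARD", "Visa"),
               ("is", "CREDIT_CARD", "American Express")],
    "then": ("equal", "DISCOUNT_PERCENT", 5),
}

def apply_rule(facts, rule):
    if any(cond in facts for cond in rule["if_any"]):
        return facts + [rule["then"]]
    return facts

facts = [("is", "CREDIT_CARD", "Visa"), ("equal", "COST", 20990)]
print(apply_rule(facts, RULE))
```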
      <p>
        Alternatively, we thought about expressing the facts and rules in SWRL[
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]
instead of RDF-RuleML. The advantage is that SWRL allows the use of properties
which have been created in the ontology, and thus avoids redundancy. However,
the terms of an atom in SWRL must be either variables, OWL individuals or OWL
data values. Unfortunately, as individuals lack any subsumption relationship,
the constraint filter of the Matchmaker would not work efficiently.
      </p>
      <p>
        Froogle, a twin of Google from a shopping search engine point of view,
offers a wide catalog and impressive speed, but allows search refinement only
through price ranges. Kakaku gives the possibility to search using the
products' characteristics, but the number of the latter is static. Both search
engines get the product information directly from the vendors. Although this
ensures accuracy, it greatly limits the number of sources of information.
Amazon is too restrictive in terms of products, as they propose only those
which they sell. To our knowledge, none of the web sites cited above makes use
of semantics.
      </p>
      <p>
        The IWebS project[
        <xref ref-type="bibr" rid="ref12">12</xref>
        ][
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] aims at creating an intelligent yellow pages service
with semantically annotated services. Although it shares some similarities with
our approach, it introduces the need for manual annotations, which would be
intolerable for a database of thousands of different products.
      </p>
      <p>
        Active Catalog[
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] focuses on how retrieved information can be used to
engineer parts and physical objects. Its database is entirely built beforehand,
that is, there is no dynamic data acquisition. The parts' characteristics are
also all predetermined. Finally, both its content and its usage make it usable
exclusively by engineers.
      </p>
    </sec>
    <sec id="sec-5">
      <title>Conclusion</title>
      <p>Based on the Matchmaker, we developed a prototype of a catalog search engine
which gives users more accurate results with regard to their queries. Search
parameters are dynamically updated through the analysis of fetched information
and feedback from the users. We showed that the approach of fetching available
product data from the Internet, adding semantics to it through the use of
ontologies, and efficiently searching through it is feasible. Using rules, our
system will be able to give more expressive power to users' queries.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>1. Simple HTML Ontology Extensions (SHOE), http://www.cs.umd.edu/projects/plus/SHOE/</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>2. Ask Jeeves, http://www.ask.com/.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>K.</given-names>
            <surname>Sycara</surname>
          </string-name>
          , S. Widoff,
          <string-name>
            <given-names>M.</given-names>
            <surname>Klusch</surname>
          </string-name>
          , J. Lu, \LARKS:
          <article-title>Dynamic Matchmaking Among Heterogeneous Software Agents in Cyberspace"</article-title>
          ,
          <source>In Autonomous Agents and MultiAgent Systems</source>
          , Vol.
          <volume>5</volume>
          , pp.
          <fpage>173</fpage>
          -
          <lpage>203</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>M.</given-names>
            <surname>Paolucci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Kawamura</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. R.</given-names>
            <surname>Payne</surname>
          </string-name>
          , K. Sycara, \
          <article-title>Semantic Matching of Web Services Capabilities"</article-title>
          ,
          <source>Proceedings of First International Semantic Web Conference (ISWC</source>
          <year>2002</year>
          ), IEEE, pp.
          <fpage>333</fpage>
          -
          <lpage>347</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>T.</given-names>
            <surname>Kawamura</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Blasio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Hasegawa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Paolucci</surname>
          </string-name>
          , K. Sycara, \
          <article-title>Public Deployment of Semantic Service Matchmaker with UDDI Business Registry"</article-title>
          ,
          <source>Proceedings of 3rd International Semantic Web Conference (ISWC</source>
          <year>2004</year>
          ),
          <year>2004</year>
          . to appear.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>6. WordNet, http://www.cogsci.princeton.edu/ wn/</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>7. Web Ontology Language (OWL), http://www.w3.org/TR/owl-ref/.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>8. Resource Description Framework, http://www.w3.org/RDF/.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>9. RuleML, http://www.ruleml.org/.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Kakaku</surname>
          </string-name>
          , http://www.kakaku.com.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Semantic Web Rule Language</surname>
          </string-name>
          , http://www.w3.org/Submission/SWRL/.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>12. M. Laukkanen, K. Viljanen, M. Apiola, P. Lindgren, and E. Hyvonen, "Towards Ontology-Based Yellow Page Services", In Proceedings of the WWW 2004 Workshop on Application Design, Development and Implementation Issues in the Semantic Web, New York, USA, May 18th 2004.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>13. E. Hyvonen, K. Viljanen, A. Hatinen, "Yellow Pages on the Semantic Web. Towards the Semantic Web and Web Services", In Proceedings of XML Finland 2002 Conference, 2002.</mixed-citation>
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>14. S.R. Ling, J. Kim, P. Will, and P. Luo, "Active Catalog: Searching and Using Catalog Information in Internet-Based Design", Proceedings of DETC '97 - 1997 ASME Design Engineering Technical Conferences, Sacramento, California, September 14-17, 1997.</mixed-citation>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>