=Paper= {{Paper |id=Vol-197/paper-4 |storemode=property |title=Applying Inference Engine to Context-Aware Computing Services |pdfUrl=https://ceur-ws.org/Vol-197/Paper4.pdf |volume=Vol-197 }} ==Applying Inference Engine to Context-Aware Computing Services== https://ceur-ws.org/Vol-197/Paper4.pdf
     ubiPCMM06:2nd International Workshop on Personalized Context Modeling and Management for UbiComp Applications




Applying Inference Engine to Context-Aware Computing Services

Jaemoon Sim, Jihoon Kim, Ohbyung Kwon
College of Management and International Relations, KyungHee University
Yongin, Kyunggi-do, 449-701, South Korea
+82-31-201-2306
{deskmoon, hdlamb, obkwon}@khu.ac.kr

Sean S. Lee, Jungho Kim, HK Jang, Myungchul Lee
IBM Ubiquitous Computing Lab
Dogok-dong, Kangnam-gu, Seoul, 135-700, Republic of Korea
+82-2-3781-8598
{lsean, kjungho, hkjang, mclee}@kr.ibm.com



ABSTRACT
Ubiquitous computing services have started taking advantage of the reasoning capabilities of inference engines to acquire hidden and potentially useful contextual information. However, performance evaluations of these inference engines have been limited to the domain of static information reasoning; evaluations of the requirements pertaining to ubiquitous computing environments have been largely neglected. This paper examines how different types of inference engines perform when applied to realistic ubiquitous computing scenarios. Based on the scenarios, three measurement criteria are proposed and measured: scalability as the data set gets large, responsiveness to users' requests, and adaptability to frequent inference requests.

Keywords
Inference Engine, OWL-DL, scalability, MINERVA, DLDB-OWL

1. INTRODUCTION
Ubiquitous computing services aim to provide information and services in more intelligent ways, with more seamless interfaces, so that users are conveniently served anytime, anywhere, on any device, without awkward user intervention. These services must rely not only on multiple sensors capturing the user's context but also on reasoning capabilities for processing raw sensed context data into more useful and meaningful information within the user's environment. Recently, inference engines such as Jena, RacerPro, FaCT++, and Minerva have been proposed as a core component of intelligent ubiquitous computing systems. There have been many extensive evaluations of their reasoning capabilities in correctness, completeness, and response time [10, 11]. It has been far less analyzed, however, how well the inference engines fare in full-fledged ubiquitous computing services covering vast zones with various sets of requirements, such as many fast-moving users and other transient computing entities. The main purpose of this paper is to examine how well inference engines satisfy the responsiveness requirement, the scalability requirement, and the requirement to accommodate frequent inference requests in response to the dynamic data insertion and deletion that realistic ubiquitous computing environments exhibit. To do so, we have modeled scenarios set in a major Korean university, such as the MyEntrance service. The MyEntrance service scenario is part of the larger Celadon project [8], which aims to study and adopt the most suitable reasoner for its ubiquitous environment. Specifically, the five most prominent engines are considered based on their reasoning mechanisms: three memory-based and two DBMS-based engines.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

2. INFERENCE ENGINES
MINERVA, JENA, Pellet, RacerPro, and DLDB-OWL (HAWK) [1, 2, 3, 4, 5, 6, 7] are discussed as representative instances of each class of reasoners; they are summarized in Table 1.

2.1 MINERVA
Minerva is a high-performance OWL storage, inference, and query system built on an RDBMS. Its advantages fall into two important aspects of a reasoner: response time and scalability. It performs all required inferences at data-load time instead of at query time, making it more responsive at the time of a user query. In addition, it calculates all inferences within a relational database management system, making it more scalable than its memory-based counterparts. It is provided as a component of IBM's Integrated Ontology Development Toolkit (IODT) [4], and it supports DLP (Description Logic Programs), a subset of OWL DL, and conjunctive queries, a subset of the SPARQL language. Minerva uses a Description Logic reasoner for TBox inference and a set of logic rules translated from Description Logic Programs (DLP) for ABox inference.
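The load-time materialization strategy that makes Minerva responsive can be sketched in a few lines. This is a minimal illustration of the design choice, not Minerva's actual implementation; the rule and all names (Room101, StudentUnion, locatedIn) are hypothetical.

```python
# A minimal sketch of load-time materialization, the design choice described
# above for Minerva. Everything here (the rule, Room101, StudentUnion) is a
# hypothetical illustration, not Minerva's actual DLP rule set.

class LoadTimeReasoner:
    """Derives all rule consequences when facts are loaded, so that a
    query is a plain lookup with no reasoning left to do."""

    def __init__(self, rules):
        self.rules = rules  # each rule maps one fact to a derived fact (or None)
        self.facts = set()

    def load(self, fact):
        # Inference happens here, at load time: naive forward chaining
        # until no rule produces a new fact.
        self.facts.add(fact)
        changed = True
        while changed:
            changed = False
            for rule in self.rules:
                for f in list(self.facts):
                    derived = rule(f)
                    if derived is not None and derived not in self.facts:
                        self.facts.add(derived)
                        changed = True

    def query(self, fact):
        # Queries are simple membership tests, which is why load-time
        # reasoners answer quickly even under many user queries.
        return fact in self.facts

def room_to_building(fact):
    # Toy rule: anyone located in Room101 is also located in the Student Union.
    s, p, o = fact
    if p == "locatedIn" and o == "Room101":
        return (s, "locatedIn", "StudentUnion")
    return None

r = LoadTimeReasoner([room_to_building])
r.load(("alice", "locatedIn", "Room101"))
print(r.query(("alice", "locatedIn", "StudentUnion")))  # True
```

The trade-off, as the dynamic-context results later show, is that every context update re-triggers this load-time work.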
                                                                               2.2 DLDB-OWL/HAWK
UbiPCMM06 September 18, 2006, California, USA
Table 1. Comparative table of the inference engines

| Engine   | Import Ontology Language | Inference Capability | Internal Engine | External Engine                         | Persistent Storage                               | Memory | Query Language                         | Version                 | Source                      |
|----------|--------------------------|----------------------|-----------------|-----------------------------------------|--------------------------------------------------|--------|----------------------------------------|-------------------------|-----------------------------|
| Minerva  | OWL                      | DL                   | O               | Racer, Pellet                           | DB2, Derby, HSQLDB                               | X      | SPARQL                                 | Semantics Toolkit 1.1.1 | IBM                         |
| DLDB-OWL | OWL                      | DL                   | X               | Racer                                   | MS-Access, PostgreSQL                            | O      | KIF-like language                      | HAWK 1.3 beta           | Lehigh University SWAT Lab  |
| Pellet   | DIG tell document, OWL   | DL                   | O               | X                                       | X                                                | O      | DIG ask document, RDQL                 | pellet-1.3              | mindswap                    |
| Racer    | DIG tell document, OWL   | DL                   | O               | O                                       | X                                                | O      | DIG ask document, Racer Query Language | RacerPro 1.9.0          | Racer Systems GmbH & Co. KG |
| JENA     | OWL                      | DL                   | O               | RacerPro, Pellet, FaCT++ (JenaOntModel) | MySQL, HSQLDB, PostgreSQL, Oracle, MS-SQL Server | O      | SPARQL                                 | Jena 2.4                | HP Labs                     |
DLDB-OWL is a repository framework and toolkit that supports OWL. It provides APIs as well as implementations for parsing, editing, manipulating and preserving OWL ontologies. The architecture of DLDB-OWL consists of three packages: core, owl and storage. The core package defines the generic interfaces of the data structures of ontologies and ontology objects, e.g. classes and properties. The core package, which is independent of the underlying model the application will use, provides an API for constructing and manipulating ontology models. The owl package provides the utilities for parsing and serializing ontologies in the OWL language.

2.3 Pellet
Pellet is an open-source Java-based OWL DL reasoner. It can be used in conjunction with both the Jena and OWL API libraries and also provides a DIG interface. The Pellet API provides functionality to perform species validation, check the consistency of ontologies, classify the taxonomy, check entailments and answer a subset of RDQL queries. It supports the full expressivity of OWL DL, including reasoning about nominals (enumerated classes).

2.4 RacerPro
RacerPro is an OWL reasoner and inference server for the Semantic Web. The origins of RacerPro lie in the area of description logics. It can be used as a system for managing Semantic Web ontologies based on OWL. However, RacerPro can also be seen as a Semantic Web information repository with an optimized retrieval engine, because it can handle large sets of data descriptions.

2.5 JENA
Jena is a Java framework for building Semantic Web applications. It provides a programmatic environment for RDF, RDFS, OWL and SPARQL, and includes a rule-based inference engine. The Jena framework includes an RDF API; facilities for reading and writing RDF in RDF/XML, N3 and N-Triples; an OWL API; and a SPARQL query engine.

3. PERFORMANCE EVALUATION

3.1 MyEntrance Service Scenario
To analyze how the legacy inference engines perform in realistic ubiquitous computing environments, we used a MyEntrance service scenario modeled on Kyunghee University (KHU). KHU has two large campuses: the Seoul campus with 50 departments and the YongIn campus with 51 departments.

"When a member of KHU enters the Student Union Building, a service agent recognizes the member's preferences and searches for an Event on the Application Server to provide to him/her. Additionally, a sports equipment shop located in the Student Union Building wants to advertise its sales promotion on new baseball products to KHU students and faculty members. The Service Agent of the sports equipment shop in KHU requests the Application Server to retrieve information on the hobbies of members located in the Student Union Building. The Application Server hands over the Service Agent's request to the Context Server, asking about the hobby information of members who are located in the Student Union Building. The Context Server returns the result of the preference and product matching inference. Consequently, the Service Agent checks for the members who like to play baseball and sends the Discount Event Information to the appropriate members. The members who receive the event information make a purchase decision with the received offer."
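The matching step in the scenario above — find members located in a zone whose hobbies match the promoted product — can be sketched as a simple filter. The member records and field names here are hypothetical illustrations, not taken from the KHU ontology.

```python
# A minimal sketch of the MyEntrance matching step described above.
# The member records and field names are hypothetical, not from the KHU ontology.

def find_promotion_targets(members, zone, hobby):
    """Return the names of members located in `zone` whose hobbies include
    `hobby` — the preference-and-product matching the Context Server performs."""
    return [m["name"]
            for m in members
            if m["locatedIn"] == zone and hobby in m["hobbies"]]

members = [
    {"name": "alice", "locatedIn": "StudentUnion", "hobbies": {"baseball", "chess"}},
    {"name": "bob",   "locatedIn": "Library",      "hobbies": {"baseball"}},
    {"name": "carol", "locatedIn": "StudentUnion", "hobbies": {"tennis"}},
]

# Only members in the Student Union Building who like baseball get the event.
print(find_promotion_targets(members, "StudentUnion", "baseball"))  # ['alice']
```

In the actual system this filter is not hand-coded: it is posed to the inference engine as the SPARQL and KIF-like queries shown in Section 3.2.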
To accommodate this scenario, we have developed a KHU campus ontology based on the ontology generation program supported by IBM China Lab [9]. On top of the generated ontology, we have installed area-based location context. An experimental data set representing an actual college (9 departments, file size 11.8MB), the International College of KHU at Suwon, was built and used in the performance evaluation.

[Figure 1. Simulation Environment — components: Client; Context event handler (context ontology writer); Context server (OWL DL ontology); Application server (context ontology reader, inference engine)]

3.2 Results: Performance Evaluation on Static and Dynamic Context Information
For the performance evaluation on static information, we placed our focus on scalability and the resulting performance issues. Specifically, in evaluating the performance of query processing, we considered 16 sets of University Ontology Benchmark [9] queries that were generated by IBM China Research Lab by extending the widely used Lehigh University Benchmark [10].
In order to evaluate the handling of context information, SPARQL is used as follows:

SELECT DISTINCT ?person ?zone ?Hobby1 ?Hobby2 ?Hobby3
WHERE
(?person benchmark:locatedIn <http://rcubs.kyunghee.ac.kr/owl/rcubs-univ-bench-dl.owl#Zone1>)
(?person benchmark:locatedIn ?zone)
(?person benchmark:like ?Hobby1)

For the DLDB-OWL/HAWK evaluation, the above query is translated into a KIF-like query as follows:

[http://rcubs.kyunghee.ac.kr/owl/univ-bench-dl.owl]
(type Person ?x)
(locatedIn ?x http://rcubs.kyunghee.ac.kr/owl/rcubs-univ-bench-dl.owl#Zone1)
(locatedIn ?x ?z)
(like ?x ?y1)

We select query response time as the performance measure. The summary of response times is listed in Table 2.

Table 2. Summary of query response time

| Query # | MINERVA (ms) | DLDB-OWL (ms) | Pellet (ms) |
|---------|--------------|---------------|-------------|
| 1       | 424.90       | 10.00         | 5858.00     |
| 2       | 312.80       | 12.85         | 268.70      |
| 3       | 284.30       | 241.85        | 115.50      |
| 4       | 382.80       | 221.57        | 89.00       |
| 5       | 358.00       | 2.85          | 31.20       |
| 6       | 343.70       | 5.71          | 42.20       |
| 7       | 483.00       | 5.71          | 40.50       |
| 8       | 743.60       | 5.71          | 1813723.00  |
| 9       | 843.90       | 160.14        | 295.40      |
| 10      | 857.90       | 25.71         | 7.80        |
| 11      | –            | 81.71         | 17.40       |
| 12      | 1176.60      | 43.00         | 9.20        |
| 13      | 906.40       | 2.85          | 1199564.80  |
| 14      | 1076.60      | 173.00        | 467724.80   |
| 15      | 978.10       | 158.71        | 208571.80   |
| 16      | 1186.00      | 291.71        | 51.50       |
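The correspondence between the two query forms above is largely syntactic: a SPARQL-style triple pattern (?s prefix:pred ?o) becomes the KIF-like (pred ?s ?o). A minimal sketch of that mapping follows; it illustrates the correspondence only and is not the translator the benchmark tooling actually uses.

```python
# Sketch of the syntactic mapping between the two query forms above:
# a SPARQL-style pattern (?s prefix:pred ?o) becomes a KIF-like (pred ?s ?o).
# This illustrates the correspondence; it is not HAWK's actual translator.

import re

def sparql_pattern_to_kif(pattern):
    """Translate '(?person benchmark:locatedIn ?zone)' -> '(locatedIn ?person ?zone)'."""
    m = re.match(r"\(\s*(\?\w+)\s+\w+:(\w+)\s+(\S+?)\s*\)", pattern)
    if not m:
        raise ValueError(f"unrecognized pattern: {pattern}")
    subj, pred, obj = m.groups()
    return f"({pred} {subj} {obj})"

print(sparql_pattern_to_kif("(?person benchmark:locatedIn ?zone)"))
# (locatedIn ?person ?zone)
print(sparql_pattern_to_kif("(?person benchmark:like ?Hobby1)"))
# (like ?person ?Hobby1)
```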
We set up our scenario environment of the campus with the topology shown in Figure 1. For context generation, the user's current location is gathered by the context event handler on the client's portable device or tags whenever the user passes by sensors located at the entrance gates of the campus buildings. At a fixed cycle, the context event handler sends the user's current context information in OWL-DL format to the context server. The context server returns the result to the context event handler if the ontology files come through successfully. The user may invoke the application server to get a location-based service at any time. The application server then asks the context server for service-related context data as facts. According to the facts, the inference engine determines the location-based services most appropriate for the user.

Performance evaluations in the case of dynamic context were also performed. The dynamic context evaluations differed from the static context evaluation in that the engines were constantly requested to make inferences over newly inserted data, whereas the static evaluations were performed after the inference processing had finished. For this test, since the memory-based reasoning systems cannot be directly compared to the DBMS-based reasoning systems in their absolute loading time, we tested the two database-based reasoning systems: MINERVA and DLDB-OWL/HAWK. Only the test with data set 1 is described in this paper, because we encountered a consistent problem with the dynamic data set and did not see the value in presenting more data in this study. The result is listed in Table 3.
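The interplay between the context-update cycle and the query cycle in this dynamic setting can be sketched with a toy simulation: a query that arrives while the engine is still reloading and re-inferring the latest context update gets no response. The 100-second reload window and the 300-second phase offset below are made-up figures for illustration only, not measurements from our experiments.

```python
# A toy simulation of the dynamic experiment described above: context updates
# arrive every `update_cycle` seconds and queries every `query_cycle` seconds.
# The 100-second load-and-inference window and the 300-second phase offset are
# made-up figures for illustration only.

def no_response_rate(update_cycle, query_cycle, load_time, horizon, phase=300):
    """Fraction of queries that arrive while the engine is still busy
    reloading and re-inferring the latest context update."""
    busy = [(t, t + load_time) for t in range(0, horizon, update_cycle)]
    queries = list(range(phase, horizon, query_cycle))
    unanswered = sum(1 for q in queries if any(s <= q < e for s, e in busy))
    return unanswered / len(queries)

# The shorter the query cycle relative to the reload window, the larger the
# share of queries that catch the engine mid-inference and go unanswered.
print(no_response_rate(update_cycle=600, query_cycle=10, load_time=100, horizon=6000))
print(no_response_rate(update_cycle=600, query_cycle=600, load_time=100, horizon=6000))
```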
Table 3. Summary of load time in dynamic situation

| Cycle time of updating context data | Cycle time of query processing | Minerva (%) | DLDB-OWL (%) |
|-------------------------------------|--------------------------------|--------------------|--------------------|
| 1200 sec | 600 sec | 33.3 / 0.0 / 66.7  | 99.8 / 0.2 / 0.0  |
| 1200 sec | 60 sec  | 5.0 / 6.7 / 88.3   | 89.5 / 10.5 / 0.0 |
| 1200 sec | 10 sec  | 0.6 / 3.6 / 95.8   | 91.3 / 8.7 / 0.0  |
| 900 sec  | 600 sec | 33.3 / 16.7 / 50.0 | 93.3 / 6.7 / 0.0  |
| 900 sec  | 60 sec  | 5.0 / 6.7 / 88.3   | 84.0 / 16.0 / 0.0 |
| 900 sec  | 10 sec  | 0.8 / 6.7 / 92.5   | 89.1 / 10.9 / 0.0 |
| 600 sec  | 600 sec | 50.0 / 0.0 / 50.0  | 90.0 / 10.0 / 0.0 |
| 600 sec  | 60 sec  | 6.7 / 5.0 / 88.3   | 71.0 / 29.0 / 0.0 |
| 600 sec  | 10 sec  | 0.8 / 6.7 / 92.5   | 85.7 / 14.3 / 0.0 |

In each cell of Table 3, the first, second and third numbers denote the rates of yielding correct answers, wrong answers, and no response to the aforementioned query, respectively.

4. CONCLUSION
We first studied the responsiveness to user requests using both memory-based and DBMS-based inference engines in a static context setting. The results indicated that the database-based inference engines far outperform the others on this criterion, owing to the fact that the database-based inference engines pre-process context data while loading and are thus able to respond without further calculation at query time.

Then, using the DBMS-based inference engines, we analyzed how they perform in conducting a context-aware MyEntrance service under the setting and environment of a major university. By varying the cycles of context changes and knowledge loading, performance measures such as correctness and completeness were examined. We noticed that the engines do not respond to queries in the middle of their inferences and sometimes generate invalid or no responses.

We conclude that current state-of-the-art inference engines do not fulfill the responsiveness, scalability, and high update-frequency requirements demanded by ubiquitous computing environments with many fast-moving users and other transient computing entities. Evolutionary algorithms or inference mechanisms that can deal with enormous amounts of data and frequent updates, and respond to users' needs in time, are needed for the inference engines to be improved.

5. ACKNOWLEDGEMENT
Thanks to the Institute of Information Technology Assessment and the Ministry of Information and Communication of the Republic of Korea for providing an opportunity to participate in the IT839 project. And thanks to the IBM China Research Laboratory for providing detailed information about the Integrated Ontology Development Toolkit available on IBM alphaWorks.

REFERENCES
[1] Carroll, J.J., Dickinson, I., Dollin, C., Reynolds, D., Seaborne, A., Wilkinson, K.: Jena: Implementing the Semantic Web Recommendations. Proceedings of the 13th International World Wide Web Conference. ACM Press, New York (2004) 74-83
[2] Guo, Y., Pan, Z., Heflin, J.: An Evaluation of Knowledge Base Systems for Large OWL Datasets. Proceedings of the 3rd International Semantic Web Conference, Hiroshima. LNCS, Vol. 3298 (2004) 274-288
[3] Haarslev, V., Möller, R., Wessel, M.: Querying the Semantic Web with Racer + nRQL. In: Proc. of the KI-04 Workshop on Applications of Description Logics (2004)
[4] IBM's IODT/Minerva team: Minerva Reasoner. See http://www.alphaworks.ibm.com/tech/semanticstk or http://www.ifcomputer.com/MINERVA/
[5] Wilkinson, K., Sayers, C., Kuno, H.: Efficient RDF Storage and Retrieval in Jena2. Proceedings of the First International Workshop on Semantic Web and Databases (2003) 131-151
[6] Sirin, E., Parsia, B.: Pellet: An OWL DL Reasoner. Proceedings of the Third International Semantic Web Conference (ISWC 2004), Poster (2004)
[7] Wessel, M., Möller, R.: A High Performance Semantic Web Query Answering Engine. International Workshop on Description Logics (DL2005), Edinburgh, Scotland, UK (2005)
[8] Lee, M.C., Jang, H.K., Paik, Y.S., Jin, S.E., Lee, S.: A Ubiquitous Device Collaboration Infrastructure: Celadon. Third Workshop on Software Technologies for Future Embedded & Ubiquitous Systems (SEUS 2006)
[9] Ma, L., Yang, Y., Qiu, Z., Xie, G., Pan, Y., Liu, S.: Towards a Complete OWL Ontology Benchmark. 3rd European Semantic Web Conference (ESWC06) (2006)
[10] Guo, Y., Pan, Z., Heflin, J.: LUBM: A Benchmark for OWL Knowledge Base Systems. Journal of Web Semantics 3(2) (2005) 158-182
[11] Liebig, T., Pfeifer, H., von Henke, F.: Reasoning Services for an OWL Authoring Tool: An Experience Report. In: Proceedings of the 2004 International