    OntoVal: A Tool for Ontology Evaluation by
               Domain Specialists

 Caio Viktor S. Avila1 , Gilvan Maia1 , Wellington Franco1 , Tulio Vidal Rolim1 ,
                 Artur O. R. Franco1 , and Vania M.P. Vidal1

      Department of Computing, Federal University of Ceará, Campus do Pici,
                             Fortaleza-CE, Brazil
                           caioviktor@alu.ufc.br


       Abstract. We present OntoVal, a portable and domain-independent
       web tool for the evaluation of OWL ontologies by non-technical domain
       specialists. OntoVal presents the ontology in a textual way, making it
       readable for users with little to no knowledge about ontologies. In ad-
       dition, OntoVal features a form engine which allows users to give feed-
       back and evaluate the correctness of the artifact being developed. The
       evaluation data is automatically aggregated and processed in order to
       present a detailed report on the results of the evaluation.

       Keywords: Ontology engineering · Ontology evaluation · Semantic Web
       · Linked Data.


1    Introduction
An ontology is a formal, explicit specification of a shared conceptualization [5],
which can be employed as a conceptual model for representing knowledge about
a domain. Ontologies also formalize and share the understanding of concepts
and of how these concepts relate to each other. Since ontologies play a key role
in many applications and domains, it is of paramount importance that the un-
derlying development process of a domain ontology adopts a mechanism for en-
suring that an accurate representation of that domain is obtained. A validation
step regarding accuracy, comprehensiveness, and technical correctness is thus
usually carried out by ontology experts [1]. However, the opinions of domain
specialists are the central feedback guiding the construction of proper ontologies.
    Ontologies are inherently complex models; hence, evaluating them requires
an equally involved process. For example, numerous metrics are applicable to
ontology evaluation: accuracy, completeness, conciseness, adaptability, clarity,
computational efficiency, and consistency [3]. As each of these metrics reflects
a different aspect of an ontology, an extensive evaluation rapidly becomes a
time-consuming, challenging task. Consequently, adequate tools supporting the
evaluation process by domain experts represent a significant contribution to-
wards the development of high-quality ontologies.
    Thus, in this work we present OntoVal, a domain-independent and portable
web tool for the evaluation of OWL ontologies by non-technical domain spe-
cialists. OntoVal presents the ontology to the user in a textual way. In addition,
OntoVal has an integrated form engine allowing the user to provide feedback and
evaluate the correctness of the artifact being developed. In the end, OntoVal au-
tomatically aggregates and processes the data, presenting a detailed report on
the results of the evaluation to the ontology engineer.
    The remainder of this paper is organized as follows: Section 2 presents the
main related work; Section 3 details OntoVal's design and implementation; Sec-
tion 4 demonstrates how OntoVal's interface is used in an actual evaluation;
Section 5 presents the evaluation of OntoVal itself; and Section 6 contains con-
cluding remarks and future work directions.


2     Related Work

Existing tools such as Protégé (https://protege.stanford.edu/) and WebVOWL
(http://vowl.visualdataweb.org/webvowl.html) could be used as support during
the evaluation by experts. Protégé is an extremely popular open-source editor
and framework for building ontologies and smart systems, and is probably the
most widespread tool for this purpose. Protégé allows users to explore, edit,
and perform detailed analyses over ontologies. However, it is a tool designed to
aid ontology developers during the development process, so it demands previous
technical background on technologies and standards such as RDF
(https://www.w3.org/TR/rdf-primer/) and OWL (https://www.w3.org/TR/owl-ref/),
plus concepts from logic.
    WebVOWL, in turn, is a web tool for interactive ontology visualization. It
helps lay users understand an ontology's structure by means of an intuitive
visual representation. However, the user experience and usability of this tool
can be impaired when dealing with large or complex ontologies, since the visual
models generated in such cases are often cluttered and confusing for lay users.
    In [6], Tan et al. propose a verbalization tool and carry out an ontology
evaluation with non-technical specialists, comparing the results obtained with
Protégé against those obtained with their verbalization tool. Tan et al. found
that adopting the verbalization tool led to a less time-consuming evaluation
process. Moreover, they also observed that users provided overall higher grades
for the ontology, which may indicate that the participants could not correctly
understand the ontology when using Protégé.
    A key limitation arising from the adoption of the aforementioned tools is
that they lack integrated evaluation mechanisms. Consequently, evaluation is
performed in two or more steps, since this scenario requires developer-made
forms to collect user feedback separately. This approach tends to turn the evalu-
ation into a mostly manual, time-consuming, and error-prone process, because
the aggregation and computation of results lack automation. Moreover, from
the users' perspective, switching back and forth between the forms and the
ontology tool can be a nuisance.

3     OntoVal

Developers provide their ontology as the input for OntoVal
(https://github.com/CaioViktor/ontoval). The tool was designed to handle any
domain, so it is portable across virtually any project. Moreover, when available,
a visual model representing the ontology can also be displayed as a supporting
aid for the domain experts. OntoVal starts an evaluation by collecting informa-
tion about the participant: name (optional); age; domain experience level, rang-
ing from 0 to 10; and ontology experience level, also ranging from 0 to 10.
OntoVal divides the ontology evaluation process into three stages: (1) class
evaluation; (2) property evaluation; and (3) overall evaluation. An example
evaluation can be seen in Figure 1, where some parts of the text are purposely
omitted.
    In the first stage, the following information is presented to the user for each
class in the ontology being evaluated: URI; known names; description; list of
superclasses; and lists of owl:ObjectProperty and owl:DatatypeProperty terms.
Additionally, the system presents the user with an evaluation form for that
class, consisting of simple "yes/no" questions. The form collects the user's agree-
ment regarding: the appropriateness of the assigned URI; the assigned names;
the description; each superclass; each owl:ObjectProperty; and each
owl:DatatypeProperty.
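    As a rough illustration, the per-class details above could be gathered from
an OWL file with Python's rdflib, as sketched below. The helper name and the
use of rdfs:domain to attach properties to classes are our assumptions, not
necessarily OntoVal's actual implementation.

    # Sketch (assumption): collecting the per-class information shown in the
    # first evaluation stage; OntoVal's actual extraction logic may differ.
    from rdflib import Graph, RDF, RDFS, OWL

    def class_summary(graph, cls):
        obj_props, data_props = [], []
        # Simplification: associate a property with a class via rdfs:domain.
        for prop in graph.subjects(RDFS.domain, cls):
            if (prop, RDF.type, OWL.ObjectProperty) in graph:
                obj_props.append(prop)
            elif (prop, RDF.type, OWL.DatatypeProperty) in graph:
                data_props.append(prop)
        return {
            "uri": str(cls),
            "names": [str(o) for o in graph.objects(cls, RDFS.label)],
            "description": [str(o) for o in graph.objects(cls, RDFS.comment)],
            "superclasses": list(graph.objects(cls, RDFS.subClassOf)),
            "object_properties": obj_props,
            "datatype_properties": data_props,
        }

    g = Graph()
    g.parse("ontology.owl")  # any RDF serialization rdflib understands
    summaries = [class_summary(g, c) for c in g.subjects(RDF.type, OWL.Class)]
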
    Each property of the ontology is analyzed during the second evaluation
stage. The following information is shown to users for each property: URI;
known names; description; its type (owl:ObjectProperty or owl:DatatypeProp-
erty); the list of classes containing the property; the list of super-properties;
and the list of classes in the property's range. The questionnaire in this stage
collects the user's agreement regarding: the suitability of the URI; the assigned
property names; the property description; its type; each super-property; and
each element of its range.
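    Analogously, the per-property details could be collected as follows (again a
sketch under the same assumptions, with illustrative names):

    # Sketch (assumption): collecting the per-property information shown in
    # the second evaluation stage.
    from rdflib import Graph, RDF, RDFS, OWL

    def property_summary(graph, prop):
        is_object = (prop, RDF.type, OWL.ObjectProperty) in graph
        return {
            "uri": str(prop),
            "names": [str(o) for o in graph.objects(prop, RDFS.label)],
            "description": [str(o) for o in graph.objects(prop, RDFS.comment)],
            "type": "owl:ObjectProperty" if is_object
                    else "owl:DatatypeProperty",
            "classes": list(graph.objects(prop, RDFS.domain)),
            "super_properties": list(graph.objects(prop, RDFS.subPropertyOf)),
            "range": list(graph.objects(prop, RDFS.range)),
        }
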
    In the third and last stage, OntoVal evaluates general but important criteria
about the ontology: agreement on the ontology's name; agreement on its de-
scription; agreement on how well the ontology represents the domain; agreement
on the comprehensiveness of its classes; agreement on the comprehensiveness
of its properties; and agreement on the way the concepts presented in the on-
tology relate to one another.
    Moreover, for each evaluated term (i.e., class or property), OntoVal also
allows users to provide textual feedback on their answers. This information is
of utmost importance for ontology developers, since it sheds light on the spe-
cialists' own understanding of the given domain. We consider this aspect crucial
for the effective improvement of the ontology under development.
    Finally, OntoVal automatically aggregates and computes the evaluations to
be presented to the developer in a separate web page. For simplicity, each ques-
tion corresponds to one point, and the final grade for a term is the fraction of
positive answers over the number of questions presented to users. The resulting
statistics and metrics are divided into four areas: (1) summary; (2) classes; (3)
properties; and (4) overall. An example of the statistics visualization web page
can be found in Figure 2.
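    A minimal sketch of this grading scheme and of the summary statistics
reported in the result tables (the function names are illustrative assumptions):

    # Sketch (assumption): per-term grade as the fraction of positive answers,
    # plus the mean/max/min/standard deviation shown in the report tables.
    from statistics import mean, stdev

    def term_grade(answers):
        # answers: one boolean per "yes/no" question shown to the user
        return sum(answers) / len(answers)

    def summarize(values):
        return {
            "mean": mean(values),
            "max": max(values),
            "min": min(values),
            "stdev": stdev(values) if len(values) > 1 else 0.0,
        }

    # E.g., two participants answering three questions about the same term:
    grades = [term_grade([True, True, False]), term_grade([True, True, True])]
    print(summarize(grades))  # mean ~0.83, max 1.0, min ~0.67, stdev ~0.24
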
    The first area displays general results, using a table to list the mean, max-
imum, minimum, and standard deviation of the following attributes: age, do-
main experience level, ontology experience level, mean approval of classes, mean
approval of properties, and elapsed time. On top of that, this area also contains
charts showing the grade distribution by the users' level of experience with the
domain and with ontologies, plus the frequency distribution of the assigned
grades.
    The second area shows, for each class, a table containing the median, max-
imum, minimum, and standard deviation values for the following aspects of the
evaluation: general approval; superclass approval; and approval of Datatype-
Properties and ObjectProperties. The third area shows, for each property, a
table containing the mean, maximum, minimum, and standard deviation grades
for the following aspects: general approval; super-property approval; and range
approval. The fourth area presents a table containing the mean, maximum, min-
imum, and standard deviation grade values for each question of the evaluation
form. Moreover, in both the second and third areas, the developer can drill
down into more detailed statistics for each question and into the comments
provided by participants.


4     Demonstration
Figure 1 depicts the web page visualized by the user, which is divided into three
areas: (1) evaluation information; (2) evaluation; and (3) glossary. Area (1) is
composed of: (4) the ontology's description; (5) the evaluation's progress; (6) a
button to display the ontology model; (7) a link to resume the evaluation; and
(8) a display of keyboard shortcuts. Area (2) is where the actual evaluation is
carried out, and it is composed of: (9) the verbalization of the term being evalu-
ated, i.e., a class or property as described in Section 3; and (10) the form used to
collect the users' evaluation. Finally, Area (3) exhibits information about extra
resources presented in Area (2), i.e., additional classes and properties repre-
sented by their respective URIs. This area is divided into: (11) information
about classes; and (12) information about properties. For more information,
please refer to the video demonstration (https://youtu.be/5Yfi-crl5Ak).


5     Tool Evaluation
OntoVal is under development and has already been evaluated in the context
of a real project regarding the development of a sophisticated ontology for the
domain of computer games [4, 2], with 8 reviewers, of which 6 are domain
specialists and 2 are ontology experts.




    Fig. 1. Evaluation demonstration
    Fig. 2. Statistics demonstration


The participants were invited to offer their opinion about the evaluation tool
and process. The tool proved clear after only a minimal explanation, and most
of the few usability problems pointed out by users have since been addressed.
One simple feature users missed was viewing a previously given answer: when
returning to an answered question, the page did not load correctly.


6    Conclusions
OntoVal automates most evaluation tasks and presents the ontology in a read-
able, textual way to domain experts, who are usually lay users regarding on-
tologies, so collaborators can focus their attention on the evaluation aspects of
the specific domain. OntoVal was preliminarily evaluated within an actual on-
tology development project in the field of computer games, with the participa-
tion of both domain and ontology experts. Users pointed out the ease of use of
the tool, indicating possible improvements for better usability as future work.


References
1. Denaux, R., et al.: Supporting domain experts to construct conceptual ontologies: A
   holistic approach. Web Semantics: Science, Services and Agents on the World Wide
   Web 9(2), 113–127 (2011)
2. Franco, A.O.R., Rolim, T.V., Santos, A.M.M., Silva, J.W.F., Vidal, V.M.P., Gomes,
   F.A.C., Castro, M.F., Maia, J.G.R.: An ontology for role playing games. In: Pro-
   ceedings of SBGames 2018. pp. 615–618. SBC (2018)
3. Raad, J., Cruz, C.: A survey on ontology evaluation methods. In: Proceedings of
   the International Conference on Knowledge Engineering and Ontology Development
   (2015)
4. da Rocha Franco, A.d.O., da Silva, J.W.F., Pinheiro, V.C.M., Maia, J.G.R., de Car-
   valho Gomes, F.A., de Castro, M.F.: Analyzing actions in play-by-forum RPG. In:
   International Conference on Computational Processing of the Portuguese Language.
   pp. 180–190. Springer (2018)
5. Studer, R., Benjamins, V.R., Fensel, D.: Knowledge engineering: principles and
   methods. Data & Knowledge Engineering 25(1-2), 161–197 (1998)
6. Tan, H., et al.: Evaluation of an application ontology. In: Proceedings of the Joint
   Ontology Workshops 2017. vol. 2050. CEUR-WS (2017)