<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>OntoVal: A Tool for Ontology Evaluation by Domain Specialists</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Caio Viktor S. Avila</string-name>
          <email>caioviktor@alu.ufc.br</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gilvan Maia</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Wellington Franco</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tulio Vidal Rolim</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Artur O. R. Franco</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vania M.P. Vidal</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computing, Federal University of Ceara</institution>
          ,
          <addr-line>Campus do Pici, Fortaleza-CE</addr-line>
          ,
          <country country="BR">Brazil</country>
        </aff>
      </contrib-group>
      <fpage>143</fpage>
      <lpage>147</lpage>
      <abstract>
        <p>We present OntoVal, a portable and domain-independent web tool for the evaluation of OWL ontologies by non-technical domain specialists. OntoVal presents the ontology in a textual way, making it readable for users with little to no knowledge about ontologies. OntoVal also features a form engine that allows users to give feedback and evaluate the correctness of the artifact being developed. The evaluation data is automatically aggregated and processed in order to present a detailed report on the results of the evaluations.</p>
      </abstract>
      <kwd-group>
        <kwd>Ontology engineering</kwd>
        <kwd>Ontology evaluation</kwd>
        <kwd>Semantic Web</kwd>
        <kwd>Linked Data</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>An ontology is a formal, explicit specification of a shared conceptualization [5],
which can be employed as a conceptual model for the representation of knowledge
about a domain. Ontologies also formalize and share the understanding of
concepts and how these relate to each other. Ontologies play a key role in many
applications and domains, so it is of paramount importance that the
underlying development process of a domain ontology adopts a mechanism for ensuring
that an accurate representation of that domain is obtained. A validation step
regarding accuracy, comprehensiveness, and technical correctness is thus usually
employed by ontology experts [1]. However, the opinions of domain specialists
are the central feedback guiding the construction of proper ontologies.</p>
      <p>Ontologies are inherently complex models, hence evaluating them
requires a complex evaluation process. For example, there are numerous metrics
applicable to ontology evaluation: accuracy, completeness, conciseness,
adaptability, clarity, computational efficiency, and consistency [3]. As each of these
metrics reflects different aspects of an ontology, an extensive evaluation rapidly
becomes a time-consuming, challenging task. Consequently, the availability of
adequate tools supporting the evaluation process by domain experts represents
a significant contribution to drive the development of high-quality ontologies.</p>
      <p>Thus, in this work we present OntoVal, a domain-independent and portable
web tool for the evaluation of OWL ontologies by non-technical domain
specialists. OntoVal presents the ontology to the user in a textual way. In addition,
OntoVal has an integrated form engine allowing the user to provide feedback and
evaluate the correctness of the artifact being developed. Finally, OntoVal
automatically aggregates and processes the data, presenting a detailed report on
the results of the evaluation to the ontology engineer.</p>
      <p>The remainder of this paper is organized as follows: Section 2 presents
the main related works; Section 3 details OntoVal 's design and implementation;
Section 4 demonstrates how OntoVal 's interface is used for an actual evaluation;
Section 5 presents the evaluation of OntoVal ; and Section 6 contains
concluding remarks and future work directions.</p>
    </sec>
    <sec id="sec-2">
      <title>Related Works</title>
      <p>Existing tools such as Protege1 and WebVOWL2 could be used as support
during the evaluation by experts. Protege is an extremely popular open-source
editor and framework for building ontologies and smart systems, probably
the most widespread tool for this purpose. As such, Protege allows users to
explore, edit, and perform detailed analyses over ontologies. However, it is a
tool designed to aid ontology developers during the development process, so
it demands prior technical background in technologies and standards such
as RDF3 and OWL4, plus concepts from logic.</p>
      <p>WebVOWL, in turn, is a web tool for interactive ontology visualization.
WebVOWL helps lay users understand an ontology's structure by means of an intuitive
visual representation. However, the user experience and usability of this tool can be
impaired when dealing with large or complex ontologies, since the corresponding
visual models generated are usually cluttered and confusing for lay users.</p>
      <p>In [6], Tan et al. propose a verbalization tool and carry out an ontology
evaluation with non-technical specialists. They compare the results obtained by
adopting Protege and their verbalization tool. Tan et al. found that adopting
the verbalization tool led to a less time-consuming evaluation process. Moreover,
they also observed that users provided overall higher grades for the ontology, which
may indicate that the participants could not correctly understand the ontology
when using Protege.</p>
      <p>A key limitation arising from the adoption of the aforementioned tools is
that they lack integrated evaluation mechanisms. Consequently, evaluation is
performed in two or more steps, since this scenario requires the use of
developer-made forms in order to collect user feedback separately. This approach tends
to turn the evaluation into a mostly manual, time-consuming, and error-prone
process, because the aggregation and computation of results lack automation.
Moreover, from the users' perspective, switching back and forth between the forms and the
ontology tool can be a nuisance.</p>
      <sec id="sec-2-1">
        <title>1 https://protege.stanford.edu/</title>
      </sec>
      <sec id="sec-2-2">
        <title>2 http://vowl.visualdataweb.org/webvowl.html</title>
      </sec>
      <sec id="sec-2-3">
        <title>3 https://www.w3.org/TR/rdf-primer/</title>
      </sec>
      <sec id="sec-2-4">
        <title>4 https://www.w3.org/TR/owl-ref/</title>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>OntoVal</title>
      <p>
        Developers provide their ontology as input to OntoVal 5, which was designed
to handle any domain, so the tool is portable across virtually any project.
Moreover, when available, a visual model representing the ontology can also be
displayed as a supporting tool for the domain expert users. OntoVal starts an
evaluation by collecting information about the participant: name (optional); age;
domain experience level, ranging from 0 to 10; and ontology experience level, also
ranging from 0 to 10. OntoVal divides the ontology evaluation process into three
stages: (
        <xref ref-type="bibr" rid="ref1">1</xref>
        ) class evaluation; (
        <xref ref-type="bibr" rid="ref2">2</xref>
        ) property evaluation; and (
        <xref ref-type="bibr" rid="ref3">3</xref>
        ) overall evaluation.
An example evaluation can be seen in Figure 1, where some parts of the text are
purposely omitted.
      </p>
      <p>In the first stage, the following information is presented to the user for each
class in the ontology being evaluated: URI; known names; description; list of
superclasses; and lists of owl:ObjectProperty and owl:DatatypeProperty. Additionally,
the system also presents an evaluation form to the user regarding that class. This
form contains simple "yes/no" questions collecting the user's
agreement on the following points: appropriateness of the assigned URI; the
assigned names; the description; each superclass; each owl:ObjectProperty ;
and each owl:DatatypeProperty.</p>
      <p>Each property of the ontology is analyzed during the second evaluation stage.
The following information is shown to users for each property: URI; known
names; description; its type (owl:ObjectProperty or owl:DatatypeProperty ); list
of classes containing that property; list of super-properties; list of classes in the
property's range. The questionnaire in this stage evaluates the user's agreement
on the following points: suitability of the URI; the assigned property names; the
property description; its type; each super-property; and each element
of its range.</p>
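      <p>The per-term information enumerated in the two stages above can be pictured as a simple record rendered into text. The following is a minimal sketch of such a textual presentation; the container and renderer (TermView, verbalize) are hypothetical names, not OntoVal's actual implementation.</p>

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TermView:
    """Hypothetical container for the per-class information shown to a reviewer."""
    uri: str
    names: List[str]
    description: str
    superclasses: List[str] = field(default_factory=list)
    object_properties: List[str] = field(default_factory=list)
    datatype_properties: List[str] = field(default_factory=list)

def verbalize(c: TermView) -> str:
    """Render one class as plain text for a non-technical domain specialist."""
    lines = [f"Class: {', '.join(c.names)} <{c.uri}>",
             f"Description: {c.description}"]
    if c.superclasses:
        lines.append("Is a kind of: " + ", ".join(c.superclasses))
    for p in c.object_properties:
        lines.append("Relates to other concepts via: " + p)
    for p in c.datatype_properties:
        lines.append("Has attribute: " + p)
    return "\n".join(lines)
```

      <p>Each line of the rendered text would then be paired with a "yes/no" agreement question in the evaluation form.</p>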
      <p>In the third and last stage, OntoVal evaluates general but important criteria
about the ontology, such as: agreement on the ontology's name; agreement on
its description; agreement on the success of the ontology in representing the
domain; agreement on the comprehensiveness of its classes; agreement on the
comprehensiveness of its properties; and agreement on the way the concepts
presented in the ontology are related one to another.</p>
      <p>Moreover, for each evaluated term (i.e., class or property), OntoVal also allows
users to provide textual feedback regarding their answers. This information is of
utmost importance for ontology developers, since these experts can shed light
on their own understanding of the specific given domain. We advocate that this
aspect is crucial for effective improvement of the ontology under development.</p>
      <p>Finally, OntoVal automatically aggregates and computes the evaluations to
be presented to the developer in a separate web page. For simplicity, evaluation
grades for each term are based on the percentage of positive
answers. Hence, each question corresponds to a score, and the final grade for each
term is the fraction of positive answers over the number of questions
presented to users.</p>
      <sec id="sec-3-1">
        <title>5 https://github.com/CaioViktor/ontoval</title>
        <p>
          The resulting statistics and metrics are divided into four areas: (
          <xref ref-type="bibr" rid="ref1">1</xref>
          ) summary; (
          <xref ref-type="bibr" rid="ref2">2</xref>
          ) classes; (
          <xref ref-type="bibr" rid="ref3">3</xref>
          ) properties; and (
          <xref ref-type="bibr" rid="ref4">4</xref>
          ) overall. An example of
statistics visualization web page can be found in Figure 2.
        </p>
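        <p>The grading scheme described above (fraction of positive answers per term, then aggregated across participants) can be sketched in a few lines; the function names (term_grade, aggregate) are illustrative assumptions, not OntoVal's actual code.</p>

```python
from statistics import mean
from typing import Dict, List

def term_grade(answers: List[bool]) -> float:
    """Grade for one evaluated term: fraction of positive ("yes") answers
    over the number of questions presented to the participant."""
    return sum(answers) / len(answers) if answers else 0.0

def aggregate(per_participant: List[Dict[str, List[bool]]]) -> Dict[str, float]:
    """Mean grade per term across all participants who answered it."""
    terms = {t for p in per_participant for t in p}
    return {t: mean(term_grade(p[t]) for p in per_participant if t in p)
            for t in terms}
```

        <p>For instance, a term that received three "yes" answers out of four questions scores 0.75 for that participant.</p>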
        <p>The first area displays general results, using a table to
enumerate values for the mean, maximum, minimum, and standard deviation. This
table considers the following attributes: age, domain experience level, ontology
experience level, mean approval of classes, mean approval of properties, and
elapsed time. On top of that, this area also contains charts displaying the grade
distribution by the user's level of experience regarding the domain and
ontologies, plus the frequency distribution of the grades assigned.</p>
        <p>The second area shows, for each class, a table containing the median,
maximum, minimum, and standard deviation values for the following aspects of
the evaluation: general approval; superclass approval; and approval of
DatatypeProperties and ObjectProperties. The third area shows, for each property, a
table containing the mean, maximum, minimum, and standard deviation grades
for the following aspects: general approval; super-property approval; and range
approval. The fourth area presents a table containing the mean, maximum,
minimum, and standard deviation grade values for each question of the evaluation
form. Moreover, for both the second and third areas, the developer can choose
to see more detailed statistics for each of the questions in the evaluations and
the comments provided by participants.</p>
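        <p>The per-attribute summary used throughout these report areas reduces to the same four descriptive statistics. A minimal sketch using Python's standard library (the function name summarize is a hypothetical illustration; the paper does not specify whether the population or sample standard deviation is used):</p>

```python
import statistics
from typing import Dict, List

def summarize(values: List[float]) -> Dict[str, float]:
    """Mean, maximum, minimum, and standard deviation for one report
    attribute (e.g. age, experience level, term approval, elapsed time)."""
    return {
        "mean": statistics.mean(values),
        "max": max(values),
        "min": min(values),
        # Population standard deviation assumed here; statistics.stdev
        # would give the sample variant instead.
        "stdev": statistics.pstdev(values),
    }
```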
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Demonstration</title>
      <p>OntoVal is still under development but has already been evaluated in the context of a
real project concerning the development of a sophisticated ontology for
the domain of computer games [4, 2], with 8 reviewers, of which 6 are domain
specialists and 2 are ontology experts.</p>
      <sec id="sec-4-1">
        <title>6 https://youtu.be/5Y -crl5Ak</title>
        <p>The participants were invited to offer
their opinions on the evaluation tool and process. The tool proved clear to use after
a minimal explanation, and most of the few usability problems pointed out by
users have since been addressed. Users missed one simple feature: visualization of a previously
given answer, since the page did not load correctly when returning to it.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Conclusions</title>
      <p>OntoVal automates most of the tasks and presents the ontology in a readable,
textual way for domain experts, who are usually lay users regarding ontologies, so
collaborators can focus their attention on the evaluation aspects regarding the
specific domain. OntoVal was preliminarily evaluated within an actual ontology
development project in the field of computer games with the participation of
both domain and ontology experts. Users pointed out the ease of use of the tool,
indicating possible improvements for better usability as future work.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Denaux</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , et al.:
          <article-title>Supporting domain experts to construct conceptual ontologies: A holistic approach</article-title>
          .
          <source>Web Semantics: Science, Services and Agents on the World Wide Web</source>
          <volume>9</volume>
          (
          <issue>2</issue>
          ),
          <fpage>113</fpage>
          –
          <lpage>127</lpage>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Franco</surname>
            ,
            <given-names>A.O.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rolim</surname>
            ,
            <given-names>T.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Santos</surname>
            ,
            <given-names>A.M.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Silva</surname>
            ,
            <given-names>J.W.F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vidal</surname>
            ,
            <given-names>V.M.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gomes</surname>
            ,
            <given-names>F.A.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Castro</surname>
            ,
            <given-names>M.F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Maia</surname>
            ,
            <given-names>J.G.R.:</given-names>
          </string-name>
          <article-title>An ontology for role playing games</article-title>
          .
          <source>In: Proceedings of SBGames 2018</source>
          . pp.
          <fpage>615</fpage>
          –
          <lpage>618</lpage>
          .
          SBC
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Raad</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cruz</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>A survey on ontology evaluation methods</article-title>
          .
          <source>In: Proceedings of the International Conference on Knowledge Engineering and Ontology Development</source>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4. da Rocha Franco, A.d.O.,
          <string-name>
            <surname>da Silva</surname>
            ,
            <given-names>J.W.F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pinheiro</surname>
            ,
            <given-names>V.C.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Maia</surname>
            ,
            <given-names>J.G.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>de Carvalho Gomes</surname>
            ,
            <given-names>F.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>de Castro</surname>
            ,
            <given-names>M.F.</given-names>
          </string-name>
          :
          <article-title>Analyzing actions in play-by-forum rpg</article-title>
          .
          <source>In: International Conference on Computational Processing of the Portuguese Language</source>
          . pp.
          <fpage>180</fpage>
          –
          <lpage>190</lpage>
          . Springer (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Studer</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Benjamins</surname>
            ,
            <given-names>V.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fensel</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Knowledge engineering: principles and methods</article-title>
          .
          <source>Data &amp; Knowledge Engineering</source>
          <volume>25</volume>
          (
          <issue>1-2</issue>
          ),
          <fpage>161</fpage>
          –
          <lpage>197</lpage>
          (
          <year>1998</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Tan</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          , et al.:
          <article-title>Evaluation of an application ontology</article-title>
          .
          <source>In: Proceedings of the Joint Ontology Workshops 2017</source>
          . vol.
          <volume>2050</volume>
          . CEUR-WS (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>