<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Evaluation of an Application Ontology</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>He TAN</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Anders ADLEMO</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vladimir TARASOV</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mats E. JOHANSSON</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science and Informatics, School of Engineering, Jonkoping University</institution>
          ,
          <country country="SE">Sweden</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Saab AB</institution>
          ,
          <addr-line>Jonkoping</addr-line>
          ,
          <country country="SE">Sweden</country>
        </aff>
      </contrib-group>
      <abstract>
<p>The work presented in this paper demonstrates an evaluation procedure for a real-life application ontology from the avionics domain. The focus of the evaluation has specifically been on three ontology quality features, namely usability, correctness and applicability. In the paper, the properties of the three features are explained in the context of the application domain, the methods and tools used for the evaluation of the features are described, and the evaluation results are presented and discussed. The results indicate that the three quality features are significant in the evaluation of our application ontology, that the proposed methods and tools allow for the evaluation of the three quality features, and that the inherent quality of the application ontology can be confirmed.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Ontologies as a modelling approach have long been recognized by the software
engineering community (e.g. [
        <xref ref-type="bibr" rid="ref1 ref2">1,2</xref>
        ]). One example where ontologies have been
introduced in software development is requirements engineering (e.g. [
        <xref ref-type="bibr" rid="ref3 ref4 ref5">3,4,5</xref>
]).
Requirements engineering (RE) is the area concerned with the elicitation, specification and validation of software system requirements [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. The use of ontologies
in RE dates back to the 1990s, e.g. [
        <xref ref-type="bibr" rid="ref7 ref8">7,8</xref>
        ]. More recently, the interest in utilizing
ontologies in RE, as well as software engineering in general, has been renewed due
to the emergence of semantic web technologies [
        <xref ref-type="bibr" rid="ref1 ref9">1,9</xref>
]. Much of the research on the
use of ontologies in RE has focused on inconsistency and incompleteness problems
in requirements specifications. For example, in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], an ontology provides a formal
representation of the engineering design knowledge that can be shared among
engineers, such that ambiguity, inconsistency, incompleteness and redundancy can
be reduced to a minimum when engineers concurrently develop requirements for
sub-systems of a complex artefact. In [
        <xref ref-type="bibr" rid="ref10 ref7">7,10</xref>
        ], ontologies have been used to
represent requirements in formal ontology languages, such that the analysis of
consistency, completeness and correctness of requirements can be performed through
reasoning over requirements ontologies. In the research project carried out by the
authors of this paper [
        <xref ref-type="bibr" rid="ref11 ref12">11,12</xref>
], ontologies have also been developed to represent
software requirements. The developed ontologies have mainly been employed to
support the automation of the software testing process and, more specifically, the
creation of software test cases.
      </p>
      <p>The ontologies developed in the research project presented in this paper can
be considered application ontologies. They contain the knowledge of a
particular domain, that is, the requirements specification for software, in order to
achieve a specific task or support a particular application, that is, the generation
of software test cases. An ontology, like all engineering artefacts, needs a thorough
evaluation before confidence can be placed in it and its intrinsic quality features. Not
only does the quality of an ontology have a direct impact on the quality of the
results derived from it, but its quality should also be measured quantitatively
during ontology development, to decide whether the ontology
will be able to meet its assigned goals. In this paper we demonstrate an
ontology evaluation carried out in the aforementioned research project. We have
focused on the evaluation of three specific quality features, i.e. usability,
correctness and applicability, and propose methods and tools for the evaluation of these
quality features.</p>
      <p>The remainder of the paper is organised as follows. Section 2 presents one
of the application ontologies developed in our research project. In Section 3, we
discuss the quality features used to evaluate the ontology and the methods for
evaluating these features. Section 4 presents the evaluation tools and the
ontology evaluation with its results. Section 5, finally, presents our conclusions and
a discussion of future work.</p>
    </sec>
    <sec id="sec-2">
      <title>2. An Application Ontology</title>
      <p>
        In this section, we first present one of the application ontologies that were
developed in our research project, along with the task it supports. The task of the
ontology is to capture the knowledge contained in a set of given software
requirements in an effort to automate the creation of software test cases. The ontology
represents the requirements of a communication software component pertaining
to an embedded system situated in an avionics system. The software component
is fully developed in compliance with the DO-178B standard [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], which means
that the requirements of the software component have been prepared, reviewed
and validated by experts at the avionics partner company. Based on these written
requirements, the application ontology for the software component, or
requirements ontology as we also call it in this paper, was developed to support the
creation of software test cases. Due to confidentiality reasons, the ontology is not
published on the internet.
      </p>
      <p>The ontology was created using Protege. Besides the ontology itself,
documentation was created to give an overview of the represented knowledge and describe
the design of the ontology. The current version of the ontology contains 42 classes,
34 object properties, 13 datatype properties, and 147 instances in total. Figure 1
shows the ontology fragment for one particular functional requirement, in this
case SRSRS4YY-431, a requirement that focuses on error handling.
Ontology fragments for the remaining individual requirements related to the
communication software component can be represented in a way similar to the one in
Figure 1. SRSRS4YY-431 defines that if the communication type is out of its
valid range, the initialization service shall deactivate the UART (Universal
Asynchronous Receiver/Transmitter) and return the result "comTypeCfgError". In
the figure, the rectangular boxes represent the concepts of the ontology; the rounded
rectangular boxes represent the instances; and the dashed-line rounded rectangular
boxes provide the data values of the datatype properties for instances.</p>
      <p>
        The ontology has been designed to support inference rules to generate the
software test cases [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. The inference rules, which represent the expertise of an
expert software tester, are coded in Prolog and make use of the ontology entities
to generate the test cases. The Prolog inference engine controls the process of
selecting and invoking the inference rules. To be able to implement this process,
the ontology first needs to be translated into Prolog syntax by applying a set of
predefined translation rules, and thus becomes a part of the Prolog program.
      </p>
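<p>As an illustration of this kind of translation, the following is a minimal, hypothetical sketch in Python (not the project's actual tooling); the two axiom forms and the predicate names are assumptions for illustration only, while the real translation rules are documented in the cited implementation paper.</p>
<preformat>
```python
import re

def owl_to_prolog(axiom: str) -> str:
    """Translate a single OWL functional-style axiom into a Prolog-like fact.

    Covers only two illustrative axiom forms; the project's actual
    translation rules are given in the cited implementation paper.
    """
    patterns = {
        # SubClassOf(:Sub :Super) becomes subClassOf(sub, super).
        r"SubClassOf\(:(\w+) :(\w+)\)": "subClassOf({0}, {1}).",
        # ClassAssertion(:Class :individual) becomes classAssertion(class, individual).
        r"ClassAssertion\(:(\w+) :(\w+)\)": "classAssertion({0}, {1}).",
    }
    for pat, template in patterns.items():
        m = re.fullmatch(pat, axiom.strip())
        if m:
            # Prolog atoms must start with a lowercase letter.
            args = [g[0].lower() + g[1:] for g in m.groups()]
            return template.format(*args)
    raise ValueError(f"no translation rule for: {axiom}")

print(owl_to_prolog("SubClassOf(:InitializationService :Service)"))
# subClassOf(initializationService, service).
```
</preformat>
<p>Because the OWL functional-style syntax already resembles Prolog terms, each translation rule reduces to a pattern match plus a lower-casing of the entity names.</p>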
    </sec>
    <sec id="sec-3">
      <title>3. Quality Features and Evaluation Methods</title>
      <p>
        The challenge in ontology evaluation is to determine the quality features that
are to be evaluated, along with the evaluation method. Many different
evaluation features have been discussed in the literature (e.g. [
        <xref ref-type="bibr" rid="ref14 ref15 ref16 ref17">14,15,16,17</xref>
        ]). Which quality
features to evaluate depends on various factors, such as the type of ontology, the
focus of the evaluation and the person performing the evaluation.
Burton-Jones et al. argue that different types of ontologies require different evaluation
features [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. For example, for an application or domain ontology it is enough
to evaluate the features important to the application or domain. In our research
project, the ontology evaluation has focused on three specific quality features:
usability, correctness and applicability, as these features are estimated to be the
most critical and valuable when deciding whether our specific application
ontology has met its assigned task of capturing the knowledge contained in a set of
given software requirements to support the creation of software test cases.
      </p>
      <p>In the context of this paper, we consider usability of an application ontology
to be a set of attributes that describe the effort needed by a human to make
use of the ontology. The usability feature provides a measure of the quality a
user experiences when interacting with an ontology. A user of an application
ontology is normally not the creator of the ontology, and he or she may not even
be knowledgeable about ontologies or ontology engineering. This observation was
also true for the domain experts participating in the evaluation of the application
ontology presented in this paper. Usability, as defined and applied in this paper,
is a necessary condition for an application ontology to be used on a regular basis.
Users should not feel frustrated when attempting to understand the ontology.
Instead, users should be able to put trust in the ontology and be confident that
they can carry out their tasks effectively and efficiently while using the ontology.</p>
      <p>
        The evaluation of the usability of a product or system has a long history;
back in 1986, Brooke developed a questionnaire, the so-called System Usability Scale
(SUS) [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ], with exactly this purpose. Ever since then, the SUS has demonstrated
its value in thousands of applications, proving that it can be used for a
wide range of systems and types of technology. It has been shown that the SUS
produces results comparable to those of more extensive attitude scales that intend
to provide a deeper insight into a user's attitude towards the usability of a specific
system. The SUS also possesses the ability to discriminate and identify systems
that demonstrate good and poor usability tendencies. An example of a special
application where the SUS has been applied, an application that is relevant to the
work presented in this paper, is the evaluation of the usability of an ontology, as
described in [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]. Yet another example has been presented in [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]. In both of these
papers the usability of an ontology was evaluated by people other than ontology
experts.
      </p>
      <p>
        In our project, the correctness of an ontology is probably the most important
quality feature that needs to be evaluated. Different definitions of correctness
can be found in different articles, e.g. [
        <xref ref-type="bibr" rid="ref14 ref17">14,17</xref>
        ]. This divergence in the definitions
leads to confusion when trying to specify evaluation goals for an ontology. In
the context of this paper, we define correctness of an application ontology as
the degree to which the information asserted in the ontology conforms to the
information that needs to be represented in the ontology. It is about being able to
extract accurate information from an ontology and to accurately document the
information gathered from the same ontology. Validation methods, like reasoning,
can find logical errors, such as inconsistency, but give no indication as to the
correctness of the content. Correctness is therefore probably the most important
quality feature, but at the same time one of the most difficult to measure.
      </p>
      <p>
        Competency questions (CQ) are among the available methods of gathering
information. They are natural language sentences that express the types of
questions one would expect an ontology to be able to answer.
CQ have been suggested to serve as functional requirements of an ontology [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ].
The functional correctness of an ontology can consequently be validated against
the CQ. For a large or complex ontology it is a challenging task to prepare CQ.
The correctness of the CQ should also be verified. In the case of utilizing an
application ontology, the information contained in the ontology may already exist
and be documented in other formats. For the software requirements ontology
developed in our research project, the information is written in plain English and
documented in software requirements documents. Furthermore, the functionality
of an application ontology may not be clearly defined before or during the
development of the ontology. The functionality can only be fully understood and
clearly defined during or after the application of the ontology. This observation is
also valid for the ontology developed during our research project. It is probably
also true for other kinds of ontologies, e.g. those within the context of the
Semantic Web, which are often used in ways not anticipated by the creators of the
ontology. Hence, the functional requirements cannot always be fully identified for
future applications.
      </p>
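<p>The idea of CQ as functional requirements can be made concrete with a small, hypothetical sketch: a CQ is answerable if a corresponding query over the ontology yields a non-empty result. The triples, predicate names and query function below are illustrative assumptions, not the project's actual data or interface.</p>
<preformat>
```python
# Hypothetical sketch: a toy requirements ontology stored as
# (subject, predicate, object) triples, with a competency question (CQ)
# phrased as an executable query against it.
TRIPLES = {
    ("SRSRS4YY-431", "hasAction", "deactivateUART"),
    ("SRSRS4YY-431", "returnsResult", "comTypeCfgError"),
    ("SRSRS4YY-431", "isA", "FunctionalRequirement"),
}

def ask(subject, predicate):
    """CQ pattern: 'What <predicate> does <subject> have?'"""
    return {o for s, p, o in TRIPLES if s == subject and p == predicate}

# CQ: "Which result shall requirement SRSRS4YY-431 return?"
# The ontology answers the CQ iff the query yields a non-empty result.
answer = ask("SRSRS4YY-431", "returnsResult")
print(answer)  # {'comTypeCfgError'}
```
</preformat>
<p>Validating functional correctness then amounts to running every prepared CQ and checking that each returns the expected answer.</p>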
      <p>Ideally, correctness verification is an activity realised by a domain expert who
not only grasps the details of the application itself but also has sufficient
knowledge of ontologies to be able to perform the evaluation. In many situations, this
kind of human evaluator, with knowledge of ontologies, is not available. Hence,
we suggest that a method based on the verbalization of an ontology
provides support to the correctness evaluation process. In this process, the
ontology is first verbalized into a natural language text. Thereafter, the text can be
read and understood by non-ontology experts, i.e. application domain experts,
and compared with the information contained in the source information that was
used to construct the ontology.</p>
      <p>
        Applicability of an application ontology, in the context of this paper, is defined
as the quality of the ontology regarding its appropriateness for a particular
application, task or purpose. The applicability concept includes the appropriateness of
the structure or semantics (e.g. the depth of the class hierarchy and the number
of instances), the appropriateness of the ontology language (i.e. which language
constructs can be used in an ontology constructed with that language), and
the appropriateness of the ontology document (i.e. a particular serialization of an
ontology, such as RDF/XML [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ] and OWL Functional Syntax [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ]). The
selection of these conditions should support the application in such a way that the
application can be implemented and the task performed effectively
and efficiently. The applicability of an ontology can be evaluated by the person
who designs or develops the application and who uses the ontology to implement
the functionality of the application, or it can be evaluated through the
application itself. The evaluation considers the quality features of the application or the
performance of the task, which are decided or affected by the use of the ontology.
      </p>
    </sec>
    <sec id="sec-4">
      <title>4. The Evaluation</title>
      <p>In this section, we demonstrate the specific details of the evaluation with regard
to the three quality features, applying them to the software requirements
ontology described in Section 2. We also describe how the methods and tools were
applied during the evaluation, and discuss the evaluation results. Four industry
domain experts (DE1-DE4) from the avionics domain (and specifically from the
software domain), and one ontology expert (OE) (not being the ontology
developer), participated in the evaluation of the usability and the correctness of the
application ontology. The applicability was evaluated by the ontology expert only,
as he was the person who developed the ontology-based program for the test case
generation.</p>
      <p>Table 1. Statements to evaluate the previous knowledge of ontologies, Protege and SRS:
1. To what extent do you know about ontologies, what they represent, how they can be used, etc.?
2. To what extent do you know about the ontology tool, Protege, and how to use it?
3. To what extent are you familiar with SRS?
4. To what extent have you been directly involved in developing different SRSs?
5. How familiar are you with the content of the SRS used in this project?
6. To what extent have you been involved in the development of the software component defined in the SRS?</p>
      <p>To validate the evaluators' background competence in the field of ontologies
and software requirements specifications (SRS), a quick questionnaire was applied.
The results are presented in Table 1, where the grades range from 1, indicating
"not at all", to 6, indicating "to a great extent". As can be observed in the table,
none of the industry experts had a deeper knowledge of working with ontologies,
apart from an overall understanding of the specific application ontology as a
result of the many presentations provided during the recurrent project meetings.
Additionally, they did not have any previous experience of using ontology editors,
such as Protege. However, these deficiencies were made up for by their experience
with SRS, both as designers and users. As a comparison, the ontology expert has
a deep knowledge of ontologies, ontology tools and also the content of the SRS
used in the project. This difference in knowledge and experience is the reason why
the results presented in the following two subsections look like they do. In spite of
these differences, with appropriate preparation before the evaluation of
an application ontology, using adequate tools such as Protege and verbalization,
the negative effects of such differences can be reduced to a minimum.</p>
      <sec id="sec-4-1">
        <title>4.1. Evaluation of Usability</title>
        <p>
          In the evaluation of the usability of the application ontology, we applied a version
of the SUS that was introduced by Casellas in [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ]. The result from the
evaluation is presented in Table 2, including the ten questions, as required for a SUS
evaluation. The intricate details of how to use the SUS can be found in [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ], for
example, why the odd-numbered statements are all in positive form while the even-numbered
statements are all in negative form. The formulations of the texts in
the ten questions have been slightly modified from the work by Casellas, to better
adjust to the ontology domain. The evaluation of the usability of an ontology is
especially important and relevant when the ontology is going to be used by
application domain experts who are normally not ontology experts, as stated before.
Hence, the ontology was evaluated by four experts in the avionics domain, plus
one ontology expert. The grades in the table range from 1 to 5, where 1=strongly
disagree, 2=disagree, 3=no preference, 4=agree, 5=strongly agree.
        </p>
        <p>
          A sample of only five evaluators could by some be considered too
small. Tullis and Stetson [
          <xref ref-type="bibr" rid="ref25">25</xref>
          ] have shown that it is possible to get reliable results
from a sample of 8-12 users. Hence, we argue that our sample of five
persons provides a positive indication of the usability of the ontology, as far as the
application domain experts are concerned (not to mention the ontology expert,
but for obvious reasons). Nonetheless, in the future a more extensive usability
evaluation will be undertaken, to support our statements regarding the usability of
the ontology.
        </p>
        <p>Since it was difficult for the industry experts to comprehend the ontology only
by reading the raw ontology document (i.e. the ontology documented in a formal
ontology language), Protege was used as the tool to visualize the application
ontology in the evaluation. Before the evaluation, a 40-minute tutorial on the
application ontology and the Protege tool was provided to the four industry experts.
During the evaluation itself, the questions in Table 2 were answered by all of the
five experts. After having answered the questions, the score for each question was
calculated (where the score can range from 0 to 4). For items 1, 3, 5, 7, and 9 (the
positive statements) the score contribution was calculated as the scale position
value minus 1. For items 2, 4, 6, 8, and 10 (the negative statements), the score
contribution was calculated as 5 minus the scale position value. After having done
this simple exercise for the 10 statements, the sum of the scores was multiplied
by 2.5 to obtain the overall SUS scores.</p>
        <p>Table 2. SUS statements with grades per evaluator. Table 3. Columns: Evaluator; # Requirements Evaluated; Evaluation Time (in minutes); SUS Scores; Grades.</p>
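<p>The scoring procedure described above can be sketched as a short Python function; the function name is ours, but the arithmetic follows the SUS rules just stated.</p>
<preformat>
```python
def sus_score(responses):
    """Compute a SUS score from ten 1-5 Likert responses.

    responses[0] is statement 1, responses[9] is statement 10.
    Odd-numbered statements (1, 3, 5, 7, 9) are positive: contribution = value - 1.
    Even-numbered statements (2, 4, 6, 8, 10) are negative: contribution = 5 - value.
    The summed contributions (0-40) are multiplied by 2.5, giving a 0-100 score.
    """
    if len(responses) != 10 or not all(r in (1, 2, 3, 4, 5) for r in responses):
        raise ValueError("expected ten responses in the range 1-5")
    total = sum(r - 1 if i % 2 == 0 else 5 - r  # i is 0-based, so even i = odd statement
                for i, r in enumerate(responses))
    return total * 2.5

# All-neutral answers (3 for every statement) give the midpoint score of 50.
print(sus_score([3] * 10))  # 50.0
```
</preformat>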
        <p>
          The final SUS scores are presented in Table 3 together with the grades,
according to Bangor et al. [
          <xref ref-type="bibr" rid="ref26">26</xref>
          ]. The results could be considered discouraging,
but a closer perusal of the written comments by the evaluators indicated that
the "poor" results were not caused by a "poor" ontology, but rather by the lack
of experience in using Protege. The comments from the industry domain experts
(DE) were positive regarding the ontology itself, or as one of them stated: "Due
to inexperience in reading and using ontologies in testing activities, at first it
was not obvious how to apply the requirements ontology. But after some time,
the almost one-to-one correspondence between the requirements identified in the
SRS document and the corresponding parts found in the ontology made it quite
straightforward to understand. To overcome the initial problems of
understanding some of the technicalities within the ontology, several questions had to be
asked to the developers of the ontology. Consequently, some extra training in the
field of ontologies would have been helpful." Not surprisingly, the comments from
the ontology expert (OE) were more positive: "In general, it was easy to
understand the ontology. Most concepts were well integrated but some of them needed
an extra integration, mainly through object properties. No inconsistencies were
found, just some entities missing from the application viewpoint. Most ontology
engineers would quickly understand the ontology but, still, they would need some
extra time because the domain represented by the application ontology is quite
complex." The ontology expert did not have to ask a lot of questions because the
documentation was available in addition to the ontology itself. However, some
very concrete questions were needed to understand how the ontology could be
improved from an application point of view. Negative comments from the industry
application domain experts were mainly about the tool. For these experts it was
difficult to get an overview of the complete solution and, hence, they got stuck in
details.
        </p>
        <p>To sum up, there are strong indications that the usability of an ontology
is not exclusively determined by the ontology's complexity, but also by the tool
used to read and comprehend the ontology. Before initiating the evaluation of the
usability of an ontology, it is therefore recommended to commence with a thorough
introduction to the tool that is going to be used. A good ontology visualisation
tool, like VOWL, would also be of great help in this endeavour.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Evaluation of Correctness</title>
        <p>
          In this real application, the information to be represented in the ontology is the
information written in the SRS document. Furthermore, the information in the
SRS document has been reviewed and validated. Therefore, the correctness is
evaluated as a whole, by comparing the information asserted in the ontology to
the information in the SRS document. For this purpose, two different tools were used
by the five evaluators. The first tool was, once again, Protege. The
evaluators used Protege to access the information represented in the ontology and
compared this information with the information written in the SRS document.
The second tool was a web-based application, developed within
the project, that transformed the information represented in the ontology into
natural language text in English. The web-based application is an ontology
verbalization tool with the goal of making ontologies more readable to non-ontology
experts [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ]. The focus of the research in the ontology verbalization field has so
far mainly been on expressing the axioms in an ontology in natural language
(e.g. [
          <xref ref-type="bibr" rid="ref28 ref29">28,29</xref>
          ]), or on generating a summarized report of an ontology (e.g. [
          <xref ref-type="bibr" rid="ref30">30</xref>
          ]). The
SRS provided in the case study presented in this paper was well-structured.
Hence, the verbalization process was based on a simple pattern-based algorithm.
        </p>
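<p>A pattern-based verbalizer of this kind can be sketched as follows; this is a hypothetical miniature in Python, with made-up relation names and sentence patterns, not the project's actual web-based tool.</p>
<preformat>
```python
# Hypothetical sketch of a pattern-based verbalizer: each ontology relation is
# mapped to a fixed English sentence pattern, and the slots are filled from
# the ontology triples. The project's actual patterns are in the cited tool.
PATTERNS = {
    "hasAction": "The requirement {s} specifies the action {o}.",
    "returnsResult": "The requirement {s} returns the result \"{o}\".",
}

def verbalize(triples):
    lines = []
    for s, p, o in triples:
        pattern = PATTERNS.get(p)
        if pattern:  # relations without a pattern are simply skipped
            lines.append(pattern.format(s=s, o=o))
    return "\n".join(lines)

text = verbalize([
    ("SRSRS4YY-431", "hasAction", "deactivate the UART"),
    ("SRSRS4YY-431", "returnsResult", "comTypeCfgError"),
])
print(text)
```
</preformat>
<p>In the output, the fixed pattern text corresponds to the green-colored text in Figure 2, while the filled-in slots come from the ontology itself.</p>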
        <p>The evaluation was organized in two parts. In the first part, the ontology
was evaluated relying solely on Protege. In the second part, the ontology was
evaluated using only the verbalization tool. However, while using the verbalization
tool, the evaluators could make use of Protege for further validation of details, if
required. The evaluation using the verbalization tool proceeded as follows. The
evaluators uploaded the ontology (in OWL format) to the tool, retrieved the
verbalization of the ontology (in plain English), and compared the verbalization
result with the written information in the SRS document. Figure 2 presents the
verbalization of a piece of the application ontology (the left part of the figure)
and the corresponding text found in the requirements document (the right part
of the figure). The green colored texts in the left part of the figure indicate the
predefined patterns found in the verbalization algorithm. The remainder of the
text (the black text found in the left part of the figure) comes from the ontology
itself. The fragment of the ontology for requirement SRSRS4YY-219 is similar to
requirement SRSRS4YY-431 presented in Figure 1.</p>
        <p>Table 4. Columns per evaluator and tool (Protege, verbalization): # Req. Evaluated; Eval. Time (in minutes).</p>
        <p>In both parts of the evaluation, i.e. using Protege or verbalization, the domain
experts were asked to validate the correctness of the ontology as compared to the
text found in the SRS document. Each of the four industry domain experts was
assigned a fixed number of requirements to validate (9 or
10 requirements each). Table 4 shows the number of requirements that were
verified by each of the evaluators and the evaluation time needed while using either
Protege or the verbalization tool. As can be observed in the table, in general the
evaluators used less time but verified more requirements when using the
verbalization tool. The reason for this is simple: the difficulty of using Protege for the
first time, together with the staggering amount of information represented in the
application ontology, reduced the evaluation speed. Some of these observed
differences could most likely be reduced by increasing the time spent on training
the users in using Protege. As a secondary result, the evaluations sometimes
indicated errors in the application ontology, errors that were subsequently removed
by the ontology developers.</p>
        <p>So, when should a domain expert rely on Protege for the evaluation of an
ontology, and when is it enough to rely on the verbalization of an ontology? Our
conclusion is that, in most situations, it is sufficient to verbalize an ontology to
be able to detect possible errors or mismatches. And, as shown in Table 4, if
time is of the essence, verbalizing an ontology normally beats relying on Protege,
especially if the domain experts are not familiar with Protege. However, in some
situations it is recommended to use both tools in a complementary fashion. Or,
as one of the domain experts put it, "Note that 'human understandable' issues
that are 'machine impossible' will go undetected [if relying solely on verbalization,
authors' comment]. In Protege it was possible to see when a requirement was
misinterpreted. Here it is back to textual representations [when using verbalization,
authors' comment] that may hide the misunderstanding. Easier to read though,
but changing format is sometimes good".</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Evaluation of Applicability</title>
        <p>According to our definition of applicability, the ontology should exhibit the quality
of appropriateness when used for a particular application domain or purpose. The
applicability of the ontology is evaluated by looking at its applicability in the task
of automatically generating software test cases based on SRS.</p>
        <p>
          In the case study, consisting of the communication software component,
software test cases were generated by a knowledge-based system consisting of a
knowledge base (i.e. the application ontology), a set of inference rules and an inference
engine. The knowledge base had to provide the following: 1) Domain knowledge
at the instance level sufficient for the construction of test cases; 2) Domain
knowledge at the schema level necessary for domain-specific reasoning; 3) Syntax and
semantics allowing for querying the knowledge base during the reasoning; and 4)
Documentation that allows a developer to understand the represented knowledge.
The inference rules used to create the test cases represent strategies for test case
generation that were acquired through an analysis of the case study and
discussions with the domain experts. The inference rules were implemented in Prolog
because it provides a built-in inference engine, which was used for reasoning. The
implementation details can be found in [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ].
        </p>
        <p>
          To be used as part of the knowledge-based system, the requirements
ontology was serialized following the OWL functional-style syntax [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ]
after a study of the Prolog syntax [
          <xref ref-type="bibr" rid="ref31">31</xref>
          ]. As the functional-style syntax is very
similar to the Prolog syntax, the translation from OWL to Prolog was
straightforward and without any loss of syntactic details (the translation rules can be
found in [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]). As a result, the ontology became part of a Prolog program
containing the inference rules, which simplified the implementation of the test case
generator. The assertions of the translated ontology comprised subclass axioms,
domain and range axioms, and class assertions. These axioms made it possible to
perform domain-specific reasoning based on class membership and the relations
between the classes in the domain. More complex axioms, such as restrictions on
object properties, turned out to be unused during domain-specific reasoning.
        </p>
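As a loose illustration of the translation step described above (this is not the actual translator from [12]; the axiom forms and the atom-naming convention are simplified assumptions), a simple OWL functional-style axiom can be rewritten into a Prolog fact along these lines:

```python
import re

def owl_to_prolog(axiom: str) -> str:
    """Translate a flat OWL functional-style axiom, e.g.
    'SubClassOf(:Fifo :Buffer)', into a Prolog fact such as
    'subClassOf(fifo, buffer).'  Only simple axioms with
    whitespace-separated entity arguments are handled here."""
    m = re.match(r"(\w+)\(([^)]*)\)", axiom)
    if not m:
        raise ValueError(f"unsupported axiom: {axiom}")
    functor, args = m.groups()
    # Prolog predicate names start with a lowercase letter
    functor = functor[0].lower() + functor[1:]
    # strip IRI prefixes and lowercase entity names to obtain Prolog atoms
    atoms = [a.lstrip(":").lower() for a in args.split()]
    return f"{functor}({', '.join(atoms)})."

print(owl_to_prolog("SubClassOf(:Fifo :Buffer)"))  # subClassOf(fifo, buffer).
```

Because both notations are term-based, such a rewrite preserves the structure of each axiom, which is why the paper reports no loss of syntactic detail.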
        <p>The conditions and actions of the inference rules included patterns that were
matched against the statements in the knowledge base. The pattern matching
was performed by Prolog, resulting in the retrieval of instances from the application
ontology, both to check the conditions of the rules and to construct different parts of
the test cases. Subsequent experiments showed that the instances contained in the
application ontology were sufficient for the execution of the inference rules and
the generation of software test cases. During the first experiment, 40 inference
rules were used to generate 18 test cases, and 66 distinct entities from the ontology
were used for the test case construction. In the first attempt, the test cases were
generated as plain text in English. The experiment showed an almost one-to-one
correspondence between the texts in the generated test cases and the texts
provided by one of our industrial partners in the form of a Software Test
Description (STD) document. It should be stressed, however, that the test cases could
as easily have been generated as executable scripts, if so desired.</p>
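The pattern-matching step can be sketched as follows. This is only an illustration: the facts, predicate names and the rule are hypothetical, and Python stands in for the Prolog engine used in the actual system:

```python
# A toy knowledge base of translated ontology assertions (hypothetical names).
facts = {
    ("classAssertion", "requirement", "req_42"),
    ("hasParameter", "req_42", "baudRate"),
    ("hasValue", "baudRate", "115200"),
}

def match(pattern):
    """Yield facts matching a pattern such as ('hasParameter', 'req_42', None),
    where None acts as a wildcard, loosely mimicking Prolog unification."""
    for fact in facts:
        if fact[0] == pattern[0] and all(
            p is None or p == f for p, f in zip(pattern[1:], fact[1:])
        ):
            yield fact

# Rule: for every parameter of a requirement, emit one test-case step.
steps = [
    f"Set {param} to {value}"
    for (_, _, param) in match(("hasParameter", "req_42", None))
    for (_, _, value) in match(("hasValue", param, None))
]
print(steps)  # ['Set baudRate to 115200']
```

In the real system the retrieved bindings feed the rule actions that assemble the textual test-case fragments.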
        <p>The evaluation showed that the developed application ontology fulfilled its
purpose. The ontology has been used for the automation of a part of the testing
process and allowed for the successful generation of test cases. The ontology
documentation contained an overview of the represented knowledge, which helped
to bootstrap the development of the knowledge-based system. The syntax and
semantics provided by the ontology satisfactorily supported the execution of the
inference rules by the inference engine. The domain knowledge represented in the
application ontology provided means for domain-speci c reasoning and allowed
for the construction of test cases that corresponded to the sample test cases
provided in the case study. Minor de ciencies in the application ontology, discovered
during the development of the inference rules, were addressed and removed in
the following iterations of the ontology development process. One example of a
de ciency that was identi ed during the applicability evaluation, was the lack of
distinction between single-value requirement parameters and enumeration
parameters. This distinction is important for test case construction during the
domainspeci c reasoning. The lack of explicit representation of the enumeration type of
parameters led to the loss of domain semantics, something that had to be
remedied by an extra inference rule checking the type of a requirement parameter.
Because this knowledge is declarative rather than procedural, this distinction was
introduced in the ontology in the next iteration. The result of this can be observed
in Figure 1 as a "list-type" data value (the upper-left green box, "frs485Com,
rs422Comg"). Another example of a modi cation of the application ontology, as
a direct result of the applicability evaluation, was the division of the FIFO class
into subclasses of transmission queues and reception queues.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions and Future Work</title>
      <p>Within the vast research field that encompasses all aspects of ontology
development lies the crucial activity of building an ontology of "high quality". The
challenge lies in defining quality and in deciding on the methods and tools
for its evaluation. Currently, there exists no common definition of
quality and no well-established evaluation methods or tools. In the work presented in this
paper, we have evaluated three quality features, usability, correctness and
applicability, that we consider critical for the evaluation of an application
ontology that represents knowledge from software requirements in support of the
creation of software test cases. We have also presented the methods and tools
required for the evaluation of the three quality features, along with the results
from these evaluations.</p>
      <p>Correctness is probably the most important quality feature, and also the most
difficult to evaluate. Characteristics such as completeness, conciseness and
consistency can be considered sub-features of correctness. In the real application
study presented in this paper, the correctness of the ontology was
evaluated as a whole, measured by comparing the
information represented in the ontology to the information found in a requirements
document. The evaluation was performed by four industry domain experts and
one ontology expert. The results from the evaluation performed by the industry
domain experts indicate that the use of a verbalization tool can drastically
improve the productivity during the evaluation of an ontology. Nonetheless, the use
of a tool like Protege is still needed to verify details in an application ontology.
As the evaluations in this paper have indicated, extra time needs to be invested
in the training of domain experts, for them to make full use of Protege.</p>
      <p>Evaluation can be related to different attributes of an ontology, such as its
structure and semantics, ontology language and ontology documentation. These
attributes of an application ontology are highly relevant to the task or application
the ontology supports. As such, they can be covered in the evaluation of the
applicability. It is likely that the requirements on the applicability are not
well-defined when developing an ontology. Hence, the evaluation of the
applicability has to be coordinated with the development of the application that uses
the ontology. At the same time, the evaluation of the applicability helps to
make fine-grained improvements to the ontology, making it
better adjusted to the application.</p>
      <p>The usability of an application ontology is of utmost importance from an
application expert's point of view. If one really wants application experts to
fully embrace ontologies and use them, the ontologies must be "user-friendly".
In this paper we have shown that the System Usability Scale is useful when
evaluating the usability of an ontology.</p>
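The standard SUS scoring procedure [19] can be sketched as follows (the function name is ours; the scoring rule itself is the published one):

```python
def sus_score(responses):
    """Compute the System Usability Scale score from ten Likert
    responses (1-5).  Odd-numbered items are positively worded and
    contribute (response - 1); even-numbered items are negatively
    worded and contribute (5 - response).  The sum of the ten
    contributions is scaled by 2.5 onto the range 0-100."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses in the range 1-5")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-based even index = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5

print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```

A score of 68 is commonly cited as the average benchmark, which is why single scores are usually interpreted against an adjective scale such as the one in [26].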
      <p>Providing good tools for ontology quality evaluation is paramount for
the effectiveness and efficiency of ontology development. In the long term, the
focus of our future work will be on investigating more examples of practical
methods and tools for ontology quality evaluation. In the near future, we foresee a
generalization of the verbalization process, such that the tool can be used for
evaluating other types of ontologies.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgment</title>
      <p>The work presented in this paper was financed by the Knowledge Foundation in
Sweden, grant KKS-20140170.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>H.-J.</given-names>
            <surname>Happel</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Seedorf</surname>
          </string-name>
          ,
          <article-title>Applications of ontologies in software engineering</article-title>
          ,
          <source>in: Proceedings of Workshop on Semantic Web Enabled Software Engineering (SWESE) on the ISWC, Citeseer</source>
          ,
          <year>2006</year>
          , pp.
          <volume>5</volume>
          –
          <fpage>9</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>B.</given-names>
            <surname>Henderson-Sellers</surname>
          </string-name>
          ,
          <article-title>Bridging metamodels and ontologies in software engineering</article-title>
          ,
          <source>Journal of Systems and Software</source>
          <volume>84</volume>
          (
          <issue>2</issue>
          ) (
          <year>2011</year>
          ),
          <volume>301</volume>
          –
          <fpage>313</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.S.</given-names>
            <surname>Fox</surname>
          </string-name>
          and
          <string-name>
            <given-names>T.</given-names>
            <surname>Bilgic</surname>
          </string-name>
          ,
          <article-title>A requirement ontology for engineering design</article-title>
          ,
          <source>Concurrent Engineering</source>
          <volume>4</volume>
          (
          <issue>3</issue>
          ) (
          <year>1996</year>
          ),
          <volume>279</volume>
          –
          <fpage>291</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>V.</given-names>
            <surname>Mayank</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Kositsyna</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Austin</surname>
          </string-name>
          ,
          <article-title>Requirements engineering and the semantic Web, Part II. Representation, management, and validation of requirements and system-level architectures</article-title>
          ,
          <source>Technical Report</source>
          , University of Maryland,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>K.</given-names>
            <surname>Siegemund</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.J.</given-names>
            <surname>Thomas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Pan</surname>
          </string-name>
          and
          <string-name>
            <given-names>U.</given-names>
            <surname>Assmann</surname>
          </string-name>
          ,
          <article-title>Towards ontology-driven requirements engineering</article-title>
          ,
          <source>in: Proceedings of Workshop semantic web enabled software engineering at 10th international semantic web conference (ISWC)</source>
          ,
          <year>Bonn</year>
          ,
          <year>2011</year>
          ,
          <volume>14</volume>
          pages.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>B.</given-names>
            <surname>Nuseibeh</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Easterbrook</surname>
          </string-name>
          , Requirements engineering: a roadmap,
          <source>in: Proceedings of the Conference on the Future of Software Engineering</source>
          , ACM,
          <year>2000</year>
          , pp.
          <volume>35</volume>
          –
          <fpage>46</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J.</given-names>
            <surname>Mylopoulos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Borgida</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Jarke</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Koubarakis</surname>
          </string-name>
          ,
          <article-title>Telos: representing knowledge about information systems</article-title>
          ,
          <source>ACM Transactions on Information Systems (TOIS) 8</source>
          (
          <issue>4</issue>
          ) (
          <year>1990</year>
          ),
          <volume>325</volume>
          –
          <fpage>362</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Greenspan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Mylopoulos</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Borgida</surname>
          </string-name>
          ,
          <article-title>On formal requirements modeling languages: RML revisited</article-title>
          ,
          <source>in: Proceedings of 16th international conference on Software engineering</source>
          , IEEE Computer Society Press,
          <year>1994</year>
          , pp.
          <volume>135</volume>
          –
          <fpage>147</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>G.</given-names>
            <surname>Dobson</surname>
          </string-name>
          and
          <string-name>
            <given-names>P.</given-names>
            <surname>Sawyer</surname>
          </string-name>
          ,
          <article-title>Revisiting ontology-based requirements engineering in the age of the semantic Web</article-title>
          ,
          <source>in: Proceedings of International Seminar on Dependable Requirements Engineering of Computerised Systems at NPPs</source>
          ,
          <year>2006</year>
          , pp.
          <volume>27</volume>
          –
          <fpage>29</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>T.</given-names>
            <surname>Moroi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Yoshiura</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Suzuki</surname>
          </string-name>
          ,
          <article-title>Conversion of software specifications in natural languages into ontologies for reasoning</article-title>
          ,
          <source>in: Proceedings of 8th International Workshop on Semantic Web Enabled Software Engineering (SWESE'2012)</source>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>H.</given-names>
            <surname>Tan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Muhammad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Tarasov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Adlemo</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Johansson</surname>
          </string-name>
          ,
          <article-title>Development and evaluation of a software requirements ontology</article-title>
          ,
          <source>in: Proceedings of 7th International Workshop on Software Knowledge-SKY</source>
          <year>2016</year>
          ,
          <article-title>in conjunction with the 8th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge ManagementIC3K 2016</article-title>
          , SCITEPRESS,
          <year>2016</year>
          , pp.
          <volume>11</volume>
          –
          <fpage>18</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>V.</given-names>
            <surname>Tarasov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Tan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ismail</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Adlemo</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Johansson</surname>
          </string-name>
          ,
          <article-title>Application of inference rules to a software requirements ontology to generate software test cases</article-title>
          ,
          <source>in: OWL: Experiences and Directions – Reasoner Evaluation: 13th International Workshop, OWLED</source>
          <year>2016</year>
          ,
          and 5th International Workshop, ORE 2016, Bologna, Italy, November 20, 2016, Revised Selected Papers
          , Vol.
          <volume>10161</volume>
          , Springer,
          <year>2017</year>
          , p.
          <fpage>82</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>L.A.</given-names>
            <surname>Johnson</surname>
          </string-name>
          et al.,
          <source>DO-178B</source>
          ,
          <article-title>Software considerations in airborne systems and equipment certification</article-title>
          ,
          <source>Crosstalk, October</source>
          <volume>199</volume>
          (
          <year>1998</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>A.</given-names>
            <surname>Gomez-Perez</surname>
          </string-name>
          ,
          <article-title>Ontology evaluation</article-title>
          , in: Handbook on Ontologies, Springer,
          <year>2004</year>
          , pp.
          <volume>251</volume>
          –
          <fpage>273</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>A.</given-names>
            <surname>Gangemi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Catenacci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ciaramita</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Lehmann</surname>
          </string-name>
          ,
          <article-title>Modelling ontology evaluation and validation</article-title>
          ,
          <source>in: Proceedings of European Semantic Web Conference</source>
          , Springer,
          <year>2006</year>
          , pp.
          <volume>140</volume>
          –
          <fpage>154</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>F.</given-names>
            <surname>Neuhaus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Vizedom</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Baclawski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bennett</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Dean</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Denny</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gruninger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hashemi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Longstreth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Obrst</surname>
          </string-name>
          et al.,
          <article-title>Towards ontology evaluation across the life cycle</article-title>
          ,
          <source>Applied Ontology</source>
          <volume>8</volume>
          (
          <issue>3</issue>
          ) (
          <year>2013</year>
          ),
          <volume>179</volume>
          –
          <fpage>194</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>H.</given-names>
            <surname>Hlomani</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Stacey</surname>
          </string-name>
          , Approaches, methods, metrics, measures, and
          <article-title>subjectivity in ontology evaluation: a survey</article-title>
          ,
          <source>Semantic Web Journal</source>
          (
          <year>2014</year>
          ),
          <volume>1</volume>
          –
          <fpage>5</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>A.</given-names>
            <surname>Burton-Jones</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.C.</given-names>
            <surname>Storey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Sugumaran</surname>
          </string-name>
          and
          <string-name>
            <given-names>P.</given-names>
            <surname>Ahluwalia</surname>
          </string-name>
          ,
          <article-title>A semiotic metrics suite for assessing the quality of ontologies</article-title>
          ,
          <source>Data &amp; Knowledge Engineering</source>
          <volume>55</volume>
          (
          <issue>1</issue>
          ) (
          <year>2005</year>
          ),
          <volume>84</volume>
          –
          <fpage>102</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>J.</given-names>
            <surname>Brooke</surname>
          </string-name>
          ,
          <article-title>SUS-a quick and dirty usability scale</article-title>
          ,
          <source>Usability Evaluation in Industry</source>
          <volume>189</volume>
          (
          <issue>194</issue>
          ) (
          <year>1996</year>
          ),
          <volume>4</volume>
          –
          <fpage>7</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>N.</given-names>
            <surname>Casellas</surname>
          </string-name>
          ,
          <article-title>Ontology evaluation through usability measures</article-title>
          ,
          <source>in: Proceedings of OTM Confederated International Conferences "On the Move to Meaningful Internet Systems"</source>
          , Springer,
          <year>2009</year>
          , pp.
          <volume>594</volume>
          –
          <fpage>603</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>L.</given-names>
            <surname>Fu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Ma</surname>
          </string-name>
          and
          <string-name>
            <given-names>P.</given-names>
            <surname>West</surname>
          </string-name>
          , Ontology Usability Scale:
          <article-title>context-aware metrics for the effectiveness, efficiency and satisfaction of ontology uses (</article-title>
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Parvizi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Mellish</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.Z.</given-names>
            <surname>Pan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>van Deemter</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Stevens</surname>
          </string-name>
          ,
          <article-title>Towards competency question-driven ontology authoring</article-title>
          ,
          <source>in: Proceedings of European Semantic Web Conference (ESWC)</source>
          , Springer,
          <year>2014</year>
          , pp.
          <volume>752</volume>
          –
          <fpage>767</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>D.</given-names>
            <surname>Beckett</surname>
          </string-name>
          and
          <string-name>
            <given-names>B.</given-names>
            <surname>McBride</surname>
          </string-name>
          ,
          <article-title>RDF/XML syntax specification (revised</article-title>
          ),
          <source>W3C recommendation 10</source>
          (
          <year>2004</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>B.</given-names>
            <surname>Motik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Patel-Schneider</surname>
          </string-name>
          and
          <string-name>
            <given-names>B.</given-names>
            <surname>Parsia</surname>
          </string-name>
          ,
          <article-title>OWL 2 Web ontology language: structural specification and functional-style syntax, 2nd edn</article-title>
          .(
          <year>2012</year>
          ),
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>T.S.</given-names>
            <surname>Tullis</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.N.</given-names>
            <surname>Stetson</surname>
          </string-name>
          ,
          <article-title>A comparison of questionnaires for assessing website usability</article-title>
          ,
          <source>in: Proceedings of Usability Professional Association Conference</source>
          , Citeseer,
          <year>2004</year>
          , pp.
          <volume>1</volume>
          –
          <fpage>12</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>A.</given-names>
            <surname>Bangor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Kortum</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <article-title>Determining what individual SUS scores mean: adding an adjective rating scale</article-title>
          ,
          <source>Journal of usability studies 4</source>
          (
          <issue>3</issue>
          ) (
          <year>2009</year>
          ),
          <volume>114</volume>
          –
          <fpage>123</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>M.</given-names>
            <surname>Jarrar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.M.</given-names>
            <surname>Keet</surname>
          </string-name>
          and
          <string-name>
            <given-names>P.</given-names>
            <surname>Dongilli</surname>
          </string-name>
          ,
          <article-title>Multilingual verbalization of ORM conceptual models and axiomatized ontologies</article-title>
          ,
          <source>Technical Report, Vrije Universiteit Brussel</source>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>K.</given-names>
            <surname>Kaljurand</surname>
          </string-name>
          and
          <string-name>
            <given-names>N.E.</given-names>
            <surname>Fuchs</surname>
          </string-name>
          ,
          <article-title>Verbalizing OWL in Attempto controlled English</article-title>
          ,
          <source>in: Proceedings of OWLED'07</source>
          , Vol.
          <volume>258</volume>
          ,
          <year>2007</year>
          ,
          10 pages.
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>S.F.</given-names>
            <surname>Liang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Stevens</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Scott</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Rector</surname>
          </string-name>
          ,
          <article-title>Automatic verbalisation of SNOMED classes using OntoVerbal</article-title>
          ,
          <source>in: Proceedings of Conference on Artificial Intelligence in Medicine in Europe</source>
          , Springer,
          <year>2011</year>
          , pp.
          <fpage>338</fpage>
          –
          <lpage>342</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>C.</given-names>
            <surname>Kop</surname>
          </string-name>
          ,
          <article-title>How to summarize an OWL domain ontology</article-title>
          ,
          <source>in: Proceedings of Fourth International Conference on Digital Society (ICDS'10)</source>
          , IEEE,
          <year>2010</year>
          , pp.
          <fpage>106</fpage>
          –
          <lpage>111</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>I.</given-names>
            <surname>Bratko</surname>
          </string-name>
          ,
          <source>Prolog Programming for Artificial Intelligence</source>
          , 4th edn, Pearson Education,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>