=Paper=
{{Paper
|id=None
|storemode=property
|title=A Method for Evaluating Ontologies
|pdfUrl=https://ceur-ws.org/Vol-897/ecs_12.pdf
|volume=Vol-897
|dblpUrl=https://dblp.org/rec/conf/icbo/Seyed12
}}
==A Method for Evaluating Ontologies==
* '''Author:''' Patrice Seyed
* '''Supervisors:''' Stuart C. Shapiro, William J. Rapaport, and Barry Smith
* '''Studies/Stage:''' Defended dissertation in December 2011, graduating May 2012
* '''Affiliation:''' Department of Computer Science and Engineering, University at Buffalo
* '''E-Mail:''' apseyed@buffalo.edu

===Aims and Objectives of the Research===
My dissertation work focused on developing a method for evaluating and standardizing ontologies, based on an integration of the Basic Formal Ontology (BFO) and OntoClean [1]. The primary objective is to help standardize the creation of ontologies for the Open Biomedical Ontologies (OBO) Foundry, for which BFO is the chosen upper ontology, given that there are no formal criteria that candidate domain ontologies must meet for ratification into the OBO Foundry. In this project I integrated BFO with the three primary components of OntoClean: Rigidity, Identity, and Unity. The axioms that resulted from integrating BFO with OntoClean's notion of Rigidity serve as the foundation for a decision tree implemented within a prototype Protégé plugin that assists a modeler in evaluating the classes she introduces into an ontology, one at a time. If a class does not satisfy the criteria for consistency with BFO, as reflected in the integration axioms and determined through the answers given to the decision-tree questions, the plugin assists the modeler in re-conceptualizing and reformulating the class in a manner that is BFO-compliant. The dissertation is complete, but the integration work is not fully implemented within the Protégé plugin, and the plugin has not been fully user tested. Therefore our aims are as follows:
* Introduce the integration of BFO with Identity and Unity into the ontology-building Protégé plugin, whether within the existing decision-tree approach or otherwise.
* Design and administer formal user testing to improve overall utility, specifically making improvements to (a) the graphical interface, making it easier to navigate (i.e., usability); (b) the decision-tree questions, including their ordering and how intuitive they are to the modeler; and (c) feedback to the modeler that better explains why her class is not compliant with BFO, as well as feedback on what logical formulas have been constructed and asserted in the ontology on her behalf.

===Justification for the Research Topic===
Ontologies developed for the OBO Foundry include some that have been ratified and others holding the status of "candidate". There are no formal, principled criteria that a candidate OBO Foundry ontology must meet for ratification. Also, there is no available ontology-building software that considers and enforces alignment with BFO, the designated upper ontology of the OBO Foundry. Such criteria and software must be established to maintain consistency with BFO and between domain ontologies. We aim to develop software that accomplishes this based on BFO and its integration with OntoClean, an approach for detecting when the taxonomic relation is being used improperly (the standard Rigidity constraint is sketched below for orientation). Having chosen for our implementation a plugin environment that interoperates with a popular ontology editor, Protégé, we expect that the principles underlying the integration work will become more accessible to both novice and expert domain modelers as they make important classification choices for their ontologies.
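For orientation, the Rigidity notions from OntoClean that the integration builds on can be stated as follows. This is the standard published OntoClean formulation, not the dissertation's BFO-specific integration axioms, which are developed in [1]. A property &phi; is ''rigid'' (+R) if it is essential to all of its instances, and ''anti-rigid'' (~R) if it is essential to none of them:

<math>
+R(\phi) \;\leftrightarrow\; \forall x\,(\phi(x) \rightarrow \Box\,\phi(x)),
\qquad
{\sim}R(\phi) \;\leftrightarrow\; \forall x\,(\phi(x) \rightarrow \Diamond\,\neg\phi(x))
</math>

The associated OntoClean constraint is that an anti-rigid property may not subsume a rigid one; for example, asserting ''Person'' (+R) as a subclass of ''Student'' (~R) is an improper use of the taxonomic relation, whereas ''Student'' as a subclass of ''Person'' is unproblematic.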
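The decision-tree evaluation described under Aims and Objectives can be pictured, purely as an illustration, as a tree of yes/no questions that terminates in advice or a compliance verdict. The following is a hypothetical sketch in Java (the language of Protégé plugins); the class name, fields, and question handling are invented here and do not reproduce the actual plugin code.

<syntaxhighlight lang="java">
import java.util.Iterator;

/**
 * Hypothetical sketch of a yes/no decision-tree node of the kind the
 * plugin could use to walk a modeler through BFO-compliance questions.
 * Names and behavior are illustrative only, not taken from the plugin.
 */
public final class DecisionNode {
    private final String question;        // yes/no question posed to the modeler (null at a leaf)
    private final DecisionNode yesBranch; // next question if the answer is "yes"
    private final DecisionNode noBranch;  // next question if the answer is "no"
    private final String verdict;         // non-null only at a leaf (e.g., advice or a compliance result)

    private DecisionNode(String question, DecisionNode yes, DecisionNode no, String verdict) {
        this.question = question;
        this.yesBranch = yes;
        this.noBranch = no;
        this.verdict = verdict;
    }

    /** Creates an internal node that asks a question and branches on the answer. */
    public static DecisionNode question(String question, DecisionNode yes, DecisionNode no) {
        return new DecisionNode(question, yes, no, null);
    }

    /** Creates a leaf node carrying a verdict or a piece of advice for the modeler. */
    public static DecisionNode verdict(String verdict) {
        return new DecisionNode(null, null, null, verdict);
    }

    /** Walks the tree with a sequence of yes/no answers and returns the verdict reached. */
    public String evaluate(Iterator<Boolean> answers) {
        DecisionNode node = this;
        while (node.verdict == null && answers.hasNext()) {
            node = answers.next() ? node.yesBranch : node.noBranch;
        }
        return node.verdict != null ? node.verdict : "undetermined: " + node.question;
    }
}
</syntaxhighlight>

A real implementation would also record which integration axioms each answer licenses, since the plugin asserts logical formulas in the ontology on the modeler's behalf, as described above.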
===Research Questions===
* What approach should be applied for integrating Identity and Unity into the Protégé plugin for evaluating ontologies?
** If the decision-tree approach is maintained, how does this affect the ordering of the decision-tree questions and the format of having the modeler predominantly answer yes/no questions?
* What kind of experiment design would be most beneficial for gathering the sort of results that will help improve the graphical interface, the decision tree, and the user feedback?
* Are there improvements that can be made, given our intent for the plugin, that go beyond the current set of integration axioms?

===Research Methodology===
The methodology for the integration work required ontological and logical analysis of the various aspects of BFO and of OntoClean's theory. The current challenge is to apply an appropriate methodology for user testing the Protégé plugin. In preliminary user testing we asked several users to try the plugin and simply asked them for feedback about the utility of the interface and the intuitiveness of the decision-tree questions. This is akin to a study that uses a survey to gain user feedback, although less structured. We would like to establish a more rigorous experiment design that reveals how users would prefer to interact with the software.

===Research Results to Date===
We received some preliminary user feedback, which we are in the process of addressing:

{| class="wikitable"
! User Feedback !! Category of Feedback
|-
| "It would be interesting to put a start screen that explains what the plug-in does. Even if the plug-in has been created for advanced users of Protégé, a presentation would be good for help those who have problems with the concepts of BFO." || The Graphical Interface
|-
| "Question 2 and 9 contain much the same content. I recommend that the developers refactor the decision tree based on the content of the questions to ensure that multiple similar-sounding questions are never asked of the users." || The Decision-Tree Questions
|-
| "Maybe an explanation of why the plugin take a different course of questions when you answer yes/no in the homogeneity screen." || Feedback to the Modeler
|}

===References===
# Seyed AP (2012). ''A Method for Evaluating and Standardizing Ontologies''. Ph.D. Dissertation, Department of Computer Science and Engineering, University at Buffalo.