ORE 2014 Competition
In addition to workshop paper submissions, ORE 2014 also included a competition in which OWL reasoners were faced with different reasoning tasks. The competition comprised six disciplines in which reasoners could compete: ontology classification, consistency checking, and realisation, each for OWL EL and for OWL DL reasoners. The tasks were performed on several large corpora of real-life OWL ontologies obtained from the web, as well as on user-submitted ontologies that were found to be challenging for reasoners.

   The competition framework is available on GitHub at https://github.com/andreas-steigmiller/ore-2014-competition-framework/.

Participating Systems
TrOWL: http://trowl.eu/
Konclude: http://www.derivo.de/en/produkte/konclude/
ELepHant: https://code.google.com/p/elephant-reasoner/
TReasoner: https://code.google.com/p/treasoner/
HermiT: http://www.hermit-reasoner.com/
MORe: http://code.google.com/p/more-reasoner/
ELK: http://code.google.com/p/elk-reasoner/
jcel: http://jcel.sourceforge.net/
FaCT++: http://code.google.com/p/factplusplus/
Jfact: http://sourceforge.net/projects/jfact/
Chainsaw: http://sourceforge.net/projects/chainsaw/

   A package containing all the ORE 2014 reasoners is available from https://zenodo.org/record/11145/ (note that we included a license that only allows usage for reproducing the competition results).

Datasets
The ORE 2014 data set contains a total of 16,555 unique ontologies. The set comprises:

 – the MOWLCorp (Manchester OWL Corpus), which was obtained through a web crawl, the Google Custom Search API, and user submissions (http://mowlrepo.cs.manchester.ac.uk/datasets/mowlcorp/),
 – the Oxford Ontology Library (http://www.cs.ox.ac.uk/isg/ontologies/),
 – a BioPortal (https://bioportal.bioontology.org/) snapshot (June 2014),
 – and user-submitted ontologies such as BioKB, DMOP, FHKB, USDA, DPC, genomic-CDS, and City-Bench.

   The ontologies in the data set are binned by profiles. For the competition, the EL profile bin (8,805 ontologies) and the pure DL bin (7,704 DL ontologies that do not fall into one of the profiles) were used. Two further bins are obtained from these two by considering only the ontologies with an ABox (2,439 DL and 1,941 EL ontologies). The latter two bins are used for the realisation discipline, whereas the former ones are used for the classification and consistency checking disciplines.
   Within these bins, the ontologies are further categorised by size (very small, small, medium, large, very large). A file list is then created by iterating over these size categories (skipping categories that are already fully covered). From each file list, the first X ontologies are used for the competition, where X is chosen such that most reasoners are able to finish within a time limit (7 hours for classification and realisation, 3 hours for consistency checking). For classification, X is 250 (OWL DL) and 300 (OWL EL); for consistency checking and realisation, X is 200 (OWL DL) and 250 (OWL EL).
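
   As a rough illustration of this selection procedure, the following Python sketch interleaves the size categories round-robin and keeps the first X ontologies. The function and variable names, and the exact iteration order, are assumptions made for illustration; this is not the actual competition framework code.

    # Hypothetical sketch of the ORE 2014 file-list selection, assuming a
    # round-robin pass over the size categories (not the framework's code).
    SIZE_CATEGORIES = ["very small", "small", "medium", "large", "very large"]

    def build_file_list(binned, x):
        """binned maps a size category to a list of ontology file names.
        Returns the first x ontologies, drawn one per category per round;
        categories that are already fully covered are skipped."""
        selected, round_no = [], 0
        while len(selected) < x:
            progressed = False
            for category in SIZE_CATEGORIES:
                files = binned.get(category, [])
                if round_no < len(files):  # skip fully covered categories
                    selected.append(files[round_no])
                    progressed = True
                    if len(selected) == x:
                        return selected
            if not progressed:  # every category exhausted before reaching x
                break
            round_no += 1
        return selected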
   The whole data set is available for download at http://zenodo.org/record/10791, and more details about the corpus can be found at http://mowlrepo.cs.manchester.ac.uk/datasets/ore-2014/.


Execution

The competition was executed live on July 18, 2014 on a PC cluster at the University of Manchester provided by Konstantin Korovin. The machines of the cluster were equipped with an Intel Xeon quad-core CPU running at 2.33 GHz and 12 GB RAM, of which 10 GB could be used by the reasoners. The reasoners were executed one per machine, running either natively on the 64-bit Fedora 12 operating system or within a Java Runtime Environment (Java version 1.6). A three-minute time limit was given to every reasoner for each ontology, of which 2.5 minutes were allowed for reasoning; the remaining 0.5 minutes could be used separately for parsing the ontology and serialising the result. Expected results were determined by a majority vote over the hash codes of the normalised results of those reasoners that terminated within the time limits. In case of a draw, one hash code was randomly chosen and declared the expected hash code.
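
   A minimal sketch of this majority vote, assuming the reasoner outputs have already been normalised and hashed (the function name and inputs are illustrative, not taken from the competition framework):

    import random
    from collections import Counter

    def expected_hash(result_hashes):
        """result_hashes: hash codes of the normalised results of all
        reasoners that terminated within the time limits.
        Returns the majority hash code; draws are broken randomly."""
        counts = Counter(result_hashes)
        top = max(counts.values())
        winners = [h for h, c in counts.items() if c == top]
        return random.choice(winners)  # random tie-break, as described above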


Results

The results of the ORE 2014 live competition are available from https://zenodo.org/record/11142/. The competition queries are available from https://zenodo.org/record/11133/.

   The first three reasoners in each discipline (ranked by the number of expected results within the time limit of 3 minutes per ontology) were awarded prizes:

OWL EL Consistency Checking:
1st Prize: ELK
2nd Prize: Konclude
3rd Prize: MORe

OWL DL Consistency Checking:
1st Prize: Konclude
2nd Prize: Chainsaw
3rd Prize: HermiT

  Fig. 1. Results of the consistency checking disciplines (OWL EL & DL)



OWL EL Classification:
1st Prize: Konclude
2nd Prize: MORe
3rd Prize: ELK

OWL DL Classification:
1st Prize: Konclude
2nd Prize: HermiT
3rd Prize: MORe




     Fig. 2. Results of the classification disciplines (OWL EL & DL)




OWL EL Realisation:
1st Prize: Konclude
2nd Prize: TrOWL
3rd Prize: FaCT++

OWL DL Realisation:
1st Prize: Konclude
2nd Prize: FaCT++
3rd Prize: TrOWL




      Fig. 3. Results of the realisation disciplines (OWL EL & DL)


   The competition was also part of the 1st FLoC Olympic Games 2014 (http://vsl2014.at/olympics/), together with 13 other competitions. For the Olympic Games, each competition could award three Kurt Gödel Medals. For ORE 2014, the reasoners were ranked by the ratio of expected results to attempted tasks across all disciplines in which a reasoner participated. The medal winners were:


   1st Prize: Konclude (95.5%)
   2nd Prize: ELK (86.4%)
   3rd Prize: MORe (85.7%)
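
   For illustration, the ranking score is simply the fraction of attempted tasks for which a reasoner produced the expected result. The counts in the example below are hypothetical, chosen only to reproduce a 95.5% score; they are not the actual competition numbers.

    def olympic_score(expected_results, attempted_tasks):
        """Percentage of attempted tasks that yielded the expected result."""
        return 100.0 * expected_results / attempted_tasks

    # Hypothetical counts (not the real ORE 2014 numbers):
    print(f"{olympic_score(191, 200):.1f}%")  # prints 95.5%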



