

Efficient Reasoner Performance Prediction using Multi-label
learning
Ashwin Makwana
Department of Computer Engineering, CSPIT, Charotar University of Science and Technology, CHARUSAT,
 Gujarat, India.

Abstract
A reasoner is the mechanism for interpreting the semantics of a web ontology language. This paper studies reasoner performance and predicts it using machine learning. Reasoner evaluation is challenging because a reasoner's efficiency may vary across ontologies of the same complexity level, and different reasoners produce different inferences for the same ontology; thus, a reasoner may perform well on some ontologies but not on all of them. The paper focuses on the performance variability of reasoners and on how ontology features affect reasoner performance. The main goal is to provide simple, efficiently computable guidelines to users. For prediction, supervised machine learning is used to capture these dependencies. First, we introduce a new collection of efficiently computable ontology features that characterize the design quality of an OWL ontology. Second, we model two learning problems: predicting the overall empirical hardness of OWL ontologies with respect to a group of reasoners, and anticipating the robustness of a single reasoner when inferring over ontologies under online usage constraints. To fulfil this goal, a generic learning framework is used which integrates the introduced ontology features and employs a rich set of machine learning models and feature selection methods. Furthermore, we use multi-label learning; analyzing the learned models unveiled a set of crucial ontology features likely to alter empirical reasoner robustness.

                Keywords
                Semantic Web, Ontology, Reasoner, Multi-label learning, Supervised learning, Prediction.

ISIC'21: International Semantic Intelligence Conference, February 25-27, 2021, Delhi, India.
EMAIL: ashwinmakwana.ce@charusat.ac.in (Ashwin Makwana)
ORCID: 0000-0002-4232-9598 (Ashwin Makwana)
© 2020 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org)

Introduction

The problem we focus on is measuring semantic web reasoner performance and empirically assessing multiple reasoners, so that an application developer can find the most suitable reasoner for a given ontology. We propose machine learning techniques, based on ontology features, to predict the correctness (relevance) and the reasoning time of a set of reasoners for a reasoning task. First, we carried out an empirical study of individual reasoner performance prediction, for correctness and reasoning time, using various ML models. After that, we propose a multi-label classification technique to predict reasoning time and reasoner relevance. We consider both OWL ontology profiles (DL and EL), and we check the performance parameters and compare them with the benchmark.

The Semantic Web requires a standard, machine-processable representation of ontologies, and the W3C has defined standard models and languages for this purpose: the Resource Description Framework (RDF) [1] and the Web Ontology Language (OWL) [2]. Ontologies represented with these languages are becoming prevalent, ranging up to large domain-specific ontologies such as the Gene Ontology. A semantic web reasoner is one of the crucial components for fetching relevant knowledge from an ontology, so selecting an appropriate reasoner is an essential task for a semantic web developer, and during reasoner selection a prediction of the reasoner's performance is required.

Description-logic-based reasoners are crucial elements for working with OWL ontologies.


They are used to produce explicit knowledge from ontologies, to check their consistency, and for many other tasks. People build ontologies by encoding domain knowledge, trying to obtain more expressive and representative ontologies; but the more expressive the ontology, the more complex the reasoning. In the worst case, reasoning can be non-deterministic double exponential. Thankfully, in practice, reasoning is feasible even with very expressive ontologies; in general, however, the theoretical complexity does not match the empirical complexity.

There is an ample number of reasoners available for semantic web applications, and it is difficult for an application developer to choose the right reasoner for the ontology of a domain-specific application. To evaluate ontology reasoners, the OWL Reasoner Evaluation workshop is organized every year. In this evaluation process, there are two significant issues. The first is the disparity of reasoners' computing times, which causes an efficiency problem: for classes of the same size and expressivity, we get different computational times. The second is the disparity of reasoners' computed results, which causes a correctness problem: for classes of the same size and expressivity, we get different agreement levels. To resolve these two issues, various explanations have been given by many researchers [3]–[6], but no tools are available to cope with these phenomena.

The main research gap in this area is that, with the exponential growth in the number of reasoners, there is variable empirical reasoner performance, together with a lack of prior knowledge and expertise in this field. The crux of this gap is how to help an application developer choose the appropriate and suitable reasoner for working with a domain-specific ontology in a given application. To address these issues, many researchers [7]–[10] used machine learning techniques to learn a reasoner's future behavior from its past runs, predicting single-reasoner performance for a given input ontology. Recently, a few works were carried out for predicting and ranking a set of reasoners for a given input ontology, namely R2O2 [11] and RakSOR [12]. R2O2 works on a reasoning optimization technique, but it has issues [11]: it uses only runtime as a criterion, there is no user assistance, the prediction steps are very costly, and it supports DL ontologies only. RakSOR [12] supports user assistance and takes runtime as well as correctness as ranking criteria, but it uses a complicated and time-consuming process and also supports only DL ontologies. Multi-RakSOR [13] performs automatic ranking of ontology reasoners by combining multi-label classification and multi-target regression techniques; it focuses on reasoner ranking and reasoner relevance prediction, uses correctness and efficiency as ranking criteria, and considers both the EL and DL profiles of ontologies, but it requires further optimization steps to improve performance. All three of these ranking works neither address other reasoning tasks such as consistency checking and realization, nor consider the memory and energy usage of reasoners.

Machine learning can bring a solution to this problem, since it can help us anticipate future reasoner behavior by analyzing past runs. Predicting single-reasoner performance includes predicting the ranking of reasoners for a given input ontology. However, all these solutions have drawbacks. So, we propose a new approach that automatically ranks reasoners to recommend the fastest one.

The contribution of this work is a reasoner performance parameter prediction method, experimented with and implemented on top of the ORE framework tool using Python libraries.

Literature Survey

Ontology Features and Metrics

Ontology features are qualitative and quantitative attributes covering structural and syntactic measures of a given ontology. Ontology metrics or functions are used for reasoner performance prediction.

Ontology metrics can be used to predict classification time. Based on the parameters presented in [8], the authors of [14] proposed twenty-seven parameters that characterize a given ontology's complexity and structure. Many other metrics were proposed in the literature [15] [16] [14] to measure various parameters of ontologies. In one article [15], the authors estimate the quality of an ontology in the manner of software engineering measures, presenting a framework based on a suite of four metrics.


Another group of authors [9] claim that the metrics proposed in [7] are not sufficient; they used ML techniques and additional ontology metrics to significantly reduce the dimensionality of the ontology feature space, and on that basis they identified vital features that correlate with reasoner performance. Many ontology features have been identified in the literature for building learning models that predict reasoner classification time. Recently, one research group [10] reused those features and defined new ones; compared to [9], they identified a total of 112 ontology features and split them into four categories: size description, expressivity description, structural features, and syntactic features. In one paper, the authors [17] proposed a set of metrics covering various aspects of ontology design. These metrics include all major ontology characteristics and are useful for reasoner performance prediction. Automatic computation of these metrics is possible using efficient tools and methods, which helps us predict reasoner performance. In [17], mainly eight ontology metrics were defined by considering ontology design complexity; ontology-level and class-level metrics are the two main types, and for each ontology a total of twenty-seven distinct metrics are considered. Figure 1 shows the classification of the various ontology metrics. They are divided into the categories Ontology Size, Ontology Expressivity, Ontology Structure, and Ontology Syntax, which are further divided into various subcategories.
Figure 1: Ontology Features and Metrics classification. Size Description: Signature Size (SSIGN: SC, SOP, SDP, SI, SDT) and Axiom Size (OAS). Expressivity Description: DL Family (OWL profile). Structural Description: Hierarchy (HC, HP: MDepth, ADepth, MSibling, ASibling, MTangledness, Tangledness), Cohesion (HC-Cohesion, HP-Cohesion, OP Cohesion, OMT Cohesion), Richness (RRichness, AttrRichness). Syntactic Description: Axioms (KBF, ATF, ADF sets), Constructors (CCF set, ACCM, OCCD, CCP set), Classes (CDP set, CCYC, CDISJ), Properties (OPCF, HRF sets), Individuals (NFF, ISF sets).
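To make these size-description metrics concrete, the following minimal sketch counts a few of them. It is an illustration only: it assumes the rdflib library and an RDF/XML-serialized OWL file, and it is not the feature extractor used in this work.

    # Illustrative sketch only (not the paper's feature extractor): counting a few
    # size-description metrics of an OWL ontology with rdflib. The file name is a placeholder.
    from rdflib import Graph
    from rdflib.namespace import RDF, OWL

    def size_description_metrics(path):
        g = Graph()
        g.parse(path, format="xml")  # assumes an RDF/XML-serialized OWL file
        return {
            "classes (SC)": len(set(g.subjects(RDF.type, OWL.Class))),
            "object properties (SOP)": len(set(g.subjects(RDF.type, OWL.ObjectProperty))),
            "data properties (SDP)": len(set(g.subjects(RDF.type, OWL.DatatypeProperty))),
            "individuals (SI)": len(set(g.subjects(RDF.type, OWL.NamedIndividual))),
            "triples (rough proxy for axiom size, OAS)": len(g),
        }

    if __name__ == "__main__":
        print(size_description_metrics("example_ontology.owl"))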

Survey on Reasoners

This brief study identifies the types of reasoners available, along with their characteristics, descriptions, and attributes. In paper [18], a group of authors divides the attributes of ontology reasoners into three main categories: reasoning characteristics, practical usability, and performance indicators. The first category describes the basic features of ontology reasoners. The second type of attribute determines whether the reasoner implements the OWL API, and also describes the availability and license of the reasoners. The third type is used to measure the performance of ontology reasoners, e.g., classification performance, TBox consistency checking performance, etc.


Various reasoners are available; a comparative survey is presented based on [19], [20]. This survey covers ten major reasoners included in the scope of the current study.

Survey on Reasoners Performance Benchmark

There is a requirement to measure, benchmark, and characterize the performance of the various reasoners available. The main aim of the SEALS project was to evaluate DL-based reasoners, and a comparison of three reasoners was made for the standard inference services. In [8], the authors used a data set of 300 ontologies and completed a comparative study analyzing the performance of the reasoners. Relating reasoner performance to ontology metrics by means of machine learning techniques gave a better idea of the complexity of the individual ontologies. Classification is performed on the ontologies using different reasoners, and a comprehensive study is done regarding the variability and size of the dataset, with more than three hundred ontologies. They also found some unique attributes through a thorough study; such characteristics are used in reasoner comparison and selection for a given set of performance criteria.

Paper [21] focuses on benchmarking related to data sources and mappings to create more practical synthetic ontologies under managed conditions, using real-world ontology statistics to parameterize the benchmark. Workshop [22] focused on bringing together both developers and users of OWL reasoning systems which can use the SEALS platform; reasoning systems like jcel, FaCT++, WSReasoner, and HermiT were present. The OWL Reasoner Evaluation (ORE) [17] workshop encouraged reasoner developers and ontology engineers to analyze the performance of new reasoners on OWL ontologies. The categorization, stability, and other factors of the reasoners were tested in the live and offline reasoner competitions of the workshop. A total of 14 reasoners were submitted, implementing specific subsets of OWL 2. The reasoner competition is performed on many OWL ontologies obtained from the Web and on ontologies submitted by users.

Performance Prediction of Reasoner

Classification and prediction are the two main machine learning techniques, especially in supervised learning, that need to be applied during reasoner performance prediction. Classification is the ML technique used to identify the class of a new object (an ontology, text, images, etc.) from a given set of classes. Reasoner performance [8] is measured using various parameters, and judging a performance parameter [9] before using a reasoner in a semantic web application is a significant research issue. Ontology metrics can be used to predict classification time: based on the metrics presented in [14], the authors of [7] proposed a total of twenty-seven metrics for a given ontology, and other metrics proposed in the literature [15], [16], [23] observe the quality, complexity, and cohesion of ontologies. Many ontology features have been identified in the literature for preparing learning models for reasoner classification time prediction; in recent literature [10], researchers reused those features and defined new ones compared to earlier work in this area, identifying a total of 112 ontology features. We can use machine learning techniques to predict ontology classification performance.
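As a rough illustration of this idea, the sketch below trains a classifier to predict a discretized classification-time class from ontology metrics. The file name, column names, and time bins are invented placeholders, not the setup of the cited works.

    # Sketch: predicting a discretized classification-time class from ontology metrics.
    # The file name, column names, and bin edges are illustrative placeholders.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("ontology_metrics.csv")            # hypothetical: one row per ontology
    X = df.drop(columns=["classification_time_ms"])     # ontology metrics (SC, SOP, OAS, ...)
    y = pd.cut(df["classification_time_ms"],            # arbitrary easy/medium/hard bins
               bins=[0, 1e3, 1e5, float("inf")], labels=["easy", "medium", "hard"])

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
    clf = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_tr, y_tr)
    print("time-bin accuracy:", accuracy_score(y_te, clf.predict(X_te)))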
Proposed Reasoner Prediction Framework

In Semantic Web applications with ontologies, the behavior of the reasoners used is very unpredictable. There are two main reasons for this: a single reasoner can exhibit enormous scatter in computational runtime across the same ontologies, and different reasoners can derive different inferences for the same input ontology. This shows how hard it is, even for experienced reasoner developers, to understand a reasoner's empirical behavior.

For selecting the best reasoner for a semantic web application by evaluating reasoner performance, our hypothesis is that, based on ontology features and metrics, we can predict a reasoner's performance and the best-suited reasoner using machine learning techniques, especially multi-label learning algorithms and ranking techniques.


The following are the steps for reasoner performance prediction (a small sketch of the ranking step is given after the list):
    • Import data containing ontology features and reasoner performance parameters (a standard OAEI data set).
    • Select the standard test data and train data given in the dataset.
    • Define the various features using feature selection, i.e., ontology characteristics and metrics. Define the targets, i.e., reasoning time and reasoner status.
    • Fit multi-target classifiers for relevant and irrelevant reasoners for the given ontology set.
    • Arrange the reasoners for each ontology: relevant reasoners first, ordered by reasoning time, followed by irrelevant reasoners, also ordered by time.
    • Assign ranks according to the above arrangement.
    • Fit a classifier/regressor to predict the ranks.
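The ranking arrangement described in the list can be sketched as follows; the reasoner names and timings used here are illustrative values, not measurements.

    # Sketch of the ranking arrangement above: relevant reasoners first (ordered by
    # reasoning time), then irrelevant ones (also by time). The example values are made up.
    def rank_reasoners(relevance, times):
        """relevance: {reasoner: bool}, times: {reasoner: seconds} -> {reasoner: rank}."""
        ordered = sorted(times, key=lambda r: (not relevance[r], times[r]))
        return {reasoner: rank for rank, reasoner in enumerate(ordered, start=1)}

    example_relevance = {"HermiT": True, "Pellet": True, "JFact": False}
    example_times = {"HermiT": 12.4, "Pellet": 3.1, "JFact": 0.9}
    print(rank_reasoners(example_relevance, example_times))
    # -> {'Pellet': 1, 'HermiT': 2, 'JFact': 3}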
Reasoner Performance Prediction using ML

The main aim is to automatically predict a reasoner's time efficiency and correctness. To achieve this goal, machine learning approaches have been suggested that include the following steps. First, we need a set of valuable ontology features that the machine learning model will use to learn about ontologies. The ORE 2014 framework is widely used to conduct experiments on various reasoners to measure their performance on a given ontology corpus. Finally, supervised machine learning techniques are deployed to learn predictive models of reasoner performance based on previous executions. By interpreting these models, we can observe that a few main features may change the performance of a reasoner.

Feature selection is one of the prime steps in preprocessing a dataset for training a model in machine learning. Its main purpose is to select the most relevant features and exclude non-useful ones. Other researchers have used the supervised discretization method (MDL), the Relief method (RLF), and the CFS Subset method (CFS). We have used feature variance and feature correlation with the label data.

The supervised learning algorithms can be grouped into logic-based algorithms such as decision trees, Artificial Neural Network (ANN) based techniques such as the multi-layer perceptron, and statistical learning algorithms such as Bayesian networks and SVMs. We can use supervised machine learning algorithms like Random Forest, Simple Logistic Regression, Multilayer Perceptron (an ANN-based learner), SMO (an SVM-based learner), and IBk (a k-nearest-neighbor-based learner).
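As a minimal sketch of the variance and label-correlation filters mentioned above (assuming a pandas feature matrix and a numeric 0/1 relevance label for one reasoner; the thresholds are arbitrary):

    # Sketch: filtering ontology features by variance and by correlation with the label.
    # X is a DataFrame of ontology features; y is a numeric 0/1 relevance label for one reasoner.
    import pandas as pd
    from sklearn.feature_selection import VarianceThreshold

    def select_features(X: pd.DataFrame, y: pd.Series, var_threshold=0.0, corr_threshold=0.1):
        vt = VarianceThreshold(threshold=var_threshold)   # 1) drop (near-)constant features
        vt.fit(X)
        kept = X.columns[vt.get_support()]
        corr = X[kept].corrwith(y).abs()                  # 2) keep features correlated with y
        return corr[corr >= corr_threshold].index.tolist()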


Multi-Label Learning for selection of Reasoner

The limitation of single-label learning is that it may not give consistent output when selecting a reasoner based on multiple criteria. Multi-label learning with multiple criteria is useful because a single criterion may not give a consistent result.

The reason to apply multi-label classification is that, for each ontology, there may be multiple possible correct reasoners. This inspired us to use multi-label classification for predicting the relevance of reasoners for a given ontology. The ranking of reasoners can then be decided based on multiple criteria, i.e., the correctness (relevance) of a reasoner for the given ontology and the time taken to reason over that ontology. To decide, out of all possible correct reasoners, which one to try first for a given ontology, we finalized two ranking criteria: correctness and the time required for reasoning.

A reasoner selection methodology was recently discussed by Alaya et al. [13] in their paper on multi-label learning for ontology reasoner ranking. Based on that study, the authors suggested that multi-label classification can be applied to reasoners based on ontology features. They used the Binary Relevance method [24] for multi-label classification, which is one of the problem transformation methods of MLC, and for prediction they used multi-target regression, specifically an Ensemble of Regression Chains [25].

To find a better approach, we experimented with various MLC models and found that an ensemble approach to MLC performs better than plain Binary Relevance; in particular, we used an ensemble Random Forest model. For multi-target regression as well, we used Random Forest, which outperforms the regression chain method.

If we compare the different problem transformation methods for multi-label learning, the classifier chain is not advisable for exploiting the correlation between targets, because it gives the label (reasoner) predicted last a better chance of a higher rank. The label powerset method is not technically suitable for our dataset of 1900 ontologies, in which the combinations of 10 labels exceed 1000; in other words, the number of classes grows to more than 500, which is not good. Because of that, we used Binary Relevance as the multi-label learning method with Random Forest as the base algorithm, since it performs better than k-NN, Logistic Regression, MLP, AdaBoost, Naive Bayes, and QDA.
Assessment Measures

Evaluation and assessment measures are used to check the quality of an ML model. For a binary ML scenario, we have the TP, TN, FP, and FN values for assessment, from which we can calculate the F1-measure, precision, and recall. For assessing ML with multi-class models on an imbalanced dataset, we can use measures such as the F1-measure, the Kappa coefficient, and the Matthews correlation coefficient. We propose these measures for selecting the best predictive model for a reasoner. The relevance prediction model is assessed and compared with the existing system using Hamming loss and the F1 measure.
Experimentation and Results

Experimentation Setup

Experiments were run to collect data on the empirical behavior of reasoners for the classification task over a given set of OWL ontologies. For this, we work with the evaluation tools of the ORE (OWL Reasoner Evaluation Workshop) [26] competition, which include the ORE Framework¹ and the ontology corpora. We compare 10 reasoners on the classification of 1900 distinct ontologies. For reasoner performance prediction, we used the Python language with Jupyter Notebook as the Python IDE, and Python libraries such as numpy, pandas, matplotlib, sklearn, xgboost, skmultilearn, and their subclasses for the prediction and classification of reasoner performance.

Dataset

The ontology data set is taken from the ORE Corpora²; around 1900 ontologies were collected from this source and used for reasoner performance prediction. A set of reasoners³ from popular categories was selected as candidates for the performance evaluation and prediction process. Reasoner correctness/robustness and performance time were generated for 10 reasoners that have shown good efficiency in the ontology classification task during ORE competitions. The list of 10 reasoners includes ELK, Konclude, MORe, ELepHant, HermiT, TrOWL, Pellet, FaCT++, Racer, and JFact.

¹ ORE Framework: https://github.com/andreas-steigmiller/ore-2014-competition-framework/
² Ontology corpus: http://zenodo.org/record/10791
³ Reasoners: https://zenodo.org/record/11145

Implementation

We start by evaluating the reasoners to obtain empirical data describing their performance on a large set of ontologies. For this we selected the tools proposed in the OWL Reasoner Evaluation workshop: we took their ORE framework and set up classification challenges (DL and EL) over the 1900 ontologies. All the DL ontologies are handled by 8 reasoners, and in the second challenge the EL ontologies are handled by ten reasoners (the 8 plus ELK and ELepHant), with a time limit of 3 minutes.

The steps of the experiment, using machine learning to estimate the best reasoner for an ontology, are as follows (a sketch of the multi-label prediction step is given after the list):
    1. Import data.
    2. Feature selection.
    3. Select test and train data.
    4. Apply ML methods for predicting reasoner relevance (for 10 reasoners).
    5. Apply ML methods for predicting reasoner time (for 10 reasoners).


    6. Then select the best method for predicting relevance.
    7. Predict the reasoners' performance using multi-label classification/regression.
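As a sketch of the multi-label prediction step (steps 4 and 7), the following uses Binary Relevance with a Random Forest base learner from scikit-multilearn, as described earlier, and reports some of the assessment measures. The file names and column layout are assumptions made for illustration.

    # Sketch of steps 4 and 7: multi-label relevance prediction with Binary Relevance and a
    # Random Forest base learner (scikit-multilearn). File and column layouts are placeholders.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score, hamming_loss, precision_score, recall_score
    from sklearn.model_selection import train_test_split
    from skmultilearn.problem_transform import BinaryRelevance

    features = pd.read_csv("ontology_features.csv")       # one row per ontology
    labels = pd.read_csv("reasoner_relevance.csv")        # ten 0/1 columns, one per reasoner

    X_tr, X_te, y_tr, y_te = train_test_split(features.values, labels.values,
                                              test_size=0.3, random_state=42)

    model = BinaryRelevance(classifier=RandomForestClassifier(n_estimators=200, random_state=42))
    model.fit(X_tr, y_tr)
    y_pred = model.predict(X_te).toarray()                # predictions come back as a sparse matrix

    print("Hamming loss:", hamming_loss(y_te, y_pred))
    print("F1 (micro):  ", f1_score(y_te, y_pred, average="micro"))
    print("Precision:   ", precision_score(y_te, y_pred, average="micro"))
    print("Recall:      ", recall_score(y_te, y_pred, average="micro"))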
Result and Discussion

For reasoner performance prediction, we applied various machine learning models: k-NN, Decision Tree, Random Forest, Neural Network, and AdaBoost. After applying these models to the dataset, we predicted execution time as the target variable and measured and compared the error rate, i.e., the Root Mean Square (RMS) error given by each model for each of the 10 reasoners. Figure 2 shows that Random Forest performs best for all ten reasoners compared to all other models, while the neural network is the worst model for the majority of reasoner performance predictions.
Neural Network, and AdaBoost. After applying

Figure 2: Prediction of reasoner execution time using various ML models (RMS error rate per reasoner for Nearest Neighbors, Decision Tree, Random Forest, Neural Net, and AdaBoost).


Figure 3: Accuracy of relevance prediction for all reasoners using various ML models (Nearest Neighbors, Decision Tree, Random Forest, Neural Net, AdaBoost, Naive Bayes, QDA).



Figure 4: F1 measure of relevance prediction for all reasoners using various ML models (Nearest Neighbors, Decision Tree, Random Forest, Neural Net, AdaBoost, Naive Bayes, QDA).

Figure 3 shows a summary of the accuracy graphs for the various reasoners across all ML models. We also checked the prediction performance using the F1 measure, as shown in the Figure 4 graph.


This graph shows that Random Forest gives the best result in terms of the F1 measure. From these graphs we can also identify the dominant reasoner on the DL ontologies; we can see that HermiT, despite its high rate of correctness, is very slow, while an EL-specialized reasoner is dominant when handling EL ontologies. All of this data serves to create a learning data set. We therefore split the data into train and test sets to learn the MultiRakSOR-style predictive models, and then assessed the predictive quality of the reasoner relevance prediction. Our results show that our algorithm outperformed the existing solution.

We used multi-label classification with problem transformation methods and adapted algorithms: MLkNN, BPMLP, RAkEL, and Random Forest. MLkNN is a multi-label version of the existing k-NN algorithm; it does not divide the problem into subproblems. BPMLP is a multi-label version of a neural-network-based algorithm. RAkEL is the Random k-Labelsets method. We also used a Random Forest variant adapted for multi-label classification. We observe the results of the multi-label learning methods for predicting reasoner performance using various parameters: Hamming loss, accuracy, Jaccard similarity, precision, recall, and F1-measure.
Table 1 and Figure 5 show that Random Forest yields a significant improvement over the other multi-label learning models, including MultiRakSOR, especially for the Hamming loss and F1-measure parameters.
Table 1: Multi-Label Learning Model Performance Analysis
                        MLkNN    BPMLP    RAkEL    MultiRakSOR    Random Forest
Hamming-Loss             0.14     0.5      0.14        0.13            0.05
Accuracy                 0.45     0        0.05          -             0.72
Jaccard-similarity       0.83     0.4      0.82          -             0.93
Precision                0.88     0.51     0.84          -             0.95
Recall                   0.95     0.43     0.86          -             0.98
F1-Measure               0.91     0.4      0.85        0.95            0.97


Figure 5: Multi-Label Learning Model Performance Analysis (Hamming-Loss, Accuracy, Jaccard-similarity, Precision, Recall, and F1-Measure for MLkNN, BPMLP, RAkEL, and RF).
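A Table 1-style comparison can be sketched as follows. It covers only the models readily available in scikit-multilearn (MLkNN, RAkEL, and Binary Relevance with Random Forest) and reuses the placeholder data layout of the earlier sketches, so it is an approximation of the protocol rather than a reproduction of it.

    # Sketch of a Table 1-style comparison; MLkNN and RAkEL come from scikit-multilearn,
    # while BPMLP and MultiRakSOR are omitted because they are not available off the shelf.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score, f1_score, hamming_loss, jaccard_score
    from sklearn.model_selection import train_test_split
    from skmultilearn.adapt import MLkNN
    from skmultilearn.ensemble import RakelD
    from skmultilearn.problem_transform import BinaryRelevance

    features = pd.read_csv("ontology_features.csv")       # placeholder files, as in earlier sketches
    labels = pd.read_csv("reasoner_relevance.csv")
    X_tr, X_te, y_tr, y_te = train_test_split(features.values, labels.values,
                                              test_size=0.3, random_state=42)

    models = {
        "MLkNN": MLkNN(k=10),
        "RAkEL": RakelD(base_classifier=RandomForestClassifier(random_state=42), labelset_size=3),
        "BR + Random Forest": BinaryRelevance(classifier=RandomForestClassifier(random_state=42)),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        pred = model.predict(X_te).toarray()
        print(name,
              "Hamming=%.3f" % hamming_loss(y_te, pred),
              "Accuracy=%.3f" % accuracy_score(y_te, pred),          # subset accuracy
              "Jaccard=%.3f" % jaccard_score(y_te, pred, average="samples"),
              "F1=%.3f" % f1_score(y_te, pred, average="micro"))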

Conclusions

The Semantic Web stores heterogeneous data in a structured way using the ontology concept; to fetch the answer to a user's query, we require a reasoner and logical rules, and for understanding and using semantic data on the web a reasoner is required.

To help the Semantic Web application developer select an appropriate reasoner, we have proposed machine-learning-based models for relevance and reasoning time prediction for a given ontology. We have applied the multi-label learning method to predict the rank of the various reasoners.


Using single-label prediction methods on the given data set of ontologies and reasoners, we have shown that Random Forest gives the best performance in terms of the performance parameters. In the same way, among the multi-label learning models, the Random Forest variant outperforms MLkNN, BPMLP, RAkEL, and the recently proposed MultiRakSOR in terms of Hamming loss (0.05), accuracy, Jaccard similarity, precision, recall, and F1-measure (0.97).

In future work, we could expand this approach to a greater number of ontologies and to multiple domains. We could also extend our work to a SPARQL query performance measurement benchmark, empirically using a greater number of queries and an increased number of experiments.

References

[1] G. Klyne and J. Carroll, "Resource Description Framework (RDF): Concepts and Abstract Syntax," W3C Recommendation, Feb. 2004.
[2] G. Antoniou and F. van Harmelen, "Web Ontology Language: OWL," in Handbook on Ontologies, S. Staab and R. Studer, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004, pp. 67–92.
[3] T. Gardiner, D. Tsarkov, and I. Horrocks, "Framework for an Automated Comparison of Description Logic Reasoners," in The Semantic Web - ISWC 2006, Nov. 2006, pp. 654–667, doi: 10.1007/11926078_47.
[4] M. Lee, N. Matentzoglu, B. Parsia, and U. Sattler, "A Multi-reasoner, Justification-Based Approach to Reasoner Correctness," in The Semantic Web - ISWC 2015, vol. 9367, M. Arenas, O. Corcho, E. Simperl, M. Strohmaier, M. d'Aquin, K. Srinivas, P. Groth, M. Dumontier, J. Heflin, K. Thirunarayan, and S. Staab, Eds. Cham: Springer International Publishing, 2015, pp. 393–408.
[5] R. S. Gonçalves, B. Parsia, and U. Sattler, "Performance Heterogeneity and Approximate Reasoning in Description Logic Ontologies," in The Semantic Web – ISWC 2012, vol. 7649, P. Cudré-Mauroux, J. Heflin, E. Sirin, T. Tudorache, J. Euzenat, M. Hauswirth, J. X. Parreira, J. Hendler, G. Schreiber, A. Bernstein, and E. Blomqvist, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012, pp. 82–98.
[6] T. D. Wang and B. Parsia, "Ontology Performance Profiling and Model Examination: First Steps," in The Semantic Web, vol. 4825, K. Aberer, K.-S. Choi, N. Noy, D. Allemang, K.-I. Lee, L. Nixon, J. Golbeck, P. Mika, D. Maynard, R. Mizoguchi, G. Schreiber, and P. Cudré-Mauroux, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007, pp. 595–608.
[7] Y.-B. Kang, Y.-F. Li, and S. Krishnaswamy, "Predicting Reasoning Performance Using Ontology Metrics," in The Semantic Web – ISWC 2012, vol. 7649, P. Cudré-Mauroux, J. Heflin, E. Sirin, T. Tudorache, J. Euzenat, M. Hauswirth, J. X. Parreira, J. Hendler, G. Schreiber, A. Bernstein, and E. Blomqvist, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012, pp. 198–214.
[8] Y.-B. Kang, Y.-F. Li, and S. Krishnaswamy, "A Rigorous Characterization of Classification Performance - A Tale of Four Reasoners," 2012.
[9] V. Sazonau, U. Sattler, and G. Brown, "Predicting Performance of OWL Reasoners: Locally or Globally?," in Proceedings of the Fourteenth International Conference on Principles of Knowledge Representation and Reasoning, Vienna, Austria, 2014, pp. 661–664. [Online]. Available: http://dl.acm.org/citation.cfm?id=3031929.3032020
[10] N. Alaya, S. B. Yahia, and M. Lamolle, "Towards Unveiling the Ontology Key Features Altering Reasoner Performances," CoRR, vol. abs/1509.08717, 2015. [Online]. Available: http://arxiv.org/abs/1509.08717
[11] Y.-B. Kang, S. Krishnaswamy, and Y.-F. Li, "R2O2: An Efficient Ranking-Based Reasoner for OWL Ontologies," in The Semantic Web - ISWC 2015, 2015, pp. 322–338.
[12] N. Alaya, S. Ben Yahia, and M. Lamolle, "RakSOR: Ranking of Ontology Reasoners Based on Predicted Performances," in 2016 IEEE 28th International Conference on Tools with Artificial Intelligence (ICTAI), San Jose, CA, USA, Nov. 2016, pp. 1076–1083, doi: 10.1109/ICTAI.2016.0165.


[13] N. Alaya, M. Lamolle, and S. Ben Yahia, "Multi-label Based Learning for Better Multi-criteria Ranking of Ontology Reasoners," in The Semantic Web – ISWC 2017, vol. 10587, F. Lecue, P. Cudré-Mauroux, J. Sequeda, C. Lange, and J. Heflin, Eds. Cham: Springer International Publishing, 2017, pp. 3–19.
[14] H. Zhang, Y.-F. Li, and H. B. K. Tan, "Measuring Design Complexity of Semantic Web Ontologies," J. Syst. Softw., vol. 83, no. 5, pp. 803–814, May 2010, doi: 10.1016/j.jss.2009.11.735.
[15] A. Burton-Jones, V. C. Storey, V. Sugumaran, and P. Ahluwalia, "A semiotic metrics suite for assessing the quality of ontologies," Data & Knowledge Engineering, vol. 55, no. 1, pp. 84–102, 2005, doi: 10.1016/j.datak.2004.11.010.
[16] H. Yao, A. M. Orme, and L. Etzkorn, "Cohesion metrics for ontology design and application," Journal of Computer Science, vol. 1, no. 1, pp. 107–113, 2005.
[17] S. Bail et al., Eds., Proceedings of the 2nd International Workshop on OWL Reasoner Evaluation (ORE-2013), Ulm, Germany, July 22, 2013, vol. 1015. CEUR-WS.org, 2013.
[18] K. Dentler, R. Cornet, A. ten Teije, and N. de Keizer, "Comparison of Reasoners for Large Ontologies in the OWL 2 EL Profile," Semantic Web, vol. 2, no. 2, pp. 71–87, Apr. 2011, doi: 10.3233/SW-2011-0034.
[19] A. Khamparia and B. Pandey, "Comprehensive Analysis of Semantic Web Reasoners and Tools: A Survey," Education and Information Technologies, vol. 22, no. 6, pp. 3121–3145, Nov. 2017, doi: 10.1007/s10639-017-9574-5.
[20] N. Matentzoglu, J. Leo, V. Hudhra, U. Sattler, and B. Parsia, "A Survey of Current, Stand-alone OWL Reasoners," in Informal Proceedings of the 4th International Workshop on OWL Reasoner Evaluation (ORE-2015) co-located with the 28th International Workshop on Description Logics (DL 2015), Athens, Greece, June 6, 2015, pp. 68–79.
[21] Y. Li, Y. Yu, and J. Heflin, "Evaluating Reasoners Under Realistic Semantic Web Conditions," 2012.
[22] I. Horrocks, M. Yatskevich, and E. Jiménez-Ruiz, Eds., Proceedings of the 1st International Workshop on OWL Reasoner Evaluation (ORE-2012), Manchester, UK, July 1st, 2012, vol. 858. CEUR-WS.org, 2012.
[23] H. Zhang, Y.-F. Li, and H. B. K. Tan, "Measuring design complexity of semantic web ontologies," Journal of Systems and Software, vol. 83, no. 5, pp. 803–814, 2010, doi: 10.1016/j.jss.2009.11.735.
[24] M. Ioannou, G. Sakkas, G. Tsoumakas, and I. P. Vlahavas, "Obtaining Bipartitions from Score Vectors for Multi-Label Classification," in 22nd IEEE International Conference on Tools with Artificial Intelligence, ICTAI 2010, Arras, France, 27-29 October 2010, vol. 1, 2010, pp. 409–416, doi: 10.1109/ICTAI.2010.65.
[25] E. Spyromitros-Xioufis, G. Tsoumakas, W. Groves, and I. Vlahavas, "Multi-target regression via input space expansion: treating targets as inputs," Machine Learning, vol. 104, no. 1, pp. 55–98, Jul. 2016, doi: 10.1007/s10994-016-5546-z.
[26] B. Parsia, N. Matentzoglu, R. S. Gonçalves, B. Glimm, and A. Steigmiller, "The OWL Reasoner Evaluation (ORE) 2015 Resources," in The Semantic Web – ISWC 2016, vol. 9982, P. Groth, E. Simperl, A. Gray, M. Sabou, M. Krötzsch, F. Lecue, F. Flöck, and Y. Gil, Eds. Cham: Springer International Publishing, 2016, pp. 159–167.