<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>OpenReq-DD: A Requirements Dependency Detection Tool</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Universitat Politecnica de Catalunya</string-name>
        </contrib>
      </contrib-group>
      <abstract>
<p>Requirements Engineering (RE) is one of the most critical phases in software development. Analyzing requirements data is a laborious task performed by expert stakeholders using manual processes, as there are no standard automatic tools to handle this issue in a more efficient way. The purpose of this paper is to summarize the approach of the OpenReq-DD dependency detection tool developed within the OpenReq project, which provides an automatic approach to requirements dependency detection. The core of this proposal is an ontology that defines dependency relations between specific terminologies related to the domain of the requirements. Using this information, it is possible to apply Natural Language Processing techniques to extract meaning from these requirements and relations, and Machine Learning techniques to apply conceptual clustering, with the major purpose of classifying these requirements into the defined ontology.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
    </sec>
    <sec id="sec-2">
      <title>OpenReq-DD approach</title>
<p>The OpenReq-DD architecture is composed of a RESTful service as the main component, which exposes an API
to provide the required data and perform the dependency detection algorithm. This is intended to match a
microservice architecture and define an isolated, decoupled component that can be used in different contexts. This
component imports and integrates as internal dependencies all the toolkits and frameworks required for the different
algorithm steps (see Fig. 1). For demonstrations, a simple GUI is provided (see Sec. 3). The OpenReq-DD
project is available at GitHub (https://github.com/OpenReqEU/dependency-detection), including a README
file with all the information required to deploy and run the tool.</p>
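      <p>To make the service shape concrete, the following is a minimal sketch of how such a RESTful endpoint could look, assuming a Spring Boot stack. The endpoint path and the payload classes are illustrative assumptions, not the actual API; the real routes and JSON Schema are documented in the project's README.</p>
      <preformat>
import java.util.List;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.*;

// Illustrative payload types (not the actual OpenReq schema).
record Requirement(String id, String text) {}
record Dependency(String fromId, String dependencyType, String toId) {}
record RequirementsModel(List&lt;Requirement&gt; requirements,
                         List&lt;Dependency&gt; dependencies) {}

@RestController
class DependencyDetectionController {
    // Hypothetical route: accepts the requirements model and returns it
    // extended with the detected dependencies.
    @PostMapping("/dependency-detection")
    RequirementsModel detect(@RequestBody RequirementsModel input) {
        // A real implementation would run the pipeline of Fig. 1 here.
        return input;
    }
}

@SpringBootApplication
public class App {
    public static void main(String[] args) {
        SpringApplication.run(App.class, args);
    }
}
      </preformat>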
<p>Figure 1 shows the sequence of steps that OpenReq-DD performs to extract requirement dependencies. In
the following, we describe the required input data (Sec. 2.1) and give a brief description of each stage (Sec. 2.2).</p>
      <p>The dependency detection algorithm designed and developed in OpenReq-DD requires two types of data to
perform the dependency extraction. The format of this data has been defined and discussed with the stakeholders
of the OpenReq project.</p>
<p>Requirements list. A list of requirements from which to extract dependencies is provided. The JSON
Schema used for the input and the output response is available at https://goo.gl/Gx8vpJ</p>
      <p>Ontology. The ontology provides the tool with the required knowledge about the general patterns and
dependency types that are potential dependencies between requirements. This is the result of a study performed
by the stakeholders of the project. The ontology knowledge is structured as a dependency relations tree, where
each node is a topic and each edge a dependency relation type (see Fig. 2). An example of this ontology is
available at https://goo.gl/Hx6GS2</p>
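      <p>As a rough illustration of this structure, the following sketch models the ontology as an in-memory tree of topics whose edges carry a dependency relation type. All class, method, topic, and relation names here are assumptions for illustration; the tool actually consumes an OWL file (see Sec. 3.2).</p>
      <preformat>
import java.util.ArrayList;
import java.util.List;

// Illustrative in-memory view of the ontology tree: nodes are topics,
// edges are typed dependency relations (see Fig. 2).
class Topic {
    final String name;
    final List&lt;Edge&gt; edges = new ArrayList&lt;&gt;();

    Topic(String name) { this.name = name; }

    // Declare that requirements under this topic may depend on
    // requirements under the target topic with the given relation type.
    void relatesTo(Topic target, String relationType) {
        edges.add(new Edge(target, relationType));
    }

    record Edge(Topic target, String relationType) {}
}

class OntologyExample {
    public static void main(String[] args) {
        Topic obu = new Topic("OBU");   // hypothetical topics
        Topic rbc = new Topic("RBC");
        obu.relatesTo(rbc, "requires"); // hypothetical relation type
        System.out.println(obu.edges.get(0).relationType());
    }
}
      </preformat>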
<p>The output of the RESTful service is a JSON response in the same format as the input data, but with
the set of detected dependencies included.</p>
      <p>Given the input data, OpenReq-DD performs the dependency extraction through the following steps, which are
transparent to the user.</p>
      <sec id="sec-2-1">
        <title>Preprocessing</title>
<p>In order to reduce deficiencies in the data set, two preprocessing methods are applied to the requirements.</p>
        <p>
Sentence Boundary Disambiguation (SBD). Sentence detection is applied to extract isolated sentences
from each requirement by deciding where each sentence begins and ends. The Apache
toolkit OpenNLP [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] is used for this purpose.
        </p>
<p>Noisy Text Cleaning. After SBD, a total of 14 rules are applied to clean the text of each sentence. The
cleaning includes, among others: removal of character, numeric, and Roman-numeral list pointers; removal
of acronyms that may appear at the beginning of a requirement; removal of escape sequence characters;
and addition of white spaces to prevent PoS-tagger faults (e.g., between parentheses or question marks).</p>
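        <p>The following is a minimal sketch of this stage: sentence detection with OpenNLP's SentenceDetectorME, followed by two illustrative cleaning rules standing in for the tool's 14 rules. The model file path assumes OpenNLP's pre-trained English sentence model.</p>
        <preformat>
import java.io.FileInputStream;
import java.io.InputStream;
import opennlp.tools.sentdetect.SentenceDetectorME;
import opennlp.tools.sentdetect.SentenceModel;

public class Preprocessing {
    public static void main(String[] args) throws Exception {
        // Load OpenNLP's pre-trained English sentence model (path assumed).
        try (InputStream in = new FileInputStream("en-sent.bin")) {
            SentenceDetectorME detector =
                    new SentenceDetectorME(new SentenceModel(in));
            String requirement =
                    "1. The parameters for OBU must be given by RBC. See (Annex A).";
            for (String sentence : detector.sentDetect(requirement)) {
                System.out.println(clean(sentence));
            }
        }
    }

    // Two illustrative rules out of the 14: drop numeric list pointers and
    // pad parentheses with spaces to prevent PoS-tagger faults.
    static String clean(String sentence) {
        return sentence
                .replaceAll("^\\s*\\d+[.)]\\s*", "")
                .replaceAll("([()])", " $1 ")
                .replaceAll("\\s+", " ")
                .trim();
    }
}
        </preformat>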
      </sec>
      <sec id="sec-2-2">
        <title>Syntax Analysis</title>
<p>Given the preprocessed requirements, a syntax analysis is executed in order to extract words that are potential
candidates for a match with the concepts of the ontology.</p>
        <p>Tokenization. The input sentence is split into single words using the OpenNLP toolkit.</p>
        <p>Sentence: "The parameters for OBU must be given by RBC."</p>
        <p>
          Tokens: "The", "parameters", "for", "OBU", "must", "be", "given", "by", "RBC", "."
PoS tagging. Each token of the sentence is marked with a part-of-speech tag using the NLP4J toolkit [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ].
        </p>
<p>The/DT parameters/NNS for/IN OBU/NP must/MD be/VB given/VBN by/IN RBC/NNP ./.</p>
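        <p>A sketch of these two steps follows. The tool tokenizes with OpenNLP and tags with NLP4J; purely for illustration, this sketch uses OpenNLP's POSTaggerME as a stand-in tagger, and the model file paths are assumptions (OpenNLP's pre-trained English models).</p>
        <preformat>
import java.io.FileInputStream;
import opennlp.tools.postag.POSModel;
import opennlp.tools.postag.POSTaggerME;
import opennlp.tools.tokenize.TokenizerME;
import opennlp.tools.tokenize.TokenizerModel;

public class SyntaxAnalysis {
    public static void main(String[] args) throws Exception {
        // Model file paths are assumptions.
        TokenizerME tokenizer = new TokenizerME(
                new TokenizerModel(new FileInputStream("en-token.bin")));
        POSTaggerME tagger = new POSTaggerME(
                new POSModel(new FileInputStream("en-pos-maxent.bin")));

        String[] tokens =
                tokenizer.tokenize("The parameters for OBU must be given by RBC.");
        String[] tags = tagger.tag(tokens);
        for (int i = 0; i &lt; tokens.length; i++) {
            System.out.println(tokens[i] + "/" + tags[i]);  // e.g. parameters/NNS
        }
    }
}
        </preformat>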
        <p>Dependency Parser. Using the NLP4J toolkit, a dependency tree is generated where each node is a token
of the input sentence and edges are the relations between parent words and child words.</p>
<p>Information Extraction. In this step, the words used to categorize each requirement into the ontology are
detected. To do so, patterns that take into account a word's position in the sentence and its PoS tag
are used. To derive these patterns, a study was performed with existing datasets: the most representative
words of each requirement were detected, and patterns were then extracted according to the position
of these representative words in the sentences.</p>
<p>Figure 3 shows (a) the dependency parser result and (b) the deepest path for term extraction.</p>
        <p>N-Grams Generation. The matched patterns inside the dependency tree are analyzed in order to generate
n-grams of nodes directly connected within the tree. Each n-gram is composed of a set of keywords encapsulating a
broader concept, a general idea beyond the individual meaning of each keyword. See Fig. 4 for an example.</p>
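        <p>A possible shape of this step is sketched below: nodes flagged by the information-extraction patterns are merged with their directly connected flagged descendants into one n-gram. DepNode and its matched flag are hypothetical names, not the tool's actual data structures.</p>
        <preformat>
import java.util.ArrayList;
import java.util.List;

// Hypothetical node of the dependency tree built in the previous step.
class DepNode {
    final String word;
    final boolean matched;  // flagged by the information-extraction patterns
    final List&lt;DepNode&gt; children = new ArrayList&lt;&gt;();

    DepNode(String word, boolean matched) {
        this.word = word;
        this.matched = matched;
    }

    // Merge this node with its directly connected matched descendants
    // into a single n-gram encapsulating one broader concept.
    List&lt;String&gt; ngram() {
        List&lt;String&gt; words = new ArrayList&lt;&gt;();
        if (!matched) return words;
        words.add(word);
        for (DepNode child : children) {
            words.addAll(child.ngram());  // stops at non-matched children
        }
        return words;
    }
}

class NGramExample {
    public static void main(String[] args) {
        DepNode parameters = new DepNode("parameters", true);
        parameters.children.add(new DepNode("OBU", true));
        parameters.children.add(new DepNode("for", false));
        System.out.println(parameters.ngram());  // [parameters, OBU]
    }
}
        </preformat>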
      </sec>
      <sec id="sec-2-3">
        <title>Semantic Analysis</title>
<p>Semantic analysis is the process that interprets the meaning (i.e., the topic concept) of the whole text.
The main goal is to obtain the meaning of each word of the n-gram and to join them into a unique meaning that
can be matched with the concepts of the ontology.</p>
<p>Lemmatization. The morphological analyzer included in the NLP4J framework applies several
rules (based on a large dictionary and several advanced heuristics) to extract the lemma of each token.
These lemmas allow us to compare different words that share the same lexeme.</p>
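        <p>The tool lemmatizes with NLP4J's morphological analyzer; purely as an illustrative stand-in, the sketch below uses OpenNLP's LemmatizerME instead. The model file is an assumption (OpenNLP does not ship a pre-trained lemmatizer model by default).</p>
        <preformat>
import java.io.FileInputStream;
import opennlp.tools.lemmatizer.LemmatizerME;
import opennlp.tools.lemmatizer.LemmatizerModel;

public class Lemmatization {
    public static void main(String[] args) throws Exception {
        // A trained lemmatizer model is assumed to be available locally.
        LemmatizerME lemmatizer = new LemmatizerME(
                new LemmatizerModel(new FileInputStream("en-lemmatizer.bin")));

        String[] tokens = {"parameters", "given"};
        String[] tags = {"NNS", "VBN"};       // PoS tags from the previous step
        for (String lemma : lemmatizer.lemmatize(tokens, tags)) {
            System.out.println(lemma);        // e.g. "parameter", "give"
        }
    }
}
        </preformat>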
        <p>
          Semantic similarity. The DKPro-Similarity framework [
          <xref ref-type="bibr" rid="ref8">8</xref>
] is used for word-pair similarity detection in
order to improve the lemmatization process, by identifying tokens with a high similarity score that
are not identified as part of the same lexeme. This step is particularly relevant for synonyms and similar
meanings, which are analyzed using the lexical database WordNet [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ].
        </p>
        <p>Figure 4 illustrates the n-gram merging cases: (a) a parent with two direct term children to merge into a
unique set of words; (b) a parent with one direct term child that has its own set of words to be merged into a
unique set of words; and (c) a parent with one direct term child to merge into a set of words, plus a separate set of
words from the other, non-relevant child.</p>
      </sec>
      <sec id="sec-2-4">
        <title>Conceptual Clustering</title>
        <p>OpenReq-DD uses conceptual clustering to classify requirements into the different concepts of the input ontology
by similar features. For each n-gram obtained in the semantic analysis, the following rules are applied.</p>
<p>First, find exact matches between word combinations of the n-gram and the n-grams of the ontology.</p>
        <p>If there is no match, find exact matches between combinations of the lemmas extracted from the n-gram
and the lemmas extracted from the input ontology.</p>
        <p>If there is still no match, calculate the semantic relatedness between the lemmas of the n-gram and those of the input
ontology. The requirement is matched if the value is greater than a provided threshold.</p>
        <p>If none of the previous conditions is satisfied, the requirement is discarded as an individual in the ontology.</p>
        <p>The result of this step is the ontology filled with requirement individuals.</p>
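        <p>The cascade can be summarized with the sketch below. The Ontology interface and its relatedness score are hypothetical stand-ins for the ontology lookup and for the DKPro Similarity/WordNet measure, and the threshold value is an assumption (the tool takes it as a parameter).</p>
        <preformat>
import java.util.List;

// Hypothetical lookup interface over the input ontology.
interface Ontology {
    String exactMatch(List&lt;String&gt; ngramWords);   // null if nothing matches
    String lemmaMatch(List&lt;String&gt; ngramLemmas);  // null if nothing matches
    List&lt;String&gt; concepts();
    double relatedness(List&lt;String&gt; ngramLemmas, String concept);
}

class ConceptualClustering {
    static final double THRESHOLD = 0.8;  // assumed value

    // Returns the concept the n-gram is classified into, or null to discard.
    static String classify(List&lt;String&gt; words, List&lt;String&gt; lemmas,
                           Ontology ont) {
        String concept = ont.exactMatch(words);                 // 1. exact words
        if (concept == null) concept = ont.lemmaMatch(lemmas);  // 2. lemmas
        if (concept == null) {                                  // 3. relatedness
            double best = 0.0;
            for (String candidate : ont.concepts()) {
                double score = ont.relatedness(lemmas, candidate);
                if (score > best) {
                    best = score;
                    concept = candidate;
                }
            }
            if (best &lt;= THRESHOLD) concept = null;              // 4. discard
        }
        return concept;
    }
}
        </preformat>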
      </sec>
      <sec id="sec-2-4">
        <title>Dependency Extraction</title>
<p>Finally, each pair of classes in the ontology that are linked by a dependency relation is analyzed, extracting
their instances (i.e., the requirement individuals) to find the existing dependencies between them.</p>
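        <p>A sketch of this final traversal follows, with hypothetical names for the ontology relation view and the output dependency type:</p>
        <preformat>
import java.util.ArrayList;
import java.util.List;

record RequirementRef(String id) {}
record DetectedDependency(String fromId, String type, String toId) {}

// Hypothetical view of an ontology edge together with the requirement
// individuals classified under its source and target classes.
interface OntologyRelation {
    String type();
    List&lt;RequirementRef&gt; sourceInstances();
    List&lt;RequirementRef&gt; targetInstances();
}

class DependencyExtraction {
    // For each pair of ontology classes linked by a dependency relation,
    // emit a dependency between every pair of their instances.
    static List&lt;DetectedDependency&gt; extract(List&lt;OntologyRelation&gt; relations) {
        List&lt;DetectedDependency&gt; result = new ArrayList&lt;&gt;();
        for (OntologyRelation rel : relations) {
            for (RequirementRef from : rel.sourceInstances()) {
                for (RequirementRef to : rel.targetInstances()) {
                    result.add(new DetectedDependency(
                            from.id(), rel.type(), to.id()));
                }
            }
        }
        return result;
    }
}
        </preformat>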
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Demo plan</title>
      <sec id="sec-3-1">
        <title>Environment configuration</title>
        <p>In this section we provide details about a demo plan of the dependency detection analysis using OpenReq-DD.
For a demo execution, it is necessary to run both the OpenReq-DD web service and the Java GUI application.</p>
<p>The GUI component is a presentation layer that simplifies communication with the Dependency-Detection
RESTful service. It allows the user to execute a dependency detection analysis in a simplified way, decoupling
HTTP communication concerns and output data interpretation from the client side.</p>
<p>As input data, we need a list of requirements and an ontology. Examples of both files are referenced in Sec. 2.1.</p>
      </sec>
      <sec id="sec-3-2">
        <title>Dependency detection execution</title>
<p>Figures 5a and 5b show the main views of the OpenReq-DD GUI application. The user is asked to upload two
files: the ontology structure file (*.owl file) and the requirements list file (*.json file). Once these files have been
uploaded, the user can initiate the dependency analysis by clicking on the "Extract dependencies" button.</p>
<p>After this interaction, the whole process of dependency detection starts: the back-end of the tool applies the
steps explained in Sec. 2.2. The RESTful service returns a JSON response including the requirements
and the list of dependencies detected. Through the GUI component, the list of dependencies is shown in a table
including, for each dependency, three items: the source of the dependency, the dependency type, and the target
of the dependency.</p>
<p>Figure 5 shows (a) the input data view and (b) the results view.</p>
        <p>We use the six RFP documents of the Siemens use case to validate OpenReq-DD and the achievement of its objectives
in terms of efficiency and efficacy of requirements dependency detection.</p>
<p>Tables 1 and 2 present a summary of both the automatically generated results of the OpenReq-DD tool and
the stakeholders' manual validation. On the left side, we introduce the RFPs used for the Siemens use case,
the total number of requirements of each RFP, and the number of dependencies extracted by our tool. These
results were manually analyzed by the stakeholders' experts, using three statistical measures: the precision of
true-positive detected dependencies, the precision with a refinement possibility (Precision-R) of true-positive
detected dependencies, and the imprecision of the detected dependencies, which relates to false-positive
outcomes. The results are good, but there is still room for improvement. As future work, we consider exploring
dependencies beyond semantic similarity, using other natural language criteria, and extracting relevant
words from requirements using ML techniques (e.g., topic modelling).</p>
        <table-wrap id="table-2">
          <caption>
            <p>Overall validation measures reported by the stakeholders.</p>
          </caption>
          <table>
            <thead>
              <tr>
                <th>Precision</th>
                <th>Precision-R</th>
                <th>Imprecision</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td>89.2%</td>
                <td>7.7%</td>
                <td>3.1%</td>
              </tr>
            </tbody>
          </table>
        </table-wrap>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Acknowledgments</title>
      <p>The work presented in this paper has been conducted within the scope of the Horizon 2020 project OpenReq,
which is supported by the European Union under the Grant Nr. 732463.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>K.</given-names>
            <surname>Pohl</surname>
          </string-name>
          ,
          <article-title>The three dimensions of requirements engineering: A framework and its applications</article-title>
          ,
          <source>Information Systems</source>
          ,
          <volume>19</volume>
          (
          <issue>3</issue>
          ):
          <fpage>243</fpage>
          -
          <lpage>258</lpage>
          ,
          <year>1994</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>O. C. Z.</given-names>
            <surname>Gotel</surname>
          </string-name>
          and
          <string-name>
            <given-names>C. W.</given-names>
            <surname>Finkelstein</surname>
          </string-name>
          ,
          <article-title>An analysis of the requirements traceability problem</article-title>
          ,
          <source>In Proc. of IEEE International Conference on Requirements Engineering</source>
          , pages
          <fpage>94</fpage>
          -
          <lpage>101</lpage>
          ,
          <year>1994</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A. P.</given-names>
            <surname>Nikora</surname>
          </string-name>
          and
          <string-name>
            <given-names>G.</given-names>
            <surname>Balcom</surname>
          </string-name>
          ,
          <article-title>Automated identification of LTL patterns in natural language requirements</article-title>
          ,
          <source>In 20th International Symposium on Software Reliability Engineering</source>
          , pages
          <fpage>185</fpage>
          -
          <lpage>194</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Fabiano</given-names>
            <surname>Dalpiaz</surname>
          </string-name>
          , A. Ferrari, X. Franch, and C. Palomares,
          <article-title>Natural Language Processing for Requirements Engineering: The Best Is Yet to Come</article-title>
          .
          <source>IEEE Software</source>
          <volume>35</volume>
          (
          <issue>5</issue>
          ):
          <fpage>115</fpage>
          -
          <lpage>119</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <source>OpenReq Project</source>
          , Accessed: 2019-01-11. [Online]. Available: https://openreq.eu/.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <source>Apache OpenNLP Toolkit</source>
          , Accessed: 2019-01-11. [Online]. Available: http://opennlp.apache.org
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <source>NLP Toolkit for JVM Languages (NLP4J), Part-of-speech Tagging</source>
          , Accessed: 2019-01-11. [Online]. Available: https://emorynlp.github.io/nlp4j/.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <source>DKPro Similarity</source>
          , Accessed: 2019-01-11. [Online]. Available: https://dkpro.github.io/dkpro-similarity
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <source>WordNet - A Lexical Database for English</source>
          , Accessed: 2019-01-11. [Online]. Available: https://wordnet.princeton.edu/.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>