<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>D3M: Automated Data-Driven Decision Making</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Carles Farré</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Javier Flores</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sergi Nadal</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alejandra Volkova</string-name>
        </contrib>
        <aff>Universitat Politècnica de Catalunya, Barcelona, Catalonia, Spain</aff>
      </contrib-group>
      <abstract>
        <p>Data has an undoubted impact on society. Storing, processing, and analyzing large amounts of available data is currently one of the key success factors for an organization. Nonetheless, we are recently witnessing a change represented by huge and heterogeneous amounts of data. Thus, in order to carry out these data exploitation tasks, organizations must first perform data integration, combining data from multiple sources to yield a unified view over them. In this paper, we report on the Automated Data-Driven Decision Making (D3M) project, whose main objective is to provide a mature software solution for automatic data integration with advanced decision making capabilities.</p>
      </abstract>
      <kwd-group>
        <kwd>Data-driven software engineering</kwd>
        <kwd>Data integration</kwd>
        <kwd>Decision making</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
    </sec>
    <sec id="sec-2">
      <title>1.1 Context</title>
      <p>D3M is grounded on research carried out in the lines of automated data integration and
domain-specific decision making. Here, we provide an overview of each.</p>
      <p>
        Automating data integration tasks. Data integration is a well-studied area aimed at
facilitating transparent access to various heterogeneous data sources [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. A prominent approach to data
integration is exposing a knowledge graph conceptualizing the domain of interest to offer a uniform
query interface over the sources. Queries over the knowledge graph are rewritten over the
sources via schema mappings. The maintenance of such constructs (e.g., evolving the knowledge
graph, adding new sources and mappings) is an arduous and manually-intensive task that hinders the
ability of such systems to flexibly adapt and provide right-time integration [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. This limitation has
been coined the data variety challenge, which refers to the complexity of providing on-demand
integration over a vast and evolving set of data sources. Dataspaces, which are data integration systems
embracing a pay-as-you-go approach by gradually integrating data sources as needed, represent a
significant step toward tackling the variety challenge. With the vision of reducing the usual upfront
and maintenance costs, dataspaces call for the adoption of a flexible and dynamic approach where
different integration tasks are automated. One of them, known as bootstrapping [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], is the process of
automatically generating the knowledge graph driven by the data sources, with the goal of
incrementally building the query interface and mappings to query such heterogeneous data sources in
an integrated manner.
      </p>
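      <p>As an illustration of the bootstrapping idea above, the following sketch applies simple production rules to a JSON document to derive a source schema graph. The rule set and names are hypothetical, not ODIN's actual rules.</p>
      <p>
```python
# Hypothetical sketch of schema bootstrapping: production rules walk a JSON
# document and emit (subject, predicate, object) triples describing its schema.
# The rules and vocabulary here are illustrative, not ODIN's implementation.

def bootstrap_schema(doc, root="Source"):
    """Derive a minimal schema graph from a (possibly nested) JSON object."""
    triples = []
    for key, value in doc.items():
        if isinstance(value, dict):
            # Rule: a nested object becomes a new class linked to its parent.
            triples.append((root, "hasClass", key))
            triples.extend(bootstrap_schema(value, root=key))
        elif isinstance(value, list):
            # Rule: an array becomes a multivalued attribute.
            triples.append((root, "hasMultivaluedAttribute", key))
        else:
            # Rule: a scalar becomes a plain attribute.
            triples.append((root, "hasAttribute", key))
    return triples

sample = {"patient": {"id": 1, "name": "Ana", "tests": [{"kind": "malaria"}]}}
for triple in bootstrap_schema(sample):
    print(triple)
```
      </p>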
      <p>
        Domain-specific decision making. Organizations require facilitating access to informed decision
making based on Key Performance Indicators (KPIs) relevant to their business. However, creating
decision making support systems is expensive, time-consuming, and error-prone. The use of
domain-specific, operationalized quality models offering actionable analytics from heterogeneous sources has
been successful in multiple domains (e.g., software analytics [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]). It enables a wide range of analytics
scenarios, from current situation assessment to prediction and what-if analysis. In a recent systematic
review, data integration and final data aggregation were reported among the remaining challenges in
Big Data analytics [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. At the same time, current approaches should analyze more than one artifact and
focus on integrating data from different sources to obtain a holistic view [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Thus, to enable
domain-specific strategic indicators and data-driven decision making, it becomes necessary to facilitate the
integration of data sources driven by the real information needs of end users.
      </p>
    </sec>
    <sec id="sec-3">
      <title>1.2 Background</title>
      <p>This project builds upon two research assets reported in the project Generation and Evolution of
Smart APIs (GENESIS), funded by the National Spanish Program for Research Aimed at the
Challenges of Society 2016: a dataspace management system (hereafter referred to as ODIN), and a
software analytics tool (hereafter referred to as Strategic Dashboard). These products have been
successfully validated as a prototype in pilot projects.</p>
      <p>
        ODIN. ODIN (short for On-demand Data INtegration) is a dataspace management system
grounded on knowledge graphs [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. ODIN is conceived to overcome the limitations of traditional
virtual data integration in large-scale scenarios where data variety plays a key role [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Figure 1
depicts how ODIN supports the dataspaces’ complete lifecycle. ODIN automatically extracts the
schemata from structured (e.g., relational) and semi-structured (e.g., JSON) data sources and translates
them into a canonical data model. To that end, a set of production rules parse their metadata and
generate source graphs. These are aligned while considering user feedback throughout this
process. As a result, ODIN generates provenance graphs (PGs) tracing the results of the previous
stages. A PG is a target-agnostic metadata construct describing the integration of a particular set of
data sources. It captures the results of bootstrapping the sources and aligning their schemata, and
guarantees we can generate target-specific metadata from it. Thus, PGs are used to generate specific
constructs of a given integration tool, such as conjunctive query (CQ)-oriented graphs, which expose
the sources’ schemata in first normal form and are then linked via local-as-view (LAV) schema
mappings that connect elements of the sources’ schemas to the global graph. LAV mappings
characterize the sources in terms of a query over the
knowledge graph, making them inherently more suitable in data variety settings, where new sources
may be added or outdated sources removed dynamically.
      </p>
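      <p>The appeal of LAV mappings under data variety can be sketched in a few lines: each source is characterized as a query over the global graph (reduced here to the set of concepts it covers), so registering or retiring a source only touches its own mapping. All source and concept names below are invented for illustration.</p>
      <p>
```python
# Illustrative sketch of LAV mappings: each source is described by the
# global-graph concepts its characterizing query covers. Names are hypothetical.

lav_mappings = {
    "source_csv": {"Patient", "Diagnosis"},
    "source_api": {"Diagnosis", "Treatment"},
}

def sources_for(query_concepts):
    """Return the sources whose LAV mapping can contribute to the query."""
    return sorted(s for s, concepts in lav_mappings.items()
                  if concepts & query_concepts)

print(sources_for({"Diagnosis"}))          # both sources expose Diagnosis
lav_mappings["source_new"] = {"Patient"}   # adding a source is a local change
print(sources_for({"Patient"}))
```
      </p>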
      <p>
        The Strategic Dashboard. The Strategic Dashboard is a modular, configurable, and extensible
software analytics tool used in Agile Software Development projects to improve the software
development process and the quality of the software produced. The Strategic Dashboard (Figure 2)
enables decision makers to define their own Quality Model, which is composed of quality-related
Strategic Indicators (e.g., customer satisfaction, process performance, risk level) [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], in turn decomposed
into Quality Factors related to system development and usage (e.g., development speed,
software performance). Quality factors are defined over different Quality Metrics (e.g., commits per
day, duplicated lines of code, software response time). The Strategic Dashboard automatically performs
a quality assessment of the whole quality model defined. Raw data is collected from multiple sources
of information, such as development tools used by the software development team (e.g., JIRA, GitHub)
and software usage from end-users (e.g., software logs). All the information is collected through
Connectors that feed a Distributed Data Sink from which the quality metrics, quality factors, and
strategic indicators are computed bottom-up. The quality assessment enables the strategic dashboard to
perform several analyses that are provided to the Decision Maker:
● Visualization of the current (and historical) status of software products and development
processes through an easy-to-use interface with advanced navigational capabilities.
● What-if analysis techniques enable decision makers to evaluate different scenarios based on the
impact of metrics on quality factors and, further on, on the strategic indicators.
● Forecasting techniques estimate the values of the strategic indicators and quality factors in a
time frame, to predict and anticipate future issues in the software development process.
● Semi-automatic generation of new requirements in response to alerts when a quality model
element (typically, a strategic indicator) drops below satisfactory levels of quality [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
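      <p>The bottom-up computation described above can be sketched as weighted aggregation over the quality model hierarchy. The metrics, factors, indicator, and weights below are invented for illustration, not the dashboard's actual configuration.</p>
      <p>
```python
# A minimal sketch of bottom-up quality assessment: metrics aggregate into
# factors, factors into strategic indicators, via weighted averages.
# All names and weights are hypothetical.

metrics = {"commits_per_day": 0.8, "duplicated_lines": 0.6, "response_time": 0.9}

factors = {  # factor -> {metric: weight}
    "development_speed": {"commits_per_day": 1.0},
    "software_performance": {"duplicated_lines": 0.5, "response_time": 0.5},
}

indicators = {  # indicator -> {factor: weight}
    "process_performance": {"development_speed": 0.6, "software_performance": 0.4},
}

def aggregate(children, weights):
    """Weighted sum of child values, the same rule at every level."""
    return sum(children[name] * w for name, w in weights.items())

factor_values = {f: aggregate(metrics, w) for f, w in factors.items()}
indicator_values = {i: aggregate(factor_values, w) for i, w in indicators.items()}
print(indicator_values)  # process_performance = 0.6*0.8 + 0.4*0.75 = 0.78
```
      </p>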
      <p>Despite the benefits ODIN provides in terms of data integration, its query interface is limited to
technical users familiar with semantic web technologies. Thus, there is a gap between such a low-level
interface and the advanced capabilities that decision makers need in their organizations (e.g., progress
indicators, what-if analysis). Additionally, the Strategic Dashboard is tightly coupled to the Distributed
Data Sink, built ad hoc for a specific domain. This hinders the integration of new data sources and
the calculation of new types of metrics, quality factors, and strategic indicators.</p>
    </sec>
    <sec id="sec-4">
      <title>2. Objectives</title>
      <p>The main objective of D3M is to adapt and integrate these two independent tools, ODIN and
the Strategic Dashboard, into a unified product bringing together the benefits of them both: (i)
enabling the integration of disparate data sources in an incremental manner, and (ii) providing advanced
support on top of them for decision making processes via advanced dashboard interfaces. The project’s
main objective further decomposes into four specific objectives:
● O1: Data-driven semi-automatic bootstrapping. Provide means to enable an incremental,
semi-automatic extraction of the knowledge graph from a set of heterogeneous and independent data
sources. This objective takes ODIN’s core as its starting point and will extend it with new support for
its enrichment with day-to-day concept vocabulary and with domain-specific quality models.
● O2: Integrated data exploration interface. Enable data wrangling tasks (navigational queries
on tabular and semantic data) from heterogeneous data sources federated through a knowledge
graph. This refers to a new exploitation feature required by our industrial partners (as an
alternative to traditional decision making support).
● O3: Customized decision making support. Enable the creation of an advanced dashboard that
spans heterogeneous data sources applying domain-specific quality models to assist decision
makers. This objective generalizes the available Strategic Dashboard to provide support in any
domain, with improved techniques.
● O4: Unified product to support the end-to-end decision making process over heterogeneous
data sources. O4 integrates the results of O1-O3; i.e., it features incremental bootstrapping of
the knowledge graph from the data sources of interest, and two kinds of exploitation: decision
making support based on strategic indicators and data exploration based on data wrangling.</p>
      <p>To achieve such objectives, D3M proposes the architecture presented in Figure 3. D3M serves two
types of data consumers: data wranglers (for data exploration) and decision makers (for advanced
analytics and data-driven quality models). Besides, it requires interaction with other users for managing
the system metadata, such as domain experts (for enriching the bootstrapped knowledge graph with
day-to-day concepts and a domain-specific quality model) and data stewards (for assisting in the
configuration of the alignments among heterogeneous data sources). While the integrated architecture
proposed in Figure 3 offers the benefits of both ODIN and the Strategic Dashboard, it also offers
innovation by boosting the automated decision making process by linking heterogeneous data
sources to the defined quality model via knowledge graphs, hence facilitating and largely automating
the calculation of the strategic indicators and their visualization for the decision makers. Besides the
aforementioned objectives, D3M also aims to attain the following ones:
● O5: Incremental technology transfer of the proof of concept. Execute a technology transfer plan
to assure an incremental evolution of the maturity level of the developed software components
for D3M via validation and demonstration of the proposed proof of concept.
● O6: Assessment of the viability of the proof of concept. Perform a market analysis to assess the
technical, commercial, and social viability of the proposed product, and uncover evolutionary
paths for D3M to become a product adapted to current industry needs.
● O7: Long-term sustainability of the proof of concept. Cultivate a broad network of industry and
public sector contacts to create awareness and attract prospective customers.
● O8: Intellectual property rights assurance. Develop a strategy for managing the intellectual and
industrial property rights of the developed proof of concept.
● O9: Equipping the project team with entrepreneurship skills. Define a training plan with a list
of entrepreneurship courses and monitor its execution.</p>
    </sec>
    <sec id="sec-5">
      <title>3. Use cases</title>
      <p>Here, we describe two industrial projects that serve as use cases for D3M. Currently, each one is
evolving ODIN and the Strategic Dashboard in parallel, so that the improvements achieved can be
applied to the overall D3M project. For each, we describe the use case context and how adopting D3M
can aid in the organization's decision making needs.</p>
    </sec>
    <sec id="sec-6">
      <title>3.1. Development of an imaging platform for Malaria and Neglected Tropical Diseases (NTDs)</title>
      <p>A recent study by the World Health Organization shows that in 2018 an estimated 228 million cases of
malaria occurred worldwide, the majority in the African region
(https://www.who.int/publications/i/item/9789241565721). The SDG targets 3.3 and 3.8 call for
an end to such kind of epidemics by 2030. The main goal of this project is to develop an imaging
platform using artificial intelligence techniques for the automated diagnosis of Malaria, Tuberculosis,
and NTDs. The specific objectives are: (i) create an open source image bank and database; (ii) develop
an image diagnostic system by image analysis using artificial intelligence techniques; (iii) develop
software for Android phones to move the microscopy slides, acquire images, and perform image analysis and
diagnosis; (iv) model the laboratory management software to be able to import the microscopic images
and resend them to the general microscopy image bank; (v) establish a quality control of the slides
preparation, digital microscopic images, and image diagnosis; (vi) validate the imaging platform in the
field.</p>
      <p>The role of D3M in this use case is to empower epidemiologists to cross-analyze diagnosis data
predicted automatically by the imaging platform with other contextual data collected from the available
data sources. For instance, the analysis of comorbidity, or coinfections, represents a paradigm change
in how health diseases are treated. Traditionally, individual diagnoses were performed for each analyzed
disease. However, major disease outbreaks have shown that previous conditions can impact the
diagnosis. Similarly, many countries (un)intentionally omit to report on new infection cases, either due
to limited resources or political issues. To get a complete picture of the situation, crossing country-reported
data with other sources may indicate the prevalence (e.g., amount of medicine requested). However,
the data integration needed to calculate these indicators is far from trivial, especially in the case of
NTDs that lack systematic data collection and in developing countries with minimal resources. To that
end, as depicted in Figure 4, D3M presents the user with a knowledge graph conceptualizing all domain
elements of interest, which are further linked to the different available data sources. With D3M,
epidemiologists will be able to cross different data sources guided by relevant strategic indicators from
the analytical dashboard, thus obtaining a more realistic and complete picture of the situation, and
making a paradigm shift from a disease-centric to a patient-centric analysis.</p>
      <p>The domain of Software Analytics is broad and can be applied to various environments. This use
case focuses on the higher education (i.e., universities) domain via the project Implementation of a
dashboard for monitoring the progress of software projects developed by student teams, which can be
easily extrapolated to scenarios with teams of junior software developers. The project aims to allow
both students and professors to receive accurate and objective feedback on the individual and team
learning process. Informed decisions can be made about prioritizing, planning, and evaluating their
actions throughout the project. To this aim, an onboarding step will be beneficial for training juniors
to learn how to extract insights and make data-driven decisions from the information generated by
the dashboard. D3M comes into play by using the Strategic Dashboard, which was adapted to the
specific domain by creating new Connectors, defining specific Quality Metrics, and
customizing several Visualizations; as part of future work, it would be helpful to incorporate
ODIN to support the management of Connectors and the Quality Model as a dataspace.</p>
      <p>The first connector that we created, for GitHub, a provider of Internet hosting for software
development and version control using Git, allows us to collect information about commits, modified
lines, and issues. Another connector was made for Taiga, a free and open-source project management
system for startups and agile developers; this one supplies us with data about Scrum methodology
resources, such as user stories and tasks to deal with in each Sprint. Based on the information provided by
these connectors, we defined different metrics: (i) the percentage of commits of each member of the
team and their corresponding modified lines; (ii) the percentage of tasks assigned to each developer; (iii)
the percentage of closed tasks by assignee; (iv) the number of tasks without assignee; and (v) the standard
deviation of the numbers of commits or tasks. In addition to the previous metrics, we decided to focus on
the quality and correctness of the information team members enter into Taiga or
GitHub. For instance, we check (vi) if acceptance criteria are used when a user story is created or (vii)
if a standard user story pattern is applied; it is also interesting to see (viii) if commits contain a real task
reference. In conclusion, all of them help to monitor the progress of software projects, some from the
point of view of project management and others from the point of view of code development.</p>
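      <p>A few of the team-level metrics above can be sketched directly over connector output. The record shapes and values below are hypothetical; real connectors would pull them from the GitHub and Taiga APIs.</p>
      <p>
```python
# Sketch of metrics over hypothetical connector output: commit records keyed
# by author and task records keyed by assignee. Data is invented for illustration.
from collections import Counter
from statistics import pstdev

commits = [{"author": "ana"}, {"author": "ana"}, {"author": "bob"}, {"author": "eva"}]
tasks = [{"assignee": "ana", "closed": True},
         {"assignee": "bob", "closed": False},
         {"assignee": None, "closed": False}]

def pct_commits(commits):
    """Percentage of commits per team member (metric (i), without lines)."""
    counts = Counter(c["author"] for c in commits)
    total = sum(counts.values())
    return {member: 100 * n / total for member, n in counts.items()}

def tasks_without_assignee(tasks):
    """Number of tasks without an assignee (metric (iv))."""
    return sum(1 for t in tasks if t["assignee"] is None)

def commit_stdev(commits):
    """Standard deviation of commit counts across members (metric (v))."""
    return pstdev(Counter(c["author"] for c in commits).values())

print(pct_commits(commits))
print(tasks_without_assignee(tasks))
print(commit_stdev(commits))
```
      </p>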
      <p>For team project metrics visualizations (see Figure 5), there is a display of the current evaluation,
which is calculated according to the formula settings for each metric and from the data collected by the
connectors for this particular day. With this representation, we can see the exact value of the
metric rounded to the hundredth, through a half-circle graph with different color categories. The
categories are customizable, that is, the number of colors and the limits of each color can be defined in
a way that best suits the metrics. Apart from the current evaluation, it is possible to visualize the
historical data of the metrics through a line graph, that is, their evolution over time, to monitor progress
as the course progresses.</p>
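      <p>The customizable color categories described above amount to mapping a rounded metric value onto a list of upper bounds. The bounds and colors below are an invented configuration, not the dashboard's defaults.</p>
      <p>
```python
# Sketch of customizable color categories: each category is an (upper bound,
# color) pair; the metric value, rounded to the hundredth, falls into the
# first category whose bound covers it. Configuration is hypothetical.

def categorize(value, categories):
    """Return the color of the first category whose upper bound covers value."""
    for upper, color in categories:  # categories sorted by ascending bound
        if round(value, 2) <= upper:
            return color
    return categories[-1][1]

traffic_light = [(0.33, "red"), (0.66, "orange"), (1.0, "green")]
print(categorize(0.284, traffic_light))  # rounds to 0.28 -> "red"
print(categorize(0.70, traffic_light))   # -> "green"
```
      </p>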
    </sec>
    <sec id="sec-8">
      <title>4. Relevance to information science</title>
      <p>The underlying research carried out in the context of the D3M project addresses a broad spectrum of
challenges related to the information science field. Indeed, being a project oriented to the
development of a software prototype, D3M can fall in the area of Information Systems and their
Engineering. Additionally, given that data integration is at the heart of D3M, it naturally fits the Data
and Information Management area, and Data Science. Furthermore, considering the applicability of the
project results via use cases to the industry, D3M is also relevant for the Domain-Specific IS
Engineering area (e.g., for the health or educational domains).</p>
    </sec>
    <sec id="sec-9">
      <title>5. Open lines of research</title>
      <p>Numerous open lines of research arise from D3M. A key question to be addressed is: how far can we
automate the process of data integration? In other words, where is the sweet spot that allows automating
manual and cumbersome tasks without compromising the quality of the results obtained when the user is
involved? It is already known that a fully-automated approach to data integration is not feasible, given that
there will always exist some level of uncertainty and ambiguity. Nevertheless, we strive to
minimize the effort required by users to address these cases.</p>
      <p>Another scientific challenge that D3M should face is how to create and assess domain-specific
strategic indicators for any domain. In this regard, we have already met some of the issues that must
be addressed in the future: (i) enable the on-demand and incremental definition of metrics, factors, and
strategic indicators; (ii) define and implement a comprehensive catalog of visualizations for such
metrics/factors/indicators; and (iii) simplify and automate as much as possible the configuration and
deployment of strategic dashboards in new domains.</p>
    </sec>
    <sec id="sec-10">
      <title>6. Conclusions</title>
      <p>In this paper, we have presented the D3M project, an ongoing two-year project that will combine
and extend the efforts accomplished in ODIN and the Strategic Dashboard into a unified tool. This
solution will provide data wranglers with the mechanisms to easily integrate heterogeneous data sources
and have the means to extract analytical insights for data-driven decisions. The features of D3M will
be used on two industrial projects related to the domains of healthcare and software development. We
believe the results of D3M will provide the following achievements: (i) scalable and automated data
integration life cycle, (ii) effectively democratizing data access, (iii) advanced analytic models for
predicting and optimizing outcomes, and (iv) a set of user-friendly dashboards to assist non-technical
end-users with exploratory and analytical tasks. Therefore, D3M can have a significant impact on the
industry.</p>
    </sec>
    <sec id="sec-11">
      <title>Acknowledgements</title>
      <p>This paper has been funded by the Spanish Agencia Estatal de Investigación (AEI) under project /
funding scheme PDC2021-121195-I00.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Lenzerini</surname>
          </string-name>
          .
          <article-title>“Data Integration: A Theoretical Perspective”</article-title>
          .
          <source>In Proceedings of the 21st ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems</source>
          (
          <year>2002</year>
          ),
          <fpage>233</fpage>
          -
          <lpage>246</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S.</given-names>
            <surname>Nadal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Abelló</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Romero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Vansummeren</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Vassiliadis</surname>
          </string-name>
          . “
          <article-title>Graph-driven Federated Data Management”</article-title>
          .
          <source>In IEEE Transactions on Knowledge and Data Engineering</source>
          (
          <year>2021</year>
          ). Online (https://ieeexplore.ieee.org/document/9422168)
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.</given-names>
            <surname>Sequeda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. H.</given-names>
            <surname>Tirmizi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Corcho</surname>
          </string-name>
          et al. “
          <article-title>Survey of Directly Mapping SQL Databases to the Semantic Web”</article-title>
          .
          <source>In Knowledge Eng. Review</source>
          (
          <year>2011</year>
          ),
          <volume>26</volume>
          .
          <fpage>4</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S.</given-names>
            <surname>Martínez-Fernández</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Vollmer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Jedlitschka</surname>
          </string-name>
          et al. “
          <article-title>Continuously assessing and improving software quality with software analytics tools: a case study”</article-title>
          .
          <source>IEEE Access</source>
          <volume>7</volume>
          (
          <year>2019</year>
          ),
          <fpage>68219</fpage>
          -
          <lpage>68239</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>U.</given-names>
            <surname>Sivarajah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Kamal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Irani</surname>
          </string-name>
          , and
          <string-name>
            <given-names>V.</given-names>
            <surname>Weerakkody</surname>
          </string-name>
          . “
          <article-title>Critical analysis of Big Data challenges and analytical methods”</article-title>
          .
          <source>Journal of Business Research</source>
          ,
          <volume>70</volume>
          (
          <year>2017</year>
          ),
          <fpage>263</fpage>
          -
          <lpage>286</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <source>The Forrester Wave™: Value Stream Management Solutions, Q3 2020</source>
          , available at https://www.forrester.com/report/The+Forrester+Wave+Value+Stream+Management+Solutions+Q3+2020/-/E-RES159825.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Nadal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Rabbani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Romero</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Tadesse</surname>
          </string-name>
          . “
          <article-title>ODIN: A Dataspace Management System”</article-title>
          .
          <source>In International Semantic Web Conference (ISWC)</source>
          (
          <year>2019</year>
          ),
          <fpage>185</fpage>
          -
          <lpage>188</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Nadal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Romero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Abelló</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Vassiliadis</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Vansummeren</surname>
          </string-name>
          . “
          <article-title>An integration-oriented ontology to govern evolution in Big Data ecosystems”</article-title>
          .
          <source>Inf. Syst</source>
          . (
          <year>2019</year>
          ),
          <volume>79</volume>
          :
          <fpage>3</fpage>
          -
          <lpage>19</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.</given-names>
            <surname>Oriol</surname>
          </string-name>
          et al. “
          <article-title>Data-driven and Tool-supported Elicitation of Quality Requirements in Agile Companies”</article-title>
          .
          <source>Software Quality Journal</source>
          (
          <year>2020</year>
          ),
          <volume>28</volume>
          (
          <issue>3</issue>
          ):
          <fpage>931</fpage>
          -
          <lpage>963</lpage>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>