<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>CSE</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Designing a Generic Research Data Infrastructure Architecture with Continuous Software Engineering</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Nelson Tavares de Sousa</string-name>
          <string-name>Wilhelm Hasselbring</string-name>
          <email>tavaresdesousa@email.uni-kiel.de</email>
          <email>hasselbring@email.uni-kiel.de</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tobias Weber</string-name>
          <string-name>Dieter Kranzlmüller</string-name>
          <email>weber@lrz.de</email>
          <email>kranzlmueller@lrz.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Leibniz Supercomputing Centre, Bavarian Academy of Sciences and Humanities</institution>
          ,
          <addr-line>Garching</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Software Engineering Group, Kiel University</institution>
          ,
          <addr-line>Kiel</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2018</year>
      </pub-date>
      <volume>3</volume>
      <fpage>85</fpage>
      <lpage>88</lpage>
      <abstract>
        <p>Long-living software systems undergo continuous development, including adaptations due to altering requirements or the addition of new features. This is an even greater challenge if neither all users nor all requirements are known at an initial design phase. In such a context, complex restructuring activities are much more probable if the challenges are not taken into account from the beginning. We introduce a combination of the concepts of domain-driven design and self-contained systems to meet these challenges within the system's architecture design. We show the merits of this approach by designing an architecture for a generic research data infrastructure, a use case where the mentioned challenges can be found. Embedding this approach within continuous software engineering allows us to implement and integrate changes continuously, without neglecting other crucial properties such as maintainability and scalability.</p>
        <p>Index Terms: Microservice, Self-Contained System, System-oriented Architecture, Continuous Software Engineering, Research Data Management</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>Software systems with a heterogeneous set of stakeholders
face various challenges in their requirements engineering. If not even
all stakeholders are known, an initial complete system specification
is impeded further. Long-living systems may also see a
changing set of stakeholders and therefore shifting requirements. In
continuous software engineering, this needs to be considered
throughout all stages of a system’s life span. For instance,
beginning with the system specification, the design needs to
be able to compensate for changing requirements at any given
point in time. Continuous integration and deployment need to
be implemented in a way that new or changed requirements
can be integrated into the running infrastructure.</p>
      <p>In the following, we introduce an approach to meet
these challenges. This approach is validated in a reference
implementation for the project Generic Research Data
Infrastructure (GeRDI). By abstracting and extrapolating the
requirements from a limited set of stakeholders, we are able to
extract different domains regarding the feature set of GeRDI,
whereby in turn each major feature is implemented as a
distinct self-contained system (SCS). This allows us to remain
adaptable regarding the set of requirements and also to benefit
from the properties of both concepts, domain-driven
design and self-contained systems. Additionally, through loose
coupling we are able to integrate already existing software
and services such as high-performance computing or cloud
computing and storage. Furthermore, changes remain within the
affected self-contained systems and do not propagate to other
self-contained systems.</p>
      <p>In Section II we introduce the application domain for our
reference implementation. Our approach to meet the
mentioned challenges is presented in Section III with the resulting
architecture. Section IV shows an implementation of one use
case. Deployment and operation requirements derived from this
approach will be discussed in Section V. Section VI presents
our conclusions and provides an outlook to future work.</p>
    </sec>
    <sec id="sec-2">
      <title>II. APPLICATION DOMAIN</title>
      <p>
        Data-intensive research requires appropriate management of
the research data. However, present solutions for data storage
often lead to inaccessible data silos instead of providing
research data in a findable, accessible, interoperable, and
reusable (FAIR) way [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Various initiatives in this field aim
at reducing barriers for researchers to establish efficient
management and processing of their research data. This
focus on making data accessible and shareable reflects the
key points of the European Open Science Cloud (EOSC) [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
Apart from data, services which offer capabilities to process
and analyze the data also need to be reusable and shareable
with other researchers, not only to increase the impact of their
research efforts, but also to make the research process more
efficient, transparent, and reproducible.
      </p>
      <p>
        The project GeRDI aims to provide an infrastructure which
fosters FAIR data practices and also supports researchers in
their data-driven workflow [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. This is realized by integrating
different domain services seamlessly into one single
infrastructure. The continuous involvement of nine research groups
from different research domains in the development process
allows us to determine specific workflows and their involved
services to optimize the infrastructure for real-life use cases.
As a reference, selected research cases of these research
groups will be reimplemented and extended using the GeRDI
infrastructure.
      </p>
      <!-- Figure 1 (service domains with their UI, API, and persistence layers) appears here. -->
      <p>
        One research case is provided by the Environmental,
Resource and Ecological Economics Group (EREE) of Kiel
University. In a report [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], published by the WWF, different
economic and fishery management scenarios were evaluated
in order to derive future changes in fishery catch rates. This
research case illustrates a possible scientific workflow which
GeRDI aims to support. Data is collected from multiple data
repositories, aggregated and filtered in a preprocessing step, and
then passed to a computation model for scientific analysis and
prediction of fishery catch rates. The other research groups
contribute different research cases to cover different research
domains (such as digital humanities, hydrology, or
socio-economics), including different workflows and used services.
      </p>
    </sec>
    <sec id="sec-3">
      <title>III. ARCHITECTURE</title>
      <p>
        Our goal is to design an architecture which allows us to react
to changing requirements without major restructuring activities
and which has a level of complexity that is as low as possible.
To achieve this goal we rely on the strategy of domain-driven
design (DDD) for the concept of our architecture. In
DDD, complex systems are divided into bounded contexts in
order to contain different domains within distinct components
[
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. In our case, we derive different domains by clustering
the required functionality of our research cases into sets of
generic services. This is done by analyzing the workflows of
all research cases and cutting them into different domains. As
a result, we obtain a set of domains as shown in Figure 1.
Each colored box depicts a service domain, enabling actions
throughout the research data’s life span. The service domains
are all implemented as self-contained systems:
      </p>
      <p>Archive depicts the data source which in most research cases
is a research data repository for long-term data archival. An
interface between such a research data repository and our
infrastructure is realized through Harvest which collects the
metadata, enriches it, and forwards it to a search index. With
Search, a researcher can find relevant research data among
all harvested, multidisciplinary research data repositories. A
selection of relevant metadata for sharing search results or
for further processing is then performed in Bookmark. After
that, data is downloaded either to a local machine or a remote
storage system (Store). The processing of data is divided into
two stages. The first step is to normalize or pre-filter data
in Preprocess. In Analyze, actual analysis on the preprocessed
data is performed to gain new scientific insights. The new data
can then be uploaded to a research data repository (Publish).
This closes a cycle (not included in Figure 1) as the uploaded
data is again available in the research data repository.</p>
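      <p>To illustrate the flow between these service domains, consider the following sketch (hypothetical code, not part of GeRDI; all names and record fields are invented) which models the first half of the life span, from harvesting to bookmarking:</p>

```python
# Hypothetical sketch of the research data life span across the service
# domains; all names and record fields are invented for illustration.
from dataclasses import dataclass

@dataclass
class MetadataRecord:
    identifier: str
    title: str
    enriched: bool = False
    bookmarked: bool = False

def harvest(repository: list[dict]) -> list[MetadataRecord]:
    """Harvest: collect metadata from a repository and enrich it."""
    records = [MetadataRecord(r["id"], r["title"]) for r in repository]
    for record in records:
        record.enriched = True  # e.g., add provenance information
    return records

def search(index: list[MetadataRecord], term: str) -> list[MetadataRecord]:
    """Search: find relevant records among all harvested metadata."""
    return [r for r in index if term.lower() in r.title.lower()]

def bookmark(results: list[MetadataRecord]) -> list[MetadataRecord]:
    """Bookmark: select results for sharing or further processing."""
    for r in results:
        r.bookmarked = True
    return results

# One pass through the first half of the life span:
repo = [{"id": "doi:10/1", "title": "Fishery catch rates"},
        {"id": "doi:10/2", "title": "Hydrology samples"}]
index = harvest(repo)
hits = bookmark(search(index, "fishery"))
```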
      <p>The required functionality can be provided as a specific
implementation within each service domain. Additionally, the
implementations of all domains are able to communicate with
each other through remote interfaces. As a result, we are able
to not only reimplement our individual services, but to also
implement and integrate required functionality in the future,
by implementing them with compliance to the given interfaces.</p>
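      <p>The compliance with given interfaces mentioned above can be sketched as follows (a hypothetical example; the interface and class names are invented and do not reflect actual GeRDI code):</p>

```python
# Hypothetical sketch (names invented): a service domain exposes a fixed
# interface, so any future implementation that complies with it can be
# integrated without touching the rest of the infrastructure.
from abc import ABC, abstractmethod

class Harvester(ABC):
    """Contract every repository-specific Harvest implementation follows."""

    @abstractmethod
    def fetch_metadata(self) -> list[dict]:
        """Collect raw metadata records from one repository."""

class TableHarvester(Harvester):
    """A stand-in implementation reading from an in-memory table."""

    def __init__(self, rows: list[tuple[str, str]]) -> None:
        self.rows = rows

    def fetch_metadata(self) -> list[dict]:
        return [{"id": rid, "title": title} for rid, title in self.rows]

# Any compliant harvester can be plugged into the same integration code:
harvester: Harvester = TableHarvester([("doi:10/1", "Fishery catch rates")])
records = harvester.fetch_metadata()
```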
      <p>
        As mentioned in Section I, for the implementation of such
an architecture, we make use of SCS as an architectural
style. In our architecture concept, we vertically decompose
the system along the domains and are therefore able to use
methods of DDD not only as a design concept but also for the
implementation of our architecture. A self-contained system
depicts a certain functionality and implements it as a full
stack with a user interface layer, business logic layer, and
persistence layer, which can be implemented as microservices
[
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Microservice architectures facilitate scalability [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], as
well as agility and reliability [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Communication between
SCS should be reduced to a minimum. In cases where
communication is inevitable, it should occur through
well-defined REST interfaces. Cross-cutting concerns regarding the
implementation, such as an authentication and authorization
infrastructure or system monitoring, are deployed within a
backend integration layer.
      </p>
      <!-- Figure 2 residue: the Archive box lists the repositories Sea Around Us, FAO Stat, FAO FishStatJ, SSP, and GIS Data. -->
      <p>
        A further layer for the frontend integration is also required,
as each SCS implements its own user interface. The SCS
approach is scalable regarding different aspects [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. The
architecture scales well regarding its functionality, as new functions
can be continuously implemented and integrated as a SCS.
This is enabled by the inherent nature of loose coupling of
SCS which makes them interchangeable. It scales well with
the amount of developers, as each SCS can and should be
developed by an individual team [
        <xref ref-type="bibr" rid="ref8">8</xref>
]. This enables a
community-driven infrastructure, as external developer teams
may contribute their functions continuously to the GeRDI
infrastructure. Due to the option of instantiating multiple
instances of one SCS, combined with load balancing in the frontend
integration layer, this approach also has the potential to
scale well with regard to performance.
      </p>
      <p>Figure 1 also illustrates how we implement the reference
architecture of GeRDI. Each colored box depicts an SCS
and therefore a domain of the complete system. White boxes
within each SCS show the different layers of user interface
(UI), business logic (API) and persistence (DB or Storage)
within the SCS. Grey boxes at the top and bottom of the
figure show both integration layers. As we do not re-implement
research data repositories, their frontend and backend layers
are not integrated into the GeRDI frontend. Additionally,
implementations of Harvest do not require a UI, which is why
the corresponding microservice is absent.</p>
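      <p>How the frontend integration layer could balance load over multiple instances of one SCS can be sketched as follows (a minimal hypothetical round-robin example; GeRDI may use a different strategy):</p>

```python
# Hypothetical sketch of load balancing in the frontend integration layer:
# requests to one service domain are spread round-robin over its instances.
# All endpoint names are invented for illustration.
from itertools import cycle

class FrontendIntegration:
    def __init__(self) -> None:
        self._instances = {}

    def register(self, domain: str, endpoints: list[str]) -> None:
        """Register all running instances of one service domain."""
        self._instances[domain] = cycle(endpoints)

    def route(self, domain: str) -> str:
        """Pick the next instance of the requested service domain."""
        return next(self._instances[domain])

fe = FrontendIntegration()
fe.register("Search", ["http://search-1:8080", "http://search-2:8080"])
first = fe.route("Search")
second = fe.route("Search")
third = fe.route("Search")
```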
    </sec>
    <sec id="sec-4">
      <title>IV. USE CASE IMPLEMENTATION</title>
      <p>For the (re)implementation of research cases, this
architecture allows us to make use of existing middleware
wherever possible. Well-established software can be integrated
into the infrastructure if it can be mapped to one domain
and if it complies with its interfaces. Therefore, to implement
complete research workflows, all required services need to
be mapped first to the generic services model. Afterwards,
each service is implemented as a SCS and integrated into the
infrastructure.</p>
      <p>Figure 2 shows a mapping of required services and/or
functions for the EREE research case mentioned in Section II.
We see in Archive the different research data repositories
which deliver the relevant data. These repositories already
exist and will be integrated into GeRDI by connecting them
with different implementations of harvesters for each
repository. Both Search and Bookmark depict certain attributes
for which data can be searched and bookmarked. In our
reference implementation of a search platform, we need to
make sure we support searching and bookmarking for these
attributes. Storage is provided by local computers in this use
case. Therefore, the Store domain must support interfaces to
download data to a local machine and to use the saved data
for further steps. Preprocess and Analyze modify the original
data. In a first step, data is aggregated using geographic
information system (GIS) data and catch rates of large marine
ecosystems (LMEs). Then, by feeding the preprocessed data
into a model, predictions for future scenarios regarding the
catch rates can be made. Thus, both service domains rely on
computation tasks. As a last step, the predicted data is again
published to a research data repository, Pangaea (https://www.pangaea.de/) in this case.</p>
      <p>As already mentioned, we will reuse existing software
wherever possible in our reference implementation. The
implementations for Harvest are newly developed. Search
makes use of Elasticsearch (https://www.elastic.co/products/elasticsearch) as a search platform. For storage
capabilities, network file sharing systems, such as Samba (https://www.samba.org/),
can be used for this use case. By implementing a facade,
such a system can be made GeRDI compliant. The computation steps
require resources which can be provided by a cloud provider
or a compute center. To enable the computation of the Matlab
model, we reuse Jupyter (https://jupyter.org/) and integrate it, in combination with
its Matlab kernel, into the infrastructure. For the publication
of the newly generated data, we use Pubflow (https://www.pubflow.uni-kiel.de/en), which provides
functionality to upload research data to repositories.</p>
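      <p>The facade mentioned above can be illustrated with a sketch (hypothetical code; the method names are invented, and a temporary local directory stands in for a mounted network file share):</p>

```python
# Hypothetical facade sketch: a plain directory (standing in for a mounted
# network file share such as Samba) is wrapped behind a uniform
# store/retrieve interface; all names are invented for illustration.
import os
import tempfile

class LocalShareFacade:
    """Adapts a mounted file share to a uniform store/retrieve interface."""

    def __init__(self, mount_point: str) -> None:
        self.mount_point = mount_point

    def store(self, name: str, data: bytes) -> str:
        """Write a file into the share and return its path."""
        path = os.path.join(self.mount_point, name)
        with open(path, "wb") as f:
            f.write(data)
        return path

    def retrieve(self, name: str) -> bytes:
        """Read a previously stored file back from the share."""
        with open(os.path.join(self.mount_point, name), "rb") as f:
            return f.read()

share = LocalShareFacade(tempfile.mkdtemp())
share.store("predictions.csv", b"scenario,rate\nbaseline,0.37\n")
data = share.retrieve("predictions.csv")
```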
      <p>Different research cases may use different implementations.
However, to benefit from a broader set of research data
repositories, we encourage the use of the same implementations
for the service domains Harvest, Search, and Bookmark. The
reference implementations for these domains will be open and
accessible through a GeRDI portal.</p>
    </sec>
    <sec id="sec-5">
      <title>V. INTEGRATION AND DEPLOYMENT</title>
      <p>In this section we briefly describe the requirements of an operational setup to run software as exemplified in Section IV.</p>
      <p>The microservice architecture described in Section III
necessitates a container-ready system to mirror the
encapsulation. A registry is needed to disseminate the built images
to the deployment contexts. Tagging the images is another
requirement, since it facilitates the selection of compatible
versions of the different SCS and enables the description of a
release manifest (i.e., a list of image/version pairs including
the external dependencies).</p>
      <p>URLs of the tools referenced in Section IV: Pangaea: https://www.pangaea.de/; Elasticsearch: https://www.elastic.co/products/elasticsearch; Samba: https://www.samba.org/; Jupyter: https://jupyter.org/; Pubflow: https://www.pubflow.uni-kiel.de/en.</p>
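      <p>For illustration, a release manifest as described above might look like the following sketch (all image names and version tags are invented):</p>

```python
# Hypothetical release manifest: image/version pairs for the SCS plus
# external dependencies; all names and tags are invented for illustration.
release_manifest = {
    "images": {
        "gerdi/harvest": "1.4.2",
        "gerdi/search": "2.0.1",
        "gerdi/bookmark": "1.1.0",
        "gerdi/store": "0.9.3",
    },
    "external": {
        "elasticsearch": "6.2.4",
    },
}

def compatible(manifest: dict) -> bool:
    """A release is shippable only if every image carries an explicit tag."""
    images = {**manifest["images"], **manifest["external"]}
    return all(tag and tag != "latest" for tag in images.values())
```

A deployment script could refuse to roll out any manifest for which `compatible` returns `False`, e.g. when an image is still pinned to `latest`.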
      <p>After passing the code reviews and all tests within the
continuous integration (CI) process, the CI system builds the
container image. This way the developers’ assumptions with
regard to the deployment context (e.g., available libraries) are
encapsulated, thoroughly tested, and ready to ship. The CI
system needs to support this workflow.</p>
      <p>We identified three deployment contexts: testing, staging
and production. The testing context needs full automation,
i.e. continuous deployment, and is used by developers to test
and discuss features. Staging and production contexts are less
automated since the robustness requirements are higher. They
can therefore be classified as continuous delivery systems (i.e.
manual work is necessary to deploy). Staging is not only used
to prepare a release to the production context, but also as a
preview for the stakeholders to facilitate an agile development
approach. Several computing centers should be able to provide
the computational resources to run all or parts of the three
deployment contexts. At the same time some operational
aspects need centralized services, such as monitoring and
logging facilities. Some parts of the infrastructure (such as
the search index) might profit from running on the same site
to reduce performance penalties through network traffic. As a
result, the deployment infrastructure needs to support
fully automated and semi-automated deployment workflows and
allow for transparent integration of compute nodes, without
losing the possibility to pin containers to specific nodes
if necessary. In addition to that, scalability and availability
requirements necessitate container orchestration abilities such
as on-the-fly scaling, node draining, and rolling updates.</p>
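      <p>As an illustration of a rolling update, the following sketch replaces the instances of one SCS one at a time, so that some instance is always serving requests (hypothetical code; real orchestration tools handle this declaratively):</p>

```python
# Hypothetical sketch of a rolling update: instances of one SCS are
# replaced one at a time, so the service keeps running throughout.
# Instance names and version tags are invented for illustration.
def rolling_update(instances: list[str], new_version: str) -> list[list[str]]:
    """Return the sequence of cluster states during the update.

    In every intermediate state at most one instance is being replaced,
    so n - 1 instances remain available at all times.
    """
    states = []
    current = list(instances)
    for i in range(len(current)):
        name = current[i].split(":")[0]
        current = current[:i] + [f"{name}:{new_version}"] + current[i + 1:]
        states.append(list(current))
    return states

history = rolling_update(
    ["search-1:1.0", "search-2:1.0", "search-3:1.0"], "1.1"
)
```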
      <p>Since the deployment infrastructure is also developing over
time, its setup needs to be documented and automated by
a provisioning and configuration management system. The
scripts and configuration for such a system are also part of the
release process. Releases therefore consist of the source code,
the container images, the setup scripts for the infrastructure
and the release manifest. All release assets need to be available
for the public (open source licenses).</p>
      <p>
        A setup meeting the requirements described above will be used for GeRDI.
In a recent literature review, only 6 out of 69 case studies were
identified to discuss continuous practices in academic setups
(cf. [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]). None of these describe the same requirements as
pointed out in this section.
      </p>
    </sec>
    <sec id="sec-7">
      <title>VI. CONCLUSION &amp; OUTLOOK</title>
      <p>We introduced an approach to handle an incomplete set of
requirements through an appropriate architecture design. Our
approach combines domain-driven design and self-contained
systems to provide an infrastructure which can be used for
the implementation of different, and also yet unknown, functional
requirements. With continuous software engineering we are able
to continuously implement, deploy, and integrate functionality
changes into a running system. The result is used for a generic
research data infrastructure and allows us to (re)implement
existing and yet unknown use cases. As an example, we depicted
one use case and introduced its implementation with this
architecture. Challenges regarding the operation of such a
system were also discussed and an appropriate setup was
presented.</p>
      <p>The development of GeRDI is in an early stage and therefore
prototypical. Evaluations are required to show the benefits of
this architecture in real-world usage. This includes the
implementation of different use cases from other research domains,
which will show whether the stated claims regarding its adaptability
hold.</p>
      <p>Other topics are yet to be validated, such as monitoring,
which is required for useful system scaling and performance
evaluation. The deployment and operation of an authentication
and authorization infrastructure for such a system is an
additional challenge of greater importance, due to the broader set of
possible service providers.</p>
    </sec>
    <sec id="sec-8">
      <title>ACKNOWLEDGEMENTS</title>
      <p>This work was supported by the DFG (German Research
Foundation) with the GeRDI project (Grants No. BO818/16-1
and HA2038/6-1).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M. D.</given-names>
            <surname>Wilkinson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Dumontier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. J.</given-names>
            <surname>Aalbersberg</surname>
          </string-name>
          , G. Appleton,
          <string-name>
            <given-names>M.</given-names>
            <surname>Axton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Baak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Blomberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-W.</given-names>
            <surname>Boiten</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. B. da Silva</given-names>
            <surname>Santos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. E.</given-names>
            <surname>Bourne</surname>
          </string-name>
          et al., “
          <article-title>The FAIR Guiding Principles for scientific data management and stewardship</article-title>
          ,”
          <source>Scientific Data</source>
          , vol.
          <volume>3</volume>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          Commission High Level Expert Group on the European Open Science Cloud, “
          <article-title>Realising the European Open Science Cloud</article-title>
          ,” European Commission,
          <source>Tech. Rep.</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>R.</given-names>
            <surname>Grunzke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Adolph</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Biardzki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bode</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Borst</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.-J.</given-names>
            <surname>Bungartz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Busch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Frank</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Grimm</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Hasselbring</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kazakova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Latif</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Limani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Neumann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. T.</given-names>
            <surname>de Sousa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Tendel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Thomsen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Tochtermann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Müller-Pfefferkorn</surname>
          </string-name>
          , and W. E. Nagel, “
          <article-title>Challenges in Creating a Sustainable Generic Research Data Infrastructure</article-title>
          ,”
          <source>Softwaretechnik-Trends</source>
          , vol.
          <volume>37</volume>
          , no.
          <issue>2</issue>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Quaas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hoffmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kamin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Kleemann</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K.</given-names>
            <surname>Schacht</surname>
          </string-name>
          , “
          <article-title>Fishing for Proteins</article-title>
          ,”
          <source>WWF</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>E.</given-names>
            <surname>Evans</surname>
          </string-name>
          ,
          <source>Domain-Driven Design: Tackling Complexity in the Heart of Software</source>
          . Addison-Wesley Professional,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J.</given-names>
            <surname>Lewis</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Fowler</surname>
          </string-name>
          , “Microservices,”
          <year>2014</year>
          , http://martinfowler.com/ articles/microservices.html.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>W.</given-names>
            <surname>Hasselbring</surname>
          </string-name>
          , “
          <article-title>Microservices for Scalability: Keynote Talk Abstract</article-title>
          ,” in
          <source>Proceedings of the 7th ACM/SPEC International Conference on Performance Engineering (ICPE 2016)</source>
          . New York, NY, USA: ACM,
          <year>2016</year>
          , pp.
          <fpage>133</fpage>
          -
          <lpage>134</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>W.</given-names>
            <surname>Hasselbring</surname>
          </string-name>
          and G. Steinacker, “
          <article-title>Microservice Architectures for Scalability, Agility and Reliability in E-Commerce</article-title>
          ,” in
          <source>2017 IEEE International Conference on Software Architecture Workshops (ICSAW)</source>
          . Gothenburg, Sweden: IEEE, Apr.
          <year>2017</year>
          , pp.
          <fpage>243</fpage>
          -
          <lpage>246</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.</given-names>
            <surname>Shahin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Babar</surname>
          </string-name>
          , and L. Zhu, “
          <article-title>Continuous Integration, Delivery and Deployment: A Systematic Review on Approaches, Tools, Challenges and Practices</article-title>
          ,”
          <source>IEEE Access</source>
          , vol.
          <volume>5</volume>
          , pp.
          <fpage>3909</fpage>
          -
          <lpage>3943</lpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>