<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Proceedings of the SQAMIA</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Containerized A/B Testing</article-title>
      </title-group>
      <kwd-group>
        <title>General Terms</title>
        <kwd>Software Quality Analysis with Monitoring</kwd>
      </kwd-group>
      <kwd-group>
        <title>Additional Key Words and Phrases</title>
        <kwd>Docker</kwd>
        <kwd>containers</kwd>
        <kwd>DevOps</kwd>
        <kwd>A/B testing</kwd>
      </kwd-group>
      <pub-date>
        <year>2017</year>
      </pub-date>
      <volume>6</volume>
      <fpage>11</fpage>
      <lpage>13</lpage>
      <abstract>
        <p>Software version ranking plays an important role in improving user experience and software quality. A/B testing is a technique to distinguish between the popularity and usability of two quite similar versions (A and B) of a product, marketing strategy, search ad, etc. It is a kind of two-sample hypothesis testing, used in the field of statistics. This controlled experiment can evaluate user engagement or satisfaction with a new service, feature, or product. A/B testing is typically used in the evaluation of user-experience design in software technology. DevOps is an emerging software methodology in which development and operations are not independent processes: they affect each other. DevOps emphasizes the usage of virtualization technologies (e.g. containers), and Docker is a widely used technology for containerization. In this paper we present a new approach for A/B testing via Docker containers. This approach is DevOps-style A/B testing because, after the evaluation, the better version remains in production.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. INTRODUCTION</title>
      <p>
Docker separates applications into operating-system-level isolated parts and ensures the communication among them. Docker has comprehensive documentation [
        <xref ref-type="bibr" rid="ref3">Docker Inc. 2017</xref>
        ].
      </p>
      <p>Docker containers are built up from base images; there are general images (e.g. Ubuntu 16.04) and specific images (e.g. for a Python run environment). Dockerfiles describe how an image can be created, and Docker is able to generate the image and save it to a repository. Many services of the Docker platform are available (e.g. Docker Engine, Docker Compose, etc.), and the images live in Docker Registries. The Docker Engine is responsible for managing containers (starting, stopping, etc.), while Docker Compose is responsible for the configuration of containers on a single host system. Docker Compose is mainly used in development and testing environments: one can define which services are required and what their configuration in the containers is, and Docker Compose files can be created for this purpose. Orchestration of Docker containers is typically performed with Kubernetes or OpenShift [Vohra 2017].</p>
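      <p>As an illustration of how a Dockerfile turns a general base image into a specific one, consider the following minimal sketch (a hypothetical example, not part of our setup):</p>
      <p>
```dockerfile
# start from a general base image
FROM ubuntu:16.04
# specialize it into a Python run environment
RUN apt-get update
RUN apt-get install -y python3
# add the application itself
COPY app.py /opt/app/app.py
CMD ["python3", "/opt/app/app.py"]
```
      </p>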
      <p>Continuous Delivery (CD) is a software development discipline. This methodology aims at building software in such a way that it can be released to production at any time; it is a series of processes aiming at safe and rapid deployment to production. Every change is delivered to a production-like environment called the staging environment, where rigorous automated testing ensures that the business applications and services work as expected. Since every change has been tested in staging, the application can be deployed to production safely.</p>
      <p>The DevOps approach extends the CD discipline and focuses on comprehensive CD pipelines that start with building and are followed by different kinds of testing [Schaefer et al. 2013]: unit testing, component testing, integration testing, end-to-end testing, performance testing, etc. should be performed on the software [Roche 2013]. In the meantime, static analyser tools try to find bugs, code smells and memory leaks in the source code. Third-party compliance should be checked in the build pipeline, and automated vulnerability scanning of the software is mandatory to discover security gaps. The visibility of the whole process is guaranteed.</p>
      <p>After this phase, the automatic deployment of the application starts. Application Release Automation (ARA) tools are available that can communicate with the CI server, and the deployment steps can be designed on the graphical user interfaces of these tools. The DevOps culture argues for deployment automation at the level of the application [Cukier 2013]. The automatic upgrade and roll-back processes involve many difficult changes: database schemas, configuration files and parameters, APIs, and 3rd-party components (e.g. message queues) may change when a new software version is released. The deployment process has to cover these changes as well, which requires automation and visibility.</p>
      <p>DevOps also considers the monitoring and logging of the deployed application in the production environment [Lwakatare et al. 2015]. The development team is eager for feedback from the application running in production: e.g. which features of the software are unused, memory or other resource leaks, or performance bottlenecks. The ELK stack is a popular toolset for this purpose [Lahmadi and Beck 2015]: Elasticsearch is a distributed search and analytics engine, Logstash is a data processing pipeline, and Kibana is responsible for the visualization. Docker supports several logging drivers out of the box, such as the JSON log driver and the GELF log driver, to handle the log streams of each container. With the GELF log driver the container logs can be forwarded to an ELK stack. The Graylog Extended Log Format (GELF) is understood by most log aggregating systems, such as Logstash or, more obviously, Graylog. Developers have to get as much information as possible to be able to take care of a problem [Prakash et al. 2016]. Problems may trigger an automatic roll-back of the application to the previous stable version in a seamless way. The analysis of logs and monitoring data is application-specific and their evaluation may be difficult; therefore, big data analysis and machine learning should be involved.</p>
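      <p>A GELF message is essentially a JSON document with a few well-known fields; custom fields, such as the version tag Docker attaches, are prefixed with an underscore. The following minimal Python sketch (with illustrative values) shows the shape of such a message:</p>
      <p>
```python
import json

# Minimal sketch of a GELF (Graylog Extended Log Format) message, similar in
# shape to what Docker's GELF log driver emits for one container log line.
def make_gelf_message(host, short_message, tag):
    # custom fields (like the version tag) are prefixed with an underscore
    return json.dumps({
        "version": "1.1",
        "host": host,
        "short_message": short_message,
        "_tag": tag,
    })

msg = make_gelf_message("node1", "GET /click?clientId=42", "version-a")
print(json.loads(msg)["_tag"])  # version-a
```
      </p>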
      <p>In this paper we argue for a new DevOps-style A/B testing approach that is automated and based on user experience. We take advantage of logging and monitoring features to get feedback from the end-users. Our approach works in a Docker containerized realm; thus the webapplications and every tool used in the evaluation run in containers. After the specified duration the A/B test is evaluated, and the winner version of the webapplication remains in the production environment automatically.</p>
      <p>This paper is organized as follows: in Section 2 we present our A/B testing approach at a high level, and we go into implementation details per component in Section 3. Finally, this paper concludes and presents future work in Section 4.</p>
    </sec>
    <sec id="sec-2">
      <title>2. OUR APPROACH</title>
      <p>We propose an approach for A/B testing of webapplications in a Docker containerized way. The approach takes advantage of Docker, the Nginx server, the ELK stack and Graylog. We have developed a Python script for controlling the A/B testing.</p>
      <p>The two variants of the same webapplication run in separate sets of containers. The Nginx server also runs in a container and routes the users to the A or B version based on their IP hash. On the client side of both webapplications, HTTP requests are submitted to the Nginx server. Two kinds of requests are in use: the first is a periodic one stating that the user is still using the application, and the second is triggered by the end-user and records the user's activity. Both requests contain the originating application version as a tag. We collect the logs of the webapplications in an ELK stack.</p>
      <p>The Python script runs on the host machine. The script takes a duration parameter that specifies how long the A/B test runs. When this duration expires, the script gets log information from the ELK-stack container and evaluates which version is the better one. The script then instructs Docker to stop the worse version and replaces it with the better one.</p>
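      <p>The control flow of the script can be sketched as follows; query_click_count stands in for the real log query, and all names here are illustrative:</p>
      <p>
```python
import time

def pick_winner(count_a, count_b, name_a="version-a", name_b="version-b"):
    # the version with more '/click' requests wins; a tie keeps version A
    return name_a if count_a >= count_b else name_b

def run_test(duration_seconds, query_click_count, sleep=time.sleep):
    sleep(duration_seconds)  # let the A/B test run for the given duration
    count_a = query_click_count("version-a")
    count_b = query_click_count("version-b")
    return pick_winner(count_a, count_b)

# stubbed click counts instead of a live ELK/Graylog instance
counts = {"version-a": 17, "version-b": 23}
print(run_test(0, counts.get, sleep=lambda _: None))  # version-b
```
      </p>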
    </sec>
    <sec id="sec-3">
      <title>3. TECHNICAL DETAILS</title>
    </sec>
    <sec id="sec-4">
      <title>3.1 Client side</title>
      <p>For our research we created two versions of a simple website with different titles and headlines, clearly indicating which version we are looking at in our web browser. Both versions contain a link.</p>
      <p>The page also contains a JavaScript snippet which acts like a subset of any webpage analytics bundle. It generates a UUID on every page load and sends an HTTP GET request to the '/ping' route every 5 seconds, passing the generated client id as a parameter. From these messages we can derive a metric indicating how long a user stays on the page. We also send an HTTP GET request containing the client id to the '/click' endpoint on the click event of the link.</p>
      <p>We do not create any relation between those UUIDs and session cookies, keeping the data anonymous, as a good analytics tool should. We did not want to put any unnecessary version-specific (A or B) code into the web page (nor the backend), because it would pollute the source code of the product itself and is irrelevant to its concerns.</p>
    </sec>
    <sec id="sec-5">
      <title>3.2 Backend side</title>
      <p>The backend simply serves a static HTML file (which contains all of the client-side code) and responds with status 200 to every request at the routes '/click' and '/ping'. It dumps every request to the standard output. All of this is configured in a single nginx.conf file to keep this proof-of-concept project simple.</p>
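      <p>A backend nginx.conf along the following lines satisfies that description; this is a sketch of an assumed configuration, not our exact file:</p>
      <p>
```nginx
events {}
http {
    # dump every request to the standard output
    access_log /dev/stdout;
    server {
        listen 80;
        # serve the static HTML file with the client-side script
        root /usr/share/nginx/html;
        # respond with status 200 to the analytics endpoints
        location /ping  { return 200; }
        location /click { return 200; }
    }
}
```
      </p>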
    </sec>
    <sec id="sec-6">
      <title>3.3 Docker containers</title>
      <p>First of all, our testing stack has a load balancer container which listens on port 80 and forwards the requests to node1 or node2. The forwarding depends on the client IP hash, in order to make sure that a client's click and ping requests are forwarded to the very same node which served the HTML file earlier (so the load balancer will not switch the version between two requests from the same client).</p>
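      <p>The sticky routing can be imitated in a few lines of Python; nginx's real ip_hash algorithm differs in detail (for IPv4 it hashes only part of the address), but the property we rely on is the same, namely that one client IP always maps to one upstream:</p>
      <p>
```python
import hashlib

def route(client_ip, upstreams=("node1", "node2")):
    # hash the client address and map it onto the list of upstreams
    digest = hashlib.md5(client_ip.encode("ascii")).hexdigest()
    return upstreams[int(digest, 16) % len(upstreams)]

# the same client always lands on the same node
print(route("203.0.113.7") == route("203.0.113.7"))  # True
```
      </p>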
      <p>The load balancer references Node1 and Node2 by their aliases. Docker Engine can create virtual networks between containers, so when multiple products have containers up and running on the same host machine, they do not interfere with each other, being connected to separate virtual networks. By default, Docker Compose takes care of creating a network for our project as defined in a docker-compose.yaml file (see below). This default network is named after the containing folder (assuming it is the same as the project name) with a default prefix. This came in handy when we created a new container and connected it to the same network by hand at the end of the test evaluation.</p>
      <p>The Docker Engine takes care of DNS services on the virtual network, which is why we can reference containers by their names. We do not need to change configurations on every startup, and we do not have to store IP addresses in environment variables or hosts files in the containers. This is more dynamic and more secure.</p>
      <p>error_log /dev/stdout info;
events {}
http {
    access_log /dev/stdout;
    upstream abtest {
        ip_hash;
        server node1;
        server node2;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://abtest;
        }
    }
}</p>
      <p>Node1 differs from Node2 only in its index.html file and, more importantly, in its tag: Node1 has the “version-a” tag, while Node2 initially has the “version-b” tag. The version tag is also sent in every log message to the Graylog server, providing the identity of the version. As shown in the code snippet below, Node1 and Node2 have no open ports; they can receive requests only through the load balancer.</p>
      <p>As we mentioned earlier, the backend prints all of its requests to the standard output. The standard output is forwarded in GELF format to the Graylog server.</p>
      <p>version: '3'
services:
  loadBalancer:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - ./etc/nginx.conf:/etc/nginx/nginx.conf:ro
  node1:
    image: nginx
    logging:
      driver: gelf
      options:
        gelf-address: "udp://127.0.0.1:12201"
        tag: "version-a"
    volumes:
      - ./nodes/static/versionA:/usr/share/nginx/html:ro
      - ./nodes/etc/nginx.conf:/etc/nginx/nginx.conf
  node2:
    image: nginx
    logging:
      driver: gelf
      options:
        gelf-address: "udp://127.0.0.1:12201"
        tag: "version-b"
    volumes:
      - ./nodes/static/versionB:/usr/share/nginx/html:ro
      - ./nodes/etc/nginx.conf:/etc/nginx/nginx.conf</p>
    </sec>
    <sec id="sec-7">
      <title>3.4 Log aggregation</title>
      <p>There are numerous ELK-stack configurations available on the Docker community hub, so we omit the details for now. We have a Graylog server up and running which receives the logs of Node1 and Node2. We have set up an extractor which checks the message property of the log and uses a regular expression to extract 'click' or 'ping' from the request route into a separate field called clientLogEvent when it is present; another extractor works in the same way and extracts clientSessionId. Creating extractors and testing queries on the Graylog web interface is comfortable and can be done without digging into Elasticsearch querying. It is suitable for anyone who wants to shape it to fit their own specific A/B test scenario.</p>
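      <p>In spirit, the two extractors behave like the following Python sketch; the regular expressions here are illustrative rather than the exact ones configured in Graylog:</p>
      <p>
```python
import re

EVENT_RE = re.compile(r"/(click|ping)\b")             # clientLogEvent extractor
SESSION_RE = re.compile(r"clientId=([0-9a-fA-F-]+)")  # clientSessionId extractor

def extract(message):
    fields = {}
    event = EVENT_RE.search(message)
    if event:
        fields["clientLogEvent"] = event.group(1)
    session = SESSION_RE.search(message)
    if session:
        fields["clientSessionId"] = session.group(1)
    return fields

print(extract("GET /click?clientId=9f1b2c3d HTTP/1.1"))
```
      </p>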
    </sec>
    <sec id="sec-8">
      <title>3.5 Evaluation and replacement</title>
      <p>We decided that this task has to be done on the host machine by a script which can interact with the Docker Engine (or Swarm, Kubernetes, etc.). For security reasons we cannot (and do not want to) give a container system-level access to other containers.</p>
      <p>We have chosen Python as the most suitable scripting language for this task: Python is mature, most *nix boxes have a Python environment preinstalled, and another good reason is that Docker has a solid Python SDK, actively used by the Docker Compose project.</p>
      <p>In our example we have decided to measure the count-of-clicks metric ('/click' route requests): the more clicks, the better. When the duration of the test is exceeded, the script sends one query per version to the Graylog server API to count its clicks (clientLogEvent: click). We use the Apache Lucene syntax for the queries. The script compares the results and then, with the power of the Docker SDK, shuts down the loser version's node and replaces it with an instance of the winner version.</p>
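      <p>The per-version count query can be sketched as below; the Lucene query string is the essential part, while the exact parameter names of the Graylog search API should be treated as an assumption here rather than a definitive reference:</p>
      <p>
```python
def click_count_query(version_tag, duration_seconds):
    return {
        "query": "tag:%s AND clientLogEvent:click" % version_tag,  # Lucene syntax
        "range": duration_seconds,  # relative time range in seconds
        "limit": 0,                 # only the total count matters
    }

query = click_count_query("version-a", 1800)
print(query["query"])  # tag:version-a AND clientLogEvent:click
```
      </p>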
      <p>The Python script itself interacts with the Graylog Web API and gets a session token by sending login credentials. At this point we could use API tokens set up on the Graylog web interface, but we did not want to increase the complexity of the configuration for this example. The query is sent to the Graylog REST API, but it is just like any other REST API call, so we omit the details for now. The interesting part is how we replace the container running the worse version with a new container running the better one. We first stop and remove the “loser” container to avoid naming conflicts later on. After that we create a new container with the same parameters as the “winner” container, but with the name of the loser one. We connect the new container to the project's network using the same alias as the removed container had. When we start up the new container, the load balancing works the same as before, and the new node can be reached by the same name as its predecessor.</p>
      <p>loser = client.containers.get(loserContainerName)
loser.stop()
loser.remove()
newNode = client.containers.create('nginx',
    name = loserContainerName,
    volumes_from = [winnerContainerName],
    log_config = {
        'driver': 'gelf',
        'options': {
            'gelf-address': 'udp://127.0.0.1:12201',
            'tag': winnerTag
        }
    }
)
bridgeNetwork = client.networks.get('bridge')
bridgeNetwork.disconnect(loserContainerName)
testNetwork = client.networks.get(self.networkName)
testNetwork.connect(
    loserContainerName,
    aliases = [loserContainerName]
)
newNode.start()</p>
      <p>The script is just a proof of concept, but we have created a command line interface for it, with parameters that let us test it on different setups. Its help text tells us which parameters we can use for our test:</p>
      <p>$ abtestCli.py -h
usage: abtestCli.py [-h] [--duration DURATION] [--aTag ATAG] [--bTag BTAG]
                    [--networkName NETWORKNAME] [--apiAddress APIADDRESS]
                    [--apiUser APIUSER] [--apiPass APIPASS]
                    aName bName

A CLI tool for runtest

positional arguments:
  aName
  bName

optional arguments:
  -h, --help            show this help message and exit
  --duration DURATION
  --aTag ATAG
  --bTag BTAG
  --networkName NETWORKNAME
  --apiAddress APIADDRESS
  --apiUser APIUSER
  --apiPass APIPASS</p>
    </sec>
    <sec id="sec-9">
      <title>3.6 Running</title>
      <p>Assume that we have the docker-compose.yaml file in our current working directory:</p>
      <p>$ docker-compose up -d</p>
      <p>After it has started up our services, we only have to start our Python CLI script. It has three mandatory parameters:
(1) Duration, in ISO 8601 duration format
(2) A version container name
(3) B version container name</p>
      <p>$ abtestCli.py PT30M ab_node1_1 ab_node2_1</p>
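      <p>Parsing the ISO 8601 duration argument is straightforward; the minimal sketch below handles only the time components (hours, minutes, seconds), which is enough for durations like PT30M:</p>
      <p>
```python
import re

DURATION_RE = re.compile(r"^PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?$")

def parse_duration(text):
    match = DURATION_RE.match(text)
    if not match or text == "PT":
        raise ValueError("unsupported ISO 8601 duration: %r" % text)
    hours, minutes, seconds = (int(g) if g else 0 for g in match.groups())
    return hours * 3600 + minutes * 60 + seconds

print(parse_duration("PT30M"))  # 1800
```
      </p>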
      <p>After thirty minutes the script will log the name of the better version and replace the worse one with it.</p>
    </sec>
    <sec id="sec-10">
      <title>4. CONCLUSION</title>
      <p>A/B testing is a powerful method to improve software quality and user experience. It gains feedback from two akin versions of the same product (software, search ad, newsletter email, etc.) and measures the end-user engagement.</p>
      <p>We have developed an approach and related tools for executing A/B tests in a Docker containerized environment. Our proof-of-concept implementation works and has fulfilled our expectations, but there is a lot of work to do and numerous choices to make before it becomes production-ready. One of our goals was to keep the stack and the implementation simple to aid the understanding of the concept.</p>
      <p>We have mentioned that Docker Compose is meant for single-host development and testing, and it did a great job providing us an initial state for our services. We have also met its limitations, such as dynamic configuration. Assume we use the same stack, the A/B test is over and winner-version containers are running everywhere, and then our system shuts down: since Docker Compose cannot persist configuration changes to its compose file, our configuration will be restored to the original one on the next docker-compose up command. There are great configuration management tools for such problems, like Puppet or Chef [Spinellis 2012]. Of course, when it comes down to scalability, we have to use Docker Swarm or Kubernetes client libraries, etc., for managing version replacement on a multi-host system.</p>
      <p>The concept is proven, and we are excited to make it work at the enterprise level. There could be a great A/B test deployment service on Amazon AWS or Microsoft Azure; those companies have the resources and technology to create a powerful analytics system with an integrated automatic deployment solution.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <given-names>David</given-names>
            <surname>Bernstein</surname>
          </string-name>
          .
          <year>2014</year>
          .
          <article-title>Containers and Cloud: From LXC to Docker to Kubernetes</article-title>
          .
          <source>IEEE Cloud Computing</source>
          <volume>1</volume>
          ,
          <issue>3</issue>
          (Sept
          <year>2014</year>
          ),
          <fpage>81</fpage>
          -
          <lpage>84</lpage>
          . DOI:http://dx.doi.org/10.1109/MCC.2014.51
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <given-names>Daniel</given-names>
            <surname>Cukier</surname>
          </string-name>
          .
          <year>2013</year>
          .
          <article-title>DevOps Patterns to Scale Web Applications Using Cloud Services</article-title>
          .
          <source>In Proceedings of the 2013 Companion Publication for Conference on Systems, Programming</source>
          , &amp;
          <article-title>Applications: Software for Humanity (SPLASH '13)</article-title>
          . ACM, New York, NY, USA,
          <fpage>143</fpage>
          -
          <lpage>152</lpage>
          . DOI:http://dx.doi.org/10.1145/2508075.2508432
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <collab>Docker Inc.</collab>
          .
          <year>2017</year>
          . Docker Documentation. https://docs.docker.com/. (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <given-names>Ron</given-names>
            <surname>Kohavi</surname>
          </string-name>
          , Roger Longbotham, Dan Sommerfield, and
          <string-name>
            <given-names>Randal M.</given-names>
            <surname>Henne</surname>
          </string-name>
          .
          <year>2009</year>
          .
          <article-title>Controlled experiments on the web: survey and practical guide</article-title>
          .
          <source>Data Mining and Knowledge Discovery</source>
          <volume>18</volume>
          ,
          <issue>1</issue>
          (
          <year>2009</year>
          ),
          <fpage>140</fpage>
          -
          <lpage>181</lpage>
          . DOI:http://dx.doi.org/10.1007/s10618-008-0114-1
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name>
            <given-names>Abdelkader</given-names>
            <surname>Lahmadi</surname>
          </string-name>
          and Frédéric Beck.
          <year>2015</year>
          .
          <article-title>Powering Monitoring Analytics with ELK stack</article-title>
          .
          <source>9th International Conference on Autonomous Infrastructure, Management and Security (AIMS 2015)</source>
          (June
          <year>2015</year>
          ). https://hal.inria.fr/hal-01212015
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <string-name>
            <given-names>Lucy Ellen</given-names>
            <surname>Lwakatare</surname>
          </string-name>
          , Pasi Kuvaja, and
          <string-name>
            <given-names>Markku</given-names>
            <surname>Oivo</surname>
          </string-name>
          .
          <year>2015</year>
          .
          <article-title>Dimensions of DevOps</article-title>
          .
          <source>In Agile Processes in Software Engineering and Extreme Programming: 16th International Conference, XP 2015</source>
          , Helsinki, Finland, May
          <volume>25</volume>
          -29,
          <year>2015</year>
          , Proceedings, Casper Lassenius, Torgeir Dingsøyr, and Maria Paasivaara (Eds.). Springer International Publishing, Cham,
          <fpage>212</fpage>
          -
          <lpage>217</lpage>
          . DOI:http://dx.doi.org/10.1007/978-3-319-18612-2_19
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <given-names>Tarun</given-names>
            <surname>Prakash</surname>
          </string-name>
          , Misha Kakkar, and
          <string-name>
            <given-names>Kritika</given-names>
            <surname>Patel</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Geo-identification of web users through logs using ELK stack</article-title>
          .
          <source>In 2016 6th International Conference - Cloud System and Big Data Engineering (Confluence)</source>
          .
          <fpage>606</fpage>
          -
          <lpage>610</lpage>
          . DOI:http://dx.doi.org/10.1109/CONFLUENCE.2016.7508191
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name>
            <given-names>James</given-names>
            <surname>Roche</surname>
          </string-name>
          .
          <year>2013</year>
          .
          <article-title>Adopting DevOps Practices in Quality Assurance</article-title>
          .
          <source>Commun. ACM</source>
          <volume>56</volume>
          ,
          <issue>11</issue>
          (Nov.
          <year>2013</year>
          ),
          <fpage>38</fpage>
          -
          <lpage>43</lpage>
          . DOI:http://dx.doi.org/10.1145/2524713.2524721
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name>
            <given-names>Andreas</given-names>
            <surname>Schaefer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Marc</given-names>
            <surname>Reichenbach</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Dietmar</given-names>
            <surname>Fey</surname>
          </string-name>
          .
          <year>2013</year>
          .
          <article-title>Continuous Integration and Automation for DevOps</article-title>
          .
          <source>In IAENG Transactions on Engineering Technologies: Special Edition of the World Congress on Engineering and Computer Science</source>
          <year>2011</year>
          , Kon Haeng Kim, Sio-Iong Ao, and B. Burghard Rieger
          (Eds.). Springer Netherlands, Dordrecht,
          <fpage>345</fpage>
          -
          <lpage>358</lpage>
          . DOI:http://dx.doi.org/10.1007/978-94-007-4786-9_28
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name>
            <given-names>Stephen</given-names>
            <surname>Soltesz</surname>
          </string-name>
          , Herbert Pötzl, Marc E. Fiuczynski, Andy Bavier, and
          <string-name>
            <given-names>Larry</given-names>
            <surname>Peterson</surname>
          </string-name>
          .
          <year>2007</year>
          .
          <article-title>Container-based Operating System Virtualization: A Scalable, High-performance Alternative to Hypervisors</article-title>
          .
          <source>SIGOPS Oper. Syst. Rev. 41</source>
          ,
          <issue>3</issue>
          (March
          <year>2007</year>
          ),
          <fpage>275</fpage>
          -
          <lpage>287</lpage>
          . DOI:http://dx.doi.org/10.1145/1272998.1273025
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <string-name>
            <given-names>Diomidis</given-names>
            <surname>Spinellis</surname>
          </string-name>
          .
          <year>2012</year>
          .
          <article-title>Don't Install Software by Hand</article-title>
          .
          <source>IEEE Software 29, 4 (July</source>
          <year>2012</year>
          ),
          <fpage>86</fpage>
          -
          <lpage>87</lpage>
          . DOI:http://dx.doi.org/10.1109/MS.2012.85
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <string-name>
            <given-names>Margaret-Anne</given-names>
            <surname>Storey</surname>
          </string-name>
          , Christoph Treude, Arie van Deursen, and Li-Te Cheng.
          <year>2010</year>
          .
          <article-title>The Impact of Social Media on Software Engineering Practices and Tools</article-title>
          .
          <source>In Proceedings of the FSE/SDP Workshop on Future of Software Engineering Research (FoSER '10)</source>
          . ACM, New York, NY, USA,
          <fpage>359</fpage>
          -
          <lpage>364</lpage>
          . DOI:http://dx.doi.org/10.1145/1882362.1882435
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <string-name>
            <given-names>Deepak</given-names>
            <surname>Vohra</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Using an HA Master with OpenShift</article-title>
          . Apress, Berkeley, CA,
          <fpage>335</fpage>
          -
          <lpage>353</lpage>
          . DOI:http://dx.doi.org/10.1007/978-1-4842-2598-1_15
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          <string-name>
            <given-names>Ya</given-names>
            <surname>Xu</surname>
          </string-name>
          , Nanyu Chen, Addrian Fernandez, Omar Sinno, and
          <string-name>
            <given-names>Anmol</given-names>
            <surname>Bhasin</surname>
          </string-name>
          .
          <year>2015</year>
          .
          <article-title>From Infrastructure to Culture: A/B Testing Challenges in Large Scale Social Networks</article-title>
          .
          <source>In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '15)</source>
          . ACM, New York, NY, USA,
          <fpage>2227</fpage>
          -
          <lpage>2236</lpage>
          . DOI:http://dx.doi.org/10.1145/2783258.2788602
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>