<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Fault Tolerant Distributed Join Algorithm in RDBMS</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Saint-Petersburg State University</institution>
        </aff>
      </contrib-group>
      <abstract>
<p>Many business intelligence applications compute over vast volumes of data. Most of these applications handle queries with operators such as aggregation and join. State-of-the-art distributed RDBMS cope with these tasks under the assumption that no errors occur. Unfortunately, distributed database management systems suffer from failures. A failure forces a query that joins large tables to be re-executed, so an enormous volume of resources must be spent again. In this paper we propose a new fault tolerant join algorithm for distributed RDBMS. The results obtained so far and a detailed plan of further research are discussed.</p>
      </abstract>
      <kwd-group>
<kwd>Databases</kwd>
        <kwd>Replication</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Nowadays, most RDBMS work under the assumption that no failures of any
kind may occur. If a database fails, the query has to be re-executed. In this work, we
assume that a client runs a query with a join over an enormous volume of data in two
tables dispersed among many servers.</p>
      <p>
        Distributed systems based on Map-Reduce were invented to help handle
vast volumes of data on unstable clusters. Such systems do not
interrupt the execution of a query. Instead, they re-execute the failed
sub-tasks. Unfortunately, Map-Reduce systems do not do this in the best way [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>The goal of this research is to devise and implement a fault tolerant
distributed join algorithm for unstable RDBMS. Existing RDBMS solutions are not
suitable because queries have to be re-executed when a failure
occurs. Map-Reduce solutions are capable of recovering failed tasks but do not
do it effectively. The main task of our work is to find an intermediate solution.</p>
      <p>This paper is organized as follows. Section 2 defines the key terms and
notations used in this work. The problem statement and research questions are defined in
Section 3. Section 4 provides a review of state-of-the-art related work. The research
process, results, and further plans are described in Sections 5 and 6. The paper
is concluded by Section 7.</p>
    </sec>
    <sec id="sec-2">
      <title>The Key Terms and Notations Used</title>
      <p>The following definitions and notations are used in this paper.</p>
      <p>Consider the definition of distributed database systems. A distributed database
system is a database management system consisting of local database systems.
Each of these local databases has its own disks. The databases are dispersed
over a network of interconnected computers. In this paper, the configuration of the
system is based on a shared-nothing architecture.</p>
      <p>There is a single entry point named the coordinator. It receives client queries and
returns the outcome of an executed query. Keepers are nodes where data is stored.
Workers are nodes where the join operation is performed. |W| stands for the number of
workers in the configuration of a system. R and S are the relations to be joined.
In this paper, the term classical algorithm refers to the classical, unstable
(non-fault-tolerant) distributed join algorithm.</p>
    </sec>
    <sec id="sec-3">
      <title>Problem Statement</title>
      <p>
        There are several causes due to which the classical distributed join may be interrupted [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ]:
– The coordinator becomes unreachable because of a communication or a
system failure.
– A media or system failure occurs at a keeper or a worker site.
– A site is suddenly turned off during the execution of a query.
In this work, the main focus is on devising an algorithm that can
detect and properly handle the causes listed above. The parallel
distributed join algorithms [4–7] for different sorts of systems assume that
the system is failure-free. The examined works do not consider the task of handling failures
from the list above. In contrast to failure-free RDBMS algorithms, many
research efforts [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] are dedicated to detecting and properly handling failures
in Hadoop.
      </p>
      <p>Based on the above, the following research questions are defined in this
paper:
– Will doubling tasks increase the execution time of a query with a join?
– What patterns and mechanisms exist for identifying and monitoring the
availability of a site?
– How effectively do the existing fault tolerant algorithms of Hadoop do their
work?
– How can data replication be used in order to design and implement a fault
tolerant join algorithm?</p>
    </sec>
    <sec id="sec-4">
      <title>State of the Art</title>
      <p>Two main parts of this work are considered: join algorithms in Map-Reduce
and RDBMS, and mechanisms ensuring fault tolerance.</p>
      <sec id="sec-4-1">
        <title>Join algorithms</title>
        <p>
          A competitive analysis and description of Map-Reduce join algorithms are
presented in [
          <xref ref-type="bibr" rid="ref10 ref9">9, 10</xref>
          ].
        </p>
        <p>
          Repartition join is a simple algorithm which performs data pre-processing in
the Map phase, while the join itself is done during the Reduce phase. The algorithm has
several drawbacks: it is more time consuming and it requires a lot
of memory during the reduce phase. Repartition join is widely used in [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ].
Broadcast join works as follows. It replicates the smaller input to every node and performs
the join during the map phase. The disadvantage of this algorithm is that if the smaller
input does not fit into memory to build a hash table, an additional join phase
must be performed [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. This algorithm is used in Hadoop Pig [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ].
The semi-join algorithm is used to prevent transferring data that does not take part
in the join phase. Deleting unused tuples reduces the amount of data to be
transmitted and joined. The disadvantage of this algorithm is that an extra phase
is required to perform the join. Moreover, additional scanning is needed to drop
unwanted data.
        </p>
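        <p>For illustration, the following minimal Python sketch emulates the repartition join idea on a single machine: the map step tags every tuple with its source relation and emits it under its join key, and the reduce step combines the R-tagged and S-tagged tuples for each key. The function and variable names are illustrative, not taken from the cited systems.</p>
        <preformat>
from collections import defaultdict
from itertools import product

def repartition_join(r_tuples, s_tuples, r_key, s_key):
    """Emulate a Map-Reduce repartition join; r_key/s_key extract the join attribute."""
    # Map phase: tag each tuple with its source relation and group it by join key
    # (this models the shuffle between the map and reduce phases).
    buckets = defaultdict(lambda: {"R": [], "S": []})
    for t in r_tuples:
        buckets[r_key(t)]["R"].append(t)
    for t in s_tuples:
        buckets[s_key(t)]["S"].append(t)
    # Reduce phase: for every key, join the R side with the S side.
    # Both sides must be buffered, which is the memory drawback noted above.
    for key, sides in buckets.items():
        for r, s in product(sides["R"], sides["S"]):
            yield key, r, s

# Usage: R(id, name) joined with S(id, amount) on id.
R = [(1, "alice"), (2, "bob")]
S = [(1, 100), (1, 150), (3, 7)]
print(list(repartition_join(R, S, r_key=lambda t: t[0], s_key=lambda t: t[0])))
        </preformat>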
      </sec>
      <sec id="sec-4-2">
        <title>Fault-tolerance mechanisms</title>
        <p>
          In [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] the authors proposed a strategy of doubling each task during the query
execution. This means that if one of the tasks fails, the second, backup task will
still finish on time. The strategy reduces the job completion time at the price of using
larger amounts of resources. Tasks are doubled at both the map and reduce phases, so
doubling the tasks leads to approximately doubled resource usage.
        </p>
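        <p>As a rough illustration of this doubling idea (a sketch, not the implementation from [13]), the following Python fragment launches the same task twice and returns whichever copy finishes first; the helper name run_with_backup is hypothetical.</p>
        <preformat>
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def run_with_backup(task, *args):
    """Run a task together with an identical backup copy and
    return the result of whichever copy finishes first."""
    pool = ThreadPoolExecutor(max_workers=2)
    primary = pool.submit(task, *args)
    backup = pool.submit(task, *args)   # doubled task: roughly doubled resources
    done, _ = wait([primary, backup], return_when=FIRST_COMPLETED)
    pool.shutdown(wait=False)           # do not wait for the slower copy
    return next(iter(done)).result()

# Usage: a result is returned as soon as either copy of the task completes.
print(run_with_backup(sum, range(1_000_000)))
        </preformat>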
        <p>
          Haopeng Chen and Hao Zhu proposed two strategies to improve failure
detection in Hadoop via heartbeat messages on the worker side [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]. The first
strategy is an adaptive interval which dynamically configures the expiry time
according to the size of the job. The second strategy is to evaluate the
reputation of each worker according to the fetch-error reports received from
the other workers. When a worker fails, its reputation is lowered. Once the reputation
reaches a certain bound, the master node marks this worker as failed.
Another research effort [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ] proposes a solution based on the consensus algorithm
Raft. The key point of such a system is that each node periodically transfers messages
with metadata to the other sites. During the execution of a client query, a quorum
must be reached to handle the query fully. The Raft algorithm is successfully
applied in the well-known distributed system CockroachDB [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ].
        </p>
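        <p>A minimal Python sketch of the reputation idea from [14] (the class name, starting value, and bound are illustrative, not the authors' code): the master lowers a worker's reputation on every reported fetch error and marks the worker as failed once the reputation reaches the bound.</p>
        <preformat>
class ReputationTracker:
    """Track worker reputations at the master node."""

    def __init__(self, initial=10, bound=0):
        self.initial = initial      # illustrative starting reputation
        self.bound = bound          # a worker is declared failed at this bound
        self.reputation = {}
        self.failed = set()

    def report_fetch_error(self, worker_id):
        """Called when another worker reports a failed fetch from worker_id."""
        rep = self.reputation.get(worker_id, self.initial) - 1
        self.reputation[worker_id] = rep
        if rep == self.bound:
            self.failed.add(worker_id)   # the master marks the worker as failed

    def is_failed(self, worker_id):
        return worker_id in self.failed

# Usage: ten reported fetch errors drive worker "w3" down to the bound.
tracker = ReputationTracker()
for _ in range(10):
    tracker.report_fetch_error("w3")
print(tracker.is_failed("w3"))   # True
        </preformat>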
        <p>
          To remove the single point of failure in Hadoop, a new approach based on metadata
replication was proposed in [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]. The solution involves three major phases. In the
initialization phase, each secondary node registers with the primary node and
catches up with the initial metadata of the active (primary) node. In the replication phase,
metadata such as outstanding operations and lease states is replicated across
all sites. During the fail-over phase, the newly elected primary (a former standby) node takes
over all communications.
        </p>
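        <p>The three phases can be pictured with the following toy Python sketch (illustrative classes, not the system from [17]): secondaries register with the primary and copy its initial metadata, every update is then replicated to all registered sites, and on fail-over one secondary is promoted and takes over.</p>
        <preformat>
class MetadataNode:
    def __init__(self, name):
        self.name = name
        self.metadata = {}          # e.g. outstanding operations, lease states
        self.secondaries = []

    # Initialization phase: a secondary registers and catches up with the primary.
    def register(self, secondary):
        secondary.metadata = dict(self.metadata)
        self.secondaries.append(secondary)

    # Replication phase: every metadata update is pushed to all secondaries.
    def update(self, key, value):
        self.metadata[key] = value
        for node in self.secondaries:
            node.metadata[key] = value

# Fail-over phase: a standby is promoted and takes over communications.
def fail_over(failed_primary):
    new_primary = failed_primary.secondaries[0]
    new_primary.secondaries = [n for n in failed_primary.secondaries if n is not new_primary]
    return new_primary

primary, standby = MetadataNode("primary"), MetadataNode("standby")
primary.register(standby)
primary.update("lease:block42", "worker-7")
primary = fail_over(primary)        # the standby now serves the same metadata
print(primary.name, primary.metadata)
        </preformat>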
        <p>
          To protect stored data from being corrupted or lost, a mechanism of full data
replication has to be used. Initially, the data can be horizontally partitioned. As an
example, PostgreSQL [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ] provides a model of streaming replication. There are two
roles defined in the replication mechanism. The first role is the master. The master server
receives client queries, gathers data from the other servers, and propagates WAL
entries across the involved servers. The second role is the standby. It receives the replicated
data and stores it on its own disks.
        </p>
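        <p>A minimal configuration sketch of such a master/standby streaming setup is shown below; the host name, replication user, and data directory are placeholders, and the PostgreSQL documentation remains the authoritative reference for these settings.</p>
        <preformat>
# postgresql.conf on the master (illustrative values):
wal_level = replica          # write enough WAL for a standby to replay
max_wal_senders = 5          # allow streaming replication connections

# pg_hba.conf on the master: allow the replication user from the standby host
# host  replication  replicator  192.168.0.0/24  md5

# On the standby, clone the master; the -R flag writes primary_conninfo
# and creates standby.signal so the node starts as a standby:
# pg_basebackup -h master-host -U replicator -D /var/lib/postgresql/data -R
hot_standby = on             # the standby may serve read-only queries
        </preformat>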
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Evaluation Plan and Preliminary Results</title>
      <p>Given the problem and research questions, the following plan has been
carried out:
1. Conducted a survey of academic works in this field. Reviewed the abilities
of state-of-the-art RDBMS and NoSQL solutions and checked how these
solutions handle fault occurrences.
2. Reviewed distributed hash-join algorithms. Outlined a cost model and then
evaluated the distributed algorithms by applying the cost model to the reviewed
algorithms. Highlighted possible faults emerging during the execution of join
algorithms.
3. Came up with the fault tolerant join algorithm. Applied the cost model and
conducted a comparison of our algorithm with the unstable distributed join
algorithm.</p>
      <sec id="sec-5-1">
        <title>Fault Tolerant Distributed Join Algorithm</title>
        <p>
          As the basis, the classical distributed hash-join algorithm has been taken from
work [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ]. The fault tolerant distributed hash-join algorithm is similar to the classical
hash-join for distributed database systems in a shared-nothing architecture.
1. Building. The coordinator receives a client query. To initiate the build phase, it
sends messages with the client query to all nodes. Once the messages are
sent, the coordinator sets the status of the client query to processing
for all keepers.
2. Each keeper reads its partitions of relation R and applies a hash function h1 to
the join attribute of each tuple. Hash function h1 has the range of values
0 ... |W| - 1. If a tuple hashes to value i, then it is sent to workers i mod |W| and (i + 1)
mod |W|. For the latter worker, the message is marked as containing reserved
data. Once a keeper finishes reading its partitions of relation R, it notifies
the coordinator about the status of its work.
3. Each worker builds a hash table, allocated in memory, and fills it with the
tuples received from step 2. In this step, each worker uses a hash
function h2 different from the one used in step 2.
4. Once all keepers have stopped reading their partitions of relation R, the
coordinator initiates the probing phase by sending notifications to the keepers.
5. Probing. Each keeper reads its partitions of relation S and applies the hash function
h1 to the join attribute of each tuple as it did in step 2. If a tuple hashes
to value i, then it is sent to workers i mod |W| and (i + 1) mod |W|.
6. Worker i mod |W| receives a tuple of relation S and probes the hash table built
during the building phase. If a match is found, the tuples are joined and an output
tuple is generated. The other worker, (i + 1) mod |W|, puts the reserved data on its disk.
7. Once an output tuple is generated, the worker sends a heartbeat message
to the following worker. In this message, it indicates the position of the last
successfully joined tuple of relation S.
        </p>
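        <p>To make the routing in steps 2 and 5 concrete, the following single-process Python sketch shows the tuple placement and the build/probe logic; it is a simplification of the algorithm above (hypothetical helper names, no messaging, heartbeats, or recovery).</p>
        <preformat>
NUM_WORKERS = 4                      # |W|

def h1(join_attr):
    """Partitioning hash with the range 0 ... |W| - 1."""
    return hash(join_attr) % NUM_WORKERS

class Worker:
    def __init__(self):
        self.hash_table = {}         # in-memory table, stands in for h2
        self.reserved = []           # backup copies kept for the neighbouring worker

    def build(self, r_tuple, key):
        self.hash_table.setdefault(key, []).append(r_tuple)

    def probe(self, s_tuple, key):
        # Join the S tuple with every matching R tuple.
        return [(r, s_tuple) for r in self.hash_table.get(key, [])]

workers = [Worker() for _ in range(NUM_WORKERS)]

def route(t, key, phase):
    i = h1(key)
    primary = workers[i % NUM_WORKERS]
    backup = workers[(i + 1) % NUM_WORKERS]
    if phase == "build":             # steps 2-3: fill the in-memory hash tables
        primary.build(t, key)
        backup.reserved.append(("R", key, t))
    else:                            # steps 5-6: probe and keep the reserved data
        backup.reserved.append(("S", key, t))
        return primary.probe(t, key)

# Usage: R(id, name) joined with S(id, amount) on id.
for r in [(1, "alice"), (2, "bob")]:
    route(r, r[0], "build")
for s in [(1, 100), (2, 200), (1, 150)]:
    print(route(s, s[0], "probe"))
        </preformat>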
        <p>[Fig. 1. The scheme of the fault tolerant distributed join algorithm: the coordinator C with a reserved coordinator RC, keepers K1 ... KM, and workers W1 ... WN arranged in a ring.]</p>
        <p>
          Figure 1 shows the scheme of the fault tolerant distributed join
algorithm. A reserved coordinator RC is added. It synchronizes with the
primary coordinator C. Workers and keepers comprise a ring of nodes. Each site
is aware of the following node, which allows a site to submit information about the work
done during the join to the following node. If worker i mod |W| fails,
worker (i + 1) mod |W| takes over the tasks of the failed worker. If keeper i mod |K|
fails, keeper (i + 1) mod |K| takes over the jobs of the failed keeper.
In multi-objective query optimization, distributed database systems look for
a Pareto set of solutions, i.e. the best possible trade-offs among the objective
functions [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ]. Objective functions might be the total time of query execution, I/O
operations, CPU instructions, and the number of messages to be transmitted. In
this work we found a trade-off between the least execution time in case
of a failure occurrence and the extra resources needed to recover failed tasks. In
distributed database systems the total time of query execution is expressed through
a mathematical model of a weighted average. This model consists of the sum of the time to
perform I/O operations, the CPU instructions, and the time to exchange the
messages among the involved sites. Our work considers evaluating the cost of the total time
of the query execution.
        </p>
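        <p>The weighted-average cost model mentioned above can be written as a simple function; the weights and the component costs in the example are illustrative placeholders, not measured values.</p>
        <preformat>
def total_execution_time(io_time, cpu_time, msg_time,
                         w_io=1.0, w_cpu=1.0, w_msg=1.0):
    """Weighted sum of I/O, CPU, and message-exchange costs."""
    return w_io * io_time + w_cpu * cpu_time + w_msg * msg_time

# Example: compare a failure-free run with a run that redoes part of the work
# on a backup worker (the numbers are made up for illustration only).
fail_free = total_execution_time(io_time=12.0, cpu_time=5.0, msg_time=3.0)
with_recovery = total_execution_time(io_time=13.5, cpu_time=5.5, msg_time=4.0)
print(fail_free, with_recovery)
        </preformat>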
        <p>Figures 2 and 3 depict the execution time of both algorithms in different cases.
The first case is failure-free. The other cases simulate a failed keeper, a failed
worker, and a failed keeper together with a failed worker. In the failure-free case, the classical
algorithm has an advantage over the fault tolerant algorithm. As for the remaining cases,
the fault tolerant algorithm takes on average 9% less time to perform a client
query even if at least one site is down.</p>
        <p>The further research plan is as follows:
– Design and implement the distributed fault tolerant hash-join algorithm.
Conduct experiments with other solutions.
– Compare the performance of our extension with Hadoop, making
use of different volumes of data.
– Evaluate and compare I/O, CPU, and memory costs.
– Consider combining the developed fault-tolerant algorithm with other join
algorithms.
– Define benchmarks to evaluate and compare the developed fault tolerant
algorithms with existing solutions.</p>
        <p>As an example, the developed algorithms might be compared with the Hadoop
MapReduce join algorithms. The evaluation should be performed with different volumes
of data.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Summary</title>
      <p>In this paper a fault tolerant distributed join algorithm has been proposed.
The results of the comparison demonstrate that the proposed algorithm needs less time
to re-execute a failed task at a failed site than is needed to re-execute the whole
query using the classical algorithm. Future work is also outlined.</p>
      <p>Acknowledgements. The author thanks Boris Novikov for his helpful comments
that have significantly improved this paper.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>Christos</given-names>
            <surname>Doulkeridis</surname>
          </string-name>
          and
          <string-name>
            <given-names>Kjetil</given-names>
            <surname>Nørvåg</surname>
          </string-name>
          .
          <article-title>A survey of large-scale analytical query processing in mapreduce</article-title>
          .
          <source>The VLDB Journal</source>
          ,
          <volume>23</volume>
          (
          <issue>3</issue>
          ):
          <volume>355</volume>
–
          <fpage>380</fpage>
          ,
          <year>June 2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>Algirdas</given-names>
            <surname>Avizienis</surname>
          </string-name>
          ,
          <string-name>
            <surname>Jean-Claude</surname>
            <given-names>Laprie</given-names>
          </string-name>
          , Brian Randell, and
          <string-name>
            <given-names>Carl</given-names>
            <surname>Landwehr</surname>
          </string-name>
          .
          <article-title>Basic concepts and taxonomy of dependable and secure computing</article-title>
          .
          <source>IEEE Trans. Dependable Secur. Comput.</source>
          ,
          <volume>1</volume>
          (
          <issue>1</issue>
          ):
          <volume>11</volume>
–
          <fpage>33</fpage>
          ,
          <year>January 2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>Jim</given-names>
            <surname>Gray</surname>
          </string-name>
          ,
          <string-name>
            <surname>Paul McJones</surname>
          </string-name>
          , Mike Blasgen, Bruce Lindsay, Raymond Lorie, Tom Price,
          <string-name>
            <given-names>Franco</given-names>
            <surname>Putzolu</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Irving</given-names>
            <surname>Traiger</surname>
          </string-name>
          .
          <article-title>The recovery manager of the system r database manager</article-title>
          .
          <source>ACM Comput. Surv.</source>
          ,
          <volume>13</volume>
          (
          <issue>2</issue>
          ):
          <volume>223</volume>
–
          <fpage>242</fpage>
          ,
          <year>June 1981</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>Cagri</given-names>
            <surname>Balkesen</surname>
          </string-name>
          , Gustavo Alonso, Jens Teubner, and
          <string-name>
            <surname>M.</surname>
          </string-name>
          <article-title>Tamer Ozsu. Multi-core, main-memory joins: Sort vs. hash revisited</article-title>
          .
          <source>Proc. VLDB Endow</source>
          .,
          <volume>7</volume>
          (
          <issue>1</issue>
          ):
          <volume>85</volume>
–
          <fpage>96</fpage>
          ,
          <year>September 2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>Claude</given-names>
            <surname>Barthels</surname>
          </string-name>
          , Ingo Muller, Timo Schneider,
          <string-name>
            <given-names>Gustavo</given-names>
            <surname>Alonso</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Torsten</given-names>
            <surname>Hoefler</surname>
          </string-name>
          .
          <article-title>Distributed join algorithms on thousands of cores</article-title>
          .
          <source>Proc. VLDB Endow</source>
          .,
          <volume>10</volume>
          (
          <issue>5</issue>
          ):
          <volume>517</volume>
–
          <fpage>528</fpage>
          ,
          <year>January 2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>Georges</given-names>
            <surname>Gardarin</surname>
          </string-name>
          and
          <string-name>
            <given-names>Patrick</given-names>
            <surname>Valduriez</surname>
          </string-name>
          .
          <article-title>Join and semijoin algorithms for a multiprocessor database machine</article-title>
          .
          <source>ACM Transactions on Database Systems</source>
          ,
          <volume>9</volume>
          ,
          <string-name>
            <surname>03</surname>
          </string-name>
          <year>1984</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>J.</given-names>
            <surname>Teubner</surname>
          </string-name>
          and
          <string-name>
            <given-names>G.</given-names>
            <surname>Alonso</surname>
          </string-name>
          .
          <article-title>Main-memory hash joins on modern processor architectures</article-title>
          .
          <source>IEEE Transactions on Knowledge and Data Engineering</source>
          ,
          <volume>27</volume>
          (
          <issue>7</issue>
          ):
          <volume>1754</volume>
–
          <fpage>1766</fpage>
          ,
          <year>July 2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>Bunjamin</given-names>
            <surname>Memishi</surname>
          </string-name>
          , Shadi Ibrahim,
          <article-title>María Pérez, and Gabriel Antoniu</article-title>
          .
          <source>Fault Tolerance in MapReduce: A Survey</source>
          , pages
          <volume>205</volume>
–
          <fpage>240</fpage>
          . 10
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>Spyros</given-names>
            <surname>Blanas</surname>
          </string-name>
          ,
          <string-name>
            <surname>Jignesh M. Patel</surname>
            , Vuk Ercegovac, Jun Rao,
            <given-names>Eugene J.</given-names>
          </string-name>
          <string-name>
            <surname>Shekita</surname>
            , and
            <given-names>Yuanyuan</given-names>
          </string-name>
          <string-name>
            <surname>Tian</surname>
          </string-name>
          .
          <article-title>A comparison of join algorithms for log processing in mapreduce</article-title>
          .
          <source>In Proceedings of the 2010 ACM SIGMOD International Conference on Management of Data, SIGMOD '10, page</source>
          <volume>975</volume>
–
          <fpage>986</fpage>
          , New York, NY, USA,
          <year>2010</year>
          .
          <article-title>Association for Computing Machinery</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>A.</given-names>
            <surname>Pigul</surname>
          </string-name>
          .
          <article-title>Comparative study parallel join algorithms for mapreduce environment</article-title>
          .
          <source>Proceedings of the Institute for System Programming of RAS</source>
          ,
          <volume>23</volume>
          :
          <fpage>285</fpage>
–
          <fpage>306</fpage>
          , 01
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>APACHE</surname>
            <given-names>HIVE</given-names>
          </string-name>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>APACHE</surname>
            <given-names>PIG</given-names>
          </string-name>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Pedro</surname>
            <given-names>Costa</given-names>
          </string-name>
          , Marcelo Pasin, Alysson Bessani, and
          <string-name>
            <given-names>Miguel</given-names>
            <surname>Correia</surname>
          </string-name>
          .
          <article-title>Byzantine fault-tolerant mapreduce: Faults are not just crashes</article-title>
          .
          <source>pages 32–39</source>
          , 11
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <given-names>Hao</given-names>
            <surname>Zhu</surname>
          </string-name>
          and
          <string-name>
            <given-names>Haopeng</given-names>
            <surname>Chen</surname>
          </string-name>
          .
          <article-title>Adaptive failure detection via heartbeat under hadoop</article-title>
          .
          <source>pages 231–238</source>
          , 12
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15. Diego Ongaro and John Ousterhout.
          <article-title>In search of an understandable consensus algorithm</article-title>
          .
          <source>In Proceedings of the 2014 USENIX Conference on USENIX Annual Technical Conference</source>
          , USENIX ATC'
          <volume>14</volume>
          , pages
          <fpage>305</fpage>
–
          <fpage>320</fpage>
          , Berkeley, CA, USA,
          <year>2014</year>
          . USENIX Association.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16. CockroachDB official website,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Feng</surname>
            <given-names>Wang</given-names>
          </string-name>
          , Jie Qiu, Jie Yang, Bo Dong,
          <string-name>
            <given-names>Xinhui</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>and Ying</given-names>
            <surname>Li</surname>
          </string-name>
          .
          <article-title>Hadoop high availability through metadata replication</article-title>
          .
          <source>In Proceedings of the First International Workshop on Cloud Data Management, CloudDB '09</source>
          , pages
          <fpage>37</fpage>
–
          <fpage>44</fpage>
          , New York, NY, USA,
          <year>2009</year>
          . ACM.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18. PostgreSQL official website,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>David J. DeWitt</surname>
            , Jeffrey
            <given-names>F.</given-names>
          </string-name>
          <string-name>
            <surname>Naughton</surname>
          </string-name>
          , Donovan A.
          <string-name>
            <surname>Schneider</surname>
            , and
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Seshadri</surname>
          </string-name>
          .
          <article-title>Practical skew handling in parallel joins</article-title>
          .
          <source>In Proceedings of the 18th International Conference on Very Large Data Bases, VLDB '92</source>
          , pages
          <fpage>27</fpage>
–
          <fpage>40</fpage>
          , San Francisco, CA, USA,
          <year>1992</year>
          . Morgan Kaufmann Publishers Inc.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Vikram</surname>
          </string-name>
          <article-title>Singh. Multi-objective parametric query optimization for distributed database systems</article-title>
          . In Millie Pant, Kusum Deep, Jagdish Chand Bansal, Atulya Nagar, and Kedar Nath Das, editors,
          <source>Proceedings of Fifth International Conference on Soft Computing for Problem Solving</source>
          , pages
          <volume>219</volume>
–
          <fpage>233</fpage>
          ,
          <string-name>
            <surname>Singapore</surname>
          </string-name>
          ,
          <year>2016</year>
          . Springer Singapore.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>