=Paper=
{{Paper
|id=Vol-1864/paper_18
|storemode=property
|title=A study of PosDB Performance in a Distributed Environment
|pdfUrl=https://ceur-ws.org/Vol-1864/paper_18.pdf
|volume=Vol-1864
|authors=George Chernishev,Viacheslav Galaktionov,Valentin Grigorev,Evgeniy Klyuchikov,Kirill Smirnov
}}
==A study of PosDB Performance in a Distributed Environment==
George Chernishev, Saint-Petersburg State University; JetBrains Research. Email: chernishev@gmail.com
Vyacheslav Galaktionov, Saint-Petersburg State University. Email: viacheslav.galaktionov@gmail.com
Valentin Grigorev, Saint-Petersburg State University. Email: valentin.d.grigorev@gmail.com
Evgeniy Klyuchikov, Saint-Petersburg State University. Email: evgeniy.klyuchikov@gmail.com
Kirill Smirnov, Saint-Petersburg State University. Email: kirill.k.smirnov@math.spbu.ru
Abstract—PosDB is a new disk-based distributed column-store relational engine aimed at research purposes. It uses the Volcano pull-based model and late materialization for query processing, and join indexes for internal data representation. In its current state PosDB is capable of both local and distributed processing of all SSB (Star Schema Benchmark) queries. Data, as well as query plans, can be distributed among network nodes in our system. Data distribution is performed by horizontal partitioning. In this paper we experimentally evaluate the performance of our system in a distributed environment. We analyze system performance and report a number of metrics, such as speedup and scaleup. For our evaluation we use the standard benchmark — the SSB.
I. INTRODUCTION

Column-stores have been actively investigated for the last ten years. Many open-source [1], [2], [3], [4], [5] and commercial [6], [7], [8] products with different features and aims have been developed. The core design issues such as compression [9], [10], materialization strategies [11], [12] and result reuse [13] have received significant attention. Nevertheless, the distribution of data and control in disk-based column-store systems has not been studied at all.

The reason for this is that none of the open-source systems are truly distributed, although some of them [5] support mediator-based [14] distribution. Several commercial systems, such as Vertica [6], are distributed but closed-source. To the best of our knowledge, no investigation of distribution aspects in column-stores has been conducted.

To address this problem, we are developing a disk-based distributed relational column-store engine — PosDB. In its current state it is based on the Volcano pull-based model [15] and late materialization. Data distribution is supported in the form of horizontal per-table partitioning. Each fragment can additionally be replicated on an arbitrary number of nodes, i.e. our system is partially replicated [16]. Control (query) distribution is also supported: parts of a query plan can be sent to a remote node for execution.

In our earlier studies [17], [18] we described the opportunities offered by such a system and sketched its design. Later, an initial version of our system, PosDB, was presented and its high-level features were described [19].

In this paper, we present the results of the first distributed experiments with PosDB. We evaluate system performance by studying several performance metrics, namely speedup and scaleup. For the evaluation we use a standard OLAP benchmark — the Star Schema Benchmark [20].

The paper is structured as follows. A short survey of distribution technology in databases is presented in Section II. The architecture of the system is described in detail in Section III. In Section IV we discuss the metrics used (speedup and scaleup). The experimental evaluation and its results are presented in Section V.

II. RELATED WORK

There is a shortage of distribution-related studies for relational column-oriented databases [18]. The main reasons are the scarcity of research prototypes and the drawbacks of the existing ones.

Two research prototypes of distributed column-store systems are known to the authors — [5], [21]. Both studies use an in-memory DBMS, MonetDB, some of whose parts were rewritten to add distribution-related functionality. This approach cannot be considered “true” distribution because, in general, it restricts the pool of available distributed processing techniques: developers have to take into account the architecture of the underlying centralized DBMS in order to employ it. Unfortunately, the degree of these restrictions is unclear for the aforementioned systems.

Another distributed column-store, the ClickHouse system, is an industrial open-source disk-based system. However, there are two issues with this system. Firstly, it was open-sourced only recently, in 2016, and there are no research papers based on it that are known to the authors. Secondly, it has several serious architectural drawbacks: a very restricted partitioning scheme [22] and issues with distributed joins [23].

At the same time, there are hundreds, if not thousands, of papers on the subject in application to row-stores [16], [14].
Fig. 1. Example of a distributed query plan — distributed query 2.1 from the SSB (each of the n nodes evaluates a chain of Join operators over DataSource(LINEORDER partition), DataSource(DATE), DataSource(SUPPLIER) and DataSource(PART); the server merges the per-node results with UnionAll and applies Aggregate and Sort before returning the answer to the user)
III. POSDB: ARCHITECTURE

PosDB uses the Volcano pull-based model [15], so each query plan is represented as a tree with operators as vertices and data flows as edges. All operators support the “open()–getNext()–close()” interface and can be divided into two groups:
• Operators that produce blocks of positions.
• Operators that produce individual tuples.

PosDB relies on late materialization, so operators of the second type are always deployed at the top of a query tree. They are used to build tuples from position blocks and to perform aggregation. The whole tree below the materialization point consists of operators which return blocks.

Each position block stores several position vectors of equal length, one per table. This structure is essentially a join index [24], [25], which we use to process a chain of join operators. Currently we have the following operators that produce join indexes:
• DataSource, FilteredDataSource, operators for creating initial position streams. The former generates a list of contiguous positions without incurring any disk I/O, while the latter conducts a full column scan and produces a stream of positions whose corresponding values satisfy a given predicate. These operators are the only possible leaves of a query tree in our system;
• GeneralPosAnd, SortedPosAnd, binary operators for the intersection of two position streams related to one table;
• NestedLoopJoin, MergeJoin, HashJoin, binary operators which implement the join operation in different ways;
• UnionAll — an n-ary operator that processes its subtrees in separate threads and merges their output into a single stream in an arbitrary order;
• ReceivePos — an ancillary unary operator that sends a query plan subtree to a remote node, receives join indexes from it and returns them to its ancestor;
• Asynchronizer — an ancillary unary operator that processes its child operator in a separate thread and stores the results in an internal fixed-size buffer;
and the following operators that produce tuples:
• Select, for tuple reconstruction;
• Aggregate, for simple aggregation without grouping;
• SGAggregate, HashSGAggregate, for complex aggregation with grouping and sorting;
• SparseTupleSorter, for tuple sorting.

As can be seen, query distribution is maintained at the operator level using two ancillary operators: ReceivePos and UnionAll. It should be emphasized that a multithreaded implementation of UnionAll is essential here, because sequential execution would incur severe waiting penalties, completely negating the benefits of a distributed environment.
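To make the operator interface concrete, here is a minimal, self-contained C++ sketch of a block-producing Volcano-style operator. It is an illustration under assumptions: the paper does not show PosDB's actual code, so PositionBlock, PosOperator and the DataSource internals below are hypothetical names, not the real API.

// A minimal sketch of the "open()-getNext()-close()" interface over blocks
// of positions. All names are illustrative, not PosDB's actual classes.
#include <algorithm>
#include <cstdint>
#include <optional>
#include <vector>

// A position block: several position vectors of equal length, one per table.
// Taken together, the vectors form a join index (cf. [24], [25]).
struct PositionBlock {
    std::vector<std::vector<std::uint64_t>> positions;  // positions[t][i] = row id in table t
};

// Interface of every block-producing operator (DataSource, joins, UnionAll, ...).
class PosOperator {
public:
    virtual ~PosOperator() = default;
    virtual void open() = 0;
    // Returns the next block, or std::nullopt when the stream is exhausted.
    virtual std::optional<PositionBlock> getNext() = 0;
    virtual void close() = 0;
};

// DataSource: emits contiguous positions of one table without any disk I/O.
class DataSource : public PosOperator {
public:
    DataSource(std::uint64_t rowCount, std::uint64_t blockSize)
        : rowCount_(rowCount), blockSize_(blockSize) {}
    void open() override { next_ = 0; }
    std::optional<PositionBlock> getNext() override {
        if (next_ >= rowCount_) return std::nullopt;
        const std::uint64_t end = std::min(next_ + blockSize_, rowCount_);
        PositionBlock block;
        block.positions.emplace_back();
        for (std::uint64_t p = next_; p < end; ++p) block.positions[0].push_back(p);
        next_ = end;
        return block;
    }
    void close() override {}
private:
    std::uint64_t rowCount_, blockSize_, next_ = 0;
};

int main() {
    // A parent operator (a join, or ReceivePos on a remote node) would pull
    // blocks in exactly this loop; UnionAll runs one such loop per child in
    // a separate thread and merges the resulting streams.
    DataSource src(/*rowCount=*/10, /*blockSize=*/4);
    src.open();
    while (auto block = src.getNext()) { /* consume block->positions here */ }
    src.close();
}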
Figure 1 presents an example of a distributed query plan for query 2.1 from the SSB, which is as follows:

select sum(lo_revenue), d_year, p_brand1
from lineorder, date, part, supplier
where lo_orderdate = d_datekey
and lo_partkey = p_partkey
and lo_suppkey = s_suppkey
and p_category = 'MFGR#12'
and s_region = 'AMERICA'
group by d_year, p_brand1
order by d_year, p_brand1;

Also, there is a notion of data readers in our system. A data reader is a special entity used for reading attribute values corresponding to a position stream. Currently, we support the following hierarchy of readers:
• ContinuousReader and NetworkReader, basic readers for accessing a local or a remote partition, respectively;
• PartitionedDataReader, an advanced reader for accessing a whole column whose partitions are stored on one or several machines. For each partition it creates a corresponding basic reader to perform a local or remote full scan. Then, using information from the catalog, a PartitionedDataReader automatically determines which reader to use for a position in a join index;
• SyncReader, an advanced reader responsible for synchronous reading of multiple attributes. This reader maintains a PartitionedDataReader for each column.

Initially, a query plan does not contain readers. Each operator creates readers on demand and feeds them positions to receive the necessary data. Operators that materialize tuples use SyncReader; others usually employ PartitionedDataReader. Using these advanced readers allows operators to be unaware of data distribution.
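The dispatch performed by PartitionedDataReader can be sketched as follows; again, the class shapes and the catalog representation are assumptions for illustration, not the actual PosDB code.

// A sketch of the reader hierarchy: a basic reader serves one partition,
// while PartitionedDataReader routes each global position to the basic
// reader of the partition that owns it. Names and signatures are illustrative.
#include <cstdint>
#include <memory>
#include <stdexcept>
#include <utility>
#include <vector>

class BasicReader {  // implemented by ContinuousReader (local) and NetworkReader (remote)
public:
    virtual ~BasicReader() = default;
    virtual double read(std::uint64_t localPosition) = 0;  // one attribute's value
};

class ContinuousReader : public BasicReader {  // local partition, held in memory here
public:
    explicit ContinuousReader(std::vector<double> chunk) : chunk_(std::move(chunk)) {}
    double read(std::uint64_t localPosition) override { return chunk_.at(localPosition); }
private:
    std::vector<double> chunk_;
};

struct PartitionInfo {                    // one catalog entry per partition
    std::uint64_t firstPosition;          // global position of the partition's first row
    std::unique_ptr<BasicReader> reader;  // ContinuousReader or NetworkReader
};

class PartitionedDataReader {
public:
    explicit PartitionedDataReader(std::vector<PartitionInfo> parts)
        : parts_(std::move(parts)) {}  // parts_ must be sorted by firstPosition
    double read(std::uint64_t globalPosition) {
        // Walk partitions from the last one: the owner is the partition with
        // the largest firstPosition not exceeding the requested position.
        for (auto it = parts_.rbegin(); it != parts_.rend(); ++it)
            if (globalPosition >= it->firstPosition)
                return it->reader->read(globalPosition - it->firstPosition);
        throw std::out_of_range("position precedes the first partition");
    }
private:
    std::vector<PartitionInfo> parts_;
};

A SyncReader would then simply hold one such PartitionedDataReader per attribute and advance them in lockstep.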
Fig. 2. Data distribution scheme in PosDB (the master node coordinates worker nodes 1..k; each worker stores one horizontal LINEORDER partition together with replicas of the PART, SUPPLIER, DATE and CUSTOMER dimension tables)

IV. GENERAL CONSIDERATIONS AND USED METRICS

Distributing a DBMS has two important goals [16]: improving performance and ensuring easy system expansion. These goals are usually evaluated using two metrics [26]: scaleup and speedup.

Speedup reflects the dependency of system performance on the number of processing nodes under a fixed workload. Thus, it shows the performance improvement that can be achieved by using additional equipment, without redesigning the system.

Linear speedup is highly desirable but can rarely be achieved in practice. Superlinear speedup points to unaccounted-for resources in the distributed system or to a poor baseline algorithm. So, a good system should approximate the linear dependency as closely as it can.
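In symbols (our notation): let T(k) denote the execution time of the fixed workload on k processing nodes; then

\[ \mathrm{speedup}(k) = \frac{T(1)}{T(k)}, \qquad \text{linear speedup:}\ \mathrm{speedup}(k) = k. \]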
Scaleup is a similar metric that reflects how easy it is to sustain the achieved performance level under an increased workload. The number of processing nodes and the size of the workload are increased by the same factor. An ideal system achieves linear scaleup but, again, this is rarely achievable in practice.

The workload can be increased either by increasing the number of queries or the amount of data. The former is transactional scaleup and the latter is data scaleup. We do not investigate transactional scaleup, because PosDB is oriented towards OLAP processing — a kind of processing that implies long-running queries. Taniar et al. [26] argue that transactional scaleup is relevant in transaction processing systems where the transactions are small queries. On the other hand, data scaleup is very important for our system because the amount of data in OLAP environments can grow considerably.
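Analogously, writing T(k, D) for the execution time on k nodes over a database of size D, data scaleup compares runs in which the data and the nodes grow by the same factor:

\[ \mathrm{scaleup}(k) = \frac{T(1, D)}{T(k, k \cdot D)}, \qquad \text{linear scaleup:}\ \mathrm{scaleup}(k) = 1. \]

This is exactly the ratio measured in Experiment 3 below, where D corresponds to scale factor 10.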
V. EXPERIMENTS

In order to conduct the experiments, we selected the following setup of data and query distribution. We designate one processing node as a server and assign it several worker nodes. The server only processes user requests, while the data is stored on the worker nodes (see Figure 2). Each worker node stores a horizontal partition of the fact table (LINEORDER) along with replicas of all other (dimension) tables. Dimension tables are always tiny compared to the fact table, so their replication incurs almost no storage overhead.

Figure 1 shows the distributed plan for query 2.1 from the workload. It illustrates the general approach which we follow in this paper for each query. The server is responsible for receiving data from the worker nodes and for aggregation. Note that all queries in this benchmark can be distributed in such a manner that no inter-node (worker to worker) communication is required.

A. Description of Experiments, Hardware and Software Setup

We consider three different experiments, all using the SSB workload:
1) The dependency of PosDB performance on the SSB scale factor in the local (one node) case.
2) The speedup of PosDB, i.e. the dependency of the performance for a fixed workload (scale factor 50) on the number of nodes. Each configuration includes the server and 1, 2, 4, 6 or 8 worker nodes.
3) The scaleup of PosDB, i.e. the performance on k = 1, 2, 4, 6, 8 nodes for a scale factor 10·k workload.

These experiments are conducted on a cluster of ten machines connected by a 1 Gbit local network. Each machine has the following characteristics: Intel(R) Core(TM) i5-2310 CPU @ 2.90GHz (4 cores), 4 GB RAM. The software used is Ubuntu Linux 16.04.1 (64 bit), GCC 5.4.0 and JSON for Modern C++ 2.1.0.

B. Experiment 1

In this experiment we study PosDB behavior in the local case under the full SSB workload. We have chosen six different scale factors: 1, 10, 30, 50, 100, 200. The results of this experiment are presented in Figure 3. It should be emphasized that a logarithmic y-axis is used there.
Fig. 3. Dependency of query performance on the scale factor in the local case (execution time in ms for queries Q1.1–Q4.3 at SF = 1, 10, 30, 50, 100, 200)
After careful analysis, several interesting conclusions can be drawn:
• Although query 1.3 has a higher selectivity, its execution time is higher than that of the other queries of its flight. Perhaps this is due to a more expensive aggregation.
• Execution time of the queries from the second flight decreases with the increase in selectivity, as is to be expected.
• Query flight 3 reveals two interesting points. Query 3.1 is much more expensive than the others, because its first join operator returns a significantly higher number of records, thus loading the rest of the query tree. With high scale factors, query 3.3 becomes much more expensive than the others. We suppose that this is due to intensive disk usage.
• Queries 4.1 and 4.2 behave in a very similar way; however, the last join in query 4.1 produces more results. This is the reason for the slightly longer run times of the whole query. Also, there is an anomaly in query 4.3 which still has to be explained. We plan to explore it in our further studies.

The total time of the whole workload is presented in Figure 4. In order to obtain this graph we summed up the run times of all queries described in the SSB. Essentially, this graph is just another representation of the information presented in Figure 3.

Fig. 4. Dependency of the total SSB execution time (in seconds) on the scale factor

C. Experiment 2

You can see the results of the second experiment in Figure 5. Starting with 1, the number of nodes is increased by 2 with each step. The contents of the LINEORDER table (about 11 GB) are evenly partitioned and distributed across them. Other tables are fully replicated. The red line shows how much faster the queries are executed when the number of nodes increases. The green line represents the “ideal” case, where the speedup grows linearly.
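For orientation, the per-worker share of the fact table is easy to compute from the stated total of about 11 GB (assuming perfectly even partitioning):

\[ \text{partition size}(k) = \frac{11\ \text{GB}}{k} \approx 11,\ 5.5,\ 2.75,\ 1.8,\ 1.4\ \text{GB} \quad \text{for}\ k = 1, 2, 4, 6, 8. \]

Note that from four workers onwards a partition is smaller than the 4 GB of RAM per machine, so caching effects may start to influence the measured speedup.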
Fig. 5. Dependency of speedup on the number of servers in PosDB (PosDB vs. linear speedup, 1–8 worker nodes)
As you can see, PosDB's performance increases when new nodes are added, although not linearly; this is because our system is still in its infancy. We believe that such high overhead can be attributed to the lack of a proper buffer manager, which means that the same data may be transferred over the network many times.

D. Experiment 3

In this experiment we measured PosDB data scaleup under scale factors 10, 20, 40, 60, 80 on 1, 2, 4, 6, 8 nodes. Data and queries for each test configuration are distributed similarly to experiment 2: LINEORDER is partitioned between the nodes, other tables are fully replicated. Then, the parts of the query plan that lie below the aggregation (or tuple construction) are sent to different nodes, each with a DataSource operator for the corresponding LINEORDER partition. See Figures 2 and 1 for more details.

We consider scaleup as the ratio of the “server + 1 machine” execution time to the “server + k machines” execution time and present the results in Figure 6. To estimate the PosDB scaleup, we also plotted the “linear scaleup” and “no scaleup” cases. The former is a situation when scaleup is constant (the ideal value) during all experiments. In the “no scaleup” case we assume that the amount of data grows linearly but the computing power remains constant, so scaleup is 1/(number of machines).

Fig. 6. Dependency of data scaleup on the number of servers in PosDB (PosDB vs. the linear scaleup and no scaleup baselines)

We can see that PosDB scaleup stays within the [0.5, 0.75] range, slowly decreasing as the number of servers grows. Thus, comparing to the “no scaleup” case, we can conclude that our system offers good scaleup.

VI. CONCLUSION

In this paper we presented an evaluation of PosDB, our distributed column-store query engine. We used the Star Schema Benchmark — a standard benchmark used for evaluation of OLAP systems. We studied several performance metrics, such as speedup and scaleup. In our experiments we were able to reach scale factor 200 on a single machine; our system demonstrated sublinear speedup and good data scaleup. The evaluation also allowed us to discover some anomalies and bottlenecks in our system. They are the subject of our future research.

REFERENCES

[1] M. Stonebraker, D. J. Abadi, A. Batkin, X. Chen, M. Cherniack, M. Ferreira, E. Lau, A. Lin, S. Madden, E. O'Neil, P. O'Neil, A. Rasin, N. Tran, and S. Zdonik, “C-store: A column-oriented DBMS,” in Proceedings of the 31st International Conference on Very Large Data Bases, ser. VLDB '05. VLDB Endowment, 2005, pp. 553–564. [Online]. Available: http://dl.acm.org/citation.cfm?id=1083592.1083658
[2] S. Idreos, F. Groffen, N. Nes, S. Manegold, K. S. Mullender, and M. L. Kersten, “MonetDB: Two decades of research in column-oriented database architectures,” IEEE Data Eng. Bull., vol. 35, no. 1, pp. 40–45, 2012. [Online]. Available: http://sites.computer.org/debull/A12mar/monetdb.pdf
[3] “Google Supersonic library,” https://code.google.com/archive/p/supersonic/, 2017, accessed: 12/02/2017.
[4] J. Arulraj, A. Pavlo, and P. Menon, “Bridging the archipelago between row-stores and column-stores for hybrid workloads,” in Proceedings of the 2016 International Conference on Management of Data, ser. SIGMOD '16, 2016, pp. 583–598. [Online]. Available: http://db.cs.cmu.edu/papers/2016/arulraj-sigmod2016.pdf
[5] Y. Zhang, Y. Xiao, Z. Wang, X. Ji, Y. Huang, and S. Wang, ScaMMDB: Facing Challenge of Mass Data Processing with MMDB. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009, pp. 1–12. [Online]. Available: http://dx.doi.org/10.1007/978-3-642-03996-6_1
[6] A. Lamb, M. Fuller, R. Varadarajan, N. Tran, B. Vandiver, L. Doshi, and C. Bear, “The Vertica analytic database: C-store 7 years later,” Proc. VLDB Endow., vol. 5, no. 12, pp. 1790–1801, Aug. 2012. [Online]. Available: http://dx.doi.org/10.14778/2367502.2367518
[7] M. Zukowski and P. Boncz, “From X100 to Vectorwise: Opportunities, challenges and things most researchers do not think about,” in Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data, ser. SIGMOD '12. New York, NY, USA: ACM, 2012, pp. 861–862. [Online]. Available: http://doi.acm.org/10.1145/2213836.2213967
[8] D. Abadi, P. Boncz, and S. Harizopoulos, The Design and Implementation of Modern Column-Oriented Database Systems. Hanover, MA, USA: Now Publishers Inc., 2013.
[9] D. Abadi, S. Madden, and M. Ferreira, “Integrating compression and execution in column-oriented database systems,” in Proceedings of the 2006 ACM SIGMOD International Conference on Management of Data, ser. SIGMOD '06. New York, NY, USA: ACM, 2006, pp. 671–682. [Online]. Available: http://doi.acm.org/10.1145/1142473.1142548
[10] V. Raman, G. Attaluri, R. Barber, N. Chainani, D. Kalmuk,
V. KulandaiSamy, J. Leenstra, S. Lightstone, S. Liu, G. M. Lohman,
T. Malkemus, R. Mueller, I. Pandis, B. Schiefer, D. Sharpe,
R. Sidle, A. Storm, and L. Zhang, “Db2 with blu acceleration:
So much more than just a column store,” Proc. VLDB Endow.,
vol. 6, no. 11, pp. 1080–1091, Aug. 2013. [Online]. Available:
http://dx.doi.org/10.14778/2536222.2536233
[11] D. J. Abadi, D. S. Myers, D. J. DeWitt, and S. Madden, “Materialization
strategies in a column-oriented DBMS,” in Proceedings of the 23rd
International Conference on Data Engineering, ICDE 2007, The
Marmara Hotel, Istanbul, Turkey, April 15-20, 2007, 2007, pp. 466–
475. [Online]. Available: http://dx.doi.org/10.1109/ICDE.2007.367892
[12] L. Shrinivas, S. Bodagala, R. Varadarajan, A. Cary, V. Bharathan, and
C. Bear, “Materialization strategies in the vertica analytic database:
Lessons learned,” in 2013 IEEE 29th International Conference on Data
Engineering (ICDE), April 2013, pp. 1196–1207.
[13] M. G. Ivanova, M. L. Kersten, N. J. Nes, and R. A. Gonçalves,
“An architecture for recycling intermediates in a column-store,” in
Proceedings of the 2009 ACM SIGMOD International Conference on
Management of Data, ser. SIGMOD ’09. New York, NY, USA:
ACM, 2009, pp. 309–320. [Online]. Available: http://doi.acm.org/10.
1145/1559845.1559879
[14] D. Kossmann, “The state of the art in distributed query processing,”
ACM Comput. Surv., vol. 32, no. 4, pp. 422–469, Dec. 2000. [Online].
Available: http://doi.acm.org/10.1145/371578.371598
[15] G. Graefe, “Query evaluation techniques for large databases,” ACM
Comput. Surv., vol. 25, no. 2, pp. 73–169, Jun. 1993. [Online].
Available: http://doi.acm.org/10.1145/152610.152611
[16] M. T. Ozsu, Principles of Distributed Database Systems, 3rd ed. Upper
Saddle River, NJ, USA: Prentice Hall Press, 2007.
[17] G. Chernishev, New Trends in Databases and Information Systems:
ADBIS 2015 Short Papers and Workshops, BigDap, DCSA, GID,
MEBIS, OAIS, SW4CH, WISARD, Poitiers, France, September 8-
11, 2015. Proceedings. Cham: Springer International Publishing,
2015, ch. Towards Self-management in a Distributed Column-Store
System, pp. 97–107. [Online]. Available: http://dx.doi.org/10.1007/
978-3-319-23201-0_12
[18] ——, “The design of an adaptive column-store system,” Journal
of Big Data, vol. 4, no. 1, p. 21, 2017. [Online]. Available:
http://dx.doi.org/10.1186/s40537-017-0069-4
[19] G. Chernishev, V. Grigorev, V. Galaktionov, E. Klyuchikov, and
K. Smirnov, PosDB: a Distributed Column-Store Engine (paper sub-
mitted). Berlin, Heidelberg: Springer Berlin Heidelberg, 2017.
[20] P. E. O'Neil, E. J. O'Neil, and X. Chen, “The Star Schema Benchmark (SSB),” http://www.cs.umb.edu/~poneil/StarSchemaB.PDF, 2009, accessed: 20/07/2012.
[21] Y. Liu, F. Cao, M. Mortazavi, M. Chen, N. Yan, C. Ku, A. Adnaik,
S. Morgan, G. Shi, Y. Wang, and F. Fang, DCODE: A Distributed
Column-Oriented Database Engine for Big Data Analytics. Cham:
Springer International Publishing, 2015, pp. 289–299. [Online].
Available: http://dx.doi.org/10.1007/978-3-319-24315-3_30
[22] “Yandex ClickHouse migration. A transcript of a talk at Highload++ 2016, http://www.highload.ru/2016/abstracts/2297.html,” https://habrahabr.ru/post/322620/, 2017, accessed: 30/04/2017.
[23] “A comparison of in-memory databases,” http://www.exasol.com/site/assets/files/3147/a_comparison_of_in-memory_databases.pdf, 2017, accessed: 30/04/2017.
[24] Z. Li and K. A. Ross, “Fast joins using join indices,” The VLDB
Journal, vol. 8, no. 1, pp. 1–24, Apr. 1999. [Online]. Available:
http://dx.doi.org/10.1007/s007780050071
[25] D. Tsirogiannis, S. Harizopoulos, M. A. Shah, J. L. Wiener,
and G. Graefe, “Query processing techniques for solid state
drives,” in Proceedings of the 2009 ACM SIGMOD International
Conference on Management of Data, ser. SIGMOD ’09. New
York, NY, USA: ACM, 2009, pp. 59–72. [Online]. Available:
http://doi.acm.org/10.1145/1559845.1559854
[26] D. Taniar, C. H. C. Leung, W. Rahayu, and S. Goel, High-Performance
Parallel Database Processing and Grid Databases, A. Zomaya, Ed.
Wiley Series on Parallel and Distributed Computing, 2008.