Practice: Accelerate TPC-H through PolarDB HTAP

In this section, we will accelerate TPC-H queries with PolarDB HTAP. The practice is based on local storage.

Prepare

Deploy PolarDB PG on local storage

Before starting this section, deploy PolarDB for PostgreSQL HTAP by following the document Deploy PolarDB PG on Local Storage. The cluster consists of one primary node (running on port 5432) and two read-only nodes (running on ports 5433 and 5434).

We can verify the PolarDB HTAP deployment with the following command:

  ps xf

Three database processes should be visible: one primary node (port 5432) and two read-only nodes (ports 5433 and 5434).

HTAP processes

Generate TPC-H Dataset

TPC-H is a benchmark dataset dedicated to OLAP. It contains 22 SQL queries. We will use tpch-dbgen to generate a TPC-H dataset of arbitrary size.

  # download tpch-dbgen
  git clone https://github.com/qiuyuhang/tpch-dbgen.git
  # compile
  cd tpch-dbgen
  make

Next, let’s generate some simulation data.

TIP

It is recommended to follow this command and start with 10 GB of data. After working through this case, you can also try 100 GB of data by replacing 10 with 100 in the command. Be careful not to exceed the local storage capacity.

  # Generate simulation data
  ./dbgen -s 10
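As a rough guide, the scale factor passed to -s determines the row counts of the generated tables; the per-unit figures in this sketch are the base cardinalities from the TPC-H specification, not measured dbgen output:

```shell
# Approximate TPC-H row counts as a function of the dbgen scale factor (-s).
# Base cardinalities below follow the TPC-H specification; actual dbgen
# output may differ slightly (e.g. lineitem is ~6,001,215 rows at SF=1).
sf=10
lineitem_rows=$((6000000 * sf))
orders_rows=$((1500000 * sf))
customer_rows=$((150000 * sf))
echo "SF=$sf: lineitem ~$lineitem_rows rows, orders ~$orders_rows rows, customer ~$customer_rows rows"
```

At SF = 10 this predicts roughly 60 million lineitem rows, which matches the row estimates that appear in the execution plans later in this practice.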

Let me briefly explain the files inside tpch-dbgen. The .tbl files contain the generated table data. The 22 TPC-H queries are in queries/. The q*.explain.sql files only print the execution plan without actually running the query.

Load Data

Load TPC-H data with psql

TIP

Note that the following commands should always be executed in the tpch-dbgen/ directory.

  # Create the tables
  psql -f dss.ddl
  # Enter the psql CLI
  psql
  # Load data
  \copy nation from 'nation.tbl' DELIMITER '|';
  \copy region from 'region.tbl' DELIMITER '|';
  \copy supplier from 'supplier.tbl' DELIMITER '|';
  \copy part from 'part.tbl' DELIMITER '|';
  \copy partsupp from 'partsupp.tbl' DELIMITER '|';
  \copy customer from 'customer.tbl' DELIMITER '|';
  \copy orders from 'orders.tbl' DELIMITER '|';
  \copy lineitem from 'lineitem.tbl' DELIMITER '|';
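Since the eight \copy commands all follow one pattern, a small shell sketch (assuming the .tbl files sit in the current directory, as after running dbgen above) can generate them for piping into psql instead of typing them one by one:

```shell
# Generate the \copy commands for all eight TPC-H tables.
# Assumes dbgen's .tbl files are in the current working directory.
tables="nation region supplier part partsupp customer orders lineitem"
copy_cmds=$(for t in $tables; do
  # printf %s avoids shells that interpret backslash escapes in echo
  printf '%s\n' "\\copy $t from '$t.tbl' DELIMITER '|';"
done)
printf '%s\n' "$copy_cmds"
```

The generated statements can then be piped directly into psql.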

After loading data, execute the following commands to set the maximum parallelism for the created tables.

  # Set maximum parallelism for tables that require PX queries (if not set, no PX queries will be executed)
  alter table nation set (px_workers = 100);
  alter table region set (px_workers = 100);
  alter table supplier set (px_workers = 100);
  alter table part set (px_workers = 100);
  alter table partsupp set (px_workers = 100);
  alter table customer set (px_workers = 100);
  alter table orders set (px_workers = 100);
  alter table lineitem set (px_workers = 100);
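The same pattern applies here; a sketch that emits all eight ALTER TABLE statements for piping into psql:

```shell
# Generate the px_workers settings for all eight TPC-H tables.
tables="nation region supplier part partsupp customer orders lineitem"
alter_cmds=$(for t in $tables; do
  printf '%s\n' "alter table $t set (px_workers = 100);"
done)
printf '%s\n' "$alter_cmds"
```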

Execute Parallel Queries on a Single Node

After loading the simulated data, we first run parallel queries on a single node to observe the query speed.

  1. After entering psql, execute the following command to turn on the timer.

    \timing
  2. Set the maximum number of workers on a single node through max_parallel_workers_per_gather:

    set max_parallel_workers_per_gather = 2; -- set to 2
  3. Execute the following command to view the execution plan.

    \i queries/q18.explain.sql

    You can see the single-machine parallel plan with 2 workers, as shown below:

    QUERY PLAN
    -----------------------------------------------------------------------------------------------------------------------------------------------------------------
     Limit  (cost=9364138.51..9364141.51 rows=100 width=71)
       ->  GroupAggregate  (cost=9364138.51..9380736.94 rows=553281 width=71)
             Group Key: orders.o_totalprice, orders.o_orderdate, customer.c_name, customer.c_custkey, orders.o_orderkey
             ->  Sort  (cost=9364138.51..9365521.71 rows=553281 width=44)
                   Sort Key: orders.o_totalprice DESC, orders.o_orderdate, customer.c_name, customer.c_custkey, orders.o_orderkey
                   ->  Hash Join  (cost=6752588.87..9294341.50 rows=553281 width=44)
                         Hash Cond: (lineitem.l_orderkey = orders.o_orderkey)
                         ->  Seq Scan on lineitem  (cost=0.00..1724338.96 rows=59979696 width=9)
                         ->  Hash  (cost=6749642.22..6749642.22 rows=138372 width=43)
                               ->  Hash Join  (cost=6110531.76..6749642.22 rows=138372 width=43)
                                     Hash Cond: (orders.o_custkey = customer.c_custkey)
                                     ->  Hash Join  (cost=6032162.96..6658785.84 rows=138372 width=24)
                                           Hash Cond: (orders.o_orderkey = lineitem_1.l_orderkey)
                                           ->  Seq Scan on orders  (cost=0.00..410917.44 rows=15000544 width=20)
                                           ->  Hash  (cost=6029892.31..6029892.31 rows=138372 width=4)
                                                 ->  Finalize GroupAggregate  (cost=5727599.96..6028508.59 rows=138372 width=4)
                                                       Group Key: lineitem_1.l_orderkey
                                                       Filter: (sum(lineitem_1.l_quantity) > '313'::numeric)
                                                       ->  Gather Merge  (cost=5727599.96..6016055.08 rows=830234 width=36)
                                                             Workers Planned: 2
                                                             ->  Partial GroupAggregate  (cost=5726599.94..5919225.45 rows=415117 width=36)
                                                                   Group Key: lineitem_1.l_orderkey
                                                                   ->  Sort  (cost=5726599.94..5789078.79 rows=24991540 width=9)
                                                                         Sort Key: lineitem_1.l_orderkey
                                                                         ->  Parallel Seq Scan on lineitem lineitem_1  (cost=0.00..1374457.40 rows=24991540 width=9)
                                     ->  Hash  (cost=50827.80..50827.80 rows=1500080 width=23)
                                           ->  Seq Scan on customer  (cost=0.00..50827.80 rows=1500080 width=23)
    (27 rows)
  4. Execute the following SQL.

    \i queries/q18.sql

You can see part of the results (press q to skip the rest) along with the elapsed time: about 1 minute 23 seconds.

The parallel plan result for Q18

TIP

Note that the following error message may appear when there are too many workers on a single machine: pq: could not resize shared memory segment "/PostgreSQL.2058389254" to 12615680 bytes: No space left on device.

The reason is that Docker's default shared memory size is not large enough. You can refer to this link to adjust the relevant parameter and restart Docker to fix it.

Execute parallel queries with PolarDB HTAP

After experiencing single-node parallel query, let's turn on distributed parallel query (PX).

  1. After entering psql, execute the following command to turn on timing (skip this if it is already on).

    \timing
  2. Execute the following command to enable distributed parallel query (PX).

    set polar_enable_px=on;
  3. Set the number of PX workers per node to 1.

    set polar_px_dop_per_node=1;
  4. Execute the following command to view the execution plan.

    \i queries/q18.explain.sql

    This cluster has two read-only nodes, so with polar_px_dop_per_node = 1 the total number of PX workers is 2 × 1 = 2 when PX is turned on.

    QUERY PLAN
    -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
     Limit  (cost=0.00..93628.34 rows=100 width=47)
       ->  PX Coordinator 2:1  (slice1; segments: 2)  (cost=0.00..93628.33 rows=100 width=47)
             Merge Key: orders.o_totalprice, orders.o_orderdate
             ->  Limit  (cost=0.00..93628.31 rows=50 width=47)
                   ->  GroupAggregate  (cost=0.00..93628.31 rows=11995940 width=47)
                         Group Key: orders.o_totalprice, orders.o_orderdate, customer.c_name, customer.c_custkey, orders.o_orderkey
                         ->  Sort  (cost=0.00..92784.19 rows=11995940 width=44)
                               Sort Key: orders.o_totalprice DESC, orders.o_orderdate, customer.c_name, customer.c_custkey, orders.o_orderkey
                               ->  Hash Join  (cost=0.00..22406.63 rows=11995940 width=44)
                                     Hash Cond: (lineitem.l_orderkey = orders.o_orderkey)
                                     ->  PX Hash 2:2  (slice2; segments: 2)  (cost=0.00..4301.49 rows=29989848 width=9)
                                           Hash Key: lineitem.l_orderkey
                                           ->  Partial Seq Scan on lineitem  (cost=0.00..2954.65 rows=29989848 width=9)
                                     ->  Hash  (cost=10799.35..10799.35 rows=83024 width=39)
                                           ->  PX Hash 2:2  (slice3; segments: 2)  (cost=0.00..10799.35 rows=83024 width=39)
                                                 Hash Key: orders.o_orderkey
                                                 ->  Hash Join  (cost=0.00..10789.21 rows=83024 width=39)
                                                       Hash Cond: (customer.c_custkey = orders.o_custkey)
                                                       ->  PX Hash 2:2  (slice4; segments: 2)  (cost=0.00..597.52 rows=750040 width=23)
                                                             Hash Key: customer.c_custkey
                                                             ->  Partial Seq Scan on customer  (cost=0.00..511.44 rows=750040 width=23)
                                                       ->  Hash  (cost=9993.50..9993.50 rows=83024 width=20)
                                                             ->  PX Hash 2:2  (slice5; segments: 2)  (cost=0.00..9993.50 rows=83024 width=20)
                                                                   Hash Key: orders.o_custkey
                                                                   ->  Hash Semi Join  (cost=0.00..9988.30 rows=83024 width=20)
                                                                         Hash Cond: (orders.o_orderkey = lineitem_1.l_orderkey)
                                                                         ->  Partial Seq Scan on orders  (cost=0.00..1020.90 rows=7500272 width=20)
                                                                         ->  Hash  (cost=7256.00..7256.00 rows=166047 width=4)
                                                                               ->  PX Broadcast 2:2  (slice6; segments: 2)  (cost=0.00..7256.00 rows=166047 width=4)
                                                                                     ->  Result  (cost=0.00..7238.62 rows=83024 width=4)
                                                                                           Filter: ((sum(lineitem_1.l_quantity)) > '313'::numeric)
                                                                                           ->  Finalize HashAggregate  (cost=0.00..7231.79 rows=207559 width=12)
                                                                                                 Group Key: lineitem_1.l_orderkey
                                                                                                 ->  PX Hash 2:2  (slice7; segments: 2)  (cost=0.00..7205.20 rows=207559 width=12)
                                                                                                       Hash Key: lineitem_1.l_orderkey
                                                                                                       ->  Partial HashAggregate  (cost=0.00..7197.41 rows=207559 width=12)
                                                                                                             Group Key: lineitem_1.l_orderkey
                                                                                                             ->  Partial Seq Scan on lineitem lineitem_1  (cost=0.00..2954.65 rows=29989848 width=9)
     Optimizer: PolarDB PX Optimizer
    (39 rows)
  5. Execute the following SQL.

    \i queries/q18.sql

You can see part of the results (press q to skip the rest) along with the elapsed time: about 1 minute. That is a 27.71% reduction in runtime compared with single-node parallelism. If you are interested, you can further increase the parallelism or the data size to observe the improvement.
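The quoted reduction can be sanity-checked from the two timings (assuming the 83 s single-node and 60 s PX runtimes observed above):

```shell
# Percentage reduction in runtime: (single - px) / single * 100.
single_s=83   # single-node parallel runtime, seconds (assumed from the run above)
px_s=60       # PX runtime, seconds (assumed from the run above)
reduction=$(awk -v a="$single_s" -v b="$px_s" 'BEGIN { printf "%.2f%%", (a - b) / a * 100 }')
echo "runtime reduction: $reduction"
```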

The PX results for q18

PX parallel queries read a globally consistent view of the data, so the results are consistent and data correctness is not a concern.

We can manually set the number of PX workers per node:

  set polar_px_dop_per_node = 1;
  \i queries/q18.sql
  set polar_px_dop_per_node = 2;
  \i queries/q18.sql
  set polar_px_dop_per_node = 4;
  \i queries/q18.sql
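Since the total degree of parallelism is the number of read-only nodes times polar_px_dop_per_node, the settings above yield the following worker counts on this cluster's two RO nodes:

```shell
# Total PX workers = RO nodes x polar_px_dop_per_node (2 RO nodes here).
ro_nodes=2
dop_table=$(for dop in 1 2 4; do
  echo "polar_px_dop_per_node=$dop -> total PX workers: $((ro_nodes * dop))"
done)
printf '%s\n' "$dop_table"
```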


Contributors: 何柯文, 棠羽