PostgreSQL 9.6: parallel sequential scan
For a long time, one of the best-known shortcomings of PostgreSQL was the lack of query parallelization. With the release of version 9.6 this will no longer be a problem: a great deal of work has gone into this area, and starting with commit 80558c1 we have parallelization of sequential scans, which is what we will look at in this article.
First, take note: development of this functionality was ongoing and some parameters changed their names between commits. This article was written against a checkout of the code committed on June 17, and some of the features described here will only be present in version 9.6 beta2.
Compared with release 9.5, new settings have been added to the configuration file. Here they are (a short example of adjusting them follows the list):
max_parallel_workers_per_gather: the number of workers that can take part in a sequential scan of a table;
min_parallel_relation_size: the minimum size of a relation above which the planner starts to use additional workers;
parallel_setup_cost: planner parameter estimating the cost of starting a new worker;
parallel_tuple_cost: planner parameter estimating the cost of transferring a tuple from one worker to another;
force_parallel_mode: a parameter useful for testing; it forces parallelism even on queries for which the planner would otherwise behave differently.
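These settings can also be changed per session before running a query. A minimal sketch of doing so from psql (the values below are purely illustrative, not recommendations):

postgres=# SET max_parallel_workers_per_gather TO 4;
SET
postgres=# SET min_parallel_relation_size TO '16MB';
SET
postgres=# SET parallel_setup_cost TO 500;
SET
postgres=# SET parallel_tuple_cost TO 0.05;
SET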
Let's see how additional workers can be used to speed up our queries. Let's create a test table with one INT field and one hundred million records:
postgres=# CREATE TABLE test (i int);
CREATE TABLE
postgres=# INSERT INTO test SELECT generate_series(1,100000000);
INSERT 0 100000000
postgres=# ANALYSE test;
ANALYZE
PostgreSQL has the parameter max_parallel_workers_per_gather, which defaults to 2; with this value, two workers are activated during a parallel sequential scan.
An ordinary sequential scan brings nothing new:
postgres=# EXPLAIN ANALYSE SELECT * FROM test;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
Seq Scan on test (cost=0.00..1442478.32 rows=100000032 width=4) (actual time=0.081..21051.918 rows=100000000 loops=1)
Planning time: 0.077 ms
Execution time: 28055.993 ms
(3 rows)
In fact, a WHERE condition is necessary for parallelization to be used:
postgres=# EXPLAIN ANALYZE SELECT * FROM test WHERE i=1;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
Gather (cost=1000.00..964311.60 rows=1 width=4) (actual time=3.381..9799.942 rows=1 loops=1)
Workers Planned: 2
Workers Launched: 2
   ->  Parallel Seq Scan on test (cost=0.00..963311.50 rows=0 width=4) (actual time=6525.595..9791.066 rows=0 loops=3)
Filter: (i = 1)
Rows Removed by Filter: 33333333
Planning time: 0.130 ms
Execution time: 9804.484 ms
(8 rows)
Let's go back to the previous query and look at the difference in execution time with max_parallel_workers_per_gather set to 0:
postgres=# SET max_parallel_workers_per_gather TO 0;
SET
postgres=# EXPLAIN ANALYZE SELECT * FROM test WHERE i=1;
QUERY PLAN
--------------------------------------------------------------------------------------------------------
Seq Scan on test (cost=0.00..1692478.40 rows=1 width=4) (actual time=0.123..25003.221 rows=1 loops=1)
Filter: (i = 1)
Rows Removed by Filter: 99999999
Planning time: 0.105 ms
Execution time: 25003.263 ms
(5 rows)
2.5 times longer.
The planner does not always consider a parallel sequential scan to be the best option. If the query is not selective enough and there are many tuples to transfer from worker to worker, it may prefer a "classic" sequential scan:
postgres=# SET max_parallel_workers_per_gather TO 2;
SET
postgres=# EXPLAIN ANALYZE SELECT * FROM test WHERE i<90000000;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
Seq Scan on test (cost=0.00..1692478.40 rows=90116088 width=4) (actual time=0.073..31410.276 rows=89999999 loops=1)
Filter: (i < 90000000)
Planning time: 0.133 ms
Execution time: 37939.401 ms
(5 rows)
In fact, if we try to force the planner into a parallel sequential scan, we get a worse result:
postgres=# SET parallel_tuple_cost TO 0;
SET
postgres=# EXPLAIN ANALYZE SELECT * FROM test WHERE i<90000000;
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------
Gather (cost=1000.00..964311.50 rows=90116088 width=4) (actual time=0.454..75546.078 rows=89999999 loops=1)
Workers Planned: 2
Workers Launched: 2
   ->  Parallel Seq Scan on test (cost=0.00..1338795.20 rows=37548370 width=4) (actual time=0.088..20294.670 rows=30000000 loops=3)
Filter: (i < 90000000)
Rows Removed by Filter: 3333334
Planning time: 0.128 ms
Execution time: 83423.577 ms
(8 rows)
The number of workers can be increased up to max_worker_processes (default: 8). Let's restore parallel_tuple_cost to its default value and see what happens if we increase max_parallel_workers_per_gather to 8:
postgres=# SET parallel_tuple_cost TO DEFAULT ;
SET
postgres=# SET max_parallel_workers_per_gather TO 8;
SET
postgres=# EXPLAIN ANALYZE SELECT * FROM test WHERE i=1;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
Gather (cost=1000.00..651811.50 rows=1 width=4) (actual time=3.684..8248.307 rows=1 loops=1)
Workers Planned: 6
Workers Launched: 6
   ->  Parallel Seq Scan on test (cost=0.00..650811.40 rows=0 width=4) (actual time=7053.761..8231.174 rows=0 loops=7)
Filter: (i = 1)
Rows Removed by Filter: 14285714
Planning time: 0.124 ms
Execution time: 8250.461 ms
(8 rows)
Even though PostgreSQL could use up to 8 workers, it launched only six. This is because Postgres also limits the number of workers based on the size of the table and on min_parallel_relation_size. The number of workers made available by Postgres follows a geometric progression with a common ratio of 3 and min_parallel_relation_size as the scaling factor. Here is an example, given that 8MB is the default value:
Size     | Workers
---------+---------
<8MB     | 0
<24MB    | 1
<72MB    | 2
<216MB   | 3
<648MB   | 4
<1944MB  | 5
<5822MB  | 6
...      | ...
The size of our table is 3458MB, so 6 is the maximum number of available workers.
postgres=# \dt+ test
List of relations
Schema | Name | Type | Owner | Size | Description
--------+------+-------+----------+---------+-------------
public | test | table | postgres | 3458 MB |
(1 row)
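Just to cross-check the rule above, here is a small query (not from the original article) that re-implements the geometric progression, assuming the default min_parallel_relation_size of 8MB; for our table it lands on 6, exactly the number of workers the planner chose:

postgres=# SELECT greatest(0, floor(log(3, pg_relation_size('test') / (8 * 1024 * 1024)::numeric)) + 1)::int AS workers;
 workers
---------
       6
(1 row)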
Finally, here is a brief demonstration of the improvements achieved with this patch. Running our query with an increasing number of workers, we get the following results:
Workers | Execution time
--------+-----------------
0       | 24767.848 ms
1       | 14855.961 ms
2       | 10415.661 ms
3       | 8041.187 ms
4       | 8090.855 ms
5       | 8082.937 ms
6       | 8061.939 ms
You can see that the execution time improves significantly until it settles at about one third of the original value. It is also easy to explain why we see no improvement when going from 3 to 6 workers: the machine the tests were run on has 4 CPUs, so the results are stable after adding 3 workers to the original process.
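For reference, here is a sketch of how such timings can be reproduced, repeating the pattern already used throughout this article once per worker count (the absolute numbers will of course depend on your hardware):

postgres=# SET max_parallel_workers_per_gather TO 0;
SET
postgres=# EXPLAIN ANALYZE SELECT * FROM test WHERE i=1;
...
postgres=# SET max_parallel_workers_per_gather TO 1;
SET
postgres=# EXPLAIN ANALYZE SELECT * FROM test WHERE i=1;
...
postgres=# -- and so on up to 6, reading "Execution time" from each plan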
In conclusion, with PostgreSQL 9.6 a new era of query parallelization has begun, in which the parallel sequential scan is only the first great result. Version 9.6 also introduced parallelization of aggregations, but that is a topic for another article, which will be published in the coming weeks!