Introduction to PostgreSQL BDR
PostgreSQL is not only a stable and reliable DBMS, it is also a dynamic product in which breakthrough ideas appear from release to release. One such technology in its time was streaming replication: a high-performance replication mechanism that makes it very easy and cheap to scale the database for reads. With it you can build a reliable configuration that distributes the read load between nodes. However, as I said, the product keeps evolving, and today we will focus on a new technology: BDR (Bi-Directional Replication).
A few terms for those who are new to the subject:
WAL (Write Ahead Log) — the transaction log on which the built-in streaming replication of Postgres is based; the DBMS writes to it everything that happens to the data in the database.
SR (Streaming Replication) — the common name of the built-in streaming replication, which is based on WAL: everything that is written to WAL is then sent to the slave and replayed there. It comes in two flavours, physical and logical streaming replication.
PLSR (Physical Log Streaming Replication) — physical streaming replication (what is already implemented and working): everything that ends up in WAL is replicated to the slave servers without further analysis, including data/schema changes as well as lower-level things (full page writes, vacuum, hint bit setting).
LLSR (Logical Log Streaming Replication) — logical streaming replication (coming in 9.4). It is also based on WAL, but is more intelligent: only the part of the log that describes schema and data changes is extracted for replication (the low-level things drop out).
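To get a feel for what "logical" extraction from WAL means, 9.4 exposes it directly through the logical decoding SQL functions. A minimal sketch, assuming wal_level = logical, a spare replication slot and some existing table t1 (the slot and table names here are arbitrary):
-- create a logical slot that decodes WAL with the built-in test_decoding plugin
SELECT * FROM pg_create_logical_replication_slot('demo_slot', 'test_decoding');
-- make a change somewhere...
INSERT INTO t1 VALUES (1, 'hello');
-- ...and read it back as a stream of logical changes rather than raw WAL pages
SELECT * FROM pg_logical_slot_get_changes('demo_slot', NULL, NULL);
-- clean up
SELECT pg_drop_replication_slot('demo_slot');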
What is hiding under the term BDR?
BDR (Bi-Directional Replication) is new functionality added to the PostgreSQL core that provides advanced replication capabilities. At the moment it is implemented as a small patch plus a module. It is stated that it will land in core only in PostgreSQL 9.5 (currently 9.3 is stable and 9.4 is at beta1).
In short, BDR allows you to create a geographically distributed asynchronous multi-master configuration (oh yeah, baby) using the built-in logical streaming replication (LLSR).
However, BDR is not a clustering tool, because there are no global lock managers or transaction coordinators (hello, Postgres-XC/XL). Each node is independent of the others, which would be impossible if lock managers were used. Each node contains a local copy of the data identical to the data on the other nodes, and statements are executed only locally (to make the point clearer, compare this with Postgres-XC/Postgres-XL, where all servers work in harness, transactions are managed by a global transaction manager, and application queries go through one or more coordinators that dispatch them to the worker nodes). Each node is internally consistent at any moment, and the server group as a whole is eventually consistent.
What makes BDR unique is that it is unlike either the built-in streaming replication or any of the existing trigger-based solutions (Londiste, Slony, Bucardo).
The most notable difference from streaming replication is that BDR (LLSR) operates on individual databases (per-database replication), whereas classical PLSR replicates the whole instance (per-cluster replication), i.e. all databases inside the instance.
The current limitations and features:
1. All data changes caused by INSERT/DELETE/UPDATE are replicated (TRUNCATE was not yet implemented at the time of writing).
2. Most schema changes (DDL) are replicated successfully. Unsupported DDL statements are detected by the replication engine and rejected with an error (at the time of writing, CREATE TABLE AS ... did not work; see the short example after this list).
3. Actions that are recorded in WAL but are not represented as logical changes are not replicated to the other nodes (full page writes, table vacuuming, etc.). Thus logical streaming replication (LLSR) is relieved of part of the overhead present in physical streaming replication (PLSR) (however, this does not mean that LLSR requires less network bandwidth than PLSR).
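Roughly, the DDL handling looks like this (a sketch only: the table names are made up and the exact error text depends on the BDR version):
-- ordinary DDL is captured and replayed on the other nodes
CREATE TABLE t1 (id integer PRIMARY KEY, payload text);
ALTER TABLE t1 ADD COLUMN note text;
-- an unsupported statement is rejected with an error instead of silently diverging
CREATE TABLE t2 AS SELECT * FROM t1;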
Perhaps that is enough theory; on to a little practice. It is already possible to test Bi-Directional Replication today.
The installation is performed on two virtual machines running CentOS 6.5 minimal. Install the packages needed for building:
# yum install readline-devel zlib-devel yum-utils -y
# yum groupinstall "Development Tools" -y
Switch to the postgres account and install PostgreSQL with BDR support. It is worth noting that the guys from 2ndQuadrant wrote a quick-start installer so that anyone willing to try it does not have to spend much effort on installation and configuration; kudos to them for that.
# su - postgres
$ curl -s "http://git.postgresql.org/gitweb/?p=2ndquadrant_bdr.git;a=blob_plain;f=contrib/bdr/scripts/bdr_quickstart.sh;hb=refs/heads/bdr-next" | bash
Add the directory with the postgres executables to the PATH environment variable and then check psql. Mind you, this export is a one-off, so if you plan to use or play with BDR for a while, add the command to your user's .bashrc (if you use bash, of course).
$ export PATH=$HOME/2ndquadrant_bdr/bdr/bin:$PATH
$ psql --version
psql (PostgreSQL) 9.4beta1_bdr0601
Initialize the data directory on both nodes and start the server right away. There is no need to edit postgresql.conf beforehand; on the first start we will create a test database that will later be replicated.
$ initdb -D data/ -A trust -U postgres
$ pg_ctl -l logfile -D data/ -w start
$ psql -c 'create database staging_db'
The database is created, so we move on to configuring postgresql.conf. First, the upstream master. In the configuration below we indicate that the bdr library must be loaded (shared_preload_libraries), set the level of detail of the WAL to logical (wal_level), define the number of replication slots (max_replication_slots) and the maximum number of processes used for sending WAL (max_wal_senders), and enable tracking of COMMIT timestamps, which is needed for conflict resolution (last-UPDATE-wins). Then at the end of the file we define the BDR configuration: a name for the connection and the settings for connecting to the remote host. Note that the name specified in bdr.connections is arbitrary (mine is the name of the virtual machine); what matters is that the name you choose must appear in the names of the parameters below it.
$ vi data/postgresql.conf
listen_addresses = '*'
shared_preload_libraries = 'bdr'
wal_level = logical
max_wal_senders = 4
max_replication_slots = 4
track_commit_timestamp = on
bdr.connections = 'vm13'
bdr.vm13_dsn = 'host=192.168.122.13 port=5432 user=postgres dbname=staging_db'
Now for the configuration of the downstream master. First the configuration itself; its analysis follows below.
$ vi data/postgresql.conf
listen_addresses = '*'
shared_preload_libraries = 'bdr'
wal_level = logical
max_wal_senders = 4
max_replication_slots = 4
track_commit_timestamp = on
bdr.connections = 'vm12'
bdr.vm12_dsn = 'host=192.168.122.12 port=5432 user=postgres dbname=staging_db'
bdr.vm12_init_replica = on
bdr.vm12_replica_local_dsn = 'host=127.0.0.1 port=5432 user=postgres dbname=staging_db'
The configuration of the second node differs slightly: in the BDR section we additionally indicate the need to initialize the replica (bdr.vm12_init_replica) from the node specified in bdr.vm12_dsn into the local database whose connection details are given in bdr.vm12_replica_local_dsn. The last parameter is required if the database cluster was initialized with initdb (as in our case); the cluster must then already contain an empty database that will subsequently participate in replication.
If the cluster is initialized with pg_basebackup, the bdr.vm12_replica_local_dsn option is not needed.
Now define the authentication settings on both nodes; in my case everything is allowed. For production installations, of course, you should not do this.
$ vi data/pg_hba.conf
host all all 192.168.122.0/24 trust
host replication postgres 192.168.122.0/24 trust
Restart both nodes and watch the logs.
$ pg_ctl -l logfile -D data/ -w restart
upstream master:
vm12 ~ $ tail -f logfile
LOG: unexpected EOF on standby connection
LOG: starting logical decoding for slot bdr_16384_6029905891437956874_1_16384__
DETAIL: streaming transactions committing after 0/1898F90, reading WAL from 0/1898C30
LOG: logical decoding found consistent point at 0/1898C30
DETAIL: running xacts with xcnt == 0
LOG: starting background worker process "bdr (6029905879776466735,1,16384,): vm13: apply"
downstream:
vm13 ~ $ tail -f logfile
LOG: registering background worker "bdr (6029905891437956874,1,16384,): vm12: apply"
LOG: starting background worker process "bdr (6029905891437956874,1,16384,): vm12: apply"
LOG: logical decoding found consistent point at 0/18A4290
DETAIL: running xacts with xcnt == 0
LOG: exported logical decoding snapshot: "0000071B-1" with 0 xids
LOG: starting logical decoding for slot bdr_16384_6029905879776466735_1_16384__
DETAIL: streaming transactions committing after 0/18A42C8, reading WAL from 0/18A4290
LOG: logical decoding found consistent point at 0/18A4290
DETAIL: running xacts with xcnt == 0
Everything in the logs looks fine and there are no ERROR messages (and if there are, check your configs or blame the developers))). Configuration and startup are complete. You can now test that replication works by creating tables in both databases.
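As a minimal smoke test, something like the following can be run (the table name and rows are arbitrary examples; an explicit integer key is used to keep sequence handling out of the picture):
-- on vm12, in staging_db
CREATE TABLE bdr_test (id integer PRIMARY KEY, note text);
INSERT INTO bdr_test VALUES (1, 'hello from vm12');
-- on vm13, a moment later, the table and the row should already be there
SELECT * FROM bdr_test;
-- and in the opposite direction
INSERT INTO bdr_test VALUES (2, 'hello from vm13');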
A couple more things. Replication is paused temporarily by shutting down the downstream master. However, it is worth noting that a stopped replica causes the upstream master to keep accumulating WAL, which in turn may lead to uncontrolled consumption of disk space. Therefore it is not recommended to leave a replica shut down for long.
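Which slots the upstream master is retaining WAL for, and whether their consumers are currently connected, can be checked in the pg_replication_slots view:
SELECT slot_name, database, active, restart_lsn FROM pg_replication_slots;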
A replica is removed permanently by deleting the BDR configuration on the downstream server and restarting the downstream master. After that you need to drop the corresponding replication slot on the upstream master using the function pg_drop_replication_slot('slotname'). The available slots can be viewed with pg_get_replication_slots().
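For example, for the slot name that showed up in the logs above (take the actual name from the slot list on your upstream master):
-- list the slots and pick the one belonging to the removed replica
SELECT slot_name, database, active FROM pg_replication_slots;
-- drop it so the upstream master stops retaining WAL for it
SELECT pg_drop_replication_slot('bdr_16384_6029905891437956874_1_16384__');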
In conclusion, a word about my own experience. I certainly still have some questions about how BDR behaves, and the answers will most likely have to be found experimentally. But at this stage I like this new tool: it was configured easily and quickly, and it already works even though it will officially appear only in 9.5 (which is about a year away). So with that release we will get another tool for building reliable failover configurations, and that is great. PostgreSQL just gets better and better from release to release.
That is really all. Thank you all for your attention.
P.S. Links for further reading:
BDR User Guide
Logical Log Streaming Replication
PostgreSQL WAL Shipping and Streaming Replication