PostgreSQL Asynchronous Replication + Pooling + Failover

A simple-to-understand option for asynchronous master-slave replication on a PostgreSQL 9.1 database.


This was my first time implementing full replication, and along the way I wrote a mini-manual, which I would like to present here.

The master-slave replication setup uses a combination of:

  • PostgreSQL 9.1 (database)
  • Bucardo 4.5 (replicator)
  • PgPool-II (pooler and failover)


Bucardo

An asynchronous Postgres replication system written in Perl 5.

It attaches easily to an existing Postgres installation and leaves only a light footprint in the database configuration.

It creates its own control database, in which the replicated servers, databases and tables are recorded and grouped into lists (herds).
The sync type used here is pushdelta (trigger-based, one-way master-slave).
Schema changes are not replicated. It tolerates temporary disconnections: when the connection is restored, Bucardo automatically "catches up" the slave.

A rough plan for installing Bucardo on the master:

sudo aptitude install bucardo
# also install the Perl modules: DBI, DBD::Pg, DBIx::Safe
sudo bucardo_ctl install   # interactive connection settings


Next, manually create the bucardo control database and populate it from bucardo.schema (by default the package is tied to 8.4, and in older versions the automatic creation of the control database misfires).
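
A minimal sketch of that manual step (the schema file path and the dedicated bucardo superuser are assumptions, adjust to your packaging):

sudo -u postgres createuser -s bucardo           # dedicated superuser for the replicator (assumption)
sudo -u postgres createdb -O bucardo bucardo     # the control database
sudo -u postgres psql -d bucardo -f /usr/share/bucardo/bucardo.schema   # path may differ on your system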

Then register the databases, tables and the sync:

sudo bucardo_ctl add db bucardo_dbname name=master_dbname   # the master database
# the password is also recorded in the bucardo.db table (to avoid problems)
sudo bucardo_ctl add db bucardo_dbname name=slave_dbname    # the slave database
sudo bucardo_ctl add table tbl_name db=bucardo_dbname herd=herd_name   # add the master's tables to a list (herd)
sudo bucardo_ctl add sync sync_name type=pushdelta source=herd_name targetdb=slave_dbname   # replicate that herd
sudo bucardo_ctl start   # start replication
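
To check that the sync is actually running, bucardo_ctl can report status (Bucardo 4.x; treat the exact output as version-dependent):

sudo bucardo_ctl status              # overview of all syncs
sudo bucardo_ctl status sync_name    # details for one sync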


On the slave I installed Bucardo but did not configure anything (there is only one slave, and nothing is replicated from it).

PgPool-II

It is very capable but suffers from a lack of documentation. It supports parallel queries, load balancing, pooling of database connections across several pools, and failover, i.e. automatic switching from master to slave and back in case of connection problems.

On Debian it is available from the repository and does not depend on the Postgres version.

sudo aptitude install pgpool2


It stores its configs in /etc/pgpool2/:

  • pgpool.conf — basic settings (see below)
  • pcp.conf — system settings, do not touch
  • pool_hba.conf — access control for client connections; you can keep pgpool's own rules here, but it is better to rely on Postgres's pg_hba.conf (a sketch follows this list)
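
For illustration, pool_hba.conf uses the same format as PostgreSQL's pg_hba.conf; a minimal sketch (addresses and auth methods are only examples, and md5 additionally requires pool_passwd):

# TYPE  DATABASE  USER  CIDR-ADDRESS   METHOD
host    all       all   127.0.0.1/32   trust
host    all       all   10.0.0.0/24    md5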

I used PgPool as a pooler, a load balancer and, most importantly, for failover.

An example /etc/pgpool2/pgpool.conf (partial) for the master:
 listen_addresses = '*'
port = 9999
socket_dir = '/var/run/postgresql'
pcp_port = 9898
pcp_socket_dir = '/var/run/postgresql'

backend_hostname0 = master_server
backend_port0 = 5432
backend_weight0 = 1
backend_data_directory0 = '/master_data'
backend_flag0 = 'ALLOW_TO_FAILOVER'

backend_hostname1 = slave_server
backend_port1 = 5432
backend_weight1 = 1
backend_data_directory1 = '/slave_data'
backend_flag1 = 'ALLOW_TO_FAILOVER'

connection_cache = on

replication_mode = off

load_balance_mode = on

master_slave_mode = on
master_slave_sub_mode = 'stream'

parallel_mode = on
pgpool2_hostname = ''

system_db_hostname = master_server
system_db_port = 5432
system_db_dbname = 'pgpool'
system_db_schema = 'pgpool_catalog'
system_db_user = 'pgpool'
system_db_password = ''

failover_command = '/etc/pgpool2/failover.sh %d %P %H %R'

recovery_user = 'postgres'
recovery_password = ''
recovery_1st_stage_command = '/etc/pgpool2/recovery_1st_stage.sh'
recovery_2nd_stage_command = ''
recovery_timeout = 90
client_idle_limit_in_recovery = 0


Also in /etc/pgpool2/, create the scripts referenced in the config, with the following contents:

"failover.sh" — Actually the script is executed in case of a fall of the wizard
the
#!/bin/bash -x
FALLING_NODE=$1    # %d
OLDPRIMARY_NODE=$2 # %P
NEW_PRIMARY=$3     # %H
PGDATA=$4          # %R

# promote the standby only if the failed node was the old primary
if [ "$FALLING_NODE" = "$OLDPRIMARY_NODE" ]; then
    if [ "$UID" -eq 0 ]; then
        # pgpool may run as root, so drop to the postgres user for ssh
        su postgres -c "ssh -T postgres@$NEW_PRIMARY touch $PGDATA/trigger"
    else
        # create the trigger file on the new primary to promote it
        ssh -T postgres@$NEW_PRIMARY touch $PGDATA/trigger
    fi
    exit 0
fi
exit 0

"recovery_1st_stage.sh" Script with the speaker name
the
#!/bin/bash -x

PGDATA=$1
REMOTE_HOST=$2
REMOTE_PGDATA=$3

PORT=5432
PGHOME=/home/yugo-n/pgsql-9.2.1
ARCH=$PGHOME/data/arch

# clear the local WAL archive
rm -rf $ARCH/*

# rebuild the standby's data directory from a fresh base backup of this node
ssh -T postgres@$REMOTE_HOST "
LD_LIBRARY_PATH=$PGHOME/lib:$LD_LIBRARY_PATH;
rm -rf $REMOTE_PGDATA
$PGHOME/bin/pg_basebackup -h $HOSTNAME -U r_user -D $REMOTE_PGDATA -x -c fast
rm $REMOTE_PGDATA/trigger"

ssh -T postgres@$REMOTE_HOST "rm -rf $ARCH/*"
ssh -T postgres@$REMOTE_HOST "mkdir -p $REMOTE_PGDATA/pg_xlog/archive_status"

# enable hot_standby on the standby and point its recovery.conf at this node
ssh -T postgres@$REMOTE_HOST "
cd $REMOTE_PGDATA;
cp postgresql.conf postgresql.conf.bak;
sed -e 's/#*hot_standby = off/hot_standby = on/' postgresql.conf.bak > postgresql.conf;
rm -f postgresql.conf.bak;
cat > recovery.conf << EOT
standby_mode = 'on'
primary_conninfo = 'host=$HOSTNAME port=$PORT user=r_user'
restore_command = 'scp $HOSTNAME:$ARCH/%f %p'
trigger_file = '$PGDATA/trigger'
EOT
"


Important! All files and folders used by pgpool must be owned by postgres:postgres.
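
For example, for the layout used above:

sudo chown -R postgres:postgres /etc/pgpool2
sudo chmod +x /etc/pgpool2/failover.sh /etc/pgpool2/recovery_1st_stage.sh   # the scripts must also be executable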

For the slave the configuration is almost identical; only master_slave_mode and failover_command differ (see below).
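
The lines that change in the slave's pgpool.conf, per the description above:

master_slave_mode = off
failover_command = ''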

PgPool is started with a simple console command:
sudo pgpool
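
It can be stopped or reloaded from the same console, and once it is running a quick sanity check is to ask it about its backends (the column set of pool_nodes varies by pgpool version; the user is just an example):

sudo pgpool -m fast stop                                      # shut down, closing client sessions quickly
sudo pgpool reload                                            # re-read the configuration
psql -h localhost -p 9999 -U postgres -c "show pool_nodes;"   # which backends are up and how traffic is balanced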


Thus we obtain master-slave replication, load balancing and failover.

Connections always go only to the master, through pgpool's port (9999).
In normal operation writes go to the master and are replicated to the slave, while reads are served by both.
If the slave goes down, then once it is back up all the data accumulated during the downtime is caught up automatically.
If the master is disconnected, pgpool redirects both reads and writes entirely to the slave without breaking user connections, temporarily making it the "master". When the master comes back up, it catches up on all the data from the slave and is switched back, again without breaking user connections.
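
From the application's point of view nothing changes except the port; a connection sketch (host, user and database names are placeholders):

psql -h pgpool_server -p 9999 -U app_user app_db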

There is one unresolved problem: I could not set up rights management so that while the master is alive the slave would be read-only, switching to read-write on failover (and back again on recovery). But since external access goes only to the master's address, the only danger is the playful hands of developers.

I hope this article will be useful for novice DBAs.
Thank you for your attention!
