Testing failover to one of 2 slaves and reattaching the old master to the new master.
Starting config:

   master
   |    |
slave1  slave2

Post failover config:

   slave1
   |    |
slave2  master
Prerequisites:
You will need Vagrant and VirtualBox to run this tutorial, but that should be all.
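A quick way to confirm both are installed (any reasonably recent versions should be fine):

vagrant --version
VBoxManage --version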
Process
Turn off master.
Promote slave1 to Master
Attach slave2 as follower to slave1
Attach the old master as a follower to slave1 (without rebuilding)
Steps:
git clone https://github.com/philmcc/postgres_clusters.git
cd postgres_clusters/3_node_master_2_slaves_94_rep_slots
vagrant up
Check the cluster state and replication:
psql -Upostgres -h 192.168.4.2

postgres=# select * from pg_replication_slots;
 slot_name | plugin | slot_type | datoid | database | active | xmin | catalog_xmin | restart_lsn
-----------+--------+-----------+--------+----------+--------+------+--------------+-------------
 slave2    |        | physical  |        |          | t      |      |              | 0/60003C0
 slave1    |        | physical  |        |          | t      |      |              | 0/60003C0
(2 rows)
postgres=# select * from pg_stat_replication;
 pid  | usesysid | usename | application_name | client_addr | client_hostname | client_port |         backend_start         | backend_xmin |   state   | sent_location | write_location | flush_location | replay_location | sync_priority | sync_state
------+----------+---------+------------------+-------------+-----------------+-------------+-------------------------------+--------------+-----------+---------------+----------------+----------------+-----------------+---------------+------------
 3172 |    16384 | rep     | walreceiver      | 192.168.4.3 |                 |       52830 | 2017-07-20 09:30:36.674926+00 |              | streaming | 0/60003C0     | 0/60003C0      | 0/60003C0      | 0/60003C0       |             0 | async
 3180 |    16384 | rep     | walreceiver      | 192.168.4.4 |                 |       53949 | 2017-07-20 09:31:41.737832+00 |              | streaming | 0/60003C0     | 0/60003C0      | 0/60003C0      | 0/60003C0       |             0 | async
(2 rows)
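Optionally, you can confirm that data is actually flowing by writing on the master and reading it back from both slaves (the table name here is purely illustrative):

psql -Upostgres -h 192.168.4.2 -c "CREATE TABLE repl_check (id int)"
psql -Upostgres -h 192.168.4.2 -c "INSERT INTO repl_check VALUES (1)"
psql -Upostgres -h 192.168.4.3 -c "SELECT * FROM repl_check"
psql -Upostgres -h 192.168.4.4 -c "SELECT * FROM repl_check"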
Stop postgres on the master:
[vagrant@master ~]$ sudo su -
[root@master ~]# service postgresql-9.4 stop
Stopping postgresql-9.4 service:                           [  OK  ]
[root@master ~]#
Promote slave1
Connect to slave1:
vagrant ssh slave1

Create the failover trigger file:

touch /tmp/postgresql.trigger.5432
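The trigger file path matches the trigger_file setting in the recovery.conf used by this setup (shown further down). If you prefer, promotion can also be done with pg_ctl; the paths below assume the stock PGDG 9.4 RPM layout, so adjust if your install differs. Run as root on slave1:

su - postgres -c "/usr/pgsql-9.4/bin/pg_ctl promote -D /var/lib/pgsql/9.4/data"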
From your local machine – check the status of slave1:
psql -Upostgres -h 192.168.4.3

postgres=# SELECT pg_is_in_recovery();
 pg_is_in_recovery
-------------------
 f
(1 row)
Check replication slots:
postgres=# select * from pg_replication_slots;
 slot_name | plugin | slot_type | datoid | database | active | xmin | catalog_xmin | restart_lsn
-----------+--------+-----------+--------+----------+--------+------+--------------+-------------
(0 rows)
There are none there. Let's create some for our other hosts so that we don't lose any WAL files. (You may not want to do this on a production system, as it forces the new master to keep hold of WAL files until the new slaves connect and consume them. It is necessary here, though, so that we can attach the old master straight back in without rebuilding it.)
postgres=# SELECT 1 FROM pg_create_physical_replication_slot('master');
 ?column?
----------
        1
(1 row)

postgres=# SELECT 1 FROM pg_create_physical_replication_slot('slave2');
 ?column?
----------
        1
(1 row)

postgres=# select * from pg_replication_slots;
 slot_name | plugin | slot_type | datoid | database | active | xmin | catalog_xmin | restart_lsn
-----------+--------+-----------+--------+----------+--------+------+--------------+-------------
 master    |        | physical  |        |          | f      |      |              |
 slave2    |        | physical  |        |          | f      |      |              |
(2 rows)
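Because these slots stay inactive until their standbys connect, slave1 will retain WAL on their behalf from now on. A rough way to keep an eye on how much WAL each slot is pinning (restart_lsn stays empty until a standby has connected at least once, so this only becomes meaningful once the followers are attached):

psql -Upostgres -h 192.168.4.3 -c "SELECT slot_name, active, pg_xlog_location_diff(pg_current_xlog_location(), restart_lsn) AS retained_bytes FROM pg_replication_slots"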
(The old master node will still be called 'master' so that we can see what has changed, but it won't be the leader in the new config.)
OK, so now we have a new master and the cluster is back up and running. Now it's time to plug in the first follower (slave2) so that we have a standby for failover.
Connect to slave2 and stop postgres:
vagrant ssh slave2

[vagrant@slave2 ~]$ sudo su -
[root@slave2 ~]# service postgresql-9.4 stop
Stopping postgresql-9.4 service:                           [  OK  ]
Edit the recovery.conf file to point to the new leader (slave1) and set recovery_target_timeline = 'latest'. Note that primary_slot_name = 'slave2' controls which replication slot this node will use.
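The file lives in the data directory; the path below assumes the stock PGDG 9.4 RPM layout used by this Vagrant setup:

vi /var/lib/pgsql/9.4/data/recovery.conf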
standby_mode = 'on'
primary_conninfo = 'user=rep host=192.168.4.3 port=5432 sslmode=prefer sslcompression=1 krbsrvname=postgres'
trigger_file = '/tmp/postgresql.trigger.5432'
primary_slot_name = 'slave2'
recovery_target_timeline = 'latest'
Start postgres back up again:
[root@slave2 ~]# service postgresql-9.4 start
Starting postgresql-9.4 service:                           [  OK  ]
Now check that the new follower is pointing at the leader and successfully getting data:
(on slave1 – the leader)
postgres=# select * from pg_replication_slots;
 slot_name | plugin | slot_type | datoid | database | active | xmin | catalog_xmin | restart_lsn
-----------+--------+-----------+--------+----------+--------+------+--------------+-------------
 master    |        | physical  |        |          | f      |      |              |
 slave2    |        | physical  |        |          | t      |      |              | 0/7000348
(2 rows)
As you can see, the ‘slave2’ replication slot is now active and connected.
You can further check this by creating something on slave1 and seeing it appear on slave2.
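For example (the table name here is purely illustrative):

psql -Upostgres -h 192.168.4.3 -c "CREATE TABLE failover_check (id int)"
psql -Upostgres -h 192.168.4.3 -c "INSERT INTO failover_check VALUES (1)"
psql -Upostgres -h 192.168.4.4 -c "SELECT * FROM failover_check"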
Now it's time to put the old master back into use. We don't want to use it as the master again, because we already have a leader and switching back would mean more downtime, so we will bring it in as another follower.
Doing that can be tricky, because failing over may have moved the cluster onto a different timeline. It is easy to work around, though, by telling the old master to follow the latest timeline.
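If you want to see the timeline switch for yourself, pg_controldata reports the timeline each instance is on; compare the output on the old master with slave1. The paths below again assume the PGDG 9.4 RPM layout, and the command needs to be run as root or postgres on each node:

/usr/pgsql-9.4/bin/pg_controldata /var/lib/pgsql/9.4/data | grep TimeLineID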
Connect to the old master node and create a recovery.conf file like the one above, but pointing at the 'master' replication slot:
standby_mode = 'on'
primary_conninfo = 'user=rep host=192.168.4.3 port=5432 sslmode=prefer sslcompression=1 krbsrvname=postgres'
trigger_file = '/tmp/postgresql.trigger.5432'
primary_slot_name = 'master'
recovery_target_timeline = 'latest'
Start the instance back up and check (on slave1) that the replication is connected and receiving changes:
db1=# select * from pg_replication_slots;
 slot_name | plugin | slot_type | datoid | database | active | xmin | catalog_xmin | restart_lsn
-----------+--------+-----------+--------+----------+--------+------+--------------+-------------
 master    |        | physical  |        |          | t      |      |              | 0/7021718
 slave2    |        | physical  |        |          | t      |      |              | 0/7021718
(2 rows)

db1=# SELECT * from pg_stat_replication;
 pid  | usesysid | usename | application_name | client_addr | client_hostname | client_port |         backend_start         | backend_xmin |   state   | sent_location | write_location | flush_location | replay_location | sync_priority | sync_state
------+----------+---------+------------------+-------------+-----------------+-------------+-------------------------------+--------------+-----------+---------------+----------------+----------------+-----------------+---------------+------------
 9173 |    16384 | rep     | walreceiver      | 192.168.4.4 |                 |       38245 | 2017-07-20 11:03:40.053334+00 |              | streaming | 0/70218C8     | 0/70218C8      | 0/70218C8      | 0/70218C8       |             0 | async
 9192 |    16384 | rep     | walreceiver      | 192.168.4.2 |                 |       57914 | 2017-07-20 11:13:21.212236+00 |              | streaming | 0/70218C8     | 0/70218C8      | 0/70218C8      | 0/70218C8       |             0 | async
(2 rows)
As you can see, everything is connected and changes are being passed on.
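You can also confirm from the old master's side that it is now running in recovery and replaying WAL from the new leader:

psql -Upostgres -h 192.168.4.2 -c "SELECT pg_is_in_recovery(), pg_last_xlog_receive_location(), pg_last_xlog_replay_location()"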
All that's left now is a bit of tidying up. The ‘master’ node still has replication slots configured from when it was the leader, and these need to be removed.
It's simple enough to do:
db1=# select * from pg_replication_slots;
 slot_name | plugin | slot_type | datoid | database | active | xmin | catalog_xmin | restart_lsn
-----------+--------+-----------+--------+----------+--------+------+--------------+-------------
 slave2    |        | physical  |        |          | f      |      |              | 0/60331B0
 slave1    |        | physical  |        |          | f      |      |              | 0/60331B0
(2 rows)

db1=# select pg_drop_replication_slot('slave1');
 pg_drop_replication_slot
--------------------------

(1 row)

db1=# select pg_drop_replication_slot('slave2');
 pg_drop_replication_slot
--------------------------

(1 row)

db1=# select * from pg_replication_slots;
 slot_name | plugin | slot_type | datoid | database | active | xmin | catalog_xmin | restart_lsn
-----------+--------+-----------+--------+----------+--------+------+--------------+-------------
(0 rows)