PostgreSQL + Patroni + etcd + HAProxy + Keepalived Installation and Deployment

Three virtual machines make up the demo environment, with the following planned software versions:

PostgreSQL: 12.2
Patroni: 2.1.4
etcd: 3.3.11
HAProxy: 1.5.18
Keepalived: 1.3.5

The deployment plan is as follows:

Host    IP               Components                                        Role
node1   192.168.30.152   PostgreSQL, Patroni, etcd, HAProxy, Keepalived    Primary
node2   192.168.30.80    PostgreSQL, Patroni, etcd, HAProxy, Keepalived    Standby 1
node3   192.168.30.208   PostgreSQL, Patroni, etcd, HAProxy, Keepalived    Standby 2
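
Not shown in the original but assumed throughout: the hosts resolve each other by name. A minimal sketch via /etc/hosts on all three nodes:

cat >> /etc/hosts <<'EOF'
192.168.30.152 node1
192.168.30.80  node2
192.168.30.208 node3
EOF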

Install the PostgreSQL cluster (steps omitted)
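
A 12.2 primary with two streaming standbys is assumed to be in place. A minimal sketch of seeding the standbys, assuming pg_basebackup and the repluser replication account that also appears in the Patroni configuration below:

# Run on node2 and node3; -R writes standby.signal and primary_conninfo (PostgreSQL 12)
pg_basebackup -h 192.168.30.152 -U repluser -D /data/postgres -X stream -R -P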

Status of the PostgreSQL primary:

[postgres@node1 bin]$ pg_controldata |grep cluster
Database cluster state:               in production

Status of node2:

[postgres@node2 ~]$ pg_controldata |grep cluster
Database cluster state:               in archive recovery

Status of node3:

[postgres@node3 ~]$ pg_controldata |grep cluster
Database cluster state:               in archive recovery

Replication status on the primary:

postgres=# select * from pg_stat_replication;
 pid  | usesysid | usename  | application_name |  client_addr   | client_hostname | client_port |         backend_start         | backend_xmin |   state   | sent_lsn  | write_lsn | flush_lsn | replay_lsn | write_lag | flush_lag | replay_lag | sync_priority | sync_state |          reply_time
------+----------+----------+------------------+----------------+-----------------+-------------+-------------------------------+--------------+-----------+-----------+-----------+-----------+------------+-----------+-----------+------------+---------------+------------+-------------------------------
 3955 |    16384 | repluser | node2            | 192.168.30.80  |                 |       58612 | 2022-07-19 10:03:30.797054+08 |              | streaming | 0/B0014F0 | 0/B0014F0 | 0/B0014F0 | 0/B0014F0  |           |           |            |             1 | sync       | 2022-07-19 10:03:05.772113+08
 3979 |    16384 | repluser | node3            | 192.168.30.208 |                 |       38314 | 2022-07-19 10:03:35.850845+08 |              | streaming | 0/B0014F0 | 0/B0014F0 | 0/B0014F0 | 0/B0014F0  |           |           |            |             0 | async      | 2022-07-19 10:04:48.118754+08

Install etcd

Install on all three nodes:

yum install -y etcd

node1 configuration:

cat  /etc/etcd/etcd.conf 

ETCD_DATA_DIR="/var/lib/etcd/node1.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.30.152:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.30.152:2379,http://127.0.0.1:2379"
ETCD_NAME="node1"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.30.152:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.30.152:2379"
ETCD_INITIAL_CLUSTER="node1=http://192.168.30.152:2380,node2=http://192.168.30.80:2380,node3=http://192.168.30.208:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

node2 configuration (/etc/etcd/etcd.conf):

ETCD_DATA_DIR="/var/lib/etcd/node2.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.30.80:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.30.80:2379,http://127.0.0.1:2379"
ETCD_NAME="node2"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.30.80:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.30.80:2379"
ETCD_INITIAL_CLUSTER="node1=http://192.168.30.152:2380,node2=http://192.168.30.80:2380,node3=http://192.168.30.208:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

node3 configuration (/etc/etcd/etcd.conf):

ETCD_DATA_DIR="/var/lib/etcd/node3.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.30.208:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.30.208:2379,http://127.0.0.1:2379"
ETCD_NAME="node3"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.30.208:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.30.208:2379"
ETCD_INITIAL_CLUSTER="node1=http://192.168.30.152:2380,node2=http://192.168.30.80:2380,node3=http://192.168.30.208:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Start etcd on node1, node2, and node3 in turn:

systemctl start etcd.service

systemctl status etcd.service

systemctl enable etcd.service

Check the etcd cluster status:

etcdctl member list

[postgres@node1 bin]$ etcdctl member list
8073957066143594: name=node3 peerURLs=http://192.168.30.208:2380 clientURLs=http://192.168.30.208:2379 isLeader=false
a4871b3047aa7fcc: name=node2 peerURLs=http://192.168.30.80:2380 clientURLs=http://192.168.30.80:2379 isLeader=true
e06b6b4dbcc68272: name=node1 peerURLs=http://192.168.30.152:2380 clientURLs=http://192.168.30.152:2379 isLeader=false
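
etcd 3.3 defaults to the v2 API, where etcdctl also offers a health check; each member should report "is healthy" and the final line should read "cluster is healthy":

etcdctl cluster-health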

Install Patroni

Run the following on all three nodes. Note that the get-pip.py bootstrap script must match the system Python; on CentOS 7 the default python is 2.7, hence the 2.7 script below (Patroni 2.1.4 still runs on Python 2.7):

yum install -y gcc epel-release
yum install -y python-pip python-psycopg2 python-devel
wget https://bootstrap.pypa.io/pip/2.7/get-pip.py
python get-pip.py
pip install --upgrade pip
pip install --upgrade setuptools
pip install psycopg2-binary
pip install six
pip install 'patroni[etcd]'
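
A quick sanity check that the installation succeeded (both commands ship with Patroni 2.x):

patroni --version
patronictl version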

node1 Patroni configuration:

[root@node1 etcd]#  cat /usr/patroni/patroni_postgresql.yml
scope: postgres_cluster
namespace: /service/
name: node1
restapi:
  listen: 192.168.30.152:8008
  connect_address: 192.168.30.152:8008

etcd:
  host: 192.168.30.152:2379

bootstrap:
  # this section is written into the DCS at bootstrap, and all other cluster members will use it as a `global configuration`
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    master_start_timeout: 300
    synchronous_mode: on
    postgresql:
      use_pg_rewind: true
      use_slots: true
      parameters:
        listen_addresses: "0.0.0.0"
        port: 5432
        wal_level: "replica"
        hot_standby: "on"
        wal_keep_segments: 1000
        max_wal_senders: 10
        max_replication_slots: 10
        wal_log_hints: "on"
  # some desired options for 'initdb'
  initdb:  # Note: It needs to be a list (some options need values, others are switches)
  - encoding: UTF8
  - data-checksums
  pg_hba:  # Add following lines to pg_hba.conf after running 'initdb'
  - host replication repluser 192.168.30.152/32 md5
  - host replication repluser 192.168.30.80/32  md5
  - host replication repluser 192.168.30.208/32 md5
  - host all         all      0.0.0.0/0         md5

postgresql:
  listen: 0.0.0.0:5432
  connect_address: 192.168.30.152:5432
  data_dir: /data/postgres
  bin_dir: /home/postgres/postgres/bin
  authentication:
    replication:
      username: repluser
      password: repluser
    superuser:
      username: postgres
      password: postgres
    rewind:
      username: postgres
      password: postgres


tags:
    nofailover: false
    noloadbalance: false
    clonefrom: false
    nosync: false

node2 Patroni configuration:

[root@node2 ~]#  cat /usr/patroni/patroni_postgresql.yml
scope: postgres_cluster
namespace: /service/
name: node2
restapi:
  listen: 192.168.30.80:8008
  connect_address: 192.168.30.80:8008

etcd:
  host: 192.168.30.80:2379

bootstrap:
  # this section is written into the DCS at bootstrap, and all other cluster members will use it as a `global configuration`
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    master_start_timeout: 300
    synchronous_mode: on
    postgresql:
      use_pg_rewind: true
      use_slots: true
      parameters:
        listen_addresses: "0.0.0.0"
        port: 5432
        wal_level: "replica"
        hot_standby: "on"
        wal_keep_segments: 1000
        max_wal_senders: 10
        max_replication_slots: 10
        wal_log_hints: "on"
  # some desired options for 'initdb'
  initdb:  # Note: It needs to be a list (some options need values, others are switches)
  - encoding: UTF8
  - data-checksums
  pg_hba:  # Add following lines to pg_hba.conf after running 'initdb'
  - host replication repluser 192.168.30.152/32 md5
  - host replication repluser 192.168.30.80/32  md5
  - host replication repluser 192.168.30.208/32 md5
  - host all         all      0.0.0.0/0         md5

postgresql:
  listen: 0.0.0.0:5432
  connect_address: 192.168.30.80:5432
  data_dir: /data/postgres
  bin_dir: /home/postgres/postgres/bin
  authentication:
    replication:
      username: repluser
      password: repluser
    superuser:
      username: postgres
      password: postgres
    rewind:
      username: postgres
      password: postgres


tags:
    nofailover: false
    noloadbalance: false
    clonefrom: false
    nosync: false

node3 Patroni configuration:

[root@node3 patroni]# cat /usr/patroni/patroni_postgresql.yml
scope: postgres_cluster
namespace: /service/
name: node3
restapi:
  listen: 192.168.30.208:8008
  connect_address: 192.168.30.208:8008

etcd:
  host: 192.168.30.208:2379

bootstrap:
  # this section is written into the DCS at bootstrap, and all other cluster members will use it as a `global configuration`
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    master_start_timeout: 300
    synchronous_mode: on
    postgresql:
      use_pg_rewind: true
      use_slots: true
      parameters:
        listen_addresses: "0.0.0.0"
        port: 5432
        wal_level: "replica"
        hot_standby: "on"
        wal_keep_segments: 1000
        max_wal_senders: 10
        max_replication_slots: 10
        wal_log_hints: "on"
  # some desired options for 'initdb'
  initdb:  # Note: It needs to be a list (some options need values, others are switches)
  - encoding: UTF8
  - data-checksums
  pg_hba:  # Add following lines to pg_hba.conf after running 'initdb'
  - host replication repluser 192.168.30.152/32 md5
  - host replication repluser 192.168.30.80/32  md5
  - host replication repluser 192.168.30.208/32 md5
  - host all         all      0.0.0.0/0         md5

postgresql:
  listen: 0.0.0.0:5432
  connect_address: 192.168.30.208:5432
  data_dir: /data/postgres
  bin_dir: /home/postgres/postgres/bin
  authentication:
    replication:
      username: repluser
      password: repluser
    superuser:
      username: postgres
      password: postgres
    rewind:
      username: postgres
      password: postgres


tags:
    nofailover: false
    noloadbalance: false
    clonefrom: false
    nosync: false

Manage PostgreSQL through Patroni

systemctl start patroni
systemctl status patroni
systemctl enable patroni
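
These commands assume a patroni systemd unit, which the original never shows. A minimal sketch consistent with the paths used in this post (pip-installed /usr/bin/patroni, config at /usr/patroni/patroni_postgresql.yml):

cat /etc/systemd/system/patroni.service
[Unit]
Description=patroni - a high-availability PostgreSQL
Documentation=https://patroni.readthedocs.io/en/latest/index.html
After=network.target etcd.service

[Service]
Type=simple
User=postgres
Group=postgres
ExecStart=/usr/bin/patroni /usr/patroni/patroni_postgresql.yml
KillMode=process
TimeoutSec=30
Restart=no

[Install]
WantedBy=multi-user.target

Run systemctl daemon-reload after creating the file.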

[postgres@node1 bin]$ systemctl status patroni
● patroni.service - patroni - a high-availability PostgreSQL
   Loaded: loaded (/etc/systemd/system/patroni.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2022-07-18 13:54:40 CST; 20h ago
     Docs: https://patroni.readthedocs.io/en/latest/index.html
 Main PID: 4695 (patroni)
   CGroup: /system.slice/patroni.service
           ├─3937 postgres: postgres_cluster: walwriter
           ├─3938 postgres: postgres_cluster: autovacuum launcher
           ├─3939 postgres: postgres_cluster: logical replication launcher
           ├─3955 postgres: postgres_cluster: walsender repluser 192.168.30.80(58612) streaming 0/B0014F0
           ├─3979 postgres: postgres_cluster: walsender repluser 192.168.30.208(38314) streaming 0/B0014F0
           ├─4695 /usr/bin/python /usr/bin/patroni /usr/patroni/patroni_postgresql.yml
           ├─4715 /home/postgres/postgres/bin/postgres -D /data/postgres --config-file=/data/postgres/postgresql.conf --listen_ad...
           ├─4717 postgres: postgres_cluster: logger
           ├─4719 postgres: postgres_cluster: checkpointer
           ├─4720 postgres: postgres_cluster: background writer
           ├─4721 postgres: postgres_cluster: stats collector
           └─4727 postgres: postgres_cluster: postgres postgres 127.0.0.1(58014) idle

Check the Patroni cluster status:

patronictl -c /usr/patroni/patroni_postgresql.yml list

[postgres@node1 bin]$ patronictl -c /usr/patroni/patroni_postgresql.yml list
+ Cluster: postgres_cluster (7120985716210081590) -+----+-----------+
| Member | Host           | Role         | State   | TL | Lag in MB |
+--------+----------------+--------------+---------+----+-----------+
| node1  | 192.168.30.152 | Leader       | running |  8 |           |
| node2  | 192.168.30.80  | Sync Standby | running |  8 |       0.0 |
| node3  | 192.168.30.208 | Replica      | running |  8 |       0.0 |
+--------+----------------+--------------+---------+----+-----------+
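
HAProxy and Keepalived below both probe Patroni's REST API on port 8008 rather than PostgreSQL itself: /master (and /leader) returns HTTP 200 only on the leader, /replica only on replicas. The endpoints can be exercised by hand:

# 200 on the leader, 503 elsewhere
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.30.152:8008/master
# Full member state as JSON
curl -s http://192.168.30.152:8008/patroni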

Install HAProxy

Install on all three nodes:

yum install -y haproxy

node1 configuration:

[root@node1 etcd]# cat /etc/haproxy/haproxy.cfg
global
    maxconn 100000
    log 127.0.0.1 local0 info
    log 127.0.0.1 local1 warning
    chroot /var/lib/haproxy
#    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user postgres
    daemon

defaults
    mode               tcp
    log                global
    retries            2
    timeout queue      5s
    timeout connect    5s
    timeout client     60m
    timeout server     60m
    timeout check      15s

listen stats
    mode http
    bind *:7000
    stats enable
    stats uri /

listen master
    bind 192.168.30.100:5000
    maxconn 10000
    option tcplog
    option httpchk OPTIONS /master
    http-check expect status 200
    default-server inter 3s fastinter 1s fall 3 rise 4 on-marked-down shutdown-sessions
       server node1 192.168.30.152:5432 check port 8008
       server node2 192.168.30.80:5432 check port 8008
       server node3 192.168.30.208:5432 check port 8008


listen replicas
    bind 192.168.30.100:5001
    maxconn 10000
    option tcplog
    option httpchk OPTIONS /replica
    balance roundrobin
    http-check expect status 200
    default-server inter 3s fastinter 1s fall 3 rise 2 on-marked-down shutdown-sessions
       server node1 192.168.30.152:5432 check port 8008
       server node2 192.168.30.80:5432 check port 8008
       server node3 192.168.30.208:5432 check port 8008

The HAProxy configuration on node2 and node3 is identical to node1's.
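
One detail the original does not show: every node's HAProxy binds 192.168.30.100, but the VIP is held by only one node at a time. For HAProxy to start on the nodes that do not currently hold the VIP, non-local binding has to be allowed (an assumed prerequisite):

# On all three nodes
echo 'net.ipv4.ip_nonlocal_bind = 1' >> /etc/sysctl.conf
sysctl -p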

Start HAProxy

systemctl start haproxy
systemctl status haproxy

[root@node1 etcd]# systemctl status haproxy
● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/usr/lib/systemd/system/haproxy.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2022-07-19 10:11:11 CST; 2s ago
 Main PID: 5317 (haproxy-systemd)
   CGroup: /system.slice/haproxy.service
           ├─5317 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
           ├─5318 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
           └─5319 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds

Jul 19 10:11:11 node1 systemd[1]: Started HAProxy Load Balancer.
Jul 19 10:11:11 node1 haproxy-systemd-wrapper[5317]: haproxy-systemd-wrapper: executing /usr/sbin/haproxy -f /etc/haproxy/ha...d -Ds
Jul 19 10:11:11 node1 haproxy-systemd-wrapper[5317]: [WARNING] 199/101111 (5318) : config : frontend 'GLOBAL' has no 'bind' ...nded.
Hint: Some lines were ellipsized, use -l to show in full.
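
The stats listener configured above serves a status page on port 7000, where the state of the master and replicas backends can be checked:

# HTML stats page; also viewable in a browser
curl -s http://192.168.30.152:7000/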

Install Keepalived

Install on all three nodes:

yum install -y keepalived

node1 configuration:

[root@node1 etcd]# cat /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_DEVEL
}
vrrp_script check_leader {
    script "/usr/bin/curl -s http://127.0.0.1:8008/leader -v 2>&1|grep '200 OK' >/dev/null"
    interval 2
    weight 10
}
vrrp_script check_replica {
    script "/usr/bin/curl -s http://127.0.0.1:8008/replica -v 2>&1|grep '200 OK' >/dev/null"
    interval 2
    weight 5
}
vrrp_script check_can_read {
    script "/usr/bin/curl -s http://127.0.0.1:8008/read-only -v 2>&1|grep '200 OK' >/dev/null"
    interval 2
    weight 10
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 211
    priority 100
    advert_int 1
    track_script {
        check_can_read
        check_replica
    }
    virtual_ipaddress {
        192.168.30.100
    }
}

The Keepalived configuration on node2 and node3 is identical to node1's.
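
All three nodes start as BACKUP with equal priority 100, so the track_script weights decide which node ends up holding the VIP (note that check_leader is defined but never referenced in track_script, so only check_can_read and check_replica affect the effective priority). The tracked checks can be run by hand; exit status 0 means the check passes:

/usr/bin/curl -s http://127.0.0.1:8008/read-only -v 2>&1 | grep '200 OK' >/dev/null; echo $?
/usr/bin/curl -s http://127.0.0.1:8008/replica -v 2>&1 | grep '200 OK' >/dev/null; echo $?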

Start Keepalived

systemctl start keepalived

systemctl status keepalived

[root@node1 etcd]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2022-07-19 10:13:07 CST; 1s ago
  Process: 5745 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 5747 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─5747 /usr/sbin/keepalived -D
           ├─5748 /usr/sbin/keepalived -D
           └─5749 /usr/sbin/keepalived -D

Jul 19 10:13:07 node1 Keepalived_vrrp[5749]: Registering gratuitous ARP shared channel
Jul 19 10:13:07 node1 Keepalived_vrrp[5749]: Opening file '/etc/keepalived/keepalived.conf'.
Jul 19 10:13:07 node1 Keepalived_vrrp[5749]: WARNING - default user 'keepalived_script' for script execution does not exist...reate.
Jul 19 10:13:07 node1 Keepalived_vrrp[5749]: SECURITY VIOLATION - scripts are being executed but script_security not enabled.
Jul 19 10:13:07 node1 Keepalived_vrrp[5749]: VRRP_Instance(VI_1) removing protocol VIPs.
Jul 19 10:13:07 node1 Keepalived_vrrp[5749]: Using LinkWatch kernel netlink reflector...
Jul 19 10:13:07 node1 Keepalived_vrrp[5749]: VRRP_Instance(VI_1) Entering BACKUP STATE
Jul 19 10:13:07 node1 Keepalived_vrrp[5749]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]
Jul 19 10:13:07 node1 Keepalived_vrrp[5749]: /usr/bin/curl -s http://127.0.0.1:8008/replica -v 2>&1|grep '200 OK' >/dev/nul...atus 1
Jul 19 10:13:07 node1 Keepalived_vrrp[5749]: /usr/bin/curl -s http://127.0.0.1:8008/read-only -v 2>&1|grep '200 OK' >/dev/n...atus 1
Hint: Some lines were ellipsized, use -l to show in full.

VIP binding

The VIP is bound to the eth0 interface on node1:

[root@node1 etcd]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether a6:75:b8:69:57:9d brd ff:ff:ff:ff:ff:ff
    inet 192.168.30.152/24 brd 192.168.30.255 scope global noprefixroute dynamic eth0
       valid_lft 28322sec preferred_lft 28322sec
    inet 192.168.30.100/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::b343:d202:c83f:6573/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::7aaf:69b6:7c81:9d35/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::2562:23d:6bee:d2ae/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever

Cluster tests

Accessing different ports on the VIP provides read/write splitting and load balancing.

1. Connect to port 5000 to reach the read/write primary:

[postgres@node2 ~]$ psql -h 192.168.30.100 -p 5000
Password for user postgres:
psql (12.2)
Type "help" for help.

postgres=# select inet_server_addr(),pg_is_in_recovery();
 inet_server_addr | pg_is_in_recovery
------------------+-------------------
 192.168.30.152   | f

2. Connect to port 5001 to reach a read-only replica (load balanced):

[postgres@node2 ~]$ psql -h 192.168.30.100 -p 5001
Password for user postgres:
psql (12.2)
Type "help" for help.

postgres=# select inet_server_addr(),pg_is_in_recovery();
 inet_server_addr | pg_is_in_recovery
------------------+-------------------
 192.168.30.80    | t
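
Because the replicas listener uses balance roundrobin, repeated connections should alternate between node2 and node3. A quick way to observe it (assumes the password is supplied via ~/.pgpass or PGPASSWORD):

for i in 1 2 3 4; do
  psql -h 192.168.30.100 -p 5001 -U postgres -Atc 'select inet_server_addr();'
done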

Manual leader switchover

patronictl -c /usr/patroni/patroni_postgresql.yml switchover

[root@node2 ~]# patronictl -c /usr/patroni/patroni_postgresql.yml switchover
Master [node2]:
Candidate ['node1', 'node3'] []: node1
When should the switchover take place (e.g. 2022-07-19T11:01 )  [now]: now
Current cluster topology
+ Cluster: postgres_cluster (7120985716210081590) -+----+-----------+
| Member | Host           | Role         | State   | TL | Lag in MB |
+--------+----------------+--------------+---------+----+-----------+
| node1  | 192.168.30.152 | Sync Standby | running |  7 |       0.0 |
| node2  | 192.168.30.80  | Leader       | running |  7 |           |
| node3  | 192.168.30.208 | Replica      | running |  7 |       0.0 |
+--------+----------------+--------------+---------+----+-----------+
Are you sure you want to switchover cluster postgres_cluster, demoting current master node2? [y/N]: y
2022-07-19 10:01:14.23857 Successfully switched over to "node1"
+ Cluster: postgres_cluster (7120985716210081590) -+-----------+
| Member | Host           | Role    | State   | TL | Lag in MB |
+--------+----------------+---------+---------+----+-----------+
| node1  | 192.168.30.152 | Leader  | running |  7 |           |
| node2  | 192.168.30.80  | Replica | stopped |    |   unknown |
| node3  | 192.168.30.208 | Replica | running |  7 |       0.0 |
+--------+----------------+---------+---------+----+-----------+
[root@node2 ~]# patronictl -c /usr/patroni/patroni_postgresql.yml list
+ Cluster: postgres_cluster (7120985716210081590) -+----+-----------+
| Member | Host           | Role         | State   | TL | Lag in MB |
+--------+----------------+--------------+---------+----+-----------+
| node1  | 192.168.30.152 | Leader       | running |  8 |           |
| node2  | 192.168.30.80  | Sync Standby | running |  8 |       0.0 |
| node3  | 192.168.30.208 | Replica      | running |  8 |       0.0 |
+--------+----------------+--------------+---------+----+-----------+
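
Switchover is a planned role change. An unplanned failover can be simulated by stopping Patroni on the current leader and watching the cluster converge (a test sketch; with ttl set to 30, a new leader should be elected within roughly that window):

# On the current leader
systemctl stop patroni
# From another node, watch a new leader being elected
patronictl -c /usr/patroni/patroni_postgresql.yml list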
