1. The primary/standby streaming replication in this article has been verified and can be used as a reference.
2. pgpool part: after the primary PostgreSQL instance is stopped, a standby can take over as primary (but all of this only works while the pgpool process itself has not been stopped and has not crashed).
3. pgpool part: if the primary's pgpool and the primary PostgreSQL instance are stopped together, only the VIP fails over; the standby PostgreSQL is not promoted to primary (I have not found the cause yet; if you can see the problem, please point it out, thanks).
4. Prefer one primary with one standby; avoid one primary with multiple standbys.
https://blog.csdn.net/zxfmamama/article/details/121008549
https://blog.csdn.net/weixin_39540651/article/details/106122610
https://blog.csdn.net/martinlinux/article/details/121367257
Streaming replication (data synchronization): implemented through PostgreSQL configuration
Automatic virtual IP switchover: implemented through pgpool-II configuration
Primary/standby role switching: implemented by pgpool-II monitoring plus running PostgreSQL's promote command (a manual example is sketched after this overview)
Scenario 1: one of the PostgreSQL instances goes down (after the instances start, one acts as primary and the rest as standbys, forming a database cluster):
(1) If it is the primary, the cluster detects the failure and, following the configured policy, promotes one of the standbys to primary, so the cluster as a whole stays available. When the old primary recovers, it rejoins the cluster as a new standby after it has caught up on data.
(2) If it is a standby, there is no visible impact on the cluster. When the standby recovers, it catches up from the primary and rejoins the cluster in its normal state.
Scenario 2: the pgpool-II process on one of the hosts dies.
pgpool-II is middleware that sits between PostgreSQL clients and the PostgreSQL servers; it provides connection pooling, a watchdog, replication, load balancing, caching and more (see the official documentation for details).
Through the virtual IP maintained by pgpool-II, the cluster exposes a single, always-available address to the outside, hiding the addresses of the individual database hosts.
pgpool-II automatically handles the recovery steps after an outage:
(1) It monitors the state of every pgpool-II process; when one dies, it promptly "moves" the virtual IP to another host to keep the service reachable (sometimes called IP floating or failover).
(2) The cluster always exposes one single, available virtual IP for access.
(3) It monitors the state of the PostgreSQL instance on each host so that primary/standby roles can be switched in time.
Scenario 3: an entire host goes down.
(1) When pgpool-II detects that the host is down, it has to perform two operations: switch the database role and, if necessary, move the virtual IP.
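For reference, the role switch itself is just a standby promotion. A minimal manual example (run on the standby that should become the new primary; paths follow the source install used later in this article; pgpool-II automates exactly this step through its failover script):
/usr/local/postgresql/bin/pg_ctl -D /usr/local/postgresql/data promote   # as postgres
# or, from SQL (PostgreSQL 12 and later):
psql -c "SELECT pg_promote();"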
Role | OS | IP | Installed software |
---|---|---|---|
Primary | CentOS Linux release 7.9.2009 | 192.168.8.10 | PostgreSQL 12.3 + pgpool-II 4.1 |
Standby | CentOS Linux release 7.9.2009 | 192.168.8.20 | PostgreSQL 12.3 + pgpool-II 4.1 |
Standby | CentOS Linux release 7.9.2009 | 192.168.8.30 | PostgreSQL 12.3 + pgpool-II 4.1 |
VIP | - | 192.168.8.33 | A single virtual IP through which the cluster is accessed |
# yum repository
yum install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
# install (PostgreSQL version 12 here)
sudo yum install -y postgresql12-server
systemctl stop firewalld
systemctl disable firewalld
For detailed installation steps see https://blog.csdn.net/zxfmamama/article/details/121008549
mkdir /opt/rpm/
curl -O http://download.postgresql.org/pub/repos/yum/12/redhat/rhel-7-x86_64/postgresql12-12.3-1PGDG.rhel7.x86_64.rpm
curl -O http://download.postgresql.org/pub/repos/yum/12/redhat/rhel-7-x86_64/postgresql12-contrib-12.3-1PGDG.rhel7.x86_64.rpm
curl -O http://download.postgresql.org/pub/repos/yum/12/redhat/rhel-7-x86_64/postgresql12-libs-12.3-1PGDG.rhel7.x86_64.rpm
curl -O http://download.postgresql.org/pub/repos/yum/12/redhat/rhel-7-x86_64/postgresql12-server-12.3-1PGDG.rhel7.x86_64.rpm
# after uploading the files to the server, run the install command
rpm -ivh postgresql*.rpm
After the installation finishes (the status check can be skipped; you may go straight to database initialization):
It creates a postgresql-12 service for us; the database is not initialized yet, so it cannot be accessed.
It also creates a postgres user (user name and password both postgres).
Now check the service status with systemctl status postgresql-12:
● postgresql-12.service - PostgreSQL 12 database server
Loaded: loaded (/usr/lib/systemd/system/postgresql-12.service; disabled; vendor preset: disabled)
Active: inactive (dead)
Docs: https://www.postgresql.org/docs/12/static/
The default unit file can be found at /usr/lib/systemd/system/postgresql-12.service.
If we cat this file, we get some basic information:
Database data directory: Environment=PGDATA=/var/lib/pgsql/12/data/
PostgreSQL installation directory: PGHOME=/usr/pgsql-12/
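If you stay with the RPM install instead of the source build used below, a minimal way to initialize and start the packaged service (using the setup script shipped in the PGDG package, with the paths shown above) looks like this:
# as root
/usr/pgsql-12/bin/postgresql-12-setup initdb    # initializes /var/lib/pgsql/12/data/
systemctl start postgresql-12
systemctl enable postgresql-12                  # optional: start on boot
systemctl status postgresql-12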
mkdir /opt/tar
yum install -y vim lrzsz tree wget gcc gcc-c++ make readline-devel readline zlib-devel zlib ncurses-devel
wget http://ftp.postgresql.org/pub/source/v12.3/postgresql-12.3.tar.gz
tar -zxf postgresql-12.3.tar.gz
cd postgresql-12.3
./configure --prefix=/usr/local/postgresql
make && make install
# create the data and log directories
mkdir /usr/local/postgresql/log
mkdir /usr/local/postgresql/data
# configure environment variables
cat << eof >> /etc/profile
export PGHOME=/usr/local/postgresql
export PGDATA=/usr/local/postgresql/data
export PATH=\$PATH:\$HOME/.local/bin:\$HOME/bin:\$PGHOME/bin
eof
source /etc/profile
# add the user and grant ownership
useradd postgres
chown -R postgres:root /usr/local/postgresql/
# initialize the database
Note: do not initialize the database as root, otherwise it will fail with an error.
su postgres
/usr/local/postgresql/bin/initdb -D /usr/local/postgresql/data/
The following output indicates success:
###
[postgres@localhost postgresql]$ /usr/local/postgresql/bin/initdb -D /usr/local/postgresql/data/
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "zh_CN.UTF-8".
The default database encoding has accordingly been set to "UTF8".
initdb: could not find suitable text search configuration for locale "zh_CN.UTF-8"
The default text search configuration will be set to "simple".
Data page checksums are disabled.
fixing permissions on existing directory /usr/local/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default timezone ... Asia/Shanghai
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok
WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
Success. You can now start the database server using:
/usr/local/postgresql/bin/pg_ctl -D /usr/local/postgresql/data/ -l logfile start
###
For starting/stopping and basic usage, see:
https://blog.csdn.net/martinlinux/article/details/121364635
cd /usr/local/postgresql/data
cp postgresql.conf postgresql.conf-bak
cp pg_hba.conf pg_hba.conf-bak
vim postgresql.conf
listen_addresses = '*'
port = 5432
vim pg_hba.conf
# "local" is for Unix domain socket connections only
local all all trust
# IPv4 local connections:
host all all 127.0.0.1/32 trust
# IPv6 local connections:
host all all ::1/128 trust
# Allow replication connections from localhost, by a user with the
# replication privilege.
local replication all trust
host replication all 127.0.0.1/32 trust
host replication all ::1/128 trust
Add the following line:
host all all 0.0.0.0/0 md5
Start as the postgres user:
pg_ctl start -l /usr/local/postgresql/log/pg_server.log -D /usr/local/postgresql/data/
Log in:
psql -U postgres
ALTER USER postgres WITH PASSWORD 'postgres';  -- set the password of the postgres user to postgres
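With the host all all 0.0.0.0/0 md5 rule in place and the password set, a quick remote connectivity check (from any other host; 192.168.8.10 is the primary in this setup) would be:
psql -h 192.168.8.10 -p 5432 -U postgres -d postgres -W
# you should be prompted for the password and land at the psql prompt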
Streaming replication explained: https://blog.csdn.net/weixin_39540651/article/details/106122610
Streaming replication has been available since roughly PostgreSQL 9. The principle: the standby continuously pulls changes from the primary and applies every WAL record; the unit shipped each time is a single WAL record. (WAL, the write-ahead log, is an implementation of transaction logging.)
1. After a transaction commits, the WAL is written on the primary; depending on the configured synchronization level, the primary may also wait for the standby to acknowledge receipt.
2. The primary ships WAL to the standby through its WAL sender process; the standby's receiver process replays the WAL as it arrives, which keeps primary and standby data consistent.
PostgreSQL can ship WAL in two ways: file-based log shipping and streaming replication.
Unlike file-based log shipping, the key to streaming replication is the "stream": a stream is an unbounded sequence of data, like water in a river, flowing continuously. Streaming replication therefore lets a standby stay closer to up to date than file-based log shipping can.
For example, if we send a large file from a local host to a remote host and receive it as a stream, we can write it to the file system while it is still arriving; by the time the stream finishes, the disk writes are done as well.
PostgreSQL uses the synchronous_commit (enum) parameter to specify the synchronization level of a transaction. Different transactions can use different levels, depending on the business requirements.
synchronous_commit = off   # synchronization level: off, local, remote_write, remote_apply, or on
remote_apply: on commit or rollback, wait until the WAL has been persisted on the primary and on the synchronous standby(s), and also applied (replayed) on the synchronous standby(s).
off: commit does not wait for the WAL to be persisted at all.
local: commit only waits for the local database's WAL to be persisted.
remote_write: commit waits for the local WAL flush and for the sync standby to have written the WAL to its OS buffers (no flush to disk required on the standby).
on: commit waits for the local WAL flush and for the sync standby's WAL flush.
The stricter the level, the higher the data safety and the bigger the performance impact; from safest to least safe the order is remote_apply > on > remote_write > local > off. Because the level can also be set per session or per transaction, non-critical writes can opt out individually (see the example below).
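A short sketch of lowering the level for a single non-critical transaction (the table name is only an illustration):
BEGIN;
SET LOCAL synchronous_commit = off;   -- only this transaction skips waiting for synchronous standby confirmation
INSERT INTO audit_log(msg) VALUES ('non-critical write');   -- audit_log is a hypothetical table
COMMIT;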
Level | Description |
---|---|
minimal | the database cannot be recovered from a base backup plus WAL |
replica | added in 9.6, merging the earlier archive and hot_standby levels; supports WAL archiving and replication |
logical | adds, on top of replica, the information needed for logical decoding |
su - postgres
Create the archive directory (on all nodes):
# as postgres
su - postgres
mkdir /usr/local/postgresql/archivedir
psql   # log in
# create the repuser role, used by the standbys to connect to the primary
postgres=# create role repuser login replication encrypted password 'repuser';
CREATE ROLE
postgres=# \q
cd /usr/local/postgresql/data
vim pg_hba.conf
# add the following line
host replication repuser 0.0.0.0/0 md5
cd /usr/local/postgresql/data
vim postgresql.conf
archive_mode = on
archive_command = 'cp "%p" "/usr/local/postgresql/archivedir/"'
# maximum number of connections; the standby's max_connections must be greater than the primary's
max_connections = 100
# whether to wait for the WAL to be flushed to disk before reporting the transaction as committed to the client. The default is on; synchronous streaming replication needs it on,
# but with on, a standby outage can block writes on the primary.
synchronous_commit = off
# '*' means every standby is allowed to connect to the primary as a synchronous standby, but only one standby at a time is actually in synchronous mode.
# You can also name specific standbys by setting this to their application_name.
synchronous_standby_names = '*'
# controls how much information is written to the WAL; the default is minimal. See section 3.3.3 (notes) for an explanation.
wal_level = replica
# maximum number of concurrent streaming replication connections; roughly, set it to the number of standbys
max_wal_senders = 2
max_replication_slots = 10
# minimum number of past WAL files (WAL segments) to keep in pg_wal in case a standby needs them for streaming replication; default 0
wal_keep_segments = 16
# streaming replication timeout: terminate replication connections that have been inactive longer than this. Useful for the sending server to detect a standby crash or a network outage. 0 disables the timeout. Can only be set in postgresql.conf or on the server command line. The default is 60 seconds.
wal_sender_timeout = 60s
max_standby_streaming_delay = 30s # maximum delay allowed on a standby before queries that conflict with incoming WAL are cancelled
# how often the standby reports its status to the primary; the standby reports on every data transfer anyway, this only caps the maximum interval
wal_receiver_status_interval = 10s
hot_standby_feedback = on # the standby sends feedback about its running queries to the primary (so the primary does not remove rows the standby still needs)
Reload after the changes:
pg_ctl -D /usr/local/postgresql/data/ reload
# note: wal_level, archive_mode, max_wal_senders and max_replication_slots only take effect after a full restart, so restart as well:
pg_ctl -D /usr/local/postgresql/data/ restart -l /usr/local/postgresql/log/pg_server.log
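A quick sanity check that the settings are active (run on the primary as postgres):
psql -c "SHOW wal_level;"
psql -c "SHOW archive_mode;"
psql -c "SHOW max_wal_senders;"
psql -c "SHOW synchronous_standby_names;"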
1. Empty the standby's data directory
rm -rf /usr/local/postgresql/data/*
2. Copy the data from the primary server to the standby; this step is called a "base backup"
pg_basebackup -h 192.168.8.10 -p 5432 -U repuser -Fp -Xs -Pv -R -D /usr/local/postgresql/data/
-h: the host to use as the primary server.
-D: the data directory.
-U: the connection user.
-P: enable progress reporting.
-v: enable verbose mode.
-R: enable creation of the recovery configuration: a standby.signal file is created and the connection settings are appended to postgresql.auto.conf in the data directory.
-X: include the required write-ahead log (WAL) files in the backup. The value stream means the WAL is streamed while the backup is taken.
-C: allow creating the replication slot named by the -S option before the backup starts.
-S: the replication slot name.
3. The data directory now contains a standby.signal file. In PostgreSQL 12 the mere presence of this file marks the server as a standby; it does not need any content (the standby_mode parameter from older releases no longer exists).
4. Edit the postgresql.conf file
vim postgresql.conf
# connection info for the primary node and the replication user
primary_conninfo = 'host=<primary node IP> port=5432 user=repuser password=<password of repuser>'
# recover to the latest timeline
recovery_target_timeline = latest
# must be greater than on the primary; size this properly for production
max_connections = 200
# this node is not only used for archiving, it can also serve read-only queries
hot_standby = on
# maximum streaming standby delay
max_standby_streaming_delay = 30s
# interval at which this node reports its status to the primary
wal_receiver_status_interval = 10s
# send feedback about queries running here back to the primary
hot_standby_feedback = on
5. Start the standby:
pg_ctl start -l /usr/local/postgresql/log/pg_server.log -D /usr/local/postgresql/data/
6. Log in on the primary and check:
[postgres@localhost data]$ psql
psql (12.3)
Type "help" for help.
postgres=# select client_addr,sync_state from pg_stat_replication;
client_addr | sync_state
--------------+------------
192.168.8.20 | sync
192.168.8.30 | potential
(2 rows)
postgres=# \x        # switch to expanded (vertical) display
Expanded display is on.
postgres=# select * from pg_stat_replication;
-[ RECORD 1 ]----+------------------------------
pid | 1615
usesysid | 16384
usename | repuser
application_name | walreceiver
client_addr | 192.168.8.20
client_hostname |
client_port | 43536
backend_start | 2022-01-06 21:32:51.301058+08
backend_xmin | 496
state | streaming
sent_lsn | 0/C000060
write_lsn | 0/C000060
flush_lsn | 0/C000060
replay_lsn | 0/C000060
write_lag |
flush_lag |
replay_lag |
sync_priority | 1
sync_state | sync
reply_time | 2022-01-06 21:33:31.877547+08
-[ RECORD 2 ]----+------------------------------
pid | 1617
usesysid | 16384
usename | repuser
application_name | walreceiver
client_addr | 192.168.8.30
client_hostname |
client_port | 54644
backend_start | 2022-01-06 21:32:53.401188+08
backend_xmin | 496
state | streaming
sent_lsn | 0/C000060
write_lsn | 0/C000060
flush_lsn | 0/C000060
replay_lsn | 0/C000060
write_lag |
flush_lag |
replay_lag |
sync_priority | 1
sync_state | potential
reply_time | 2022-01-06 21:33:33.625504+08
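On a standby you can additionally confirm that it is in recovery and inspect the WAL receiver (standard PostgreSQL 12 function and view):
psql -c "SELECT pg_is_in_recovery();"    # returns t on a standby, f on the primary
psql -c "SELECT status, sender_host, received_lsn FROM pg_stat_wal_receiver;"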
# create a database on the primary
psql
postgres=# create database test_repl;
CREATE DATABASE
postgres=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
postgres | postgres | UTF8 | zh_CN.UTF-8 | zh_CN.UTF-8 |
template0 | postgres | UTF8 | zh_CN.UTF-8 | zh_CN.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | zh_CN.UTF-8 | zh_CN.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
test_repl | postgres | UTF8 | zh_CN.UTF-8 | zh_CN.UTF-8 |
(4 rows)
# check on the standby
psql
postgres=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
postgres | postgres | UTF8 | zh_CN.UTF-8 | zh_CN.UTF-8 |
template0 | postgres | UTF8 | zh_CN.UTF-8 | zh_CN.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | zh_CN.UTF-8 | zh_CN.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
test_repl | postgres | UTF8 | zh_CN.UTF-8 | zh_CN.UTF-8 |
(4 rows)
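To verify replication end to end, a small test (t_repl_test is just an illustrative table name) is to write on the primary, read on a standby, and confirm that the standby rejects writes:
# on the primary
psql -d test_repl -c "CREATE TABLE t_repl_test(id int);"
psql -d test_repl -c "INSERT INTO t_repl_test VALUES (1);"
# on a standby: the row should be visible, and writes should be refused
psql -d test_repl -c "SELECT * FROM t_repl_test;"
psql -d test_repl -c "INSERT INTO t_repl_test VALUES (2);"   # expected to fail: cannot execute INSERT in a read-only transaction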
Original article: http://www.voycn.com/article/pgpool-ii-postgresql-yizhuliangbeiyuanmabianyihuanjingdajian
https://www.jianshu.com/p/ef183d0a9213
Item | Value | Notes |
---|---|---|
pgpool-II | 4.1.2 | |
Install path | /usr/local/pgpool/ | |
Port | 9999 | pgpool connection port |
Port | 9898 | PCP port |
Port | 9000 | watchdog port |
Port | 9694 | watchdog heartbeat |
Main config file | /usr/local/pgpool/etc/pgpool.conf | |
pgpool startup user | root | can also be run as a non-root user |
Running mode | streaming replication mode | |
Watchdog | on | |
Autostart on boot | disabled | |
# set up the rpm repository
curl -O https://www.pgpool.net/yum/rpms/4.1/redhat/rhel-7-x86_64/pgpool-II-release-4.1-2.noarch.rpm
rpm -ivh pgpool-II-release-4.1-2.noarch.rpm
# install (the matching PostgreSQL version appears in the pgXX part of the package name)
yum -y install pgpool-II-pg12
yum -y install pgpool-II-pg12-debuginfo
yum -y install pgpool-II-pg12-devel
yum -y install pgpool-II-pg12-extensions
# download the rpm packages for offline installation
curl -O https://www.pgpool.net/yum/rpms/4.1/redhat/rhel-7-x86_64/pgpool-II-pg12-4.1.2-1pgdg.rhel7.x86_64.rpm
curl -O https://www.pgpool.net/yum/rpms/4.1/redhat/rhel-7-x86_64/pgpool-II-pg12-debuginfo-4.1.2-1pgdg.rhel7.x86_64.rpm
curl -O https://www.pgpool.net/yum/rpms/4.1/redhat/rhel-7-x86_64/pgpool-II-pg12-devel-4.1.2-1pgdg.rhel7.x86_64.rpm
curl -O https://www.pgpool.net/yum/rpms/4.1/redhat/rhel-7-x86_64/pgpool-II-pg12-extensions-4.1.2-1pgdg.rhel7.x86_64.rpm
# after uploading the files to the server, run the install command
rpm -ivh pgpool*.rpm
Download: https://www.pgpool.net/mediawiki/download.php?f=pgpool-II-4.1.2.tar.gz
Install as root:
tar -zxf pgpool-II-4.1.2.tar.gz
cd pgpool-II-4.1.2
./configure --prefix=/usr/local/pgpool
make
make install
2. Install pgpool_recovery
cd /home/postgres/pgpool-II-4.1.2/src/sql/pgpool-recovery
make && make install
vim /etc/profile
export PGPOOL_HOME=/usr/local/pgpool
export PATH=$PATH:$PGPOOL_HOME/bin
source /etc/profile
#root
ssh-keygen -t rsa
ssh-copy-id [email protected]
ssh-copy-id [email protected]
ssh-copy-id [email protected]
su - postgres
ssh-keygen -t rsa
ssh-copy-id [email protected]
ssh-copy-id [email protected]
ssh-copy-id [email protected]
# as the postgres user
su - postgres
echo "192.168.8.10:5432:replication:repuser:repuser" >> ~/.pgpass
echo "192.168.8.20:5432:replication:repuser:repuser" >> ~/.pgpass
echo "192.168.8.30:5432:replication:repuser:repuser" >> ~/.pgpass
chmod 600 ~/.pgpass
scp /home/postgres/.pgpass [email protected]:/home/postgres/
scp /home/postgres/.pgpass [email protected]:/home/postgres/
# --------------------------------------------------------------
# as the root user
echo 'localhost:9898:pgpool:pgpool' > ~/.pcppass
chmod 600 ~/.pcppass
scp /root/.pcppass [email protected]:/root/
scp /root/.pcppass [email protected]:/root/
Because the pgpool-II configuration runs some privileged system commands as the postgres user, the ordinary user needs to be granted permission:
# as root
chmod u+s /sbin/ifconfig
chmod u+s /sbin/ip
chmod u+s /sbin/ifup
chmod u+s /bin/ping
chmod u+s /sbin/arping
mkdir /usr/local/pgpool/log/ && touch /usr/local/pgpool/log/pgpool.log
chown -R postgres /usr/local/pgpool/
pool_hba.conf configures the authentication policy for client connections. It has to be consistent with PostgreSQL's pg_hba.conf: either both trust or both md5. md5 authentication is used here, configured as follows:
su - postgres
cd /usr/local/pgpool/etc
cp pool_hba.conf.sample pool_hba.conf
# edit as follows (same as the PostgreSQL setting; keep trust/md5 consistent)
vim pool_hba.conf
# add the following entries
host all all 0.0.0.0/0 md5
host replication repuser 0.0.0.0/0 md5
host all pgpool 0.0.0.0/0 md5
This file holds the user names and passwords of the pgpool administrators themselves, used to manage the cluster.
su - postgres
cd /usr/local/pgpool/etc
cp pcp.conf.sample pcp.conf
# create the pgpool user; running this on the primary alone is enough, since replication is already active
postgres=# CREATE ROLE pgpool WITH PASSWORD '123456' LOGIN;
1. Generate the md5 hashes (syntax: pg_md5 <password>; the first password here is that of the postgres database user)
pg_md5 postgres
e8a48653851e28c69d0506508fb27fc5
pg_md5 123456
e10adc3949ba59abbe56e057f20f883e
#echo "pgpool:e10adc3949ba59abbe56e057f20f883e" >> /usr/local/pgpool/etc/pcp.conf
2. Add the accounts and hashes to the configuration file; edit as follows:
vim pcp.conf
# user:md5-hash entries registered with pgpool (the postgres database user and the pgpool PCP user)
postgres:e8a48653851e28c69d0506508fb27fc5
pgpool:e10adc3949ba59abbe56e057f20f883e
3. Before the next step, have the pgpool.conf file ready:
cp pgpool.conf.sample pgpool.conf
4. The database login user is postgres; enter its login password here, and make sure it is correct.
# after you enter the password, a pool_passwd file is generated under /usr/local/pgpool/etc/
pg_md5 -p -m -u postgres pool_passwd
pg_md5 -p -m -u pgpool pool_passwd
[postgres@localhost etc]$ cat pool_passwd
postgres:md53175bce1d3201d16594cebf9d7eb3f9d
pgpool:md5a258db5c0f4f595eb0667066f0f4bb60
This file holds the key parameters of a pgpool-II node. pgpool-II ships sample configuration files for several modes:
failover.sh (database failover script)
pcp.conf (PCP user authentication, trust/md5)
pgpool.conf (main pgpool-II configuration file)
pool_hba.conf (client authentication policy for the cluster)
pool_passwd (database password file)
recovery_1st_stage.sample (sample online recovery script; it goes into the PostgreSQL data directory, /usr/local/postgresql/data for the source install used here, /var/lib/pgsql/12/data for the RPM install)
Prepare the following configuration files and scripts (on all nodes):
cd /usr/local/pgpool/etc
mkdir /usr/local/pgpool/oiddir
cp recovery_1st_stage.sample recovery_1st_stage
cp failover.sh.sample failover.sh
cp follow_master.sh.sample follow_master.sh
cp pgpool_remote_start.sample pgpool_remote_start
vim pgpool.conf
# common basic settings
pid_file_name = '/usr/local/pgpool/etc/pgpool.pid' # pid file location; there is a default if this is not set
logdir = '/usr/local/pgpool/log' # where the pgpool status file is stored
listen_addresses = '*'
port = 9999
pcp_listen_addresses = '*'
pcp_port = 9898
pcp_socket_dir = '/usr/local/pgpool'
# backend database connection settings
backend_hostname0 = '192.168.8.10' # first database node
backend_port0 = 5432
backend_weight0 = 1 # this weight determines the load-balancing ratio
backend_data_directory0 = '/usr/local/postgresql/data'
backend_flag0 = 'ALLOW_TO_FAILOVER'
backend_application_name0 = 'server0'
backend_hostname1 = '192.168.8.20' # second database node
backend_port1 = 5432
backend_weight1 = 1
backend_data_directory1 = '/usr/local/postgresql/data'
backend_flag1 = 'ALLOW_TO_FAILOVER'
backend_application_name1 = 'server1'
backend_hostname2 = '192.168.8.30' # third database node
backend_port2 = 5432
backend_weight2 = 1
backend_data_directory2 = '/usr/local/postgresql/data'
backend_flag2 = 'ALLOW_TO_FAILOVER'
backend_application_name2 = 'server2'
# make pool_hba.conf effective
enable_pool_hba = on
# streaming replication related settings
replication_mode = off # pgpool-II's own replication mode stays off
load_balance_mode = on # enable load balancing
master_slave_mode = on # enable master/slave mode
master_slave_sub_mode = 'stream' # the master/slave sub-mode is streaming replication
sr_check_period = 5 # streaming replication check interval
sr_check_user = 'repuser' # streaming replication check user
sr_check_password = 'repuser' # streaming replication check password
sr_check_database = 'postgres'
delay_threshold = 10000000
Database failover (what to do after a failure)
# database health checks so that Pgpool-II can perform failover, i.e. the primary/standby switch
health_check_period = 10 # health check period, disabled (0) by default
health_check_timeout = 20 # health check timeout, 0 means no timeout
health_check_user = 'postgres' # health check user
health_check_password = 'postgres' # health check user's password
health_check_database = 'postgres' # health check database
# post-failure handling: the policy run when a PostgreSQL instance goes down
# this script lives in the pgpool directory; strictly speaking, pgpool keeps the cluster's database states consistent by running it
failover_command = '/usr/local/pgpool/etc/failover.sh %d %P %H %R'
follow_master_command = '/usr/local/pgpool/etc/follow_master.sh %d %h %p %D %m %M %H %P %r %R'
# With two PostgreSQL servers, do not set follow_master_command. With three PostgreSQL servers, follow_master_command must be set so that it runs after a failover of the primary node; with only two servers it is not needed.
# see the pgpool.conf file for the full parameter list
search_primary_node_timeout = 10
watchdog settings (used to monitor the state of the pgpool-II nodes, as the basis for pgpool failover handling)
use_watchdog = on # enable the watchdog
wd_hostname = '192.168.8.10' # this host (hostname or IP)
wd_port = 9000 # watchdog port
# the virtual IP
delegate_IP = '192.168.8.33'
if_cmd_path = '/sbin' # ignored if if_up_cmd / if_down_cmd start with /
# replace 'eth0' in the commands below with the actual NIC name shown by ip addr on your machine
# command used on this node to bring up the virtual IP
if_up_cmd = '/sbin/ip addr add $_IP_$/24 dev eth0 label eth0:0'
# command used on this node to take down the virtual IP
if_down_cmd = '/sbin/ip addr del $_IP_$/24 dev eth0'
# watchdog health check (heartbeat)
wd_heartbeat_port = 9694 # heartbeat port
wd_heartbeat_keepalive = 2
wd_heartbeat_deadtime = 30
# addresses of the other hosts (add more entries for additional nodes)
heartbeat_destination0 = '192.168.8.20'
heartbeat_destination_port0 = 9694
heartbeat_device0 = 'eth0'
heartbeat_destination1 = '192.168.8.30'
heartbeat_destination_port1 = 9694
heartbeat_device1 = 'eth0'
# connection info of the other pgpool nodes (add more entries for additional nodes)
other_pgpool_hostname0 = '192.168.8.20' # another node's address
other_pgpool_port0 = 9999
other_wd_port0 = 9000 # another node's watchdog port
other_pgpool_hostname1 = '192.168.8.30' # another node's address
other_pgpool_port1 = 9999
other_wd_port1 = 9000
# settings for handling a failure of the watchdog itself (host down, pgpool process terminated)
failover_when_quorum_exists = on
failover_require_consensus = on
allow_multiple_failover_requests_from_node = on
enable_consensus_with_half_votes = on
'''
About failures of the watchdog (pgpool-II) itself, be sure to read the official documentation: CONFIG-WATCHDOG-FAILOVER-BEHAVIOR.
When a pgpool-II node itself fails and these settings are enabled, the other nodes arbitrate: they vote on which standby node becomes the new master and which node takes over the virtual IP. This arbitration has a voting (quorum) mechanism, plus options that can ignore the quorum result.
If this is not configured, shutting down the primary pgpool node may not move the virtual IP, leaving the cluster temporarily unreachable.
Pay attention to the NIC settings; they must match the output of the ip addr command (a quick manual check follows).
'''
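Before relying on if_up_cmd / if_down_cmd, it is worth checking the interface name and testing the commands by hand (as root; ens33 and the /24 prefix are the values used in this environment, adjust them to yours):
ip addr                                               # find the real interface name (ens33 here, not eth0)
ip addr add 192.168.8.33/24 dev ens33 label ens33:0   # manually bring the VIP up
ip addr del 192.168.8.33/24 dev ens33                 # and take it down again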
Online recovery of the old primary (the old primary automatically becomes a standby after it recovers)
# this setting is not effective when there are multiple pgpool-II nodes
recovery_user = 'postgres'
recovery_password = 'postgres'
recovery_1st_stage_command = 'recovery_1st_stage' # this script lives in the PostgreSQL data directory
If several pgpool-II nodes maintain the cluster state together, this setting cannot be used; you have to recover manually, resynchronize the data, and then rejoin the cluster.
log_destination = 'syslog'
syslog_facility = 'LOCAL1'
memqcache_oiddir = '/usr/local/pgpool/oiddir'
psql template1 -c "CREATE EXTENSION pgpool_recovery"
vim failover.sh
Change the PostgreSQL home directory:
PGHOME=/usr/local/postgresql
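For reference, the heart of failover.sh (invoked above as failover.sh %d %P %H %R) is to promote the new primary. This is only a simplified sketch of that logic, not the full sample script shipped with pgpool-II:
#!/bin/bash
# arguments passed by failover_command = '... %d %P %H %R'
FAILED_NODE_ID="$1"        # %d: id of the node that failed
OLD_PRIMARY_NODE_ID="$2"   # %P: id of the old primary
NEW_MASTER_HOST="$3"       # %H: host of the new master candidate
NEW_MASTER_PGDATA="$4"     # %R: data directory of the new master candidate
PGHOME=/usr/local/postgresql

# only promote when the failed node was the primary
if [ "$FAILED_NODE_ID" = "$OLD_PRIMARY_NODE_ID" ]; then
    ssh -T postgres@"$NEW_MASTER_HOST" "$PGHOME/bin/pg_ctl -D $NEW_MASTER_PGDATA -w promote"
fi
exit 0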
vim follow_master.sh
Change the following:
PGHOME=/usr/local/postgresql
ARCHIVEDIR=/usr/local/postgresql/archivedir
REPLUSER=repuser
PCP_USER=pgpool
PGPOOL_PATH=/usr/local/pgpool/bin
PCP_PORT=9898
vim pgpool_remote_start
Change the PostgreSQL home directory:
PGHOME=/usr/local/postgresql
pcp_recovery_node -h vip -p 9898 -U pgpool -n 1   # used when recovering a node; remember to adjust the configuration first
vim recovery_1st_stage
PRIMARY_NODE_HOST=$(hostname)   # the node recovery step below uses this script
PGHOME=/usr/local/postgresql
ARCHIVEDIR=/usr/local/postgresql/archivedir
REPLUSER=repuser
su - postgres
chmod +x /usr/local/pgpool/etc/{failover.sh,follow_master.sh,recovery_1st_stage,pgpool_remote_start}
'''
pool_hba.conf
pcp.conf
pool_passwd
pgpool.conf
failover.sh
follow_master.sh
recovery_1st_stage
pgpool_remote_start
'''
# run on all standby nodes
# the ssh keys have already been exchanged, so scp works without a password
su - postgres
cd /usr/local/pgpool/etc/
scp 192.168.8.10:/usr/local/pgpool/etc/\{pool_hba.conf,pcp.conf,pool_passwd,pgpool.conf,failover.sh,follow_master.sh,recovery_1st_stage,pgpool_remote_start\} .
su - postgres
cd /usr/local/pgpool/etc/
cp recovery_1st_stage.sample recovery_1st_stage
cp recovery_1st_stage /usr/local/postgresql/data/
This step is not needed in the current setup:
cd /usr/local/postgresql/data
vim postgresql.conf
# change this to the VIP
#primary_conninfo = '192.168.8.33 port=5432 user=repuser password=repuser'
# reload
#pg_ctl -D /usr/local/postgresql/data/ reload
# based on the configuration files copied over from the primary node, change the following pgpool.conf parameters:
su - postgres
cd /usr/local/pgpool/etc/
vim pgpool.conf
# change the following items
wd_hostname = '192.168.8.20' # this machine's IP
wd_port = 9000
wd_priority = 2 # priority in the leader election
heartbeat_destination0 = '192.168.8.10' # another PG host (192.168.8.10, the primary)
heartbeat_destination_port0 = 9694
heartbeat_device0 = 'eth0' # this machine's NIC
heartbeat_destination1 = '192.168.8.30' # another PG host (192.168.8.30, a standby)
heartbeat_destination_port1 = 9694
heartbeat_device1 = 'eth0' # this machine's NIC
other_pgpool_hostname0 = '192.168.8.10' # another pgpool node
other_pgpool_port0 = 9999
other_wd_port0 = 9000
other_pgpool_hostname1 = '192.168.8.30' # another pgpool node
other_pgpool_port1 = 9999
other_wd_port1 = 9000
This step is not needed in the current setup:
cd /usr/local/postgresql/data
vim postgresql.conf
# change this to the VIP
#primary_conninfo = '192.168.8.33 port=5432 user=repuser password=repuser'
# reload
#pg_ctl -D /usr/local/postgresql/data/ reload
# based on the configuration files copied over from the primary node, change the following pgpool.conf parameters:
su - postgres
cd /usr/local/pgpool/etc/
vim pgpool.conf
# change the following items
wd_hostname = '192.168.8.30' # this machine's IP
wd_port = 9000
wd_priority = 3 # priority in the leader election
heartbeat_destination0 = '192.168.8.10' # another PG host (192.168.8.10, the primary)
heartbeat_destination_port0 = 9694
heartbeat_device0 = 'eth0' # this machine's NIC
heartbeat_destination1 = '192.168.8.20' # another PG host (192.168.8.20, a standby)
heartbeat_destination_port1 = 9694
heartbeat_device1 = 'eth0' # this machine's NIC
other_pgpool_hostname0 = '192.168.8.10' # another pgpool node
other_pgpool_port0 = 9999
other_wd_port0 = 9000
other_pgpool_hostname1 = '192.168.8.20' # another pgpool node
other_pgpool_port1 = 9999
other_wd_port1 = 9000
chmod u+s /usr/sbin   # note: the setuid bit only matters on the individual executables (see the earlier chmod u+s on /sbin/ip, /sbin/arping); it has no effect on a directory
chown postgres:postgres failover_stream.sh
chmod 777 failover_stream.sh
Startup order:
1. Start the PostgreSQL service first
2. Then start the pgpool service
3. Primary first, then the standbys
Shutdown order:
1. Stop the pgpool service first
2. Then stop the PostgreSQL service
3. Standbys first, then the primary
# start commands
su - postgres
# start command (the log location can be specified on the command line)
pgpool -n -d -D > /usr/local/pgpool/log/pgpool.log 2>&1 & # with debug logging
pgpool -n -D > /usr/local/pgpool/log/pgpool.log 2>&1 & # without debug logging
# stop command
pgpool -m fast stop
chown -R postgres.postgres /usr/local/pgpool
chmod +x /usr/local/pgpool/etc/follow_master.sh
chmod +x /usr/local/pgpool/etc/failover.sh
psql -h vip -p9999 -Upostgres -d postgres
# or
psql -h 192.168.8.10 -p9999 -Upostgres -d postgres
su - postgres
# the VIP is not necessarily on the primary node; it can sit on another node, since access is load balanced
psql -h 192.168.8.33 -p9999 -Upostgres -d postgres
postgres=# show pool_nodes;
 node_id | hostname     | port | status | lb_weight | role    | select_cnt | load_balance_node | replication_delay | replication_state | replication_sync_state | last_status_change
---------+--------------+------+--------+-----------+---------+------------+-------------------+-------------------+-------------------+------------------------+---------------------
 0       | 192.168.8.10 | 5432 | up     | 0.333333  | primary | 1          | false             | 0                 |                   |                        | 2022-01-04 20:51:59
 1       | 192.168.8.20 | 5432 | up     | 0.333333  | standby | 0          | true              | 0                 |                   |                        | 2022-01-04 20:51:59
 2       | 192.168.8.30 | 5432 | up     | 0.333333  | standby | 0          | false             | 0                 |                   |                        | 2022-01-04 20:51:59
(3 rows)
'''
lb_weight reflects the ratio of the backend_weight0 = 1 style settings in pgpool.conf above;
role is the primary/standby role of the PostgreSQL instance;
up means the instance has been taken into cluster management;
at this point you can connect through the virtual IP and run inserts/updates/deletes to test that data is replicated while the cluster is healthy;
'''
[postgres@localhost etc]$ pcp_watchdog_info -h 192.168.8.33 -p 9898 -U pgpool
Password:
3 YES 192.168.8.10:9999 Linux localhost.localdomain 192.168.8.10
192.168.8.10:9999 Linux localhost.localdomain 192.168.8.10 9999 9000 4 MASTER
192.168.8.20:9999 Linux localhost.localdomain 192.168.8.20 9999 9000 7 STANDBY
192.168.8.30:9999 Linux localhost.localdomain 192.168.8.30 9999 9000 7 STANDBY
[postgres@localhost etc]$
# on 192.168.8.10
su - postgres
pg_ctl stop -l /usr/local/postgresql/log/pg_server.log -D /usr/local/postgresql/data/
Checking shows that a standby has been promoted to primary:
su - postgres
psql -h 192.168.8.33 -p9999 -Upostgres -d postgres
postgres=# show pool_nodes;
 node_id | hostname     | port | status | lb_weight | role    | select_cnt | load_balance_node | replication_delay | replication_state | replication_sync_state | last_status_change
---------+--------------+------+--------+-----------+---------+------------+-------------------+-------------------+-------------------+------------------------+---------------------
 0       | 192.168.8.10 | 5432 | down   | 0.333333  | standby | 1          | false             | 0                 |                   |                        | 2022-01-15 22:12:13
 1       | 192.168.8.20 | 5432 | up     | 0.333333  | primary | 0          | true              | 0                 |                   |                        | 2022-01-15 22:12:13
 2       | 192.168.8.30 | 5432 | up     | 0.333333  | standby | 0          | false             | 0                 |                   |                        | 2022-01-15 22:11:34
(3 rows)
Checking pg_stat_replication on the new primary shows no connected standbys:
postgres=# select client_addr,sync_state from pg_stat_replication;
client_addr | sync_state
-------------+------------
(0 rows)
The master's database (primary role) went down and its node was demoted to standby; pgpool ran the database failover script failover.sh, and the former slave is now serving as the new primary.
After the old master is restarted, its data has to be resynchronized before it can rejoin the cluster.
su - postgres
psql -h 192.168.8.33 -p9999 -Upostgres -d postgres
Key steps in the handling:
On a clean shutdown: the virtual IP is released > the node shuts down normally > the other nodes detect the lost connection > arbitration elects a new "master node" > it takes over the virtual IP > service continues normally
On an abnormal shutdown: the other nodes detect the lost connection and that the IP is unreachable > arbitration elects a new "master node" > it takes over the virtual IP > service continues normally
pgpool -m fast stop
Log in to 192.168.8.20 and check whether the VIP has moved:
ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:08:ad:25 brd ff:ff:ff:ff:ff:ff
inet 192.168.8.20/24 brd 192.168.8.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet 192.168.8.33/24 scope global secondary ens33:0
valid_lft forever preferred_lft forever
inet6 fe80::6b8a:a167:b99f:3737/64 scope link noprefixroute
valid_lft forever preferred_lft forever
# the pgpool.log shows the stopped node releasing the virtual IP and another node taking it over
# the PostgreSQL instance itself was not stopped, so node 0 is still the primary
psql -h 192.168.8.33 -p 9999
postgres=# show pool_nodes;
 node_id | hostname     | port | status | lb_weight | role    | select_cnt | load_balance_node | replication_delay | replication_state | replication_sync_state | last_status_change
---------+--------------+------+--------+-----------+---------+------------+-------------------+-------------------+-------------------+------------------------+---------------------
 0       | 192.168.8.10 | 5432 | up     | 0.333333  | primary | 0          | true              | 0                 |                   |                        | 2022-01-06 22:30:34
 1       | 192.168.8.20 | 5432 | up     | 0.333333  | standby | 0          | false             | 0                 |                   |                        | 2022-01-06 22:30:34
 2       | 192.168.8.30 | 5432 | up     | 0.333333  | standby | 0          | false             | 0                 |                   |                        | 2022-01-06 22:30:34
(3 rows)
Check the watchdog:
[postgres@localhost etc]$ pcp_watchdog_info -h 192.168.8.33 -p 9898 -U pgpool
Password:
3 YES 192.168.8.20:9999 Linux localhost.localdomain 192.168.8.20
192.168.8.20:9999 Linux localhost.localdomain 192.168.8.20 9999 9000 4 MASTER
192.168.8.10:9999 Linux localhost.localdomain 192.168.8.10 9999 9000 10 SHUTDOWN
192.168.8.30:9999 Linux localhost.localdomain 192.168.8.30 9999 9000 7 STANDBY
Resynchronizing the data of a recovered node:
# 2. clear the data directory on the current "standby" (the old primary) and recreate it
rm -rf /usr/local/postgresql/data
mkdir /usr/local/postgresql/data
chown postgres:postgres /usr/local/postgresql/data
# 3. run the recovery command
su - postgres
pg_basebackup -h <IP of the new primary> -p 5432 -U repuser -Fp -Xs -Pv -R -D /usr/local/postgresql/data
# 4. start the service (RPM install, as root)
systemctl restart postgresql-12
# with the source install used in this article, start it as postgres instead:
# pg_ctl start -l /usr/local/postgresql/log/pg_server.log -D /usr/local/postgresql/data/
# 5. re-attach the PostgreSQL instance to cluster management (su - postgres); -n is the node id of the instance within the cluster
pcp_attach_node -d -U pgpool -h vip -p 9898 -n 0
or: pcp_recovery_node -h vip -p 9898 -U pgpool -n 0
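After re-attaching, you can confirm the node is back using the standard PCP tools and show pool_nodes (192.168.8.33 is the VIP in this setup):
pcp_node_info -h 192.168.8.33 -p 9898 -U pgpool -n 0    # node 0 as seen by pgpool
psql -h 192.168.8.33 -p 9999 -U postgres -d postgres -c "show pool_nodes;"
The complete pgpool.conf used in this setup is reproduced below for reference.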
# ----------------------------
# pgPool-II configuration file
# ----------------------------
#
# This file consists of lines of the form:
#
# name = value
#
# Whitespace may be used. Comments are introduced with "#" anywhere on a line.
# The complete list of parameter names and allowed values can be found in the
# pgPool-II documentation.
#
# This file is read on server startup and when the server receives a SIGHUP
# signal. If you edit the file on a running system, you have to SIGHUP the
# server for the changes to take effect, or use "pgpool reload". Some
# parameters, which are marked below, require a server shutdown and restart to
# take effect.
#
#------------------------------------------------------------------------------
# CONNECTIONS
#------------------------------------------------------------------------------
# - pgpool Connection Settings -
listen_addresses = '*'
# Host name or IP address to listen on:
# '*' for all, '' for no TCP/IP connections
# (change requires restart)
port = 9999
# Port number
# (change requires restart)
socket_dir = '/tmp'
# Unix domain socket path
# The Debian package defaults to
# /var/run/postgresql
# (change requires restart)
listen_backlog_multiplier = 2
# Set the backlog parameter of listen(2) to
# num_init_children * listen_backlog_multiplier.
# (change requires restart)
serialize_accept = off
# whether to serialize accept() call to avoid thundering herd problem
# (change requires restart)
reserved_connections = 0
# Number of reserved connections.
# Pgpool-II does not accept connections if over
# num_init_chidlren - reserved_connections.
# - pgpool Communication Manager Connection Settings -
pcp_listen_addresses = '*'
# Host name or IP address for pcp process to listen on:
# '*' for all, '' for no TCP/IP connections
# (change requires restart)
pcp_port = 9898
# Port number for pcp
# (change requires restart)
pcp_socket_dir = '/tmp'
# Unix domain socket path for pcp
# The Debian package defaults to
# /var/run/postgresql
# (change requires restart)
# - Backend Connection Settings -
backend_hostname0 = '192.168.8.10'
# Host name or IP address to connect to for backend 0
backend_port0 = 5432
# Port number for backend 0
backend_weight0 = 1
# Weight for backend 0 (only in load balancing mode)
backend_data_directory0 = '/usr/local/postgresql/data'
# Data directory for backend 0
backend_flag0 = 'ALLOW_TO_FAILOVER'
# Controls various backend behavior
# ALLOW_TO_FAILOVER, DISALLOW_TO_FAILOVER
# or ALWAYS_MASTER
backend_application_name0 = 'server0'
# walsender's application_name, used for "show pool_nodes" command
backend_hostname1 = '192.168.8.20'
backend_port1 = 5432
backend_weight1 = 1
backend_data_directory1 = '/usr/local/postgresql/data'
backend_flag1 = 'ALLOW_TO_FAILOVER'
backend_application_name1 = 'server1'
backend_hostname2 = '192.168.8.30'
backend_port2 = 5432
backend_weight2 = 1
backend_data_directory2 = '/usr/local/postgresql/data'
backend_flag2 = 'ALLOW_TO_FAILOVER'
backend_application_name2 = 'server2'
# - Authentication -
enable_pool_hba = on
# Use pool_hba.conf for client authentication
pool_passwd = 'pool_passwd'
# File name of pool_passwd for md5 authentication.
# "" disables pool_passwd.
# (change requires restart)
authentication_timeout = 60
# Delay in seconds to complete client authentication
# 0 means no timeout.
allow_clear_text_frontend_auth = off
# Allow Pgpool-II to use clear text password authentication
# with clients, when pool_passwd does not
# contain the user password
# - SSL Connections -
ssl = off
# Enable SSL support
# (change requires restart)
#ssl_key = './server.key'
# Path to the SSL private key file
# (change requires restart)
#ssl_cert = './server.cert'
# Path to the SSL public certificate file
# (change requires restart)
#ssl_ca_cert = ''
# Path to a single PEM format file
# containing CA root certificate(s)
# (change requires restart)
#ssl_ca_cert_dir = ''
# Directory containing CA root certificate(s)
# (change requires restart)
ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL'
# Allowed SSL ciphers
# (change requires restart)
ssl_prefer_server_ciphers = off
# Use server's SSL cipher preferences,
# rather than the client's
# (change requires restart)
ssl_ecdh_curve = 'prime256v1'
# Name of the curve to use in ECDH key exchange
ssl_dh_params_file = ''
# Name of the file containing Diffie-Hellman parameters used
# for so-called ephemeral DH family of SSL cipher.
#------------------------------------------------------------------------------
# POOLS
#------------------------------------------------------------------------------
# - Concurrent session and pool size -
num_init_children = 32
# Number of concurrent sessions allowed
# (change requires restart)
max_pool = 4
# Number of connection pool caches per connection
# (change requires restart)
# - Life time -
child_life_time = 300
# Pool exits after being idle for this many seconds
child_max_connections = 0
# Pool exits after receiving that many connections
# 0 means no exit
connection_life_time = 0
# Connection to backend closes after being idle for this many seconds
# 0 means no close
client_idle_limit = 0
# Client is disconnected after being idle for that many seconds
# (even inside an explicit transactions!)
# 0 means no disconnection
#------------------------------------------------------------------------------
# LOGS
#------------------------------------------------------------------------------
# - Where to log -
log_destination = 'syslog'
# Where to log
# Valid values are combinations of stderr,
# and syslog. Default to stderr.
# - What to log -
log_line_prefix = '%t: pid %p: ' # printf-style string to output at beginning of each log line.
log_connections = off
# Log connections
log_hostname = off
# Hostname will be shown in ps status
# and in logs if connections are logged
log_statement = off
# Log all statements
log_per_node_statement = off
# Log all statements
# with node and backend informations
log_client_messages = off
# Log any client messages
log_standby_delay = 'none'
# Log standby delay
# Valid values are combinations of always,
# if_over_threshold, none
# - Syslog specific -
syslog_facility = 'LOCAL1'
# Syslog local facility. Default to LOCAL0
syslog_ident = 'pgpool'
# Syslog program identification string
# Default to 'pgpool'
# - Debug -
#log_error_verbosity = default # terse, default, or verbose messages
#client_min_messages = notice # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# log
# notice
# warning
# error
#log_min_messages = warning # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic
#------------------------------------------------------------------------------
# FILE LOCATIONS
#------------------------------------------------------------------------------
pid_file_name = '/usr/local/pgpool/etc/pgpool.pid'
# PID file name
# Can be specified as relative to the
# location of pgpool.conf file or
# as an absolute path
# (change requires restart)
logdir = '/usr/local/pgpool/log'
# Directory of pgPool status file
# (change requires restart)
#------------------------------------------------------------------------------
# CONNECTION POOLING
#------------------------------------------------------------------------------
connection_cache = on
# Activate connection pools
# (change requires restart)
# Semicolon separated list of queries
# to be issued at the end of a session
# The default is for 8.3 and later
reset_query_list = 'ABORT; DISCARD ALL'
# The following one is for 8.2 and before
#reset_query_list = 'ABORT; RESET ALL; SET SESSION AUTHORIZATION DEFAULT'
#------------------------------------------------------------------------------
# REPLICATION MODE
#------------------------------------------------------------------------------
replication_mode = off
# Activate replication mode
# (change requires restart)
replicate_select = off
# Replicate SELECT statements
# when in replication mode
# replicate_select is higher priority than
# load_balance_mode.
insert_lock = on
# Automatically locks a dummy row or a table
# with INSERT statements to keep SERIAL data
# consistency
# Without SERIAL, no lock will be issued
lobj_lock_table = ''
# When rewriting lo_creat command in
# replication mode, specify table name to
# lock
# - Degenerate handling -
replication_stop_on_mismatch = off
# On disagreement with the packet kind
# sent from backend, degenerate the node
# which is most likely "minority"
# If off, just force to exit this session
failover_if_affected_tuples_mismatch = off
# On disagreement with the number of affected
# tuples in UPDATE/DELETE queries, then
# degenerate the node which is most likely
# "minority".
# If off, just abort the transaction to
# keep the consistency
#------------------------------------------------------------------------------
# LOAD BALANCING MODE
#------------------------------------------------------------------------------
load_balance_mode = on
# Activate load balancing mode
# (change requires restart)
ignore_leading_white_space = on
# Ignore leading white spaces of each query
white_function_list = ''
# Comma separated list of function names
# that don't write to database
# Regexp are accepted
black_function_list = 'currval,lastval,nextval,setval'
# Comma separated list of function names
# that write to database
# Regexp are accepted
black_query_pattern_list = ''
# Semicolon separated list of query patterns
# that should be sent to primary node
# Regexp are accepted
# valid for streaming replicaton mode only.
database_redirect_preference_list = ''
# comma separated list of pairs of database and node id.
# example: postgres:primary,mydb[0-4]:1,mydb[5-9]:2'
# valid for streaming replicaton mode only.
app_name_redirect_preference_list = ''
# comma separated list of pairs of app name and node id.
# example: 'psql:primary,myapp[0-4]:1,myapp[5-9]:standby'
# valid for streaming replicaton mode only.
allow_sql_comments = off
# if on, ignore SQL comments when judging if load balance or
# query cache is possible.
# If off, SQL comments effectively prevent the judgment
# (pre 3.4 behavior).
disable_load_balance_on_write = 'transaction'
# Load balance behavior when write query is issued
# in an explicit transaction.
# Note that any query not in an explicit transaction
# is not affected by the parameter.
# 'transaction' (the default): if a write query is issued,
# subsequent read queries will not be load balanced
# until the transaction ends.
# 'trans_transaction': if a write query is issued,
# subsequent read queries in an explicit transaction
# will not be load balanced until the session ends.
# 'always': if a write query is issued, read queries will
# not be load balanced until the session ends.
statement_level_load_balance = off
# Enables statement level load balancing
#------------------------------------------------------------------------------
# MASTER/SLAVE MODE
#------------------------------------------------------------------------------
master_slave_mode = on
# Activate master/slave mode
# (change requires restart)
master_slave_sub_mode = 'stream'
# Master/slave sub mode
# Valid values are combinations stream, slony
# or logical. Default is stream.
# (change requires restart)
# - Streaming -
sr_check_period = 0
# Streaming replication check period
# Disabled (0) by default
sr_check_user = 'repuser'
# Streaming replication check user
# This is necessary even if you disable
# streaming replication delay check with
# sr_check_period = 0
sr_check_password = 'repuser'
# Password for streaming replication check user.
# Leaving it empty will make Pgpool-II to first look for the
# Password in pool_passwd file before using the empty password
sr_check_database = 'postgres'
# Database name for streaming replication check
delay_threshold = 10000000
# Threshold before not dispatching query to standby node
# Unit is in bytes
# Disabled (0) by default
# - Special commands -
follow_master_command = ''
# Executes this command after master failover
# Special values:
# %d = failed node id
# %h = failed node host name
# %p = failed node port number
# %D = failed node database cluster path
# %m = new master node id
# %H = new master node hostname
# %M = old master node id
# %P = old primary node id
# %r = new master port number
# %R = new master database cluster path
# %N = old primary node hostname
# %S = old primary node port number
# %% = '%' character
#------------------------------------------------------------------------------
# HEALTH CHECK GLOBAL PARAMETERS
#------------------------------------------------------------------------------
health_check_period = 10
# Health check period
# Disabled (0) by default
health_check_timeout = 20
# Health check timeout
# 0 means no timeout
health_check_user = 'postgres'
# Health check user
health_check_password = 'postgres'
# Password for health check user
# Leaving it empty will make Pgpool-II to first look for the
# Password in pool_passwd file before using the empty password
health_check_database = 'postgres'
# Database name for health check. If '', tries 'postgres' frist, then 'template1'
health_check_max_retries = 0
# Maximum number of times to retry a failed health check before giving up.
health_check_retry_delay = 1
# Amount of time to wait (in seconds) between retries.
connect_timeout = 10000
# Timeout value in milliseconds before giving up to connect to backend.
# Default is 10000 ms (10 second). Flaky network user may want to increase
# the value. 0 means no timeout.
# Note that this value is not only used for health check,
# but also for ordinary conection to backend.
#------------------------------------------------------------------------------
# HEALTH CHECK PER NODE PARAMETERS (OPTIONAL)
#------------------------------------------------------------------------------
#health_check_period0 = 0
#health_check_timeout0 = 20
#health_check_user0 = 'nobody'
#health_check_password0 = ''
#health_check_database0 = ''
#health_check_max_retries0 = 0
#health_check_retry_delay0 = 1
#connect_timeout0 = 10000
#------------------------------------------------------------------------------
# FAILOVER AND FAILBACK
#------------------------------------------------------------------------------
failover_command = '/usr/local/pgpool/etc/failover.sh %d %P %H %R'
# Executes this command at failover
# Special values:
# %d = failed node id
# %h = failed node host name
# %p = failed node port number
# %D = failed node database cluster path
# %m = new master node id
# %H = new master node hostname
# %M = old master node id
# %P = old primary node id
# %r = new master port number
# %R = new master database cluster path
# %N = old primary node hostname
# %S = old primary node port number
# %% = '%' character
failback_command = '/usr/local/pgpool/etc/follow_master.sh %d %h %p %D %m %M %H %P %r %R'
# Executes this command at failback.
# Special values:
# %d = failed node id
# %h = failed node host name
# %p = failed node port number
# %D = failed node database cluster path
# %m = new master node id
# %H = new master node hostname
# %M = old master node id
# %P = old primary node id
# %r = new master port number
# %R = new master database cluster path
# %N = old primary node hostname
# %S = old primary node port number
# %% = '%' character
failover_on_backend_error = on
# Initiates failover when reading/writing to the
# backend communication socket fails
# If set to off, pgpool will report an
# error and disconnect the session.
detach_false_primary = off
# Detach false primary if on. Only
# valid in streaming replicaton
# mode and with PostgreSQL 9.6 or
# after.
search_primary_node_timeout = 300
# Timeout in seconds to search for the
# primary node when a failover occurs.
# 0 means no timeout, keep searching
# for a primary node forever.
auto_failback = off
# Dettached backend node reattach automatically
# if replication_state is 'streaming'.
auto_failback_interval = 60
# Min interval of executing auto_failback in
# seconds.
#------------------------------------------------------------------------------
# ONLINE RECOVERY
#------------------------------------------------------------------------------
recovery_user = 'postgres'
# Online recovery user
recovery_password = 'postgres'
# Online recovery password
# Leaving it empty will make Pgpool-II to first look for the
# Password in pool_passwd file before using the empty password
recovery_1st_stage_command = 'recovery_1st_stage'
# Executes a command in first stage
recovery_2nd_stage_command = ''
# Executes a command in second stage
recovery_timeout = 90
# Timeout in seconds to wait for the
# recovering node's postmaster to start up
# 0 means no wait
client_idle_limit_in_recovery = 0
# Client is disconnected after being idle
# for that many seconds in the second stage
# of online recovery
# 0 means no disconnection
# -1 means immediate disconnection
#------------------------------------------------------------------------------
# WATCHDOG
#------------------------------------------------------------------------------
# - Enabling -
use_watchdog = on
# Activates watchdog
# (change requires restart)
# -Connection to up stream servers -
trusted_servers = ''
# trusted server list which are used
# to confirm network connection
# (hostA,hostB,hostC,...)
# (change requires restart)
ping_path = '/bin'
# ping command path
# (change requires restart)
# - Watchdog communication Settings -
wd_hostname = '192.168.8.10'
# Host name or IP address of this watchdog
# (change requires restart)
wd_port = 9000
# port number for watchdog service
# (change requires restart)
wd_priority = 1
# priority of this watchdog in leader election
# (change requires restart)
wd_authkey = ''
# Authentication key for watchdog communication
# (change requires restart)
wd_ipc_socket_dir = '/tmp'
# Unix domain socket path for watchdog IPC socket
# The Debian package defaults to
# /var/run/postgresql
# (change requires restart)
# - Virtual IP control Setting -
delegate_IP = '192.168.8.33'
# delegate IP address
# If this is empty, virtual IP never bring up.
# (change requires restart)
if_cmd_path = '/sbin'
# path to the directory where if_up/down_cmd exists
# If if_up/down_cmd starts with "/", if_cmd_path will be ignored.
# (change requires restart)
if_up_cmd = '/sbin/ip addr add $_IP_$/24 dev ens33 label ens33:0'
# startup delegate IP command
# (change requires restart)
if_down_cmd = '/sbin/ip addr del $_IP_$/24 dev ens33'
# shutdown delegate IP command
# (change requires restart)
arping_path = '/usr/sbin'
# arping command path
# If arping_cmd starts with "/", if_cmd_path will be ignored.
# (change requires restart)
arping_cmd = '/usr/sbin/arping -U $_IP_$ -w 1 -I ens33'
# arping command
# (change requires restart)
# - Behaivor on escalation Setting -
clear_memqcache_on_escalation = on
# Clear all the query cache on shared memory
# when standby pgpool escalate to active pgpool
# (= virtual IP holder).
# This should be off if client connects to pgpool
# not using virtual IP.
# (change requires restart)
wd_escalation_command = ''
# Executes this command at escalation on new active pgpool.
# (change requires restart)
wd_de_escalation_command = ''
# Executes this command when master pgpool resigns from being master.
# (change requires restart)
# - Watchdog consensus settings for failover -
failover_when_quorum_exists = on
# Only perform backend node failover
# when the watchdog cluster holds the quorum
# (change requires restart)
failover_require_consensus = on
# Perform failover when majority of Pgpool-II nodes
# aggrees on the backend node status change
# (change requires restart)
allow_multiple_failover_requests_from_node = on
# A Pgpool-II node can cast multiple votes
# for building the consensus on failover
# (change requires restart)
enable_consensus_with_half_votes = on
# apply majority rule for consensus and quorum computation
# at 50% of votes in a cluster with even number of nodes.
# when enabled the existence of quorum and consensus
# on failover is resolved after receiving half of the
# total votes in the cluster, otherwise both these
# decisions require at least one more vote than
# half of the total votes.
# (change requires restart)
# - Lifecheck Setting -
# -- common --
wd_monitoring_interfaces_list = '' # Comma separated list of interfaces names to monitor.
# if any interface from the list is active the watchdog will
# consider the network is fine
# 'any' to enable monitoring on all interfaces except loopback
# '' to disable monitoring
# (change requires restart)
wd_lifecheck_method = 'heartbeat'
# Method of watchdog lifecheck ('heartbeat' or 'query' or 'external')
# (change requires restart)
wd_interval = 10
# lifecheck interval (sec) > 0
# (change requires restart)
# -- heartbeat mode --
wd_heartbeat_port = 9694
# Port number for receiving heartbeat signal
# (change requires restart)
wd_heartbeat_keepalive = 2
# Interval time of sending heartbeat signal (sec)
# (change requires restart)
wd_heartbeat_deadtime = 30
# Deadtime interval for heartbeat signal (sec)
# (change requires restart)
heartbeat_destination0 = '192.168.8.20'
# Host name or IP address of destination 0
# for sending heartbeat signal.
# (change requires restart)
heartbeat_destination_port0 = 9694
# Port number of destination 0 for sending
# heartbeat signal. Usually this is the
# same as wd_heartbeat_port.
# (change requires restart)
heartbeat_device0 = 'ens33'
heartbeat_destination1 = '192.168.8.30'
heartbeat_destination_port1 = 9694
heartbeat_device1 = 'ens33'
# Name of NIC device (such like 'eth0')
# used for sending/receiving heartbeat
# signal to/from destination 0.
# This works only when this is not empty
# and pgpool has root privilege.
# (change requires restart)
#heartbeat_destination1 = 'host0_ip2'
#heartbeat_destination_port1 = 9694
#heartbeat_device1 = ''
# -- query mode --
wd_life_point = 3
# lifecheck retry times
# (change requires restart)
wd_lifecheck_query = 'SELECT 1'
# lifecheck query to pgpool from watchdog
# (change requires restart)
wd_lifecheck_dbname = 'template1'
# Database name connected for lifecheck
# (change requires restart)
wd_lifecheck_user = 'nobody'
# watchdog user monitoring pgpools in lifecheck
# (change requires restart)
wd_lifecheck_password = ''
# Password for watchdog user in lifecheck
# Leaving it empty will make Pgpool-II to first look for the
# Password in pool_passwd file before using the empty password
# (change requires restart)
# - Other pgpool Connection Settings -
other_pgpool_hostname0 = '192.168.8.20'
# Host name or IP address to connect to for other pgpool 0
# (change requires restart)
other_pgpool_port0 = 9999
# Port number for other pgpool 0
# (change requires restart)
other_wd_port0 = 9000
other_pgpool_hostname1 = '192.168.8.30'
other_pgpool_port1 = 9999
other_wd_port1 = 9000
# Port number for other watchdog 0
# (change requires restart)
#other_pgpool_hostname1 = 'host1'
#other_pgpool_port1 = 5432
#other_wd_port1 = 9000
#------------------------------------------------------------------------------
# OTHERS
#------------------------------------------------------------------------------
relcache_expire = 0
# Life time of relation cache in seconds.
# 0 means no cache expiration(the default).
# The relation cache is used for cache the
# query result against PostgreSQL system
# catalog to obtain various information
# including table structures or if it's a
# temporary table or not. The cache is
# maintained in a pgpool child local memory
# and being kept as long as it survives.
# If someone modify the table by using
# ALTER TABLE or some such, the relcache is
# not consistent anymore.
# For this purpose, cache_expiration
# controls the life time of the cache.
relcache_size = 256
# Number of relation cache
# entry. If you see frequently:
# "pool_search_relcache: cache replacement happend"
# in the pgpool log, you might want to increate this number.
check_temp_table = catalog
# Temporary table check method. catalog, trace or none.
# Default is catalog.
check_unlogged_table = on
# If on, enable unlogged table check in SELECT statements.
# This initiates queries against system catalog of primary/master
# thus increases load of master.
# If you are absolutely sure that your system never uses unlogged tables
# and you want to save access to primary/master, you could turn this off.
# Default is on.
enable_shared_relcache = on
# If on, relation cache stored in memory cache,
# the cache is shared among child process.
# Default is on.
# (change requires restart)
relcache_query_target = master # Target node to send relcache queries. Default is master (primary) node.
# If load_balance_node is specified, queries will be sent to load balance node.
#------------------------------------------------------------------------------
# IN MEMORY QUERY MEMORY CACHE
#------------------------------------------------------------------------------
memory_cache_enabled = off
# If on, use the memory cache functionality, off by default
# (change requires restart)
memqcache_method = 'shmem'
# Cache storage method. either 'shmem'(shared memory) or
# 'memcached'. 'shmem' by default
# (change requires restart)
memqcache_memcached_host = 'localhost'
# Memcached host name or IP address. Mandatory if
# memqcache_method = 'memcached'.
# Defaults to localhost.
# (change requires restart)
memqcache_memcached_port = 11211
# Memcached port number. Mondatory if memqcache_method = 'memcached'.
# Defaults to 11211.
# (change requires restart)
memqcache_total_size = 67108864
# Total memory size in bytes for storing memory cache.
# Mandatory if memqcache_method = 'shmem'.
# Defaults to 64MB.
# (change requires restart)
memqcache_max_num_cache = 1000000
# Total number of cache entries. Mandatory
# if memqcache_method = 'shmem'.
# Each cache entry consumes 48 bytes on shared memory.
# Defaults to 1,000,000(45.8MB).
# (change requires restart)
memqcache_expire = 0
# Memory cache entry life time specified in seconds.
# 0 means infinite life time. 0 by default.
# (change requires restart)
memqcache_auto_cache_invalidation = on
# If on, invalidation of query cache is triggered by corresponding
# DDL/DML/DCL(and memqcache_expire). If off, it is only triggered
# by memqcache_expire. on by default.
# (change requires restart)
memqcache_maxcache = 409600
# Maximum SELECT result size in bytes.
# Must be smaller than memqcache_cache_block_size. Defaults to 400KB.
# (change requires restart)
memqcache_cache_block_size = 1048576
# Cache block size in bytes. Mandatory if memqcache_method = 'shmem'.
# Defaults to 1MB.
# (change requires restart)
memqcache_oiddir = '/usr/local/pgpool/oiddir'
# Temporary work directory to record table oids
# (change requires restart)
white_memqcache_table_list = ''
# Comma separated list of table names to memcache
# that don't write to database
# Regexp are accepted
black_memqcache_table_list = ''
# Comma separated list of table names not to memcache
# that don't write to database
# Regexp are accepted