Two virtual machines were created in VirtualBox for this experiment.
Node servers: 2

Server 1
  OS:        CentOS release 6.7 (Final)
  Hostname:  ha1
  Role:      master node
  IP:        192.168.100.151
  Database:  postgresql-9.4.5
  Slony:     slony1-2.2.4

Server 2
  OS:        CentOS release 6.7 (Final)
  Hostname:  ha2
  Role:      slave node
  IP:        192.168.100.152
  Database:  postgresql-9.4.5
  Slony:     slony1-2.2.4
The following configuration must be performed on both nodes.
As root, edit /etc/hosts so that it reads:
127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
::1             localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.151 ha1
192.168.100.152 ha2
As root, edit the firewall configuration file /etc/sysconfig/iptables to open remote access to PostgreSQL's port 5432. The final file contents are:
# Generated by iptables-save v1.4.7 on Tue Dec 29 10:45:17 2015
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [26:2536]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 5432 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
# Completed on Tue Dec 29 10:45:17 2015
Note:
The new rule must be placed above the two rules -A INPUT -j REJECT --reject-with icmp-host-prohibited and -A FORWARD -j REJECT --reject-with icmp-host-prohibited; otherwise it will not take effect.
After making the change, restart the iptables service.
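The same change can also be made from the command line instead of editing the file directly; a sketch using the standard CentOS 6 tooling (the insert position 5 matches the rule order in the file above and may need adjusting for a different ruleset):

```shell
# Insert the 5432 rule above the two REJECT rules (position 5 in the file above)
iptables -I INPUT 5 -p tcp -m state --state NEW -m tcp --dport 5432 -j ACCEPT
# Persist the running ruleset to /etc/sysconfig/iptables, then restart the service
service iptables save
service iptables restart
```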
As root, create the corresponding OS user, the database directory, and the slony directories, and grant ownership:
[root@ha1 ~]# groupadd postgres
[root@ha1 ~]# useradd -g postgres postgres
[root@ha1 ~]# passwd postgres
[root@ha1 ~]# mkdir -p /usr/local/pg945/data
[root@ha1 ~]# mkdir -p /usr/local/slony/log
[root@ha1 ~]# mkdir -p /usr/local/slony/archive
[root@ha1 ~]# chown -R postgres:postgres /usr/local/pg945
[root@ha1 ~]# chown -R postgres:postgres /usr/local/slony
Configure the postgres user's environment variables as follows:
export PGBASE=/usr/local/pg945
export PGDATA=$PGBASE/data
export PGUSER=postgres
export PGPORT=5432
export PATH=$PATH:$HOME/bin:$PGBASE/bin:/usr/local/slony/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$PGBASE/lib
The following steps must be performed on both nodes.
Place the downloaded PostgreSQL source tarball in /home/postgres, then unpack and install it as the postgres user.
The installation prefix is /usr/local/pg945; the detailed steps are omitted here.
After installation, configure the database to allow remote access; the details are omitted here.
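For reference, a sketch of the typical source-install steps (the tarball name follows the version listed in the environment above; the exact steps in the original setup may have differed):

```shell
# As postgres, in /home/postgres
tar zxvf postgresql-9.4.5.tar.gz
cd postgresql-9.4.5
./configure --prefix=/usr/local/pg945
make
make install
# Initialize the data directory created earlier
/usr/local/pg945/bin/initdb -D /usr/local/pg945/data
```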
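Allowing remote access typically means two edits under $PGDATA; a sketch (the subnet below is an assumption based on the addresses used in this setup):

```conf
# postgresql.conf: listen on all interfaces
listen_addresses = '*'

# pg_hba.conf: allow password logins from the lab subnet
host    all    all    192.168.100.0/24    md5
```

Both files are re-read on a restart; pg_hba.conf changes alone can be applied with a reload.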
Log in to the database and create the test objects:
[postgres@ha1 ~]$ psql
psql (9.4.5)
Type "help" for help.
postgres=# create user slony superuser password '123456';
postgres=# create database slony owner slony;
postgres=# \c slony slony
slony=# create schema slony authorization slony;
slony=# create table tb1 (id int primary key, name varchar);
Slony must be compiled and installed on both nodes.
Place the downloaded Slony source tarball in /home/postgres, then unpack and install it as the postgres user. The configure invocation is:
[postgres@ha1 ~]$ ./configure --prefix=/usr/local/slony --with-perltools --with-pgconfigdir=/usr/local/pg945/bin
The detailed steps are omitted here.
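After configure, the remaining steps are the standard make sequence; a sketch (the source directory name follows the version listed above):

```shell
cd /home/postgres/slony1-2.2.4
./configure --prefix=/usr/local/slony --with-perltools \
    --with-pgconfigdir=/usr/local/pg945/bin
make
make install
```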
The configuration file must be created on both nodes.
Create the configuration file slon.conf under /usr/local/slony/etc. The two most basic and essential parameters are:
cluster_name='slony'
conn_info='host=localhost port=5432 user=slony'
For the complete file contents, see the attachment.
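With slon.conf in place, the slon daemon must be running on each node for replication to proceed (the attached init script wraps this). The basic invocation is slon [options] clustername conninfo; a minimal sketch, reusing the localhost conninfo from slon.conf above:

```shell
# Run as postgres on each node; the cluster name and conninfo duplicate
# what slon.conf already contains, so "slon -f ..." alone would also work
slon -f /usr/local/slony/etc/slon.conf slony "host=localhost port=5432 user=slony" \
    >> /usr/local/slony/log/slony-`date +%Y-%m-%d`.log 2>&1 &
```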
The following walks through the steps in the order in which resources are added to Slony; the complete final script can be found in the attachment.
The script only needs to be executed on the master node.
The cluster can be thought of as a database that holds the configuration the cluster needs at runtime, such as nodes, listens, and database objects.
Combining the manual with repeated testing, we can conclude:
a) A Slony setup can have only one cluster, defined by cluster name.
b) init cluster requires an admin conninfo to be specified first.
c) The node defined by that admin conninfo becomes the master node.
d) Only the master node's conninfo needs to be defined; the slave node's can be omitted.
The slonik definition is:

cluster_name=slony
master_conninfo="host=ha1 dbname=slony user=slony"
slave_conninfo="host=ha2 dbname=slony user=slony"
slonik <<EOF
cluster name = $cluster_name;
node 1 admin conninfo = '$master_conninfo';
init cluster (id = 1, comment = 'Master Node');
EOF
The store node command defines a node and saves it into the cluster configuration (the _$cluster_name.sl_node table).
During this process the _$cluster_name schema is created on the new node, which is why, as noted above, init cluster only needs the master node's conninfo and not the slave's.
Since log shipping was added in version 1.2.11, the spoolnode parameter no longer needs to be configured.
The slonik script is:

slonik <<EOF
cluster name = $cluster_name;
node 1 admin conninfo = '$master_conninfo';
node 2 admin conninfo = '$slave_conninfo';
store node (id = 2,comment = 'slave node',event node = 1);
EOF
Repeated testing shows:
a) store node requires an event node to be defined.
b) The event node must be an already-existing node.
c) The admin conninfo must be defined before store node.
d) Since the event node specified here is 1, node 1's admin conninfo must also be specified.
Communication paths are stored in _$cluster_name.sl_path. Completing a node definition does not by itself store that node's connection information in the cluster; a separate store path command is required.
The store path command writes each node's access information into the cluster. As with store node, the relevant nodes' admin conninfo must be declared before the store path commands.
The slonik definition is:

slonik <<EOF
cluster name = $cluster_name;
node 1 admin conninfo = '$master_conninfo';
node 2 admin conninfo = '$slave_conninfo';
store path (server = 1,client = 2,conninfo = '$master_conninfo');
store path (server = 2,client = 1,conninfo = '$slave_conninfo');
EOF
Listens are defined with the store listen command. If the corresponding listen information has already been stored for the paths in sl_path, a wait for event occurs.
Furthermore, store listen operations must be executed one at a time; several cannot be executed together.
slonik <<EOF
cluster name = $cluster_name;
node 1 admin conninfo = '$master_conninfo';
node 2 admin conninfo = '$slave_conninfo';
store listen (origin = 1, provider = 1, receiver = 2);
EOF
slonik <<EOF
cluster name = $cluster_name;
node 1 admin conninfo = '$master_conninfo';
node 2 admin conninfo = '$slave_conninfo';
store listen (origin = 2, provider = 2, receiver = 1);
EOF
A set is the smallest unit in Slony; it holds the database objects to be synchronized.
Set (origin, the sending side) information is stored in the _$cluster_name.sl_set system table.
The slonik configuration is:
slonik <<EOF
cluster name = $cluster_name;
node 1 admin conninfo = '$master_conninfo';
node 2 admin conninfo = '$slave_conninfo';
create set (id = 1,origin = 1,comment = '1to2');
EOF
After the origin set is created, the tables participating in replication must be added to it.
The slonik configuration is:
slonik <<EOF
cluster name = $cluster_name;
node 1 admin conninfo = 'host=$master_host dbname=$db_name user=$user_name';
node 2 admin conninfo = 'host=$slave_host dbname=$db_name user=$user_name';
set add table (set id = 1,origin = 1,fully qualified name='slony.tb1',comment='1to2');
EOF
Subscription (receiver side) information is stored in the _$cluster_name.sl_subscribe system table.
The slonik configuration is:
slonik <<EOF
cluster name = $cluster_name;
node 1 admin conninfo = '$master_conninfo';
node 2 admin conninfo = '$slave_conninfo';
subscribe set (id = 1,provider = 1,receiver = 2,forward = no,omit copy = no);
EOF
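Once the subscription is in place, replication can be spot-checked by writing on the master and reading on the slave; a sketch (assumes the slony user can authenticate from the shell, e.g. via .pgpass):

```shell
# On ha1: insert a row into the replicated table
psql -h ha1 -U slony -d slony -c "insert into slony.tb1 values (1, 'test');"
# On ha2: after a moment, the same row should be visible
psql -h ha2 -U slony -d slony -c "select * from slony.tb1;"
```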
Attachment 1: PostgreSQL service script

#! /bin/sh
# chkconfig: 2345 64 36
# This is an example of a start/stop script
# if you have chkconfig, simply:
# chkconfig --add postgresql

prefix=/usr/local/pg945
PGDATA="/usr/local/pg945/data"
PGUSER=postgres
PGLOG="$PGDATA/pg_log/postgres-`date +%Y-%m-%d`.log"
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
DAEMON="$prefix/bin/postmaster"
PGCTL="$prefix/bin/pg_ctl"

set -e

test -x $DAEMON || {
    echo "$DAEMON not found"
    if [ "$1" = "stop" ]
    then exit 0
    else exit 5
    fi
}

case $1 in
  start)
    echo -n "Starting PostgreSQL: "
    test x"$OOM_SCORE_ADJ" != x && echo "$OOM_SCORE_ADJ" > /proc/self/oom_score_adj
    test x"$OOM_ADJ" != x && echo "$OOM_ADJ" > /proc/self/oom_adj
    su - $PGUSER -c "$DAEMON -D '$PGDATA' &" >>$PGLOG 2>&1
    echo "ok"
    ;;
  stop)
    echo -n "Stopping PostgreSQL: "
    su - $PGUSER -c "$PGCTL stop -D '$PGDATA' -s -m fast"
    echo "ok"
    ;;
  restart)
    echo -n "Restarting PostgreSQL: "
    su - $PGUSER -c "$PGCTL stop -D '$PGDATA' -s -m fast -w"
    test x"$OOM_SCORE_ADJ" != x && echo "$OOM_SCORE_ADJ" > /proc/self/oom_score_adj
    test x"$OOM_ADJ" != x && echo "$OOM_ADJ" > /proc/self/oom_adj
    su - $PGUSER -c "$DAEMON -D '$PGDATA' &" >>$PGLOG 2>&1
    echo "ok"
    ;;
  reload)
    echo -n "Reload PostgreSQL: "
    su - $PGUSER -c "$PGCTL reload -D '$PGDATA' -s"
    echo "ok"
    ;;
  status)
    su - $PGUSER -c "$PGCTL status -D '$PGDATA'"
    ;;
  *)
    # Print help
    echo "Usage: $0 {start|stop|restart|reload|status}" 1>&2
    exit 1
    ;;
esac

exit 0
Attachment 2: slon service script

#!/bin/sh
# chkconfig: - 98 02
# description: Starts and stops the Slon daemon that handles Slony-I replication.

if [ -r /etc/sysconfig/slony1 ]; then
    . /etc/sysconfig/slony1
fi

# Source function library.
INITD=/etc/rc.d/init.d
. $INITD/functions

# Get function listing for cross-distribution logic.
TYPESET=`typeset -f|grep "declare"`

# Get config.
. /etc/sysconfig/network

# For SELinux we need to use 'runuser' not 'su'
if [ -x /sbin/runuser ]
then
    SU=runuser
else
    SU=su
fi

# Check that networking is up.
# We need it for slon
[ "${NETWORKING}" = "no" ] && exit 0

# Find the name of the script
NAME=`basename $0`
if [ ${NAME:0:1} = "S" -o ${NAME:0:1} = "K" ]
then
    NAME=${NAME:3}
fi

# Set defaults for configuration variables
SLONENGINE=/usr/local/slony/bin
SLONDAEMON=$SLONENGINE/slon
SLONCONF=/usr/local/slony/etc/slon.conf
SLONPID=/usr/local/slony/slon.pid
SLONLOG=/usr/local/slony/log/slony-`date +%Y-%m-%d`.log

test -x $SLONDAEMON || exit 5

script_result=0
cluster_name=slony
conn_info="\"host=localhost port=5432 user=slony\""

start(){
    SLON_START=$"Starting ${NAME} service: "
    echo -n "$SLON_START"
    $SU -l postgres -c "$SLONDAEMON $cluster_name $conn_info -f $SLONCONF &" >> "$SLONLOG" 2>&1 < /dev/null
    #$SU -l postgres -c "$SLONDAEMON -f $SLONCONF &" >> "$SLONLOG" 2>&1 < /dev/null
    sleep 2
    pid=`pidof -s "$SLONDAEMON"`
    if [ $pid ]
    then
        success "$SLON_START"
        touch /usr/local/slony/${NAME}
        echo
    else
        failure "$SLON_START"
        echo
        script_result=1
    fi
}

stop(){
    echo -n $"Stopping ${NAME} service: "
    if [ $UID -ne 0 ]; then
        RETVAL=1
        failure
    else
        killproc /usr/local/slony/bin/slon
        RETVAL=$?
        [ $RETVAL -eq 0 ] && rm -f /usr/local/slony/${NAME}
    fi;
    echo
    return $RETVAL
}

restart(){
    stop
    start
}

condrestart(){
    [ -e /usr/local/slony/${NAME} ] && restart
}

condstop(){
    [ -e /usr/local/slony/${NAME} ] && stop
}

# See how we were called.
case "$1" in
  start)
    start
    ;;
  stop)
    stop
    ;;
  status)
    status slon
    script_result=$?
    ;;
  restart)
    restart
    ;;
  condrestart)
    condrestart
    ;;
  condstop)
    condstop
    ;;
  *)
    echo $"Usage: $0 {start|stop|status|restart|condrestart|condstop}"
    exit 1
esac
exit $script_result
Attachment 3: complete slonik configuration script

#!/bin/sh
export PGBASE=/usr/local/pg945
export PGDATA=$PGBASE/data
export PGUSER=postgres
export PGPORT=5432
export PATH=$PATH:$PGBASE/bin:$HOME/bin:/usr/local/slony/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$PGBASE/lib
export SLON_CONF=/usr/local/slony/etc/slon.conf

cluster_name=slony
master_conninfo="host=ha1 dbname=slony user=slony"
slave_conninfo="host=ha2 dbname=slony user=slony"
cluster_log=/usr/local/slony/log

function cluster {
slonik <<EOF
cluster name = $cluster_name;
node 1 admin conninfo = '$master_conninfo';
init cluster (id = 1,comment = 'master');
EOF
}

function node {
slonik <<EOF
cluster name = $cluster_name;
node 1 admin conninfo = '$master_conninfo';
node 2 admin conninfo = '$slave_conninfo';
store node (id = 2,comment = 'slave',event node = 1);
EOF
}

function path {
slonik <<EOF
cluster name = $cluster_name;
node 1 admin conninfo = '$master_conninfo';
node 2 admin conninfo = '$slave_conninfo';
store path (server = 1,client = 2,conninfo = '$master_conninfo');
store path (server = 2,client = 1,conninfo = '$slave_conninfo');
EOF
}

function listen {
slonik <<EOF
cluster name = $cluster_name;
node 1 admin conninfo = '$master_conninfo';
node 2 admin conninfo = '$slave_conninfo';
store listen (origin = 1, provider = 1, receiver = 2);
EOF
slonik <<EOF
cluster name = $cluster_name;
node 1 admin conninfo = '$master_conninfo';
node 2 admin conninfo = '$slave_conninfo';
store listen (origin = 2, provider = 2, receiver = 1);
EOF
}

function set_origin {
slonik <<EOF
cluster name = $cluster_name;
node 1 admin conninfo = '$master_conninfo';
node 2 admin conninfo = '$slave_conninfo';
create set (id = 1,origin = 1,comment = '1to2');
EOF
}

function table {
slonik <<EOF
cluster name = $cluster_name;
node 1 admin conninfo = '$master_conninfo';
node 2 admin conninfo = '$slave_conninfo';
set add table (set id = 1,origin = 1,fully qualified name='slony.tb1',comment='ha1toha2');
EOF
}

function set_receive {
slonik <<EOF
cluster name = $cluster_name;
node 1 admin conninfo = '$master_conninfo';
node 2 admin conninfo = '$slave_conninfo';
subscribe set (id = 1,provider = 1,receiver = 2,forward = no,omit copy = no);
EOF
}

cluster
node
path
listen
set_origin
table
set_receive