HAProxy can provide read/write load balancing for a PXC (Percona XtraDB Cluster) database cluster: an HAProxy proxy server is set up in front of the cluster, all application reads and writes against the database cluster are sent to this proxy, and HAProxy decides which database server in the cluster actually handles each operation.
For installing the PXC cluster itself, see http://blog.csdn.net/zhanglei_16/article/details/51473538
1: Install HAProxy
rpm -Uvh epel-release-6-8.noarch.rpm
yum -y install haproxy
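A quick way to confirm the package is in place is to ask the binary for its version:
/usr/sbin/haproxy -v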
2: Configure HAProxy
The HAProxy configuration file is /etc/haproxy/haproxy.cfg:
vi /etc/haproxy/haproxy.cfg
global
log 127.0.0.1 local0
log 127.0.0.1 local1 notice
maxconn 4096
chroot /usr/share/haproxy
user haproxy
group haproxy
daemon
defaults
log global
mode http
option tcplog
option dontlognull
retries 3
option redispatch
maxconn 2000
timeout connect 5000
timeout client 5000
timeout server 5000
frontend pxc-front
bind *:3307
mode tcp
default_backend pxc-back
frontend stats-front
bind *:8080
mode http
default_backend stats-back
frontend pxc-onenode-front
bind *:3306
mode tcp
default_backend pxc-onenode-back
backend pxc-back
mode tcp
balance leastconn
option httpchk
server mysql21 192.168.1.21:3306 check port 9200 inter 12000 rise 3 fall 3
server mysql22 192.168.1.22:3306 check port 9200 inter 12000 rise 3 fall 3
server mysql23 192.168.1.23:3306 check port 9200 inter 12000 rise 3 fall 3
backend stats-back
mode http
balance roundrobin
stats uri /haproxy/stats
stats refresh 5s
stats auth pxcstats:secret
backend pxc-onenode-back
mode tcp
balance leastconn
option httpchk
server mysql21 192.168.1.21:3306 check port 9200 inter 12000 rise 3 fall 3
server mysql22 192.168.1.22:3306 check port 9200 inter 12000 rise 3 fall 3 backup
server mysql23 192.168.1.23:3306 check port 9200 inter 12000 rise 3 fall 3 backup
Cluster status page, HTTP port 8080: used to monitor the state of each node in the cluster through a web page.
Single-node write access, TCP port 3306: all writes go to one node and the other nodes are kept in sync by PXC replication. This may introduce some replication delay, but it avoids the problems caused by optimistic locking and rollbacks.
All-node read/write access, TCP port 3307: reads and writes are spread across every node of the cluster. In most cases this gives true database load balancing, but it carries the risk of data errors caused by optimistic locking and rollbacks.
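Before starting HAProxy in the next step, you can ask it to validate the configuration file (the -c flag only checks the syntax and exits):
/usr/sbin/haproxy -c -f /etc/haproxy/haproxy.cfg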
3: Start HAProxy
Before starting HAProxy, adjust the firewall so that ports 8080, 3306 and 3307 are reachable, or simply disable the firewall.
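For example, with the default iptables firewall on CentOS 6 (an assumption; adapt the commands to whatever firewall you actually run):
iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
iptables -I INPUT -p tcp --dport 3306 -j ACCEPT
iptables -I INPUT -p tcp --dport 3307 -j ACCEPT
service iptables save    # persist the rules across reboots
With the ports open, start HAProxy: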
/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg
4: Add the user and privileges for the cluster status check
Log in to any one of the PXC nodes and run:
GRANT PROCESS ON *.* TO 'clustercheckuser'@'localhost' IDENTIFIED BY 'clustercheckpassword!';
flush privileges;
After that, run clustercheck to check the cluster status:
[root@mysql1 ~]# /usr/local/mysql/bin/clustercheck
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close
Content-Length: 40
Percona XtraDB Cluster Node is synced.
5: Install and configure xinetd on every node server of the PXC cluster
yum -y install xinetd
cp /usr/local/mysql/xinetd.d/mysqlchk /etc/xinetd.d/
vi /etc/xinetd.d/mysqlchk
# default: on
# description: mysqlchk
service mysqlchk
{
# this is a config for xinetd, place it in /etc/xinetd.d/
disable = no
flags = REUSE
socket_type = stream
type = UNLISTED
port = 9200
wait = no
user = nobody
server = /usr/local/mysql/bin/clustercheck
log_on_failure += USERID
only_from = 0.0.0.0/0
#
# Passing arguments to clustercheck
#
"
# Recommended: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.local"
# Compatibility: server_args = user pass 1 /var/log/log-file 1 /etc/my.cnf.local"
# 55-to-56 upgrade: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.extra"
#
# recommended to put the IPs that need
# to connect exclusively (security purposes)
per_source = UNLIMITED
}
Then register the mysqlchk service and its port in /etc/services:
vi /etc/services
mysqlchk 9200/tcp #mysqlchk
Finally, restart xinetd:
service xinetd restart
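Once xinetd is running, the health check that HAProxy will use can be tested remotely with a plain HTTP request, for example against the first node from the backend list (this assumes port 9200 is reachable from where you run the command):
curl http://192.168.1.21:9200
# expected output: Percona XtraDB Cluster Node is synced.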
6: Check the PXC cluster status through HAProxy
Open the URL http://192.168.1.31:8080/haproxy/stats in a browser to monitor the cluster nodes.
The login credentials for this URL are defined in /etc/haproxy/haproxy.cfg by the line
"stats auth pxcstats:secret", i.e. user pxcstats and password secret.
7: Use keepalived to remove the HAProxy single point of failure
Master proxy server: 192.168.1.31 (proxy1)
Backup proxy server: 192.168.1.32 (proxy2)
VIP: 192.168.1.30
Following the steps above, install and configure HAProxy on proxy2 as well.
8: Install and configure keepalived on proxy1 and proxy2
yum -y install keepalived
Configure keepalived on proxy1:
vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived
vrrp_script chk_http_port {
script "/etc/keepalived/check_haproxy.sh"
interval 2
weight 2
}
global_defs {
router_id LVS_DEVEL
}
vrrp_instance VI_1 {
state MASTER
interface eth1
virtual_router_id 51
priority 150
advert_int 1
authentication {
auth_type PASS
auth_pass secret
}
track_script {
chk_http_port
}
virtual_ipaddress {
192.168.1.30
}
}
As referenced in the keepalived configuration, we need a script that checks whether the HAProxy service is running:
vi /etc/keepalived/check_haproxy.sh
#!/bin/bash
# count the running haproxy processes
A=`ps -C haproxy --no-header | wc -l`
if [ $A -eq 0 ];then
    # haproxy is not running, try to start it
    /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg
    sleep 3
    if [ `ps -C haproxy --no-header | wc -l` -eq 0 ];then
        # haproxy still failed to start, stop keepalived so the VIP fails over to the backup proxy
        /etc/init.d/keepalived stop
    fi
fi
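keepalived executes this script directly, so make it executable on both proxy servers:
chmod +x /etc/keepalived/check_haproxy.sh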
Start keepalived on proxy1:
/etc/init.d/keepalived start
Confirm that keepalived is working by opening the HAProxy status page through the VIP:
http://192.168.1.30:8080/haproxy/stats
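You can also confirm on proxy1 that the VIP is bound to the interface named in keepalived.conf (eth1 here):
ip addr show eth1
# 192.168.1.30 should be listed among the addresses of eth1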
Configure keepalived on the backup server proxy2:
vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived
vrrp_script chk_http_port {
script "/etc/keepalived/check_haproxy.sh"
interval 2
weight 2
}
global_defs {
router_id LVS_DEVEL
}
vrrp_instance VI_1 {
state BACKUP #changed to BACKUP on the backup server
interface eth1
virtual_router_id 51
priority 120 #must be lower than the master's priority
advert_int 1
authentication {
auth_type PASS
auth_pass secret
}
track_script {
chk_http_port
}
virtual_ipaddress {
192.168.1.30
}
}
Create /etc/keepalived/check_haproxy.sh on proxy2 exactly as on the master server.
Start keepalived on proxy2: /etc/init.d/keepalived start
Verify VIP failover:
Stop keepalived on proxy1 to simulate a server failure: /etc/init.d/keepalived stop
Then open the HAProxy status page through the VIP again:
http://192.168.1.30:8080/haproxy/stats. If the page is still reachable, the VIP has moved to proxy2, which eliminates the HAProxy single point of failure.
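As a final check, look at the interface on proxy2; after the failover it should now hold the VIP (and since VRRP preemption is enabled by default, proxy1 will take the VIP back once its keepalived is started again, thanks to its higher priority of 150):
ip addr show eth1
# run on proxy2: 192.168.1.30 should now appear on eth1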