HAProxy + Keepalived + Mycat + MHA: a Backend with High Availability, Read/Write Splitting, Load Balancing, and Dual-Active Operation

Backend architecture topology

                    +-------------+      +-----------+            +--------------------------+
                    | keepalived  |      |  +-----+  |            | +--------+    +--------+ |
                    |-------------|      |  |mycat|  |       ==>  | |mysql(M)|<==>|mysql(M)| |
                    |  +-------+  |      |  +-----+  |            | +--------+    +--------+ |
                    |  |haproxy|=>| ==>  |           |            | MHA (or another multi-   |
                    |  +-------+  |      |  +-----+  |            | master HA scheme)        |
client --> vip      |    (HA)     |      |  |mycat|  |  vip  ==>  |-~-~-~-~-~-~-~-~-~-~-~-~-~|
                    |             |      |  +-----+  |            | +--------+    +--------+ |
                    |             |      |           |            | |mysql(S)|    |mysql(S)| |
                    |  +-------+  |      |  +-----+  |            | +--------+    +--------+ |
                    |  |haproxy|=>| ==>  |  |mycat|  |       ==>  | +--------+    +--------+ |
                    |  +-------+  |      |  +-----+  |            | |mysql(S)|    |mysql(S)| |
                    +-------------+      +-----------+            | +--------+    +--------+ |
                                                                  |      slave cluster       |
                                                                  +--------------------------+

Project hosts (the environment runs on virtual machines)

Four MySQL servers, VIP 192.168.10.100
192.168.10.1 mysql1 master
192.168.10.2 mysql2 standby master
192.168.10.3 mysql3 slave
192.168.10.4 mysql4 slave
192.168.10.50 mysql50 MHA manager node

Two Mycat read/write-splitting servers
192.168.10.11 mycat11
192.168.10.12 mycat12

Two HAProxy scheduling servers, VIP 192.168.10.200
192.168.10.21 haproxy21
192.168.10.22 haproxy22

client host 192.168.10.254

Create overlay disks and XML files from the backing template to build nine virtual machines

[root@client ~]# cd /var/lib/libvirt/images/
[root@client images]# for i in {1..9}
> do
> qemu-img create -b centos7.qcow2  -f qcow2 node$i.qcow2    // create the overlay disks
> done
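
To confirm each overlay really points at the shared backing image, check one of the new disks; qemu-img info reports the backing file:

[root@client images]# qemu-img info node1.qcow2 | grep backing    // should report: backing file: centos7.qcow2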

[root@client images]# cd /etc/libvirt/qemu/
[root@client qemu]# for i in {1..9}
> do
> virsh dumpxml centos7 > node$i.xml      // dump a per-VM XML template
> done

[root@client qemu]# vim node1.xml      // edit the template parameters

<name>node1</name>    // change the domain name
.....

    <disk type='file' device='disk'>
        <source file='/var/lib/libvirt/images/node1.qcow2'/>     // point the disk at the overlay image
    </disk>

.....
:g/add/d    // delete every line containing "add" (MAC/PCI address entries) so libvirt regenerates them

Define and start the virtual machines, then set their network parameters

[root@client qemu]# for i in {1..9}
> do
> virsh define node$i.xml
> done
Domain node1 defined from node1.xml
....

[root@client qemu]# for i in {1..9}; do virsh start node$i; done
Domain node1 started
....

[root@client qemu]# for i in {1..9}; do virsh console node$i; done     // set each VM's network parameters in turn
Connected to domain node1

[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
   DEVICE="eth0"
   ONBOOT="yes"
   IPV6INIT="no"
   TYPE="Ethernet"
   BOOTPROTO="static"
   IPADDR=192.168.10.1
   PREFIX=24
   GATEWAY=192.168.10.254
   
[root@localhost ~]# systemctl restart network            // restart networking
[root@localhost ~]# hostnamectl set-hostname mysql1     // set the hostname

From the physical host, copy the existing YUM repo files to all hosts and push SSH keys

[root@client qemu]# for i in 1 2 3 4 11 12 21 22
> do
> ssh-copy-id 192.168.10.$i
> done

[root@client qemu]# for i in 1 2 3 4 11 12 21 22
> do
> scp  /etc/yum.repos.d/192.168.10.254_*   192.168.10.$i:/etc/yum.repos.d/
> ssh 192.168.10.$i  yum repolist
> done

Environment preparation complete!

First, deploy the MySQL cluster

[root@client qemu]# for i in {1..4}     // install MySQL and Percona XtraBackup on the four nodes
> do
> ssh 192.168.10.$i yum -y install mysql-community-client.x86_64 mysql-community-server.x86_64
> ssh 192.168.10.$i yum -y install percona-xtrabackup-24.x86_64 perl-DBD-mysql perl-Digest-MD5 rsync
> ssh 192.168.10.$i systemctl restart mysqld
> done

Set up hosts entries and SSH keys on the master and slave nodes (every node needs this)

[root@mysql01 ~]# for i in {1..4}       // add hosts entries
> do
> echo -e "192.168.10.$i\tmysql$i"  >> /etc/hosts
> done

[root@mysql01 ~]# vim /etc/hosts
192.168.10.1   mysql1
192.168.10.2   mysql2
....

[root@mysql01 ~]# ssh-keygen -N '' -f /root/.ssh/id_rsa     // generate a key pair

[root@mysql01 ~]# vim /etc/ssh/ssh_config
StrictHostKeyChecking no    // disable host key checking
....

[root@mysql01 ~]# for i in {1..4}      // distribute the public key
> do
> ssh-copy-id mysql$i
> done

[root@mysql01 ~]# for i in {2..4}      // sync the hosts file and ssh config to the other hosts
> do
> scp /etc/hosts  mysql$i:/etc/
> scp /etc/ssh/ssh_config   mysql$i:/etc/ssh/
> ssh mysql$i  systemctl restart sshd
> done
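
A quick sanity check that key-based login works from this node: the loop below should print mysql1 through mysql4 without a single password prompt.

[root@mysql01 ~]# for i in {1..4}; do ssh mysql$i hostname; done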

Note: the SSH keys must be configured on every MySQL node.

Set the MySQL root password on all hosts (not shown here)

MySQL initialization done; enable binlog on the master

[root@mysql01 ~]# vim /etc/my.cnf
 [mysqld]
 validate_password_policy=0
 validate_password_length=6
 bind-address=0.0.0.0
 server_id=1
 log-bin=mysql-bin
 binlog-format="mixed"
 plugin-load = "rpl_semi_sync_master=semisync_master.so;rpl_semi_sync_slave=semisync_slave.so"     // load the semi-sync plugins (this node acts as both master and slave)
 rpl-semi-sync-master-enabled = 1                 // enable semi-sync replication (master role)
 rpl-semi-sync-slave-enabled = 1                   // enable semi-sync replication (slave role)
 relay_log_purge=0         // do not auto-purge this host's relay logs
 ....
 [root@mysql01 ~]# systemctl restart mysqld
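
Once mysqld is back up, it is worth confirming the semi-sync plugins really loaded; these are standard MySQL variable/status queries:

[root@mysql01 ~]# mysql -uroot -p123456 -e "show variables like 'rpl_semi_sync%enabled';"     // both should report ON
[root@mysql01 ~]# mysql -uroot -p123456 -e "show status like 'Rpl_semi_sync_master_status';"  // ON once a semi-sync slave attaches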

Import prepared data for an online backup test (you can also INSERT rows of your own)

[root@mysql01 ~]# mysql -uroot -p123456 < mydb.sql
mysql> select * from mydb.myscript limit 5;
.....

Create the replication user

mysql> grant replication slave, replication client on *.* to repluser@"192.168.10.%" identified by '123456';
mysql> grant all on *.* to admin@"192.168.10.%" identified by '123456';

Take a full hot backup of the master (innobackupex was installed earlier)

[root@mysql01 ~]# innobackupex  --slave-info --user=root --password=123456 --no-timestamp /backup/    // full backup
[root@mysql01 ~]# tar -zcf mybackup.tar.gz /backup/
[root@mysql01 ~]# for i in {2..4}
> do
> scp /root/mybackup.tar.gz  mysql$i:~
> done

Restore the master's backup on the slaves (one host shown as an example)

[root@mysql02 ~]# tar -xf mybackup.tar.gz  -C /
[root@mysql02 ~]# systemctl stop mysqld
[root@mysql02 ~]# rm -rf /var/lib/mysql/*                            // the datadir must be empty
[root@mysql02 ~]# innobackupex --user=root --password=123456  --apply-log /backup/                  // prepare the backup (apply the logs)
[root@mysql02 ~]# innobackupex --user=root --password=123456  --copy-back /backup/
[root@mysql02 ~]# chown -R mysql:mysql /var/lib/mysql               // files are owned by root after the restore; hand them back to mysql
[root@mysql02 ~]# systemctl restart mysqld

Check the binlog offset

[root@mysql02 ~]# cat /backup/xtrabackup_info
    start_time = 2020-04-04 00:30:45
    end_time = 2020-04-04 00:30:48
    lock_time = 0
    binlog_pos = filename 'mysql-bin.000001', position '23681'      // the offset position we need
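
Rather than eyeballing the file, the filename and position can be pulled out with awk; a one-liner against the binlog_pos line above:

[root@mysql02 ~]# awk -F"'" '/binlog_pos/{print $2, $4}' /backup/xtrabackup_info    // prints: mysql-bin.000001 23681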

Deploy the standby master

[root@mysql02 ~]# vim /etc/my.cnf
 [mysqld]
validate_password_policy=0
validate_password_length=6
bind-address=0.0.0.0
server_id=2
log-bin=mysql-bin
binlog-format="mixed"
plugin-load = "rpl_semi_sync_master=semisync_master.so;rpl_semi_sync_slave=semisync_slave.so"    
rpl-semi-sync-master-enabled = 1
rpl-semi-sync-slave-enabled = 1
relay_log_purge=0
[root@mysql02 ~]# systemctl restart mysqld

Configure the standby master as a slave

mysql> change master to
-> master_host="192.168.10.1",
-> master_user="repluser",
-> master_password="123456",
-> master_log_file="mysql-bin.000001",     // use the binlog offset recorded in /backup/xtrabackup_info
-> master_log_pos=23681;

mysql> start slave;    // start replication

mysql> show slave status\G
       ....
       Slave_IO_Running: Yes           // confirm the IO and SQL threads are both running
       Slave_SQL_Running: Yes
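
On the master, the semi-sync client count confirms the channel really is semi-synchronous:

mysql> show status like 'Rpl_semi_sync_master_clients';    // 1 or more once semi-sync slaves are connected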

Deploy the two slave nodes (the backup restore and slave setup are the same as above, so they are not repeated here)

[root@mysql3 ~]# vim /etc/my.cnf
server_id=3                // slaves do not need binlog; replication defaults to asynchronous. To enable semi-sync, add the lines below (uncommented) and restart the service
relay_log_purge=0
#plugin-load=rpl_semi_sync_slave=semisync_slave.so
#rpl_semi_sync_slave_enabled=1

From the physical host, copy the MHA packages (downloaded from the official site) to every node

[root@client yangyifile]# for i in 1 2 3 4 50
> do
> scp mha-soft-student.zip  192.168.10.$i:~
> done

Install the node package on the MySQL master and slave nodes

[root@mysql1 ~]# yum provides unzip
[root@mysql1 ~]# yum -y install unzip-6.0-16.el7.x86_64
[root@mysql1 ~]# unzip mha-soft-student.zip -d .
[root@mysql1 ~]# cd mha-soft-student
[root@mysql1 mha-soft-student]# yum -y install perl-*.rpm
[root@mysql1 mha-soft-student]# yum -y install mha4mysql-node-0.56-0.el6.noarch.rpm    

Deploy the manager node, 192.168.10.50
Adding hosts entries for all nodes, configuring SSH keys and YUM, and installing the MySQL client (for testing) are omitted here

[root@mysql50 ~]# yum -y install unzip-6.0-16.el7.x86_64
[root@mysql50 ~]# cd mha-soft-student
[root@mysql50 mha-soft-student]# yum -y install perl-*.rpm
[root@mysql50 mha-soft-student]# mkdir /etc/mha_manager
[root@mysql50 mha-soft-student]# cp master_ip_failover /etc/mha_manager
[root@mysql50 mha-soft-student]# tar -xf mha4mysql-manager-0.56.tar.gz
[root@mysql50 mha-soft-student]# cd mha4mysql-manager-0.56
[root@mysql50 mha4mysql-manager-0.56]# yum -y  install perl-DBD-mysql   perl-DBI  perl-ExtUtils-*   perl-CPAN-*  mha4mysql-node-0.56-0.el6.noarch.rpm
[root@mysql50 mha4mysql-manager-0.56]# perl Makefile.PL
     - DBI                   ...loaded. (1.627)
     - DBD::mysql            ...loaded. (4.023)
     - Time::HiRes           ...loaded. (1.9725)
     - Config::Tiny          ...loaded. (2.14)
     - Log::Dispatch         ...loaded. (2.41)
     - Parallel::ForkManager ...loaded. (1.18)
     - MHA::NodeConst        ...loaded. (0.56)
    *** Module::AutoInstall configuration finished.
    
[root@mysql50 mha4mysql-manager-0.56]# make && make install
[root@mysql50 mha4mysql-manager-0.56]# cd samples/conf/
[root@mysql50 conf]# cp app1.cnf  /etc/mha_manager/
[root@mysql50 conf]# cd /etc/mha_manager/
[root@mysql50 mha_manager]# chmod +x master_ip_failover 
[root@mysql50 mha_manager]# vim master_ip_failover
....
my $vip = '192.168.10.100/24';  # Virtual IP                // add the VIP to the failover script
my $key = "1";
my $ssh_start_vip = "/sbin/ifconfig eth0:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig eth0:$key down";
....

Configure the manager node parameters

[root@mysql50 mha_manager]# vim app1.cnf
[server default]
manager_workdir=/etc/mha_manager            // working directory
manager_log=/etc/mha_manager/manager.log       // where the log is written

#master_ip_failover_script=/etc/mha_manager/master_ip_failover        // failover script path (leave this line commented out for the test run)

ssh_user=root            // SSH login user
ssh_port=22           

repl_user=repluser            // replication user, already granted on master and slaves
repl_password=123456

user=admin                     // monitoring user, already granted on master and slaves
password=123456


[server1]
hostname=192.168.10.1    
port=3306

[server2]
hostname=192.168.10.2
port=3306            
candidate_master=1       // allow this host to stand for master

[server3]
hostname=192.168.10.3
port=3306
no_master=1              // never promote to master

[server4]
hostname=192.168.10.4
port=3306
no_master=1              // never promote to master

Test run

[root@mysql50 mha_manager]# masterha_check_ssh --conf=/etc/mha_manager/app1.cnf       // test SSH keys across the cluster
....
Sat Apr  4 10:10:51 2020 - [info] All SSH connection tests passed successfully.
 
[root@mysql50 mha_manager]# masterha_check_repl --conf=/etc/mha_manager/app1.cnf     // test replication health across the cluster
....
MySQL Replication Health is OK.

[root@mysql1 ~]# ifconfig eth0:1 192.168.10.100/24      // configure the VIP on the master

Production run

## Enable the failover script (uncomment master_ip_failover_script in app1.cnf), then start the manager
[root@mysql50 mha_manager]# masterha_manager --conf=/etc/mha_manager/app1.cnf  --remove_dead_master_conf  --ignore_last_failover
Sat Apr  4 11:12:40 2020 - [info] Reading default configuration from /etc/masterha_default.cnf..
Sat Apr  4 11:12:40 2020 - [info] Reading application default configuration from /etc/mha_manager/app1.cnf..
Sat Apr  4 11:12:40 2020 - [info] Reading server configuration from /etc/mha_manager/app1.cnf..

       --remove_dead_master_conf    // remove the dead master's section from the config
       --ignore_last_failover     // ignore the marker file left by the last failover
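
Note that masterha_manager runs in the foreground and exits after completing one failover, so in practice it is usually kept alive in the background; a common pattern, reusing the paths from app1.cnf above:

[root@mysql50 mha_manager]# nohup masterha_manager --conf=/etc/mha_manager/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /etc/mha_manager/manager.log 2>&1 &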

Open another terminal to check status

[root@mysql50 ~]# masterha_check_status --conf=/etc/mha_manager/app1.cnf
app1 (pid:2433) is running(0:PING_OK), master:192.168.10.1         // current master: 192.168.10.1

Test: stop the master and watch the failover

[root@mysql1 ~]# systemctl stop mysqld
[root@mysql2 mha-soft-student]# ifconfig eth0:1          // the VIP has automatically moved to mysql2
 eth0:1: flags=4163  mtu 1500
    inet 192.168.10.100  netmask 255.255.255.0  broadcast 192.168.10.255
    ether 52:54:00:a4:28:db  txqueuelen 1000  (Ethernet)

After mysql1 recovers, take a full backup of mysql2 (the new master), restore it onto mysql1, make mysql1 a slave of mysql2, and update app1.cnf

[root@mysql2 ~]# mkdir /newbackup               // full backup
[root@mysql2 ~]# innobackupex  --slave-info --user=root --password=123456 --no-timestamp /newbackup/
[root@mysql2 ~]# tar -zcf newbackup.tar.gz /newbackup/
[root@mysql2 ~]# scp newbackup.tar.gz  mysql1:~

[root@mysql1 ~]# tar -xf newbackup.tar.gz              // restore the backup
[root@mysql1 ~]# rm -rf /var/lib/mysql/*               // copy-back needs an empty datadir (mysqld is still stopped)
[root@mysql1 ~]# innobackupex --user=root --password=123456 --apply-log newbackup
[root@mysql1 ~]# innobackupex --user=root --password=123456 --copy-back newbackup
[root@mysql1 ~]# chown -R mysql:mysql /var/lib/mysql
[root@mysql1 ~]# systemctl restart mysqld

[root@mysql1 ~]# mysql -uroot -p123456
mysql> change master to
-> master_host="192.168.10.2",
-> master_user="repluser",
-> master_password="123456",
-> master_log_file="mysql-bin.000001",
-> master_log_pos=319;

mysql> start slave;
mysql> show slave status\G    // confirm the IO and SQL threads are running

mysql> show full processlist;       // check the connected slave on mysql2 (run on mysql2)

Deploy the Mycat read/write-splitting servers
192.168.10.11 mycat11
192.168.10.12 mycat12

Here we use the stable Mycat-server-1.6-RELEASE (MaxScale is another option for read/write splitting)

[root@client 03]# for i in 11 12
> do
> scp Mycat-server-1.6-RELEASE-20161028204710-linux.tar.gz  192.168.10.$i:~
> done

Install and set up Mycat on both hosts

[root@mycat11 ~]# tar -xf Mycat-server-1.6-RELEASE-20161028204710-linux.tar.gz 
[root@mycat11 ~]# mv mycat /usr/local/mycat
[root@mycat11 ~]# cd /usr/local/mycat
[root@mycat11 mycat]# find . -type f  -exec chmod 644 {} \;
[root@mycat11 mycat]# find . -type d  -exec chmod 755 {} \;
[root@mycat11 mycat]# chmod -R 755 /usr/local/mycat/bin/
[root@mycat11 mycat]# yum -y install java-1.8.0-openjdk
[root@mycat11 mycat]# cd conf/
[root@mycat11 conf]# vim server.xml
    ....
    <user name="yangyi">                                        // define the virtual user
            <property name="password">123456</property>
            <property name="schemas">yangyi_db01</property>     // virtual schema name
    </user>

    <user name="user">
            <property name="password">user</property>
            <property name="schemas">yangyi_db01</property>
            <property name="readOnly">true</property>           // this account is read-only
    </user>

[root@mycat11 conf]# vim schema.xml

<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">

    <schema name="yangyi_db01" checkSQLschema="false" sqlMaxLimit="100" dataNode="dn1">
    </schema>                                                      // virtual schema maps to a dataNode
    <dataNode name="dn1" dataHost="mysql_cluster" database="mydb" />    // dataNode ties the virtual schema to a dataHost and the real database name
    <dataHost name="mysql_cluster" maxCon="1000" minCon="10" balance="1"
              writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <writeHost host="master" url="192.168.10.100:3306" user="admin" password="123456">    // master address and login user (the VIP)
            <readHost host="slave1" url="192.168.10.1:3306" user="readyy" password="123456" />    // slave addresses and login user (several)
            <readHost host="slave3" url="192.168.10.3:3306" user="readyy" password="123456" />
            <readHost host="slave4" url="192.168.10.4:3306" user="readyy" password="123456" />
        </writeHost>
    </dataHost>
</mycat:schema>

On the master, add a read-only user (the grant replicates automatically to all slaves)

mysql> grant select on *.* to readyy@"192.168.10.%" identified by '123456';
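
Because the grant replicates, logging in to a slave directly with the new account is a quick sanity check:

[root@client ~]# mysql -h192.168.10.3 -ureadyy -p123456 -e "select @@hostname, current_user();"    // should succeed against a slave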

Start the Mycat service

[root@mycat11 ~]# /usr/local/mycat/bin/mycat  restart
Stopping Mycat-server...
....
[root@mycat11 ~]# ss -ntulp | grep 8066  

Log in from the client host to test access

[root@client Desktop]# mysql -h192.168.10.11 -uyangyi -p123456 -P8066

mysql> select @@hostname;     // reads rotate among the slaves automatically (reads are not sent to the master)
+------------+
| @@hostname |
+------------+
| mysql3     |
+------------+

mysql> select @@hostname;
+------------+
| @@hostname |
+------------+
| mysql4     |
+------------+

mysql> select @@hostname;
+------------+
| @@hostname |
+------------+
| mysql1     |
+------------+

Deploy the second Mycat read/write-splitting server for high availability (deploy more as traffic demands)

[root@mycat12 ~]# yum -y install java-1.8.0-openjdk  rsync    // install rsync and OpenJDK (the Java runtime) on the second host
[root@mycat12 conf]# mkdir /usr/local/mycat

[root@mycat11 ~]# yum -y install rsync   // install the sync tool on the first Mycat host too
[root@mycat11 ~]# rsync -aSH --delete /usr/local/mycat/   192.168.10.12:/usr/local/mycat/   // sync Mycat to the second Mycat host

Start the second Mycat server and test access

[root@mycat12 ~]# /usr/local/mycat/bin/mycat restart

[root@client Desktop]# mysql -h192.168.10.12 -uyangyi -p123456 -P8066     // client tests read/write splitting via the second Mycat
mysql> select @@hostname;
+------------+
| @@hostname |
+------------+
| mysql3     |
+------------+
 ....

Deploy the two HAProxy servers (one shown as an example)

[root@haproxy21 ~]# yum -y install haproxy
[root@haproxy21 ~]# cd /etc/haproxy/
[root@haproxy21 haproxy]# cp haproxy.cfg haproxy.cfg.back
[root@haproxy21 haproxy]# vim haproxy.cfg
listen mysql_3306 *:3306                 // listening port
mode        tcp                      // MySQL speaks TCP, not HTTP
option      tcpka                    // keep TCP connections alive (avoids the load of constantly reconnecting to the backends)
balance     leastconn             // least-connections scheduling (queries take varying time, unlike quick web hits; this keeps slow queries from piling up behind one server)
server mycat_01 192.168.10.11:8066 check inter 3000 rise 1 maxconn 1000 fall 3    
server mycat_02 192.168.10.12:8066 check inter 3000 rise 1 maxconn 1000 fall 3
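
Before restarting, HAProxy can validate the file itself; -c is its standard configuration-check flag:

[root@haproxy21 haproxy]# haproxy -c -f /etc/haproxy/haproxy.cfg    // prints "Configuration file is valid" on success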

[root@haproxy21 haproxy]# systemctl restart haproxy
[root@haproxy21 haproxy]# systemctl enable haproxy

Client test of the scheduling (to tell which Mycat answers each request, rename one Mycat's virtual schema to yangyi_db02)

[root@mycat11 ~]# /usr/local/mycat/bin/mycat stop
[root@mycat11 ~]# vim /usr/local/mycat/conf/server.xml
    <user name="yangyi">
            <property name="password">123456</property>
            <property name="schemas">yangyi_db02</property>      // renamed virtual schema
    </user>
    <user name="user">
            <property name="password">user</property>
            <property name="schemas">yangyi_db02</property>      // renamed virtual schema
            <property name="readOnly">true</property>
    </user>

:%s/yangyi_db01/yangyi_db02/g     // or simply substitute the name everywhere


[root@mycat11 ~]# vim /usr/local/mycat/conf/schema.xml
    <schema name="yangyi_db02" checkSQLschema="false" sqlMaxLimit="100" dataNode="dn1">    // renamed virtual schema

[root@mycat11 ~]# /usr/local/mycat/bin/mycat restart   // restart Mycat
[root@mycat11 ~]# ss -ntulp | grep 8066    // confirm the service is listening
tcp    LISTEN     0      100      :::8066                 :::*                   users:(("java",pid=2561,fd=81))


[root@client Desktop]# mysql -uyangyi -p123456 -h 192.168.10.21 -e "show databases;"    // test scheduling through a single HAProxy
+-------------+
| DATABASE    |
+-------------+
| yangyi_db01 |
+-------------+

[root@client Desktop]# mysql -uyangyi -p123456 -h 192.168.10.21 -e "show databases;"
+-------------+
| DATABASE    |
+-------------+
| yangyi_db02 |
+-------------+

Deploy the second HAProxy

[root@haproxy22 ~]# yum -y install haproxy   // install HAProxy on the second host

[root@haproxy21 ~]# scp /etc/haproxy/haproxy.cfg  [email protected]:/etc/haproxy/    // copy the config file to the second host
[root@haproxy22 ~]# systemctl restart haproxy          // the configs are identical, so a restart is all it takes
[root@haproxy22 ~]# systemctl enable haproxy

Test access via the second HAProxy host (not shown; see the first host's test)

Deploy Keepalived on both HAProxy schedulers (for high availability), together with the VIP 192.168.10.200

[root@haproxy21 ~]# yum list *keepalived*
Available Packages
keepalived.x86_64                                     1.3.5-1.el7                                      192.168.10.254_centos7_
[root@haproxy21 ~]# yum -y install keepalived.x86_64
[root@haproxy21 ~]# cd /etc/keepalived/
[root@haproxy21 keepalived]# cp keepalived.conf keepalived.conf.back
[root@haproxy21 keepalived]# vim keepalived.conf
! Configuration File for keepalived
global_defs {
  router_id mycat
}

vrrp_script chk_haproxy {
     script "killall -0 haproxy"      //判断Haproxy是否存活
interval 2
}

vrrp_instance Mycat01 {
    state BACKUP
    interface eth0
    virtual_router_id 150
    priority 200
    advert_int 5
    authentication {
        auth_type PASS
        auth_pass test_mycat01
}
virtual_ipaddress {
    192.168.10.200/24   brd 192.168.10.255   dev eth0 label eth0:1    // bind the VIP with label eth0:1
}
 track_script {
    chk_haproxy weight=0    // weight 0: a failed check faults this instance so the other node takes over the VIP
    }
}
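
The killall -0 trick sends no signal at all; it only reports, through its exit status, whether a process with that name exists, which is exactly what the vrrp_script needs. A quick demonstration:

[root@haproxy21 ~]# killall -0 haproxy; echo $?    // 0 while haproxy is running, non-zero once it stops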

[root@haproxy21 ~]# systemctl restart keepalived
[root@haproxy21 ~]# systemctl enable keepalived
[root@haproxy21 ~]# ifconfig 
eth0:1: flags=4163  mtu 1500
       inet 192.168.10.200  netmask 255.255.255.0  broadcast 192.168.10.255
       ether 52:54:00:40:55:4e  txqueuelen 1000  (Ethernet)

Deploy Keepalived on the second host

[root@haproxy22 ~]# yum -y install keepalived

[root@haproxy21 ~]# scp /etc/keepalived/keepalived.conf 192.168.10.22:/etc/keepalived/keepalived.conf  // copy the config to the 10.22 host
[root@haproxy22 ~]# vim /etc/keepalived/keepalived.conf
....
priority 100    // lower the priority on this host
....
[root@haproxy22 ~]# systemctl restart keepalived
[root@haproxy22 ~]# systemctl enable keepalived

Test VIP failover

[root@haproxy21 ~]# systemctl stop haproxy       // stop HAProxy on the VIP holder and watch the VIP move
[root@haproxy21 ~]# ifconfig eth0:1
eth0:1: flags=4163  mtu 1500    // the VIP is gone
    ether 52:54:00:40:55:4e  txqueuelen 1000  (Ethernet)

[root@haproxy22 ~]# ifconfig eth0:1
eth0:1: flags=4163  mtu 1500
    inet 192.168.10.200  netmask 255.255.255.0  broadcast 192.168.10.255    // the VIP has moved to the second host
    ether 52:54:00:9a:f1:8e  txqueuelen 1000  (Ethernet)

The one shortcoming of this architecture: with a single VIP, client traffic can only flow through one node at a time (active/standby). For a dual-active setup, deploy two VIPs, let each Keepalived preferentially claim its own VIP, and round-robin the two VIPs behind a single DNS name on the front end, as sketched below.
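
A minimal sketch of that DNS side, assuming a hypothetical name db.example.com in a BIND zone: two A records sharing one name make resolvers rotate clients across the two VIPs.

; hypothetical zone snippet - round-robin across both VIPs
db.example.com.    IN    A    192.168.10.200
db.example.com.    IN    A    192.168.10.201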

Changes to the existing setup
Edit the Keepalived config on the first HAProxy host

[root@haproxy21 ~]# vim /etc/keepalived/keepalived.conf
....
vrrp_instance Mycat01 {
state MASTER
interface eth0
track_interface {
    eth0
}
virtual_router_id 150
priority 200            // higher priority
! nopreempt          // "nopreempt" commented out: this node preempts VIP 192.168.10.200
advert_int 2
authentication {
    auth_type PASS
    auth_pass test_mycat01
}
virtual_ipaddress {
    192.168.10.200/24   brd 192.168.10.255   dev eth0 label eth0:1
}
track_script {
   chk_haproxy weight=0   
}
}
vrrp_instance Mycat02 {
state BACKUP
interface eth0
track_interface {
    eth0
}
virtual_router_id 151
priority 100           // lower priority
nopreempt           // do not preempt VIP 192.168.10.201
advert_int 2
authentication {
    auth_type PASS
    auth_pass test_mycat02
}
virtual_ipaddress {
    192.168.10.201/24   brd 192.168.10.255   dev eth0 label eth0:2
}
track_script {
   chk_haproxy weight=0      
}
}

Edit the Keepalived config on the second HAProxy host

[root@haproxy22 ~]# vim /etc/keepalived/keepalived.conf
vrrp_instance Mycat01 {
state BACKUP
interface eth0
track_interface {
    eth0
}
virtual_router_id 150
priority 100            // lower priority
nopreempt             // do not preempt 192.168.10.200/24
advert_int 2
authentication {
    auth_type PASS
    auth_pass test_mycat01
}
virtual_ipaddress {
    192.168.10.200/24   brd 192.168.10.255   dev eth0 label eth0:1
}
track_script {
   chk_haproxy weight=0     
}
}

vrrp_instance Mycat02 {
state MASTER
interface eth0
track_interface {
    eth0
}

virtual_router_id 151
priority 200       // higher priority
! nopreempt      // "nopreempt" commented out: this node preempts 192.168.10.201/24
advert_int 2
authentication {
    auth_type PASS
    auth_pass test_mycat02
}
virtual_ipaddress {
    192.168.10.201/24   brd 192.168.10.255   dev eth0 label eth0:2
}
track_script {
   chk_haproxy weight=0     
}
}

Restart Keepalived and HAProxy and check the VIPs on both hosts

[root@haproxy21 ~]# ifconfig
eth0:1: flags=4163  mtu 1500
    inet 192.168.10.200  netmask 255.255.255.0  broadcast 192.168.10.255
    ether 52:54:00:40:55:4e  txqueuelen 1000  (Ethernet)

[root@haproxy22 ~]# ifconfig
eth0:2: flags=4163  mtu 1500
    inet 192.168.10.201  netmask 255.255.255.0  broadcast 192.168.10.255
    ether 52:54:00:9a:f1:8e  txqueuelen 1000  (Ethernet)

Final test
Stop the HAProxy service on 192.168.10.21 and watch its VIP move to the other HAProxy host

[root@haproxy21 ~]# systemctl  stop haproxy     // stop HAProxy on the first host
[root@haproxy21 ~]# ifconfig eth0:1
eth0:1: flags=4163  mtu 1500     // the VIP is gone
    ether 52:54:00:40:55:4e  txqueuelen 1000  (Ethernet)
    
[root@haproxy22 ~]# ifconfig
eth0:1: flags=4163  mtu 1500    // both VIPs now live here
    inet 192.168.10.200  netmask 255.255.255.0  broadcast 192.168.10.255
    ether 52:54:00:9a:f1:8e  txqueuelen 1000  (Ethernet)
eth0:2: flags=4163  mtu 1500
    inet 192.168.10.201  netmask 255.255.255.0  broadcast 192.168.10.255
    ether 52:54:00:9a:f1:8e  txqueuelen 1000  (Ethernet)

After HAProxy is brought back up on the first host, it automatically reclaims its VIP (not tested further here)

Appendix: Mycat parameter notes for schema.xml

balance sets the load-balancing type; there are currently four values:
balance="0": read/write splitting disabled; all reads go to the currently available writeHost.
balance="1": all readHosts and standby writeHosts take part in SELECT load balancing.
balance="2": all reads are distributed randomly across the writeHosts and readHosts.
balance="3": all reads are distributed randomly among the readHosts under a writeHost; the writeHost carries no read load.

switchType sets the failover mode; it also has four values:
switchType="-1": no automatic switching.
switchType="1": the default; switch automatically.
switchType="2": switch based on MySQL replication status; the heartbeat statement is show slave status.
switchType="3": switching for MySQL Galera Cluster (since 1.4.1); the heartbeat statement is show status like 'wsrep%'.

writeType settings:
writeType="0": all writes go to the currently available writeHost.
writeType="1": all writes are sent randomly to the configured writeHosts.
writeType="2": all writes are distributed randomly across the writeHosts and readHosts.
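
Putting the three knobs together, the opening tag of a dataHost might look like the sketch below (the same values assumed in the schema.xml above; adjust per workload):

<dataHost name="mysql_cluster" maxCon="1000" minCon="10" balance="1" writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">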
