MySQL: MHA Principles and VIP Configuration

Table of Contents

  • MHA Environment Preparation
  • How MHA Works
    • Components of MHA
    • MHA Software Components
  • MHA Installation and Status Checks
    • Installing the Perl Modules and the node/manager Packages
    • MHA Configuration File
    • Status Checks
  • MHA Failover Handling
  • Application-Transparent VIP with MHA
    • Verifying the VIP

MHA Environment Preparation

Prepare three nodes with MySQL installed. The symlink paths below depend on where your MySQL binaries are installed; once the links are created, the commands can be run directly.

Master:

hdfeng01	node

Slaves:

hdfeng02	node
hdfeng03	node

Manager (the MHA management tool):

hdfeng01

Create symlinks for the MySQL commands

ln -s /opt/mysql-basedir/mysql/bin/mysqlbinlog	/usr/bin/mysqlbinlog
ln -s /opt/mysql-basedir/mysql/bin/mysql		/usr/bin/mysql
MHA invokes commands by absolute path and does not rely on the shell environment's PATH, so the corresponding commands must be linked into /usr/bin.
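
A quick sanity check that the links point at working binaries (the basedir above is just an example; adjust it to your own installation):

[root@hdfeng01 ~]# ls -l /usr/bin/mysql /usr/bin/mysqlbinlog
[root@hdfeng01 ~]# mysql --version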

Configure passwordless SSH login between the nodes

[root@hdfeng01 .ssh]# ssh-keygen
[root@hdfeng02 .ssh]# ssh-keygen
[root@hdfeng03 .ssh]# ssh-keygen
Each node generates its own key pair.
[root@hdfeng01 .ssh]# ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.xxx.xxx (192.168.xxx.xxx)' can't be established.
ECDSA key fingerprint is SHA256:0+wmcGkA4F3MPhcqutTOU85C1i9xrsPtNRJbjFjCYIo.
ECDSA key fingerprint is MD5:88:31:74:ca:ce:c3:94:dd:d4:29:34:72:97:5f:08:e4.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
[email protected]'s password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '[email protected]'"
and check to make sure that only the key(s) you wanted were added.

Each machine then sends its public key to the other two; output like the above means the key was added successfully.
Verify:
[root@hdfeng01 ~]# ssh 192.168.xxx.xxx date
Tue Mar 17 14:40:39 CST 2020
[root@hdfeng02 ~]# ssh 192.168.xxx.xxx date
Tue Mar 17 14:40:50 CST 2020
[root@hdfeng03 ~]# ssh 192.168.xxx.xxx date
Tue Mar 17 14:40:50 CST 2020
Each node runs this test against the other two; if the date is printed, passwordless SSH is working.

At this point the environment preparation is complete.

How MHA Works

MHA is mainly used for high availability. The one-master, two-slave topology exists so that the master can fail over: when the master fails, a slave takes over, the switchover completes within roughly 30 seconds, and data integrity and consistency are preserved during the switch. That makes it genuinely highly available; it is not as powerful as an active-active multi-master architecture, but it also consumes far fewer resources.

Components of MHA

MHA consists of the following two parts:
I. MHA Manager (management)

The manager can be deployed on a dedicated node or on a node with relatively light load, and one manager can manage multiple master-slave clusters.

Handling a master failure:

1. Monitoring: the manager monitors the master's operating system, network, SSH connectivity, and the replication status of master and slaves.
2. If the master goes down, the manager elects a new master as follows:
(1) If the slaves differ from the master, the slave whose GTID or binlog position is closest to the master's becomes the new master.
(2) If the slaves are consistent with the master, the new master is chosen in the order the slaves appear in the configuration file.
(3) If candidate_master=1 is set for a slave, that slave is forcibly designated as the candidate master.
Implicit conditions for rule (3):
1) If a slave's relay log is more than 100 MB behind the master, it loses its candidacy by default.
2) Setting check_repl_delay=0 keeps the slave as a candidate no matter how far behind it is.

3. When the master fails, data compensation happens in one of two ways:
(1) If the failed master can still be reached over SSH, its binlog position/GTID is compared with the slaves', and the missing binlog events are saved and applied to them. (save_binary_logs)
(2) If SSH is not possible, the relay log differences between the slaves are compared and used for data compensation. (apply_diff_relay_logs)

4. Failover

(1) The master role is switched over and service is restored.
(2) Once a slave has been promoted, the remaining slaves re-establish replication against the new master (change master to).

5. Application transparency (the VIP feature)

From the application layer, users do not notice that the database behind it has been switched; at most there is a brief impact during the switchover itself.
Also note that MHA performs only one failover and then exits, which is a drawback: the failure has to be handled promptly.

6. Failover notification (send_report)

Notifies users that the master failed and a failover was performed.

7. Secondary data compensation (binlog_server)

Besides keeping the slaves in sync, a separate database host that serves no traffic is set up purely to copy the master's binlog in real time.
Even if the master fails completely, its binlog can then be recovered from this host. (A sketch of one way to run such a binlog server follows below.)
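
MHA does not ship a ready-made binlog server daemon; a common way to build one is to run mysqlbinlog on a spare host in remote, raw, never-stop mode so that it keeps a live copy of the master's binlog. A minimal sketch, assuming MySQL 5.6+, the repl replication account used later in this article, and an example starting binlog file name (hdfeng03 stands in for the spare host here):

[root@hdfeng03 ~]# mkdir -p /data/binlog_server && cd /data/binlog_server
[root@hdfeng03 binlog_server]# mysqlbinlog --read-from-remote-server --raw --stop-never --host=192.168.28.78 --port=3306 --user=repl --password=123 mysql-bin.000001 &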

II. MHA Node (data)

1. Runs on every MySQL node; the manager determines which of these nodes is the master.
2. When the master fails, the slave with the highest priority is promoted to master.
3. The other slaves are repointed to the new master.
4. The switchover is transparent to the application layer and does not affect it.

Normally, when the master fails, MHA tries to fetch the binlog from the master server. If the master node has a hardware failure, is unreachable over SSH, or GTID-based replication is in use, MHA no longer tries to copy the master's binlog to the new master. Instead, as long as at least one node (for example a binlog server) has received the binlog events, MHA can apply them to the other slaves and keep the data consistent.

MHA Software Components

Manager tools:

masterha_check_ssh:			check SSH connectivity for MHA
masterha_check_repl:		check MHA replication status
masterha_manager:			start MHA
masterha_check_status:		check whether MHA is running
masterha_master_monitor:	check whether the master is down
masterha_master_switch:		control failover (manual or automatic)
masterha_conf_host:			add/remove configured server entries
masterha_stop:				stop MHA
The following scripts are normally invoked by MHA itself rather than run by hand:
save_binary_logs:			save and copy the master's binlog
apply_diff_relay_logs:		identify differential relay log events and apply them to the other slaves
purge_relay_logs:			purge relay logs
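
That said, purge_relay_logs is usually also scheduled from cron on every slave, because MHA setups typically keep relay logs around (relay_log_purge=0) so they remain available for recovery. A minimal sketch of such a cron entry, assuming the mha account created later in this article and /tmp as a scratch directory:

# crontab entry on a slave (hdfeng02 as an example): purge relay logs every 4 hours
0 */4 * * * /usr/bin/purge_relay_logs --user=mha --password=123 --host=192.168.28.162 --port=3306 --disable_relay_log_purge --workdir=/tmp >> /var/log/purge_relay_logs.log 2>&1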

MHA Installation and Status Checks

Installing the Perl Modules and the node/manager Packages

Download the manager and node packages:

Project page: https://code.google.com/archive/p/mysql-master-ha/
GitHub downloads: https://github.com/yoshinorim/mha4mysql-manager/wiki/Downloads

Install the Perl modules:

On all nodes:
yum install perl-DBD-MySQL -y
On the manager node:
yum install -y perl-Config-Tiny epel-release perl-Log-Dispatch perl-Parallel-ForkManager perl-Time-HiRes

Install the packages:

Manager node:
[root@hdfeng01 opt]# rpm -ivh mha4mysql-node-0.56-0.el6.noarch.rpm
[root@hdfeng01 opt]# rpm -ivh mha4mysql-manager-0.56-0.el6.noarch.rpm
On every node:
[root@hdfeng01 opt]# rpm -ivh mha4mysql-node-0.56-0.el6.noarch.rpm

Create the MHA management user on the master:

db01 [(none)]>grant all privileges on *.* to mha@'192.168.%.%' identified by '123';
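
The configuration file below also references a replication account (repl_user=repl / repl_password=123). If it does not already exist from the original replication setup, it can be created on the master with a grant along these lines (the host pattern mirrors the mha account above and is an assumption):

db01 [(none)]>grant replication slave on *.* to repl@'192.168.%.%' identified by '123';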

MHA Configuration File

Create the configuration file directory

[root@hdfeng01 opt]# mkdir -p /etc/mha

Create the log directory

[root@hdfeng01 opt]# mkdir -p /var/log/mha/hdfeng1

Edit the configuration file

[root@hdfeng01 opt]# vim /etc/mha/hdfeng1.cnf
[server default]
manager_log=/var/log/mha/hdfeng1/manager
master_binlog_dir=/opt/mysql-data/mysql
manager_workdir=/var/log/mha/hdfeng1
user=mha
password=123
ping_interval=2
repl_password=123
repl_user=repl
ssh_user=root
[server1]
hostname=192.168.28.78
port=3306
[server2]
hostname=192.168.28.162
port=3306
[server3]
hostname=192.168.28.163
port=3306

Parameter notes:

ping_interval=2
How often (in seconds) the manager pings the master to monitor it. With the value above the master is checked every 2 seconds; if three consecutive pings get no response, failover is triggered automatically.
candidate_master=1
Marks the slave as the preferred candidate master: when the master fails, this slave is promoted even if it is not the most up-to-date slave in the cluster.
check_repl_delay=0
By default, a slave whose relay log is more than 100 MB behind the master is not chosen as the new master, because recovering it would take a long time.
With this parameter, MHA ignores replication delay when triggering the switchover.
It is very useful together with candidate_master=1, to guarantee that that slave becomes the new master.
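
For example, to always promote server2 regardless of its replication lag, its section could be written as follows (shown only for illustration; the configuration above does not set these options):

[server2]
hostname=192.168.28.162
port=3306
candidate_master=1
check_repl_delay=0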

Status Checks

SSH connectivity check:

On the manager node (hdfeng01), run the check against the configuration file:
[root@hdfeng01 ~]# masterha_check_ssh --conf=/etc/mha/hdfeng1.cnf
Tue Mar 17 17:55:56 2020 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Tue Mar 17 17:55:56 2020 - [info] Reading application default configuration from /etc/mha/hdfeng1.cnf..
Tue Mar 17 17:55:56 2020 - [info] Reading server configuration from /etc/mha/hdfeng1.cnf..
Tue Mar 17 17:55:56 2020 - [info] Starting SSH connection tests..
Tue Mar 17 17:55:59 2020 - [debug]
Tue Mar 17 17:55:57 2020 - [debug]  Connecting via SSH from root@192.168.28.162(192.168.28.162:22) to root@192.168.28.78(192.168.28.78:22)..
Tue Mar 17 17:55:58 2020 - [debug]   ok.
Tue Mar 17 17:55:58 2020 - [debug]  Connecting via SSH from root@192.168.28.162(192.168.28.162:22) to root@192.168.28.163(192.168.28.163:22)..
Tue Mar 17 17:55:58 2020 - [debug]   ok.
Tue Mar 17 17:55:59 2020 - [debug]
Tue Mar 17 17:55:58 2020 - [debug]  Connecting via SSH from root@192.168.28.163(192.168.28.163:22) to root@192.168.28.78(192.168.28.78:22)..
Tue Mar 17 17:55:58 2020 - [debug]   ok.
Tue Mar 17 17:55:58 2020 - [debug]  Connecting via SSH from root@192.168.28.163(192.168.28.163:22) to root@192.168.28.162(192.168.28.162:22)..
Tue Mar 17 17:55:59 2020 - [debug]   ok.
Tue Mar 17 17:56:03 2020 - [debug]
Tue Mar 17 17:55:56 2020 - [debug]  Connecting via SSH from root@192.168.28.78(192.168.28.78:22) to root@192.168.28.162(192.168.28.162:22)..
Warning: Permanently added '192.168.28.78' (ECDSA) to the list of known hosts.
Tue Mar 17 17:55:57 2020 - [debug]   ok.
Tue Mar 17 17:55:57 2020 - [debug]  Connecting via SSH from root@192.168.28.78(192.168.28.78:22) to root@192.168.28.163(192.168.28.163:22)..
Tue Mar 17 17:56:02 2020 - [debug]   ok.
Tue Mar 17 17:56:03 2020 - [info] All SSH connection tests passed successfully.

The output above shows that SSH trust is working. If it reports errors, re-check the corresponding IPs, ports, passwords, and so on.

Replication check:

[root@hdfeng01 ~]# masterha_check_repl --conf=/etc/mha/hdfeng1.cnf
Tue Mar 17 18:01:59 2020 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Tue Mar 17 18:01:59 2020 - [info] Reading application default configuration from /etc/mha/hdfeng1.cnf..
Tue Mar 17 18:01:59 2020 - [info] Reading server configuration from /etc/mha/hdfeng1.cnf..
Tue Mar 17 18:01:59 2020 - [info] MHA::MasterMonitor version 0.56.
Tue Mar 17 18:02:00 2020 - [info] GTID failover mode = 1
Tue Mar 17 18:02:00 2020 - [info] Dead Servers:
Tue Mar 17 18:02:00 2020 - [info] Alive Servers:
Tue Mar 17 18:02:00 2020 - [info]   192.168.28.78(192.168.28.78:3306)
Tue Mar 17 18:02:00 2020 - [info]   192.168.28.162(192.168.28.162:3306)
Tue Mar 17 18:02:00 2020 - [info]   192.168.28.163(192.168.28.163:3306)
Tue Mar 17 18:02:00 2020 - [info] Alive Slaves:
Tue Mar 17 18:02:00 2020 - [info]   192.168.28.162(192.168.28.162:3306)  Version=5.7.20-log (oldest major version between slaves) log-bin:enabled
Tue Mar 17 18:02:00 2020 - [info]     GTID ON
Tue Mar 17 18:02:00 2020 - [info]     Replicating from 192.168.28.78(192.168.28.78:3306)
Tue Mar 17 18:02:00 2020 - [info]   192.168.28.163(192.168.28.163:3306)  Version=5.7.20-log (oldest major version between slaves) log-bin:enabled
Tue Mar 17 18:02:00 2020 - [info]     GTID ON
Tue Mar 17 18:02:00 2020 - [info]     Replicating from 192.168.28.78(192.168.28.78:3306)
Tue Mar 17 18:02:00 2020 - [info] Current Alive Master: 192.168.28.78(192.168.28.78:3306)
Tue Mar 17 18:02:00 2020 - [info] Checking slave configurations..
Tue Mar 17 18:02:00 2020 - [info]  read_only=1 is not set on slave 192.168.28.162(192.168.28.162:3306).
Tue Mar 17 18:02:00 2020 - [info]  read_only=1 is not set on slave 192.168.28.163(192.168.28.163:3306).
Tue Mar 17 18:02:00 2020 - [info] Checking replication filtering settings..
Tue Mar 17 18:02:00 2020 - [info]  binlog_do_db= , binlog_ignore_db=
Tue Mar 17 18:02:00 2020 - [info]  Replication filtering check ok.
Tue Mar 17 18:02:00 2020 - [info] GTID (with auto-pos) is supported. Skipping all SSH and Node package checking.
Tue Mar 17 18:02:00 2020 - [info] Checking SSH publickey authentication settings on the current master..
Tue Mar 17 18:02:01 2020 - [info] HealthCheck: SSH to 192.168.28.78 is reachable.
Tue Mar 17 18:02:01 2020 - [info]
192.168.28.78(192.168.28.78:3306) (current master)
 +--192.168.28.162(192.168.28.162:3306)
 +--192.168.28.163(192.168.28.163:3306)

Tue Mar 17 18:02:01 2020 - [info] Checking replication health on 192.168.28.162..
Tue Mar 17 18:02:01 2020 - [info]  ok.
Tue Mar 17 18:02:01 2020 - [info] Checking replication health on 192.168.28.163..
Tue Mar 17 18:02:01 2020 - [info]  ok.
Tue Mar 17 18:02:01 2020 - [warning] master_ip_failover_script is not defined.
Tue Mar 17 18:02:01 2020 - [warning] shutdown_script is not defined.
Tue Mar 17 18:02:01 2020 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.

The above means replication is healthy and the check passed. If the result is not OK, there is a problem with replication and the master/slave configuration needs to be reviewed.

Start the manager

nohup masterha_manager --conf=/etc/mha/hdfeng1.cnf --remove_dead_master_conf --ignore_last_failover  < /dev/null> /var/log/mha/hdfeng1/manager.log 2>&1 &
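
Two notes on the flags: --remove_dead_master_conf tells MHA to delete the failed master's [serverN] section from the configuration file after a failover, and --ignore_last_failover lets the manager skip its recent-failover safety check. Without that flag, MHA refuses to fail over again shortly after a previous failover because of the completion file it leaves in manager_workdir; if that ever blocks a restart, the file can be removed by hand (the exact file name below is an assumption based on the application name hdfeng1):

[root@hdfeng01 ~]# rm -f /var/log/mha/hdfeng1/hdfeng1.failover.complete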

Check the MHA status:

[root@hdfeng01 ~]# masterha_check_status --conf=/etc/mha/hdfeng1.cnf
hdfeng1 (pid:9462) is running(0:PING_OK), master:192.168.28.78

Check that each MHA node is reachable:

[root@hdfeng01 ~]# mysql -umha -p -h 192.168.28.78 -e "show variables like 'server_id'"
Enter password:
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 1     |
+---------------+-------+
[root@hdfeng01 ~]# mysql -umha -p -h 192.168.28.162 -e "show variables like 'server_id'"
Enter password:
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 2     |
+---------------+-------+
[root@hdfeng01 ~]# mysql -umha -p -h 192.168.28.163 -e "show variables like 'server_id'"
Enter password:
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 3     |
+---------------+-------+

MHA Failover Handling

Simulate a master failure: the master loses power or the MySQL service is killed

[root@hdfeng01 ~]# systemctl stop mysqld

At this point MHA performs the failover and its manager process exits; the master role is switched to the candidate with server_id=2. This can be verified with the following commands:

db02 [(none)]>show slave status\G;
Empty set (0.00 sec)
db03[(none)]>show slave status \G;
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.28.162

As you can see, the master has been switched over to db02,
and db03 now replicates from db02 instead of db01.
The [server1] entry for db01 has also been removed from hdfeng1.cnf:

[root@hdfeng01 ~]# cat /etc/mha/hdfeng1.cnf
[server default]
manager_log=/var/log/mha/hdfeng1/manager
manager_workdir=/var/log/mha/hdfeng1
master_binlog_dir=/opt/mysql-data/mysql
password=123
ping_interval=2
repl_password=123
repl_user=repl
ssh_user=root
user=mha

[server2]
hostname=192.168.28.162
port=3306

[server3]
hostname=192.168.28.163
port=3306

The failover can also be traced in the log:

[root@hdfeng01 ~]# tail -f /var/log/mha/hdfeng1/manager
Master 192.168.28.78(192.168.28.78:3306) is down!

Check MHA Manager logs at hdfeng01:/var/log/mha/hdfeng1/manager for details.

Started automated(non-interactive) failover.
Selected 192.168.28.162(192.168.28.162:3306) as a new master.
192.168.28.162(192.168.28.162:3306): OK: Applying all logs succeeded.
192.168.28.163(192.168.28.163:3306): OK: Slave started, replicating from 192.168.28.162(192.168.28.162:3306)
192.168.28.162(192.168.28.162:3306): Resetting slave info succeeded.
Master failover to 192.168.28.162(192.168.28.162:3306) completed successfully.

Repair the old master:

[root@hdfeng01 ~]# systemctl start mysqld
[root@hdfeng01 ~]# mysql

db01 [(none)]>change master to
    -> master_host='192.168.28.162',
    -> master_port=3306,
    -> master_auto_position=1,
    -> master_user='repl',
    -> master_password='123';
Query OK, 0 rows affected, 2 warnings (0.04 sec)

db01 [(none)]>start slave ;
Query OK, 0 rows affected (0.00 sec)

db01 [(none)]>show slave status \G;
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.28.162

At this point replication is repaired. Next, restore the manager's configuration file:

[root@hdfeng01 ~]# vim /etc/mha/hdfeng1.cnf
[server1]
hostname=192.168.28.78
port=3306

Save and exit, then start MHA again to complete the recovery:

nohup masterha_manager --conf=/etc/mha/hdfeng1.cnf --remove_dead_master_conf --ignore_last_failover  < /dev/null> /var/log/mha/hdfeng1/manager.log 2>&1 &

Then check that everything is healthy again:

[root@hdfeng01 ~]# masterha_check_status  --conf=/etc/mha/hdfeng1.cnf
hdfeng1 (pid:13112) is running(0:PING_OK), master:192.168.28.162
[root@hdfeng01 ~]# masterha_check_repl  --conf=/etc/mha/hdfeng1.cnf
[root@hdfeng01 ~]# masterha_check_ssh  --conf=/etc/mha/hdfeng1.cnf

Application-Transparent VIP with MHA

The relevant parameter has to be added to the manager's configuration file. I will upload the master_ip_failover script separately for everyone to use.

master_ip_failover_script=/usr/local/bin/master_ip_failover		(add under [server default])
vim master_ip_failover
my $vip = '192.168.1.110/24';						# change to an unused IP to use as the VIP (192.168.28.110/24 in this environment)
my $key = '1';
my $ssh_start_vip = "/sbin/ifconfig ens33:$key $vip";	# adjust to your own network interface (eth0 here)
my $ssh_stop_vip = "/sbin/ifconfig ens33:$key down";	# adjust to your own network interface (eth0 here)

Once edited, make the script executable:

[root@hdfeng01 bin]# chmod +x master_ip_failover

Convert the file to Unix line endings (I will upload this package as well):

[root@hdfeng01 opt]# rpm -ivh dos2unix-6.0.3-7.el7.x86_64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:dos2unix-6.0.3-7.el7             ################################# [100%]
[root@hdfeng01 opt]# dos2unix /usr/local/bin/master_ip_failover
dos2unix: converting file /usr/local/bin/master_ip_failover to Unix format ...
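
Before restarting the manager, it is worth re-running the replication check: once master_ip_failover_script is defined, masterha_check_repl should also exercise the script (calling it with --command=status) instead of printing the "master_ip_failover_script is not defined" warning seen earlier:

[root@hdfeng01 ~]# masterha_check_repl --conf=/etc/mha/hdfeng1.cnf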

The first time the VIP is configured it has to be added to the current master by hand. The master is now db02, so add the initial VIP address on db02:

[root@hdfeng02 ~]# ifconfig eth0:1 192.168.28.110/24
[root@hdfeng02 ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:07:79:1f brd ff:ff:ff:ff:ff:ff
    inet 192.168.28.162/21 brd 192.168.31.255 scope global noprefixroute dynamic eth0
       valid_lft 81409sec preferred_lft 81409sec
    inet 192.168.28.110/24 brd 192.168.28.255 scope global eth0:1
       valid_lft forever preferred_lft forever
    inet6 fe80::b197:a6a2:890:3925/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

Then restart MHA:

[root@hdfeng01 ~]# masterha_stop --conf=/etc/mha/hdfeng1.cnf
MHA Manager is not running on hdfeng1(2:NOT_RUNNING).
[root@hdfeng01 ~]# nohup masterha_manager --conf=/etc/mha/hdfeng1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/hdfeng1/manager.log 2>&1 &
[1] 2004

Verifying the VIP

Check the status:

[root@hdfeng01 ~]# masterha_check_status --conf=/etc/mha/hdfeng1.cnf
hdfeng1 (pid:2004) is running(0:PING_OK), master:192.168.28.162
[root@hdfeng02 ~]# ifconfig
eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.28.110  netmask 255.255.255.0  broadcast 192.168.28.255
        ether 00:0c:29:07:79:1f  txqueuelen 1000  (Ethernet)

Test whether the configuration works:
Can the VIP float to the new master on failover?

[root@hdfeng02 ~]# systemctl stop mysqld	# stop the current master
[root@hdfeng01 bin]# ifconfig
eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.28.110  netmask 255.255.255.0  broadcast 192.168.28.255
        ether 00:0c:29:b6:e0:ec  txqueuelen 1000  (Ethernet)

The VIP floated over successfully!

db03[(none)]>show slave status \G;
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.28.78

MHA is working as expected!
If you want an email alert when a failover happens, you can also write your own shell script to send the mail.
I will not go into scripting details here, but a minimal sketch follows.
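
A minimal sketch of such an alert script, assuming mailx and a working MTA on the manager node; it would be hooked in through the report_script parameter under [server default], and the recipient address is a placeholder. The exact arguments MHA passes (such as --new_master_host) vary by version, so this sketch simply mails whatever it receives together with the tail of the manager log:

#!/bin/bash
# /usr/local/bin/send_report.sh  (hypothetical path)
# Hook into MHA with: report_script=/usr/local/bin/send_report.sh  under [server default]
RECIPIENT="[email protected]"              # placeholder address, replace with a real one
LOG=/var/log/mha/hdfeng1/manager

{
  echo "MHA failover arguments: $*"
  echo "---- last 50 lines of the manager log ----"
  tail -n 50 "$LOG"
} | mail -s "MHA failover on hdfeng1" "$RECIPIENT"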
