Master High Availability (MHA) is a solution for keeping MySQL service continuously available. It monitors the state of the master node and, when the master fails, automatically promotes one of the replicas and tells the remaining nodes about the new master, so the system keeps serving through a master failure.
MHA's main capabilities:
Automatic monitoring: MHA continuously watches the master and reacts as soon as it fails.
Fast failover: when the master dies, MHA promotes a candidate slave and moves the workload to it within seconds.
Data recovery: during failover MHA applies the differential relay logs so the promoted slave and the remaining slaves end up consistent, minimizing data loss.
One thing to keep in mind: after a failover the manager process exits and the old master does not rejoin on its own; re-pointing it as a slave and restarting the manager are manual steps (both are demonstrated later in this document).
In short, MHA combines monitoring, failover, and log-based recovery into a high-availability solution that keeps critical MySQL services running.
Environment:
MySQL version: 5.7.27
yum repo: epel
OS: CentOS 7
Topology: one master, two slaves
192.168.2.31 master
192.168.2.32 slave
192.168.2.33 slave
VIP: 192.168.2.34
CPU: 1 core
Memory: 1 GB
keepalived version: 1.4.5
MySQL and keepalived are already installed; this document only covers deploying MHA (see my earlier posts for installing MySQL and keepalived).
The RPM packages and scripts this article needs:
Link: https://pan.baidu.com/s/1EQhvVSimRoC8Ak9Zwcos1Q
Extraction code: hw14
Install the dependencies on all three machines:
yum -y install perl-DBD-MySQL perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker perl-devel perl-CPAN
yum -y install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager
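After the yum installs, a quick sketch to confirm the Perl modules MHA actually loads at runtime are resolvable (module names correspond to the packages above):

```shell
# Try to load each Perl module MHA depends on; print ok/MISSING per module.
missing=0
for m in DBD::mysql Config::Tiny Log::Dispatch Parallel::ForkManager; do
  if perl -M"$m" -e1 2>/dev/null; then
    echo "$m ok"
  else
    echo "$m MISSING"
    missing=$((missing + 1))
  fi
done
echo "missing modules: $missing"
```

Any module reported MISSING will make the MHA checks later fail with a Perl load error, so fix it before continuing.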
Upload the node package to all three machines; I uploaded the manager package to the slave 192.168.2.33.
Management node: 192.168.2.33
Node role: 192.168.2.31, 192.168.2.32
Install the node package on all three machines:
rpm -ivh mha4mysql-node-0.58-0.el7.centos.noarch.rpm
Check where the files were installed:
[root@mha1 ~]# rpm -ql mha4mysql-node-0.58-0.el7.centos.noarch
/usr/bin/apply_diff_relay_logs
/usr/bin/filter_mysqlbinlog
/usr/bin/purge_relay_logs
/usr/bin/save_binary_logs
/usr/share/man/man1/apply_diff_relay_logs.1.gz
/usr/share/man/man1/filter_mysqlbinlog.1.gz
/usr/share/man/man1/purge_relay_logs.1.gz
/usr/share/man/man1/save_binary_logs.1.gz
/usr/share/perl5/vendor_perl/MHA/BinlogHeaderParser.pm
/usr/share/perl5/vendor_perl/MHA/BinlogManager.pm
/usr/share/perl5/vendor_perl/MHA/BinlogPosFindManager.pm
/usr/share/perl5/vendor_perl/MHA/BinlogPosFinder.pm
/usr/share/perl5/vendor_perl/MHA/BinlogPosFinderElp.pm
/usr/share/perl5/vendor_perl/MHA/BinlogPosFinderXid.pm
/usr/share/perl5/vendor_perl/MHA/NodeConst.pm
/usr/share/perl5/vendor_perl/MHA/NodeUtil.pm
/usr/share/perl5/vendor_perl/MHA/SlaveUtil.pm
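One of the node tools just installed, purge_relay_logs, is worth scheduling on the slaves: MHA expects relay_log_purge=0 (the replication check later in this article warns about exactly that), so relay logs have to be purged out-of-band. A sketch of a cron entry, drafted to /tmp for review first; the account and port are this document's values, adjust before installing to /etc/cron.d:

```shell
# Draft a cron entry that purges relay logs nightly via MHA's own tool.
# --disable_relay_log_purge sets relay_log_purge back to 0 after each run.
cat > /tmp/purge_relay_logs.cron <<'EOF'
# m h dom mon dow user command
0 4 * * * root /usr/bin/purge_relay_logs --user=mha --password=mha --port=38141 --disable_relay_log_purge >> /var/log/purge_relay_logs.log 2>&1
EOF
echo "review /tmp/purge_relay_logs.cron, then install it to /etc/cron.d/"
```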
This step runs only on the management node, 192.168.2.33:
rpm -ivh mha4mysql-manager-0.58-0.el7.noarch.rpm
[root@mha3 ~]# rpm -ql mha4mysql-manager-0.58-0.el7.noarch
/usr/bin/masterha_check_repl
/usr/bin/masterha_check_ssh
/usr/bin/masterha_check_status
/usr/bin/masterha_conf_host
/usr/bin/masterha_manager
/usr/bin/masterha_master_monitor
/usr/bin/masterha_master_switch
/usr/bin/masterha_secondary_check
/usr/bin/masterha_stop
/usr/share/man/man1/masterha_check_repl.1.gz
/usr/share/man/man1/masterha_check_ssh.1.gz
/usr/share/man/man1/masterha_check_status.1.gz
/usr/share/man/man1/masterha_conf_host.1.gz
/usr/share/man/man1/masterha_manager.1.gz
/usr/share/man/man1/masterha_master_monitor.1.gz
/usr/share/man/man1/masterha_master_switch.1.gz
/usr/share/man/man1/masterha_secondary_check.1.gz
/usr/share/man/man1/masterha_stop.1.gz
/usr/share/perl5/vendor_perl/MHA/Config.pm
/usr/share/perl5/vendor_perl/MHA/DBHelper.pm
/usr/share/perl5/vendor_perl/MHA/FileStatus.pm
/usr/share/perl5/vendor_perl/MHA/HealthCheck.pm
/usr/share/perl5/vendor_perl/MHA/ManagerAdmin.pm
/usr/share/perl5/vendor_perl/MHA/ManagerAdminWrapper.pm
/usr/share/perl5/vendor_perl/MHA/ManagerConst.pm
/usr/share/perl5/vendor_perl/MHA/ManagerUtil.pm
/usr/share/perl5/vendor_perl/MHA/MasterFailover.pm
/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm
/usr/share/perl5/vendor_perl/MHA/MasterRotate.pm
/usr/share/perl5/vendor_perl/MHA/SSHCheck.pm
/usr/share/perl5/vendor_perl/MHA/Server.pm
/usr/share/perl5/vendor_perl/MHA/ServerManager.pm
Run on all nodes:
mkdir -p /usr/local/masterha/app1 /usr/local/masterha/etc /usr/local/masterha/bin
ln -s /usr/local/mysql/bin/mysqlbinlog /usr/bin/mysqlbinlog
ln -s /usr/local/mysql/bin/mysql /usr/bin/mysql
On the master, create the account MHA uses to connect:
mysql> grant all privileges on *.* to mha@'192.168.2.%' identified by 'mha';
Query OK, 0 rows affected, 1 warning (0.01 sec)
Check that the account was created. The GRANT ran on the master and replicates automatically, so verify on the slaves too.
Also make sure the replication account exists on every node; if even one node is missing it, the checks below will report an error.
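A quick sketch to confirm both accounts exist everywhere, using this document's mha account and port 38141 (it skips itself if no mysql client is present on the box you run it from):

```shell
# List the mha and repl_slave accounts on each node; every host should
# return both users. Query failures are reported but don't abort the loop.
result=""
if command -v mysql >/dev/null 2>&1; then
  for h in 192.168.2.31 192.168.2.32 192.168.2.33; do
    echo "== $h =="
    mysql -h"$h" -P38141 -umha -pmha -N \
      -e "select user,host from mysql.user where user in ('mha','repl_slave');" \
      || echo "  (query failed for $h)"
  done
  result="checked"
else
  result="mysql client not found on this host, skipping"
fi
echo "$result"
```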
Edit the configuration on the management node:
vim /usr/local/masterha/etc/app1.cnf
(The trailing # comments below are annotations for this article; MHA's config parser does not strip inline comments, so in the actual file keep comments on their own lines.)
[server default]
manager_log=/usr/local/masterha/app1/manager.log # manager log file
manager_workdir=/usr/local/masterha/app1 # manager working directory
master_ip_failover_script=/usr/local/masterha/bin/master_ip_failover # script that moves the VIP on failover
master_ip_online_change_script=/usr/local/masterha/bin/master_ip_online_change # script for planned (online) master switches
remote_workdir=/usr/local/masterha/app1/ # working directory on the remote servers
# replication account
repl_user=repl_slave ## remember: the replication account must exist on every node
repl_password=repl_slave
# SSH user for passwordless login
ssh_user=root
# MySQL account MHA connects as
user=mha
password=mha
ping_interval=1 # health-check interval, seconds
secondary_check_script= /usr/bin/masterha_secondary_check -s 192.168.2.32 -s 192.168.2.33 # double-check master health from other hosts
maxcon=2000 # max connections (custom parameter, consumed by the HealthCheck.pm patch below)
maxfail=60 # failure count before switching (custom, see below)
ping_type=SELECT # health-check method
[server1] # MySQL instance 1
master_binlog_dir="/opt/mysqldata" # where the master's binlogs live
candidate_master=1 # eligible to become the new master
check_repl_delay=0 # ignore replication delay when electing a master
hostname=192.168.2.31 # host name or IP of the instance
port=38141 # MySQL port
ssh_port=22 # SSH port
[server2]
master_binlog_dir="/opt/mysqldata"
candidate_master=1
check_repl_delay=0
hostname=192.168.2.32
port=38141
ssh_port=22
[server3]
master_binlog_dir="/opt/mysqldata"
no_master=1 # never promote this instance
check_repl_delay=0
hostname=192.168.2.33
port=38141
ssh_port=22
Configure SSH trust on every node:
#create a key pair
[root@mha1 ~]# ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa >/dev/null 2>&1
#distribute the public key to every node, including this host itself
[root@mha1 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]
[root@mha1 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]
[root@mha1 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]
#make sure sshd accepts public-key authentication
[root@mha1 ~]# vim /etc/ssh/sshd_config
44 PubkeyAuthentication yes
45 RSAAuthentication yes
49 AuthorizedKeysFile .ssh/authorized_keys
[root@mha1 ~]# systemctl restart sshd
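Before the official check, a manual loop can confirm key authentication works from the current node to all three hosts (run it on each node in turn; the IPs are this setup's):

```shell
# Non-interactive SSH probe: BatchMode fails fast instead of prompting
# for a password when key auth is broken.
ok=0; failed=0
for h in 192.168.2.31 192.168.2.32 192.168.2.33; do
  if ssh -o BatchMode=yes -o ConnectTimeout=3 root@"$h" true 2>/dev/null; then
    echo "$h: key auth ok"; ok=$((ok + 1))
  else
    echo "$h: FAILED"; failed=$((failed + 1))
  fi
done
echo "ok=$ok failed=$failed"
```

Any FAILED line means masterha_check_ssh below will fail for that direction too.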
On the management node, run the SSH check:
/usr/bin/masterha_check_ssh --conf=/usr/local/masterha/etc/app1.cnf
[root@mha3 etc]# /usr/bin/masterha_check_ssh --conf=/usr/local/masterha/etc/app1.cnf
Fri May 12 22:43:26 2023 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Fri May 12 22:43:26 2023 - [info] Reading application default configuration from /usr/local/masterha/etc/app1.cnf..
Fri May 12 22:43:26 2023 - [info] Reading server configuration from /usr/local/masterha/etc/app1.cnf..
Fri May 12 22:43:26 2023 - [info] Starting SSH connection tests..
Fri May 12 22:43:27 2023 - [debug]
Fri May 12 22:43:26 2023 - [debug] Connecting via SSH from [email protected](192.168.2.31:22) to [email protected](192.168.2.32:22)..
Fri May 12 22:43:26 2023 - [debug] ok.
Fri May 12 22:43:26 2023 - [debug] Connecting via SSH from [email protected](192.168.2.31:22) to [email protected](192.168.2.33:22)..
Fri May 12 22:43:27 2023 - [debug] ok.
Fri May 12 22:43:27 2023 - [debug]
Fri May 12 22:43:27 2023 - [debug] Connecting via SSH from [email protected](192.168.2.32:22) to [email protected](192.168.2.31:22)..
Fri May 12 22:43:27 2023 - [debug] ok.
Fri May 12 22:43:27 2023 - [debug] Connecting via SSH from [email protected](192.168.2.32:22) to [email protected](192.168.2.33:22)..
Fri May 12 22:43:27 2023 - [debug] ok.
Fri May 12 22:43:28 2023 - [debug]
Fri May 12 22:43:27 2023 - [debug] Connecting via SSH from [email protected](192.168.2.33:22) to [email protected](192.168.2.31:22)..
Fri May 12 22:43:27 2023 - [debug] ok.
Fri May 12 22:43:27 2023 - [debug] Connecting via SSH from [email protected](192.168.2.33:22) to [email protected](192.168.2.32:22)..
Fri May 12 22:43:28 2023 - [debug] ok.
Fri May 12 22:43:28 2023 - [info] All SSH connection tests passed successfully.
Before the replication check, upload the two scripts (download them from the netdisk link at the top of this article) and give them mode 744:
chmod 744 master_ip_failover master_ip_online_change
vim master_ip_online_change
vim master_ip_failover #set the line below in both scripts
my $vip = '192.168.2.34/32';
Now run the replication check:
/usr/bin/masterha_check_repl --conf=/usr/local/masterha/etc/app1.cnf
[root@mha3 bin]# /usr/bin/masterha_check_repl --conf=/usr/local/masterha/etc/app1.cnf
Fri May 12 23:17:28 2023 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Fri May 12 23:17:28 2023 - [info] Reading application default configuration from /usr/local/masterha/etc/app1.cnf..
Fri May 12 23:17:28 2023 - [info] Reading server configuration from /usr/local/masterha/etc/app1.cnf..
Fri May 12 23:17:28 2023 - [info] MHA::MasterMonitor version 0.58.
Fri May 12 23:17:29 2023 - [info] GTID failover mode = 0
Fri May 12 23:17:29 2023 - [info] Dead Servers:
Fri May 12 23:17:29 2023 - [info] Alive Servers:
Fri May 12 23:17:29 2023 - [info] 192.168.2.31(192.168.2.31:38141)
Fri May 12 23:17:29 2023 - [info] 192.168.2.32(192.168.2.32:38141)
Fri May 12 23:17:29 2023 - [info] 192.168.2.33(192.168.2.33:38141)
Fri May 12 23:17:29 2023 - [info] Alive Slaves:
Fri May 12 23:17:29 2023 - [info] 192.168.2.32(192.168.2.32:38141) Version=5.7.27-log (oldest major version between slaves) log-bin:enabled
Fri May 12 23:17:29 2023 - [info] Replicating from 192.168.2.31(192.168.2.31:38141)
Fri May 12 23:17:29 2023 - [info] Primary candidate for the new Master (candidate_master is set)
Fri May 12 23:17:29 2023 - [info] 192.168.2.33(192.168.2.33:38141) Version=5.7.27-log (oldest major version between slaves) log-bin:enabled
Fri May 12 23:17:29 2023 - [info] Replicating from 192.168.2.31(192.168.2.31:38141)
Fri May 12 23:17:29 2023 - [info] Not candidate for the new Master (no_master is set)
Fri May 12 23:17:29 2023 - [info] Current Alive Master: 192.168.2.31(192.168.2.31:38141)
Fri May 12 23:17:29 2023 - [info] Checking slave configurations..
Fri May 12 23:17:29 2023 - [info] read_only=1 is not set on slave 192.168.2.32(192.168.2.32:38141).
Fri May 12 23:17:29 2023 - [warning] relay_log_purge=0 is not set on slave 192.168.2.32(192.168.2.32:38141).
Fri May 12 23:17:29 2023 - [info] read_only=1 is not set on slave 192.168.2.33(192.168.2.33:38141).
Fri May 12 23:17:29 2023 - [warning] relay_log_purge=0 is not set on slave 192.168.2.33(192.168.2.33:38141).
Fri May 12 23:17:29 2023 - [info] Checking replication filtering settings..
Fri May 12 23:17:29 2023 - [info] binlog_do_db= , binlog_ignore_db=
Fri May 12 23:17:29 2023 - [info] Replication filtering check ok.
Fri May 12 23:17:29 2023 - [info] GTID (with auto-pos) is not supported
Fri May 12 23:17:29 2023 - [info] Starting SSH connection tests..
Fri May 12 23:17:30 2023 - [info] All SSH connection tests passed successfully.
Fri May 12 23:17:30 2023 - [info] Checking MHA Node version..
Fri May 12 23:17:31 2023 - [info] Version check ok.
Fri May 12 23:17:31 2023 - [info] Checking SSH publickey authentication settings on the current master..
Fri May 12 23:17:31 2023 - [info] HealthCheck: SSH to 192.168.2.31 is reachable.
Fri May 12 23:17:31 2023 - [info] Master MHA Node version is 0.58.
Fri May 12 23:17:31 2023 - [info] Checking recovery script configurations on 192.168.2.31(192.168.2.31:38141)..
Fri May 12 23:17:31 2023 - [info] Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/opt/mysqldata --output_file=/usr/local/masterha/app1//save_binary_logs_test --manager_version=0.58 --start_file=mysql-bin.000004
Fri May 12 23:17:31 2023 - [info] Connecting to [email protected](192.168.2.31:22)..
Creating /usr/local/masterha/app1 if not exists.. ok.
Checking output directory is accessible or not..
ok.
Binlog found at /opt/mysqldata, up to mysql-bin.000004
Fri May 12 23:17:31 2023 - [info] Binlog setting check done.
Fri May 12 23:17:31 2023 - [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..
Fri May 12 23:17:31 2023 - [info] Executing command : apply_diff_relay_logs --command=test --slave_user='mha' --slave_host=192.168.2.32 --slave_ip=192.168.2.32 --slave_port=38141 --workdir=/usr/local/masterha/app1/ --target_version=5.7.27-log --manager_version=0.58 --relay_log_info=/opt/mysqldata/relay-log.info --relay_dir=/opt/mysqldata/ --slave_pass=xxx
Fri May 12 23:17:31 2023 - [info] Connecting to [email protected](192.168.2.32:22)..
Checking slave recovery environment settings..
Opening /opt/mysqldata/relay-log.info ... ok.
Relay log found at /opt/mysqldata, up to mha2-relay-bin.000010
Temporary relay log file is /opt/mysqldata/mha2-relay-bin.000010
Checking if super_read_only is defined and turned on.. not present or turned off, ignoring.
Testing mysql connection and privileges..
mysql: [Warning] Using a password on the command line interface can be insecure.
done.
Testing mysqlbinlog output.. done.
Cleaning up test file(s).. done.
Fri May 12 23:17:31 2023 - [info] Executing command : apply_diff_relay_logs --command=test --slave_user='mha' --slave_host=192.168.2.33 --slave_ip=192.168.2.33 --slave_port=38141 --workdir=/usr/local/masterha/app1/ --target_version=5.7.27-log --manager_version=0.58 --relay_log_info=/opt/mysqldata/relay-log.info --relay_dir=/opt/mysqldata/ --slave_pass=xxx
Fri May 12 23:17:31 2023 - [info] Connecting to [email protected](192.168.2.33:22)..
Checking slave recovery environment settings..
Opening /opt/mysqldata/relay-log.info ... ok.
Relay log found at /opt/mysqldata, up to mha3-relay-bin.000010
Temporary relay log file is /opt/mysqldata/mha3-relay-bin.000010
Checking if super_read_only is defined and turned on.. not present or turned off, ignoring.
Testing mysql connection and privileges..
mysql: [Warning] Using a password on the command line interface can be insecure.
done.
Testing mysqlbinlog output.. done.
Cleaning up test file(s).. done.
Fri May 12 23:17:31 2023 - [info] Slaves settings check done.
Fri May 12 23:17:31 2023 - [info]
192.168.2.31(192.168.2.31:38141) (current master)
+--192.168.2.32(192.168.2.32:38141)
+--192.168.2.33(192.168.2.33:38141)
Fri May 12 23:17:31 2023 - [info] Checking replication health on 192.168.2.32..
Fri May 12 23:17:31 2023 - [info] ok.
Fri May 12 23:17:31 2023 - [info] Checking replication health on 192.168.2.33..
Fri May 12 23:17:31 2023 - [info] ok.
Fri May 12 23:17:31 2023 - [info] Checking master_ip_failover_script status:
Fri May 12 23:17:31 2023 - [info] /usr/local/masterha/bin/master_ip_failover --command=status --ssh_user=root --orig_master_host=192.168.2.31 --orig_master_ip=192.168.2.31 --orig_master_port=38141
IN SCRIPT TEST====/etc/init.d/keepalived stop==/etc/init.d/keepalived start===
Checking the Status of the script.. OK
Fri May 12 23:17:31 2023 - [info] OK.
Fri May 12 23:17:31 2023 - [warning] shutdown_script is not defined.
Fri May 12 23:17:31 2023 - [info] Got exit code 0 (Not master dead).
MySQL Replication Health is OK.
Next, patch the MHA Perl modules (the sed commands swap the super_read_only checks for read_only, which this setup needs):
sed -i '244s/super_read_only/read_only/g' /usr/share/perl5/vendor_perl/MHA/SlaveUtil.pm
sed -i '262s/$dbh/#$dbh/g' /usr/share/perl5/vendor_perl/MHA/SlaveUtil.pm
sed -i '275s/super_read_only/read_only/g' /usr/share/perl5/vendor_perl/MHA/SlaveUtil.pm
Then add the connection-count check to HealthCheck.pm:
vim +294 /usr/share/perl5/vendor_perl/MHA/HealthCheck.pm
Replace line 294 with:
$log->info("maxcon:$self->{maxcon},maxfail:$self->{maxfail},currentcon:$href->{Value}") if ($log and $href->{Value}>=$self->{maxcon});
Write a pair of scripts to start and stop the manager. The two commands inside could just as well be run in a background screen session; here I wrapped them in scripts.
cd /usr/local/masterha/etc/
[root@mha3 etc]# vim start_mha.sh
#!/bin/bash
nohup masterha_manager --conf=/usr/local/masterha/etc/app1.cnf --ignore_last_failover >/tmp/mha_manager.log < /dev/null 2>&1 &
[root@mha3 etc]# vim stop_mha.sh
#!/bin/bash
masterha_stop --conf=/usr/local/masterha/etc/app1.cnf
[root@mha3 etc]# chmod 744 start_mha.sh stop_mha.sh #make both scripts executable
Start MHA:
[root@mha3 etc]# ./start_mha.sh
Check the manager status:
/usr/bin/masterha_check_status --conf=/usr/local/masterha/etc/app1.cnf
To log the connection count on every ping while testing, edit HealthCheck.pm once more:
vim +294 /usr/share/perl5/vendor_perl/MHA/HealthCheck.pm
Comment out:
$log->info("maxcon:$self->{maxcon},maxfail:$self->{maxfail},currentcon:$href->{Value}") if ($log and $href->{Value}>=$self->{maxcon});
and enable:
$log->info("maxcon:$self->{maxcon},maxfail:$self->{maxfail},currentcon:$href->{Value}") if ($log);
The configuration sets maxcon=2000 and maxfail=60, i.e. the switch fires once the limit has been exceeded for 60 s, so there are two scenarios to test: first, the master crashing; second, the connection count exceeding 2000.
maxcon=2000 # max connections
maxfail=60 # failure count before switching
Test 1: stop MySQL on the master, 192.168.2.31:
[root@mha1 ~]# /etc/init.d/mysql stop
Shutting down MySQL............ SUCCESS!
On the management node 192.168.2.33, watch the log:
tailf /usr/local/masterha/app1/manager.log
Once the master has been down for 60 s, MHA fails over and the VIP moves to 192.168.2.32.
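To see which host currently holds the VIP, a quick check you can run on any of the nodes (192.168.2.34 is this setup's VIP):

```shell
# Report whether the VIP is bound to an interface on this host.
if ip -4 addr show 2>/dev/null | grep -q "192.168.2.34"; then
  vip_state="VIP 192.168.2.34 is on this host"
else
  vip_state="VIP 192.168.2.34 is not on this host"
fi
echo "$vip_state"
```

After this failover it should report present on 192.168.2.32 and absent on 192.168.2.31.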
192.168.2.32 is now the master. On 192.168.2.33:
mysql> show slave status\G;
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.2.32
Master_User: repl_slave
Master_Port: 38141
Connect_Retry: 60
Master_Log_File: mysql-bin.000007
Read_Master_Log_Pos: 443
Relay_Log_File: mha3-relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000007
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Now start MySQL on 192.168.2.31 again; at this point it has no replication configuration:
mysql> show slave status\G;
Empty set (0.00 sec)
ERROR:
No query specified
We have to point it at the new master ourselves. The replication coordinates come from /usr/local/masterha/app1/manager.log on the management node 192.168.2.33:
On 192.168.2.33:
[root@mha3 app1]# tailf /usr/local/masterha/app1/manager.log
Sat May 13 00:16:33 2023 - [info] All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='192.168.2.32', MASTER_PORT=38141, MASTER_LOG_FILE='mysql-bin.000007', MASTER_LOG_POS=443, MASTER_USER='repl_slave', MASTER_PASSWORD='xxx';
Run the following on 192.168.2.31 to re-establish replication:
mysql> CHANGE MASTER TO MASTER_HOST='192.168.2.32',MASTER_USER='repl_slave',MASTER_PASSWORD='repl_slave',MASTER_PORT=38141,MASTER_LOG_FILE='mysql-bin.000007',MASTER_LOG_POS=443;
Query OK, 0 rows affected, 2 warnings (0.00 sec)
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
mysql> show slave status\G;
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.2.32
Master_User: repl_slave
Master_Port: 38141
Connect_Retry: 60
Master_Log_File: mysql-bin.000007
Read_Master_Log_Pos: 443
Relay_Log_File: mha1-relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000007
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Note that once the switch completes, the MHA manager stops itself; it has to be started again by hand with the start script.
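After any failover the manager process exits on its own; a sketch to confirm that and restart it on the management node (paths and config are the ones from this document):

```shell
# Check the manager state; if it reports stopped, run the start script.
# Falls through with a message when the MHA tools aren't on this host.
state="mha manager tools not installed on this host"
if command -v masterha_check_status >/dev/null 2>&1; then
  state=$(masterha_check_status --conf=/usr/local/masterha/etc/app1.cnf 2>&1 || true)
  if echo "$state" | grep -q "is stopped"; then
    (cd /usr/local/masterha/etc && ./start_mha.sh)
  fi
fi
echo "$state"
```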
Current topology:
master: 192.168.2.32
slave: 192.168.2.31
slave: 192.168.2.33
Test 2: exceed the connection limit. On the current master 192.168.2.32, write the following script:
[root@mha2 mysqldata]# mkdir /shell
[root@mha2 mysqldata]# cd /shell/
[root@mha2 shell]# vim conn_mysql.sh
#!/bin/bash
# open ~2100 long-lived connections to push the master past maxcon=2000
for((i=1;i<2100;i=i+1))
do
nohup /usr/local/mysql/bin/mysql -p*************** mon_test -e "select sleep(200)" >/tmp/mysql.log < /dev/null 2>&1 &
done
[root@mha2 shell]# chmod 744 conn_mysql.sh
We also need to create the database the script connects to, on the master 192.168.2.32:
mysql> create database mon_test;
Query OK, 1 row affected (0.00 sec)
[root@mha2 shell]# bash conn_mysql.sh
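While the script floods the master with connections, you can watch Threads_connected climb yourself; a hypothetical probe using this document's mha account against the current master 192.168.2.32:

```shell
# Sample Threads_connected a few times; once it passes maxcon=2000, the
# patched HealthCheck.pm triggers the switch after maxfail seconds.
probe="no mysql client on this host"
if command -v mysql >/dev/null 2>&1; then
  for i in 1 2 3; do
    probe=$(mysql --connect-timeout=3 -h192.168.2.32 -P38141 -umha -pmha -N \
      -e "show global status like 'Threads_connected';" 2>/dev/null \
      || echo "query failed")
    echo "$probe"
    sleep 2
  done
fi
echo "last sample: $probe"
```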
Watching the log on the management node 192.168.2.33, the connection count now exceeds 2000; after 60 s the failover fires.
After the switch, confirm: the VIP has moved back to 192.168.2.31, and on 192.168.2.33 the master is now 192.168.2.31.
On 192.168.2.31:
[root@mha1 ~]# hostname -I
192.168.2.31 192.168.2.34
On 192.168.2.33:
mysql> show slave status\G;
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.2.31
Master_User: repl_slave
Master_Port: 38141
Connect_Retry: 60
Master_Log_File: mysql-bin.000005
Read_Master_Log_Pos: 154
Relay_Log_File: mha3-relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000005
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Now re-point 192.168.2.32. The difference from test 1 is that only the connection limit was exceeded; MySQL itself stayed up the whole time, so we can run CHANGE MASTER directly. Again, get the coordinates from /usr/local/masterha/app1/manager.log on the management node 192.168.2.33:
On 192.168.2.33:
[root@mha3 app1]# tailf /usr/local/masterha/app1/manager.log
Sat May 13 00:41:03 2023 - [info] All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='192.168.2.31', MASTER_PORT=38141, MASTER_LOG_FILE='mysql-bin.000005', MASTER_LOG_POS=154, MASTER_USER='repl_slave', MASTER_PASSWORD='xxx';
Re-establish replication on 192.168.2.32:
mysql> CHANGE MASTER TO MASTER_HOST='192.168.2.31',MASTER_USER='repl_slave',MASTER_PASSWORD='repl_slave',MASTER_PORT=38141,MASTER_LOG_FILE='mysql-bin.000005',MASTER_LOG_POS=154;
Query OK, 0 rows affected, 2 warnings (0.00 sec)
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
mysql> show slave status\G;
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.2.31
Master_User: repl_slave
Master_Port: 38141
Connect_Retry: 60
Master_Log_File: mysql-bin.000005
Read_Master_Log_Pos: 154
Relay_Log_File: mha2-relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000005
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
That wraps up this document's MHA deployment and failover testing.