CentOS 7.6: MHA High Availability Configuration and Failover

MHA High Availability Configuration and Failover

Table of Contents

  • MHA High Availability Configuration and Failover
  • 1. Problems with the Traditional MySQL Master-Slave Architecture
    • 1.1 Single point of failure
  • 2. MHA Overview
  • 3. Components of MHA
  • 4. MHA Features
  • 5. Case Topology Diagram
  • 6. Lab Environment
    • 6.1 Lab steps
  • 7. Building the MySQL Master-Slave Replication Environment
    • 7.1 Install MySQL 5.6.36 on mysql1, mysql2, and mysql3
    • 7.2 Master-slave configuration
  • 8. Install the Node Component (All Nodes)
    • 8.1 Install the manager component on MHA-manager
  • 9. Configure MHA
  • 10. Testing
    • (1) Test passwordless SSH; a successful run ends with "successfully"
    • (2) Test the replication configuration; "is OK" at the end means success
    • (3) Set the virtual IP on the master server
    • (4) Start MHA
    • (5) Check MHA status
    • (6) Check the log; it also shows that the current master is mysql1
    • (7) Check that the VIP exists on mysql1; it does not disappear when the manager node stops
    • (8) Simulate a failure of the master server
    • (9) Check mysql2
    • (10) Check the MHA server
  • 11. Troubleshooting
  • 12. Notes on the Virtual IP
  • == Note: ==

1. Problems with the Traditional MySQL Master-Slave Architecture

1.1 Single point of failure

(Figure 1)

2. MHA Overview

  • A mature software suite for failover and master-slave replication in a MySQL high-availability environment
  • When the master fails, MHA can complete the failover automatically within 0-30 seconds

3. Components of MHA

  • MHA Manager (management node)
  • MHA Node (data node)

4. MHA Features

  • During automatic failover, MHA tries to save the binary logs from the crashed master, keeping data loss to a minimum
  • Using semi-synchronous replication further reduces the risk of data loss (a sketch of enabling it follows this list)
  • MHA currently supports a one-master, multi-slave architecture and needs at least three servers, i.e. one master and two slaves
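The original steps never enable semi-synchronous replication; if you want it on top of this setup, a minimal sketch for MySQL 5.6 (using the semisync plugins that ship with the server; not part of the original procedure) looks like this:

## on the master (sketch, adjust to your environment)
mysql> install plugin rpl_semi_sync_master soname 'semisync_master.so';
mysql> set global rpl_semi_sync_master_enabled = 1;

## on each slave
mysql> install plugin rpl_semi_sync_slave soname 'semisync_slave.so';
mysql> set global rpl_semi_sync_slave_enabled = 1;
mysql> stop slave io_thread; start slave io_thread;    ## restart the IO thread so the setting takes effect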

5. Case Topology Diagram

(figure: MHA case topology diagram)

6. Lab Environment

Master server        mysql1         192.168.75.200
Candidate master     mysql2         192.168.75.123
Slave server         mysql3         192.168.75.134
Monitoring server    MHA-manager    192.168.75.144

6.1 Lab steps

  1. Install MySQL 5.6 on the three MySQL servers
  2. Install the node software on all servers
  3. Install the manager component on the MHA server
  4. Create the authorized users on the three MySQL servers and for the manager
  5. Write the MHA configuration file
  6. Test passwordless SSH between the servers
  7. Verify that the virtual IP fails over when the master goes down

7. Building the MySQL Master-Slave Replication Environment

7.1 Install MySQL 5.6.36 on mysql1, mysql2, and mysql3

1. Install the build dependencies

hostnamectl set-hostname mysql1
su      ## re-open the shell so the new hostname takes effect
[root@mysql1 ~]# yum -y install ncurses-devel gcc-c++ perl-Module-Install

2. Install the cmake build tool

[root@mysql1 ~]# tar zxvf cmake-2.8.6.tar.gz 
[root@mysql1 ~]# cd cmake-2.8.6/
[root@mysql1 cmake-2.8.6]# ./configure 
[root@mysql1 cmake-2.8.6]# gmake && gmake install 

3. Install the MySQL database

[root@mysql1 ~]# tar zxvf mysql-5.6.36.tar.gz 
[root@mysql1 ~]# cd mysql-5.6.36/
[root@mysql1 mysql-5.6.36]# cmake -DCMAKE_INSTALL_PREFIX=/usr/local/mysql -DDEFAULT_CHARSET=utf8 -DDEFAULT_COLLATION=utf8_general_ci -DWITH_EXTRA_CHARSETS=all -DSYSCONFDIR=/etc
[root@mysql1 mysql-5.6.36]# make && make install
[root@mysql1 mysql-5.6.36]# cp support-files/my-default.cnf  /etc/my.cnf
[root@mysql1 mysql-5.6.36]# cp support-files/mysql.server  /etc/rc.d/init.d/mysqld
[root@mysql1 ~]# chmod +x /etc/rc.d/init.d/mysqld 
[root@mysql1 ~]# chkconfig --add mysqld
[root@mysql1 ~]# echo "PATH=$PATH:/usr/local/mysql/bin" >> /etc/profile
[root@mysql1 ~]# source /etc/profile
[root@mysql1 ~]# groupadd mysql
[root@mysql1 ~]# useradd -M -s /sbin/nologin mysql -g mysql
[root@mysql1 ~]# chown -R mysql.mysql /usr/local/mysql
[root@mysql1 ~]# mkdir -p /data/mysql
[root@mysql1 ~]# /usr/local/mysql/scripts/mysql_install_db --basedir=/usr/local/mysql/ --datadir=/usr/local/mysql/data/ --user=mysql   ## initialize the database; watch the output for errors

7.2 Master-slave configuration

4. Edit the MySQL configuration files; the server-id of the three servers must all be different

On the master server

[root@mysql1 ~]# vim /etc/my.cnf
server-id = 1
log_bin = master-bin
log-slave-updates = true

On the two slave servers (mysql2 shown here; mysql3 differs only in its server-id, see the note after this block)

[root@mysql2 mysql-5.6.36]# vim /etc/my.cnf
server-id = 2
log_bin = master-bin
relay-log = relay-log-bin
relay-log-index = slave-relay-bin.index
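
mysql3 gets the same slave configuration; only the server-id must differ (3 is assumed here, any value not used by the other two works):

[root@mysql3 mysql-5.6.36]# vim /etc/my.cnf
server-id = 3
log_bin = master-bin
relay-log = relay-log-bin
relay-log-index = slave-relay-bin.index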

5. Create two symlinks on each of the three servers


[root@mysql1 ~]# ln -s /usr/local/mysql/bin/mysql /usr/sbin/
[root@mysql1 ~]# ln -s /usr/local/mysql/bin/mysqlbinlog /usr/sbin/

6. Start MySQL on all three servers


[root@mysql1 bin]# /usr/local/mysql/bin/mysqld_safe --user=mysql &
[root@mysql1 bin]# netstat -atnp | grep 3306
tcp6       0      0 :::3306                 :::*                    LISTEN      21159/mysqld 

7. Configure MySQL as one master with two slaves

(1) All database nodes (mysql1, mysql2, mysql3) need to create the following two users:
one is used by the slaves for replication, the other by the manager


[root@mysql1 ~]# mysql -uroot -p  ## no password has been set yet, just press Enter to log in
mysql> grant replication slave on *.* to 'myslave'@'192.168.75.%' identified by '123';
mysql> grant all privileges on *.* to 'mha'@'192.168.75.%' identified by 'manager';
mysql> flush privileges;

(2) In theory the following three grants are not needed, but in this lab the MHA replication check reported errors:

the two slaves could not connect to the master, so add the grants below on all databases (mysql1, mysql2, mysql3)


mysql> grant all privileges on *.* to 'mha'@'Mysql1' identified by 'manager';     ## note: this uses the host name; host names are case-insensitive, the server resolves them automatically
mysql> grant all privileges on *.* to 'mha'@'Mysql2' identified by 'manager';
mysql> grant all privileges on *.* to 'mha'@'Mysql3' identified by 'manager';
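
Grants by host name only help if the names actually resolve. The original does not show that step; one simple way (an assumption, adapt to your own name resolution) is to add the three hosts to /etc/hosts on every machine:

[root@mysql1 ~]# cat >> /etc/hosts << EOF
192.168.75.200 mysql1
192.168.75.123 mysql2
192.168.75.134 mysql3
EOF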

(3) On the master, check the binary log file and position


mysql> show master status;  ## the master must not receive any writes at this point
+-------------------+----------+--------------+------------------+-------------------+
| File              | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+-------------------+----------+--------------+------------------+-------------------+
| master-bin.000001 |     1294 |              |                  |                   |
+-------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)
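
If the master cannot simply be left idle, a common way (not shown in the original) to keep the File/Position from moving while the slaves are set up is to hold a global read lock in a separate session:

mysql> flush tables with read lock;      ## blocks writes so File and Position stay fixed
mysql> show master status;
mysql> unlock tables;                    ## release only after the slaves have run change master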

8. Start replication on each of the two slaves

  • The IP here is the master's IP; the binary log file and position must both be taken from the master's show master status output above

mysql> change master to master_host='192.168.75.200',master_user='myslave',master_password='123',master_log_file='master-bin.000001',master_log_pos=1294;
mysql> start slave;

9. Check the IO and SQL threads on the slaves (mysql2, mysql3)


mysql> show slave status\G
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
            

10. The two slaves must be set to read-only mode


mysql> set global read_only=1;
mysql> flush privileges;
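
Note that set global read_only=1 is lost when mysqld restarts; if the slaves should stay read-only permanently, a possible addition (not part of the original steps) is to persist it in /etc/my.cnf on mysql2 and mysql3:

[root@mysql2 ~]# vim /etc/my.cnf
## add under the existing [mysqld] section
read_only = 1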

8. Install the Node Component (All Nodes)

  • Install the node component on all servers (mysql1, mysql2, mysql3, MHA-manager)

  • All four machines need it, including the MHA server

1. Install the dependencies MHA needs on every server; install the epel repository first


[root@mysql2 mysql-5.6.36]# yum -y install epel-release --nogpgcheck    ## install the epel repository first, otherwise some of the packages below cannot be found
[root@mysql1 ~]# yum -y install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker perl-CPAN

2. The MHA package differs between operating system versions; version 0.57 is used here

Note: the node component must be installed on all servers first, and only then is the manager component installed on the MHA-manager node,

  • because the manager depends on the node component. Below, the node component is installed on mysql1

[root@mysql1 ~]# tar zxvf mha4mysql-node-0.57.tar.gz 
[root@mysql1 ~]# cd mha4mysql-node-0.57/
[root@mysql1 mha4mysql-node-0.57]# perl Makefile.PL 
[root@mysql1 mha4mysql-node-0.57]# make
[root@mysql1 mha4mysql-node-0.57]# make install
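
The same four commands must be repeated on mysql2, mysql3 and MHA-manager. A sketch of doing it from mysql1 over ssh (assuming the tarball sits in /root; if SSH keys are not exchanged yet, each command simply prompts for the root password, or just repeat the steps by hand on every host):

[root@mysql1 ~]# for host in 192.168.75.123 192.168.75.134 192.168.75.144; do
>   scp /root/mha4mysql-node-0.57.tar.gz $host:/root/
>   ssh $host "cd /root && tar zxf mha4mysql-node-0.57.tar.gz && cd mha4mysql-node-0.57 && perl Makefile.PL && make && make install"
> done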

8.1 Install the manager component on MHA-manager

(1) Install the manager component on MHA-manager (note: the node component must be installed before the manager component)


[root@mha-manager ~]# tar zxvf mha4mysql-manager-0.57.tar.gz 
[root@mha-manager ~]# cd mha4mysql-manager-0.57/
[root@mha-manager mha4mysql-manager-0.57]# perl Makefile.PL 
[root@mha-manager mha4mysql-manager-0.57]# make
[root@mha-manager mha4mysql-manager-0.57]# make install

After installation, several tools are generated under /usr/local/bin/, mainly the following:


[root@mha-manager mha4mysql-manager-0.57]# cd /usr/local/bin/
[root@mha-manager bin]# ll
total 84
-r-xr-xr-x. 1 root root 16381 Aug 28 01:34 apply_diff_relay_logs
-r-xr-xr-x. 1 root root  4807 Aug 28 01:34 filter_mysqlbinlog
-r-xr-xr-x. 1 root root  1995 Aug 28 01:40 masterha_check_repl  ## checks the MySQL replication status
-r-xr-xr-x. 1 root root  1779 Aug 28 01:40 masterha_check_ssh  ## checks the MHA SSH configuration
-r-xr-xr-x. 1 root root  1865 Aug 28 01:40 masterha_check_status  ## checks the current MHA running status
-r-xr-xr-x. 1 root root  3201 Aug 28 01:40 masterha_conf_host     ## adds or removes configured server entries
-r-xr-xr-x. 1 root root  2517 Aug 28 01:40 masterha_manager  ## script that starts the manager
-r-xr-xr-x. 1 root root  2165 Aug 28 01:40 masterha_master_monitor  ## checks whether the master is down
-r-xr-xr-x. 1 root root  2373 Aug 28 01:40 masterha_master_switch ## controls failover (automatic or manual)
-r-xr-xr-x. 1 root root  5171 Aug 28 01:40 masterha_secondary_check
-r-xr-xr-x. 1 root root  1739 Aug 28 01:40 masterha_stop  ## stops the manager
-r-xr-xr-x. 1 root root  8261 Aug 28 01:34 purge_relay_logs
-r-xr-xr-x. 1 root root  7525 Aug 28 01:34 save_binary_logs

(2) The node installation also generates several scripts under /usr/local/bin (these tools are normally triggered by the MHA Manager scripts and need no manual intervention)

The main ones are:


[root@mysql1 bin]# ll
total 26440
-r-xr-xr-x. 1 root root    16381 Aug 28 01:32 apply_diff_relay_logs  ## identifies differential relay log events and applies them to the other slaves
-rwxr-xr-x. 1 root root  8157912 Aug 27 17:11 cmake
-rwxr-xr-x. 1 root root  8743880 Aug 27 17:11 cpack
-rwxr-xr-x. 1 root root 10124504 Aug 27 17:11 ctest
-r-xr-xr-x. 1 root root     4807 Aug 28 01:32 filter_mysqlbinlog     ## removes unnecessary ROLLBACK events (no longer used by MHA)
-r-xr-xr-x. 1 root root     8261 Aug 28 01:32 purge_relay_logs    ## purges relay logs (does not block the SQL thread)
-r-xr-xr-x. 1 root root     7525 Aug 28 01:32 save_binary_logs  ## saves and copies the master's binary logs
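
On MHA-managed slaves the automatic relay-log purge is usually disabled, so purge_relay_logs is typically scheduled from cron on each slave instead. A hedged example using the mha account created earlier (schedule and log path are arbitrary choices, not from the original):

[root@mysql2 ~]# crontab -e
0 4 * * * /usr/local/bin/purge_relay_logs --user=mha --password=manager --disable_relay_log_purge --workdir=/tmp >> /var/log/purge_relay_logs.log 2>&1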

(3) Configure passwordless authentication

1. Passwordless authentication from mha-manager to all database nodes


[root@mha-manager ~]# ssh-keygen -t rsa         ## press Enter at every prompt
[root@mha-manager ~]# ssh-copy-id 192.168.75.200
[root@mha-manager ~]# ssh-copy-id 192.168.75.123
[root@mha-manager ~]# ssh-copy-id 192.168.75.134

2. Passwordless authentication from mysql1 to the mysql2 and mysql3 database nodes


[root@mysql1 ~]# ssh-keygen -t rsa         ## press Enter at every prompt
[root@mysql1 ~]# ssh-copy-id 192.168.75.123
[root@mysql1 ~]# ssh-copy-id 192.168.75.134

3. Passwordless authentication from mysql2 to the mysql1 and mysql3 database nodes


[root@mysql2 ~]# ssh-keygen -t rsa         ## press Enter at every prompt
[root@mysql2 ~]# ssh-copy-id 192.168.75.200
[root@mysql2 ~]# ssh-copy-id 192.168.75.134

4. Passwordless authentication from mysql3 to the mysql1 and mysql2 database nodes


[root@mysql3 ~]# ssh-keygen -t rsa         ## press Enter at every prompt
[root@mysql3 ~]# ssh-copy-id 192.168.75.200
[root@mysql3 ~]# ssh-copy-id 192.168.75.123
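
Before running masterha_check_ssh later, the key exchange can be spot-checked by hand from the manager (a quick sketch; each command should print the remote host name without asking for a password):

[root@mha-manager ~]# for ip in 192.168.75.200 192.168.75.123 192.168.75.134; do ssh root@$ip hostname; done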

9. Configure MHA

1. On the manager node, copy the sample scripts to the /usr/local/bin directory

[root@mha-manager ~]# cp -ra /root/mha4mysql-manager-0.57/samples/scripts /usr/local/bin/  
The copy brings in four executable files
[root@mha-manager ~]# ll /usr/local/bin/scripts/    
total 32
-rwxr-xr-x. 1 1001 1001  3648 May 31 2015 master_ip_failover  ## manages the VIP during automatic failover
-rwxr-xr-x. 1 1001 1001  9870 May 31 2015 master_ip_online_change  ## manages the VIP during an online switchover (VIP = virtual IP)
-rwxr-xr-x. 1 1001 1001 11867 May 31 2015 power_manager  ## powers off the host after a failure
-rwxr-xr-x. 1 1001 1001  1360 May 31 2015 send_report  ## sends an alert after a failover

2. Copy the VIP failover management script above to the /usr/local/bin directory; a script is used to manage the VIP here


[root@mha-manager ~]# cp /usr/local/bin/scripts/master_ip_failover  /usr/local/bin/

3. Modify its content as follows (delete the original content and paste in the version below):


[root@mha-manager ~]# vim /usr/local/bin/master_ip_failover 
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';

use Getopt::Long;

my (
$command, $ssh_user, $orig_master_host, $orig_master_ip,
$orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);
############################# added section #########################################
my $vip = '192.168.75.220';
my $brdc = '192.168.75.255';
my $ifdev = 'ens33';
my $key = '1';
my $ssh_start_vip = "/sbin/ifconfig ens33:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig ens33:$key down";
my $exit_code = 0;
#my $ssh_start_vip = "/usr/sbin/ip addr add $vip/24 brd $brdc dev $ifdev label $ifdev:$key;/usr/sbin/arping -q -A -c 1 -I $ifdev $vip;iptables -F;";
#my $ssh_stop_vip = "/usr/sbin/ip addr del $vip/24 dev $ifdev label $ifdev:$key";
##################################################################################
GetOptions(
'command=s' => \$command,
'ssh_user=s' => \$ssh_user,
'orig_master_host=s' => \$orig_master_host,
'orig_master_ip=s' => \$orig_master_ip,
'orig_master_port=i' => \$orig_master_port,
'new_master_host=s' => \$new_master_host,
'new_master_ip=s' => \$new_master_ip,
'new_master_port=i' => \$new_master_port,
);

exit &main();

sub main {

print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";

if ( $command eq "stop" || $command eq "stopssh" ) {

    my $exit_code = 1;
    eval {
        print "Disabling the VIP on old master: $orig_master_host \n";
        &stop_vip();
        $exit_code = 0;
    };
    if ($@) {
        warn "Got Error: $@\n";
        exit $exit_code;
    }
    exit $exit_code;
}
elsif ( $command eq "start" ) {

    my $exit_code = 10;
    eval {
        print "Enabling the VIP - $vip on the new master - $new_master_host \n";
        &start_vip();
        $exit_code = 0;
    };
    if ($@) {
        warn $@;
        exit $exit_code;
    }
    exit $exit_code;
}
elsif ( $command eq "status" ) {
    print "Checking the Status of the script.. OK \n";
    exit 0;
}
else {
    &usage();
    exit 1;
}
}

sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
# A simple system call that disables the VIP on the old_master
sub stop_vip() {
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
     
print
"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}


4. Create the MHA working directory and copy the configuration file


[root@mha-manager ~]# mkdir /etc/masterha
[root@mha-manager ~]# cp /root/mha4mysql-manager-0.57/samples/conf/app1.cnf  /etc/masterha

[root@mha-manager ~]# vim /etc/masterha/app1.cnf 
[server default]
manager_workdir=/var/log/masterha/app1
manager_log=/var/log/masterha/app1/manager.log
master_binlog_dir=/usr/local/mysql/data
master_ip_failover_script=/usr/local/bin/scripts/master_ip_failover
master_ip_online_change_script=/usr/local/bin/scripts/master_ip_online_change
password=manager
ping_interval=1
remote_workdir=/tmp
repl_password=123
repl_user=myslave
report_script=/usr/local/send_report
secondary_check_script=/usr/local/bin/masterha_secondary_check -s 192.168.75.123 -s 192.168.75.134
shutdown_script=""
ssh_user=root
user=mha

[server1]
hostname=192.168.75.200
port=3306

[server2]
candidate_master=1
check_repl_delay=0
hostname=192.168.75.123
port=3306

[server3]
hostname=192.168.75.134
port=3306
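
manager_workdir and manager_log point into /var/log/masterha/app1, and the nohup command used in the test section redirects its output there as well, so create the directory before starting the manager (this step is implied but not shown in the original):

[root@mha-manager ~]# mkdir -p /var/log/masterha/app1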


10. Testing

(1) Test passwordless SSH; a successful run ends with "successfully"


[root@mha-manager ~]# masterha_check_ssh -conf=/etc/masterha/app1.cnf
Fri Aug 28 15:59:00 2020 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Fri Aug 28 15:59:00 2020 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Fri Aug 28 15:59:00 2020 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Fri Aug 28 15:59:00 2020 - [info] Starting SSH connection tests..
Fri Aug 28 15:59:03 2020 - [debug] 
Fri Aug 28 15:59:01 2020 - [debug]  Connecting via SSH from root@192.168.75.200(192.168.75.200:22) to root@192.168.75.100(192.168.75.100:22)..
Fri Aug 28 15:59:02 2020 - [debug]   ok.
Fri Aug 28 15:59:02 2020 - [debug]  Connecting via SSH from root@192.168.75.200(192.168.75.200:22) to root@192.168.75.180(192.168.75.180:22)..
Fri Aug 28 15:59:02 2020 - [debug]   ok.
Fri Aug 28 15:59:03 2020 - [debug] 
Fri Aug 28 15:59:00 2020 - [debug]  Connecting via SSH from root@192.168.75.123(192.168.75.123:22) to root@192.168.75.200(192.168.75.200:22)..
Fri Aug 28 15:59:01 2020 - [debug]   ok.
Fri Aug 28 15:59:01 2020 - [debug]  Connecting via SSH from root@192.168.75.123(192.168.75.123:22) to root@192.168.75.180(192.168.75.180:22)..
Fri Aug 28 15:59:02 2020 - [debug]   ok.
Fri Aug 28 15:59:04 2020 - [debug] 
Fri Aug 28 15:59:01 2020 - [debug]  Connecting via SSH from root@192.168.75.134(192.168.75.134:22) to root@192.168.75.100(192.168.75.100:22)..
Fri Aug 28 15:59:02 2020 - [debug]   ok.
Fri Aug 28 15:59:02 2020 - [debug]  Connecting via SSH from root@192.168.75.134(192.168.75.134:22) to root@192.168.75.200(192.168.75.200:22)..
Fri Aug 28 15:59:03 2020 - [debug]   ok.
Fri Aug 28 15:59:04 2020 - [info] All SSH connection tests passed successfully.

(2) Test the replication configuration; "is OK" at the end means success

[root@mha-manager ~]# masterha_check_repl -conf=/etc/masterha/app1.cnf 
Sat Aug 29 13:41:21 2020 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Sat Aug 29 13:41:21 2020 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Sat Aug 29 13:41:21 2020 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Sat Aug 29 13:41:21 2020 - [info] MHA::MasterMonitor version 0.57.
Sat Aug 29 13:41:23 2020 - [info] GTID failover mode = 0
Sat Aug 29 13:41:23 2020 - [info] Dead Servers:
Sat Aug 29 13:41:23 2020 - [info] Alive Servers:
Sat Aug 29 13:41:23 2020 - [info]   192.168.75.123(192.168.75.123:3306)
Sat Aug 29 13:41:23 2020 - [info]   192.168.75.134(192.168.75.134:3306)
Sat Aug 29 13:41:23 2020 - [info] Alive Slaves:
Sat Aug 29 13:41:23 2020 - [info]   192.168.75.134(192.168.75.134:3306)  Version=5.6.36-log (oldest major version between slaves) log-bin:enabled
Sat Aug 29 13:41:23 2020 - [info]     Replicating from 192.168.75.123(192.168.75.123:3306)
Sat Aug 29 13:41:23 2020 - [info] Current Alive Master: 192.168.75.123(192.168.75.123:3306)
Sat Aug 29 13:41:23 2020 - [info] Checking slave configurations..
Sat Aug 29 13:41:23 2020 - [warning]  relay_log_purge=0 is not set on slave 192.168.75.134(192.168.75.134:3306).
Sat Aug 29 13:41:23 2020 - [info] Checking replication filtering settings..
Sat Aug 29 13:41:23 2020 - [info]  binlog_do_db= , binlog_ignore_db= 
Sat Aug 29 13:41:23 2020 - [info]  Replication filtering check ok.
Sat Aug 29 13:41:23 2020 - [info] GTID (with auto-pos) is not supported
Sat Aug 29 13:41:23 2020 - [info] Starting SSH connection tests..
Sat Aug 29 13:41:24 2020 - [info] All SSH connection tests passed successfully.
Sat Aug 29 13:41:24 2020 - [info] Checking MHA Node version..
Sat Aug 29 13:41:24 2020 - [info]  Version check ok.
Sat Aug 29 13:41:24 2020 - [info] Checking SSH publickey authentication settings on the current master..
Sat Aug 29 13:41:24 2020 - [info] HealthCheck: SSH to 192.168.75.123 is reachable.
Sat Aug 29 13:41:24 2020 - [info] Master MHA Node version is 0.57.
Sat Aug 29 13:41:24 2020 - [info] Checking recovery script configurations on 192.168.75.123(192.168.75.123:3306)..
Sat Aug 29 13:41:24 2020 - [info]   Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/usr/local/mysql/data --output_file=/tmp/save_binary_logs_test --manager_version=0.57 --start_file=master-bin.000005 
Sat Aug 29 13:41:24 2020 - [info]   Connecting to root@192.168.75.123(192.168.75.123:22).. 
  Creating /tmp if not exists..    ok.
  Checking output directory is accessible or not..
   ok.
  Binlog found at /usr/local/mysql/data, up to master-bin.000005
Sat Aug 29 13:41:25 2020 - [info] Binlog setting check done.
Sat Aug 29 13:41:25 2020 - [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..
Sat Aug 29 13:41:25 2020 - [info]   Executing command : apply_diff_relay_logs --command=test --slave_user='mha' --slave_host=192.168.75.134 --slave_ip=192.168.75.134 --slave_port=3306 --workdir=/tmp --target_version=5.6.36-log --manager_version=0.57 --relay_log_info=/usr/local/mysql/data/relay-log.info  --relay_dir=/usr/local/mysql/data/  --slave_pass=xxx
Sat Aug 29 13:41:25 2020 - [info]   Connecting to root@192.168.75.134(192.168.75.134:22).. 
  Checking slave recovery environment settings..
    Opening /usr/local/mysql/data/relay-log.info ... ok.
    Relay log found at /usr/local/mysql/data, up to relay-log-bin.000004
    Temporary relay log file is /usr/local/mysql/data/relay-log-bin.000004
    Testing mysql connection and privileges..Warning: Using a password on the command line interface can be insecure.
 done.
    Testing mysqlbinlog output.. done.
    Cleaning up test file(s).. done.
Sat Aug 29 13:41:25 2020 - [info] Slaves settings check done.
Sat Aug 29 13:41:25 2020 - [info] 
192.168.75.123(192.168.75.123:3306) (current master)
 +--192.168.75.134(192.168.75.134:3306)

Sat Aug 29 13:41:25 2020 - [info] Checking replication health on 192.168.75.134..
Sat Aug 29 13:41:25 2020 - [info]  ok.
Sat Aug 29 13:41:25 2020 - [info] Checking master_ip_failover_script status:
Sat Aug 29 13:41:25 2020 - [info]   /usr/local/bin/scripts/master_ip_failover --command=status --ssh_user=root --orig_master_host=192.168.75.123 --orig_master_ip=192.168.75.123 --orig_master_port=3306 
Sat Aug 29 13:41:25 2020 - [info]  OK.
Sat Aug 29 13:41:25 2020 - [warning] shutdown_script is not defined.
Sat Aug 29 13:41:25 2020 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.

(3) Set the virtual IP on the master server


For the first run, the virtual IP has to be brought up manually on the master

[root@mysql1 ~]# /sbin/ifconfig ens33:1 192.168.75.220/24
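
Optionally verify that the address came up (the alias name ens33:1 matches the failover script above):

[root@mysql1 ~]# ifconfig ens33:1      ## should show inet 192.168.75.220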

(4) Start MHA


[root@mha-manager ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf  --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &

(5) Check MHA status

  • The output shows that the current master is the mysql1 node

[root@mha-manager ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:81859) is running(0:PING_OK), master:192.168.75.200
[2]+  Exit 1                nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1

(6) Check the log; it also shows that the current master is mysql1


[root@mha-manager ~]# cat /var/log/masterha/app1/manager.log 

192.168.75.200(192.168.75.200:3306) (current master)
 +--192.168.75.123(192.168.75.123:3306)
 +--192.168.75.134(192.168.75.134:3306)
 

(7) Check that the VIP exists on mysql1; this address does not disappear when the manager node is stopped

[root@mysql1 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.75.100  netmask 255.255.255.0  broadcast 192.168.75.255
        inet6 fe80::bdfe:6407:b656:c916  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:2f:7b:dc  txqueuelen 1000  (Ethernet)
        RX packets 26401  bytes 11820994 (11.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 19535  bytes 3314774 (3.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.75.220  netmask 255.255.255.0  broadcast 192.168.75.255
        ether 00:0c:29:2f:7b:dc  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 354  bytes 21650 (21.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 354  bytes 21650 (21.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        

——————————————————————————————
Verification

(8) Simulate a failure of the master server

After a successful switchover the end of the log contains "successfully"; at the same time MHA stops itself and rewrites app1.cnf, removing the failed MySQL node


[root@mha-manager ~]# tailf /var/log/masterha/app1/manager.log  ## keep watching the log to follow the failover

[root@mysql1 ~]# pkill -9 mysql       ## watch the master change: the floating (virtual) IP should drift to the candidate master
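
It can also be confirmed on the remaining slave that MHA has re-pointed replication at the new master (a quick check, not part of the original steps):

[root@mysql3 ~]# mysql -uroot -p -e "show slave status\G" | grep -E "Master_Host|Running"
## Master_Host should now be 192.168.75.123 and both threads should still be Yes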

(9) Check mysql2


[root@mysql2 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.75.123  netmask 255.255.255.0  broadcast 192.168.75.255
        inet6 fe80::1199:c740:2050:ac62  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:e7:9d:50  txqueuelen 1000  (Ethernet)
        RX packets 36370  bytes 12405923 (11.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 24783  bytes 4235966 (4.0 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.75.220  netmask 255.255.255.0  broadcast 192.168.75.255
        ether 00:0c:29:e7:9d:50  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 303  bytes 25961 (25.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 303  bytes 25961 (25.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:5f:ad:7f  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        

(10) Check the MHA server


[root@localhost scripts]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:27220) is running(0:PING_OK), master:192.168.75.123
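
Because --remove_dead_master_conf removed the old master from app1.cnf, mysql1 does not rejoin the cluster automatically after it is repaired. A common recovery sequence (a sketch, not covered by the original post) is: start mysqld on mysql1 again, attach it as a slave of the new master (the exact change master statement can usually be taken from the failover section of manager.log), add its [server1] block back to /etc/masterha/app1.cnf, and restart the manager:

[root@mysql1 ~]# /usr/local/mysql/bin/mysqld_safe --user=mysql &
mysql> change master to master_host='192.168.75.123',master_user='myslave',master_password='123',master_log_file='master-bin.000001',master_log_pos=120;    ## file/position are placeholders, take the real values from manager.log or the new master
mysql> start slave;
[root@mha-manager ~]# vim /etc/masterha/app1.cnf     ## re-add the [server1] section for 192.168.75.200
[root@mha-manager ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &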

11. Troubleshooting

MHA log: vim manager.log
Configuration file: vim /etc/masterha/app1.cnf
vim /usr/local/bin/master_ip_failover


[root@mha-manager ~]# ll /usr/local/bin/scripts/    
total 32
-rwxr-xr-x. 1 1001 1001  3648 May 31 2015 master_ip_failover  ## manages the VIP during automatic failover
-rwxr-xr-x. 1 1001 1001  9870 May 31 2015 master_ip_online_change  ## manages the VIP during an online switchover (VIP = virtual IP)
-rwxr-xr-x. 1 1001 1001 11867 May 31 2015 power_manager  ## powers off the host after a failure
-rwxr-xr-x. 1 1001 1001  1360 May 31 2015 send_report  ## sends an alert after a failover


cd /var/log/masterha/app1
-rw-r--r--. 1 root root     0 Aug 29 01:35 app1.failover.complete
-rw-r--r--. 1 root root    37 Aug 29 16:07 app1.master_status.health
-rw-r--r--. 1 root root 10618 Aug 29 14:05 manager.log

12. Notes on the Virtual IP

In the earlier post on deploying MHA for MySQL high availability, the basic MHA architecture was already set up, but the virtual IP problem was left open: when the master goes down and a new master takes over, the database IP the front-end application connects to has changed, so a virtual IP has to be introduced. The first tool that comes to mind for a virtual IP is keepalived, but it has a drawback, the split-brain problem, which is why managing the VIP with a script, as done here, is generally recommended in production…

Problem notes
MySQL issue: the mysqld process stopped

== Note: ==

(screenshot)
