PS: Starting with MySQL 5.5, MySQL supports semi-synchronous replication in the form of a plugin.
MHA (Master High Availability) is currently a relatively mature solution for MySQL high availability. It was developed by Yoshinori Matsunobu (formerly of the Japanese company DeNA, now at Facebook) and is an excellent piece of high-availability software for failover and master promotion in MySQL environments. During a failover, MHA can complete the database switchover automatically within 0 to 30 seconds, and while doing so it preserves data consistency to the greatest extent possible, achieving high availability in the true sense. MHA has two roles: MHA Node (data node) and MHA Manager (management node). MHA Manager can be deployed on a dedicated machine to manage multiple master-slave clusters, or on one of the slave nodes. MHA Node runs on every MySQL server. MHA Manager periodically probes the master node in the cluster; when the master fails, it automatically promotes the slave with the most recent data to be the new master, then repoints all other slaves at the new master. The entire failover process is completely transparent to applications.
During automatic failover, MHA tries to save the binary log from the crashed master to minimize data loss, but this is not always possible. For example, if the master's hardware has failed or it cannot be reached over SSH, MHA cannot save the binary log; it performs the failover anyway and loses the most recent data. Using the semi-synchronous replication introduced in MySQL 5.5 greatly reduces the risk of data loss, and MHA can be combined with it: as long as at least one slave has received the latest binary log events, MHA can apply them to all the other slaves, thereby keeping the data on all nodes consistent.
Asynchronous replication
MySQL replication is asynchronous by default: after executing a transaction committed by a client, the master returns the result to the client immediately, without caring whether any slave has received and processed it. This creates a problem: if the master crashes, transactions already committed on the master may not have reached any slave; if a slave is then forcibly promoted to master, the new master's data may be incomplete.
Fully synchronous replication
The master returns to the client only after all slaves have also executed the transaction. Because it must wait for every slave to finish, the performance of fully synchronous replication inevitably suffers severely.
Semi-synchronous replication
A middle ground between asynchronous and fully synchronous replication: after executing a client's transaction, the master does not return to the client immediately, but waits until at least one slave has received the events and written them to its relay log. Compared with asynchronous replication, semi-synchronous replication improves data safety, at the cost of some latency, at minimum one TCP/IP round trip, so it is best used on low-latency networks.
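The plugin installation and my.cnf settings are shown step by step later in this walkthrough; as a quick sketch, the same switches can also be flipped at runtime without a restart, assuming the semisync plugins are already installed:

```sql
-- On the master: enable semi-sync and set the ack timeout in milliseconds;
-- if no slave acknowledges within this window, the master temporarily
-- degrades to asynchronous replication.
SET GLOBAL rpl_semi_sync_master_enabled = 1;
SET GLOBAL rpl_semi_sync_master_timeout = 1000;

-- On each slave: enable the slave-side plugin, then restart the IO thread
-- so it reconnects and registers with the master as a semi-sync slave.
SET GLOBAL rpl_semi_sync_slave_enabled = 1;
STOP SLAVE IO_THREAD;
START SLAVE IO_THREAD;
```

Settings made with SET GLOBAL do not survive a restart, which is why the walkthrough below also puts them in /etc/my.cnf.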
How MHA works
Compared with other HA software, MHA's goal is to keep the master in a MySQL replication setup highly available. Its distinguishing feature is the ability to repair the differences in relay logs among multiple slaves, so that all slaves end up with consistent data, after which one of them is chosen as the new master and the others are pointed at it. The failover steps are:

- Save binary log events (binlog events) from the crashed master.
- Identify the slave with the most recent updates.
- Apply the differential relay logs to the other slaves.
- Apply the binary log events saved from the master.
- Promote one slave to be the new master.
- Point the other slaves at the new master and resume replication.
Hostname | IP | Role |
---|---|---|
master01 | 192.168.1.20 | master MySQL (write) |
master02 | 192.168.1.40 | slave MySQL (read) |
slave | 192.168.1.30 | slave MySQL (read) |
manager | 192.168.1.42 | management node |
master01 serves writes; the candidate master (in practice master02) serves reads, and slave also serves reads. If master01 goes down, the candidate master is promoted to the new master and slave is repointed at it; manager acts as the management server.
PS: the following must be done on every host
[root@master01 ~]# systemctl stop firewalld
[root@master01 ~]# setenforce 0
setenforce: SELinux is disabled
[root@master01 ~]# systemctl disable firewalld
[root@master01 ~]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
[root@master01 ~]# vim /etc/hosts
[root@master01 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.20 master01
192.168.1.40 master02
192.168.1.30 slave
192.168.1.42 manager
[root@master01 ~]# yum -y install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-Config-IniFiles ncftp perl-Params-Validate perl-CPAN perl-Test-Mock-LWP.noarch perl-LWP-Authen-Negotiate.noarch perl-devel perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker
#master01(192.168.1.20)
[root@master01 ~]# ssh-keygen -t rsa
[root@master01 ~]# for i in master01 master02 slave manager ; do ssh-copy-id $i;done
#master02(192.168.1.40)
[root@master02 ~]# ssh-keygen -t rsa
[root@master02 ~]# for i in master01 master02 slave manager ; do ssh-copy-id $i;done
#slave(192.168.1.30)
[root@slave ~]# ssh-keygen -t rsa
[root@slave ~]# for i in master01 master02 slave manager ; do ssh-copy-id $i;done
#manager(192.168.1.42)
[root@manager ~]# ssh-keygen -t rsa
[root@manager ~]# for i in master01 master02 slave manager ; do ssh-copy-id $i;done
Test passwordless SSH login:
[root@manager ~]# ssh master01
Last login: Fri Mar 12 12:57:39 2021 from 192.168.1.1
[root@master01 ~]# exit
logout
Connection to master01 closed.
[root@manager ~]# ssh master02
Last failed login: Fri Mar 12 13:54:41 CST 2021 from manager on ssh:notty
There was 1 failed login attempt since the last successful login.
Last login: Fri Mar 12 12:57:43 2021 from 192.168.1.1
[root@master02 ~]# exit
logout
Connection to master02 closed.
[root@manager ~]# ssh slave
Last login: Fri Mar 12 12:57:48 2021 from 192.168.1.1
[root@slave ~]# exit
logout
Connection to slave closed.
[root@manager ~]#
#check the MySQL plugin directory
mysql> show variables like '%plugin_dir%';
+---------------+------------------------------+
| Variable_name | Value |
+---------------+------------------------------+
| plugin_dir | /usr/local/mysql/lib/plugin/ |
+---------------+------------------------------+
1 row in set (0.16 sec)
#check whether the database supports dynamic plugin loading
mysql> show variables like '%have_dynamic%';
+----------------------+-------+
| Variable_name | Value |
+----------------------+-------+
| have_dynamic_loading | YES |
+----------------------+-------+
1 row in set (0.00 sec)
#master01(192.168.1.20)
mysql> install plugin rpl_semi_sync_master soname 'semisync_master.so';
Query OK, 0 rows affected (0.18 sec)
mysql> install plugin rpl_semi_sync_slave soname 'semisync_slave.so';
Query OK, 0 rows affected (0.11 sec)
#do the same on master02 and slave!!!
Check that the plugins are installed correctly
mysql> show plugins;
+----------------------------+----------+--------------------+--------------------+---------+
| Name | Status | Type | Library | License |
+----------------------------+----------+--------------------+--------------------+---------+
......
| rpl_semi_sync_master | ACTIVE | REPLICATION | semisync_master.so | GPL |
| rpl_semi_sync_slave | ACTIVE | REPLICATION | semisync_slave.so | GPL |
+----------------------------+----------+--------------------+--------------------+---------+
46 rows in set (0.00 sec)
#do the same on master02 and slave!!!
View the semi-sync related variables
mysql> show variables like '%rpl_semi_sync%';
+-------------------------------------------+------------+
| Variable_name | Value |
+-------------------------------------------+------------+
| rpl_semi_sync_master_enabled | OFF |
| rpl_semi_sync_master_timeout | 10000 |
| rpl_semi_sync_master_trace_level | 32 |
| rpl_semi_sync_master_wait_for_slave_count | 1 |
| rpl_semi_sync_master_wait_no_slave | ON |
| rpl_semi_sync_master_wait_point | AFTER_SYNC |
| rpl_semi_sync_slave_enabled | OFF |
| rpl_semi_sync_slave_trace_level | 32 |
+-------------------------------------------+------------+
8 rows in set (0.00 sec)
The table above shows that the semi-synchronous replication plugins are installed but not yet enabled, hence the OFF values.
#master01(192.168.1.20)
[root@master01 ~]# vim /etc/my.cnf
[root@master01 ~]# tail -10 /etc/my.cnf
server-id = 1
log-bin = mysql-bin
binlog_format = mixed
log-bin-index = mysql-bin.index
rpl_semi_sync_master_enabled = 1
rpl_semi_sync_master_timeout = 1000
rpl_semi_sync_slave_enabled = 1
relay_log_purge = 0
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
[root@master01 ~]# systemctl restart mysqld
#master02(192.168.1.40)
[root@master02 ~]# vim /etc/my.cnf
[root@master02 ~]# tail -10 /etc/my.cnf
server-id = 2
log-bin=mysql-bin
binlog_format=mixed
log-bin-index=mysql-bin.index
relay_log_purge=0
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
rpl_semi_sync_master_enabled=1
rpl_semi_sync_master_timeout=10000
rpl_semi_sync_slave_enabled=1
[root@master02 ~]# systemctl restart mysqld
#slave(192.168.1.30)
[root@slave ~]# vim /etc/my.cnf
[root@slave ~]# tail -6 /etc/my.cnf
server-id = 3
log-bin = mysql-bin
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
read_only = 1
rpl_semi_sync_slave_enabled=1
[root@slave ~]# systemctl restart mysqld
mysql> show variables like '%rpl_semi_sync%';
+-------------------------------------------+------------+
| Variable_name | Value |
+-------------------------------------------+------------+
| rpl_semi_sync_master_enabled | ON |
| rpl_semi_sync_master_timeout | 1000 |
| rpl_semi_sync_master_trace_level | 32 |
| rpl_semi_sync_master_wait_for_slave_count | 1 |
| rpl_semi_sync_master_wait_no_slave | ON |
| rpl_semi_sync_master_wait_point | AFTER_SYNC |
| rpl_semi_sync_slave_enabled | ON |
| rpl_semi_sync_slave_trace_level | 32 |
+-------------------------------------------+------------+
8 rows in set (0.00 sec)
mysql> show status like '%rpl_semi_sync%';
+--------------------------------------------+-------+
| Variable_name | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_master_clients | 0 |
| Rpl_semi_sync_master_net_avg_wait_time | 0 |
| Rpl_semi_sync_master_net_wait_time | 0 |
| Rpl_semi_sync_master_net_waits | 0 |
| Rpl_semi_sync_master_no_times | 0 |
| Rpl_semi_sync_master_no_tx | 0 |
| Rpl_semi_sync_master_status | ON |
| Rpl_semi_sync_master_timefunc_failures | 0 |
| Rpl_semi_sync_master_tx_avg_wait_time | 0 |
| Rpl_semi_sync_master_tx_wait_time | 0 |
| Rpl_semi_sync_master_tx_waits | 0 |
| Rpl_semi_sync_master_wait_pos_backtraverse | 0 |
| Rpl_semi_sync_master_wait_sessions | 0 |
| Rpl_semi_sync_master_yes_tx | 0 |
| Rpl_semi_sync_slave_status | OFF |
+--------------------------------------------+-------+
15 rows in set (0.00 sec)
A few of these status variables are worth watching: Rpl_semi_sync_master_status (whether the master is currently running in semi-sync mode), Rpl_semi_sync_master_clients (how many semi-sync slaves are connected), Rpl_semi_sync_master_yes_tx (transactions successfully acknowledged by a slave), and Rpl_semi_sync_master_no_tx (transactions not acknowledged in time, i.e. committed asynchronously).
#master01(192.168.1.20)
mysql> grant replication slave on *.* to mharep@'192.168.1.%' identified by '123456';
Query OK, 0 rows affected, 1 warning (1.00 sec)
mysql> grant all privileges on *.* to manager@'192.168.1.%' identified by '123456';
Query OK, 0 rows affected, 1 warning (0.01 sec)
mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 | 742 | | | |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)
#master02(192.168.1.40)
mysql> grant replication slave on *.* to mharep@'192.168.1.%' identified by '123456';
Query OK, 0 rows affected, 1 warning (10.01 sec)
mysql> grant all privileges on *.* to manager@'192.168.1.%' identified by '123456';
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> change master to master_host='192.168.1.20',master_user='mharep',master_password='123456',master_log_file='mysql-bin.000001',master_log_pos=742;
Query OK, 0 rows affected, 2 warnings (0.11 sec)
mysql> start slave ;
Query OK, 0 rows affected (0.00 sec)
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.1.20
Master_User: mharep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 742
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
#slave(192.168.1.30)
mysql> grant all privileges on *.* to manager@'192.168.1.%' identified by '123456';
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> change master to master_host='192.168.1.20',master_user='mharep',master_password='123456',master_log_file='mysql-bin.000001',master_log_pos=742;
Query OK, 0 rows affected, 2 warnings (0.12 sec)
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.1.20
Master_User: mharep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 742
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
mysql> show status like '%rpl_semi_sync%';
+--------------------------------------------+-------+
| Variable_name | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_master_clients | 2 |
| Rpl_semi_sync_master_net_avg_wait_time | 0 |
| Rpl_semi_sync_master_net_wait_time | 0 |
| Rpl_semi_sync_master_net_waits | 0 |
| Rpl_semi_sync_master_no_times | 1 |
| Rpl_semi_sync_master_no_tx | 2 |
| Rpl_semi_sync_master_status | ON |
| Rpl_semi_sync_master_timefunc_failures | 0 |
| Rpl_semi_sync_master_tx_avg_wait_time | 0 |
| Rpl_semi_sync_master_tx_wait_time | 0 |
| Rpl_semi_sync_master_tx_waits | 0 |
| Rpl_semi_sync_master_wait_pos_backtraverse | 0 |
| Rpl_semi_sync_master_wait_sessions | 0 |
| Rpl_semi_sync_master_yes_tx | 0 |
| Rpl_semi_sync_slave_status | OFF |
+--------------------------------------------+-------+
15 rows in set (0.01 sec)
MHA project page: https://github.com/yoshinorim
MHA consists of a manager node and data nodes. The data nodes are the hosts in the existing MySQL replication topology, at least three of them (one master and two slaves), so that a master-slave structure still exists after a master failover; they only need the node package installed. The manager server runs the monitoring scripts and is responsible for monitoring and auto-failover; it needs both the node package and the manager package.
PS: due to network speed, local packages are used here.
mha4mysql-node
#manager(192.168.1.42)
[root@manager ~]# tar zxf mha4mysql-node-0.58.tar.gz
[root@manager ~]# cd mha4mysql-node-0.58/
[root@manager mha4mysql-node-0.58]# perl Makefile.PL
[root@manager mha4mysql-node-0.58]# make && make install
#master01(192.168.1.20)
[root@master01 ~]# tar zxf mha4mysql-node-0.58.tar.gz
[root@master01 ~]# cd mha4mysql-node-0.58/
[root@master01 mha4mysql-node-0.58]# perl Makefile.PL
[root@master01 mha4mysql-node-0.58]# make && make install
#master02(192.168.1.40)
[root@master02 ~]# tar zxf mha4mysql-node-0.58.tar.gz
[root@master02 ~]# cd mha4mysql-node-0.58/
[root@master02 mha4mysql-node-0.58]# perl Makefile.PL
[root@master02 mha4mysql-node-0.58]# make && make install
#slave(192.168.1.30)
[root@slave ~]# tar zxf mha4mysql-node-0.58.tar.gz
[root@slave ~]# cd mha4mysql-node-0.58/
[root@slave mha4mysql-node-0.58]# perl Makefile.PL
[root@slave mha4mysql-node-0.58]# make && make install
mha4mysql-manager
#manager(192.168.1.42)
[root@manager ~]# tar zxf mha4mysql-manager-0.58.tar.gz
[root@manager ~]# cd mha4mysql-manager-0.58/
[root@manager mha4mysql-manager-0.58]# perl Makefile.PL
*** Module::AutoInstall version 1.06
*** Checking for Perl dependencies...
[Core Features]
- DBI ...loaded. (1.627)
- DBD::mysql ...loaded. (4.023)
- Time::HiRes ...loaded. (1.9725)
- Config::Tiny ...loaded. (2.14)
- Log::Dispatch ...loaded. (2.41)
- Parallel::ForkManager ...loaded. (1.18)
- MHA::NodeConst ...loaded. (0.58)
*** Module::AutoInstall configuration finished.
Checking if your kit is complete...
Looks good
Writing Makefile for mha4mysql::manager
[root@manager mha4mysql-manager-0.58]# make && make install
[root@manager mha4mysql-manager-0.58]# mkdir /etc/masterha
[root@manager mha4mysql-manager-0.58]# mkdir -p /etc/masterha/app
[root@manager mha4mysql-manager-0.58]# mkdir /scripts
[root@manager mha4mysql-manager-0.58]# cp samples/conf/* /etc/masterha/
[root@manager mha4mysql-manager-0.58]# cp samples/scripts/* /scripts/
Like most Linux applications, MHA depends on a sensible configuration file to work correctly. Its configuration file is similar to MySQL's my.cnf, using param=value pairs. The file lives on the management node and typically includes each MySQL server's hostname, the MySQL user name and password, the working directory, and so on.
[root@manager ~]# vim /etc/masterha/app1.cnf
[root@manager ~]# cat /etc/masterha/app1.cnf
[server default]
manager_workdir=/masterha/app1
manager_log=/masterha/app1/manager.log
user=manager
password=123456
ssh_user=root
repl_user=mharep
repl_password=123456
ping_interval=1
[server1]
hostname=192.168.1.20
port=3306
master_binlog_dir=/usr/local/mysql/data
candidate_master=1
[server2]
hostname=192.168.1.40
port=3306
master_binlog_dir=/usr/local/mysql/data
candidate_master=1
[server3]
hostname=192.168.1.30
port=3306
master_binlog_dir=/usr/local/mysql/data
no_master=1
[root@manager ~]# >/etc/masterha/masterha_default.cnf
Explanation of the relevant configuration items: user/password are the MySQL administrative account MHA uses (created above as manager); ssh_user is the OS account used for SSH; repl_user/repl_password are the replication account; ping_interval is how often, in seconds, the manager pings the master; manager_workdir/manager_log set the manager's working directory and log file; master_binlog_dir tells MHA where to find the binary logs on each MySQL server; candidate_master=1 marks a host as eligible for promotion to master, while no_master=1 prevents a host from ever becoming master.
[root@manager ~]# masterha_check_ssh --global_conf=/etc/masterha/masterha_default.cnf --conf=/etc/masterha/app1.cnf
Fri Mar 12 15:53:20 2021 - [info] Reading default configuration from /etc/masterha/masterha_default.cnf..
Fri Mar 12 15:53:20 2021 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Fri Mar 12 15:53:20 2021 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Fri Mar 12 15:53:20 2021 - [info] Starting SSH connection tests..
Fri Mar 12 15:53:21 2021 - [debug]
Fri Mar 12 15:53:20 2021 - [debug] Connecting via SSH from root@192.168.1.20(192.168.1.20:22) to root@192.168.1.40(192.168.1.40:22)..
Fri Mar 12 15:53:21 2021 - [debug] ok.
Fri Mar 12 15:53:21 2021 - [debug] Connecting via SSH from root@192.168.1.20(192.168.1.20:22) to root@192.168.1.30(192.168.1.30:22)..
Fri Mar 12 15:53:21 2021 - [debug] ok.
Fri Mar 12 15:53:22 2021 - [debug]
Fri Mar 12 15:53:21 2021 - [debug] Connecting via SSH from root@192.168.1.40(192.168.1.40:22) to root@192.168.1.20(192.168.1.20:22)..
Fri Mar 12 15:53:21 2021 - [debug] ok.
Fri Mar 12 15:53:21 2021 - [debug] Connecting via SSH from root@192.168.1.40(192.168.1.40:22) to root@192.168.1.30(192.168.1.30:22)..
Fri Mar 12 15:53:22 2021 - [debug] ok.
Fri Mar 12 15:53:23 2021 - [debug]
Fri Mar 12 15:53:21 2021 - [debug] Connecting via SSH from root@192.168.1.30(192.168.1.30:22) to root@192.168.1.20(192.168.1.20:22)..
Fri Mar 12 15:53:21 2021 - [debug] ok.
Fri Mar 12 15:53:21 2021 - [debug] Connecting via SSH from root@192.168.1.30(192.168.1.30:22) to root@192.168.1.40(192.168.1.40:22)..
Fri Mar 12 15:53:22 2021 - [debug] ok.
Fri Mar 12 15:53:23 2021 - [info] All SSH connection tests passed successfully.
PS: what matters most is the last line of output: "All SSH connection tests passed successfully."
PS: MySQL must be running on every node
[root@manager ~]# masterha_check_repl --global_conf=/etc/masterha/masterha_default.cnf --conf=/etc/masterha/app1.cnf
.....
PS: on success, the check automatically identifies all servers and the master-slave topology. If during validation you hit the error Can't exec "mysqlbinlog" ..., fix it by running the following on all servers:
#run on all DB servers
[root@master01 ~]# ln -s /usr/local/mysql/bin/* /usr/local/bin/
[root@master02 mha4mysql-node-0.58]# ln -s /usr/local/mysql/bin/* /usr/local/bin/
[root@slave mha4mysql-node-0.58]# ln -s /usr/local/mysql/bin/* /usr/local/bin/
Run on manager:
[root@manager ~]# masterha_check_repl --global_conf=/etc/masterha/masterha_default.cnf --conf=/etc/masterha/app1.cnf
Fri Mar 12 16:37:15 2021 - [info] Reading default configuration from /etc/masterha/masterha_default.cnf..
Fri Mar 12 16:37:15 2021 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Fri Mar 12 16:37:15 2021 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Fri Mar 12 16:37:15 2021 - [info] MHA::MasterMonitor version 0.58.
Fri Mar 12 16:37:16 2021 - [info] GTID failover mode = 0
Fri Mar 12 16:37:16 2021 - [info] Dead Servers:
Fri Mar 12 16:37:16 2021 - [info] Alive Servers:
......
mysql: [Warning] Using a password on the command line interface can be insecure.
done.
Testing mysqlbinlog output.. done.
Cleaning up test file(s).. done.
Fri Mar 12 16:37:21 2021 - [info] Slaves settings check done.
Fri Mar 12 16:37:21 2021 - [info]
192.168.1.20(192.168.1.20:3306) (current master)
+--192.168.1.40(192.168.1.40:3306)
+--192.168.1.30(192.168.1.30:3306)
Fri Mar 12 16:37:21 2021 - [info] Checking replication health on 192.168.1.40..
Fri Mar 12 16:37:21 2021 - [info] ok.
Fri Mar 12 16:37:21 2021 - [info] Checking replication health on 192.168.1.30..
Fri Mar 12 16:37:21 2021 - [info] ok.
Fri Mar 12 16:37:21 2021 - [warning] master_ip_failover_script is not defined.
Fri Mar 12 16:37:21 2021 - [warning] shutdown_script is not defined.
Fri Mar 12 16:37:21 2021 - [info] Got exit code 0 (Not master dead).
MySQL Replication Health is OK.
[root@manager ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf &>/tmp/mha_manager.log &
[1] 75148
PS: On Unix/Linux, when we want a program to keep running in the background, we usually append & to the command, for example to run MySQL in the background: /usr/local/mysql/bin/mysqld_safe --user=mysql &. But many programs do not behave like mysqld and will die when the terminal hangs up; that is where the nohup command comes in.
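The difference can be demonstrated with a throwaway sketch (sleep stands in for a real daemon; the log path is arbitrary):

```shell
# A bare & backgrounds the job but leaves it attached to the terminal;
# if the session receives SIGHUP (e.g. on logout), the job can be killed.
sleep 1 &

# nohup makes the job ignore SIGHUP and redirects output that would
# otherwise go to the terminal, so it survives the terminal closing --
# this is why masterha_manager is launched as "nohup ... &>/logfile &".
nohup sleep 1 >/tmp/nohup_demo.log 2>&1 &
echo "background pid: $!"
```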
Check status:
[root@manager ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:75148) is running(0:PING_OK), master:192.168.1.20
Failover process:
Automatic failover: with MHA already running, once the master is dead the candidate master (a slave) is automatically failed over to become the new master. To verify this, stop master01; since the configuration file designates master02 as the candidate, check on slave whether the reported master IP has changed to master02's IP.
[root@master01 ~]# systemctl stop mysqld.service
#inspect the log yourself; even the author finds parts of it hard to read!!!
[root@manager ~]# tail -f /masterha/app1/manager.log
.......
----- Failover Report -----
app1: MySQL Master failover 192.168.1.20(192.168.1.20:3306) to 192.168.1.40(192.168.1.40:3306) succeeded
Master 192.168.1.20(192.168.1.20:3306) is down!
Check MHA Manager logs at manager:/masterha/app1/manager.log for details.
Started automated(non-interactive) failover.
The latest slave 192.168.1.40(192.168.1.40:3306) has all relay logs for recovery.
Selected 192.168.1.40(192.168.1.40:3306) as a new master.
192.168.1.40(192.168.1.40:3306): OK: Applying all logs succeeded.
192.168.1.30(192.168.1.30:3306): This host has the latest relay log events.
Generating relay diff files from the latest slave succeeded.
192.168.1.30(192.168.1.30:3306): OK: Applying all logs succeeded. Slave started, replicating from 192.168.1.40(192.168.1.40:3306)
192.168.1.40(192.168.1.40:3306): Resetting slave info succeeded.
Master failover to 192.168.1.40(192.168.1.40:3306) completed successfully.
The log shows that the master failover succeeded, and it outlines the general flow of the failover.
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.1.40
Master_User: mharep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 742
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
The master IP is now 192.168.1.40, showing that MHA has promoted candidate master02 to be the new master; the IO thread and SQL thread are also running correctly, so the MHA setup works.
1) Check whether the following files exist, and delete them if so. After a master switchover, the MHA manager service stops automatically and creates the file app1.failover.complete under manager_workdir (/masterha/app1); MHA cannot be started again until this file is removed. If you see the message below, delete /masterha/app1/app1.failover.complete: [error][/usr/share/perl5/vendor_perl/MHA/MasterFailover.pm, ln298] Last failover was done at 2015/01/09 10:00:47. Current time is too early to do failover again. If you want to do failover, manually remove /masterha/app1/app1.failover.complete and run this script again.
# ll /masterha/app1/app1.failover.complete
# ll /masterha/app1/app1.failover.error
2) Re-run the MHA replication check (master01 must first be made a slave of master02):
mysql> change master to master_host='192.168.1.40',master_user='mharep',master_password='123456',master_log_file='mysql-bin.000001',master_log_pos=742;
Query OK, 0 rows affected, 2 warnings (0.01 sec)
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.1.40
Master_User: mharep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 742
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
#manager(192.168.1.42)
[root@manager app1]# masterha_check_repl --conf=/etc/masterha/app1.cnf
[root@manager app1]# nohup masterha_manager --conf=/etc/masterha/app1.cnf &>/tmp/mha_manager.log &
[1] 20964
[root@manager app1]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 monitoring program is now on initialization phase(10:INITIALIZING_MONITOR). Wait for a while and try checking again.
[root@manager app1]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:20964) is running(0:PING_OK), master:192.168.1.40
3) Stop MHA
# masterha_stop --conf=/etc/masterha/app1.cnf
4) Start MHA
# nohup masterha_manager --conf=/etc/masterha/app1.cnf &>/tmp/mha_manager.log &
By default MHA will not start when a slave node is down; adding --ignore_fail_on_start lets MHA start even with failed nodes, as follows:
# nohup masterha_manager --conf=/etc/masterha/app1.cnf --ignore_fail_on_start&>/tmp/mha_manager.log &
5) Check status
# masterha_check_status --conf=/etc/masterha/app1.cnf
6) Check the log
# tail -f /masterha/app1/manager.log
7) Follow-up after a master switchover: rebuilding. Rebuilding means that after the master died and service switched over, the candidate master became the new master; one rebuild strategy is to repair the old master into a new slave. After the switchover, repair the old master as a new slave and then repeat the five steps above. If the old master's data files are intact, the last executed CHANGE MASTER command can be found as follows.
master02 is currently the master; to stop master02 and switch back so that master01 becomes the master again:
[root@manager app]# grep 'CHANGE' /masterha/app1/manager.log
Sat Mar 13 10:10:25 2021 - [info] All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='192.168.1.20', MASTER_PORT=3306, MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=154, MASTER_USER='mharep', MASTER_PASSWORD='xxx';
Sat Mar 13 10:10:26 2021 - [info] Executed CHANGE MASTER.
[root@mysql mha4mysql-node-0.58]# mysql -uroot -p123456
mysql>
mysql> CHANGE MASTER TO MASTER_HOST='192.168.1.20', MASTER_PORT=3306, MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=154, MASTER_USER='mharep', MASTER_PASSWORD='123456';
Query OK, 0 rows affected, 2 warnings (0.01 sec)
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.1.20
Master_User: mharep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000003
Read_Master_Log_Pos: 154
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000003
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Start the manager
[root@manager app]# cd /masterha/app1/
[root@manager app1]# ls
app1.master_status.health manager.log
[root@manager app1]# rm -rf app1.master_status.health
[root@manager app1]# masterha_check_repl --global_conf=/etc/masterha/masterha_default.cnf --conf=/etc/masterha/app1.cnf
[root@manager app1]# nohup masterha_manager --conf=/etc/masterha/app1.cnf &>/tmp/mha_manager.log &
[2] 22770
[root@manager app1]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 monitoring program is now on initialization phase(10:INITIALIZING_MONITOR). Wait for a while and try checking again.
[root@manager app1]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:22643) is running(0:PING_OK), master:192.168.1.20
[2]+ Exit 1 nohup masterha_manager --conf=/etc/masterha/app1.cnf &>/tmp/mha_manager.lo
Note: if everything is normal, the status check shows "PING_OK"; otherwise it shows "NOT_RUNNING", which means MHA monitoring is not enabled. Purge relay logs periodically: because replication was configured with relay_log_purge=0 on the slaves, each slave node must delete its relay logs regularly, and it is advisable to stagger the purge times across the slave nodes.
crontab -e
0 5 * * * /usr/local/bin/purge_relay_logs --user=root --password=pwd123 --port=3306 --disable_relay_log_purge >> /var/log/purge_relay.log 2>&1
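Staggering the purge across slaves just means giving each host a different schedule; a sketch with arbitrary times, reusing the credentials from the example above:

```shell
# crontab on master02: purge relay logs at 04:50
50 4 * * * /usr/local/bin/purge_relay_logs --user=root --password=pwd123 --port=3306 --disable_relay_log_purge >> /var/log/purge_relay.log 2>&1

# crontab on slave: purge at 05:10, so the two jobs never run at the same time
10 5 * * * /usr/local/bin/purge_relay_logs --user=root --password=pwd123 --port=3306 --disable_relay_log_purge >> /var/log/purge_relay.log 2>&1
```

--disable_relay_log_purge keeps relay_log_purge=0 after the run, so MHA can still use the relay logs for differential recovery between purges.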
Asynchronous versus semi-synchronous replication: by default MySQL replication is asynchronous; after all updates on the master are written to the binlog, there is no guarantee that they have been replicated to any slave. Asynchronous operation is efficient, but when the master or a slave fails there is a high risk of the data being out of sync, and data may even be lost. MySQL 5.5 introduced semi-synchronous replication to guarantee that when the master fails, at least one slave has complete data. When the wait times out, replication can temporarily fall back to asynchronous mode to keep the service usable, and switches back to semi-synchronous mode once a slave catches up.
The VIP can be configured in two ways: using keepalived to manage the floating virtual IP, or using a script to bring up the virtual IP (i.e. without keepalived, heartbeat, or similar software).
Environment:
Hostname | IP | Role | Added service |
---|---|---|---|
master01 | 192.168.1.20 | master MySQL (write) | keepalived |
master02 | 192.168.1.40 | slave MySQL (read) | keepalived |
slave | 192.168.1.30 | slave MySQL (read) | |
manager | 192.168.1.42 | management node |
PS: configure this on both master01 and master02
[root@master01 ~]# mkdir /data
[root@master01 ~]# cd /data/
[root@master01 data]# yum -y install kernel-devel openssl-devel popt-devel
[root@master01 data]# wget https://www.keepalived.org/software/keepalived-2.2.0.tar.gz
[root@master01 data]# tar zxf keepalived-2.2.0.tar.gz
[root@master01 data]# cd keepalived-2.2.0/
[root@master01 keepalived-2.2.0]# ./configure --prefix=/usr/local/keepalived && make && make install
[root@master01 ~]# cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/keepalived
[root@master01 ~]# cp /usr/local/keepalived/sbin/keepalived /usr/sbin/keepalived
[root@master01 ~]# cp /data/keepalived-2.2.0/keepalived/etc/init.d/keepalived /etc/init.d/keepalived
[root@master01 ~]# mkdir -p /etc/keepalived
[root@master01 ~]# cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf
If the firewall is enabled, configure it as follows; skip this if it is off.
[root@master01 ~]# firewall-cmd --direct --permanent --add-rule ipv4 filter OUTPUT 0 --in-interface ens33 --destination 224.0.0.18 --protocol vrrp -j ACCEPT
success
[root@master01 ~]# firewall-cmd --direct --permanent --add-rule ipv4 filter INPUT 0 --in-interface ens33 --destination 224.0.0.18 --protocol vrrp -j ACCEPT
success
[root@master01 ~]# firewall-cmd --reload
success
#master01(192.168.1.20)
[root@master01 ~]# vim /etc/keepalived/keepalived.conf
[root@master01 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id MySQL-MHA01
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 100
    nopreempt
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.200
    }
}
[root@master01 ~]# systemctl start keepalived.service
[root@master01 ~]# ip a show dev ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:95:6e:64 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.20/24 brd 192.168.1.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet 192.168.1.200/32 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::dde:d77f:a5c2:1ade/64 scope link noprefixroute
valid_lft forever preferred_lft forever
#master02(192.168.1.40)
[root@master02 ~]# vim /etc/keepalived/keepalived.conf
[root@master02 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id MySQL-MHA2
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 50
    nopreempt
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.200
    }
}
[root@master02 ~]# systemctl start keepalived.service
[root@master02 ~]# ip a show dev ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:7d:82:fc brd ff:ff:ff:ff:ff:ff
inet 192.168.1.40/24 brd 192.168.1.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::3b1a:3b14:6e81:4038/64 scope link noprefixroute
valid_lft forever preferred_lft forever
#alternatively, use the following command to see whether the VIP is bound to the ens33 interface
[root@master01 ~]# tail -f /var/log/messages
Mar 14 21:51:19 mysql Keepalived_vrrp[51549]: Sending gratuitous ARP on ens33 for 192.168.1.200
Mar 14 21:51:19 mysql Keepalived_vrrp[51549]: Sending gratuitous ARP on ens33 for 192.168.1.200
Mar 14 21:51:19 mysql Keepalived_vrrp[51549]: Sending gratuitous ARP on ens33 for 192.168.1.200
Mar 14 21:51:19 mysql Keepalived_vrrp[51549]: Sending gratuitous ARP on ens33 for 192.168.1.200
Mar 14 21:51:19 mysql Keepalived_vrrp[51549]: Sending gratuitous ARP on ens33 for 192.168.1.200
......
At this point, keepalived on both servers is set to BACKUP mode. keepalived has two modes, master->backup and backup->backup, and they differ significantly. In master->backup mode, once the master host goes down the virtual IP automatically floats to the backup; but when the master host is repaired and keepalived restarts, it takes the virtual IP back, and this preemption happens even if non-preempt mode (nopreempt) is set. In backup->backup mode, when the master host dies the virtual IP automatically floats to the backup, and when the original master recovers and keepalived starts, it does not take the VIP back from the new master, even if its priority is higher. To reduce the number of VIP moves, the repaired master is usually made the new backup.
Stopping keepalived through MHA when the MySQL service process dies: to bring the keepalived service under MHA's control, we only need to modify the script triggered at switchover, master_ip_failover, adding handling that deals with keepalived when the master goes down.
#manager(192.168.1.42)
[root@manager ~]# vim /scripts/master_ip_failover
[root@manager ~]# cat /scripts/master_ip_failover
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;
my (
    $command, $ssh_user, $orig_master_host, $orig_master_ip, $orig_master_port,
    $new_master_host, $new_master_ip, $new_master_port
);
my $vip = '192.168.1.200';
my $ssh_start_vip = "systemctl start keepalived.service";
my $ssh_stop_vip = "systemctl stop keepalived.service";
GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);
exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host\n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        #`ssh $ssh_user\@cluster1 \" $ssh_start_vip \"`;
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

# A simple system call that enables the VIP on the new master
sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}

# A simple system call that disables the VIP on the old master
sub stop_vip() {
    return 0 unless ($ssh_user);
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print
"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
#stop MHA before hooking in the failover script
[root@manager ~]# masterha_stop --conf=/etc/masterha/app1.cnf
Stopped app1 successfully.
#Enable the following parameter in /etc/masterha/app1.cnf (add it under the [server default] section)
[root@manager ~]# vim /etc/masterha/app1.cnf
master_ip_failover_script=/scripts/master_ip_failover
#Start MHA:
[root@manager ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf &>/tmp/mha_manager.log &
[1] 23042
#Check the status
[root@manager ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:23042) is running(0:PING_OK), master:192.168.1.20
#Check the replication status again to confirm there are no errors.
[root@manager ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf
As you can see, there are no more errors. What was added to /scripts/master_ip_failover means that when the primary database fails, an MHA switchover is triggered: the MHA Manager stops the keepalived service on the failed master, which makes the virtual IP drift to the candidate master and completes the switchover. You can also hook a script into keepalived that monitors whether MySQL is running normally and, if it is not, kills the keepalived process (see "MySQL high availability: keepalived + MySQL dual masters").
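The MySQL watchdog mentioned above can be sketched as a small shell function. This is a minimal sketch, not the article's script: the probe command is passed in as arguments purely so the logic can be exercised without a live mysqld; on a real node the probe would be something like `mysqladmin -uroot -pPASSWORD ping`, and the `systemctl` line would be uncommented.

```shell
#!/bin/bash
# Watchdog sketch: keepalived can run check_mysql periodically via a
# vrrp_script block. The probe command is taken as arguments (an assumption
# made here for testability); in production it would be:
#   mysqladmin -uroot -pPASSWORD ping
check_mysql() {
    if ! "$@" >/dev/null 2>&1; then
        echo "mysqld is down, releasing the VIP"
        # systemctl stop keepalived.service   # uncomment on a real MySQL node
        return 1
    fi
    return 0
}

check_mysql true              # healthy probe: prints nothing
check_mysql false || true     # failed probe: prints the warning
```

Stopping keepalived on the node whose mysqld has died lets the VRRP backup claim the VIP, mirroring what the failover script does during an MHA switchover.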
PS: Stop the mysqld service on master01, then go to the slave and check its replication status
#master01(192.168.1.20)
[root@master01 ~]# systemctl stop mysqld
#slave(192.168.1.30)
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.1.40
Master_User: mharep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000002
Read_Master_Log_Pos: 154
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000002
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Check the VIP binding
#master01(192.168.1.20)
[root@master01 ~]# ip a show dev ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:95:6e:64 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.20/24 brd 192.168.1.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::dde:d77f:a5c2:1ade/64 scope link noprefixroute
valid_lft forever preferred_lft forever
#master02(192.168.1.40)
[root@master02 ~]# ip a show dev ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:7d:82:fc brd ff:ff:ff:ff:ff:ff
inet 192.168.1.40/24 brd 192.168.1.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet 192.168.1.200/32 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::3b1a:3b14:6e81:4038/64 scope link noprefixroute
valid_lft forever preferred_lft forever
From the output above you can see that the VIP has drifted to 192.168.1.40.
Follow-up work after the master-slave switchover: rebuilding
Rebuilding means that after the master goes down and the switchover promotes the candidate master, the candidate master becomes the new master; one rebuild approach is therefore to repair the original master as a new slave. After the switchover, if the original master's data files are intact, you can find the last executed CHANGE MASTER command as follows:
#manager(192.168.1.42)
#The log always contains many of these entries; always take the last matching one
[root@manager ~]# grep 'CHANGE' /masterha/app1/manager.log
Sat Mar 13 09:42:17 2021 - [info] All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='192.168.1.40', MASTER_PORT=3306, MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=742, MASTER_USER='mharep', MASTER_PASSWORD='xxx';
Sat Mar 13 09:42:18 2021 - [info] Executed CHANGE MASTER.
Sun Mar 14 21:02:23 2021 - [info] All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='192.168.1.20', MASTER_PORT=3306, MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=154, MASTER_USER='mharep', MASTER_PASSWORD='xxx';
Sun Mar 14 21:02:24 2021 - [info] Executed CHANGE MASTER.
Sun Mar 14 21:27:53 2021 - [info] All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='192.168.1.40', MASTER_PORT=3306, MASTER_LOG_FILE='mysql-bin.000002', MASTER_LOG_POS=154, MASTER_USER='mharep', MASTER_PASSWORD='xxx';
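Since the manager log accumulates one of these lines per failover, picking out the most recent statement can be scripted. A sketch follows: it runs against a stand-in file whose entries mimic the log lines above, since the real file lives at /masterha/app1/manager.log on the manager host.

```shell
# Build a stand-in for /masterha/app1/manager.log containing entries shaped
# like the ones shown above.
cat > /tmp/manager.log.sample <<'EOF'
Sat Mar 13 09:42:17 2021 - [info] ... CHANGE MASTER TO MASTER_HOST='192.168.1.40', MASTER_PORT=3306, MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=742, MASTER_USER='mharep', MASTER_PASSWORD='xxx';
Sun Mar 14 21:27:53 2021 - [info] ... CHANGE MASTER TO MASTER_HOST='192.168.1.40', MASTER_PORT=3306, MASTER_LOG_FILE='mysql-bin.000002', MASTER_LOG_POS=154, MASTER_USER='mharep', MASTER_PASSWORD='xxx';
EOF

# Keep only the most recent CHANGE MASTER statement.
grep -o "CHANGE MASTER TO.*" /tmp/manager.log.sample | tail -n 1
```

Note that MHA masks the replication password as xxx in the log, so MASTER_PASSWORD must be filled in by hand before running the statement, as done below.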
#master01(192.168.1.20)
[root@master01 ~]# systemctl start mysqld
[root@master01 ~]# mysql -uroot -p123456
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.32-log Source distribution
Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> CHANGE MASTER TO MASTER_HOST='192.168.1.40', MASTER_PORT=3306, MASTER_LOG_FILE='mysql-bin.000002', MASTER_LOG_POS=154, MASTER_USER='mharep', MASTER_PASSWORD='123456';
Query OK, 0 rows affected, 2 warnings (0.00 sec)
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.1.40
Master_User: mharep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000002
Read_Master_Log_Pos: 154
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000002
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
#manager(192.168.1.42)
[root@manager ~]# rm -rf /masterha/app1/app1.failover.complete
[root@manager ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf --ignore_fail_on_start &>/tmp/mha_manager.log &
[1] 24505
[root@manager ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 monitoring program is now on initialization phase(10:INITIALIZING_MONITOR). Wait for a while and try checking again.
[root@manager ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:24505) is running(0:PING_OK), master:192.168.1.40
Managing the VIP with a script. Here we modify /scripts/master_ip_failover; the failover script could equally be written in another language, such as PHP.
#First stop the keepalived service left over from the previous setup
[root@master02 ~]# systemctl stop keepalived.service
#The VIP must be bound manually on the master server
#master02(192.168.1.40)
#If the ifconfig command is missing, install it as follows
[root@master02 ~]# yum -y install net-tools
[root@master02 ~]# ifconfig ens33:0 192.168.1.200/24
[root@master02 ~]# ip a show dev ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:7d:82:fc brd ff:ff:ff:ff:ff:ff
inet 192.168.1.40/24 brd 192.168.1.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet 192.168.1.200/24 brd 192.168.1.255 scope global secondary ens33:0
valid_lft forever preferred_lft forever
inet6 fe80::3b1a:3b14:6e81:4038/64 scope link noprefixroute
valid_lft forever preferred_lft forever
#master01(192.168.1.20)
[root@master01 ~]# ip a show dev ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:95:6e:64 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.20/24 brd 192.168.1.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::dde:d77f:a5c2:1ade/64 scope link noprefixroute
valid_lft forever preferred_lft forever
#manager(192.168.1.42)
[root@manager ~]# vim /scripts/master_ip_failover
[root@manager ~]# cat /scripts/master_ip_failover
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;
my (
$command,$ssh_user,$orig_master_host,$orig_master_ip,$orig_master_port,
$new_master_host,$new_master_ip,$new_master_port
);
my $vip = '192.168.1.200';
my $key = '0';
my $ssh_start_vip = "/sbin/ifconfig ens33:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig ens33:$key down";
GetOptions(
'command=s' => \$command,
'ssh_user=s' => \$ssh_user,
'orig_master_host=s' => \$orig_master_host,
'orig_master_ip=s' => \$orig_master_ip,
'orig_master_port=i' => \$orig_master_port,
'new_master_host=s' => \$new_master_host,
'new_master_ip=s' => \$new_master_ip,
'new_master_port=i' => \$new_master_port,
);
exit &main();
sub main {
print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
if ( $command eq "stop" || $command eq "stopssh" ) {
my $exit_code = 1;
eval {
print "Disabling the VIP on old master: $orig_master_host \n";
&stop_vip();
$exit_code = 0;
};
if ($@) {
warn "Got Error: $@\n";
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "start" ) {
my $exit_code = 10;
eval {
print "Enabling the VIP - $vip on the new master - $new_master_host\n";
&start_vip();
$exit_code = 0;
};
if ($@) {
warn $@;
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "status" ) {
print "Checking the Status of the script.. OK \n";
#`ssh $ssh_user\@cluster1 \" $ssh_start_vip \"`;
exit 0;
}
else {
&usage();
exit 1;
}
}
# A simple system call that enables the VIP on the new master
sub start_vip() {
`ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
# A simple system call that disables the VIP on the old master
sub stop_vip() {
return 0 unless ($ssh_user);
`ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}
sub usage {
print
"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
[root@manager ~]# masterha_stop --conf=/etc/masterha/app1.cnf
Stopped app1 successfully.
[1]+ Exit 1 nohup masterha_manager --conf=/etc/masterha/app1.cnf --ignore_fail_on_start &>/tmp/mha_manager.log
[root@manager ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf &>/tmp/mha_manager.log &
[1] 24975
#Check the replication status again to confirm there are no errors.
[root@manager ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf
[root@manager ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 monitoring program is now on initialization phase(10:INITIALIZING_MONITOR). Wait for a while and try checking again.
[root@manager ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 monitoring program is now on initialization phase(10:INITIALIZING_MONITOR). Wait for a while and try checking again.
[root@manager ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:24975) is running(0:PING_OK), master:192.168.1.40
As you can see, there are no more errors. With the modified /scripts/master_ip_failover, when the primary database fails and MHA triggers a switchover, the MHA Manager runs ifconfig over ssh to bring down the ens33:0 VIP alias on the failed master and bind it on the new master, completing the switchover without keepalived.
#master02(192.168.1.40)
[root@master02 ~]# systemctl stop mysqld
#slave(192.168.1.30)
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.1.20
Master_User: mharep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000004
Read_Master_Log_Pos: 154
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000004
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
#Check where the VIP has drifted
#master02(192.168.1.40)
[root@master02 ~]# ip a show dev ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:7d:82:fc brd ff:ff:ff:ff:ff:ff
inet 192.168.1.40/24 brd 192.168.1.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::3b1a:3b14:6e81:4038/64 scope link noprefixroute
valid_lft forever preferred_lft forever
#master01(192.168.1.20)
[root@master01 ~]# ip a show dev ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:95:6e:64 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.20/24 brd 192.168.1.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet 192.168.1.200/24 brd 192.168.1.255 scope global secondary ens33:0
valid_lft forever preferred_lft forever
inet6 fe80::dde:d77f:a5c2:1ade/64 scope link noprefixroute
valid_lft forever preferred_lft forever
From the output above, master02 (the former master) has released the VIP and master01 (the new master) has taken it over. For the follow-up work after the switchover, repairing the original master as a new slave, refer to the earlier steps. To prevent split-brain, managing the virtual IP with a script is recommended in production rather than relying on keepalived. At this point the basic MHA cluster is fully configured.
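A quick way to confirm which node currently holds the VIP is to test for it in the interface's address list. A minimal sketch follows: the helper takes the interface dump as a parameter so it can be exercised offline, and the sample strings mimic the `ip a` output shown above; on a real node you would feed it `ip a show dev ens33`.

```shell
# has_vip: check an interface dump for the 192.168.1.200 VIP used in this setup.
has_vip() {
    echo "$1" | grep -q "inet 192.168.1.200"
}

# Sample dumps shaped like the `ip a show dev ens33` output above.
new_master="inet 192.168.1.20/24 brd 192.168.1.255 scope global ens33
inet 192.168.1.200/24 brd 192.168.1.255 scope global secondary ens33:0"
old_master="inet 192.168.1.40/24 brd 192.168.1.255 scope global ens33"

has_vip "$new_master" && echo "new master holds the VIP"
has_vip "$old_master" || echo "old master has released the VIP"
```

Running such a check from the manager after a switchover gives a quick sanity test that exactly one node holds 192.168.1.200.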
MHA consists of two parts, the Manager tool package and the Node tool package, described below. The Manager tool package mainly includes the following tools:
The Node tool package (these tools are usually triggered by MHA Manager scripts and need no manual invocation) mainly includes the following tools: