I. Pre-experiment Planning:
Test environment: CentOS 6.5
Database: MySQL 5.6.19
Virtual machine: VMware Workstation 10
Network topology:
There are three nodes, named master1, master2, and slave. master1 and master2 are set up for MySQL master-master (dual-master) replication, and the slave node replicates from master1 as an ordinary slave.
Because the number of nodes is limited, the slave node also serves as the monitoring host.
IP address plan:
master1: 10.43.2.81, carries the floating write VIP 10.43.2.101; this is the node applications connect to for writes
master2: 10.43.2.93, carries a floating read VIP (10.43.2.99); this is a node applications connect to for reads
slave: 10.43.2.83, carries a floating read VIP (10.43.2.100); this is a node applications connect to for reads
Replication account plan:
master1 and master2 are masters of each other; on both of them create the replication user repl with the password repl.
The slave replicates from master1 using the same replication user created above. Because this is a test environment, we reuse one set of replication credentials for convenience; in a production environment this should be avoided.
II. MySQL Configuration
Install MySQL on all three nodes (the installation itself is not covered here; consult the relevant documentation).
1. Set up master-master replication between master1 and master2:
Edit master1's configuration file as follows:
[mysqld]
character-set-server=utf8
server-id = 1
datadir = /mydata/data
log-bin = /mydata/binglogs/master-bin
relay_log = /mydata/relaylogs/relay
binlog_format=mixed
thread_concurrency = 4
log-slave-updates
sync_binlog=1
auto_increment_increment=2
auto_increment_offset=1
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

[client]
default-character-set=utf8
Log in to MySQL on master1 and grant master2 a user that can be used for replication: user repl, password repl.
Likewise, log in to MySQL on master2 and grant master1 a replication user: user repl, password repl.
mysql> grant replication slave,replication client on *.* to 'repl'@'%' identified by 'repl';
mysql> flush privileges;
The host '%' means the repl user may connect from any remote host to replicate the master's data. Again, this is done only for convenience in the test environment (it makes the lab easy to move around); in production it should be avoided.
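In a production setup the grant would typically be limited to the hosts that actually replicate. A minimal sketch, assuming the replication traffic stays on the 10.43.2.0/24 network used in this lab:

mysql> grant replication slave,replication client on *.* to 'repl'@'10.43.2.%' identified by 'repl';
mysql> flush privileges;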
2. On master1:
mysql> show master status;
+-------------------+----------+--------------+------------------+-------------------+
| File              | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+-------------------+----------+--------------+------------------+-------------------+
| master-bin.000001 |      663 |              |                  |                   |
+-------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)
3. Edit master2's configuration file:
[mysqld]
character-set-server=utf8
server-id = 2          # server-id must be unique on every node taking part in replication
datadir = /mydata/data
log-bin = /mydata/binglogs/master-bin
relay_log = /mydata/relaylogs/relay
binlog_format=mixed
thread_concurrency = 4
log-slave-updates
sync_binlog=1
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

[client]
default-character-set=utf8
4. On master2:
mysql> show master status;
+-------------------+----------+--------------+------------------+-------------------+
| File              | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+-------------------+----------+--------------+------------------+-------------------+
| master-bin.000001 |      663 |              |                  |                   |
+-------------------+----------+--------------+------------------+-------------------+
1 row in set (0.01 sec)
5. Point master2 at master1:
change master to master_host='10.43.2.81',master_user='repl',master_password='repl';
In production you would specify the master's binary log file and the position at which to start replicating. Here the data set is small and the complete binary log is still available, so we leave the coordinates out and let replication start from the beginning by default.
start slave;
show slave status\G
    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes
    Seconds_Behind_Master: 0
If these three values look as shown above, replication is working.
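For reference, the production form mentioned in step 5 would pin the starting point to the coordinates reported by show master status in step 2. A sketch, using the file and position shown there:

change master to master_host='10.43.2.81',
                 master_user='repl',
                 master_password='repl',
                 master_log_file='master-bin.000001',
                 master_log_pos=663;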
6. In the same way, point master1 at master2:
change master to master_host='10.43.2.93',master_user='repl',master_password='repl';
start slave;
show slave status\G
    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes
    Seconds_Behind_Master: 0
If these three values look as shown above, replication is working.
7. The slave's configuration file:
[mysqld]
character-set-server=utf8
server-id = 3
datadir = /mydata/data
relay_log = /mydata/relaylogs/relay
binlog_format=mixed
thread_concurrency = 4
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

[client]
default-character-set=utf8
The slave does not need binary logging; enabling the relay log is enough.
8. Point the slave at master1:
change master to master_host='10.43.2.81',master_user='repl',master_password='repl';
start slave;
show slave status\G
    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes
    Seconds_Behind_Master: 0
9. Create some data on master2 to test replication:
Create the database sanhong on master2:
create database sanhong;
On master1, run show databases.
The result is as follows:
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sanhong            |
| test               |
+--------------------+
5 rows in set (0.32 sec)
The appearance of sanhong shows that replication is working.
10. On the slave, run show databases.
The result is as follows:
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sanhong            |
| test               |
+--------------------+
5 rows in set (0.32 sec)
The appearance of sanhong shows that replication is working.
III. High Availability Configuration
MMM's main functionality is provided by the following three programs:
mmm_mond: the monitoring daemon that does all of the monitoring work and decides, among other things, when to remove a node.
mmm_agentd: the agent daemon that runs on each MySQL server and exposes a simple set of remote services to the monitor node; it listens on TCP port 9989 by default.
mmm_control: a command-line tool for managing the mmm_mond process, which listens on TCP port 9988 by default.
Install and configure mysql-mmm:
First download the EPEL release package (pick the one matching your OS version, CentOS 6 here) and install it on all three nodes:
wget http://mirrors.yun-idc.com/epel/6/i386/epel-release-6-8.noarch.rpm
Install the EPEL repository:
yum install -y epel-release-6-8.noarch.rpm
Install mysql-mmm-agent (on all three nodes):
yum -y install mysql-mmm-agent
Edit mmm_common.conf (all three nodes need it; after editing, copy it to all three nodes, see the scp example after the file):
active_master_role      writer

<host default>
    cluster_interface       eth0
    pid_path                /var/run/mysql-mmm/mmm_agentd.pid
    bin_path                /usr/libexec/mysql-mmm/
    replication_user        repl
    replication_password    repl
    agent_user              agent
    agent_password          agent
</host>

<host db1>
    ip      10.43.2.81
    mode    master
    peer    db2
</host>

<host db2>
    ip      10.43.2.93
    mode    master
    peer    db1
</host>

<host db3>
    ip      10.43.2.83
    mode    slave
</host>

<role writer>
    hosts   db1, db2
    ips     10.43.2.101
    mode    exclusive
</role>

<role reader>
    hosts   db2, db3
    ips     10.43.2.99, 10.43.2.100
    mode    balanced
</role>
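Since the same mmm_common.conf has to exist on every node, it can simply be copied out after editing. A minimal sketch, using the node IPs from the plan above:

scp /etc/mysql-mmm/mmm_common.conf 10.43.2.93:/etc/mysql-mmm/
scp /etc/mysql-mmm/mmm_common.conf 10.43.2.83:/etc/mysql-mmm/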
On each node, edit the mmm_agent.conf configuration file:
include mmm_common.conf

# The 'this' variable refers to this server. Proper operation requires
# that 'this' server (db1 by default), as well as all other servers, have the
# proper IP addresses set in mmm_common.conf.
# Make sure the name below matches the node it lives on; on master1, for
# example, this line should read "this db1" (matching mmm_common.conf).
this db3
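Instead of editing each node by hand, the per-node value can also be set with a one-liner, for example on master1 (a sketch; change the host name to db2 or db3 on the other nodes):

sed -i 's/^this .*/this db1/' /etc/mysql-mmm/mmm_agent.conf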
3. Since the slave also acts as the monitor, install the packages the monitor needs on it:
yum install mysql-mmm* -y
Edit mmm_mon.conf:
vim /etc/mysql-mmm/mmm_mon.conf

include mmm_common.conf

<monitor>
    ip                  127.0.0.1
    pid_path            /var/run/mysql-mmm/mmm_mond.pid
    bin_path            /usr/libexec/mysql-mmm
    status_path         /var/lib/mysql-mmm/mmm_mond.status
    ping_ips            10.43.2.81, 10.43.2.83, 10.43.2.93
    auto_set_online     60

    # The kill_host_bin does not exist by default, though the monitor will
    # throw a warning about it missing. See the section 5.10 "Kill Host
    # Functionality" in the PDF documentation.
    #
    # kill_host_bin     /usr/libexec/mysql-mmm/monitor/kill_host
</monitor>

<host default>
    monitor_user        monitor
    monitor_password    monitor
</host>

debug 0
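Note that mmm_common.conf and mmm_mon.conf refer to an agent user and a monitor user in MySQL that have not been created anywhere above. A sketch of the grants mysql-mmm typically requires, to be run on all three database nodes (the privilege lists follow the mysql-mmm documentation; the 10.43.2.% host pattern is an assumption):

mysql> grant super, replication client, process on *.* to 'agent'@'10.43.2.%' identified by 'agent';
mysql> grant replication client on *.* to 'monitor'@'10.43.2.%' identified by 'monitor';
mysql> flush privileges;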
4. Start MMM and test:
The agent needs to be started on all three nodes:
[root@localhost mysql-mmm]# service mysql-mmm-agent start
Starting MMM Agent Daemon:                                 [ OK ]
On the monitoring host, start the monitor service:
[root@localhost mysql-mmm]# service mysql-mmm-monitor start
Starting MMM Monitor Daemon:                               [ OK ]
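If the services should survive a reboot, they can optionally be registered with the init system as well (a sketch using the standard SysV tools on CentOS 6):

chkconfig mysql-mmm-agent on      # on all three database nodes
chkconfig mysql-mmm-monitor on    # on the monitoring host only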
On the monitoring host, check the status of each database node:
[root@localhost mysql-mmm]# mmm_control show
  db1(10.43.2.81) master/ONLINE. Roles: writer(10.43.2.101)
  db2(10.43.2.93) master/ONLINE. Roles: reader(10.43.2.99)
  db3(10.43.2.83) slave/ONLINE. Roles: reader(10.43.2.100)
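The monitor's individual health checks can be inspected too; mmm_control offers a checks subcommand for this (a sketch, output omitted):

[root@localhost mysql-mmm]# mmm_control checks all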
The output matches the plan above. Now let's take one of the databases offline:
[root@localhost mysql-mmm]# mmm_control set_offline db1
OK: State of 'db1' changed to ADMIN_OFFLINE. Now you can wait some time and check all roles!
[root@localhost mysql-mmm]# mmm_control show
  db1(10.43.2.81) master/ADMIN_OFFLINE. Roles:
  db2(10.43.2.93) master/ONLINE. Roles: reader(10.43.2.99), writer(10.43.2.101)
  db3(10.43.2.83) slave/ONLINE. Roles: reader(10.43.2.100)
db1 is now offline, and the writer VIP has floated to master2 (db2).
Now create a database named 'jin' on master2 and watch what happens on the slave:
master2:
mysql> create database jin;
Query OK, 1 row affected (0.02 sec)

slave:
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| jin                |
| mysql              |
| performance_schema |
| sanhong            |
| test               |
+--------------------+
6 rows in set (0.00 sec)
The appearance of 'jin' shows that even though the slave was set up to replicate from master1, once master1 goes offline the slave automatically picks up master2's data.
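When the test is over, db1 can be put back into rotation the same way it was taken out; after it reports ONLINE the roles can be checked again with mmm_control show:

[root@localhost mysql-mmm]# mmm_control set_online db1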
IV. Summary
The steps above give a simple implementation of MySQL high availability based on MMM, and they also show where MMM improves on keepalived.
MMM not only monitors the state of the two master nodes, it can also monitor any number of slave nodes. If any node fails, the virtual IP bound to that node is automatically moved to a healthy node, keeping the read and write services continuous and highly available.
MMM does more than move virtual IPs automatically. More importantly, if the active master fails, the slaves behind it are automatically redirected to the standby master and continue replicating, without any manual change to the replication configuration, a capability that most other MySQL high availability solutions do not provide out of the box.
In fact, simply killing the mysqld process on master1 would also cause the VIP to float over to master2; that case is not demonstrated here.