Continuing from where last week left off:
1. How to inspect and repair a broken MHA environment
1.1 Check MHA's running status
[root@db03~]# masterha_check_status --conf=/etc/mha/app1.cnf
app1 is stopped(2:NOT_RUNNING).
1.2 Identify the current master node
In the virtual machine environment, start all nodes first:
systemctl start mysqld
The direct method: see which MySQL instance reports no slave status, and cross-check with where the slaves' replication is pointing.
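A quick check that works on all three nodes (a sketch assuming local socket logins; adapt it for remote access):
mysql -e "show slave status\G" | grep -E "Master_Host|Running:"
The node that prints nothing is the current master; the Master_Host value on the other two confirms it.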
1.3 Restore the one-master, two-slave topology
db03:
[root@db03~]# grep -i 'change master to' /var/log/mha/app1/manager
Fri Jun 28 18:26:52 2019 - [info] All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='10.0.0.52', MASTER_PORT=3306, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='xxx';
Fri Jun 28 20:32:57 2019 - [info] All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='10.0.0.51', MASTER_PORT=3306, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='xxx';
mysql> CHANGE MASTER TO MASTER_HOST='10.0.0.51', MASTER_PORT=3306, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';
mysql> start slave;
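After starting replication, verify both slaves before moving on (run on db02 and db03):
mysql> show slave status\G
Both Slave_IO_Running and Slave_SQL_Running should report Yes.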
1.4 Check the VIP
db01:
[root@db01 ~]# ifconfig eth0:1 10.0.0.55/24
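To see whether the VIP is already bound before adding it, check each node first (eth0 is the interface used throughout this environment):
ip addr show eth0 | grep 10.0.0.55
Bind it manually, as above, only on the node that currently holds the master role.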
1.5 Check the configuration file
db03:
[server1]
hostname=10.0.0.51
port=3306
[server2]
hostname=10.0.0.52
port=3306
[server3]
hostname=10.0.0.53
port=3306
1.6 Pre-start checks
[root@db03~]# masterha_check_ssh --conf=/etc/mha/app1.cnf
[root@db03~]# masterha_check_repl --conf=/etc/mha/app1.cnf
1.7 Start MHA (--remove_dead_master_conf removes the failed master's entry from app1.cnf after a failover; --ignore_last_failover lifts the default 8-hour lockout between failovers)
nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &
The status after a normal start looks like this:
[root@db03 ~]# masterha_check_status --conf=/etc/mha/app1.cnf
app1 (pid:22419) is running(0:PING_OK), master:10.0.0.51
2. Binlog Server
Choose a server dedicated to storing the master's binlogs; it must have the mysqlbinlog utility available. We choose db03.
2.1 Modify the configuration file (no_master=1 keeps this entry from ever being promoted to master)
vim /etc/mha/app1.cnf
[binlog1]
no_master=1
hostname=10.0.0.53
master_binlog_dir=/data/mysql/binlog
2.2 Create the necessary directory
mkdir -p /data/mysql/binlog
chown -R mysql.mysql /data/*
2.3 After the changes, pull the master's binlogs over (the pull starts from whichever binlog you name; later binlogs are then fetched from the master automatically in sequence)
cd /data/mysql/binlog ----->you must cd into the directory created above
mysqlbinlog -R --host=10.0.0.51 --user=mha --password=mha --raw --stop-never mysql-bin.000005 &
Note: the starting binlog for the pull must match the binlog the slaves have currently reached.
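To confirm the starting point, check which binlog the master is currently writing (assuming the mha account has the needed privileges; any account with REPLICATION CLIENT works):
mysql -h 10.0.0.51 -umha -pmha -e "show master status;"
Compare this with Master_Log_File in show slave status on the slaves and start the pull from the earliest file you still need.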
2.4 Restart MHA
masterha_stop --conf=/etc/mha/app1.cnf
nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &
MySQL Read/Write Splitting with Atlas
1. Introduction to Atlas
Atlas is a MySQL-protocol data middleware project developed and maintained by the infrastructure team of Qihoo 360's Web Platform Department.
It is built on mysql-proxy 0.8.2, optimized and extended with new features.
Inside 360, the MySQL services running through Atlas carry several billion read/write requests per day.
Notes:
- Atlas can only be installed and run on 64-bit systems.
- On CentOS 5.X install Atlas-XX.el5.x86_64.rpm; on CentOS 6.X install Atlas-XX.el6.x86_64.rpm.
- The backend MySQL version should be newer than 5.1; MySQL 5.6 or later is recommended.
Download: https://github.com/Qihoo360/Atlas/releases
2. Installation and configuration
2.1 Install the software
yum install -y Atlas*
2.2 Change to the configuration directory
[root@db03~]# cd /usr/local/mysql-proxy/conf/
2.3 Back up the configuration file
mv test.cnf test.cnf.bak
2.4 Edit the configuration file
vim test.cnf
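The body of test.cnf is not reproduced above; below is a minimal sketch consistent with the rest of this walkthrough: the VIP 10.0.0.55 as the write backend, 10.0.0.52/10.0.0.53 as read backends, the proxy on port 33060, the admin interface on 2345 with the user/pwd credentials used in section 5, and the pwds entries from section 4. The remaining values are common Atlas defaults and should be adapted as needed.
[mysql-proxy]
admin-username = user
admin-password = pwd
proxy-backend-addresses = 10.0.0.55:3306
proxy-read-only-backend-addresses = 10.0.0.52:3306,10.0.0.53:3306
pwds = repl:3yb5jEku5h4=,mha:O2jBXONX098=
daemon = true
keepalive = true
event-threads = 8
log-level = message
log-path = /usr/local/mysql-proxy/log
sql-log = ON
proxy-address = 0.0.0.0:33060
admin-address = 0.0.0.0:2345
charset = utf8
Afterwards, start the instance with /usr/local/mysql-proxy/bin/mysql-proxyd test start.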
3. Testing Atlas
[root@db03~]# mysql -umha -pmha -h 10.0.0.53 -P 33060
Test a read:
db03 [(none)]>select @@server_id;
+-------------+
| @@server_id |
+-------------+
|          52 |
+-------------+
1 row in set (0.00 sec)
Test a write (Atlas routes statements inside an explicit transaction to the write node, so the master's server_id is returned):
db03 [(none)]>begin;select @@server_id;commit;
Query OK, 0 rows affected (0.00 sec)
+-------------+
| @@server_id |
+-------------+
|          51 |
+-------------+
1 row in set (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
db03 [(none)]>
4. Production user requirements (Atlas+MHA+VIP+SENDREPORT+BINLOG)
A developer requests an application user app (select, update, insert), password 123456, able to log in from the 10.0.0.x network segment.
1. Create the user on the master
grant select,update,insert on *.* to app@'10.0.0.%' identified by '123456';
2. Add the production user in Atlas
/usr/local/mysql-proxy/bin/encrypt 123456 ------generate the encrypted password; the output string goes into pwds below
3. Modify the configuration file
[root@db03/usr/local/mysql-proxy/conf]# vim test.cnf
pwds = repl:3yb5jEku5h4=,mha:O2jBXONX098=,app:/iZxz+0GRoA=
[root@db03/usr/local/mysql-proxy/conf]# /usr/local/mysql-proxy/bin/mysql-proxyd test restart
[root@db03 conf]# mysql -uapp -p123456 -h 10.0.0.53 -P 33060
5. Basic Atlas administration
5.1 Connect to the admin interface (user and pwd are the admin-username and admin-password set in test.cnf)
[root@db03~]# mysql -uuser -ppwd -h 10.0.0.53 -P2345
5.2 Print the help
db03 [(none)]>select * from help;
+----------------------------+---------------------------------------------------------+
| command                    | description                                             |
+----------------------------+---------------------------------------------------------+
| SELECT * FROM help         | shows this help                                         |
| SELECT * FROM backends     | lists the backends and their state                      |
| SET OFFLINE $backend_id    | offline backend server, $backend_id is backend_ndx's id |
| SET ONLINE $backend_id     | online backend server, ...                              |
| ADD MASTER $backend        | example: "add master 127.0.0.1:3306", ...               |
| ADD SLAVE $backend         | example: "add slave 127.0.0.1:3306", ...                |
| REMOVE BACKEND $backend_id | example: "remove backend 1", ...                        |
| SELECT * FROM clients      | lists the clients                                       |
| ADD CLIENT $client         | example: "add client 192.168.1.2", ...                  |
| REMOVE CLIENT $client      | example: "remove client 192.168.1.2", ...               |
| SELECT * FROM pwds         | lists the pwds                                          |
| ADD PWD $pwd               | example: "add pwd user:raw_password", ...               |
| ADD ENPWD $pwd             | example: "add enpwd user:encrypted_password", ...       |
| REMOVE PWD $pwd            | example: "remove pwd user", ...                         |
| SAVE CONFIG                | save the backends to config file                        |
| SELECT VERSION             | display the version of Atlas                            |
+----------------------------+---------------------------------------------------------+
16 rows in set (0.00 sec)
db03 [(none)]>
5.3 List all backend nodes
db03 [(none)]>SELECT * FROM backends;
+-------------+----------------+-------+------+
| backend_ndx | address        | state | type |
+-------------+----------------+-------+------+
|           1 | 10.0.0.55:3306 | up    | rw   |
|           2 | 10.0.0.52:3306 | up    | ro   |
|           3 | 10.0.0.53:3306 | up    | ro   |
+-------------+----------------+-------+------+
3 rows in set (0.00 sec)
5.4 Dynamically remove a node
db03 [(none)]>REMOVE BACKEND 3;
Empty set (0.00 sec)
db03 [(none)]>SELECT * FROM backends;
+-------------+----------------+-------+------+
| backend_ndx | address        | state | type |
+-------------+----------------+-------+------+
|           1 | 10.0.0.55:3306 | up    | rw   |
|           2 | 10.0.0.52:3306 | up    | ro   |
+-------------+----------------+-------+------+
2 rows in set (0.01 sec)
5.5 Dynamically add a node
db03 [(none)]>ADD SLAVE 10.0.0.53:3306;
Empty set (0.00 sec)
db03 [(none)]>SELECT * FROM backends;
+-------------+----------------+-------+------+
| backend_ndx | address        | state | type |
+-------------+----------------+-------+------+
|           1 | 10.0.0.55:3306 | up    | rw   |
|           2 | 10.0.0.52:3306 | up    | ro   |
|           3 | 10.0.0.53:3306 | up    | ro   |
+-------------+----------------+-------+------+
3 rows in set (0.00 sec)
5.6 Save the configuration to the config file
db03 [(none)]>SAVE CONFIG;
6. Recommendations on read/write splitting middleware
MySQL Router ----->MySQL
ProxySQL ----->Percona
MaxScale ----->MariaDB
MySQL MyCAT Distributed Architecture
1. MyCAT base architecture diagram
2. Base environment preparation
2.1 Clean up the environment
[root@db01~]# pkill mysqld
[root@db01~]# rm -rf /data/330*
[root@db01~]# mv /etc/my.cnf /etc/my.cnf.bak
2.2 Initialize the instances
[root@db01~]# mkdir -p /data/33{07..10}/data
mysqld --initialize-insecure --user=mysql --datadir=/data/3307/data --basedir=/application/mysql
mysqld --initialize-insecure --user=mysql --datadir=/data/3308/data --basedir=/application/mysql
mysqld --initialize-insecure --user=mysql --datadir=/data/3309/data --basedir=/application/mysql
mysqld --initialize-insecure --user=mysql --datadir=/data/3310/data --basedir=/application/mysql
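Equivalently, the four initializations can be scripted in one loop (same flags as above):
for port in 3307 3308 3309 3310; do
  mysqld --initialize-insecure --user=mysql --datadir=/data/${port}/data --basedir=/application/mysql
done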
2.3 Prepare configuration files and startup scripts
========db01==============
Create a my.cnf for each of the four instances (cat >/data/3307/my.cnf through cat >/data/3310/my.cnf) and a systemd unit for each (cat >/etc/systemd/system/mysqld3307.service through mysqld3310.service), then create the same set on db02 with its own server_id values; a sketch of one my.cnf/unit pair follows.
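The original file bodies are not reproduced above; this is a minimal sketch of the pair for the 3307 instance on db01, assuming GTID-based replication (required by the MASTER_AUTO_POSITION=1 statements in 2.7) and a server_id unique across all eight instances. Adjust paths, ports, and ids for the other instances and for db02.
cat >/data/3307/my.cnf<<EOF
[mysqld]
basedir=/application/mysql
datadir=/data/3307/data
socket=/data/3307/mysql.sock
port=3307
user=mysql
server_id=7                    # must differ on every instance and host
log_bin=/data/3307/mysql-bin   # binary logging, needed on masters
gtid_mode=ON                   # GTID replication for MASTER_AUTO_POSITION=1
enforce_gtid_consistency=ON
log_slave_updates=1            # slaves write replicated events to their own binlog
EOF
cat >/etc/systemd/system/mysqld3307.service<<EOF
[Unit]
Description=MySQL instance on port 3307
After=network.target

[Service]
User=mysql
ExecStart=/application/mysql/bin/mysqld --defaults-file=/data/3307/my.cnf
LimitNOFILE=5000

[Install]
WantedBy=multi-user.target
EOF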
2.4 Start the instances (unit names as defined above)
systemctl start mysqld3307 mysqld3308 mysqld3309 mysqld3310
2.5 Replication plan
10.0.0.51:3307 <-----> 10.0.0.52:3307
10.0.0.51:3309 ------> 10.0.0.51:3307
10.0.0.52:3309 ------> 10.0.0.52:3307
10.0.0.52:3308 <-----> 10.0.0.51:3308
10.0.0.52:3310 -----> 10.0.0.52:3308
10.0.0.51:3310 -----> 10.0.0.51:3308
2.6 Shard plan
shard1:
Master:10.0.0.51:3307
slave1:10.0.0.51:3309
Standby Master:10.0.0.52:3307
slave2:10.0.0.52:3309
shard2:
Master:10.0.0.52:3308
slave1:10.0.0.52:3310
Standby Master:10.0.0.51:3308
slave2:10.0.0.51:3310
2.7 Configure replication
shard1
10.0.0.51:3307 <-----> 10.0.0.52:3307
db02
mysql -S /data/3307/mysql.sock -e "grant replication slave on *.* to repl@'10.0.0.%' identified by '123';"
mysql -S /data/3307/mysql.sock -e "grant all on *.* to root@'10.0.0.%' identified by '123' with grant option;"
db01
mysql -S /data/3307/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='10.0.0.52', MASTER_PORT=3307, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql -S /data/3307/mysql.sock -e "start slave;"
mysql -S /data/3307/mysql.sock -e "show slave status\G"
db02
mysql -S /data/3307/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='10.0.0.51', MASTER_PORT=3307, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql -S /data/3307/mysql.sock -e "start slave;"
mysql -S /data/3307/mysql.sock -e "show slave status\G"
10.0.0.51:3309 ------> 10.0.0.51:3307
db01
mysql -S /data/3309/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='10.0.0.51', MASTER_PORT=3307, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql -S /data/3309/mysql.sock -e "start slave;"
mysql -S /data/3309/mysql.sock -e "show slave status\G"
10.0.0.52:3309 ------> 10.0.0.52:3307
db02
mysql -S /data/3309/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='10.0.0.52', MASTER_PORT=3307, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql -S /data/3309/mysql.sock -e "start slave;"
mysql -S /data/3309/mysql.sock -e "show slave status\G"
shard2
10.0.0.52:3308 <-----> 10.0.0.51:3308
db01
mysql -S /data/3308/mysql.sock -e "grant replication slave on *.* to repl@'10.0.0.%' identified by '123';"
mysql -S /data/3308/mysql.sock -e "grant all on *.* to root@'10.0.0.%' identified by '123' with grant option;"
db02
mysql -S /data/3308/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='10.0.0.51', MASTER_PORT=3308, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql -S /data/3308/mysql.sock -e "start slave;"
mysql -S /data/3308/mysql.sock -e "show slave status\G"
db01
mysql -S /data/3308/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='10.0.0.52', MASTER_PORT=3308, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql -S /data/3308/mysql.sock -e "start slave;"
mysql -S /data/3308/mysql.sock -e "show slave status\G"
10.0.0.52:3310 -----> 10.0.0.52:3308
db02
mysql -S /data/3310/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='10.0.0.52', MASTER_PORT=3308, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql -S /data/3310/mysql.sock -e "start slave;"
mysql -S /data/3310/mysql.sock -e "show slave status\G"
10.0.0.51:3310 -----> 10.0.0.51:3308
db01
mysql -S /data/3310/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='10.0.0.51', MASTER_PORT=3308, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql -S /data/3310/mysql.sock -e "start slave;"
mysql -S /data/3310/mysql.sock -e "show slave status\G"
2.8 Check replication status
mysql -S /data/3307/mysql.sock -e "show slave status\G"|grep Yes
mysql -S /data/3308/mysql.sock -e "show slave status\G"|grep Yes
mysql -S /data/3309/mysql.sock -e "show slave status\G"|grep Yes
mysql -S /data/3310/mysql.sock -e "show slave status\G"|grep Yes
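The same check as a loop, printing the IO/SQL thread states explicitly (a convenience sketch):
for port in 3307 3308 3309 3310; do
  mysql -S /data/${port}/mysql.sock -e "show slave status\G" | grep -E "Slave_IO_Running|Slave_SQL_Running"
done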
2.9 Troubleshooting
Note: if anything goes wrong along the way, run the following on every node, then start again from step 2.7:
mysql -S /data/3307/mysql.sock -e "stop slave; reset slave all;"
mysql -S /data/3308/mysql.sock -e "stop slave; reset slave all;"
mysql -S /data/3309/mysql.sock -e "stop slave; reset slave all;"
mysql -S /data/3310/mysql.sock -e "stop slave; reset slave all;"
3. Installing MyCAT
3.1 Install the Java environment
yum install -y java
3.2 Download
Mycat-server-xxxxx.linux.tar.gz
http://dl.mycat.io/
3.3 Extract the archive
tar xf Mycat-server-1.6.5-release-20180122220033-linux.tar.gz
3.4 Directory structure
[root@db01/application/mycat]# ll
total 12
drwxr-xr-x 2 root root  190 Jul  1 16:08 bin
drwxrwxrwx 2 root root    6 Mar  1  2016 catlet
drwxrwxrwx 4 root root 4096 Jul  1 20:53 conf
drwxr-xr-x 2 root root 4096 Jul  1 16:08 lib
drwxrwxrwx 3 root root   74 Jul  1 20:55 logs
drwxr-xr-x 2 root root   25 Jul  1 16:19 tmlogs
-rwxrwxrwx 1 root root  219 Jan 17  2018 version.txt
[root@db01/application/mycat]#
3.5 Start and connect
Configure the environment variable:
vim /etc/profile
export PATH=/application/mycat/bin:$PATH
source /etc/profile
Start:
mycat start
Connect to mycat:
mysql -uroot -p123456 -h 127.0.0.1 -P8066
4. Configuration files overview
logs directory:
wrapper.log ---->mycat startup log
mycat.log ---->mycat detailed working log
conf directory:
schema.xml
main configuration file (read/write splitting, high availability, sharding strategy, node control)
server.xml
configuration of the mycat service itself
rule.xml
sharding rule definitions: the list of rules and how each is applied
5. Pre-application environment preparation
5.1 Create users and import data
db01:
mysql -S /data/3307/mysql.sock
grant all on *.* to root@'10.0.0.%' identified by '123';
source /root/world.sql
mysql -S /data/3308/mysql.sock
grant all on *.* to root@'10.0.0.%' identified by '123';
source /root/world.sql
5.2 Prepare the configuration file
cd /application/mycat/conf
mv schema.xml schema.xml.bak
vim schema.xml
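The schema.xml content is not reproduced above; below is a minimal read/write-splitting sketch consistent with this environment: the world database imported on the 3307 shard, 3309 as its reader, and the root/123 account created in 5.1. The names TESTDB, dn1, localhost1, db1, and db2 are illustrative.
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
    <schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100" dataNode="dn1">
    </schema>
    <dataNode name="dn1" dataHost="localhost1" database="world" />
    <dataHost name="localhost1" maxCon="1000" minCon="10" balance="1"
              writeType="0" dbType="mysql" dbDriver="native" switchType="1">
        <heartbeat>select user()</heartbeat>
        <writeHost host="db1" url="10.0.0.51:3307" user="root" password="123">
            <readHost host="db2" url="10.0.0.51:3309" user="root" password="123" />
        </writeHost>
    </dataHost>
</mycat:schema>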
6. Brief introduction to the configuration file
6.1 Logical schema: schema
6.2 Data node: dataNode
6.3 Data host: dataHost (write and read roles; see the mapping below)
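These three objects map directly onto the sketch in 5.2 (using its illustrative names):
schema ----->the logical database exposed to clients (TESTDB)
dataNode ----->binds a logical shard to a real database on a data host (dn1 -> world)
dataHost ----->a physical MySQL group: writeHost takes writes, readHost takes reads, and <heartbeat>select user()</heartbeat> probes availability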