mha4mysql is a MySQL high-availability tool written by the Japanese engineer Yoshinori Matsunobu. It consists of two parts: the manager, mha4mysql-manager, and the node component, mha4mysql-node. mha4mysql-node must run on every managed MySQL server, while the server running mha4mysql-manager does not need a MariaDB/MySQL server itself, but does need mha4mysql-node: mha4mysql-manager depends on mha4mysql-node, so mha4mysql-node must be installed before mha4mysql-manager.
Below we describe how to build an mha4mariadb-manager Docker image based on debian:jessie and an mha4mariadb-node Docker image based on mariadb:10.2.22.
The manager Dockerfile:
FROM debian:jessie
COPY ./mha4mysql-manager.tar.gz /tmp/
COPY ./mha4mysql-node.tar.gz /tmp/
RUN build_deps='ssh sshpass perl libdbi-perl libmodule-install-perl libdbd-mysql-perl libconfig-tiny-perl liblog-dispatch-perl libparallel-forkmanager-perl make' \
&& apt-get update \
&& apt-get -y --force-yes install $build_deps \
&& tar -zxf /tmp/mha4mysql-node.tar.gz -C /opt \
&& cd /opt/mha4mysql-node \
&& perl Makefile.PL \
&& make \
&& make install \
&& tar -zxf /tmp/mha4mysql-manager.tar.gz -C /opt \
&& cd /opt/mha4mysql-manager \
&& perl Makefile.PL \
&& make \
&& make install \
&& cd /opt \
&& rm -rf /opt/mha4mysql-* /tmp/mha4mysql-*.tar.gz \
&& apt-get clean
Notes:
Built on top of the debian:jessie image;
the current release (v0.56) of mha4mysql-manager and mha4mysql-node is packed into tarballs and copied into the image;
build_deps lists the build-time and runtime dependencies of mha4mysql-manager and mha4mysql-node;
mha4mysql-node is unpacked and installed first (perl Makefile.PL && make && make install);
only then can mha4mysql-manager be unpacked and installed (perl Makefile.PL && make && make install);
finally, unneeded files are cleaned up.
Run the following command in the same directory to build the mha4mariadb-manager image:
docker build -t mha4mariadb-manager .
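Optionally, as a quick smoke test (assuming a local Docker daemon is available), check that the MHA tools ended up on the PATH inside the image:

```shell
docker run --rm mha4mariadb-manager which masterha_manager masterha_check_ssh masterha_check_repl
```

If the build succeeded, this prints the installed path of each script.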
The node Dockerfile:
FROM mariadb:10.2.22
COPY ./mha4mysql-node.tar.gz /tmp/
RUN build_deps='ssh sshpass perl libdbi-perl libmodule-install-perl libdbd-mysql-perl make' \
&& apt-get update \
&& apt-get -y --force-yes install $build_deps \
&& tar -zxf /tmp/mha4mysql-node.tar.gz -C /opt \
&& cd /opt/mha4mysql-node \
&& perl Makefile.PL \
&& make \
&& make install \
&& cd /opt \
&& rm -rf /opt/mha4mysql-* /tmp/mha4mysql-node.tar.gz \
&& apt-get clean
Notes:
Built on top of the mariadb:10.2.22 image;
the current release (v0.56) of mha4mysql-node is packed into a tarball and copied into the image;
build_deps lists the build-time and runtime dependencies of mha4mysql-node;
mha4mysql-node is unpacked and installed (perl Makefile.PL && make && make install);
finally, unneeded files are cleaned up.
Run the following command in the same directory to build the mha4mariadb-node image:
docker build -t mha4mariadb-node .
This compose setup implements a MariaDB MHA high-availability cluster with one master, one candidate master, and one slave.
mariadb-mha-docker                    // project root
|-- scripts                           // scripts run on the Docker host
|   |-- mariadb_set_mbs.sh
|   |-- mha_check_repl.sh
|   |-- mha_check_ssh.sh
|   |-- mha_start_manager.sh
|   |-- ssh_share.sh
|   `-- ssh_start.sh
|-- services                          // services to build (currently empty)
|-- volumes                           // data volumes mounted into the containers
|   |-- mha_manager
|   |   `-- conf
|   |       `-- app1.conf             // MHA configuration
|   |-- mha_node0
|   |   `-- conf
|   |       `-- my.cnf
|   |-- mha_node1
|   |   `-- conf
|   |       `-- my.cnf
|   |-- mha_node2
|   |   `-- conf
|   |       `-- my.cnf
|   `-- mha_share                     // directory shared by all containers
|       |-- scripts                   // scripts shared by the containers
|       |   |-- mariadb_grant_slave.sh
|       |   |-- mariadb_start_slave.sh
|       |   |-- ssh_auth_keys.sh
|       |   `-- ssh_generate_key.sh
|       `-- sshkeys                   // each container's SSH public key
|-- parameters.env                    // environment parameters (accounts, passwords, etc.)
`-- docker-compose.yml                // compose configuration
Contents of mariadb_set_mbs.sh:
docker exec -it mha_node0 /bin/bash /root/mha_share/scripts/mariadb_grant_slave.sh
docker exec -it mha_node1 /bin/bash /root/mha_share/scripts/mariadb_grant_slave.sh
docker exec -it mha_node1 /bin/bash /root/mha_share/scripts/mariadb_start_slave.sh
docker exec -it mha_node2 /bin/bash /root/mha_share/scripts/mariadb_start_slave.sh
Contents of mha_check_repl.sh:
docker exec -it mha_manager masterha_check_repl --conf=/etc/mha/app1.conf
Contents of mha_check_ssh.sh:
docker exec -it mha_manager masterha_check_ssh --conf=/etc/mha/app1.conf
Contents of mha_start_manager.sh:
docker exec -it mha_manager masterha_manager --conf=/etc/mha/app1.conf
Contents of ssh_share.sh:
docker exec -it mha_node0 /bin/bash /root/mha_share/scripts/ssh_generate_key.sh
docker exec -it mha_node1 /bin/bash /root/mha_share/scripts/ssh_generate_key.sh
docker exec -it mha_node2 /bin/bash /root/mha_share/scripts/ssh_generate_key.sh
docker exec -it mha_manager /bin/bash /root/mha_share/scripts/ssh_generate_key.sh
docker exec -it mha_node0 /bin/bash /root/mha_share/scripts/ssh_auth_keys.sh
docker exec -it mha_node1 /bin/bash /root/mha_share/scripts/ssh_auth_keys.sh
docker exec -it mha_node2 /bin/bash /root/mha_share/scripts/ssh_auth_keys.sh
docker exec -it mha_manager /bin/bash /root/mha_share/scripts/ssh_auth_keys.sh
Contents of ssh_start.sh:
docker exec -it mha_node0 service ssh start
docker exec -it mha_node1 service ssh start
docker exec -it mha_node2 service ssh start
docker exec -it mha_manager service ssh start
The MHA configuration, volumes/mha_manager/conf/app1.conf:
[server default]
user=root
password=123456
ssh_user=root
manager_workdir=/usr/local/mha
remote_workdir=/usr/local/mha
repl_user=myslave
repl_password=myslave
[server0]
hostname=10.5.0.10
[server1]
hostname=10.5.0.11
[server2]
hostname=10.5.0.12
mha_node0's my.cnf (the master):
[mysqld]
server-id=1
log-bin=mysql-bin
binlog-do-db=testing
binlog-ignore-db=mysql
replicate-do-db=testing
replicate-ignore-db=mysql
auto_increment_increment=2
auto_increment_offset=1
expire_logs_days=7
mha_node1's my.cnf (the candidate master):
[mysqld]
server-id=2
log-bin=mysql-bin
binlog-do-db=testing
binlog-ignore-db=mysql
replicate-do-db=testing
replicate-ignore-db=mysql
auto_increment_increment=2
auto_increment_offset=2
expire_logs_days=7
read_only = ON
mha_node2's my.cnf (the slave):
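The auto_increment_increment/auto_increment_offset pair above keeps the AUTO_INCREMENT values generated by the master and the candidate master disjoint, so writes on either node can never collide on generated ids. A tiny shell illustration of the arithmetic (not part of the deployment):

```shell
# Each server generates ids of the form offset + n * increment.
# With increment=2 and offsets 1 and 2, the two id streams never overlap.
ids_for() {  # usage: ids_for <offset> <increment> <count>
  offset=$1; inc=$2; n=$3; i=0; out=""
  while [ "$i" -lt "$n" ]; do
    out="$out$((offset + i * inc)) "
    i=$((i + 1))
  done
  printf '%s\n' "${out% }"
}
ids_for 1 2 4   # master:           prints "1 3 5 7"
ids_for 2 2 4   # candidate master: prints "2 4 6 8"
```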
[mysqld]
server-id=3
replicate-do-db=testing
replicate-ignore-db=mysql
expire_logs_days=7
read_only = ON
Contents of mariadb_grant_slave.sh (the body of the heredoc is truncated in the source):
mysql -u root -p$MYSQL_ROOT_PASSWORD <
Contents of mariadb_start_slave.sh (likewise truncated):
mysql -u root -p$MYSQL_ROOT_PASSWORD <
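The heredoc bodies of the two mysql commands above are truncated in the source. Judging from the script names and the repl_user/repl_password values in app1.conf, they presumably contain statements along these lines (a sketch only, not the original SQL; the host pattern is an assumption):

```sql
-- mariadb_grant_slave.sh (sketch): create/grant the replication account
GRANT REPLICATION SLAVE ON *.* TO 'myslave'@'10.5.0.%' IDENTIFIED BY 'myslave';
FLUSH PRIVILEGES;

-- mariadb_start_slave.sh (sketch): point the node at the current master
CHANGE MASTER TO
  MASTER_HOST='10.5.0.10',
  MASTER_USER='myslave',
  MASTER_PASSWORD='myslave';
START SLAVE;
```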
Contents of ssh_auth_keys.sh:
cat $MHA_SHARE_SSHKEYS_PATH/*.pub > /root/.ssh/authorized_keys
Contents of ssh_generate_key.sh:
ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa
cp /root/.ssh/id_rsa.pub "$MHA_SHARE_SSHKEYS_PATH/id_rsa_$CONTAINER_NAME.pub"
The compose file, docker-compose.yml:
version: "3.7"
services:
  mha_master:
    image: mha4mariadb-node:latest
    container_name: mha_node0
    restart: always
    networks:
      net1:
        ipv4_address: 10.5.0.10
    ports:
      - "33060:3306"
    volumes:
      - "./volumes/mha_share/:/root/mha_share/"
      - "./volumes/mha_node0/lib/:/var/lib/mysql/"
      - "./volumes/mha_node0/conf/:/etc/mysql/conf.d/"
    env_file:
      - ./parameters.env
    environment:
      - CONTAINER_NAME=mha_node0
  mha_slave1:
    image: mha4mariadb-node:latest
    container_name: mha_node1
    restart: always
    depends_on:
      - mha_master
    networks:
      net1:
        ipv4_address: 10.5.0.11
    ports:
      - "33061:3306"
    volumes:
      - "./volumes/mha_share/:/root/mha_share/"
      - "./volumes/mha_node1/lib/:/var/lib/mysql/"
      - "./volumes/mha_node1/conf/:/etc/mysql/conf.d/"
    env_file:
      - ./parameters.env
    environment:
      - CONTAINER_NAME=mha_node1
  mha_slave2:
    image: mha4mariadb-node:latest
    container_name: mha_node2
    depends_on:
      - mha_master
    restart: always
    networks:
      net1:
        ipv4_address: 10.5.0.12
    ports:
      - "33062:3306"
    volumes:
      - "./volumes/mha_share/:/root/mha_share/"
      - "./volumes/mha_node2/lib/:/var/lib/mysql/"
      - "./volumes/mha_node2/conf/:/etc/mysql/conf.d/"
    env_file:
      - ./parameters.env
    environment:
      - CONTAINER_NAME=mha_node2
  mha_manager:
    image: mha4mariadb-manager:latest
    container_name: mha_manager
    depends_on:
      - mha_master
      - mha_slave1
      - mha_slave2
    restart: always
    networks:
      net1:
        ipv4_address: 10.5.0.9
    volumes:
      - "./volumes/mha_share/:/root/mha_share/"
      - "./volumes/mha_manager/conf:/etc/mha"
      - "./volumes/mha_manager/work:/usr/local/mha"
    entrypoint: "tailf /dev/null"
    env_file:
      - ./parameters.env
    environment:
      - CONTAINER_NAME=mha_manager
networks:
  net1:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16
Contents of parameters.env:
ROOT_PASSWORD=123456
MYSQL_ROOT_PASSWORD=123456
MYSQL_DATABASE=testing
MYSQL_USER=testing
MYSQL_PASSWORD=testing
MHA_SHARE_SCRIPTS_PATH=/root/mha_share/scripts
MHA_SHARE_SSHKEYS_PATH=/root/mha_share/sshkeys
Execute docker-compose up -d in the project root to build and run the whole Docker stack.
MHA requires all hosts to be able to SSH into one another, so after the stack starts successfully for the first time, first run the following from the project root:
$ sh ./scripts/ssh_start.sh
$ sh ./scripts/ssh_share.sh
ssh_start.sh starts the SSH service in each container; ssh_share.sh generates an SSH key pair inside each container and then shares each public key with the other containers. The scripts these commands invoke live under scripts and volumes/mha_share/scripts; they are all short and easy to follow.
If the whole stack is restarted later, only the SSH service needs to be started again:
$ sh ./scripts/ssh_start.sh
Check on the manager container that SSH is configured correctly:
$ docker exec -it mha_manager /bin/bash
root@mha_manager# masterha_check_ssh --conf=/etc/mha/app1.conf
On success it prints:
Mon Oct 16 14:53:59 2017 - [debug] ok.
Mon Oct 16 14:53:59 2017 - [info] All SSH connection tests passed successfully.
First, create and grant the replication account on the master and the candidate master; then configure the master information on the candidate master and the slave and start replication. The following command performs all of these steps:
$ sh ./scripts/mariadb_set_mbs.sh
You can create a table in the testing database on the master and insert some rows to verify that the candidate master and the slave replicate them.
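For example (a minimal check; 123456 is the root password from parameters.env, and the table name t1 is arbitrary):

```shell
docker exec -it mha_node0 mysql -uroot -p123456 \
  -e "CREATE TABLE testing.t1 (id INT); INSERT INTO testing.t1 VALUES (1);"
docker exec -it mha_node1 mysql -uroot -p123456 -e "SELECT * FROM testing.t1;"
docker exec -it mha_node2 mysql -uroot -p123456 -e "SELECT * FROM testing.t1;"
```

If replication is healthy, the SELECT on both mha_node1 and mha_node2 returns the row inserted on the master.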
Check on the manager container that replication is configured correctly:
$ docker exec -it mha_manager /bin/bash
root@mha_manager# masterha_check_repl --conf=/etc/mha/app1.conf
On success it prints:
Mon Oct 16 15:01:35 2017 - [info] Got exit code 0 (Not master dead).
MySQL Replication Health is OK.
Once the SSH and replication checks pass, start MHA monitoring on the manager container:
$ docker exec -it mha_manager /bin/bash
root@mha_manager# masterha_manager --conf=/etc/mha/app1.conf
The masterha_manager process keeps monitoring whether the master is available. If the master goes down, masterha_manager compares the relay logs of the candidate master and the slave, applies the latest data to the candidate master, promotes the candidate master to be the new master, and repoints the slave to replicate from it. The masterha_manager process then exits and no longer monitors the cluster.
We can pause the master (the mha_node0 container) from the Docker host:
$ docker pause mha_node0
Then, once masterha_manager on the manager container confirms that the master is unreachable, it begins the master switchover; on success it prints:
----- Failover Report -----
app1: MySQL Master failover 10.5.0.10(10.5.0.10:3306) to 10.5.0.11(10.5.0.11:3306) succeeded
Master 10.5.0.10(10.5.0.10:3306) is down!
Check MHA Manager logs at 56560d023a4c for details.
Started automated(non-interactive) failover.
The latest slave 10.5.0.11(10.5.0.11:3306) has all relay logs for recovery.
Selected 10.5.0.11(10.5.0.11:3306) as a new master.
10.5.0.11(10.5.0.11:3306): OK: Applying all logs succeeded.
10.5.0.12(10.5.0.12:3306): This host has the latest relay log events.
Generating relay diff files from the latest slave succeeded.
10.5.0.12(10.5.0.12:3306): OK: Applying all logs succeeded. Slave started, replicating from 10.5.0.11(10.5.0.11:3306)
10.5.0.11(10.5.0.11:3306): Resetting slave info succeeded.
Master failover to 10.5.0.11(10.5.0.11:3306) completed successfully.
On the slave (the mha_node2 container) you can check the replication status and see that it now follows 10.5.0.11, the new master (the former candidate master), instead of the original master 10.5.0.10:
docker exec -it mha_node2 /bin/bash
mysql -u root -p123456
MariaDB [(none)]> show slave status\G
...
Master_Host: 10.5.0.11
...
To turn node0 into the candidate master, add the following to volumes/mha_node0/conf/my.cnf:
read_only = ON
Restart the mha_node0 container:
docker restart mha_node0
Enter the node0 container and check whether read_only mode is enabled:
docker exec -it mha_node0 /bin/bash
mysql -uroot -p123456
show variables like 'read_only';
The result:
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| read_only | ON |
+---------------+-------+
shows that read_only is now enabled.
Next, simply make node0 a slave of node1 to restore the MHA topology (for the detailed steps, see the MariaDB master-slave replication deployment),
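A sketch of the statements involved, run on node0 (the binlog file name and position must be read from SHOW MASTER STATUS on node1; the values below are placeholders, not taken from the original deployment):

```sql
-- Run on node0, to make it replicate from node1 (the new master).
CHANGE MASTER TO
  MASTER_HOST='10.5.0.11',
  MASTER_USER='myslave',
  MASTER_PASSWORD='myslave',
  MASTER_LOG_FILE='mysql-bin.000001',  -- placeholder: take from SHOW MASTER STATUS on node1
  MASTER_LOG_POS=4;                    -- placeholder
START SLAVE;
```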
and then restart the whole stack:
docker-compose restart
Then start the SSH service again:
$ sh ./scripts/ssh_start.sh
When MHA checks the replication status, the following error appears:
Checking if super_read_only is defined and turned on..DBD::mysql::st execute failed: Unknown system variable 'super_read_only' at /usr/share/perl5/vendor_perl/MHA/SlaveUtil.pm line 245.
The cause: although the slave nodes set the read_only option, read_only does not restrict users with SUPER privilege, which is why MySQL later introduced the super_read_only option (in MySQL 5.7). MariaDB does not implement this variable, so MHA versions that query it fail with the error above.
The solution is to switch MHA to mha4mysql-0.56. Versions mha4mysql-0.57 and mha4mysql-0.58 both perform the super_read_only check and hit this error; in testing, only mha4mysql-0.56 worked.