Environment
MGR:
  CentOS 7
  3-node MGR, single-primary, version 8.0.20
ProxySQL:
  CentOS 7
  Standalone ProxySQL, version 2.0.12
Deploying MGR
The MGR cluster is deployed with dbdeployer in single-primary mode. A single command does it:
dbdeployer deploy replication 8.0.20 --bind-address='0.0.0.0' --topology=group --single-primary
M: 192.168.2.4:23021
S1: 192.168.2.4:23022
S2: 192.168.2.4:23023
First, create a test database and three accounts on the MGR primary node.
Note: if the MGR was deployed with dbdeployer, it is best to change the authentication plugin of the replication account rsandbox to mysql_native_password (the exact steps are omitted here). With most other deployment methods, mysql_native_password is the default anyway.
create database sbtest;
use sbtest;
create table t1(a int unsigned primary key auto_increment ,b int) ;
insert into t1(b) select floor(rand()*1000) ;
insert into t1(b) select floor(rand()*1000) ;
insert into t1(b) select floor(rand()*1000) ;
insert into t1(b) select floor(rand()*1000) ;
create user proxysql@'%' identified WITH mysql_native_password by 'proxysql';
create user sbuser@'%' identified WITH mysql_native_password by 'sbuser';
create user ro@'%' identified WITH mysql_native_password by 'ro';
grant all on *.* to proxysql@'%';
grant select,update,delete,insert,create,drop,alter on sbtest.* to 'sbuser'@'%';
grant select on *.* to ro@'%';
Next, create a view in the sys schema for ProxySQL's health checks. See:
https://gist.github.com/lefred/77ddbde301c72535381ae7af9f968322 # This gist covers the differences between older and newer MySQL versions; many scripts found online (for example the early ProxySQL write-up by 骏马金龙, which predates this change) fail on 8.0.20. The fixed version is below. Note that the view also references sys.gr_applier_queue_length(), which the same gist defines; create it first if your sys schema does not already have it.
USE sys;
DELIMITER $$
DROP FUNCTION IF EXISTS gr_member_in_primary_partition$$
DROP VIEW IF EXISTS gr_member_routing_candidate_status$$
CREATE FUNCTION gr_member_in_primary_partition()
RETURNS VARCHAR(3)
DETERMINISTIC
BEGIN
RETURN (SELECT IF( MEMBER_STATE='ONLINE' AND ((SELECT COUNT(*) FROM
performance_schema.replication_group_members WHERE MEMBER_STATE != 'ONLINE') >=
((SELECT COUNT(*) FROM performance_schema.replication_group_members)/2) = 0),
'YES', 'NO' ) FROM performance_schema.replication_group_members JOIN
performance_schema.replication_group_member_stats rgms USING(member_id) WHERE rgms.MEMBER_ID=@@SERVER_UUID);
END$$
CREATE VIEW gr_member_routing_candidate_status AS SELECT
sys.gr_member_in_primary_partition() as viable_candidate,
IF( (SELECT (SELECT GROUP_CONCAT(variable_value) FROM
performance_schema.global_variables WHERE variable_name IN ('read_only',
'super_read_only')) != 'OFF,OFF'), 'YES', 'NO') as read_only,
sys.gr_applier_queue_length() as transactions_behind, Count_Transactions_in_queue as 'transactions_to_cert'
from performance_schema.replication_group_member_stats rgms
where rgms.MEMBER_ID=(select gv.VARIABLE_VALUE
from `performance_schema`.global_variables gv where gv.VARIABLE_NAME='server_uuid')$$
DELIMITER ;
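The quorum test buried in that nested IF is easier to read as a small sketch: a member is a viable routing candidate only when it is ONLINE and the non-ONLINE members do not reach half of the group, i.e. the member sits in the majority partition. A minimal Python model of the same check:

```python
# Sketch of the quorum check done by sys.gr_member_in_primary_partition():
# viable only if this member is ONLINE and the number of non-ONLINE members
# is strictly less than half of the total group size.

def in_primary_partition(my_state: str, member_states: list[str]) -> str:
    """my_state: this member's MEMBER_STATE; member_states: all members' states."""
    non_online = sum(1 for s in member_states if s != "ONLINE")
    viable = my_state == "ONLINE" and not (non_online >= len(member_states) / 2)
    return "YES" if viable else "NO"

# Healthy 3-node group: every ONLINE member is viable.
print(in_primary_partition("ONLINE", ["ONLINE", "ONLINE", "ONLINE"]))  # YES
# One member unreachable: 1 < 3/2, the remaining pair still has quorum.
print(in_primary_partition("ONLINE", ["ONLINE", "ONLINE", "UNREACHABLE"]))  # YES
# Two members unreachable: 2 >= 3/2, this member is in the minority partition.
print(in_primary_partition("ONLINE", ["ONLINE", "UNREACHABLE", "UNREACHABLE"]))  # NO
```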
Once that is done, run the following two queries on each of the three nodes to check their status:
select sys.gr_member_in_primary_partition();
select * from sys.gr_member_routing_candidate_status;
node1 [localhost:23021] {root} (sys) > select sys.gr_member_in_primary_partition();
+--------------------------------------+
| sys.gr_member_in_primary_partition() |
+--------------------------------------+
| YES                                   |
+--------------------------------------+
1 row in set (0.00 sec)
node1 [localhost:23021] {root} (sys) > select * from sys.gr_member_routing_candidate_status;
+------------------+-----------+---------------------+----------------------+
| viable_candidate | read_only | transactions_behind | transactions_to_cert |
+------------------+-----------+---------------------+----------------------+
| YES              | NO        |                   0 |                    0 |
+------------------+-----------+---------------------+----------------------+
1 row in set (0.00 sec)
node2 [localhost:23022] {root} ((none)) > select sys.gr_member_in_primary_partition();
+--------------------------------------+
| sys.gr_member_in_primary_partition() |
+--------------------------------------+
| YES                                   |
+--------------------------------------+
1 row in set (0.00 sec)
node2 [localhost:23022] {root} ((none)) > select * from sys.gr_member_routing_candidate_status;
+------------------+-----------+---------------------+----------------------+
| viable_candidate | read_only | transactions_behind | transactions_to_cert |
+------------------+-----------+---------------------+----------------------+
| YES              | YES       |                   0 |                    0 |
+------------------+-----------+---------------------+----------------------+
1 row in set (0.00 sec)
node3 [localhost:23023] {root} (sys) > select sys.gr_member_in_primary_partition();
+--------------------------------------+
| sys.gr_member_in_primary_partition() |
+--------------------------------------+
| YES                                   |
+--------------------------------------+
1 row in set (0.00 sec)
node3 [localhost:23023] {root} (sys) > select * from sys.gr_member_routing_candidate_status;
+------------------+-----------+---------------------+----------------------+
| viable_candidate | read_only | transactions_behind | transactions_to_cert |
+------------------+-----------+---------------------+----------------------+
| YES              | YES       |                   0 |                    0 |
+------------------+-----------+---------------------+----------------------+
1 row in set (0.00 sec)
Deploying ProxySQL
ProxySQL's configuration file looks like this:
cat /etc/proxysql.cnf | egrep -v '^#|^$'
datadir="/var/lib/proxysql"
errorlog="/var/lib/proxysql/proxysql.log"
admin_variables=
{
admin_credentials="admin:admin"
mysql_ifaces="0.0.0.0:6032"
refresh_interval=2000
debug=true
}
mysql_variables=
{
threads=4
max_connections=1000
default_query_delay=0
default_query_timeout=36000000
have_compress=true
poll_timeout=2000
interfaces="0.0.0.0:6033"
default_schema="information_schema"
stacksize=1048576
server_version="8.0.20"
connect_timeout_server=3000
monitor_username="proxysql"
monitor_password="proxysql"
monitor_history=600000
monitor_connect_interval=60000
monitor_ping_interval=10000
monitor_read_only_interval=1500
monitor_read_only_timeout=500
ping_interval_server_msec=120000
ping_timeout_server=500
commands_stats=true
sessions_sort=true
connect_retries_on_failure=10
}
mysql_servers =
(
)
mysql_users=
(
)
mysql_query_rules=
(
)
scheduler=
(
)
mysql_replication_hostgroups=
(
)
Then start the service:
systemctl start proxysql
Configuring ProxySQL
Log in to the ProxySQL admin interface:
mysql -u admin -padmin -h 127.0.0.1 -P6032 --prompt='Admin> '
# Configure the MGR hostgroup mapping
insert into mysql_group_replication_hostgroups(writer_hostgroup,backup_writer_hostgroup,reader_hostgroup,offline_hostgroup,active) values(10,20,30,40,1);
Writer hostgroup: 10
Backup writer hostgroup: 20
Reader hostgroup: 30
Offline hostgroup (unavailable nodes): 40
Admin> select * from mysql_group_replication_hostgroups \G
*************************** 1. row ***************************
writer_hostgroup: 10
backup_writer_hostgroup: 20
reader_hostgroup: 30
offline_hostgroup: 40
active: 1
max_writers: 1
writer_is_also_reader: 0
max_transactions_behind: 0
comment: NULL
1 row in set (0.00 sec)
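With this mapping active, ProxySQL's monitor polls sys.gr_member_routing_candidate_status on every backend and moves each node between the four hostgroups accordingly. A rough Python sketch of that placement logic (a simplified model for illustration, not ProxySQL's actual implementation; the hostgroup IDs and max_writers=1 match the row above):

```python
# Simplified model of how the ProxySQL monitor places an MGR node into a
# hostgroup, based on the viable_candidate / read_only columns it reads from
# sys.gr_member_routing_candidate_status on that node.

WRITER_HG, BACKUP_WRITER_HG, READER_HG, OFFLINE_HG = 10, 20, 30, 40

def place_node(viable_candidate: str, read_only: str, writers_so_far: int,
               max_writers: int = 1) -> int:
    if viable_candidate != "YES":
        return OFFLINE_HG               # not in the majority partition: offline
    if read_only == "NO":
        if writers_so_far < max_writers:
            return WRITER_HG            # the (single) writable primary
        return BACKUP_WRITER_HG         # extra writers beyond max_writers
    return READER_HG                    # healthy read_only member: reader

# Single-primary MGR: the primary is writable, secondaries are super_read_only.
print(place_node("YES", "NO", writers_so_far=0))   # 10
print(place_node("YES", "YES", writers_so_far=1))  # 30
print(place_node("NO", "YES", writers_so_far=1))   # 40 (lost quorum)
```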
# Configure the users (mainly the application account sbuser, whose default hostgroup is the writer group 10)
insert into mysql_users(username,password,default_hostgroup) values('sbuser','sbuser',10);
insert into mysql_users(username,password,default_hostgroup) values('ro','ro',30);
Admin> select * from mysql_users \G
*************************** 1. row ***************************
username: sbuser
password: sbuser
active: 1
use_ssl: 0
default_hostgroup: 10
default_schema: NULL
schema_locked: 0
transaction_persistent: 1
fast_forward: 0
backend: 1
frontend: 1
max_connections: 10000
comment:
*************************** 2. row ***************************
username: ro
password: ro
active: 1
use_ssl: 0
default_hostgroup: 30
default_schema: NULL
schema_locked: 0
transaction_persistent: 1
fast_forward: 0
backend: 1
frontend: 1
max_connections: 10000
comment:
2 rows in set (0.00 sec)
# Configure the backend nodes
The primary goes into writer hostgroup 10; the secondaries go into reader hostgroup 30.
insert into mysql_servers(hostgroup_id,hostname,port) values(10,'192.168.2.4',23021);
insert into mysql_servers(hostgroup_id,hostname,port) values(30,'192.168.2.4',23022);
insert into mysql_servers(hostgroup_id,hostname,port) values(30,'192.168.2.4',23023);
Admin> select * from mysql_servers ;
+--------------+-------------+-------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname | port | gtid_port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+-------------+-------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 10           | 192.168.2.4 | 23021 | 0         | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 30           | 192.168.2.4 | 23022 | 0         | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 30           | 192.168.2.4 | 23023 | 0         | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+-------------+-------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
3 rows in set (0.00 sec)
Admin> select * from runtime_mysql_servers ;
+--------------+-------------+-------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname | port | gtid_port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+-------------+-------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 10           | 192.168.2.4 | 23021 | 0         | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 30           | 192.168.2.4 | 23023 | 0         | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 30           | 192.168.2.4 | 23022 | 0         | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+-------------+-------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
3 rows in set (0.01 sec)
# Configure the read/write split rules
delete from mysql_query_rules;
insert into mysql_query_rules(rule_id,active,match_digest,destination_hostgroup,apply,comment)values(1,1,'^SELECT.*FOR UPDATE$',10,1,'write');
insert into mysql_query_rules(rule_id,active,match_digest,destination_hostgroup,apply,comment)values(2,1,'^SELECT',30,1,'read');
Admin> select rule_id,active,match_digest,negate_match_pattern,re_modifiers,destination_hostgroup,apply,comment from mysql_query_rules;
+---------+--------+----------------------+----------------------+--------------+-----------------------+-------+---------+
| rule_id | active | match_digest | negate_match_pattern | re_modifiers | destination_hostgroup | apply | comment |
+---------+--------+----------------------+----------------------+--------------+-----------------------+-------+---------+
| 1       | 1      | ^SELECT.*FOR UPDATE$ | 0                    | CASELESS     | 10                    | 1     | write   |
| 2       | 1      | ^SELECT              | 0                    | CASELESS     | 30                    | 1     | read    |
+---------+--------+----------------------+----------------------+--------------+-----------------------+-------+---------+
2 rows in set (0.00 sec)
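These two rules are evaluated in rule_id order against the query digest with the CASELESS modifier, and apply=1 stops matching at the first hit; anything that matches neither rule goes to the user's default_hostgroup. A minimal Python model (simplified for illustration: it matches the raw query text rather than a real digest):

```python
import re

# The two mysql_query_rules above, checked in rule_id order; CASELESS maps to
# re.IGNORECASE, and the first matching rule (apply=1) decides the hostgroup.
RULES = [
    (re.compile(r"^SELECT.*FOR UPDATE$", re.IGNORECASE), 10),  # locking reads -> writer
    (re.compile(r"^SELECT", re.IGNORECASE), 30),               # plain reads -> readers
]
DEFAULT_HOSTGROUP = 10  # sbuser's default_hostgroup: unmatched queries are writes

def route(query: str) -> int:
    for pattern, hostgroup in RULES:
        if pattern.search(query):
            return hostgroup
    return DEFAULT_HOSTGROUP

print(route("SELECT * FROM t1"))                         # 30
print(route("select a from t1 where a = 1 for update"))  # 10 (caseless)
print(route("INSERT INTO t1(b) VALUES (42)"))            # 10 (falls through)
```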
Save the configuration to disk and load it into runtime to take effect:
save mysql users to disk;
save mysql servers to disk;
save mysql query rules to disk;
save mysql variables to disk;
save admin variables to disk;
load mysql users to runtime;
load mysql servers to runtime;
load mysql query rules to runtime;
load mysql variables to runtime;
load admin variables to runtime;
When I actually connected through ProxySQL with the application accounts, I hit the same problem described in https://github.com/sysown/proxysql/issues/2234 and needed these three additional commands:
set mysql-set_query_lock_on_hostgroup=0;
load mysql variables to runtime;
save mysql variables to disk;
Testing ProxySQL
With the read/write account:
mysql -usbuser -psbuser -h 127.0.0.1 -P6033
[(none)] 18:36:01 > show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| sbtest             |
+--------------------+
2 rows in set (0.01 sec)
use sbtest;
insert into t1(b) select floor(rand()*1000) ;
insert into t1(b) select floor(rand()*1000) ;
With the read-only account:
mysql -uro -pro -h 127.0.0.1 -P6033 sbtest
[sbtest] 18:39:23 > select * from t1;
+---+------+
| a | b |
+---+------+
| 1 |  111 |
| 2 |  112 |
| 3 |  113 |
| 4 |  114 |
+---+------+
4 rows in set (0.00 sec)
Failure testing
MGR primary node failure
Manually shut down the current MGR primary, node1, wait a moment, and then check ProxySQL's view of the backends:
Admin> select * from runtime_mysql_servers ;
+--------------+-------------+-------+-----------+---------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname | port | gtid_port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+-------------+-------+-----------+---------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 10           | 192.168.2.4 | 23022 | 0         | ONLINE  | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 40           | 192.168.2.4 | 23021 | 0         | SHUNNED | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 30           | 192.168.2.4 | 23023 | 0         | ONLINE  | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+-------------+-------+-----------+---------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
3 rows in set (0.00 sec)
ProxySQL's log also records the health-check failures and the new primary election.
Reads and writes through ProxySQL, simulating the application, continue to work fine.
After the test, node1 can be started again and rejoined to the MGR cluster.
ProxySQL then sees the backends as follows:
Admin> select * from runtime_mysql_servers ;
+--------------+-------------+-------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname | port | gtid_port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+-------------+-------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 10           | 192.168.2.4 | 23022 | 0         | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 30           | 192.168.2.4 | 23021 | 0         | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 30           | 192.168.2.4 | 23023 | 0         | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+-------------+-------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
3 rows in set (0.01 sec)
Note the instance on the second row: it was the primary when the cluster was first built, and it has now become a secondary.
MGR secondary node failure
A secondary failure has negligible impact. If all secondaries become unavailable, read requests are forwarded to the primary instead:
Admin> select * from runtime_mysql_servers ;
+--------------+-------------+-------+-----------+---------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname | port | gtid_port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+-------------+-------+-----------+---------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 10           | 192.168.2.4 | 23021 | 0         | ONLINE  | 1      | 0           | 1000            | 0                   | 0       | 0              | write   |
| 40           | 192.168.2.4 | 23022 | 0         | SHUNNED | 1      | 0           | 1000            | 0                   | 0       | 0              | read    |
| 40           | 192.168.2.4 | 23023 | 0         | SHUNNED | 1      | 0           | 1000            | 0                   | 0       | 0              | read    |
+--------------+-------------+-------+-----------+---------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
3 rows in set (0.00 sec)
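The fallback just described can be modeled crudely: when the reader hostgroup has no usable backend, reads end up on the writer. This is a simplified illustration of the observed behavior, not ProxySQL's internal routing code:

```python
# Simplified model (not ProxySQL internals) of the observed fallback: a query
# targeted at the reader hostgroup falls back to the writer hostgroup when
# every reader backend is SHUNNED or moved to the offline hostgroup.

WRITER_HG, READER_HG = 10, 30

def pick_backend(target_hg: int, backends: dict) -> str:
    """backends maps (hostgroup, address) -> status ('ONLINE' or 'SHUNNED')."""
    online = [addr for (hg, addr), st in backends.items()
              if hg == target_hg and st == "ONLINE"]
    if online:
        return online[0]
    # no usable server in the target hostgroup: fall back to the writer
    return next(addr for (hg, addr), st in backends.items()
                if hg == WRITER_HG and st == "ONLINE")

# The runtime_mysql_servers state shown above: both readers shunned in HG 40.
backends = {
    (10, "192.168.2.4:23021"): "ONLINE",
    (40, "192.168.2.4:23022"): "SHUNNED",
    (40, "192.168.2.4:23023"): "SHUNNED",
}
print(pick_backend(READER_HG, backends))  # 192.168.2.4:23021
```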