Chapter 1 Overall Planning
1.1 System Planning
Chapter 2 Installing Dependencies and Preparation
2.1 Remove the mariadb packages bundled with CentOS 7 and later
2.2 Check the required dependency packages
2.3 Install the missing dependency packages
2.4 Configure the firewall and disable SELinux
Chapter 3 Installing and Configuring PXC
3.1 Install XtraBackup
3.2 Create the mysql user and the installation directories
3.3 Install and configure PXC
3.4 Start PXC
3.5 Data synchronization test
3.6 Configure PXC node monitoring (run on all PXC nodes)
Chapter 4 Installing and Configuring the LVS Load Balancer
4.1 Install LVS
4.2 Configure LVS (strongly recommended to do this in the keepalived configuration file so failover is managed automatically)
4.3 LVS extended topics
Chapter 5 Installing and Configuring keepalived
5.1 Install keepalived
5.2 Configure keepalived (including the LVS configuration)
5.3 Start keepalived
5.4 Test keepalived high availability: automatic LVS failover
Chapter 6 RealServer Network Configuration (PXC MySQL nodes)
6.1 Write the RealServer network configuration script
6.2 Start /etc/init.d/realserver and test
Chapter 7 Start PXC + RealServer + LVS + Keepalived and Test Load-Balanced Distribution
7.1 Start all PXC nodes first
7.2 Start the /etc/init.d/realserver service script on all PXC nodes
7.3 Start the Keepalived service on all LVS nodes
7.4 LVS connection distribution test
The server deployment plan is shown in the following table:
| Node type        | Hostname             | IP address   | VIP (Keepalived) | OS and version    | CPU                               |
| LVS (keepalived) | centos-mysql-mycat-1 | 192.168.56.6 | 192.168.56.99    | CentOS 7.2 x86_64 | Intel i5 2.5GHz 2 Cores 4 Threads |
| LVS (keepalived) | centos-mysql-mycat-2 | 192.168.56.7 | 192.168.56.99    | CentOS 7.2 x86_64 | Intel i5 2.5GHz 2 Cores 4 Threads |
| PXC node 1       | centos-mysql-pxc-1   | 192.168.56.8 | 192.168.56.99    | CentOS 7.2 x86_64 | Intel i5 2.5GHz 2 Cores 4 Threads |
| PXC node 2       | centos-mysql-pxc-2   | 192.168.56.9 | 192.168.56.99    | CentOS 7.2 x86_64 | Intel i5 2.5GHz 2 Cores 4 Threads |
Packages and versions:
| Percona-XtraDB-Cluster-5.7.21-rel20-29.26.1.Linux.x86_64.ssl101.tar.gz | ssl101 indicates the build for CentOS 6/7; see the official documentation for the other mappings. |
| percona-xtrabackup-24-2.4.11-1.el7.x86_64.rpm | Backup tool for MySQL/PXC; a required component for PXC's SST feature. |
| ipvsadm-1.27-7.el7.x86_64 (installed via yum) | Management tool for the HA load balancer. |
| keepalived-1.3.5-8.el7_6.x86_64 (installed via yum) | High-availability VIP failover software. |
(Perform the following steps on all nodes. Ideally install every package on every node; too many is better than too few.)
[root@centos-mysql-pxc-1 tmp]# rpm -qa | grep -i mariadb ## check whether any mariadb packages are installed
[root@centos-mysql-pxc-1 tmp]# yum remove -y mariadb* ## remove the mariadb packages
[root@centos-mysql-pxc-1 tmp]# find / -name "*my.cnf*" 2>/dev/null | xargs rm -rf ## delete the mysql config files created by mariadb (review the find output first; this removes every match)
Check which of the required packages are already installed. For reference: libaio is a dependency of PXC/MySQL; numactl-libs is required by the NUMA support in PXC/MySQL 5.7.19 and later; perl-DBI, perl-DBD-MySQL, perl-IO-Socket-SSL, perl-Digest-MD5, libev, and libev-devel are dependencies of xtrabackup; on CentOS 7 the nc package is named nmap-ncat; mysql-libs is an xtrabackup dependency that became mariadb-libs after CentOS 7 replaced mysql with mariadb — it can be left out, since xtrabackup 2.4.11 installs without it, and the libmysqlclient.so file is provided once PXC is installed.
[root@centos-mysql-pxc-1 tmp]# rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' \
libaio \
numactl-libs \
perl-DBI \
perl-DBD-MySQL \
perl-IO-Socket-SSL \
perl-Digest-MD5 \
libev \
libev-devel \
nc \
socat \
mysql-libs \
pcre \
pcre-devel \
openssl \
openssl-devel \
zlib \
zlib-devel \
percona-release \
perl-Time-HiRes \
percona-xtrabackup | sort
[root@centos-mysql-pxc-1 tmp]# yum install epel-release
[root@centos-mysql-pxc-1 tmp]# yum install -y \
libaio \
numactl-libs \
perl-DBI \
perl-DBD-MySQL \
perl-IO-Socket-SSL \
perl-Digest-MD5 \
libev \
libev-devel \
nc \
socat \
mysql-libs \
pcre \
pcre-devel \
openssl \
openssl-devel \
zlib \
zlib-devel \
perl-Time-HiRes
[Note]: On CentOS, install the dependency packages above first. Unless you actually need it, do not install the percona-release-0.1-6.noarch.rpm package, otherwise installing the dependencies above may fail.
[root@centos-mysql-pxc-1 tmp]# systemctl stop firewalld ## stop the firewall
[root@centos-mysql-pxc-1 tmp]# systemctl disable firewalld ## keep the firewall from starting at boot
[root@centos-mysql-pxc-1 tmp]# systemctl status firewalld ## check the firewall status
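If disabling the firewall is not an option in your environment, a sketch of the alternative is to keep firewalld running and open only the ports PXC uses (3306 for MySQL clients, 4567 for Galera group communication, 4568 for IST, 4444 for SST; these are the standard PXC/Galera defaults):
[root@centos-mysql-pxc-1 tmp]# firewall-cmd --permanent --add-port=3306/tcp ## MySQL client connections
[root@centos-mysql-pxc-1 tmp]# firewall-cmd --permanent --add-port=4567/tcp ## Galera group communication
[root@centos-mysql-pxc-1 tmp]# firewall-cmd --permanent --add-port=4568/tcp ## IST (incremental state transfer)
[root@centos-mysql-pxc-1 tmp]# firewall-cmd --permanent --add-port=4444/tcp ## SST (state snapshot transfer)
[root@centos-mysql-pxc-1 tmp]# firewall-cmd --reload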
[root@centos-mysql-pxc-1 tmp]# setenforce 0 ## turn SELinux off immediately
[root@centos-mysql-pxc-1 tmp]# vi /etc/selinux/config ## change SELINUX=enforcing to SELINUX=disabled so it stays off after reboot
[root@centos-mysql-pxc-1 tmp]# getenforce ## check the SELinux status
(Perform the following steps on all PXC nodes)
[root@centos-mysql-pxc-1 tmp]# rpm -ivh percona-xtrabackup-24-2.4.11-1.el7.x86_64.rpm
[root@centos-mysql-pxc-1 tmp]# groupadd -r mysql
[root@centos-mysql-pxc-1 tmp]# useradd -r -g mysql -s /bin/false mysql
[root@centos-mysql-pxc-1 tmp]# passwd mysql ## optional: the mysql account was created with no login shell
[root@centos-mysql-pxc-1 tmp]# mkdir /app
[root@centos-mysql-pxc-1 tmp]# mkdir -p /data/mysql/{data,logs/binlog,tmp}
[root@centos-mysql-pxc-1 tmp]# chown -R mysql:mysql /data/mysql ## the mysql data directories must be owned by mysql:mysql
[root@centos-mysql-pxc-1 tmp]# tar --no-same-owner -zxvf Percona-XtraDB-Cluster-5.7.21-rel20-29.26.1.Linux.x86_64.ssl101.tar.gz -C /app/
[root@centos-mysql-pxc-1 tmp]# cd /app
[root@centos-mysql-pxc-1 app]# mv Percona-XtraDB-Cluster-5.7.21-rel20-29.26.1.Linux.x86_64.ssl101 mysql ## rename the program home directory to mysql
[root@centos-mysql-pxc-1 app]# chown -R mysql:mysql mysql ## the mysql program home directory must be owned by mysql:mysql
[root@centos-mysql-pxc-1 app]# du -sh mysql
1006M mysql ## the extracted PXC home directory is about 1 GB
(If the OS was a minimal install, create the two libssl/libcrypto symlinks shown below in advance, to avoid missing-library errors during initialization.)
[root@centos-mysql-pxc-1 tmp]# vi /etc/my.cnf
[client]
port = 3306
socket = /data/mysql/tmp/mysql.sock
[mysql]
port = 3306
socket = /data/mysql/tmp/mysql.sock
default_character_set = utf8mb4
prompt = '\\u@\\d*\\h \\R:\\m:\\s> '
no-auto-rehash
user = root
password = caserver
[mysqldump]
quick
max_allowed_packet = 16M
[mysqld]
#sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
#transaction_isolation = READ-COMMITTED
default_storage_engine = InnoDB
default_authentication_plugin = mysql_native_password
basedir = /app/mysql
datadir = /data/mysql/data
pid_file = /data/mysql/logs/mysqld.pid
log_error = /data/mysql/logs/mysqld.err
log_timestamps = SYSTEM
slow_query_log = 1
slow_query_log_file = /data/mysql/logs/slow_query.log
long_query_time = 10 ## slow-query time threshold in seconds (default 10)
port = 3306
socket = /data/mysql/tmp/mysql.sock
tmpdir = /data/mysql/tmp
explicit_defaults_for_timestamp = 1
lower_case_table_names = 1
skip_name_resolve = 1
character_set_server = utf8mb4
server_id = 1 ## must be unique; use a different ID on every other node
log_bin = /data/mysql/logs/binlog/mysql-bin
log_bin_index = /data/mysql/logs/binlog/mysql-bin.index
binlog_rows_query_log_events = 1
binlog_row_image = MINIMAL
gtid_mode = on ## enable GTID
enforce_gtid_consistency = 1 ## enable GTID
innodb_buffer_pool_size = 256M
innodb_log_buffer_size = 64M
innodb_max_dirty_pages_pct = 50
sync_binlog = 1
innodb_flush_log_at_trx_commit = 1
innodb_file_per_table = 1
max_allowed_packet = 16M
max_connections = 1000
max_user_connections = 200
max_connect_errors = 100
#query_cache_type = 2
user = mysql
### Configure for PXC ###
innodb_autoinc_lock_mode = 2
binlog_format = ROW
pxc_strict_mode = ENFORCING
wsrep_cluster_name = pxc-cluster
wsrep_cluster_address = gcomm://192.168.56.8,192.168.56.9
wsrep_node_address = 192.168.56.8
wsrep_provider = /app/mysql/lib/libgalera_smm.so
#wsrep_provider_options="gcache.size = 1G;debug = yes"
wsrep_provider_options="gcache.size = 64M"
wsrep_slave_threads = 2
#wsrep_sst_method = rsync ## use for very large (TB-scale) datasets
wsrep_sst_method = xtrabackup-v2 ## use for roughly 100-200 GB datasets
wsrep_sst_auth = sst:xxxxxx
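[Note]: When this my.cnf is copied to the second PXC node, only server_id and wsrep_node_address need to change. A sketch of the edit for node 2, assuming /etc/my.cnf was copied over from node 1 unchanged:
[root@centos-mysql-pxc-2 tmp]# sed -i -e 's/^server_id = 1/server_id = 2/' -e 's/^wsrep_node_address = 192.168.56.8/wsrep_node_address = 192.168.56.9/' /etc/my.cnf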
[root@centos-mysql-pxc-1 tmp]# cd /app/mysql
[root@centos-mysql-pxc-1 mysql]# bin/mysqld --defaults-file=/etc/my.cnf --basedir=/app/mysql --datadir=/data/mysql/data --user=mysql --initialize
## On a minimal install, libssl.so.6 and libcrypto.so.6 may be missing; check for them and link them to the .so.10 versions (on CentOS 7, /lib64 is a symlink to /usr/lib64, so the two variants below are equivalent):
#> ls -l /usr/lib64/libssl.so.10 /usr/lib64/libssl.so.6
#> ls -l /usr/lib64/libcrypto.so.10 /usr/lib64/libcrypto.so.6
#> ln -sv /usr/lib64/libssl.so.10 /usr/lib64/libssl.so.6
#> ln -sv /usr/lib64/libcrypto.so.10 /usr/lib64/libcrypto.so.6
#> ls -l /usr/lib64/libcrypto.so.10 /lib64/libcrypto.so.6
#> ls -l /usr/lib64/libssl.so.10 /lib64/libssl.so.6
#> ln -sv /usr/lib64/libcrypto.so.10 /lib64/libcrypto.so.6
#> ln -sv /usr/lib64/libssl.so.10 /lib64/libssl.so.6
After the database files are initialized in step 3.3.4 above, a temporary password for the database root user is written to the log file defined by log_error (here /data/mysql/logs/mysqld.err).
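A quick way to pull the temporary password out of the error log, assuming the log path matches the log_error setting above:
[root@centos-mysql-pxc-1 mysql]# grep 'temporary password' /data/mysql/logs/mysqld.err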
The BASEDIR and DATADIR paths defined by default in the PXC mysqld_safe startup script are rather odd build paths, similar to the following (they differ slightly between PXC versions):
[root@centos-mysql-pxc-1 bin]# grep "percona-xtradb-cluster-5.7-binary-tarball" /app/mysql/bin/mysqld_safe
MY_BASEDIR_VERSION='/mnt/workspace/percona-xtradb-cluster-5.7-binary-tarball/label_exp/centos6-64/Percona-XtraDB-Cluster-5.7.21-29.26/390/usr/local/Percona-XtraDB-Cluster-5.7.21-rel20-29.26.1.Linux.x86_64.ssl101'
ledir='/mnt/workspace/percona-xtradb-cluster-5.7-binary-tarball/label_exp/centos6-64/Percona-XtraDB-Cluster-5.7.21-29.26/390/usr/local/Percona-XtraDB-Cluster-5.7.21-rel20-29.26.1.Linux.x86_64.ssl101/bin'
DATADIR=/mnt/workspace/percona-xtradb-cluster-5.7-binary-tarball/label_exp/centos6-64/Percona-XtraDB-Cluster-5.7.21-29.26/390/usr/local/Percona-XtraDB-Cluster-5.7.21-rel20-29.26.1.Linux.x86_64.ssl101/data
<<Output of the remaining five paths omitted for brevity>>
Version 5.7.21 has 8 places that need to be changed, as shown here:
[root@centos-mysql-pxc-1 bin]# grep "percona-xtradb-cluster-5.7-binary-tarball" mysqld_safe | wc -l
8
[root@centos-mysql-pxc-1 mysql]# cp -p /app/mysql/bin/mysqld_safe /app/mysql/bin/mysqld_safe.org
[root@centos-mysql-pxc-1 mysql]# sed -i 's#/mnt/workspace/percona-xtradb-cluster-5.7-binary-tarball/label_exp/centos6-64/Percona-XtraDB-Cluster-5.7.21-29.26/390/usr/local/Percona-XtraDB-Cluster-5.7.21-rel20-29.26.1.Linux.x86_64.ssl101#/app/mysql#g' /app/mysql/bin/mysqld_safe
A check shows the paths have been replaced with the real environment path /app/mysql. The default data directory path needs the same treatment:
[root@centos-mysql-pxc-1 logs]# sed -i 's#/app/mysql/data#/data/mysql/data#g' /app/mysql/bin/mysqld_safe
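A sketch of verifying both replacements (the build path should no longer appear, and the rewritten variables should point at the real paths):
[root@centos-mysql-pxc-1 logs]# grep -c "percona-xtradb-cluster-5.7-binary-tarball" /app/mysql/bin/mysqld_safe ## should print 0
[root@centos-mysql-pxc-1 logs]# grep -n "^MY_BASEDIR_VERSION=\|^DATADIR=" /app/mysql/bin/mysqld_safe ## should show /app/mysql and /data/mysql/data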
[root@centos-mysql-pxc-1 tmp]# cd /app/mysql/support-files
[root@centos-mysql-pxc-1 support-files]# cp -p mysql.server /etc/init.d/mysqld
[root@centos-mysql-pxc-1 support-files]# vi /etc/init.d/mysqld
Change the following two variables to the real environment paths:
basedir=/app/mysql
datadir=/data/mysql/data
As in step 3.3.6, the default basedir and datadir paths in the /etc/init.d/mysqld startup script are also odd build paths. The recommended fix is the same replacement:
[root@centos-mysql-pxc-1 logs]# sed -i 's#/mnt/workspace/percona-xtradb-cluster-5.7-binary-tarball/label_exp/centos6-64/Percona-XtraDB-Cluster-5.7.21-29.26/390/usr/local/Percona-XtraDB-Cluster-5.7.21-rel20-29.26.1.Linux.x86_64.ssl101#/app/mysql#g' /etc/init.d/mysqld
[root@centos-mysql-pxc-1 logs]# sed -i 's#/app/mysql/data#/data/mysql/data#g' /etc/init.d/mysqld
[root@centos-mysql-pxc-1 logs]# /etc/init.d/mysqld bootstrap-pxc ## fails with the following error:
Bootstrapping PXC (Percona XtraDB Cluster)Reloading systemd[ OK ]
Starting mysqld (via systemctl): Job for mysqld.service failed because the control process exited with error code. See "systemctl status mysqld.service" and "journalctl -xe" for details. [FAILED]
The error log defined by log_error (/data/mysql/logs/mysqld.err) shows the underlying failure; the full log is in the attachment.
The explanation and fix turned up by research are described in 3.4.2 below.
A similar fix is described at https://www.cnblogs.com/zejin2008/p/5475285.html, as follows:
# /usr/local/mysql/bin/mysqld_safe --defaults-file=/data/mysql/mysql_3306/my_3306.cnf --wsrep-cluster-address=gcomm:// &
or
# /usr/local/mysql/bin/mysqld_safe --defaults-file=/data/mysql/mysql_3306/my_3306.cnf --wsrep-new-cluster &
[Note]: When node1 starts, it first tries to join an existing cluster. No cluster exists yet, so PXC has to start from scratch; and the other nodes listed in --wsrep-cluster-address have not been started, so node1 cannot find them at startup. The first node must therefore be started with --wsrep-cluster-address=gcomm:// or --wsrep-new-cluster, which bootstraps a new cluster. Once node1 is up, the other nodes can be started the usual way; they all connect to the primary node automatically.
[root@centos-mysql-pxc-1 mysql]# bin/mysqld_safe --wsrep-new-cluster &
[1] 884
Startup succeeded; the startup log is in the attachment:
[root@centos-mysql-pxc-1 mysql]# ps -ef | grep -i -E "pxc|percona|mysql" | grep -v grep
root 884 1527 0 01:18 pts/0 00:00:00 /bin/sh bin/mysqld_safe --wsrep-new-cluster
mysql 1723 884 0 01:18 pts/0 00:00:03 /app/mysql/bin/mysqld --basedir=/app/mysql --datadir=/data/mysql/data --plugin-dir=/app/mysql/lib/mysql/plugin --user=mysql --wsrep-provider=/app/mysql/lib/libgalera_smm.so --wsrep-new-cluster --log-error=/data/mysql/logs/mysqld.err --pid-file=/data/mysql/logs/mysqld.pid --socket=/data/mysql/tmp/mysql.sock --port=3306 --wsrep_start_position=00000000-0000-0000-0000-000000000000:-1
For details, see 4.2 Starting and stopping the individual PXC nodes.
[root@centos-mysql-pxc-1 mysql]# echo '
### Configure for PXC ###
export PXC=/app/mysql
export PXCD=/data/mysql/data
export PATH=/app/mysql/bin:$PATH
export LD_LIBRARY_PATH=/app/mysql/lib:/usr/lib:/usr/lib64:/lib:/lib64:/usr/local/lib:/usr/local/lib64
' >> /etc/profile ## single quotes keep $PATH from expanding at echo time
[root@centos-mysql-pxc-1 logs]# mysql -uroot -p"JWb7dTqepd:i" ## passwords containing special characters (such as , or @) must be quoted
root@(none)*localhost 02:28:52> alter user user() identified by 'xxxxxx';
root@(none)*localhost 02:29:22> flush privileges;
This user only needs to be created on the first PXC node that was started.
SST (State Snapshot Transfer) performs the full data synchronization when a new node joins the cluster. It requires a dedicated database user, specified by the my.cnf parameter wsrep_sst_auth = sst:xxxxxx (format: wsrep_sst_auth=username:password).
root@(none)*localhost 02:30:50> GRANT RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT ON *.* TO 'sst'@'localhost' IDENTIFIED BY 'xxxxxx';
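A quick sanity check that the credentials configured in wsrep_sst_auth actually work (a sketch, assuming the placeholder password xxxxxx set above):
[root@centos-mysql-pxc-1 logs]# mysql -usst -pxxxxxx -e "SHOW GRANTS FOR CURRENT_USER();"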
[root@centos-mysql-pxc-2 logs]# ps -ef | grep -i -E "pxc|percona|mysql" | grep -v grep
root 2260 1 0 03:05 ? 00:00:00 /bin/sh /app/mysql/bin/mysqld_safe --datadir=/data/mysql/data --pid-file=/data/mysql/logs/mysqld.pid
mysql 2950 2260 0 03:05 ? 00:00:02 /app/mysql/bin/mysqld --basedir=/app/mysql --datadir=/data/mysql/data --plugin-dir=/app/mysql/lib/mysql/plugin --user=mysql --wsrep-provider=/app/mysql/lib/libgalera_smm.so --log-error=/data/mysql/logs/mysqld.err --pid-file=/data/mysql/logs/mysqld.pid --socket=/data/mysql/tmp/mysql.sock --port=3306 --wsrep_start_position=00000000-0000-0000-0000-000000000000:-1
Starting the second and subsequent PXC nodes is different from the first node: simply start them like an ordinary mysql service, as follows:
[root@centos-mysql-pxc-2 logs]# /etc/init.d/mysqld start
Reloading systemd: [ OK ]
Starting mysqld (via systemctl): [ OK ]
The startup log is in the attachment:
Testing shows that the root password on the second node has been synchronized to match the first node's, and the sst user was created automatically as well, because both were replicated over when the node joined the PXC cluster.
root@(none)*localhost 03:24:37> show global status like 'wsrep%';
+----------------------------------+--------------------------------------+
| Variable_name | Value |
+----------------------------------+--------------------------------------+
| wsrep_local_state_uuid | f6dea560-74d9-11e9-965d-fe8f6c4df7e6 |
| wsrep_protocol_version | 8 |
| wsrep_last_applied | 3 |
| wsrep_last_committed | 3 |
| wsrep_replicated | 3 |
| wsrep_replicated_bytes | 744 |
| wsrep_repl_keys | 3 |
| wsrep_repl_keys_bytes | 96 |
| wsrep_repl_data_bytes | 443 |
| wsrep_repl_other_bytes | 0 |
| wsrep_received | 6 |
| wsrep_received_bytes | 462 |
| wsrep_local_commits | 0 |
| wsrep_local_cert_failures | 0 |
| wsrep_local_replays | 0 |
| wsrep_local_send_queue | 0 |
| wsrep_local_send_queue_max | 1 |
| wsrep_local_send_queue_min | 0 |
| wsrep_local_send_queue_avg | 0.000000 |
| wsrep_local_recv_queue | 0 |
| wsrep_local_recv_queue_max | 2 |
| wsrep_local_recv_queue_min | 0 |
| wsrep_local_recv_queue_avg | 0.166667 |
| wsrep_local_cached_downto | 1 |
| wsrep_flow_control_paused_ns | 0 |
| wsrep_flow_control_paused | 0.000000 |
| wsrep_flow_control_sent | 0 |
| wsrep_flow_control_recv | 0 |
| wsrep_flow_control_interval | [ 141, 141 ] |
| wsrep_flow_control_interval_low | 141 |
| wsrep_flow_control_interval_high | 141 |
| wsrep_flow_control_status | OFF |
| wsrep_cert_deps_distance | 1.000000 |
| wsrep_apply_oooe | 0.000000 |
| wsrep_apply_oool | 0.000000 |
| wsrep_apply_window | 1.000000 |
| wsrep_commit_oooe | 0.000000 |
| wsrep_commit_oool | 0.000000 |
| wsrep_commit_window | 1.000000 |
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
| wsrep_cert_index_size | 1 |
| wsrep_cert_bucket_count | 22 |
| wsrep_gcache_pool_size | 2320 |
| wsrep_causal_reads | 0 |
| wsrep_cert_interval | 0.000000 |
| wsrep_ist_receive_status | |
| wsrep_ist_receive_seqno_start | 0 |
| wsrep_ist_receive_seqno_current | 0 |
| wsrep_ist_receive_seqno_end | 0 |
| wsrep_incoming_addresses | 192.168.56.9:3306,192.168.56.8:3306 |
| wsrep_desync_count | 0 |
| wsrep_evs_delayed | |
| wsrep_evs_evict_list | |
| wsrep_evs_repl_latency | 0/0/0/0/0 |
| wsrep_evs_state | OPERATIONAL |
| wsrep_gcomm_uuid | f6de1435-74d9-11e9-8ce8-df425f70ab7b |
| wsrep_cluster_conf_id | 2 |
| wsrep_cluster_size | 2 |
| wsrep_cluster_state_uuid | f6dea560-74d9-11e9-965d-fe8f6c4df7e6 |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
| wsrep_local_bf_aborts | 0 |
| wsrep_local_index | 1 |
| wsrep_provider_name | Galera |
| wsrep_provider_vendor            | Codership Oy <info@codership.com>    |
| wsrep_provider_version | 3.26(r) |
| wsrep_ready | ON |
+----------------------------------+--------------------------------------+
68 rows in set (0.00 sec)
root@(none)*localhost 03:24:40> show global status like 'wsrep%';
+----------------------------------+--------------------------------------+
| Variable_name | Value |
+----------------------------------+--------------------------------------+
| wsrep_local_state_uuid | f6dea560-74d9-11e9-965d-fe8f6c4df7e6 |
| wsrep_protocol_version | 8 |
| wsrep_last_applied | 3 |
| wsrep_last_committed | 3 |
| wsrep_replicated | 0 |
| wsrep_replicated_bytes | 0 |
| wsrep_repl_keys | 0 |
| wsrep_repl_keys_bytes | 0 |
| wsrep_repl_data_bytes | 0 |
| wsrep_repl_other_bytes | 0 |
| wsrep_received | 3 |
| wsrep_received_bytes | 244 |
| wsrep_local_commits | 0 |
| wsrep_local_cert_failures | 0 |
| wsrep_local_replays | 0 |
| wsrep_local_send_queue | 0 |
| wsrep_local_send_queue_max | 1 |
| wsrep_local_send_queue_min | 0 |
| wsrep_local_send_queue_avg | 0.000000 |
| wsrep_local_recv_queue | 0 |
| wsrep_local_recv_queue_max | 1 |
| wsrep_local_recv_queue_min | 0 |
| wsrep_local_recv_queue_avg | 0.000000 |
| wsrep_local_cached_downto | 0 |
| wsrep_flow_control_paused_ns | 0 |
| wsrep_flow_control_paused | 0.000000 |
| wsrep_flow_control_sent | 0 |
| wsrep_flow_control_recv | 0 |
| wsrep_flow_control_interval | [ 141, 141 ] |
| wsrep_flow_control_interval_low | 141 |
| wsrep_flow_control_interval_high | 141 |
| wsrep_flow_control_status | OFF |
| wsrep_cert_deps_distance | 0.000000 |
| wsrep_apply_oooe | 0.000000 |
| wsrep_apply_oool | 0.000000 |
| wsrep_apply_window | 0.000000 |
| wsrep_commit_oooe | 0.000000 |
| wsrep_commit_oool | 0.000000 |
| wsrep_commit_window | 0.000000 |
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
| wsrep_cert_index_size | 0 |
| wsrep_cert_bucket_count | 22 |
| wsrep_gcache_pool_size | 1456 |
| wsrep_causal_reads | 0 |
| wsrep_cert_interval | 0.000000 |
| wsrep_ist_receive_status | |
| wsrep_ist_receive_seqno_start | 0 |
| wsrep_ist_receive_seqno_current | 0 |
| wsrep_ist_receive_seqno_end | 0 |
| wsrep_incoming_addresses | 192.168.56.9:3306,192.168.56.8:3306 |
| wsrep_desync_count | 0 |
| wsrep_evs_delayed | |
| wsrep_evs_evict_list | |
| wsrep_evs_repl_latency | 0/0/0/0/0 |
| wsrep_evs_state | OPERATIONAL |
| wsrep_gcomm_uuid | db38a228-74e8-11e9-9799-86d6b680ceac |
| wsrep_cluster_conf_id | 2 |
| wsrep_cluster_size | 2 |
| wsrep_cluster_state_uuid | f6dea560-74d9-11e9-965d-fe8f6c4df7e6 |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
| wsrep_local_bf_aborts | 0 |
| wsrep_local_index | 0 |
| wsrep_provider_name | Galera |
| wsrep_provider_vendor            | Codership Oy <info@codership.com>    |
| wsrep_provider_version | 3.26(r) |
| wsrep_ready | ON |
+----------------------------------+--------------------------------------+
68 rows in set (0.10 sec)
The cluster is healthy only when:
wsrep_cluster_status is Primary,
wsrep_connected is ON, and
wsrep_ready is ON.
Additionally, wsrep_local_index is the unique index of each node within the cluster, counted from 0.
[Note]:
wsrep_local_state_comment (node state) values:
OPEN: the node has started and is trying to connect to the cluster; if that fails it exits or creates a new cluster, depending on configuration
PRIMARY: the node is in the PXC cluster and is trying to select a donor from the cluster for data synchronization
JOINER: the node is waiting to receive / is receiving the data transfer, and loads the data locally once the transfer completes
JOINED: the node has finished the data synchronization and is trying to catch up with the cluster's progress
SYNCED: the node is providing normal service: data reads and writes, cluster data replication, and SST requests from newly joining nodes
DONOR: the node is preparing or transferring full cluster data for a new node and is unavailable to clients.
The synchronization performed when a node first starts is carried out by the xtrabackup backup tool, so a successful sync also confirms that xtrabackup is working.
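A compact health check covering just the key indicators listed above (a sketch; substitute the root password you set earlier):
[root@centos-mysql-pxc-1 logs]# mysql -uroot -pxxxxxx -e "show global status where Variable_name in ('wsrep_cluster_status','wsrep_connected','wsrep_ready','wsrep_cluster_size','wsrep_local_state_comment');"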
root@(none)*192.168.56.8 03:47:17> create database testdb;
Query OK, 1 row affected (0.01 sec)
root@(none)*192.168.56.8 03:48:18> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
| testdb |
+--------------------+
5 rows in set (0.00 sec)
root@(none)*192.168.56.9 03:47:27> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
| testdb |
+--------------------+
5 rows in set (0.00 sec)
root@(none)*192.168.56.9 03:51:19> use testdb;
Database changed
root@testdb*192.168.56.9 03:52:20> create table test(id int auto_increment primary key,name varchar(100));
Query OK, 0 rows affected (0.05 sec)
root@testdb*192.168.56.9 03:52:56> show tables;
+------------------+
| Tables_in_testdb |
+------------------+
| test |
+------------------+
1 row in set (0.00 sec)
root@(none)*192.168.56.8 03:51:26> use testdb;
Database changed
root@testdb*192.168.56.8 03:54:03> show tables;
+------------------+
| Tables_in_testdb |
+------------------+
| test |
+------------------+
1 row in set (0.00 sec)
LVS (Linux Virtual Server) is a highly available virtual server cluster system. The project was started by Dr. Wensong Zhang in May 1998 and is one of the earliest free-software projects to come out of China. LVS provides load balancing across multiple servers and operates at the network layer. In an LVS cluster, the front-end load-balancing tier is called the Director Server, and the back-end tier of servers that provide the actual service is called the Real Servers.
Like iptables, LVS is implemented in the OS kernel network stack, so the corresponding kernel module is called IPVS.
IPVS: the implementation inside the kernel protocol stack.
ipvsadm: the user-space management tool for cluster services, i.e. the LVS command-line administration tool, usually installed separately.
[root@centos-mysql-mycat-1 tmp]# lsmod | grep -i ip_vs
ip_vs 140944 0
libcrc32c 12644 1 ip_vs
nf_conntrack 105745 6 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_ipv4
[Note]: Output like the above indicates that the IPVS kernel module is loaded.
[root@centos-mysql-mycat-1 tmp]# grep -A3 -i "IPVS" /boot/config-3.10.0-327.el7.x86_64
[Note]: Output like the above indicates that IPVS support is enabled in the kernel configuration.
[Note]: If these checks show that the IPVS module is not loaded, try modprobe ip_vs to load it, then run the checks again.
[root@centos-mysql-mycat-1 tmp]# echo "1" > /proc/sys/net/ipv4/ip_forward # usually enabled by default
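Writing to /proc does not survive a reboot; a sketch of making the setting persistent (the file name 99-lvs.conf is an arbitrary choice):
[root@centos-mysql-mycat-1 tmp]# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.d/99-lvs.conf
[root@centos-mysql-mycat-1 tmp]# sysctl --system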
[root@centos-mysql-mycat-1 tmp]# rpm -qa | grep -i ipvsadm
[root@centos-mysql-mycat-1 tmp]# yum install -y ipvsadm
[root@centos-mysql-mycat-1 tmp]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
[Note]: ipvsadm has just been installed and no forwarding has been configured yet, so the configuration and forwarding entries above are empty.
[root@centos-mysql-mycat-2 tmp]# ipvsadm -A -t 192.168.56.99:3306 -s rr ## 192.168.56.99 is the LVS VIP. This only defines the LVS virtual service; the VIP itself is not up yet and is brought up by the keepalived configuration below.
[root@centos-mysql-mycat-1 tmp]# ipvsadm -a -t 192.168.56.99:3306 -r 192.168.56.8:3306 -g ## -a adds a real server, -g selects DR (direct routing) forwarding
[root@centos-mysql-mycat-1 tmp]# ipvsadm -a -t 192.168.56.99:3306 -r 192.168.56.9:3306 -g
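After these three commands the virtual service can be verified with ipvsadm -Ln; the expected output matches what is shown in 7.4 below:
[root@centos-mysql-mycat-1 tmp]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.56.99:3306 rr
-> 192.168.56.8:3306 Route 1 0 0
-> 192.168.56.9:3306 Route 1 0 0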
Next, install and configure keepalived on the two LVS nodes, 192.168.56.6 and 192.168.56.7, so that an LVS single point of failure cannot disrupt database access. The servers monitor each other through keepalived heartbeats; if one machine fails, the other takes over, and the whole process is transparent to users.
[root@centos-mysql-mycat-1 tmp]# yum install -y keepalived
Configure keepalived.conf:
[root@centos-mysql-mycat-1 tmp]# cp -p /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.org
[root@centos-mysql-mycat-1 tmp]# echo > /etc/keepalived/keepalived.conf ## clear the default configuration first, then reconfigure
[root@centos-mysql-mycat-1 tmp]# vi /etc/keepalived/keepalived.conf ## re-create it with the following content
! Configuration File for keepalived
global_defs {
router_id LVS_HA ## identifier for this keepalived instance
}
vrrp_sync_group VG1 {
group {
VI_1
}
}
vrrp_instance VI_1 {
state BACKUP ## deliberately not MASTER: configure BACKUP on every keepalived host, so that when the original MASTER recovers after an outage it does not grab the MASTER role back, avoiding a blip in the MySQL service (the nopreempt option below is also required, otherwise the VIP is still preempted)
interface enp0s3 ## in my environment the first NIC on CentOS is enp0s3; on other Linux systems it is typically eth0
virtual_router_id 51 ## must be identical within a cluster; valid values are 1-255
priority 100 ## priority; set the LVS standby lower, e.g. 90
advert_int 1
nopreempt ## non-preemptive mode, used together with state BACKUP above: when the original MASTER recovers it will not take the VIP back, avoiding a blip in the MySQL service (with state MASTER, the VIP is preempted on recovery even with this option; only state BACKUP + nopreempt prevents preemption)
authentication {
auth_type PASS ## password authentication; the password must not exceed 8 characters
auth_pass 1234
}
virtual_ipaddress {
192.168.56.99/24 brd 192.168.56.255 dev enp0s3 label enp0s3:vip ## the VIP that fails over automatically. In Samdy's testing, the label enp0s3:vip virtual-IP label is required (i.e. 192.168.56.99/24 ... label enp0s3:vip); without it the startup log shows no error, but the VIP never comes up and ifconfig does not show it
}
}
virtual_server 192.168.56.99 3306 { # define the virtual server; the address matches virtual_ipaddress above
delay_loop 3 # health-check interval: 3 seconds
lb_algo rr # load-balancing scheduler: rr|wrr|lc|wlc|sh|dh|lblc
lb_kind DR # forwarding method: NAT|DR|TUN
# persistence_timeout 5 # 5-second session persistence; recommended for stateful services
protocol TCP # forwarding protocol, usually TCP or UDP
# one real_server block per back-end server
real_server 192.168.56.8 3306 {
weight 1 # a higher weight receives a larger share of the load; 0 disables the server
TCP_CHECK {
connect_timeout 3 # connection timeout in seconds
nb_get_retry 3 # retries after a failed check; once exhausted, the back end is removed from the server pool
delay_before_retry 3 # interval between retries, in seconds
connect_port 3306 # port to check
}
}
real_server 192.168.56.9 3306 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
connect_port 3306
}
}
}
The three keepalived states:
1) BACKUP
2) MASTER
3) FAULT: the node tried to enter the MASTER state and failed
Start the keepalived service on 192.168.56.6 (LVS1: centos-mysql-mycat-1) first:
[root@centos-mysql-mycat-1 tmp]# systemctl start keepalived.service
Then on 192.168.56.7 (LVS2: centos-mysql-mycat-2):
[root@centos-mysql-mycat-2 tmp]# systemctl start keepalived.service
Testing confirms that after stopping the keepalived service on LVS1 (192.168.56.6), the VIP (192.168.56.99) automatically fails over to LVS2 (192.168.56.7), and the ipvs forwarding rules take effect there as well. (In fact, the ipvs rules are in effect as soon as the keepalived service starts, whether or not the VIP has moved over; without the VIP they simply receive no connection requests and so do nothing.)
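A sketch of verifying the failover on LVS2 after stopping keepalived on LVS1 (assuming the interface name enp0s3 from the configuration above):
[root@centos-mysql-mycat-2 tmp]# ip addr show enp0s3 | grep 192.168.56.99 ## the VIP should now be here
[root@centos-mysql-mycat-2 tmp]# ipvsadm -Ln ## the forwarding rules are also in effect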
[root@centos-mysql-pxc-1 tmp]# vi /etc/init.d/realserver
#!/bin/sh
VIP=192.168.56.99 # must be the same VIP as on the LVS nodes; the RS binds the VIP to the lo loopback interface so it can reply to source clients directly
. /etc/rc.d/init.d/functions
case "$1" in
start)
/sbin/ifconfig lo down
/sbin/ifconfig lo up
# suppress local ARP for the VIP, then bind it to the local loopback address
echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
# /sbin/sysctl -p > /dev/null 2>&1
/sbin/ifconfig lo:vip $VIP broadcast $VIP netmask 255.255.255.255 up # bind the VIP on the loopback interface; the netmask must be 255.255.255.255
/sbin/route add -host $VIP dev lo:vip
echo "LVS-DR Real Server started successfully."
;;
stop)
/sbin/ifconfig lo:vip down
/sbin/route del $VIP > /dev/null 2>&1
echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
echo "LVS-DR Real Server stopped."
;;
status)
isLoOn=`/sbin/ifconfig lo:vip 2>/dev/null | grep "$VIP"`
isRoOn=`/bin/netstat -rn | grep "$VIP"`
if [ "$isLoOn" = "" -a "$isRoOn" = "" ]; then
echo "LVS-DR Real Server is stopped."
exit 3
else
echo "LVS-DR Real Server is running."
exit 0
fi
;;
*)
echo "Usage: $0 {start|stop|status}"
exit 1
esac
exit 0
[root@centos-mysql-pxc-1 tmp]# chmod u+x /etc/init.d/realserver
[root@centos-mysql-pxc-1 tmp]# service realserver start
Starting realserver (via systemctl): [ OK ]
[root@centos-mysql-pxc-1 tmp]# service realserver status
LVS-DR Real Server is running.
[root@centos-mysql-pxc-1 tmp]# ifconfig -a lo:vip
lo:vip: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 192.168.56.99  netmask 255.255.255.255
        loop  txqueuelen 0  (Local Loopback)
[root@centos-mysql-pxc-1 tmp]# cat /proc/sys/net/ipv4/conf/lo/arp_ignore
1
[root@centos-mysql-pxc-1 tmp]# cat /proc/sys/net/ipv4/conf/lo/arp_announce
2
[root@centos-mysql-pxc-1 tmp]# cat /proc/sys/net/ipv4/conf/all/arp_ignore
1
[root@centos-mysql-pxc-1 tmp]# cat /proc/sys/net/ipv4/conf/all/arp_announce
2
[root@centos-mysql-pxc-1 tmp]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.56.1 0.0.0.0 UG 100 0 0 enp0s3
0.0.0.0 10.0.2.1 0.0.0.0 UG 101 0 0 enp0s8
10.0.2.0 0.0.0.0 255.255.255.0 U 100 0 0 enp0s8
192.168.56.0 0.0.0.0 255.255.255.0 U 100 0 0 enp0s3
192.168.56.99 0.0.0.0 255.255.255.255 UH 0 0 0 lo
192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0
or
[root@centos-mysql-pxc-1 tmp]# netstat -rn | grep "192.168.56.99"
192.168.56.99 0.0.0.0 255.255.255.255 UH 0 0 0 lo
[root@centos-mysql-pxc-1 tmp]# mysqld_safe --user=mysql --wsrep-cluster-address=gcomm:// & # the first PXC node must be started with --wsrep-cluster-address=gcomm:// or --wsrep-new-cluster
[root@centos-mysql-pxc-2 tmp]# mysqld_safe --user=mysql & # subsequent PXC nodes start in the normal way
[root@centos-mysql-pxc-1 tmp]# /etc/init.d/realserver start
[Note]: After starting, run the checks from 6.2 "Start /etc/init.d/realserver and test" above to verify that the VIP on the loopback interface lo is up and that the arp kernel parameters have taken effect.
[root@centos-mysql-mycat-1 tmp]# systemctl start keepalived.service
[root@centos-mysql-mycat-1 tmp]# ifconfig enp0s3:vip # the VIP is up
enp0s3:vip: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.56.99  netmask 255.255.255.0  broadcast 192.168.56.255
        ether 08:00:27:3e:89:b4  txqueuelen 1000  (Ethernet)
[root@centos-mysql-mycat-1 tmp]# ipvsadm -L -n # the LVS cluster and forwarding rules are in effect
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.56.99:3306 rr
-> 192.168.56.8:3306 Route 1 0 0
-> 192.168.56.9:3306 Route 1 0 0
As the output above shows, because LVS uses the rr (round-robin) scheduler, connections are handed out to the back-end RS (PXC) nodes in turn, without regard to their load.
[Note]: Testing shows that you cannot connect through the VIP to the RS (PXC) back ends from the LVS node that currently holds the VIP — the connection hangs. From the other LVS node (keepalived running, but the VIP not held there), connections through the VIP to the RS (PXC) back ends work.
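A minimal distribution test, run from a host that does not hold the VIP (per the note above); USER/PASS are placeholders for an account that is allowed to connect from that host — the sst user created earlier is localhost-only:
[root@centos-mysql-mycat-2 tmp]# for i in 1 2 3 4; do mysql -uUSER -pPASS -h192.168.56.99 -P3306 -N -e "select @@wsrep_node_address;"; done
With the rr scheduler, the output should alternate between 192.168.56.8 and 192.168.56.9.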
[root@centos-mysql-mycat-1 tmp]# ipvsadm -Lnc
IPVS connection entries
pro expire state source virtual destination
TCP 01:57 FIN_WAIT 192.168.56.7:60651 192.168.56.99:3306 192.168.56.8:3306
TCP 14:46 ESTABLISHED 192.168.56.7:60650 192.168.56.99:3306 192.168.56.9:3306
TCP 01:05 FIN_WAIT 192.168.56.7:60649 192.168.56.99:3306 192.168.56.8:3306
[Explanation]: FIN_WAIT means a connection closed on the RS back end has not been released yet (in this test, exit was typed in the database client); after a short wait it is released automatically and the connection entry disappears.
The FIN_WAIT lifetime is the tcpfin timeout, 120 seconds by default, which can be queried as follows:
[root@centos-mysql-mycat-1 tmp]# ipvsadm -L --timeout
Timeout (tcp tcpfin udp): 900 120 300
The corresponding command to change these values is: ipvsadm --set 900 120 300
ESTABLISHED means a connection currently in normal use.
The states in the [state] column are:
CLOSED: no connection is active or pending
LISTEN: the server is waiting for an incoming connection
SYN_RECV: a connection request has arrived; waiting for confirmation
SYN_SENT: the application has started to open a connection
ESTABLISHED: the normal data-transfer state
FIN_WAIT1: the application has said it is finished
FIN_WAIT2: the other side has agreed to release the connection
TIME_WAIT: waiting after close for packets still in the network to die out
CLOSING: both sides are trying to close simultaneously
LAST_ACK: the other side has initiated a release; waiting for the final acknowledgement
A more detailed explanation is in the attachment:
This article is the author's original work; please do not repost.