For the theory behind redis-cluster, see this article:
https://blog.csdn.net/truelove12358/article/details/79612954
Note: this walkthrough builds a pseudo-cluster by running 6 redis instances across 2 servers.
Environment:
Server 1: 10.129.24.16, redis instances 5001, 5002, 5003
Server 2: 10.129.24.7, redis instances 5004, 5005, 5006
1. Preparation
1.1 Install redis
[root@Redis-RS01 src]# tar -zxvf redis-3.0.7.tar.gz
[root@Redis-RS01 src]# cd redis-3.0.7
[root@Redis-RS01 redis-3.0.7]# make && make PREFIX=/usr/local/redis install
1.2 Configuration file
daemonize yes
pidfile /var/run/redis_5001.pid
port 5001
bind 10.129.24.16
logfile "/usr/local/redis-cluster/5001/log/redis.log"
dir /usr/local/redis-cluster/5001/data
cluster-enabled yes
cluster-config-file nodes5001.conf
cluster-node-timeout 15000
appendonly yes
1.3 Create the cluster directories (one per redis instance)
[root@Redis-RS01 redis-3.0.7]# mkdir -p /usr/local/redis-cluster/{5001/{data,log},5002/{data,log},5003/{data,log}}
[root@Redis-RS01 redis-3.0.7]# cp redis.conf /usr/local/redis-cluster/5001/
[root@Redis-RS01 redis-3.0.7]# cp redis.conf /usr/local/redis-cluster/5002/
[root@Redis-RS01 redis-3.0.7]# sed -i 's/5001/5002/g' /usr/local/redis-cluster/5002/redis.conf
[root@Redis-RS01 redis-3.0.7]# cp redis.conf /usr/local/redis-cluster/5003/
[root@Redis-RS01 redis-3.0.7]# sed -i 's/5001/5003/g' /usr/local/redis-cluster/5003/redis.conf
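The mkdir/cp/sed steps above can be scripted; here is a minimal Python sketch that renders one directory tree and config per port from a template. The /tmp base path is illustrative only, and the template simply mirrors the config fragment shown in 1.2:

```python
import os

# Template mirroring the redis.conf fragment in 1.2; {port}, {ip} and {base}
# are filled in per instance.
TEMPLATE = """daemonize yes
pidfile /var/run/redis_{port}.pid
port {port}
bind {ip}
logfile "{base}/{port}/log/redis.log"
dir {base}/{port}/data
cluster-enabled yes
cluster-config-file nodes{port}.conf
cluster-node-timeout 15000
appendonly yes
"""

def write_configs(base, ip, ports):
    """Create <base>/<port>/{data,log} and a redis.conf for each port."""
    for port in ports:
        for sub in ("data", "log"):
            os.makedirs(os.path.join(base, str(port), sub), exist_ok=True)
        with open(os.path.join(base, str(port), "redis.conf"), "w") as f:
            f.write(TEMPLATE.format(port=port, ip=ip, base=base))

write_configs("/tmp/redis-cluster", "10.129.24.16", [5001, 5002, 5003])
```

This avoids the easy-to-make mistake in the manual sed approach: editing the wrong file when copy-pasting the command for each new port.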
2. Configure the redis cluster
2.1 In Redis 3.x the cluster is created with the redis-trib.rb script, which is written in Ruby, so a Ruby environment must be installed first
[root@Redis-RS01 redis-3.0.7]# yum -y install ruby
[root@Redis-RS01 redis-3.0.7]# yum -y install rubygems
[root@Redis-RS01 redis-3.0.7]# gem install redis
Installing the redis gem here may fail with:
gem install redis
ERROR: Error installing redis:
redis requires Ruby version >= 2.2.2.
Fix: install a Ruby version >= 2.2.2
[root@Redis-RS01 redis-3.0.7]# gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3 7D2BAF1CF37B13E2069D6956105BD0E739499BDB
[root@Redis-RS01 redis-3.0.7]# curl -sSL https://get.rvm.io | bash -s stable
[root@Redis-RS01 redis-3.0.7]# source /usr/local/rvm/scripts/rvm
[root@Redis-RS01 redis-3.0.7]# rvm install 2.4.1
[root@Redis-RS01 redis-3.0.7]# rvm use 2.4.1
[root@Redis-RS01 redis-3.0.7]# rvm use 2.4.1 --default
[root@Redis-RS01 redis-3.0.7]# gem install redis
Fetching: redis-4.0.1.gem (100%)
Successfully installed redis-4.0.1
Parsing documentation for redis-4.0.1
Installing ri documentation for redis-4.0.1
Done installing documentation for redis after 3 seconds
1 gem installed
Set up the other server the same way; the steps are not repeated here. Just make sure the ports and directories in each config file are correct.
2.2 Start all redis instances
[root@Redis-RS01 redis-3.0.7]# /usr/local/redis/bin/redis-server /usr/local/redis-cluster/5001/redis.conf
[root@Redis-RS01 redis-3.0.7]# /usr/local/redis/bin/redis-server /usr/local/redis-cluster/5002/redis.conf
[root@Redis-RS01 redis-3.0.7]# /usr/local/redis/bin/redis-server /usr/local/redis-cluster/5003/redis.conf
3. Create the redis cluster
3.1 redis-trib.rb subcommands
1. create: create a cluster
2. check: check a cluster
3. info: show cluster information
4. fix: repair a cluster
5. reshard: migrate slots online
6. rebalance: balance the slot count across cluster nodes
7. add-node: add a new node to the cluster
8. del-node: remove a node from the cluster
9. set-timeout: set the heartbeat timeout between cluster nodes
10. call: run a command on every node in the cluster
11. import: import data from an external redis into the cluster
[root@Redis-RS01 5001]# /usr/local/redis-cluster/bin/redis-trib.rb create --replicas 1 10.129.24.16:5001 10.129.24.16:5002 10.129.24.16:5003 10.129.24.7:5004 10.129.24.7:5005 10.129.24.7:5006
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
10.129.24.16:5001
10.129.24.7:5004
10.129.24.16:5002
Adding replica 10.129.24.7:5005 to 10.129.24.16:5001
Adding replica 10.129.24.16:5003 to 10.129.24.7:5004
Adding replica 10.129.24.7:5006 to 10.129.24.16:5002
M: 5cf7204523d806c7cf1bac7eb0b2fd4fcbf2a94b 10.129.24.16:5001
   slots:0-5460 (5461 slots) master
M: 85b7d3393161eef4b5faebc75b5f3be6c3cca2d9 10.129.24.16:5002
   slots:10923-16383 (5461 slots) master
S: 51d0615f10b6912e1e88bbca6cd4e8d843541f3a 10.129.24.16:5003
   replicates bdf828d85ecb97612d16dc20803d25ade427e7e6
M: bdf828d85ecb97612d16dc20803d25ade427e7e6 10.129.24.7:5004
   slots:5461-10922 (5462 slots) master
S: 4a06022c8d4354cecbce3dc769173be0b100a939 10.129.24.7:5005
   replicates 5cf7204523d806c7cf1bac7eb0b2fd4fcbf2a94b
S: 958e79522a2eb81134a2e62d9fc991fe14320af4 10.129.24.7:5006
   replicates 85b7d3393161eef4b5faebc75b5f3be6c3cca2d9
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join.......
>>> Performing Cluster Check (using node 10.129.24.16:5001)
M: 5cf7204523d806c7cf1bac7eb0b2fd4fcbf2a94b 10.129.24.16:5001
   slots:0-5460 (5461 slots) master
M: 85b7d3393161eef4b5faebc75b5f3be6c3cca2d9 10.129.24.16:5002
   slots:10923-16383 (5461 slots) master
M: 51d0615f10b6912e1e88bbca6cd4e8d843541f3a 10.129.24.16:5003
   slots: (0 slots) master
   replicates bdf828d85ecb97612d16dc20803d25ade427e7e6
M: bdf828d85ecb97612d16dc20803d25ade427e7e6 10.129.24.7:5004
   slots:5461-10922 (5462 slots) master
M: 4a06022c8d4354cecbce3dc769173be0b100a939 10.129.24.7:5005
   slots: (0 slots) master
   replicates 5cf7204523d806c7cf1bac7eb0b2fd4fcbf2a94b
M: 958e79522a2eb81134a2e62d9fc991fe14320af4 10.129.24.7:5006
   slots: (0 slots) master
   replicates 85b7d3393161eef4b5faebc75b5f3be6c3cca2d9
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@Redis-RS01 5001]#
--replicas 1 means create one replica (slave) for each master node.
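With six nodes and --replicas 1, redis-trib picks three masters and divides the 16384 hash slots between them, which is why the output above shows ranges of 5461, 5462 and 5461 slots. A small sketch of that arithmetic in plain Python (not part of redis-trib; which master receives the remainder slot may differ from the tool's actual choice):

```python
def split_slots(n_masters, total=16384):
    """Divide `total` hash slots into contiguous, inclusive (start, end) ranges."""
    base, extra = divmod(total, n_masters)
    ranges, start = [], 0
    for i in range(n_masters):
        size = base + (1 if i < extra else 0)  # spread the remainder slot(s)
        ranges.append((start, start + size - 1))
        start += size
    return ranges

print(split_slots(3))  # three contiguous ranges covering slots 0..16383
```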
At this point the redis cluster has been created; next we verify it.
4. Verification
4.1 Connect to redis
[root@Redis-RS01 5001]# /usr/local/redis/bin/redis-cli -c -h 10.129.24.16 -p 5001
10.129.24.16:5001> keys *
(empty list or set)
10.129.24.16:5001>
4.2 Set a key
10.129.24.16:5001> set name "john.gou"
-> Redirected to slot [5798] located at 10.129.24.7:5004
OK
10.129.24.7:5004>
The write succeeded: the cluster hashed the key to slot 5798 and redirected it to the master instance on 5004.
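That slot number is not arbitrary: Redis Cluster maps every key to a slot with CRC16(key) mod 16384, using the XModem/CCITT variant of CRC16. A minimal sketch of the computation (hash tags like {user}, which Redis also honors, are omitted here):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XModem): polynomial 0x1021, init 0x0000, no reflection."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def keyslot(key: str) -> int:
    # Real Redis first looks for a {hash tag} in the key; omitted in this sketch.
    return crc16_xmodem(key.encode()) % 16384

print(keyslot("name"))  # slot 5798, matching the redirect above
```

Any client (or redis-cli with CLUSTER KEYSLOT) performs this same mapping, which is why every node can redirect you to the right master.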
4.3 Connect to the other instances and try to read the key we just set
[root@Redis-RS01 5001]# /usr/local/redis/bin/redis-cli -c -h 10.129.24.16 -p 5002
10.129.24.16:5002> get name
-> Redirected to slot [5798] located at 10.129.24.7:5004
"john.gou"
[root@Redis-RS01 5001]# /usr/local/redis/bin/redis-cli -c -h 10.129.24.7 -p 5005
10.129.24.7:5005> get name
-> Redirected to slot [5798] located at 10.129.24.7:5004
"john.gou"
[root@Redis-RS01 5001]# /usr/local/redis/bin/redis-cli -c -h 10.129.24.16 -p 5003
10.129.24.16:5003> get name
-> Redirected to slot [5798] located at 10.129.24.7:5004
"john.gou"
The key can be read from the other instances as well; each one redirects to the owning master.
4.4 Check the cluster's current master/slave layout
[root@iZ28tqwgn5qZ redis-cluster]# /usr/local/redis-cluster/bin/redis-trib.rb check 10.129.24.16:5001
>>> Performing Cluster Check (using node 10.129.24.16:5001)
M: 5cf7204523d806c7cf1bac7eb0b2fd4fcbf2a94b 10.129.24.16:5001
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 85b7d3393161eef4b5faebc75b5f3be6c3cca2d9 10.129.24.16:5002
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 958e79522a2eb81134a2e62d9fc991fe14320af4 10.129.24.7:5006
   slots: (0 slots) slave
   replicates 85b7d3393161eef4b5faebc75b5f3be6c3cca2d9
M: bdf828d85ecb97612d16dc20803d25ade427e7e6 10.129.24.7:5004
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 4a06022c8d4354cecbce3dc769173be0b100a939 10.129.24.7:5005
   slots: (0 slots) slave
   replicates 5cf7204523d806c7cf1bac7eb0b2fd4fcbf2a94b
S: 51d0615f10b6912e1e88bbca6cd4e8d843541f3a 10.129.24.16:5003
   slots: (0 slots) slave
   replicates bdf828d85ecb97612d16dc20803d25ade427e7e6
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
5001, 5004 and 5002 are the masters; their slaves are 5005, 5003 and 5006 respectively. Next we shut down the 5004 master to verify that its slave 5003 is promoted to master in its place.
4.5 Kill the 5004 instance and verify data availability
[root@iZ28tqwgn5qZ redis-cluster]# ps -ef |grep redis
root      2550     1  0 15:32 ?        00:00:01 /usr/local/redis/bin/redis-server 10.129.24.7:5004 [cluster]
root      2558     1  0 15:32 ?        00:00:01 /usr/local/redis/bin/redis-server 10.129.24.7:5005 [cluster]
root      2566     1  0 15:32 ?        00:00:01 /usr/local/redis/bin/redis-server 10.129.24.7:5006 [cluster]
root      3876 12717  0 15:58 pts/7    00:00:00 grep redis
[root@iZ28tqwgn5qZ redis-cluster]# kill -9 2550
[root@iZ28tqwgn5qZ redis-cluster]# /usr/local/redis/bin/redis-cli -c -h 10.129.24.16 -p 5001
10.129.24.16:5001> get name
-> Redirected to slot [5798] located at 10.129.24.16:5003
"john.gou"
10.129.24.16:5003>
The request is now redirected to the 5003 instance, where it previously went to 5004.
4.6 Check the cluster status again
[root@iZ28tqwgn5qZ redis-cluster]# /usr/local/redis-cluster/bin/redis-trib.rb check 10.129.24.16:5001
>>> Performing Cluster Check (using node 10.129.24.16:5001)
M: 5cf7204523d806c7cf1bac7eb0b2fd4fcbf2a94b 10.129.24.16:5001
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 85b7d3393161eef4b5faebc75b5f3be6c3cca2d9 10.129.24.16:5002
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 958e79522a2eb81134a2e62d9fc991fe14320af4 10.129.24.7:5006
   slots: (0 slots) slave
   replicates 85b7d3393161eef4b5faebc75b5f3be6c3cca2d9
S: 4a06022c8d4354cecbce3dc769173be0b100a939 10.129.24.7:5005
   slots: (0 slots) slave
   replicates 5cf7204523d806c7cf1bac7eb0b2fd4fcbf2a94b
M: 51d0615f10b6912e1e88bbca6cd4e8d843541f3a 10.129.24.16:5003
   slots:5461-10922 (5462 slots) master
   0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
The 5003 instance has taken over from 5004 as master.
4.7 Restart the 5004 instance and verify whether it rejoins the cluster and whether it becomes master again
[root@iZ28tqwgn5qZ redis-cluster]# /usr/local/redis/bin/redis-server ./5004/redis.conf
[root@Redis-RS01 redis-cluster]# ./bin/redis-trib.rb check 10.129.24.16:5001
>>> Performing Cluster Check (using node 10.129.24.16:5001)
M: 5cf7204523d806c7cf1bac7eb0b2fd4fcbf2a94b 10.129.24.16:5001
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 85b7d3393161eef4b5faebc75b5f3be6c3cca2d9 10.129.24.16:5002
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 958e79522a2eb81134a2e62d9fc991fe14320af4 10.129.24.7:5006
   slots: (0 slots) slave
   replicates 85b7d3393161eef4b5faebc75b5f3be6c3cca2d9
S: bdf828d85ecb97612d16dc20803d25ade427e7e6 10.129.24.7:5004
   slots: (0 slots) slave
   replicates 51d0615f10b6912e1e88bbca6cd4e8d843541f3a
S: 4a06022c8d4354cecbce3dc769173be0b100a939 10.129.24.7:5005
   slots: (0 slots) slave
   replicates 5cf7204523d806c7cf1bac7eb0b2fd4fcbf2a94b
M: 51d0615f10b6912e1e88bbca6cd4e8d843541f3a 10.129.24.16:5003
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
As you can see, the recovered node does not take over from the current 5003 master; it rejoins as a slave, and will only be promoted again if 5003 itself fails.
5. Adding a master node to the cluster
5.1 Prepare a new instance
root@master1:/usr/local/redis# mkdir -p 6388/{data,log}
root@master1:/usr/local/redis# cat 6380/redis.conf >6388/redis.conf
root@master1:/usr/local/redis# sed -i 's/6380/6388/g' 6388/redis.conf
root@master1:/usr/local/redis# ./bin/redis-server 6388/redis.conf
root@master1:/usr/local/redis# ps -ef |grep 6388
root     23686     1  0 10:10 ?        00:00:00 ./bin/redis-server 192.168.3.143:6388 [cluster]
5.2 Add the new node to the cluster as a master
root@master1:/usr/local/redis# ./redis-trib.rb add-node 192.168.3.143:6388 192.168.3.143:6380
>>> Adding node 192.168.3.143:6388 to cluster 192.168.3.143:6380
>>> Performing Cluster Check (using node 192.168.3.143:6380)
M: 0c9fae90cfa5fac015f8e172af056f830056b79d 192.168.3.143:6380
   slots:0-4095 (4096 slots) master
   1 additional replica(s)
S: ab5898dde3de48ddf2829ae8a7359e274790f5b9 192.168.3.143:6381
   slots: (0 slots) slave
   replicates 1a4852c5bf5d751c011d7cd57764da015a24e468
S: f639f8ff30c7f2f3a92b2406cb3e640629358645 192.168.3.143:6384
   slots: (0 slots) slave
   replicates cacb64e70f0f142d6ef24b862f84eab074eb7bea
S: dd4da448f03f5bd415f9e62e3070e4050984ede0 192.168.3.143:6386
   slots: (0 slots) slave
   replicates 0c9fae90cfa5fac015f8e172af056f830056b79d
M: 1a4852c5bf5d751c011d7cd57764da015a24e468 192.168.3.143:6387
   slots:4096-8191 (4096 slots) master
   1 additional replica(s)
S: 788d5ef31c92f7e5f6d1020db46241c652a60f54 192.168.3.143:6385
   slots: (0 slots) slave
   replicates f4308e9dac62e18934ad29b25007c48691a2051f
M: f4308e9dac62e18934ad29b25007c48691a2051f 192.168.3.143:6382
   slots:8192-12287 (4096 slots) master
   1 additional replica(s)
M: cacb64e70f0f142d6ef24b862f84eab074eb7bea 192.168.3.143:6383
   slots:12288-16383 (4096 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.3.143:6388 to make it join the cluster.
[OK] New node added correctly.
5.3 Check the current cluster status
root@master1:/usr/local/redis# ./redis-trib.rb check 192.168.3.143:6380
>>> Performing Cluster Check (using node 192.168.3.143:6380)
... ... ...
M: e279f6728d4996cf864acb590b03747dae6191f4 192.168.3.143:6388
   slots: (0 slots) master
   0 additional replica(s)
The 6388 node has joined as a master, but it owns no slots yet; the cluster's slots still have to be reshard-ed to it.
5.4 Assign hash slots to the new master
Enter how many slots to move to the new master:
root@master1:/usr/local/redis# ./redis-trib.rb reshard 192.168.3.143:6380
>>> Performing Cluster Check (using node 192.168.3.143:6380)
... ...
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 2000
Enter the ID of the node that will receive the slots; here that is the 6388 node's ID:
What is the receiving node ID? e279f6728d4996cf864acb590b03747dae6191f4
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:all
Finally, enter which nodes to take the slots from; 'all' means draw them from every current master.
Check the cluster status after the reshard:
root@master1:/usr/local/redis# ./redis-trib.rb check 192.168.3.143:6380
>>> Performing Cluster Check (using node 192.168.3.143:6380)
... ...
M: e279f6728d4996cf864acb590b03747dae6191f4 192.168.3.143:6388
   slots:0-1169,4096-4619,8192-8714,12288-12810 (2740 slots) master
   0 additional replica(s)
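Because 'all' pulls slots from every master, the new node ends up with several non-contiguous ranges. Summing the inclusive ranges reported above confirms the total slot count redis-trib prints:

```python
def owned_slots(ranges):
    """Total slots in a list of inclusive (start, end) slot ranges."""
    return sum(end - start + 1 for start, end in ranges)

# The ranges reported for 6388 in the check output above.
ranges_6388 = [(0, 1169), (4096, 4619), (8192, 8714), (12288, 12810)]
print(owned_slots(ranges_6388))  # 2740, matching "(2740 slots)"
```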
5.5 Add a slave for the new master
Again, prepare a new instance:
root@master1:/usr/local/redis# mkdir -p 6389/{data,log}
root@master1:/usr/local/redis# cat 6380/redis.conf >6389/redis.conf
root@master1:/usr/local/redis# sed -i 's/6380/6389/g' 6389/redis.conf
root@master1:/usr/local/redis# ./bin/redis-server 6389/redis.conf
root@master1:/usr/local/redis# ps -ef |grep 6389
root     24132     1  0 10:49 ?        00:00:00 ./bin/redis-server 192.168.3.143:6389 [cluster]
Add the slave for the 6388 master:
root@master1:/usr/local/redis# ./redis-trib.rb add-node --slave --master-id e279f6728d4996cf864acb590b03747dae6191f4 192.168.3.143:6389 192.168.3.143:6380
>>> Adding node 192.168.3.143:6389 to cluster 192.168.3.143:6380
>>> Performing Cluster Check (using node 192.168.3.143:6380)
... ...
M: e279f6728d4996cf864acb590b03747dae6191f4 192.168.3.143:6388
   slots:0-1169,4096-4619,8192-8714,12288-12810 (2740 slots) master
   0 additional replica(s)
... ...
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.3.143:6389 to make it join the cluster.
Waiting for the cluster to join.
>>> Configure node as replica of 192.168.3.143:6388.
[OK] New node added correctly.
Check the current cluster status:
root@master1:/usr/local/redis# ./redis-trib.rb check 192.168.3.143:6380
>>> Performing Cluster Check (using node 192.168.3.143:6380)
... ...
M: e279f6728d4996cf864acb590b03747dae6191f4 192.168.3.143:6388
   slots:0-1169,4096-4619,8192-8714,12288-12810 (2740 slots) master
   1 additional replica(s)
S: fefd056625f5cc1a75f81ca555e70bcf3f4efaed 192.168.3.143:6389
   slots: (0 slots) slave
   replicates e279f6728d4996cf864acb590b03747dae6191f4
... ...
The slave was added successfully; 6389's "replicates" field shows the node ID of the 6388 master.
5.6 Removing nodes
Before a node that owns slots can be deleted, its slots must first be moved away with reshard.
Remove the slave node:
root@master1:/usr/local/redis# ./redis-trib.rb del-node 192.168.3.143:6389 fefd056625f5cc1a75f81ca555e70bcf3f4efaed
>>> Removing node fefd056625f5cc1a75f81ca555e70bcf3f4efaed from cluster 192.168.3.143:6389
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
root@master1:/usr/local/redis# ./redis-trib.rb check 192.168.3.143:6380
>>> Performing Cluster Check (using node 192.168.3.143:6380)
M: 0c9fae90cfa5fac015f8e172af056f830056b79d 192.168.3.143:6380
   slots:1170-4095 (2926 slots) master
   1 additional replica(s)
M: e279f6728d4996cf864acb590b03747dae6191f4 192.168.3.143:6388
   slots:0-1169,4096-4619,8192-8714,12288-12810 (2740 slots) master
   0 additional replica(s)
S: 788d5ef31c92f7e5f6d1020db46241c652a60f54 192.168.3.143:6385
   slots: (0 slots) slave
   replicates f4308e9dac62e18934ad29b25007c48691a2051f
M: 1a4852c5bf5d751c011d7cd57764da015a24e468 192.168.3.143:6387
   slots:4620-8191 (3572 slots) master
   1 additional replica(s)
M: cacb64e70f0f142d6ef24b862f84eab074eb7bea 192.168.3.143:6383
   slots:12811-16383 (3573 slots) master
   1 additional replica(s)
S: ab5898dde3de48ddf2829ae8a7359e274790f5b9 192.168.3.143:6381
   slots: (0 slots) slave
   replicates 1a4852c5bf5d751c011d7cd57764da015a24e468
S: dd4da448f03f5bd415f9e62e3070e4050984ede0 192.168.3.143:6386
   slots: (0 slots) slave
   replicates 0c9fae90cfa5fac015f8e172af056f830056b79d
S: f639f8ff30c7f2f3a92b2406cb3e640629358645 192.168.3.143:6384
   slots: (0 slots) slave
   replicates cacb64e70f0f142d6ef24b862f84eab074eb7bea
M: f4308e9dac62e18934ad29b25007c48691a2051f 192.168.3.143:6382
   slots:8715-12287 (3573 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
Remove the master node:
Its slots must be reshard-ed away first; otherwise del-node reports an error:
root@master1:/usr/local/redis# ./redis-trib.rb del-node 192.168.3.143:6388 e279f6728d4996cf864acb590b03747dae6191f4
>>> Removing node e279f6728d4996cf864acb590b03747dae6191f4 from cluster 192.168.3.143:6388
[ERR] Node 192.168.3.143:6388 is not empty! Reshard data away and try again.
Move the master's slots away: the number of slots to move is 2740 (all of them), the receiving node is 6380, and the source node is 6388.
root@master1:/usr/local/redis# ./redis-trib.rb reshard 192.168.3.143:6388
>>> Performing Cluster Check (using node 192.168.3.143:6388)
... ... ...
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 2740
What is the receiving node ID? 0c9fae90cfa5fac015f8e172af056f830056b79d
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:e279f6728d4996cf864acb590b03747dae6191f4
Source node #2:done
... ... ...
Do you want to proceed with the proposed reshard plan (yes/no)? yes
Checking again, 6388 no longer holds any slots:
root@master1:/usr/local/redis# ./redis-trib.rb check 192.168.3.143:6380
>>> Performing Cluster Check (using node 192.168.3.143:6380)
... ...
M: e279f6728d4996cf864acb590b03747dae6191f4 192.168.3.143:6388
   slots: (0 slots) master
   0 additional replica(s)
... ...
Delete the 6388 node:
root@master1:/usr/local/redis# ./redis-trib.rb del-node 192.168.3.143:6388 e279f6728d4996cf864acb590b03747dae6191f4
>>> Removing node e279f6728d4996cf864acb590b03747dae6191f4 from cluster 192.168.3.143:6388
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
Check the cluster information one more time:
root@master1:/usr/local/redis# ./redis-trib.rb check 192.168.3.143:6380
>>> Performing Cluster Check (using node 192.168.3.143:6380)
M: 0c9fae90cfa5fac015f8e172af056f830056b79d 192.168.3.143:6380
   slots:0-4619,8192-8714,12288-12810 (5666 slots) master
   1 additional replica(s)
S: 788d5ef31c92f7e5f6d1020db46241c652a60f54 192.168.3.143:6385
   slots: (0 slots) slave
   replicates f4308e9dac62e18934ad29b25007c48691a2051f
M: 1a4852c5bf5d751c011d7cd57764da015a24e468 192.168.3.143:6387
   slots:4620-8191 (3572 slots) master
   1 additional replica(s)
M: cacb64e70f0f142d6ef24b862f84eab074eb7bea 192.168.3.143:6383
   slots:12811-16383 (3573 slots) master
   1 additional replica(s)
S: ab5898dde3de48ddf2829ae8a7359e274790f5b9 192.168.3.143:6381
   slots: (0 slots) slave
   replicates 1a4852c5bf5d751c011d7cd57764da015a24e468
S: dd4da448f03f5bd415f9e62e3070e4050984ede0 192.168.3.143:6386
   slots: (0 slots) slave
   replicates 0c9fae90cfa5fac015f8e172af056f830056b79d
S: f639f8ff30c7f2f3a92b2406cb3e640629358645 192.168.3.143:6384
   slots: (0 slots) slave
   replicates cacb64e70f0f142d6ef24b862f84eab074eb7bea
M: f4308e9dac62e18934ad29b25007c48691a2051f 192.168.3.143:6382
   slots:8715-12287 (3573 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.