Implementing Redis distributed storage, failover, and scale-out/scale-in with Docker containers

Cluster mode (Docker edition): hash-slot partitioning for storing data at the hundred-million-record scale

I. Setting up a 3-master / 3-replica Redis cluster

[Figure 1]

Note: the master/replica assignment shown here is an example; your actual assignment may differ.

1. Stop the firewall and start the Docker daemon
systemctl stop firewalld
systemctl start docker
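A quick sanity check that the Docker daemon is up before continuing (optional, not part of the original steps); if the firewall should stay off across reboots, disable it as well:
systemctl is-active docker      # should print "active"
systemctl disable firewalld     # optional: keep the firewall off after a reboot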

2. Create six Docker container instances

docker run -d --name redis-node-1 --net host --privileged=true -v /data/redis/share/redis-node-1:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6381

docker run -d --name redis-node-2 --net host --privileged=true -v /data/redis/share/redis-node-2:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6382

docker run -d --name redis-node-3 --net host --privileged=true -v /data/redis/share/redis-node-3:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6383

docker run -d --name redis-node-4 --net host --privileged=true -v /data/redis/share/redis-node-4:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6384

docker run -d --name redis-node-5 --net host --privileged=true -v /data/redis/share/redis-node-5:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6385

docker run -d --name redis-node-6 --net host --privileged=true -v /data/redis/share/redis-node-6:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6386

Option notes:
--net host              use the host machine's IP and ports directly (no -p port mapping needed)
--cluster-enabled yes   enable cluster mode
--appendonly yes        enable AOF persistence
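If you would rather not repeat six nearly identical commands, a short bash loop creates the same containers (a sketch assuming the exact names, ports, and volume paths used above):

for i in 1 2 3 4 5 6; do
  docker run -d --name redis-node-$i --net host --privileged=true \
    -v /data/redis/share/redis-node-$i:/data redis:6.0.8 \
    --cluster-enabled yes --appendonly yes --port 638$i
done
docker ps --filter name=redis-node   # all six containers should be listed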

3. Build the cluster relationships among the six nodes

Enter any one of the containers; this walkthrough uses redis-node-1.

First enter the redis-node-1 container: docker exec -it redis-node-1 /bin/bash

Create the master/replica relationships (use the host's real IP):
redis-cli --cluster create 192.168.10.136:6381 192.168.10.136:6382 192.168.10.136:6383 192.168.10.136:6384 192.168.10.136:6385 192.168.10.136:6386 --cluster-replicas 1
Notes:
--cluster-replicas 1 creates one replica for each master, which gives 3 masters and 3 replicas.
Sample run:
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.10.136:6385 to 192.168.10.136:6381
Adding replica 192.168.10.136:6386 to 192.168.10.136:6382
Adding replica 192.168.10.136:6384 to 192.168.10.136:6383
>>> Trying to optimize slaves allocation for anti-affinity
Can I set the above configuration? (type 'yes' to accept): yes
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.     # cluster created successfully

4. Connect to 6381 as the entry point and check cluster status

docker exec -it redis-node-1 /bin/bash
root@serverb:/data# redis-cli -p 6381
127.0.0.1:6381> cluster info
127.0.0.1:6381> cluster nodes       # replica pairing in this run: 6384 replicates 6381, 6385 replicates 6382, 6386 replicates 6383 (may differ from the "Adding replica" lines above because of the anti-affinity optimization)
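The fields worth checking in the cluster info output are roughly the following (expected values for this topology, not captured from the original run):

cluster_state:ok
cluster_slots_assigned:16384
cluster_known_nodes:6
cluster_size:3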
II. Master/replica failover case

Data reads and writes

1. Start the six-node cluster and enter a node with docker exec
docker exec -it redis-node-1 /bin/bash

2. Add new keys on 6381 (ordinary, non-cluster client)
root@serverb:/data# redis-cli -p 6381
127.0.0.1:6381> 
127.0.0.1:6381> keys *
(empty array)
127.0.0.1:6381> set k1 v1
(error) MOVED 12706 192.168.10.136:6383
127.0.0.1:6381> set k2 v2
OK
127.0.0.1:6381> set k3 v3
OK
127.0.0.1:6381> set k4 v4
(error) MOVED 8455 192.168.10.136:6382
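The MOVED errors are expected: without -c the client refuses to follow redirects, and k1/k4 hash to slots owned by other masters. You can confirm which slot a key maps to with CLUSTER KEYSLOT (an illustrative check; the slot numbers match the MOVED replies above):

127.0.0.1:6381> cluster keyslot k1
(integer) 12706
127.0.0.1:6381> cluster keyslot k4
(integer) 8455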

3. To avoid routing failures, reconnect with the -c flag (cluster mode) and add the keys again
root@serverb:/data# redis-cli -p 6381 -c
127.0.0.1:6381> flushall
OK
127.0.0.1:6381> set k1 v1
-> Redirected to slot [12706] located at 192.168.10.136:6383
OK
192.168.10.136:6383> set k2 v2
-> Redirected to slot [449] located at 192.168.10.136:6381
OK
192.168.10.136:6381> set k3 v3
OK
192.168.10.136:6381> set k4 v4
-> Redirected to slot [8455] located at 192.168.10.136:6382
OK
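Reads are redirected the same way when the client is started with -c (an illustrative continuation of the session above, not captured verbatim):

192.168.10.136:6382> get k1
-> Redirected to slot [12706] located at 192.168.10.136:6383
"v1"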

4. Check cluster info
root@serverb:/data# redis-cli --cluster check 192.168.10.136:6381
192.168.10.136:6381 (7a63aea7...) -> 2 keys | 5461 slots | 1 slaves.
192.168.10.136:6382 (011887e4...) -> 1 keys | 5462 slots | 1 slaves.
192.168.10.136:6383 (00c5a8dd...) -> 1 keys | 5461 slots | 1 slaves.
[OK] 4 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.10.136:6381)
M: 7a63aea7249b4f3f3379446b78dd2eb7ae71423e 192.168.10.136:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: f69e7c90038d5044af2710b8609277179582e104 192.168.10.136:6385
   slots: (0 slots) slave
   replicates 011887e4b0c075b78494c2d8dda5ce7d99c95439
M: 011887e4b0c075b78494c2d8dda5ce7d99c95439 192.168.10.136:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: 00c5a8dd6cfc165302bbfa5fdcd2747c42659e2d 192.168.10.136:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 73c671334a2790bb725532d63186356064060586 192.168.10.136:6384
   slots: (0 slots) slave
   replicates 7a63aea7249b4f3f3379446b78dd2eb7ae71423e
S: 49dc02d9cebd9e8c47191ffb0ad493cea8bcc018 192.168.10.136:6386
   slots: (0 slots) slave
   replicates 00c5a8dd6cfc165302bbfa5fdcd2747c42659e2d
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Failover

Goal: when 6381 goes down, 6384 takes over as master

1. Switch master 6381 with its replica: stop master 6381 first
[root@serverb share]# docker stop redis-node-1
[root@serverb share]# docker exec -it redis-node-2 /bin/bash

2. Check the cluster info again
root@serverb:/data# redis-cli -p 6382 -c
127.0.0.1:6382> cluster nodes
f69e7c90038d5044af2710b8609277179582e104 192.168.10.136:6385@16385 slave 011887e4b0c075b78494c2d8dda5ce7d99c95439 0 1664975570000 2 connected
011887e4b0c075b78494c2d8dda5ce7d99c95439 192.168.10.136:6382@16382 myself,master - 0 1664975568000 2 connected 5461-10922
00c5a8dd6cfc165302bbfa5fdcd2747c42659e2d 192.168.10.136:6383@16383 master - 0 1664975570000 3 connected 10923-16383
73c671334a2790bb725532d63186356064060586 192.168.10.136:6384@16384 master - 0 1664975571000 7 connected 0-5460
7a63aea7249b4f3f3379446b78dd2eb7ae71423e 192.168.10.136:6381@16381 master,fail - 1664975534075 1664975526515 1 disconnected
49dc02d9cebd9e8c47191ffb0ad493cea8bcc018 192.168.10.136:6386@16386 slave 00c5a8dd6cfc165302bbfa5fdcd2747c42659e2d 0 1664975571881 3 connected
127.0.0.1:6382> 
Notes: 6381, formerly a master, is now marked fail, and its replica 6384 has been promoted to master.
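To double-check the promotion from 6384's side, ask it for its replication role (illustrative; the role field is what matters):

root@serverb:/data# redis-cli -p 6384 info replication | grep role
role:master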

[root@serverb ~]# docker start redis-node-1 
192.168.10.136:6382> cluster nodes
f69e7c90038d5044af2710b8609277179582e104 192.168.10.136:6385@16385 slave 011887e4b0c075b78494c2d8dda5ce7d99c95439 0 1664975959025 2 connected
011887e4b0c075b78494c2d8dda5ce7d99c95439 192.168.10.136:6382@16382 myself,master - 0 1664975960000 2 connected 5461-10922
00c5a8dd6cfc165302bbfa5fdcd2747c42659e2d 192.168.10.136:6383@16383 master - 0 1664975960032 3 connected 10923-16383
73c671334a2790bb725532d63186356064060586 192.168.10.136:6384@16384 master - 0 1664975959000 7 connected 0-5460
7a63aea7249b4f3f3379446b78dd2eb7ae71423e 192.168.10.136:6381@16381 slave 73c671334a2790bb725532d63186356064060586 0 1664975958000 7 connected
49dc02d9cebd9e8c47191ffb0ad493cea8bcc018 192.168.10.136:6386@16386 slave 00c5a8dd6cfc165302bbfa5fdcd2747c42659e2d 0 1664975961040 3 connected

Notes: when 6381 comes back it rejoins as a replica, while 6384 stays master. To restore the original 3-master / 3-replica layout, follow the next step.

3. Restore the original 3 masters / 3 replicas (a manual-failover alternative is sketched right after these steps)
	Start 6381 first: docker start redis-node-1
	Then stop 6384: docker stop redis-node-4
	Wait about ten seconds...
	Start 6384 again: docker start redis-node-4
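Instead of the stop/start dance, the same role swap can be done with a manual failover once 6381 has rejoined as a replica: run CLUSTER FAILOVER on the node you want promoted (a less disruptive alternative, not used in the original walkthrough):

[root@serverb share]# docker exec -it redis-node-1 redis-cli -p 6381 CLUSTER FAILOVER
OK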
	
4. Check cluster status
[root@serverb share]# docker exec -it redis-node-1 /bin/bash
root@serverb:/data# redis-cli --cluster check 192.168.10.136:6381
192.168.10.136:6381 (7a63aea7...) -> 2 keys | 5461 slots | 1 slaves.
192.168.10.136:6383 (00c5a8dd...) -> 1 keys | 5461 slots | 1 slaves.
192.168.10.136:6382 (011887e4...) -> 1 keys | 5462 slots | 1 slaves.
[OK] 4 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.10.136:6381)
M: 7a63aea7249b4f3f3379446b78dd2eb7ae71423e 192.168.10.136:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 00c5a8dd6cfc165302bbfa5fdcd2747c42659e2d 192.168.10.136:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 73c671334a2790bb725532d63186356064060586 192.168.10.136:6384
   slots: (0 slots) slave
   replicates 7a63aea7249b4f3f3379446b78dd2eb7ae71423e
S: f69e7c90038d5044af2710b8609277179582e104 192.168.10.136:6385
   slots: (0 slots) slave
   replicates 011887e4b0c075b78494c2d8dda5ce7d99c95439
S: 49dc02d9cebd9e8c47191ffb0ad493cea8bcc018 192.168.10.136:6386
   slots: (0 slots) slave
   replicates 00c5a8dd6cfc165302bbfa5fdcd2747c42659e2d
M: 011887e4b0c075b78494c2d8dda5ce7d99c95439 192.168.10.136:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Notes: 6381, 6382, and 6383 are all masters again; the original layout has been restored.
III. Scale-out case (4 masters / 4 replicas)

Goal: add a new master 6387 (no slots yet) and a new replica 6388; once 6387 has joined the cluster, attach 6388 to 6387 as its replica.

1. Create and start two new instances, 6387 and 6388, then confirm there are now 8 nodes
[root@serverb share]# docker run -d --name redis-node-7 --net host --privileged=true -v /data/redis/share/redis-node-7:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6387
fa0f8b2406a39c7ae33f93aa9b39bb0099b0eb4879d834a88ee4f632e7ecc678
[root@serverb share]# docker run -d --name redis-node-8 --net host --privileged=true -v /data/redis/share/redis-node-8:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6388
aeddf6717a27f9575fb55b6cf4ed04186d5efd15fa0828cbdcc2054c9b0610a1
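To confirm all eight containers are up (a quick check, not shown above):

[root@serverb share]# docker ps --filter name=redis-node    # should list redis-node-1 through redis-node-8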

2. Enter the 6387 container: docker exec -it redis-node-7 /bin/bash

3. Add the new node 6387 (no slots yet) to the existing cluster as a master
Syntax: redis-cli --cluster add-node <real-host-ip>:6387 (the node being added as a master) <real-host-ip>:6381 (an existing cluster node acting as the introducer)
[root@serverb share]# docker exec -it redis-node-7 /bin/bash
root@serverb:/data# redis-cli --cluster add-node 192.168.10.136:6387 192.168.10.136:6381
>>> Adding node 192.168.10.136:6387 to cluster 192.168.10.136:6381
>>> Performing Cluster Check (using node 192.168.10.136:6381)
M: 7a63aea7249b4f3f3379446b78dd2eb7ae71423e 192.168.10.136:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 00c5a8dd6cfc165302bbfa5fdcd2747c42659e2d 192.168.10.136:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 73c671334a2790bb725532d63186356064060586 192.168.10.136:6384
   slots: (0 slots) slave
   replicates 7a63aea7249b4f3f3379446b78dd2eb7ae71423e
S: f69e7c90038d5044af2710b8609277179582e104 192.168.10.136:6385
   slots: (0 slots) slave
   replicates 011887e4b0c075b78494c2d8dda5ce7d99c95439
S: 49dc02d9cebd9e8c47191ffb0ad493cea8bcc018 192.168.10.136:6386
   slots: (0 slots) slave
   replicates 00c5a8dd6cfc165302bbfa5fdcd2747c42659e2d
M: 011887e4b0c075b78494c2d8dda5ce7d99c95439 192.168.10.136:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.10.136:6387 to make it join the cluster.
[OK] New node added correctly.

4. Check the cluster (1st time)
root@serverb:/data# redis-cli --cluster check 192.168.10.136:6387 
192.168.10.136:6387 (929ad349...) -> 0 keys | 0 slots | 0 slaves.
192.168.10.136:6383 (00c5a8dd...) -> 1 keys | 5461 slots | 1 slaves.
192.168.10.136:6382 (011887e4...) -> 1 keys | 5462 slots | 1 slaves.
192.168.10.136:6381 (7a63aea7...) -> 2 keys | 5461 slots | 1 slaves.
[OK] 4 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.10.136:6387)
M: 929ad349461a8933f8d8b3a92cc6db66be5c2055 192.168.10.136:6387
   slots: (0 slots) master
M: 00c5a8dd6cfc165302bbfa5fdcd2747c42659e2d 192.168.10.136:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 49dc02d9cebd9e8c47191ffb0ad493cea8bcc018 192.168.10.136:6386
   slots: (0 slots) slave
   replicates 00c5a8dd6cfc165302bbfa5fdcd2747c42659e2d
M: 011887e4b0c075b78494c2d8dda5ce7d99c95439 192.168.10.136:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: f69e7c90038d5044af2710b8609277179582e104 192.168.10.136:6385
   slots: (0 slots) slave
   replicates 011887e4b0c075b78494c2d8dda5ce7d99c95439
S: 73c671334a2790bb725532d63186356064060586 192.168.10.136:6384
   slots: (0 slots) slave
   replicates 7a63aea7249b4f3f3379446b78dd2eb7ae71423e
M: 7a63aea7249b4f3f3379446b78dd2eb7ae71423e 192.168.10.136:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Analysis: 6387 has joined the cluster as a master, but it holds no slots yet: 0 keys | 0 slots | 0 slaves.

5. Reassign slot numbers
Command: redis-cli --cluster reshard <ip>:<port>
[root@serverb share]# docker exec -it redis-node-7 /bin/bash
root@serverb:/data# redis-cli --cluster reshard 192.168.10.136:6381
How many slots do you want to move (from 1 to 16384)? 4096   # 16384 / 4 masters = 4096
What is the receiving node ID? 929ad349461a8933f8d8b3a92cc6db66be5c2055   # the node ID of 6387
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: all     # type all
Do you want to proceed with the proposed reshard plan (yes/no)? yes    # type yes
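The same reshard can also be run non-interactively by passing the plan on the command line (a sketch assuming the node ID above; --cluster-from all takes the slots evenly from every other master):

root@serverb:/data# redis-cli --cluster reshard 192.168.10.136:6381 \
  --cluster-from all \
  --cluster-to 929ad349461a8933f8d8b3a92cc6db66be5c2055 \
  --cluster-slots 4096 --cluster-yes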

6. Check the cluster (2nd time)
[root@serverb share]# docker exec -it redis-node-7 /bin/bash
root@serverb:/data# redis-cli --cluster check 192.168.10.136:6381   # any master's address works
192.168.10.136:6387 (929ad349...) -> 1 keys | 4096 slots | 0 slaves.
192.168.10.136:6383 (00c5a8dd...) -> 1 keys | 4096 slots | 1 slaves.
192.168.10.136:6382 (011887e4...) -> 1 keys | 4096 slots | 1 slaves.
192.168.10.136:6381 (7a63aea7...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 4 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.10.136:6387)
M: 929ad349461a8933f8d8b3a92cc6db66be5c2055 192.168.10.136:6387
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
M: 00c5a8dd6cfc165302bbfa5fdcd2747c42659e2d 192.168.10.136:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
S: 49dc02d9cebd9e8c47191ffb0ad493cea8bcc018 192.168.10.136:6386
   slots: (0 slots) slave
   replicates 00c5a8dd6cfc165302bbfa5fdcd2747c42659e2d
M: 011887e4b0c075b78494c2d8dda5ce7d99c95439 192.168.10.136:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
S: f69e7c90038d5044af2710b8609277179582e104 192.168.10.136:6385
   slots: (0 slots) slave
   replicates 011887e4b0c075b78494c2d8dda5ce7d99c95439
S: 73c671334a2790bb725532d63186356064060586 192.168.10.136:6384
   slots: (0 slots) slave
   replicates 7a63aea7249b4f3f3379446b78dd2eb7ae71423e
M: 7a63aea7249b4f3f3379446b78dd2eb7ae71423e 192.168.10.136:6381
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Analysis: the slots were not re-dealt starting from 0; instead each of the three original masters handed a chunk to 6387, adding up to 4096 slots.

7. Attach replica 6388 to the new master 6387
Command: redis-cli --cluster add-node <ip>:<new-replica-port> <ip>:<new-master-port> --cluster-slave --cluster-master-id <new-master-node-id>
[root@serverb share]# docker exec -it redis-node-7 /bin/bash
root@serverb:/data# redis-cli --cluster add-node 192.168.10.136:6388 192.168.10.136:6387 --cluster-slave --cluster-master-id 929ad349461a8933f8d8b3a92cc6db66be5c2055
...
>>> Configure node as replica of 192.168.10.136:6387.
[OK] New node added correctly.

8. Check the cluster (3rd time)
[root@serverb share]# docker exec -it redis-node-7 /bin/bash
root@serverb:/data# redis-cli --cluster check 192.168.10.136:6387   # any master's address works
root@serverb:/data# redis-cli -p 6387 -c
127.0.0.1:6387> cluster nodes
IV. Scale-in case

After the peak period, shrink back to 3 masters / 3 replicas.
Goal: remove 6387 and 6388 and restore the 3-master / 3-replica layout. Steps: (1) remove replica 6388 first; (2) reassign the slots freed from 6387; (3) then remove 6387; (4) the cluster is back to 3 masters / 3 replicas.

1. Check the cluster (1st time) and note down 6388's node ID
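Either of the following surfaces 6388's node ID (quick ways to grab it, not shown in the original step; cluster myid prints the ID of the node the client is connected to):

root@serverb:/data# redis-cli --cluster check 192.168.10.136:6381 | grep 6388
root@serverb:/data# redis-cli -p 6388 cluster myid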

2. Remove replica 6388 from the cluster
Command: redis-cli --cluster del-node <ip>:<replica-port> <replica-node-id>

[root@serverb share]# docker exec -it redis-node-7 /bin/bash
root@serverb:/data# redis-cli --cluster del-node 192.168.10.136:6388 b2ca0913c658abd6cac9e689f7d3d04c42c3c2a6
root@serverb:/data# redis-cli --cluster check 192.168.10.136:6382  # check again: replica 6388 is gone

3. Empty 6387's slots and reassign them; in this example all of the freed slots go to 6381

[root@serverb share]# docker exec -it redis-node-7 /bin/bash
root@serverb:/data# redis-cli --cluster reshard 192.168.10.136:6381     # use 6381 as the entry point
How many slots do you want to move (from 1 to 16384)? 4096     # move all 4096 at once for simplicity
What is the receiving node ID? 7a63aea7249b4f3f3379446b78dd2eb7ae71423e   # the receiver: 6381
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: 929ad349461a8933f8d8b3a92cc6db66be5c2055    # the source giving up its slots: 6387
Source node #2: done
Do you want to proceed with the proposed reshard plan (yes/no)? yes 

4. Check the cluster (2nd time)

[root@serverb share]# docker exec -it redis-node-7 /bin/bash
root@serverb:/data# redis-cli --cluster check 192.168.10.136:6381
192.168.10.136:6381 (7a63aea7...) -> 2 keys | 8192 slots | 1 slaves.
192.168.10.136:6387 (929ad349...) -> 0 keys | 0 slots | 0 slaves.
192.168.10.136:6383 (00c5a8dd...) -> 1 keys | 4096 slots | 1 slaves.
192.168.10.136:6382 (011887e4...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 4 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.10.136:6381)
M: 7a63aea7249b4f3f3379446b78dd2eb7ae71423e 192.168.10.136:6381
   slots:[0-6826],[10923-12287] (8192 slots) master
   1 additional replica(s)
M: 929ad349461a8933f8d8b3a92cc6db66be5c2055 192.168.10.136:6387
   slots: (0 slots) master
M: 00c5a8dd6cfc165302bbfa5fdcd2747c42659e2d 192.168.10.136:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
S: 73c671334a2790bb725532d63186356064060586 192.168.10.136:6384
   slots: (0 slots) slave
   replicates 7a63aea7249b4f3f3379446b78dd2eb7ae71423e
S: f69e7c90038d5044af2710b8609277179582e104 192.168.10.136:6385
   slots: (0 slots) slave
   replicates 011887e4b0c075b78494c2d8dda5ce7d99c95439
S: 49dc02d9cebd9e8c47191ffb0ad493cea8bcc018 192.168.10.136:6386
   slots: (0 slots) slave
   replicates 00c5a8dd6cfc165302bbfa5fdcd2747c42659e2d
M: 011887e4b0c075b78494c2d8dda5ce7d99c95439 192.168.10.136:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Analysis: 6387 now holds 0 slots; its 4096 slots went to 6381, which now shows slots:[0-6826],[10923-12287] (8192 slots).

5. Remove 6387
Command: redis-cli --cluster del-node <ip>:<master-port> <master-node-id>

[root@serverb share]# docker exec -it redis-node-7 /bin/bash
root@serverb:/data# redis-cli --cluster del-node 192.168.10.136:6387 929ad349461a8933f8d8b3a92cc6db66be5c2055
Removing node 929ad349461a8933f8d8b3a92cc6db66be5c2055 from cluster 192.168.10.136:6387
Sending CLUSTER FORGET messages to the cluster...
Sending CLUSTER RESET SOFT to the deleted node.

6. Check the cluster (3rd time)

[root@serverb share]# docker exec -it redis-node-7 /bin/bash
root@serverb:/data# redis-cli --cluster check 192.168.10.136:6381   # the cluster is back to the original 3 masters / 3 replicas
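The 6387/6388 containers keep running even though they are no longer cluster members; if they are not needed any more, stop and remove them (optional cleanup, not part of the original steps):

[root@serverb share]# docker stop redis-node-7 redis-node-8
[root@serverb share]# docker rm redis-node-7 redis-node-8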
