Docker Redis 6.0.8 Cluster in Practice (3 Masters + 3 Replicas): Installation

1. Create and start the six nodes

docker run -d \
--net host \
--privileged \
--name redis-node-1 \
--log-opt max-size=100m \
--log-opt max-file=3 \
-v /root/docker/redis-node-1/data:/data \
redis:6.0.8 \
--cluster-enabled yes \
--appendonly yes \
--port 16379 \
--requirepass admin123

docker run -d \
--net host \
--privileged \
--name redis-node-2 \
--log-opt max-size=100m \
--log-opt max-file=3 \
-v /root/docker/redis-node-2/data:/data \
redis:6.0.8 \
--cluster-enabled yes \
--appendonly yes \
--port 16380 \
--requirepass admin123

docker run -d \
--net host \
--privileged \
--name redis-node-3 \
--log-opt max-size=100m \
--log-opt max-file=3 \
-v /root/docker/redis-node-3/data:/data \
redis:6.0.8 \
--cluster-enabled yes \
--appendonly yes \
--port 16381 \
--requirepass admin123

docker run -d \
--net host \
--privileged \
--name redis-node-4 \
--log-opt max-size=100m \
--log-opt max-file=3 \
-v /root/docker/redis-node-4/data:/data \
redis:6.0.8 \
--cluster-enabled yes \
--appendonly yes \
--port 16382 \
--requirepass admin123

docker run -d \
--net host \
--privileged \
--name redis-node-5 \
--log-opt max-size=100m \
--log-opt max-file=3 \
-v /root/docker/redis-node-5/data:/data \
redis:6.0.8 \
--cluster-enabled yes \
--appendonly yes \
--port 16383 \
--requirepass admin123

docker run -d \
--net host \
--privileged \
--name redis-node-6 \
--log-opt max-size=100m \
--log-opt max-file=3 \
-v /root/docker/redis-node-6/data:/data \
redis:6.0.8 \
--cluster-enabled yes \
--appendonly yes \
--port 16384 \
--requirepass admin123
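The six commands above differ only in the container name and port, so they can be generated in a loop. A dry-run sketch (it prints the commands instead of executing them; pipe the output to bash to actually run them, which requires Docker):

```shell
#!/usr/bin/env bash
# Generate the six "docker run" commands above; node i listens on port 16378+i.
gen_node_cmds() {
  local i port
  for i in 1 2 3 4 5 6; do
    port=$((16378 + i))
    echo "docker run -d --net host --privileged --name redis-node-$i" \
         "--log-opt max-size=100m --log-opt max-file=3" \
         "-v /root/docker/redis-node-$i/data:/data" \
         "redis:6.0.8 --cluster-enabled yes --appendonly yes" \
         "--port $port --requirepass admin123"
  done
}

# Dry run: print each command. Use "gen_node_cmds | bash" to execute for real.
gen_node_cmds
```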

Host network mode is used here, so no port mapping is needed; the relevant configuration was covered in the previous post (link).

2. Create the Redis cluster

    # Enter one of the node containers
    docker exec -it redis-node-1 bash
    # Create the cluster
    redis-cli --pass admin123 --cluster create your-ip:16379 your-ip:16380 your-ip:16381 your-ip:16382 your-ip:16383 your-ip:16384 --cluster-replicas 1

    --pass specifies the password.

    --cluster-replicas specifies the number of replicas per master, i.e. how many slave nodes each master gets.

    After running the command, type yes and wait for the cluster to be created.
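As a sanity check on --cluster-replicas, the master/slave split can be computed directly. This is a sketch of the simple arithmetic, not redis-cli's actual code: with N nodes and R replicas per master, nodes are consumed in groups of R + 1.

```shell
# 6 nodes with --cluster-replicas 1: each master needs 1 replica,
# so nodes are consumed in groups of (replicas + 1).
nodes=6
replicas=1
masters=$(( nodes / (replicas + 1) ))
slaves=$(( nodes - masters ))
echo "$masters masters, $slaves replicas"   # prints: 3 masters, 3 replicas
```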

[root@OrionEcsServer ~]# docker exec -it redis-node-1 bash
root@OrionEcsServer:/data# redis-cli --pass admin123 --cluster create 172.19.6.128:16379 172.19.6.128:16380 172.19.6.128:16381 172.19.6.128:16382 172.19.6.128:16383 172.19.6.128:16384 --cluster-replicas 1
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.19.6.128:16383 to 172.19.6.128:16379
Adding replica 172.19.6.128:16384 to 172.19.6.128:16380
Adding replica 172.19.6.128:16382 to 172.19.6.128:16381
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: 5ddfdb998c64916b2b624fa1d70acab33cf39fc3 172.19.6.128:16379
   slots:[0-5460] (5461 slots) master
M: 344331df652c7db5599af38507739aca49cab16c 172.19.6.128:16380
   slots:[5461-10922] (5462 slots) master
M: b04e2493efedc56dec34bc796e9a4f4817b98845 172.19.6.128:16381
   slots:[10923-16383] (5461 slots) master
S: 25a5b5357d6861328b8734f3db5a1db40b419ab1 172.19.6.128:16382
   replicates 344331df652c7db5599af38507739aca49cab16c
S: 47fac839bbb32e00ee592d3ecf095426ae66f27d 172.19.6.128:16383
   replicates b04e2493efedc56dec34bc796e9a4f4817b98845
S: 6626185f45bdb336597b0924959f066f8bc5283f 172.19.6.128:16384
   replicates 5ddfdb998c64916b2b624fa1d70acab33cf39fc3
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 172.19.6.128:16379)
M: 5ddfdb998c64916b2b624fa1d70acab33cf39fc3 172.19.6.128:16379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 344331df652c7db5599af38507739aca49cab16c 172.19.6.128:16380
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 25a5b5357d6861328b8734f3db5a1db40b419ab1 172.19.6.128:16382
   slots: (0 slots) slave
   replicates 344331df652c7db5599af38507739aca49cab16c
M: b04e2493efedc56dec34bc796e9a4f4817b98845 172.19.6.128:16381
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 47fac839bbb32e00ee592d3ecf095426ae66f27d 172.19.6.128:16383
   slots: (0 slots) slave
   replicates b04e2493efedc56dec34bc796e9a4f4817b98845
S: 6626185f45bdb336597b0924959f066f8bc5283f 172.19.6.128:16384
   slots: (0 slots) slave
   replicates 5ddfdb998c64916b2b624fa1d70acab33cf39fc3
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
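The slot allocation printed above can be reproduced with a little arithmetic. The rounding below is an assumption that matches redis-cli's output for this case (redis-cli rounds floating-point node boundaries, which is why the middle master ends up with 5462 slots rather than 5461):

```shell
# Spread 16384 hash slots over 3 masters. Boundary m is round(m*total/masters),
# written here in integer arithmetic.
split_slots() {
  local total=16384 masters=3 m first last
  for m in 0 1 2; do
    first=$(( (2 * m * total + masters) / (2 * masters) ))
    last=$(( (2 * (m + 1) * total + masters) / (2 * masters) - 1 ))
    echo "Master[$m] -> Slots $first - $last"
  done
}
split_slots
```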

3. Check the cluster status
        # Connect (-c enables cluster mode, so redirects are followed)
        redis-cli -p 16379 -c --pass admin123
        # Show cluster info
        cluster info
        # List the cluster nodes
        cluster nodes

        Output like the following means the cluster was created successfully.

127.0.0.1:16379> cluster nodes
344331df652c7db5599af38507739aca49cab16c 172.19.6.128:16380@26380 master - 0 1670158269000 2 connected 5461-10922
25a5b5357d6861328b8734f3db5a1db40b419ab1 172.19.6.128:16382@26382 slave 344331df652c7db5599af38507739aca49cab16c 0 1670158266000 2 connected
5ddfdb998c64916b2b624fa1d70acab33cf39fc3 172.19.6.128:16379@26379 myself,master - 0 1670158268000 1 connected 0-5460
b04e2493efedc56dec34bc796e9a4f4817b98845 172.19.6.128:16381@26381 master - 0 1670158268000 3 connected 10923-16383
47fac839bbb32e00ee592d3ecf095426ae66f27d 172.19.6.128:16383@26383 slave b04e2493efedc56dec34bc796e9a4f4817b98845 0 1670158268599 3 connected
6626185f45bdb336597b0924959f066f8bc5283f 172.19.6.128:16384@26384 slave 5ddfdb998c64916b2b624fa1d70acab33cf39fc3 0 1670158269601 1 connected
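Note the @26379-style suffix on each address in the cluster nodes output: that is the cluster bus port, which in Redis 6 is always the client port plus 10000. In host network mode both ports must be free on the machine:

```shell
# Each node's cluster bus port is its client port + 10000.
bus_ports() {
  local port
  for port in 16379 16380 16381 16382 16383 16384; do
    echo "$port@$(( port + 10000 ))"
  done
}
bus_ports
```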

4. Test
        set a 1
        set b 2
        set c 3
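These three keys do not all land on the same master: a cluster-aware client hashes each key with CRC16 (XModem variant) modulo 16384 and follows the MOVED redirect to the owning node. A minimal re-implementation of the slot function, ignoring the {hash tag} extraction Redis also applies:

```shell
# Compute a key's hash slot the way Redis does: CRC16-XModem(key) mod 16384.
keyslot() {
  local key=$1 crc=0 i j byte
  for (( i = 0; i < ${#key}; i++ )); do
    printf -v byte '%d' "'${key:i:1}"        # ASCII code of the character
    crc=$(( (crc ^ (byte << 8)) & 0xFFFF ))
    for (( j = 0; j < 8; j++ )); do          # poly 0x1021, no reflection
      if (( crc & 0x8000 )); then
        crc=$(( ((crc << 1) ^ 0x1021) & 0xFFFF ))
      else
        crc=$(( (crc << 1) & 0xFFFF ))
      fi
    done
  done
  echo $(( crc % 16384 ))
}

keyslot a   # 15495 -> the master serving 10923-16383
keyslot b   # 3300  -> the master serving 0-5460
keyslot c   # 7365  -> the master serving 5461-10922
```

You can confirm these values inside the container with `CLUSTER KEYSLOT a` and so on.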
