Redis 5: Cluster

1. What is a Redis cluster?

There are two kinds of Redis clusters:
A read/write-splitting cluster, based on master-slave replication.
The built-in cluster, which shards data across nodes.

2. Why use a Redis cluster?

A read/write-splitting cluster addresses high availability.
The built-in cluster addresses both horizontal scalability and high availability.

3. How to build a Redis cluster

To guarantee a majority in cluster voting, at least 3 master nodes are needed, and each master needs at least one slave for read/write splitting and hot standby, so 6 Redis nodes are required in total.
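The three-master minimum comes from majority voting: a failover only proceeds when more than half of the masters agree that a node is down. A minimal sketch of that quorum rule (an illustration only, not Redis's actual code):

```python
def has_quorum(reachable_masters: int, total_masters: int) -> bool:
    # marking a node as failed requires a strict majority of masters
    return reachable_masters > total_masters // 2

# with 3 masters, losing one still leaves a deciding majority (2 of 3)
print(has_quorum(2, 3))  # True
# with only 2 masters, losing one leaves no majority (1 of 2)
print(has_quorum(1, 2))  # False
```

This is why 2 masters cannot form a safe cluster: a single failure leaves the survivor unable to out-vote the missing half.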

3.1 Prepare the cluster nodes

Copy a clean Redis bin directory and strip it down to serve as the first cluster node.

# enter the Redis install directory
cd /usr/local/redis
# create a directory to hold the cluster node directories
mkdir redis-cluster
# copy the Redis binaries
cp -R bin/ redis-cluster/node1

# enter the node1 directory
cd redis-cluster/node1
# remove snapshot and AOF files, if any
rm -f dump.rdb && rm -f appendonly.aof
# remove the old config file if it has been modified (optional)
#rm -f redis.conf
# copy a fresh config file (optional)
#cp /root/redis-5.0.5/redis.conf ./
# edit the config file
vi redis.conf

3.2 Redis configuration for cluster mode

# do not set a password, or the nodes will fail to connect to each other when the cluster is created
# allow access from other hosts on the network
bind 0.0.0.0
# change the port
port 7001
# run Redis in the background
daemonize yes
# enable AOF persistence
appendonly yes

# the settings below are cluster-specific (new)
# enable cluster mode
cluster-enabled yes
# cluster state file, generated automatically on first startup
cluster-config-file nodes.conf
# node timeout in milliseconds, default 15000
cluster-node-timeout 5000

cluster-node-timeout exists mainly to absorb the effects of network jitter.

Network jitter: real-world datacenter networks are rarely calm; all sorts of small problems occur. Jitter is a very common one: some connections suddenly become unreachable, then recover shortly afterwards.

To cope with this, Redis Cluster provides the cluster-node-timeout option: a node is considered failed, and a master-slave failover is triggered, only after it has been unreachable for that long. Without this option, jitter would cause frequent failovers (and repeated re-replication of data).
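The effect can be sketched as a simple timing check (a hypothetical illustration, not the real implementation): a node is only suspected of failure once it has been silent for longer than cluster-node-timeout.

```python
def suspect_failed(last_pong_ms: int, now_ms: int,
                   node_timeout_ms: int = 5000) -> bool:
    # jitter shorter than the timeout never triggers a failover;
    # only a sustained outage flags the node as failing
    return now_ms - last_pong_ms > node_timeout_ms

print(suspect_failed(last_pong_ms=0, now_ms=3000))  # False: transient blip
print(suspect_failed(last_pong_ms=0, now_ms=6000))  # True: sustained outage
```

A larger timeout tolerates more jitter at the cost of slower failure detection; 5000 ms, as configured above, is a common middle ground for a LAN.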

3.3 With the first node ready, make five more copies

cd /usr/local/redis/redis-cluster

[root@bobohost redis-cluster]# echo 'node2 node3 node4 node5 node6' | xargs -n 1 cp -R node1

Edit redis.conf in each copied node so the six nodes listen on ports 7001 through 7006 (node1 already uses 7001).

Write a script that starts all the nodes:

vi start-all.sh

Script contents:

#!/bin/bash
# start every node in turn
for i in 1 2 3 4 5 6; do
  cd node$i
  ./redis-server redis.conf
  cd ..
done

Make the script executable and run it:

chmod 744 start-all.sh
./start-all.sh

Verify with ps -ef | grep redis:

[root@bobohost redis-cluster]#  ps -ef | grep redis 
root      59782      1  0 14:23 ?        00:00:00 ./redis-server 0.0.0.0:7001 [cluster]
root      59787      1  0 14:23 ?        00:00:00 ./redis-server 0.0.0.0:7002 [cluster]
root      59792      1  0 14:23 ?        00:00:00 ./redis-server 0.0.0.0:7003 [cluster]
root      59797      1  0 14:23 ?        00:00:00 ./redis-server 0.0.0.0:7004 [cluster]
root      59802      1  0 14:23 ?        00:00:00 ./redis-server 0.0.0.0:7005 [cluster]
root      59807      1  0 14:23 ?        00:00:00 ./redis-server 0.0.0.0:7006 [cluster]

4. Start the cluster

Check usage with ./redis-cli --cluster help:

[root@bobohost node1]# ./redis-cli --cluster help
Cluster Manager Commands:
  create         host1:port1 ... hostN:portN
                 --cluster-replicas <arg>
  check          host:port
                 --cluster-search-multiple-owners
  info           host:port
  fix            host:port
                 --cluster-search-multiple-owners
  reshard        host:port
                 --cluster-from <arg>
                 --cluster-to <arg>
                 --cluster-slots <arg>
                 --cluster-yes
                 --cluster-timeout <arg>
                 --cluster-pipeline <arg>
                 --cluster-replace
  rebalance      host:port
                 --cluster-weight <node1=w1...nodeN=wN>
                 --cluster-use-empty-masters
                 --cluster-timeout <arg>
                 --cluster-simulate
                 --cluster-pipeline <arg>
                 --cluster-threshold <arg>
                 --cluster-replace
  add-node       new_host:new_port existing_host:existing_port
                 --cluster-slave
                 --cluster-master-id <arg>
  del-node       host:port node_id
  call           host:port command arg arg .. arg
  set-timeout    host:port milliseconds
  import         host:port
                 --cluster-from <arg>
                 --cluster-copy
                 --cluster-replace
  help

For check, fix, reshard, del-node, set-timeout you can specify the host and port of any working node in the cluster.

Create the cluster:

# from the directory of any node
cd node1
# create the cluster
./redis-cli --cluster create 192.168.40.141:7001 192.168.40.141:7002 192.168.40.141:7003 192.168.40.141:7004 192.168.40.141:7005 192.168.40.141:7006 --cluster-replicas 1

--cluster-replicas 1 means we want one slave created for every master. With 6 nodes in total this produces three groups, each with one master and one slave. A value of 2 would instead attempt one master and two slaves per group.
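The grouping arithmetic can be sketched as follows (an illustration of the ratio, not the tool's actual allocation code): each shard consumes one master plus --cluster-replicas slaves.

```python
def master_count(nodes: int, replicas: int) -> int:
    # each shard needs 1 master + `replicas` slaves
    return nodes // (replicas + 1)

print(master_count(6, 1))  # 3 shards: one master, one slave each
print(master_count(6, 2))  # 2 shards: one master, two slaves each
```

Note that redis-cli refuses to create a cluster with fewer than 3 masters, so 6 nodes with --cluster-replicas 2 would be rejected; 9 nodes would be the minimum for that layout.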

The output:

[root@bobohost node1]# ./redis-cli --cluster create 192.168.40.141:7001 192.168.40.141:7002 192.168.40.141:7003 192.168.40.141:7004 192.168.40.141:7005 192.168.40.141:7006 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.40.141:7005 to 192.168.40.141:7001
Adding replica 192.168.40.141:7006 to 192.168.40.141:7002
Adding replica 192.168.40.141:7004 to 192.168.40.141:7003
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: 5d2addc1fdc2a97feaa212dec4286a1f64bd5f68 192.168.40.141:7001
   slots:[0-5460] (5461 slots) master
M: d5ae3f40842daf165732a16cda81123e65f02de0 192.168.40.141:7002
   slots:[5461-10922] (5462 slots) master
M: 8763eea1a4c1712a5c1ff502457739697c9bb189 192.168.40.141:7003
   slots:[10923-16383] (5461 slots) master
S: fefa3474f18f232f34a04a3a8cfa691854e81016 192.168.40.141:7004
   replicates 8763eea1a4c1712a5c1ff502457739697c9bb189
S: 25badf8fb26d3e5f027c8131002f2405c51cbb1b 192.168.40.141:7005
   replicates 5d2addc1fdc2a97feaa212dec4286a1f64bd5f68
S: 1bdc2d6207d0d1578c48937d3d4ce08df3b6041c 192.168.40.141:7006
   replicates d5ae3f40842daf165732a16cda81123e65f02de0
Can I set the above configuration? (type 'yes' to accept): 

The proposed configuration of 3 masters and 3 slaves is listed above. If it looks right, type yes:

Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
..
>>> Performing Cluster Check (using node 192.168.40.141:7001)
M: 5d2addc1fdc2a97feaa212dec4286a1f64bd5f68 192.168.40.141:7001
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 25badf8fb26d3e5f027c8131002f2405c51cbb1b 192.168.40.141:7005
   slots: (0 slots) slave
   replicates 5d2addc1fdc2a97feaa212dec4286a1f64bd5f68
M: 8763eea1a4c1712a5c1ff502457739697c9bb189 192.168.40.141:7003
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: fefa3474f18f232f34a04a3a8cfa691854e81016 192.168.40.141:7004
   slots: (0 slots) slave
   replicates 8763eea1a4c1712a5c1ff502457739697c9bb189
S: 1bdc2d6207d0d1578c48937d3d4ce08df3b6041c 192.168.40.141:7006
   slots: (0 slots) slave
   replicates d5ae3f40842daf165732a16cda81123e65f02de0
M: d5ae3f40842daf165732a16cda81123e65f02de0 192.168.40.141:7002
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

All 16384 slots are evenly distributed across the three masters; the cluster is up.
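Which of the 16384 slots a given key lands in (and hence which master owns it) is determined by CRC16(key) mod 16384, hashing only the substring inside {...} when such a hash tag is present. A self-contained sketch of the mapping, using the XMODEM CRC16 variant described in the cluster specification:

```python
def crc16_xmodem(data: bytes) -> int:
    # CRC16/XMODEM: polynomial 0x1021, initial value 0, no reflection
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    # if the key contains a non-empty {...} section, only that part is
    # hashed, so related keys can be pinned to the same slot (hash tags)
    start = key.find('{')
    if start != -1:
        end = key.find('}', start + 1)
        if end > start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

# the CRC16 check value from the cluster spec
print(hex(crc16_xmodem(b"123456789")))  # 0x31c3
# hash tags force related keys onto the same slot, hence the same master
print(key_slot("{user1000}.following") == key_slot("{user1000}.followers"))  # True
```

This is the same computation CLUSTER KEYSLOT performs server-side, and it explains why a client in cluster mode (redis-cli -c) may be redirected between the 7001-7006 nodes depending on the key.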
