Preface
- In earlier articles we covered how to compile and install Redis and how to tune its configuration. Here we will take a closer look at Redis clustering and walk through building a master-slave Redis cluster.
A standalone Redis server has several weaknesses:
It is a single point of failure;
It cannot satisfy high-concurrency demands;
Data loss can be disastrous (fault tolerance is very low).
1) A Redis cluster is a facility that shares data across multiple Redis nodes;
2) A Redis cluster does not support commands that operate on multiple keys at once, because those would require moving data between nodes, falling short of Redis's usual performance and potentially causing unpredictable errors under heavy load;
3) A Redis cluster provides a degree of availability through partitioning: in a real environment it keeps processing commands even when a node goes down or becomes unreachable.
What a Redis cluster provides:
1) Data is automatically split across the nodes;
2) Commands keep being processed even when some of the cluster's nodes fail or are unreachable.
Redis Cluster does not use consistent hashing; instead it introduces the concept of hash slots;
A cluster has 16384 hash slots in total (0-16383);
Each key is run through a CRC16 checksum, and the result modulo 16384 decides where the key is stored (see the check below);
Each node in the cluster is responsible for a portion of the hash slots;
Nodes can be added to or removed from a Redis cluster without stopping the service.
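Any node can report which slot a given key maps to via the built-in CLUSTER KEYSLOT command. The address below assumes the cluster built later in this article; the result matches the slot 8949 redirect seen during verification:
[root@master1 ~]# redis-cli -h 192.168.140.20 -p 6379 cluster keyslot weather
(integer) 8949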
Redis Cluster data sharding in detail, using a three-node cluster as an example
Node A holds hash slots 0 through 5500
Node B holds hash slots 5501 through 11000
Node C holds hash slots 11001 through 16383
Nodes can be added or deleted without stopping the service
For example
To add a new node D, some of the slots on nodes A, B, and C must be moved to D
To remove node A, its slots must be moved onto B and C; once A no longer holds any slots, it can be removed from the cluster (see the sketch below)
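With the redis-cli shipped in Redis 5 (the version installed below), these slot moves are driven by the --cluster subcommands. A sketch, where 192.168.140.23 stands in for the hypothetical new node D and <node-id> is read from the cluster nodes output:
# Join the new empty master D through any existing node:
redis-cli --cluster add-node 192.168.140.23:6379 192.168.140.20:6379
# Interactively move a share of slots from the existing masters onto D:
redis-cli --cluster reshard 192.168.140.20:6379
# Removal works the other way around: reshard a node's slots away, then delete it by ID:
redis-cli --cluster del-node 192.168.140.20:6379 <node-id>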
The cluster's master-slave replication model:
1. With three nodes A, B, and C, if node B fails, the whole cluster becomes unavailable because the slot range B served is missing.
2. Give each master a slave (A1, B1, C1), so the cluster consists of three master nodes and three slave nodes; when node B fails, the cluster elects B1 as the new master and keeps serving.
3. If both A and A1 fail, the cluster becomes unavailable.
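Once the cluster built later in this article is up, this failover can be observed directly. A sketch using this article's addresses, with 192.168.140.21 standing in for B (this really stops the node, so only try it on a test environment):
# Stop one master to simulate a failure:
redis-cli -h 192.168.140.21 -p 6379 shutdown nosave
# From any surviving node, watch the failed master's replica get promoted:
redis-cli -h 192.168.140.20 -p 6379 cluster nodes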
Project environment
master1 192.168.140.20
master2 192.168.140.21
master3 192.168.140.22
slave1 192.168.140.13
slave2 192.168.140.14
slave3 192.168.140.15
Set network parameters and disable the firewall and SELinux on all nodes
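On CentOS 7, which this article assumes, that typically amounts to:
systemctl stop firewalld      # stop the firewall for this session
systemctl disable firewalld   # keep it from starting at boot
setenforce 0                  # switch SELinux to permissive immediately
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # persist across reboots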
Download and install Redis on all nodes
Modify the Redis configuration file on all nodes
Create the Redis cluster (on the master1 node)
Cluster creation steps
1) Import the key file and install rvm (steps 1-4 are sketched below)
2) Reload the environment variables so they take effect
3) Install Ruby 2.4.1
4) Install the Redis client gem
5) Create the Redis cluster
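Steps 1-4 correspond roughly to the commands below, a sketch based on the installation instructions published at rvm.io (verify the current key import procedure there; note that the deployment later in this article ends up installing Ruby from yum instead):
# 1) Import the rvm signing key and install rvm
gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
curl -sSL https://get.rvm.io | bash -s stable
# 2) Load rvm into the current shell
source /etc/profile.d/rvm.sh
# 3) Install and select Ruby 2.4.1
rvm install 2.4.1
rvm use 2.4.1
# 4) Install the Ruby Redis client
gem install redis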
Use the terminal's send-input-to-all-sessions feature, i.e., perform the following configuration on every server
1) Unpack the source package
tar zxvf redis-5.0.4.tar.gz
2) Compile and install
cd redis-5.0.4/
make
make PREFIX=/usr/local/redis install
3) Symlink the binaries into the command search path
ln -s /usr/local/redis/bin/* /usr/local/bin/
4) Run the bundled setup script, then check the listening port
cd redis-5.0.4/utils/
./install_server.sh
netstat -anptu | grep redis
5) Edit the main configuration file and restart the service
[root@master1 ~]# vi /etc/redis/6379.conf
...
bind 192.168.140.20 '//replace 127.0.0.1 with the local address (each of the six servers uses its own IP)'
cluster-enabled yes '//start the server in cluster mode'
appendonly yes '//enable AOF persistence'
cluster-config-file nodes-6379.conf '//file in which this node persists the cluster state'
cluster-node-timeout 15000 '//milliseconds of unreachability before a node is considered failed'
cluster-require-full-coverage yes '//stop serving queries if any slot range is left uncovered'
Restart the Redis service
/etc/init.d/redis_6379 stop
/etc/init.d/redis_6379 start
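Before creating the cluster it is worth confirming that every node actually came back up in cluster mode; repeat the check below with each node's own IP:
redis-cli -h 192.168.140.20 -p 6379 info cluster   # expect cluster_enabled:1 in the reply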
6) On master1, create the cluster
[root@master1 ~]# yum -y install ruby rubygems '//install Ruby and RubyGems'
[root@master1 ~]# gem install redis-3.2.0.gem '//install the Ruby Redis client from the local gem file'
[root@master1 ~]# redis-cli --cluster create --cluster-replicas 1 \
192.168.140.20:6379 192.168.140.21:6379 192.168.140.22:6379 \
192.168.140.13:6379 192.168.140.14:6379 192.168.140.15:6379
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.140.14:6379 to 192.168.140.20:6379
Adding replica 192.168.140.15:6379 to 192.168.140.21:6379
Adding replica 192.168.140.13:6379 to 192.168.140.22:6379
M: b32bddc815edec59943aef28275b073925f3bf6c 192.168.140.20:6379
slots:[0-5460] (5461 slots) master
M: 7ab1a75dbac2dd91be898874895c636d2fa3b790 192.168.140.21:6379
slots:[5461-10922] (5462 slots) master
M: 440c768ed0378686f347244bf37d6e5adb191401 192.168.140.22:6379
slots:[10923-16383] (5461 slots) master
S: 550f265ad5d2714a20731cbd7cd8a61e826da443 192.168.140.13:6379
replicates 440c768ed0378686f347244bf37d6e5adb191401
S: 4b8052a36136df43078db53c7d472b4acc848dcb 192.168.140.14:6379
replicates b32bddc815edec59943aef28275b073925f3bf6c
S: c3f7dc4e4c17c385fdad4af782100266d2c691e2 192.168.140.15:6379
replicates 7ab1a75dbac2dd91be898874895c636d2fa3b790
Can I set the above configuration? (type 'yes' to accept): yes '//you must type yes here'
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.....
>>> Performing Cluster Check (using node 192.168.140.20:6379)
M: b32bddc815edec59943aef28275b073925f3bf6c 192.168.140.20:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
M: 440c768ed0378686f347244bf37d6e5adb191401 192.168.140.22:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
M: 7ab1a75dbac2dd91be898874895c636d2fa3b790 192.168.140.21:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: 4b8052a36136df43078db53c7d472b4acc848dcb 192.168.140.14:6379
slots: (0 slots) slave
replicates b32bddc815edec59943aef28275b073925f3bf6c
S: 550f265ad5d2714a20731cbd7cd8a61e826da443 192.168.140.13:6379
slots: (0 slots) slave
replicates 440c768ed0378686f347244bf37d6e5adb191401
S: c3f7dc4e4c17c385fdad4af782100266d2c691e2 192.168.140.15:6379
slots: (0 slots) slave
replicates 7ab1a75dbac2dd91be898874895c636d2fa3b790
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@master1 ~]# redis-cli -h 192.168.140.20 -p 6379 -c //log in to the server
192.168.140.20:6379> set weather sunny //set the key weather to the value sunny
-> Redirected to slot [8949] located at 192.168.140.21:6379
OK
[root@master3 ~]# redis-cli -h 192.168.140.22 -p 6379 -c //log in to the server
192.168.140.22:6379> get weather
-> Redirected to slot [8949] located at 192.168.140.21:6379 //the key lives in hash slot 8949, so the client is redirected to that server
"sunny"
[root@master2 ~]# redis-cli -h 192.168.140.21 -p 6379 -c //log in to the server
192.168.140.21:6379> get weather
"sunny"
[root@slave1 ~]# redis-cli -h 192.168.140.13 -p 6379 -c
192.168.140.13:6379> get weather
-> Redirected to slot [8949] located at 192.168.140.21:6379
"sunny"
[root@slave1 ~]# redis-cli -h 192.168.140.13 -p 6379 -c
192.168.140.13:6379> set centos 7.5
-> Redirected to slot [467] located at 192.168.140.20:6379
OK
'//the key lives in hash slot 467, so the client is redirected to the master1 server'
[root@master1 ~]# redis-cli -h 192.168.140.20 -p 6379 -c
192.168.140.20:6379> get centos '//fetch the value of the key centos'
"7.5"
After logging in to a server:
cluster info    view cluster information
cluster nodes   view node information
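Redis 5's redis-cli can also run the same consistency check non-interactively, using any node as the entry point:
redis-cli --cluster check 192.168.140.20:6379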