Building a Redis Cluster

Table of Contents

  • Introduction to Redis Cluster
    • Advantages of Redis Cluster
    • Implementation methods
    • Redis-Cluster data sharding
    • Redis-Cluster master-replica model
  • Building the Redis cluster
    • Installing Redis
    • Verifying cluster functionality
    • Simulating a master failure

Introduction to Redis Cluster

A Redis cluster is a facility that shares data across multiple Redis nodes.

Redis Cluster does not support commands that operate on multiple keys, because such commands would require moving data between nodes; this keeps the cluster from matching standalone Redis performance and, under heavy load, can lead to unpredictable errors.

Redis Cluster provides a degree of availability through partitioning: in practice, when a node goes down or becomes unreachable, the cluster can continue to serve commands.

Advantages of Redis Cluster

Data is automatically split across different nodes.

The cluster can continue to serve commands when some of its nodes fail or become unreachable.

Implementation methods

Client-side sharding

Proxy-based sharding

Server-side sharding

Redis-Cluster data sharding

Redis Cluster does not use consistent hashing; instead it introduces the concept of hash slots.

A Redis cluster has 16384 hash slots.

Each key is run through CRC16 and the result is taken modulo 16384 to decide which slot the key is placed in.

Each node in the cluster is responsible for a portion of the hash slots.

Take a cluster of 3 nodes as an example:

  • Node A holds hash slots 0 to 5500
  • Node B holds hash slots 5501 to 11000
  • Node C holds hash slots 11001 to 16383
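
If you want to check which slot a particular key maps to, any node will answer the CLUSTER KEYSLOT command. A minimal sketch, assuming the cluster built later in this article is already running (the address and the key are taken from the verification steps below):

[root@master01 ~]# redis-cli -c -h 192.168.110.10 cluster keyslot name
(integer) 5798    # CRC16("name") mod 16384, the same slot the redirects report later in this article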

Nodes can be added or removed.

  • Adding or removing nodes does not require stopping the service

For example (see the sketch after this list):

  • To add a new node D, part of the slots held by nodes A, B and C must be moved to D
  • To remove node A, its slots must first be moved to nodes B and C; once A no longer holds any slots it can be removed from the cluster
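
redis-cli ships with --cluster subcommands for exactly these online operations. A minimal sketch of the workflow, not taken from the original deployment: the 192.168.110.40 address for the new node D and the <node-id> placeholder are hypothetical, the other addresses match the cluster built below.

[root@master01 ~]# redis-cli --cluster add-node 192.168.110.40:6379 192.168.110.10:6379   # introduce the hypothetical new node D to an existing node
[root@master01 ~]# redis-cli --cluster reshard 192.168.110.10:6379   # interactively move a chosen number of slots from the existing masters onto D
[root@master01 ~]# redis-cli --cluster del-node 192.168.110.10:6379 <node-id>   # after moving its slots away, remove a node by its node ID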

Redis-Cluster master-replica model

With three nodes A, B and C in the cluster, if node B fails, the whole cluster becomes unavailable because the slot range 5501-11000 is no longer served.

By adding a replica A1, B1, C1 for each master, the cluster consists of three master nodes and three replica nodes; if node B then fails, the cluster elects B1 as the new master and keeps serving.

If both B and B1 fail, the cluster becomes unavailable.
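
To see at any time which node owns which slot range and who replicates whom, you can query any node with CLUSTER NODES. A quick sketch, assuming the cluster built below is running:

[root@master01 ~]# redis-cli -c -h 192.168.110.10 cluster nodes
# one line per node: node ID, address, master/slave flags, the ID of the master it replicates, and its slot ranges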

Building the Redis cluster

Installing Redis

Perform the same steps on all 6 nodes.

[root@localhost mnt]# yum install gcc gcc-c++ -y
[root@localhost mnt]# tar zxvf redis-5.0.7.tar.gz -C /opt
[root@localhost mnt]# cd /opt/redis-5.0.7/
[root@localhost redis-5.0.7]# make
[root@localhost redis-5.0.7]# make PREFIX=/usr/local/redis/ install
[root@localhost redis-5.0.7]# cd utils/
[root@localhost utils]# ./install_server.sh
Welcome to the redis service installer
This script will help you easily set up a running redis server

Please select the redis port for this instance: [6379] 	# press Enter to accept the default
Selecting default: 6379
Please select the redis config file name [/etc/redis/6379.conf] 
Selected default - /etc/redis/6379.conf
Please select the redis log file name [/var/log/redis_6379.log] 
Selected default - /var/log/redis_6379.log
Please select the data directory for this instance [/var/lib/redis/6379] 
Selected default - /var/lib/redis/6379
Please select the redis executable path [] /usr/local/redis/bin/redis-server
Selected config:
Port           : 6379
......output omitted
Installation successful!
[root@localhost utils]# ln -s /usr/local/redis/bin/* /usr/local/bin/
[root@localhost utils]# netstat -natp | grep 6379
tcp        0      0 127.0.0.1:6379          0.0.0.0:*               LISTEN      24447/redis-server  

Edit the configuration file in the same way on all 6 nodes (masters and replicas):

[root@localhost utils]# vim /etc/redis/6379.conf  # the numbers at the start of the following lines are line numbers in the config file
70 #bind 127.0.0.1   # comment out line 70 so Redis no longer binds only to 127.0.0.1 and listens on all addresses
89 protected-mode no  # uncomment line 89 to disable protected mode
93 port 6379     # uncomment line 93 to listen on port 6379
137 daemonize yes   # uncomment line 137 to run Redis as a daemon
700 appendonly yes  # uncomment line 700 to enable AOF persistence
833 cluster-enabled yes  # uncomment line 833 to enable cluster mode
841 cluster-config-file nodes-6379.conf  # uncomment line 841 to set the cluster node configuration file
847 cluster-node-timeout 15000	 # uncomment line 847 to set the cluster node timeout
[root@localhost utils]# service redis_6379 restart  # restart the redis service
[root@localhost utils]# cd /var/lib/redis/6379/
[root@localhost 6379]# ls
appendonly.aof  dump.rdb  nodes-6379.conf  
# Three files have been generated: appendonly.aof is the AOF persistence file, dump.rdb is the RDB snapshot file, and nodes-6379.conf is the node configuration file created on first startup
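
As an optional sanity check, you can confirm on each node that cluster mode is active before the cluster is created; a sketch (at this stage cluster_state still reports fail because no slots have been assigned yet):

[root@localhost 6379]# redis-cli cluster info | grep -E 'cluster_enabled|cluster_state'
# expect cluster_enabled:1; cluster_state only becomes ok after the cluster is created below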

Install rvm and Ruby (used by the cluster control software) on the master server

[root@master01 6379]# gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3 
# import the GPG key; if an error appears, just import it again
[root@master01 6379]# curl -sSL https://get.rvm.io | bash -s stable # install rvm
[root@localhost 6379]# cd /opt/
[root@localhost opt]# ls
redis-5.0.7  redis-5.0.7.tar.gz  rvm-installer.sh
[root@master01 opt]# ./rvm-installer.sh
[root@master01 opt]# source /etc/profile.d/rvm.sh  # load the rvm environment variables
[root@master01 opt]# rvm list known  # list the ruby versions available for installation
# MRI Rubies
[ruby-]1.8.6[-p420]
[ruby-]1.8.7[-head] # security released on head
......output omitted
ruby-head
[root@master01 opt]# rvm install 2.4.10  # install ruby 2.4.10; this step takes quite a while
[root@master01 opt]# ruby -v  # check the current ruby version
ruby 2.4.10p364 (2020-03-31 revision 67879) [x86_64-linux]
[root@master01 opt]# rvm use 2.4.10  # switch to Ruby 2.4.10
Using /usr/local/rvm/gems/ruby-2.4.10
[root@master01 opt]# gem install redis  # install the redis Ruby gem
Fetching redis-4.2.2.gem
Successfully installed redis-4.2.2
Parsing documentation for redis-4.2.2
Installing ri documentation for redis-4.2.2
Done installing documentation for redis after 0 seconds
1 gem installed
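
Optionally, confirm that the gem is visible to the active Ruby before moving on; a quick check:

[root@master01 opt]# gem list redis
# expect the redis gem (4.2.2 in the output above) to be listed for ruby 2.4.10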

Create the cluster on the master server

[root@master01 ~]# redis-cli --cluster create 192.168.110.10:6379 192.168.110.15:6379 192.168.110.20:6379 192.168.110.25:6379 192.168.110.30:6379 192.168.110.35:6379 --cluster-replicas 1
# create the cluster: the 6 instances form 3 groups, each with one master and one replica
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.110.10:6379 to 192.168.110.15:6379   # the following lines show the master/replica relationships between the servers
Adding replica 192.168.110.20:6379 to 192.168.110.25:6379
Adding replica 192.168.110.30:6379 to 192.168.110.35:6379
M: 37524ac18b5a90c589bf9308727503f0ae1cb6d6 192.168.110.10:6379  
   slots:[0-5460] (5461 slots) master
M: d4da3248d4c813ade2a55cb600b8345edfb4f19d 192.168.110.15:6379
   slots:[5461-10922] (5462 slots) master
M: 760590c352f61d413eb77d90b92ef6235f588ce9 192.168.110.20:6379
   slots:[10923-16383] (5461 slots) master
S: a9b67a868d11724fd4e0d6b85627ea04ba4ce806 192.168.110.25:6379  
   replicates 760590c352f61d413eb77d90b92ef6235f588ce9
S: db67f150e8b774992d089d2b926e06fa91e4345f 192.168.110.30:6379
   replicates 37524ac18b5a90c589bf9308727503f0ae1cb6d6
S: 0fa0fbbc3bfd481d16727931d3c26b5d034d8803 192.168.110.35:6379
   replicates d4da3248d4c813ade2a55cb600b8345edfb4f19d
Can I set the above configuration? (type 'yes' to accept): yes   
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
....
>>> Performing Cluster Check (using node 192.168.110.10:6379)
M: 37524ac18b5a90c589bf9308727503f0ae1cb6d6 192.168.110.10:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 760590c352f61d413eb77d90b92ef6235f588ce9 192.168.110.15:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: a9b67a868d11724fd4e0d6b85627ea04ba4ce806 192.168.110.20:6379
   slots: (0 slots) slave
   replicates 760590c352f61d413eb77d90b92ef6235f588ce9
S: db67f150e8b774992d089d2b926e06fa91e4345f 192.168.110.25:6379
   slots: (0 slots) slave
   replicates 37524ac18b5a90c589bf9308727503f0ae1cb6d6
M: d4da3248d4c813ade2a55cb600b8345edfb4f19d 192.168.110.30:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 0fa0fbbc3bfd481d16727931d3c26b5d034d8803 192.168.110.35:6379
   slots: (0 slots) slave
   replicates d4da3248d4c813ade2a55cb600b8345edfb4f19d
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
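
The same health check can be repeated at any time against any node of the cluster; a sketch:

[root@master01 ~]# redis-cli --cluster check 192.168.110.10:6379
# re-runs the cluster check shown above and should again end with "[OK] All 16384 slots covered."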

Verifying cluster functionality

[root@master01 ~]# redis-cli -c -h 192.168.110.10   # connect to master01
192.168.110.10:6379> keys *
(empty list or set)
192.168.110.10:6379> set name zhangsan   # create a key
-> Redirected to slot [5798] located at 192.168.110.15:6379  # stored on master02
OK
192.168.110.15:6379> set addr nanjing    # the client has been redirected to master02 automatically; create another key
-> Redirected to slot [12790] located at 192.168.110.20:6379  # stored on master03
OK
[root@slave03 6379]# redis-cli -c -h 192.168.110.35  # connect to master02's replica
192.168.110.35:6379> keys *    # list the keys: the data has been replicated
1) "name"
192.168.110.35:6379> get name
-> Redirected to slot [5798] located at 192.168.110.15:6379  # found on master02
"zhangsan"
[root@slave01 6379]# redis-cli -c -h 192.168.110.25  # connect to master03's replica
192.168.110.25:6379> keys *   # list the keys: the data has been replicated
1) "addr"
192.168.110.25:6379> get addr
-> Redirected to slot [12790] located at 192.168.110.20:6379  # found on master03
"nanjing"

Simulating a master failure

[root@master03 6379]# service redis_6379 stop  # stop master03
Stopping ...
Waiting for Redis to shutdown ...
Redis stopped
[root@slave01 6379]# redis-cli -c -h 192.168.110.25  # connect to master03's replica
192.168.110.25:6379> keys *   # the keys can still be read: the service is not affected
1) "addr"
192.168.110.25:6379> get addr
"nanjing"
[root@master02 ~]# redis-cli -c -h 192.168.110.20  # connect to master02
192.168.110.20:6379> set age 20      # create a key; it is stored on master01
-> Redirected to slot [741] located at 192.168.110.10:6379
OK 
192.168.110.15:6379> set team yun   # create another key; this time it is stored on master03's replica, which has taken over as master
-> Redirected to slot [15631] located at 192.168.110.20:6379
OK
192.168.110.25:6379> keys *   # on master03's replica, both keys are present
1) "team"
2) "addr"
192.168.110.25:6379> get team   # the key just created can be read normally
"yun"
