Hands-on Series: Redis Master-Slave, Sentinel, and Cluster Setup

Lockdown finally gave me the time to walk through Redis master-slave and cluster setup by hand; until now I only knew the theory.
This should be a fairly complete record of the Redis installation steps~~

Redis is a key-value cache with excellent single-node performance.
If a single instance can satisfy your business needs, a simple Keepalived-backed master-slave setup is usually enough; Sentinel mode or Cluster mode is only worth the extra complexity when you need automatic failover or horizontal scaling.

Redis Master-Slave Setup

Prepare three CentOS 8 Linux machines:

  1. 192.168.3.67(Master)
  2. 192.168.3.68(Slave1)
  3. 192.168.3.69(Slave2)

Installing a Standalone Redis Server

Download the tarball you need from the Redis website (I used redis-6.2.1) and copy it to the /opt directory on all three machines.
Remember to disable the Linux firewall (or open the Redis ports) before proceeding.

  • Point the CentOS yum repos at the Aliyun mirror and update yum
cd /etc/yum.repos.d/
mkdir bak
mv *.repo bak
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
yum clean all && yum makecache
yum update -y
  • Install the required packages
yum install epel-release -y
yum install snapd -y
yum install gcc pcre-devel zlib-devel openssl-devel -y
yum install gcc automake autoconf libtool make -y
yum install lrzsz -y
  • Unpack the downloaded Redis tarball
cd /opt
tar -zxvf redis-6.2.1.tar.gz 
  • Build and install Redis
cd redis-6.2.1/
make && make install
  • Copy the default config into a working directory (the later steps all reference /myredis/redis.conf) and start the Redis server
mkdir /myredis
cp redis.conf /myredis/
redis-server /myredis/redis.conf

If no errors appear, you're done~~ The standalone Redis installation is complete; next we configure replication.

Redis Master-Slave Configuration

  • On each machine, connect to the local Redis server and check its current replication state
# start the Redis server
redis-server /myredis/redis.conf

# open the Redis CLI
redis-cli

# inside the CLI, show replication info
info replication

# the output looks like this
# Replication
role:master
connected_slaves:0
master_failover_state:no-failover
master_replid:6c2a1d5f5614859547d7334bcf2e1d8068dd9eb9
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
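The INFO output above is just `key:value` lines under `# Section` headers, so it is easy to inspect replication state from a script. A minimal parsing sketch in Python (here fed the captured text; in practice you would pipe in the output of `redis-cli info replication`):

```python
def parse_info(text):
    """Parse Redis INFO output (key:value lines, '#' section headers) into a dict."""
    info = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and section headers
        key, _, value = line.partition(":")
        info[key] = value
    return info

sample = """# Replication
role:master
connected_slaves:0
master_repl_offset:0"""

info = parse_info(sample)
print(info["role"], info["connected_slaves"])
```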
  • Edit the redis.conf file on the Master
# comment out the bind directive so remote clients can connect
#bind 127.0.0.1 -::1

# disable protected mode
protected-mode no
  • Restart the Redis service on the Master
redis-cli shutdown
redis-server /myredis/redis.conf
  • In the Redis CLI on Slave1 and Slave2, run the following (replicaof is the modern alias of slaveof)
# register this node as a replica of the master
slaveof 192.168.3.67 6379
# check the replication state
info replication
# Replication
role:slave
master_host:192.168.3.67
master_port:6379
master_link_status:up
master_last_io_seconds_ago:6
master_sync_in_progress:0
slave_repl_offset:2492
slave_priority:100
slave_read_only:1
connected_slaves:0
master_failover_state:no-failover
master_replid:7a5a0a392adf59e94da6bc96817f80d3d6848f4a
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:2492
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:2492
  • Check the replica list on the Master
info replication
# Replication
role:master
connected_slaves:2
slave0:ip=192.168.3.69,port=6379,state=online,offset=14,lag=0
slave1:ip=192.168.3.68,port=6379,state=online,offset=14,lag=0
master_failover_state:no-failover
master_replid:7a5a0a392adf59e94da6bc96817f80d3d6848f4a
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:14
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:14

Replication is configured correctly once you see the following:

role:master
connected_slaves:2
slave0:ip=192.168.3.69,port=6379,state=online,offset=14,lag=0
slave1:ip=192.168.3.68,port=6379,state=online,offset=14,lag=0


Configuring Redis Sentinel

  • Add one machine for the sentinel service: 192.168.3.70
  • Create the sentinel config file /myredis/sentinel.conf in the /myredis directory
# monitor the Redis master node
# "mymaster" is an arbitrary name for the monitored master; the trailing 1 is the quorum,
# the minimum number of sentinels that must agree the master is down before a failover.
# (quorum 1 is fine for this single-sentinel demo; production should run at least 3 sentinels)
sentinel monitor mymaster 192.168.3.67 6379 1
  • Start the sentinel
redis-sentinel  /myredis/sentinel.conf

Startup succeeded once logs like the following are printed:

7541:X 24 Apr 2022 14:39:33.344 # Sentinel ID is 88537cabb1fe858d5debbb6ec0c2e6303e16e990
7541:X 24 Apr 2022 14:39:33.344 # +monitor master mymaster 192.168.3.67 6379 quorum 1
7541:X 24 Apr 2022 14:39:33.346 * +slave slave 192.168.3.68:6379 192.168.3.68 6379 @ mymaster 192.168.3.67 6379
7541:X 24 Apr 2022 14:39:33.346 * +slave slave 192.168.3.69:6379 192.168.3.69 6379 @ mymaster 192.168.3.67 6379

  • Stop the master node; the sentinel will log the failover. Look for the key string: +switch-master...
8141:X 24 Apr 2022 15:44:30.200 # +selected-slave slave 192.168.3.69:6379 192.168.3.69 6379 @ mymaster 192.168.3.67 6379
8141:X 24 Apr 2022 15:44:30.200 * +failover-state-send-slaveof-noone slave 192.168.3.69:6379 192.168.3.69 6379 @ mymaster 192.168.3.67 6379
8141:X 24 Apr 2022 15:44:30.257 * +failover-state-wait-promotion slave 192.168.3.69:6379 192.168.3.69 6379 @ mymaster 192.168.3.67 6379
8141:X 24 Apr 2022 15:44:30.793 # +promoted-slave slave 192.168.3.69:6379 192.168.3.69 6379 @ mymaster 192.168.3.67 6379
8141:X 24 Apr 2022 15:44:30.793 # +failover-state-reconf-slaves master mymaster 192.168.3.67 6379
8141:X 24 Apr 2022 15:44:30.868 * +slave-reconf-sent slave 192.168.3.68:6379 192.168.3.68 6379 @ mymaster 192.168.3.67 6379
8141:X 24 Apr 2022 15:44:31.814 * +slave-reconf-inprog slave 192.168.3.68:6379 192.168.3.68 6379 @ mymaster 192.168.3.67 6379
8141:X 24 Apr 2022 15:44:31.815 * +slave-reconf-done slave 192.168.3.68:6379 192.168.3.68 6379 @ mymaster 192.168.3.67 6379
8141:X 24 Apr 2022 15:44:31.890 # +failover-end master mymaster 192.168.3.67 6379
8141:X 24 Apr 2022 15:44:31.890 # +switch-master mymaster 192.168.3.67 6379 192.168.3.69 6379
8141:X 24 Apr 2022 15:44:31.890 * +slave slave 192.168.3.68:6379 192.168.3.68 6379 @ mymaster 192.168.3.69 6379
8141:X 24 Apr 2022 15:44:31.890 * +slave slave 192.168.3.67:6379 192.168.3.67 6379 @ mymaster 192.168.3.69 6379
8141:X 24 Apr 2022 15:45:01.915 # +sdown slave 192.168.3.67:6379 192.168.3.67 6379 @ mymaster 192.168.3.69 6379
8141:X 24 Apr 2022 15:47:13.005 # -sdown slave 192.168.3.67:6379 192.168.3.67 6379 @ mymaster 192.168.3.69 6379
8141:X 24 Apr 2022 15:47:22.972 * +convert-to-slave slave 192.168.3.67:6379 192.168.3.67 6379 @ mymaster 192.168.3.69 6379
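The failover above follows Sentinel's two-step vote: the quorum from `sentinel monitor` decides when the master is objectively down, while actually starting a failover requires authorization from a majority of all sentinels. A toy model of that decision (the function names are illustrative, not a Redis API):

```python
def objectively_down(down_votes: int, quorum: int) -> bool:
    """Master is ODOWN once at least `quorum` sentinels report it subjectively down."""
    return down_votes >= quorum

def failover_authorized(votes: int, total_sentinels: int) -> bool:
    """Starting a failover needs a majority of ALL sentinels, regardless of quorum."""
    return votes >= total_sentinels // 2 + 1

# This article runs a single sentinel with quorum 1, so one vote is enough:
print(objectively_down(1, quorum=1), failover_authorized(1, total_sentinels=1))
```

This is why quorum 1 with one sentinel works for a demo but gives no protection against the sentinel itself failing.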

Redis Cluster Configuration

Redis Cluster provides horizontal scaling for Redis: N nodes are started and the dataset is sharded across them, with each node storing roughly 1/N of the total data.
The cluster uses partitioning to provide a degree of availability: even if some nodes fail or become unreachable, the cluster can keep serving command requests.
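Concretely, the node that owns a key is determined by its hash slot: slot = CRC16(key) mod 16384, where CRC16 is the XMODEM variant, and a non-empty `{...}` hash tag restricts hashing to the tagged part so that related keys land on the same node. A self-contained sketch:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16/XMODEM (poly 0x1021, init 0), the variant Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Hash slot for a key, honoring a non-empty {hash tag} if present."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:  # the tag must be non-empty
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

# keys sharing a hash tag always land in the same slot (and thus on the same node)
print(key_slot("{user1000}.following") == key_slot("{user1000}.followers"))  # True
```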

  • Add three more servers

    • 192.168.3.71
    • 192.168.3.72
    • 192.168.3.73
  • Edit the following settings in /myredis/redis.conf on all six cluster machines (note: the create command and logs below actually use 192.168.3.67-72, i.e. the sentinel machine .70 doubles as a cluster node):

# enable cluster mode
cluster-enabled yes
# name of the auto-generated cluster node config file
cluster-config-file nodes.conf
# node timeout in milliseconds; a node unreachable for longer than this triggers failover
cluster-node-timeout 15000
  • Switch to the Redis source directory /opt/redis-6.2.1/src and create the cluster
# create the cluster; --cluster-replicas 1 is the simplest layout, giving each master
# exactly one replica: 3 master/replica pairs across the 6 nodes
redis-cli --cluster create \
  192.168.3.67:6379 \
  192.168.3.68:6379 \
  192.168.3.69:6379 \
  192.168.3.70:6379 \
  192.168.3.71:6379 \
  192.168.3.72:6379 \
  --cluster-replicas 1

Output like the following means the cluster was created successfully!

>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.3.71:6379 to 192.168.3.67:6379
Adding replica 192.168.3.72:6379 to 192.168.3.68:6379
Adding replica 192.168.3.70:6379 to 192.168.3.69:6379
M: 115f271e696839117dd3af4eddfd01525247fc3d 192.168.3.67:6379
   slots:[0-5460] (5461 slots) master
M: ec725159dee9ccd058449142bce614b3eb2ecfad 192.168.3.68:6379
   slots:[5461-10922] (5462 slots) master
M: 67ee09246142c18b489ad11153a302e446028c80 192.168.3.69:6379
   slots:[10923-16383] (5461 slots) master
S: 3fe525edc548401959ab6346b318132df7e90047 192.168.3.70:6379
   replicates 67ee09246142c18b489ad11153a302e446028c80
S: 009a40e2ad693bc3b440bbfcb53a891061237011 192.168.3.71:6379
   replicates 115f271e696839117dd3af4eddfd01525247fc3d
S: 68bc76677ae3c369bcfd643191866d2602e97172 192.168.3.72:6379
   replicates ec725159dee9ccd058449142bce614b3eb2ecfad
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 192.168.3.67:6379)
M: 115f271e696839117dd3af4eddfd01525247fc3d 192.168.3.67:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 68bc76677ae3c369bcfd643191866d2602e97172 192.168.3.72:6379
   slots: (0 slots) slave
   replicates ec725159dee9ccd058449142bce614b3eb2ecfad
S: 009a40e2ad693bc3b440bbfcb53a891061237011 192.168.3.71:6379
   slots: (0 slots) slave
   replicates 115f271e696839117dd3af4eddfd01525247fc3d
S: 3fe525edc548401959ab6346b318132df7e90047 192.168.3.70:6379
   slots: (0 slots) slave
   replicates 67ee09246142c18b489ad11153a302e446028c80
M: 67ee09246142c18b489ad11153a302e446028c80 192.168.3.69:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: ec725159dee9ccd058449142bce614b3eb2ecfad 192.168.3.68:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
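The slot ranges in the log (0-5460, 5461-10922, 10923-16383) come from dividing the 16384 slots evenly among the three masters. A rough reconstruction of that split (redis-cli's real allocation code is in C; this only reproduces the arithmetic seen in the log above):

```python
def split_slots(n_masters: int, total: int = 16384):
    """Divide `total` slots evenly: master i covers round(i*total/n) .. round((i+1)*total/n)-1."""
    ranges = []
    for i in range(n_masters):
        first = round(i * total / n_masters)
        last = round((i + 1) * total / n_masters) - 1
        ranges.append((first, last))
    return ranges

print(split_slots(3))  # [(0, 5460), (5461, 10922), (10923, 16383)]
```

Note the ranges are contiguous and cover all 16384 slots; the cluster refuses writes unless full slot coverage exists.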

  • Connect a client in cluster mode (-c) and check the cluster state
redis-cli -c -p 6379
# print the cluster topology
cluster nodes

Cluster state output:

115f271e696839117dd3af4eddfd01525247fc3d 192.168.3.67:6379@16379 master - 0 1650875306000 1 connected 0-5460
009a40e2ad693bc3b440bbfcb53a891061237011 192.168.3.71:6379@16379 slave 115f271e696839117dd3af4eddfd01525247fc3d 0 1650875307000 1 connected
67ee09246142c18b489ad11153a302e446028c80 192.168.3.69:6379@16379 master - 0 1650875306951 3 connected 10923-16383
3fe525edc548401959ab6346b318132df7e90047 192.168.3.70:6379@16379 myself,slave 67ee09246142c18b489ad11153a302e446028c80 0 1650875307000 3 connected
68bc76677ae3c369bcfd643191866d2602e97172 192.168.3.72:6379@16379 slave ec725159dee9ccd058449142bce614b3eb2ecfad 0 1650875307965 2 connected
ec725159dee9ccd058449142bce614b3eb2ecfad 192.168.3.68:6379@16379 master - 0 1650875308986 2 connected 5461-10922
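Each CLUSTER NODES line is space-separated: node id, address (client port @ cluster-bus port), flags, master id (`-` for masters), ping/pong timestamps, config epoch, link state, then any owned slot ranges. A small parser sketch for the fields used in this article:

```python
def parse_cluster_node(line: str) -> dict:
    """Parse one line of CLUSTER NODES output into its main fields."""
    parts = line.split()
    return {
        "id": parts[0],
        "addr": parts[1].split("@")[0],  # drop the cluster-bus port
        "flags": parts[2].split(","),
        "master_id": None if parts[3] == "-" else parts[3],
        "slots": parts[8:],              # slot ranges; empty for replicas
    }

line = ("115f271e696839117dd3af4eddfd01525247fc3d 192.168.3.67:6379@16379 "
        "master - 0 1650875306000 1 connected 0-5460")
node = parse_cluster_node(line)
print(node["addr"], node["flags"], node["slots"])
```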

That covers a full hands-on pass over Redis high availability: master-slave replication, Sentinel mode, and Cluster setup. I'm recording it here as a reference; if anything is wrong, please point it out.
